
From nobody Tue Oct 10 12:53:18 2017
Return-Path: <wwwrun@rfc-editor.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 6EC691331D2; Tue, 10 Oct 2017 12:53:09 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.2
X-Spam-Level: 
X-Spam-Status: No, score=-4.2 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_MED=-2.3, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id VPkOqkApYzOD; Tue, 10 Oct 2017 12:53:06 -0700 (PDT)
Received: from rfc-editor.org (rfc-editor.org [4.31.198.49]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 7EC9A126D0C; Tue, 10 Oct 2017 12:53:03 -0700 (PDT)
Received: by rfc-editor.org (Postfix, from userid 30) id 0C21BB8117D; Tue, 10 Oct 2017 12:52:51 -0700 (PDT)
To: ietf-announce@ietf.org, rfc-dist@rfc-editor.org
X-PHP-Originating-Script: 1005:ams_util_lib.php
From: rfc-editor@rfc-editor.org
Cc: rfc-editor@rfc-editor.org, drafts-update-ref@iana.org, slim@ietf.org
Message-Id: <20171010195251.0C21BB8117D@rfc-editor.org>
Date: Tue, 10 Oct 2017 12:52:51 -0700 (PDT)
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/4MS8dDQ2Evnn-Nl-1ZyygZJrYl4>
Subject: [Slim] RFC 8255 on Multiple Language Content Type
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 10 Oct 2017 19:53:09 -0000

A new Request for Comments is now available in online RFC libraries.

        
        RFC 8255

        Title:      Multiple Language Content Type 
        Author:     N. Tomkinson,
                    N. Borenstein
        Status:     Standards Track
        Stream:     IETF
        Date:       October 2017
        Mailbox:    rfc.nik.tomkinson@gmail.com, 
                    nsb@mimecast.com
        Pages:      19
        Characters: 36982
        Updates/Obsoletes/SeeAlso:   None

        I-D Tag:    draft-ietf-slim-multilangcontent-14.txt

        URL:        https://www.rfc-editor.org/info/rfc8255

        DOI:        10.17487/RFC8255

This document defines the 'multipart/multilingual' content type,
which is an addition to the Multipurpose Internet Mail Extensions
(MIME) standard.  This content type makes it possible to send one
message that contains multiple language versions of the same
information.  The translations would be identified by a language tag
and selected by the email client based on a user's language settings.
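
The shape described in the abstract can be sketched with Python's stdlib email package. This is an illustrative sketch only (subject lines and body text are invented): per RFC 8255, the first part is a language-independent preface for non-conforming clients, and each translation is a message part carrying a Content-Language header.

```python
# Sketch of a multipart/multilingual message (RFC 8255 shape);
# the concrete strings here are invented example values.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.message import MIMEMessage
from email.message import Message

outer = MIMEMultipart("multilingual")
outer["Subject"] = "Example multilingual message"

# Part 1: preface shown by clients that do not understand the type.
outer.attach(MIMEText("This is a message in multiple languages; your "
                      "client should pick the matching part."))

def translation(lang, subject, body):
    inner = Message()
    inner["Subject"] = subject
    inner.set_payload(body)
    part = MIMEMessage(inner)          # message/rfc822 wrapper
    part["Content-Language"] = lang    # language tag used for selection
    part["Content-Translation-Type"] = "human"
    return part

outer.attach(translation("en", "Hello", "Hello, world"))
outer.attach(translation("es", "Hola", "Hola, mundo"))

print(outer.get_content_type())  # multipart/multilingual
```

A conforming client would compare the Content-Language tags against the user's language settings and render only the matching part.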

This document is a product of the Selection of Language for Internet Media Working Group of the IETF.

This is now a Proposed Standard.

STANDARDS TRACK: This document specifies an Internet Standards Track
protocol for the Internet community, and requests discussion and suggestions
for improvements.  Please refer to the current edition of the Official
Internet Protocol Standards (https://www.rfc-editor.org/standards) for the 
standardization state and status of this protocol.  Distribution of this 
memo is unlimited.

This announcement is sent to the IETF-Announce and rfc-dist lists.
To subscribe or unsubscribe, see
  https://www.ietf.org/mailman/listinfo/ietf-announce
  https://mailman.rfc-editor.org/mailman/listinfo/rfc-dist

For searching the RFC series, see https://www.rfc-editor.org/search
For downloading RFCs, see https://www.rfc-editor.org/retrieve/bulk

Requests for special distribution should be addressed to either the
author of the RFC in question, or to rfc-editor@rfc-editor.org.  Unless
specifically noted otherwise on the RFC itself, all RFCs are for
unlimited distribution.


The RFC Editor Team
Association Management Solutions, LLC


From nobody Thu Oct 12 22:56:58 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5F26A13263F for <slim@ietfa.amsl.com>; Thu, 12 Oct 2017 22:56:57 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.099
X-Spam-Level: 
X-Spam-Status: No, score=0.099 tagged_above=-999 required=5 tests=[BAYES_50=0.8, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id a-OjYqJ1or7A for <slim@ietfa.amsl.com>; Thu, 12 Oct 2017 22:56:55 -0700 (PDT)
Received: from bin-vsp-out-03.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 4A2A0132D79 for <slim@ietf.org>; Thu, 12 Oct 2017 22:56:55 -0700 (PDT)
X-Halon-ID: 4b0eebcd-afdb-11e7-83a7-0050569116f7
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-03.atm.binero.net (Halon) with ESMTPSA id 4b0eebcd-afdb-11e7-83a7-0050569116f7; Fri, 13 Oct 2017 07:56:49 +0200 (CEST)
To: "slim@ietf.org" <slim@ietf.org>, Randall Gellens <rg+ietf@randy.pensive.org>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <20aa7974-45a5-03ce-af01-334ac2176fc8@omnitor.se>
Date: Fri, 13 Oct 2017 07:56:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/u2d1gn4X-Wl-Iy3j1q4VVwm5rjU>
Subject: [Slim] New preliminary version of draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 05:56:57 -0000

To ease the task of creating a new version (-14) of 
draft-ietf-slim-negotiating-human-language, I have made proposed edits 
in the .xml version and sent them to Randall.

The edits address all open issues except #43, on how an implementation 
can easily "understand" which modality a language tag is meant to 
indicate. I suggest that we leave that one unsolved.

Briefly, the changes in the proposed new version are:

------------------------------------------------------------------------------------------------------------

9.1. Changes from draft-ietf-slim-...-13 to draft-ietf-slim-...-14

  o Deleted the asterisk parameter for not failing the call as agreed
  for Issue #26

  o "or text" deleted from sentence on undefined combinations for sign
  language, Issue #41

  o Reworded spoken/written language tag as requested by Issue #44

  o Rewording in section 5.2 to make clear that multiple indications
  represent alternatives to be selected from, as requested by Issue
  #46

  o Changed wording in introduction to avoid unsupported requirement
  as requested by Issue #47.

  o Deleted unused reference to draft-hellstrom-slim-modalitypref-02.

  o Changed reference from draft-ietf-slim-multilangcontent-08 to RFC
  8255.

  o Added Brian Rosen and Natasha Rooney to the acknowledgements.

--------------------------------------------------------------------------------------------------

The tickets are here: https://trac.ietf.org/trac/slim/report/3

I hope Randall finds the edits good and submits the result as version 
-14, so we can progress the draft.

I also hope to get draft-hellstrom-slim-modality-grouping reviewed and 
progressed as next topic in SLIM.

Regards

Gunnar

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se



From nobody Fri Oct 13 04:40:08 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3A4A312EC30 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 04:40:07 -0700 (PDT)
X-Quarantine-ID: <bLv6OkuvX72o>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -0.001
X-Spam-Level: 
X-Spam-Status: No, score=-0.001 tagged_above=-999 required=5 tests=[BAYES_40=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bLv6OkuvX72o for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 04:40:05 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id CF1DE1286C7 for <slim@ietf.org>; Fri, 13 Oct 2017 04:40:05 -0700 (PDT)
Received: from [99.111.97.136] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 04:44:09 -0700
Mime-Version: 1.0
Message-Id: <p0624061fd606562afe15@[99.111.97.136]>
In-Reply-To: <5833ea9b-c7fe-1cfa-2015-21e42b5c3d55@omnitor.se>
References: <5833ea9b-c7fe-1cfa-2015-21e42b5c3d55@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 04:40:00 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/lXs5_dxd-IhapD8WO9edzuisPss>
Subject: Re: [Slim] Simultaneity requirement in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 11:40:07 -0000

At 1:55 PM +0200 7/28/17, Gunnar Hellström wrote:

>  Rereading draft-ietf-slim-negotiating-human-language-13, I found a couple of minor issues. I bring them up in separate mails.
>
>  1. The introduction says that we support request of simultaneous text and voice, but that is excluded from the protocol.

How can it be excluded?  This is supported by SIP and SDP.  An audio media stream plus a text media stream.

>
>  This is the sentence:
>
>  "Another example would be a user who is able to
>     speak but is deaf or hard-of-hearing and requires a voice stream plus
>     a text stream."
>
>  This looks as a need to specify that the user wants to receive both voice and captions at the same time,

I don't think it implies anything about captions. It says a voice stream plus a text stream.

>   but that is one of the requirements I have tried to convince the group that we need, but it has not been accepted. We have said that specifying language in the same direction in two media means that they are alternatives to select from.
>
>  I suggest that the sentence is reworded to a case that is supported by this change:
>  "Another example would be a user who is able to
>     speak but is deaf or hard-of-hearing and requires to send spoken language in a voice stream and receive
>     written language in a text stream."
>
>  /Gunnar
>
>
>  --
>  -----------------------------------------
>  Gunnar Hellström
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>
>  _______________________________________________
>  SLIM mailing list
>  SLIM@ietf.org
>  https://www.ietf.org/mailman/listinfo/slim


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
If there's a possibility that something might be controversial,
then why not eliminate it?
  --Wild Rose, Wis. district administrator, explaining why _Bury
  My Heart at Wounded Knee_ by Dee Brown was removed from schools.


From nobody Fri Oct 13 04:41:56 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id CBCEF12EC30 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 04:41:55 -0700 (PDT)
X-Quarantine-ID: <ricLllM9K8mw>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ricLllM9K8mw for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 04:41:54 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 886E81326FE for <slim@ietf.org>; Fri, 13 Oct 2017 04:41:54 -0700 (PDT)
Received: from [99.111.97.136] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 04:45:59 -0700
Mime-Version: 1.0
Message-Id: <p06240620d60656c321e5@[99.111.97.136]>
In-Reply-To: <376ade2a-be29-33a6-b539-5cab2b847fcd@omnitor.se>
References: <376ade2a-be29-33a6-b539-5cab2b847fcd@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 04:41:49 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/d1h8bBFZM8UGGScDM2sPRsSyDUM>
Subject: Re: [Slim] How to know modality in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 11:41:56 -0000

At 2:12 PM +0200 7/28/17, Gunnar Hellström wrote:

>  I remember a comment in one of the reviews (maybe from Adam ?) mentioning that it would be good if there was a simple way to decide if a language tag is a sign language or a written or spoken language.
>  We have not responded to that comment.
>
>  I know one application scanning the IANA language registry at startup for that purpose and scanning for the word "sign" in the tag description. But that might be seen as an inappropriate way to use IANA registers if it get used by every phone in the future.
>
>  What can we say about this review comment? Do we need to add a modality indication parameter in the syntax? Or shall we strictly limit audio to have spoken languages, video to have signed languages and text and webrtc data channels to have written languages? Or shall we leave this problem to implementation?

I think it exceeds the scope of the draft.  It reads to me as a request for functionality that is inherent in all uses of language tags, not just the applications of this draft.
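
For reference, the heuristic application Gunnar describes (scanning the IANA registry at startup for "sign" in tag descriptions) might look like the sketch below. The registry excerpt is abbreviated, not the full IANA file, and the thread itself questions whether this is an appropriate use of the registry at scale.

```python
# Sketch of the "scan descriptions for 'sign'" heuristic discussed
# above; REGISTRY_SNIPPET is a tiny abbreviated stand-in for the real
# IANA Language Subtag Registry (records separated by "%%").
REGISTRY_SNIPPET = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
%%
Type: language
Subtag: sgn
Description: Sign languages
"""

def signed_subtags(registry_text):
    """Return subtags whose Description mentions 'sign'."""
    signed = set()
    for record in registry_text.split("%%"):
        fields = dict(
            line.split(": ", 1)
            for line in record.strip().splitlines()
            if ": " in line
        )
        if "sign" in fields.get("Description", "").lower():
            signed.add(fields["Subtag"])
    return signed

print(sorted(signed_subtags(REGISTRY_SNIPPET)))
```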

>
>  /Gunnar
>
>
>  --
>  -----------------------------------------
>  Gunnar Hellström
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>
>  _______________________________________________
>  SLIM mailing list
>  SLIM@ietf.org
>  https://www.ietf.org/mailman/listinfo/slim


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Peace:  In international affairs, a period of cheating between two
periods of fighting.


From nobody Fri Oct 13 04:51:30 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 09AE4133047 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 04:51:29 -0700 (PDT)
X-Quarantine-ID: <ArxvUJUuCWfh>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ArxvUJUuCWfh for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 04:51:27 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 7B91E1326FE for <slim@ietf.org>; Fri, 13 Oct 2017 04:51:27 -0700 (PDT)
Received: from [99.111.97.136] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 04:55:32 -0700
Mime-Version: 1.0
Message-Id: <p06240621d606585e823d@[99.111.97.136]>
In-Reply-To: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 04:51:21 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/78d11Hj_XR4Yb7WJykrazfhnDzI>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 11:51:29 -0000

At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:

>  We have dealt with this topic before, but rereading the draft indicates to me that we still need some tuning of the wording so that it is clear that the language indications for the same direction for different media are alternatives with no requirements that they need to be provided together, so that it is allowed to answer with just one media in each direction having language indication.
>
>  Suggested wording changes to make this clear:
>
>  ---Change 1 in 5.2, first paragraph----------------
>  ------old text---------
>  This document defines two media-level attributes starting with
>     'hlang' (short for "human interactive language") to negotiate which
>     human language is selected for use in each interactive media stream.
>  ------------new text--------------------
>  This document defines two media-level attributes starting with
>     'hlang' (short for "human interactive language") to negotiate which
>     human language is selected for use in each media stream used for interactive language communication.
>  -------end of change 1-------

I don't see how changing "each interactive media stream" to "each media stream used for interactive language communication" improves anything.  The term "interactive" implies human interaction.

>
>  ----Change 2 in 5.2, third paragraph ------
>  ----old text------
>    In an answer, 'hlang-send' is the language the answerer will send if
>     using the media for language (which in most cases is one of the
>     languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>     language the answerer expects to receive in the media (which in most
>     cases is one of the languages in the offer's 'hlang-send').
>  -----new text----
>    In an answer, 'hlang-send' is the language the answerer will send if
>     using the media for language (which in most cases is one of the
>     languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>     language the answerer expects to receive in the media if
>     using the media for language (which in most
>     cases is one of the languages in the offer's 'hlang-send').
>  ----end of change 2-------------------------------

I'm OK adding "if using the media for language" to the second clause.
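
As a toy illustration of how an answerer might read these attributes, the sketch below extracts 'hlang-send' / 'hlang-recv' from each media section. The SDP here is an invented example, not text from the draft; per the discussion above, the tags listed for a direction are alternatives to select from.

```python
# Invented example offer: audio and text streams, each listing
# hlang-send / hlang-recv language-tag alternatives in preference order.
SDP = """\
m=audio 49170 RTP/AVP 0
a=hlang-send:es en
a=hlang-recv:es en
m=text 45020 RTP/AVP 103
a=hlang-send:es
a=hlang-recv:es
"""

def hlang_by_media(sdp_text):
    """Map each m= section to its hlang-send / hlang-recv language tags."""
    sections, current = [], None
    for line in sdp_text.splitlines():
        if line.startswith("m="):
            current = {"media": line[2:].split()[0],
                       "hlang-send": [], "hlang-recv": []}
            sections.append(current)
        elif current is not None and line.startswith("a=hlang-"):
            attr, _, value = line[2:].partition(":")
            current[attr] = value.split()
    return sections

for sec in hlang_by_media(SDP):
    print(sec["media"], sec["hlang-send"], sec["hlang-recv"])
```

An answerer would pick at most one tag per attribute per stream, and may answer with a language indication on only one media stream per direction.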

>
>
>  /Gunnar
>
>  --
>  -----------------------------------------
>  Gunnar Hellström
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>
>  _______________________________________________
>  SLIM mailing list
>  SLIM@ietf.org
>  https://www.ietf.org/mailman/listinfo/slim


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
The idea that people know what they want is wrong.
They need to be pulled through the Web.
      --Laura Jennings, Vice President, Microsoft Network


From nobody Fri Oct 13 05:01:16 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 399B313207A for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:01:15 -0700 (PDT)
X-Quarantine-ID: <l5JIYGmwqLXa>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id l5JIYGmwqLXa for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:01:12 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id DEC0412EC30 for <slim@ietf.org>; Fri, 13 Oct 2017 05:01:12 -0700 (PDT)
Received: from [99.111.97.136] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 05:05:17 -0700
Mime-Version: 1.0
Message-Id: <p06240623d6065b412f8c@[99.111.97.136]>
In-Reply-To: <8233c526-f66d-041e-e4eb-cad6c22b8a73@omnitor.se>
References: <376ade2a-be29-33a6-b539-5cab2b847fcd@omnitor.se> <8233c526-f66d-041e-e4eb-cad6c22b8a73@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 05:01:06 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/L97kYaN-CjwAjEEjCykObclycFM>
Subject: Re: [Slim] How to know modality in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 12:01:15 -0000

At 12:46 AM +0200 7/30/17, Gunnar Hellström wrote:

>  The review comment on this topic was from Dale Worley and is found in section B of:
>
> <https://www.ietf.org/mail-archive/web/slim/current/msg00766.html>
>
>  by the sentence about specifying a view of the speaker in video: "
>
>  I think this mechanism needs to be described more exactly, and in
>  particular, it should not depend on the UA understanding which
>  language tags are spoken language tags."
>
>  It is this part we have not handled: " it should not depend on the UA understanding which language tags are spoken language tags"
>  That is a general issue, not really linked to the issue of the view of a speaker in the video stream.

Dale's suggestion of clarifying that it is the use of the exact same language tag in both the audio and video stream, rather than just a spoken tag on the video stream, is a good one.  I'll add that.

>
>
>  On 2017-07-28 at 14:12, Gunnar Hellström wrote:
>
>>  I remember a comment in one of the reviews (maybe from Adam ?) mentioning that it would be good if there was a simple way to decide if a language tag is a sign language or a written or spoken language.
>>  We have not responded to that comment.
>>
>>  I know one application scanning the IANA language registry at startup for that purpose and scanning for the word "sign" in the tag description. But that might be seen as an inappropriate way to use IANA registers if it get used by every phone in the future.
>>
>>  What can we say about this review comment? Do we need to add a modality indication parameter in the syntax? Or shall we strictly limit audio to have spoken languages, video to have signed languages and text and webrtc data channels to have written languages? Or shall we leave this problem to implementation?
>>
>>  /Gunnar
>>
>
>  --
>  -----------------------------------------
>  Gunnar Hellström
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>  +46 708 204 288
>
>  _______________________________________________
>  SLIM mailing list
>  SLIM@ietf.org
>  https://www.ietf.org/mailman/listinfo/slim


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Why does every plane have two pilots? Really, you only need one pilot.
Let's take out the second pilot. Let the bloody computer fly it.
    --Michael O'Leary, Ryanair CEO, regards eliminating co-pilots in
      airline operations. Interview in Bloomberg Businessweek,
      2 September 2010


From nobody Fri Oct 13 05:10:56 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A835E13293A for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:10:54 -0700 (PDT)
X-Quarantine-ID: <lqddbqYz410f>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id lqddbqYz410f for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:10:53 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 42C5913213D for <slim@ietf.org>; Fri, 13 Oct 2017 05:10:53 -0700 (PDT)
Received: from [99.111.97.136] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 05:14:57 -0700
Mime-Version: 1.0
Message-Id: <p06240624d6065db8c35f@[99.111.97.136]>
In-Reply-To: <CAOW+2dsOnkbmvWAUmB8ovnCsSMWubOVPyFtRCodFa43zkJ9TAg@mail.gmail.com>
References: <CAOW+2dv0dM+h4OG=iiE+PXakS88=tUj9YzqVB03R93P=FR-upA@mail.gmail.com> <b5f308dc-0a38-0d0c-c5c1-a3c079ee3d94@omnitor.se> <d0d6b3ed-4a6d-a16f-0f7c-42afec619ef5@alum.mit.edu> <CAOW+2duaNtCu0_rCOrBKriWz6eyoKWu3OkQWRmOCHFg39aG7+A@mail.gmail.com> <518f72c7-da4f-120e-f77f-cd61719410f3@alum.mit.edu> <7f6b44ad-8b90-0c21-b841-763be03c32af@omnitor.se> <p06240604d59e9107f51c@99.111.97.136> <5f73c02c-801e-bf33-c41d-1809dd9dc25b@comcast.net> <CAOW+2duxE9-zGoczKmTR8opwcohVdO1Ma-bXPyJE44s56Vg_yw@mail.gmail.com> <p06240600d59ec392cdbd@99.111.97.136> <CAOW+2dvm5UvBL9kg=9pey7LQz13yO0q0ibDG3UCScfw9Dmm6mg@mail.gmail.com> <p06240605d59ed132ff34@99.111.97.136> <71dcb50e-6d36-1445-6cc5-6b91f001bc50@omnitor.se> <CAOW+2dsOnkbmvWAUmB8ovnCsSMWubOVPyFtRCodFa43zkJ9TAg@mail.gmail.com>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 05:10:47 -0700
To: Bernard Aboba <bernard.aboba@gmail.com>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/c9lSKI4jHqNdPhM5qu2PhAE9Z0g>
Subject: Re: [Slim] Human-language Issue 26: Asterisk modifier scope
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 12:10:54 -0000

At 3:01 PM -0700 8/1/17, Bernard Aboba wrote:

>  A potential resolution has been proposed to Issue 26: Delete the Asterisk=
=2E
>
>  Are there any objections to this approach?

Hearing none, the asterisk is deleted in -14.

>
>  On Tue, Aug 1, 2017 at 2:46 PM, Gunnar=20
> Hellstr=F6m=20
> <<mailto:gunnar.hellstrom@omnitor.se>gunnar.hellstrom@omnitor.se>=20
> wrote:
>
>  Den 2017-07-27 kl. 01:11, skrev Randall Gellens:
>
>  At 3:20 PM -0700 7/26/17, Bernard Aboba wrote:
>
>   Randy said:
>
>   "Therefore, if the asterisk is causing=20
> heartburn, we can remove it without technical=20
> impact."
>
>   [BA] Unless there is some compelling reason to=20
> keep it, removing the "*" might be the simplest=20
> way to resolve Issue 26.
>
>
>  There is no compelling reason to keep it,=20
> especially since it's purely advisory.
>
>  I suggest we delete it, revise the draft, and=20
> advance it.  In my view, we've spent far too=20
> much time on the asterisk, which is a supremely=20
> trivial aspect of the draft.
>
>  --Randy
>
>  +1
>  Gunnar
>
>
>
>   On Wed, Jul 26, 2017 at 3:12 PM, Randall Gellens
>   <rg+ietf@randy.pensive.org> wrote:
>
>   At 2:35 PM -0700 7/26/17, Bernard Aboba wrote:
>
>    Paul said:
>
>    "Then what is the expectation regarding whether to accept a call
>    with no matching language?"
>
>    "I guess it would make the most sense for the callee to accept
>    the call unless he doesn't want a call without matching language.
>    Then if the callee does accept the call without the matching
>    language, the caller can still terminate the call once he
>    realizes that is the situation.
>
>    That seems reasonable to me."
>
>    [BA] The inability to negotiate a matching language is different
>    from other SDP negotiation failures such as the inability to
>    negotiate a matching codec, since even without a matching
>    language, it still may be useful to successfully negotiate media
>    characteristics and bring up the call. Therefore it is not clear
>    to me how much value the "*" has in the current draft, regardless
>    of how it is defined.
>
>    For example, if the callee has a policy that dictates that it
>    will always accept the call (e.g. a PSAP), the callee might
>    always ignore the "*".  This might be frustrating to the caller,
>    but the caller may still choose not to terminate the call.
>
>    Even if the callee cares deeply about a matching language (e.g. a
>    voice recognition or chat bot system that only supports a subset
>    of languages), the callee might still choose to ignore the "*".
>
>    For example, the callee might accept the call in order to provide
>    information to the caller (e.g. a pre-recorded voice or text
>    message indicating that the caller's languages are not supported).
>
>
>   The draft has said for some time that the asterisk is nothing more
>   than advisory, and has explicitly said that the callee is free to
>   ignore either the presence or absence of an asterisk:
>
>      The called party MAY ignore the indication, e.g., for the emergency
>      services use case, regardless of the absence of an asterisk, a PSAP
>      will likely not fail the call; some call centers might reject a call
>      even if the offer contains an asterisk.
>
>   Therefore, if the asterisk is causing heartburn, we can remove it
>   without technical impact.
>
>   --Randy
>
>
>    On Wed, Jul 26, 2017 at 1:46 PM, Paul Kyzivat
>    <paul.kyzivat@comcast.net> wrote:
>
>    On 7/26/17 2:35 PM, Randall Gellens wrote:
>
>    Why don't we just take out of the current draft the asterisk and
>    the ability to indicate a caller preference to not fail the call?
>    Then Gunnar's draft(s) are free to use the asterisk.
>
>
>    Then what is the expectation regarding whether to accept a call
>    with no matching language?
>
>    I guess it would make the most sense for the callee to accept the
>    call unless he doesn't want a call without matching language.
>    Then if the callee does accept the call without the matching
>    language the caller can still terminate the call once he realizes
>    that is the situation.
>
>    That seems reasonable to me.
>
>            Thanks,
>            Paul
>
>
>    _______________________________________________
>    SLIM mailing list
>    SLIM@ietf.org
>    https://www.ietf.org/mailman/listinfo/slim
>
>
>
>    _______________________________________________
>    SLIM mailing list
>    SLIM@ietf.org
>    https://www.ietf.org/mailman/listinfo/slim
>
>
>
>   --
>   Randall Gellens
>   Opinions are personal;    facts are suspect;    I speak for myself only
>   -------------- Randomly selected tag: ---------------
>   It isn't pollution that's harming the environment.  It's the
>   impurities in our air and water that are doing it.
>                   --Dan Quayle (then-U.S. Vice-President)
>
>
>
>   _______________________________________________
>   SLIM mailing list
>   SLIM@ietf.org
>   https://www.ietf.org/mailman/listinfo/slim
>
>
>
>
>  --
>
>  -----------------------------------------
>  Gunnar Hellström
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>  +46 708 204 288
>
>
>
>  _______________________________________________
>  SLIM mailing list
>  SLIM@ietf.org
>  https://www.ietf.org/mailman/listinfo/slim


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
The human body is not so much the temple of Aphrodite as a
badly-designed gravity-resisting mechanism in constant danger
of going wrong.                               --Quentin Crisp


From nobody Fri Oct 13 05:21:48 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 16F0B132705 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:21:46 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.601
X-Spam-Level: 
X-Spam-Status: No, score=-2.601 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id jUeDzl9-flg0 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:21:44 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 334F913213D for <slim@ietf.org>; Fri, 13 Oct 2017 05:21:44 -0700 (PDT)
X-Halon-ID: 067c3223-b011-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id 067c3223-b011-11e7-99c0-005056917f90; Fri, 13 Oct 2017 14:21:23 +0200 (CEST)
To: Randall Gellens <rg+ietf@randy.pensive.org>, "slim@ietf.org" <slim@ietf.org>
References: <376ade2a-be29-33a6-b539-5cab2b847fcd@omnitor.se> <8233c526-f66d-041e-e4eb-cad6c22b8a73@omnitor.se> <p06240623d6065b412f8c@[99.111.97.136]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <127db3d4-910b-f8f9-2c34-9c9d2f928d35@omnitor.se>
Date: Fri, 13 Oct 2017 14:21:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p06240623d6065b412f8c@[99.111.97.136]>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/jlo7Ukkslg2qdp_Hku4hFjfwCFw>
Subject: Re: [Slim] How to know modality in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 12:21:46 -0000

On 2017-10-13 at 14:01, Randall Gellens wrote:
> At 12:46 AM +0200 7/30/17, Gunnar Hellström wrote:
>
>> The review comment on this topic was from Dale Worley and is found 
>> in section B of:
>>
>>
>> https://www.ietf.org/mail-archive/web/slim/current/msg00766.html
>>
>>
>> by the sentence about specifying a view of the speaker in video: "
>>
>> I think this mechanism needs to be described more exactly, and in
>> particular, it should not depend on the UA understanding which
>> language tags are spoken language tags."
>>
>> It is this part we have not handled: " it should not depend on the 
>> UA understanding which language tags are spoken language tags"
>> That is a general issue, not really linked to the issue of the view 
>> of a speaker in the video stream.
>
> Dale's suggestion of clarifying that it is the use of the exact same 
> language tag in both the audio and video stream, rather than just a 
> spoken tag on the video stream, is a good one. I'll add that.
<GH>We already have that as a valid combination. But you may, if you 
want, add that it is the way to know that video is used for a view of 
a talking person.
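
A minimal sketch of the combination discussed above (hypothetical: the ports and payload types are invented, and the 'hlang' attribute syntax is assumed from draft -14): the exact same spoken-language tag on both the audio and the video stream signals that the video is used for a view of the talking person.

```sdp
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=video 51372 RTP/AVP 96
a=hlang-send:en
a=hlang-recv:en
```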
>
>>
>>
>> On 2017-07-28 at 14:12, Gunnar Hellström wrote:
>>
>>> I remember a comment in one of the reviews (maybe from Adam?) 
>>> mentioning that it would be good if there were a simple way to decide 
>>> whether a language tag is a sign language or a written or spoken language.
>>> We have not responded to that comment.
>>>
>>> I know of one application that scans the IANA language subtag registry 
>>> at startup for that purpose, looking for the word "sign" in the tag 
>>> description. But that might be seen as an inappropriate way to use 
>>> IANA registries if it gets used by every phone in the future.
>>>
>>> What can we say about this review comment? Do we need to add a 
>>> modality indication parameter in the syntax? Or shall we strictly 
>>> limit audio to have spoken languages, video to have signed languages 
>>> and text and webrtc data channels to have written languages? Or 
>>> shall we leave this problem to implementation?
>>>
>>> /Gunnar
>>>
>>
>> --
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>> +46 708 204 288
>>
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org
>> https://www.ietf.org/mailman/listinfo/slim
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Fri Oct 13 05:28:15 2017
Return-Path: <internet-drafts@ietf.org>
X-Original-To: slim@ietf.org
Delivered-To: slim@ietfa.amsl.com
Received: from ietfa.amsl.com (localhost [IPv6:::1]) by ietfa.amsl.com (Postfix) with ESMTP id B2A9D13207A; Fri, 13 Oct 2017 05:28:14 -0700 (PDT)
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
From: internet-drafts@ietf.org
To: <i-d-announce@ietf.org>
Cc: slim@ietf.org
X-Test-IDTracker: no
X-IETF-IDTracker: 6.63.1
Auto-Submitted: auto-generated
Precedence: bulk
Message-ID: <150789769465.23923.10838316479776071981@ietfa.amsl.com>
Date: Fri, 13 Oct 2017 05:28:14 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/lRNq-KJIA4FLZxhmuLFdhPMkR5k>
Subject: [Slim] I-D Action: draft-ietf-slim-negotiating-human-language-14.txt
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 12:28:15 -0000

A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Selection of Language for Internet Media WG of the IETF.

        Title           : Negotiating Human Language in Real-Time Communications
        Author          : Randall Gellens
        Filename        : draft-ietf-slim-negotiating-human-language-14.txt
        Pages           : 16
        Date            : 2017-10-13

Abstract:
   Users have various human (natural) language needs, abilities, and
   preferences regarding spoken, written, and signed languages.  This
   document adds new SDP media-level attributes so that when
   establishing interactive communication sessions ("calls"), it is
   possible to negotiate (communicate and match) the caller's language
   and media needs with the capabilities of the called party.  This is
   especially important with emergency calls, where a call can be
   handled by a call taker capable of communicating with the user, or a
   translator or relay operator can be bridged into the call during
   setup, but this applies to non-emergency calls as well (as an
   example, when calling a company call center).

   This document describes the need and a solution using new SDP media
   attributes.


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-slim-negotiating-human-language/

There are also htmlized versions available at:
https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-14
https://datatracker.ietf.org/doc/html/draft-ietf-slim-negotiating-human-language-14

A diff from the previous version is available at:
https://www.ietf.org/rfcdiff?url2=draft-ietf-slim-negotiating-human-language-14


Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/


From nobody Fri Oct 13 05:42:05 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E89EA132705 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:42:03 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id kBP4goBL5lO5 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:42:02 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id CCABE13207A for <slim@ietf.org>; Fri, 13 Oct 2017 05:42:01 -0700 (PDT)
X-Halon-ID: dc623a3a-b013-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id dc623a3a-b013-11e7-99c0-005056917f90; Fri, 13 Oct 2017 14:41:41 +0200 (CEST)
To: slim@ietf.org
References: <5833ea9b-c7fe-1cfa-2015-21e42b5c3d55@omnitor.se> <p0624061fd606562afe15@[99.111.97.136]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <2b75cef5-296e-359d-433f-b113bce7a540@omnitor.se>
Date: Fri, 13 Oct 2017 14:41:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p0624061fd606562afe15@[99.111.97.136]>
Content-Type: multipart/alternative; boundary="------------FE848451980288376AE1DBEB"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/IQdo6h3wY7zU6mI-fsfW-UmGjyg>
Subject: Re: [Slim] Simultaneity requirement in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 12:42:04 -0000

This is a multi-part message in MIME format.
--------------FE848451980288376AE1DBEB
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit

On 2017-10-13 at 13:40, Randall Gellens wrote:
> At 1:55 PM +0200 7/28/17, Gunnar Hellström wrote:
>
>> Rereading draft-ietf-slim-negotiating-human-language-13, I found a 
>> couple of minor issues. I bring them up in separate mails.
>>
>> 1. The introduction says that we support request of simultaneous 
>> text and voice, but that is excluded from the protocol.
>
> How can it be excluded? This is supported by SIP and SDP. An audio 
> media stream plus a text media stream.
<GH> This is one of the shortcomings I have struggled all along to get 
acceptance to amend. We have no way to differentiate between specifying 
"I strongly prefer to receive both text and voice; both together are of 
value to me" and "I can accept to receive either text or voice, or both 
if you want; you select."

The sentence in the intro indicated the first interpretation. Since we 
lack a way to differentiate that from the second, I prefer that we have 
a sentence aiming at the second interpretation. As seen at the end of 
this mail, I have made a slight change in the proposed edits so that it 
aims at the second interpretation. I prefer to have that change made.
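
For illustration, the case under discussion might look like this in an offer (a hypothetical sketch assuming the draft's 'hlang' syntax; ports and payload types are invented): the user sends spoken English and wants to receive both English audio and English text. Nothing in the offer distinguishes "deliver both to me together" from "deliver either one; you select."

```sdp
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=text 45020 RTP/AVP 103
a=hlang-recv:en
```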


>
>>
>> This is the sentence:
>>
>> "Another example would be a user who is able to
>>  speak but is deaf or hard-of-hearing and requires a voice stream 
>> plus
>>  a text stream."
>>
>> This looks like a need to specify that the user wants to receive both 
>> voice and captions at the same time,
>
> I don't think it implies anything about captions. It says a voice 
> stream plus a text stream.

<GH> Right. Call it text then. But the "plus" indicates simultaneity, 
and we have no way to indicate a preference for simultaneity versus 
selection among alternatives. That is why I have drafted 
draft-hellstrom-slim-modality-grouping-00.
>
>>  but that is one of the requirements I have tried to convince the 
>> group that we need, but it has not been accepted. We have said that 
>> specifying language in the same direction in two media means that 
>> they are alternatives to select from.
>>
>> I suggest that the sentence is reworded to a case that is supported 
>> by this change:
>> "Another example would be a user who is able to
>>  speak but is deaf or hard-of-hearing and needs to send spoken 
>> language in a voice stream and receive
>>  written language in a text stream."
<GH>I still think this is needed to make the intro true.
>>
>> /Gunnar
>>
>>
>> --
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>>
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org
>> https://www.ietf.org/mailman/listinfo/slim
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------FE848451980288376AE1DBEB--


From nobody Fri Oct 13 05:57:16 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3E03B13301F for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:57:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.601
X-Spam-Level: 
X-Spam-Status: No, score=-2.601 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id SuYUAI4IgltQ for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 05:57:13 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 69FBC13207A for <slim@ietf.org>; Fri, 13 Oct 2017 05:57:13 -0700 (PDT)
X-Halon-ID: fba806c8-b015-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id fba806c8-b015-11e7-99c0-005056917f90; Fri, 13 Oct 2017 14:56:52 +0200 (CEST)
To: Randall Gellens <rg+ietf@randy.pensive.org>, "slim@ietf.org" <slim@ietf.org>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@[99.111.97.136]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se>
Date: Fri, 13 Oct 2017 14:57:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p06240621d606585e823d@[99.111.97.136]>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/QQcxTF_Syl2GIF6ObmOoAka8Kf4>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 12:57:15 -0000

On 2017-10-13 at 13:51, Randall Gellens wrote:
> At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>
>> We have dealt with this topic before, but rereading the draft 
>> indicates to me that we still need some tuning of the wording to make 
>> it clear that language indications for the same direction in 
>> different media are alternatives, with no requirement that they be 
>> provided together, so that it is allowed to answer with just one 
>> media stream in each direction carrying a language indication.
>>
>> Suggested wording changes to make this clear:
>>
>> ---Change 1 in 5.2, first paragraph----------------
>> ------old text---------
>> This document defines two media-level attributes starting with
>>  'hlang' (short for "human interactive language") to negotiate which
>>  human language is selected for use in each interactive media stream.
>> ------------new text--------------------
>> This document defines two media-level attributes starting with
>>  'hlang' (short for "human interactive language") to negotiate which
>>  human language is selected for use in each media stream used for 
>> interactive language communication.
>> -------end of change 1-------
>
> I don't see how changing "each interactive media stream" to "each 
> media stream used for interactive language communication" improves 
> anything. The term "interactive" implies human interaction.
<GH>Yes, but human interaction can be to show things in video without 
being language communication.
What I am aiming at is to clearly indicate that the language indications 
are alternatives to select from. The wording "use in each interactive 
media stream" sounds to me as if you MUST use all the agreed languages. 
That is the same misinterpretation you initially attributed to the SDP 
'lang' attribute. We need to get away from that interpretation. My 
wording was intended to accomplish that, but it might have been too 
weak. The key word is "used", which is intended to mean that if a media 
stream is selected to be used for language communication, then the 
agreed language is the one to be used.
So, I prefer my wording, unless you can create something even clearer 
conveying that we are talking about alternatives to select from.
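
A sketch of the alternatives-to-select-from reading (hypothetical values; the 'hlang' syntax and multi-language preference lists are assumed from draft -14; ports and payload types are invented). The offer lists receive languages on two media streams as alternatives:

```sdp
m=audio 49170 RTP/AVP 0
a=hlang-recv:en sv
m=text 45020 RTP/AVP 103
a=hlang-recv:en sv
```

and the answer would then be free to carry a language indication on only the one stream actually selected for language communication:

```sdp
m=audio 49172 RTP/AVP 0
m=text 45022 RTP/AVP 103
a=hlang-send:sv
```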
>
>>
>> ----Change 2 in 5.2, third paragraph ------
>> ----old text------
>>  In an answer, 'hlang-send' is the language the answerer will send if
>>  using the media for language (which in most cases is one of the
>>  languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>  language the answerer expects to receive in the media (which in most
>>  cases is one of the languages in the offer's 'hlang-send').
>> -----new text----
>>  In an answer, 'hlang-send' is the language the answerer will send if
>>  using the media for language (which in most cases is one of the
>>  languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>  language the answerer expects to receive in the media if
>>  using the media for language (which in most
>>  cases is one of the languages in the offer's 'hlang-send').
>> ----end of change 2-------------------------------
>
> I'm OK adding "if using the media for language" to the second clause.
>
>>
>>
>> /Gunnar
>>
>> --
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>>
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org
>> https://www.ietf.org/mailman/listinfo/slim
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Fri Oct 13 06:13:11 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D1CAA13301F for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 06:13:09 -0700 (PDT)
X-Quarantine-ID: <3j1iqJ3khjU4>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3j1iqJ3khjU4 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 06:13:08 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 8350213207A for <slim@ietf.org>; Fri, 13 Oct 2017 06:13:08 -0700 (PDT)
Received: from [99.111.97.136] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 06:17:12 -0700
Mime-Version: 1.0
Message-Id: <p06240628d6066c091e76@[99.111.97.136]>
In-Reply-To: <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@[99.111.97.136]> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 06:13:03 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/WTXbtyzySA7uY-YSlqrD9WZPBYE>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 13:13:10 -0000

I think we've addressed the concerns that existed with earlier versions
of the draft.

At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:

>  Den 2017-10-13 kl. 13:51, skrev Randall Gellens:
>>  At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>
>>>   We have dealt with this topic before, but
>>> rereading the draft indicates to me that we
>>> still need some tuning of the wording so that
>>> it is clear that the language indications for
>>> the same direction for different media are
>>> alternatives with no requirements that they
>>> need to be provided together, so that it is
>>> allowed to answer with just one media in each
>>> direction having language indication.
>>>
>>>   Suggested wording changes to make this clear:
>>>
>>>   ---Change 1 in 5.2, first paragraph----------------
>>>   ------old text---------
>>>   This document defines two media-level attributes starting with
>>>      'hlang' (short for "human interactive language") to negotiate which
>>>      human language is selected for use in each interactive media stream.
>>>   ------------new text--------------------
>>>   This document defines two media-level attributes starting with
>>>      'hlang' (short for "human interactive language") to negotiate which
>>>      human language is selected for use in
>>> each media stream used for interactive
>>> language communication.
>>>   -------end of change 1-------
>>
>>  I don't see how changing "each interactive
>> media stream" to "each media stream used for
>> interactive language communication" improves
>> anything.  The term "interactive" implies
>> human interaction.
>  <GH>Yes, but human interaction can be to show
> things in video without being language
> communication.
>  What I am aiming at is to clearly indicate that
> the language indications are alternatives to
> select from. The wording "use in each
> interactive media stream" sounds to me that you
> MUST use all the agreed languages. That is the
> same mistake that you initially blamed the Lang
> SDP attribute to mean. We need to get away from
> that interpretation. My wording was intended to
> accomplish that, but it might have been too
> weak. The key word is "used" that is intended
> to mean that if a media stream is selected to
> be used for language communication then the
> agreed language is the one to be used.
>  So, I prefer my wording, or if you can create
> something even more clear that we are talking
> about alternatives to select from.
>>
>>>
>>>   ----Change 2 in 5.2, third paragraph ------
>>>   ----old text------
>>>     In an answer, 'hlang-send' is the language the answerer will send if
>>>      using the media for language (which in most cases is one of the
>>>      languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>      language the answerer expects to receive in the media (which in most
>>>      cases is one of the languages in the offer's 'hlang-send').
>>>   -----new text----
>>>     In an answer, 'hlang-send' is the language the answerer will send if
>>>      using the media for language (which in most cases is one of the
>>>      languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>      language the answerer expects to receive in the media if
>>>      using the media for language (which in most
>>>      cases is one of the languages in the offer's 'hlang-send').
>>>   ----end of change 2-------------------------------
>>
>>  I'm OK adding "if using the media for language" to the second clause.
>>
>>>
>>>
>>>   /Gunnar
>>>
>>>   --
>>>   -----------------------------------------
>>>   Gunnar Hellström
>>>   Omnitor
>>>   gunnar.hellstrom@omnitor.se
>>>
>>>   _______________________________________________
>>>   SLIM mailing list
>>>   SLIM@ietf.org
>>>   https://www.ietf.org/mailman/listinfo/slim
>>
>>
>
>  --
>  -----------------------------------------
>  Gunnar Hellström
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>  +46 708 204 288


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
The old believe everything; the middle-aged suspect everything;
the young know everything.                        --Oscar Wilde


From nobody Fri Oct 13 06:15:42 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4A326133079 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 06:15:41 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qmH8JKJvAESk for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 06:15:39 -0700 (PDT)
Received: from bin-vsp-out-01.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 39EA213304A for <slim@ietf.org>; Fri, 13 Oct 2017 06:15:35 -0700 (PDT)
X-Halon-ID: 832cbd95-b018-11e7-9c60-005056917a89
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-01.atm.binero.net (Halon) with ESMTPSA id 832cbd95-b018-11e7-9c60-005056917a89; Fri, 13 Oct 2017 15:14:58 +0200 (CEST)
To: slim@ietf.org
References: <150789769465.23923.10838316479776071981@ietfa.amsl.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <2e32f32c-7631-47ad-499b-f97beb8e8d66@omnitor.se>
Date: Fri, 13 Oct 2017 15:15:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <150789769465.23923.10838316479776071981@ietfa.amsl.com>
Content-Type: multipart/alternative; boundary="------------5CC4EE4A504189C26D9E4918"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/S9IRmgS4pSyDLMFadMJsWMKI2Pk>
Subject: Re: [Slim] I-D Action: draft-ietf-slim-negotiating-human-language-14.txt
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 13:15:41 -0000

This is a multi-part message in MIME format.
--------------5CC4EE4A504189C26D9E4918
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit

Thanks Randall, good to see progress.

Apart from the comments I made on the separate issues, I also noticed 
that you deleted this sentence from the introduction without giving a 
reason.

"Note that separate work may introduce additional information
regarding language/modality preferences among media."

That separate work is ongoing, and it is no longer related to the 
asterisk.

Regards

Gunnar





Den 2017-10-13 kl. 14:28, skrev internet-drafts@ietf.org:
> A New Internet-Draft is available from the on-line Internet-Drafts directories.
> This draft is a work item of the Selection of Language for Internet Media WG of the IETF.
>
>          Title           : Negotiating Human Language in Real-Time Communications
>          Author          : Randall Gellens
> 	Filename        : draft-ietf-slim-negotiating-human-language-14.txt
> 	Pages           : 16
> 	Date            : 2017-10-13
>
> Abstract:
>     Users have various human (natural) language needs, abilities, and
>     preferences regarding spoken, written, and signed languages.  This
>     document adds new SDP media-level attributes so that when
>     establishing interactive communication sessions ("calls"), it is
>     possible to negotiate (communicate and match) the caller's language
>     and media needs with the capabilities of the called party.  This is
>     especially important with emergency calls, where a call can be
>     handled by a call taker capable of communicating with the user, or a
>     translator or relay operator can be bridged into the call during
>     setup, but this applies to non-emergency calls as well (as an
>     example, when calling a company call center).
>
>     This document describes the need and a solution using new SDP media
>     attributes.
>
>
> The IETF datatracker status page for this draft is:
> https://datatracker.ietf.org/doc/draft-ietf-slim-negotiating-human-language/
>
> There are also htmlized versions available at:
> https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-14
> https://datatracker.ietf.org/doc/html/draft-ietf-slim-negotiating-human-language-14
>
> A diff from the previous version is available at:
> https://www.ietf.org/rfcdiff?url2=draft-ietf-slim-negotiating-human-language-14
>
>
> Please note that it may take a couple of minutes from the time of submission
> until the htmlized version and diff are available at tools.ietf.org.
>
> Internet-Drafts are also available by anonymous FTP at:
> ftp://ftp.ietf.org/internet-drafts/
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288



--------------5CC4EE4A504189C26D9E4918--


From nobody Fri Oct 13 06:33:09 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 47A91132F69 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 06:33:08 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id o1sRVG2OIrdQ for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 06:33:01 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 4CD7713207A for <slim@ietf.org>; Fri, 13 Oct 2017 06:33:01 -0700 (PDT)
X-Halon-ID: fbb450f8-b01a-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id fbb450f8-b01a-11e7-99c0-005056917f90; Fri, 13 Oct 2017 15:32:40 +0200 (CEST)
To: Randall Gellens <rg+ietf@randy.pensive.org>, "slim@ietf.org" <slim@ietf.org>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@[99.111.97.136]> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@[99.111.97.136]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se>
Date: Fri, 13 Oct 2017 15:32:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p06240628d6066c091e76@[99.111.97.136]>
Content-Type: multipart/alternative; boundary="------------07C723E5D7BDB05DC82DB3FE"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/h5E6xVuD3i977IDZ_oPGV4YmDdI>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 13:33:08 -0000

This is a multi-part message in MIME format.
--------------07C723E5D7BDB05DC82DB3FE
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit

Change 2 is fine and solves part of the problem.

But the current wording at my proposed change 1 still tells me that if I 
offer English text and English voice, I have selected to use both; and, 
even stronger, if an answer contains English text and English voice, 
then both will be used in the session, which is exactly the problem you 
pointed out with the Lang attribute. We need to state clearly in the 
draft that the indications are alternatives to select among, so that 
the next generation of implementers does not also say that it is too 
vague about what it means.

The current wording at change one still says that each interactive 
stream is used.

How about: "to negotiate which
  human language is selected for *possible* use in each interactive 
media stream."
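
For concreteness, a minimal offer/answer sketch of how such alternatives
could look on the wire (a hypothetical illustration, not taken from the
draft: the ports, payload types, and the 'ase'/'en' language tags are
example values, using the draft's hlang-send/hlang-recv attribute names):

   Offer (language indications on both media, as alternatives):

      m=video 49170 RTP/AVP 31
      a=hlang-send:ase
      a=hlang-recv:ase
      m=text 49172 RTP/AVP 103
      a=hlang-send:en
      a=hlang-recv:en

   Answer (both streams accepted, but a language indication returned
   only on the one selected for language use):

      m=video 51370 RTP/AVP 31
      m=text 51372 RTP/AVP 103
      a=hlang-send:en
      a=hlang-recv:en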

/Gunnar


Den 2017-10-13 kl. 15:13, skrev Randall Gellens:
> I think we've addressed the concerns that existed with earlier 
> versions of the draft.
>
> At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:
>
>> Den 2017-10-13 kl. 13:51, skrev Randall Gellens:
>>> At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>>
>>>>  We have dealt with this topic before, but rereading the draft 
>>>> indicates to me that we still need some tuning of the wording so 
>>>> that it is clear that the language indications for the same 
>>>> direction for different media are alternatives with no requirements 
>>>> that they need to be provided together, so that it is allowed to 
>>>> answer with just one media in each direction having language 
>>>> indication.
>>>>
>>>>  Suggested wording changes to make this clear:
>>>>
>>>>  ---Change 1 in 5.2, first paragraph----------------
>>>>  ------old text---------
>>>>  This document defines two media-level attributes starting with
>>>>  'hlang' (short for "human interactive language") to negotiate 
>>>> which
>>>>  human language is selected for use in each interactive media 
>>>> stream.
>>>>  ------------new text--------------------
>>>>  This document defines two media-level attributes starting with
>>>>  'hlang' (short for "human interactive language") to negotiate 
>>>> which
>>>>  human language is selected for use in each media stream used 
>>>> for interactive language communication.
>>>>  -------end of change 1-------
>>>
>>> I don't see how changing "each interactive media stream" to "each 
>>> media stream used for interactive language communication" improves 
>>> anything. The term "interactive" implies human interaction.
>> <GH>Yes, but human interaction can be to show things in video 
>> without being language communication.
>> What I am aiming at is to clearly indicate that the language 
>> indications are alternatives to select from. The wording "use in each 
>> interactive media stream" sounds to me that you MUST use all the 
>> agreed languages. That is the same mistake that you initially blamed 
>> the Lang SDP attribute to mean. We need to get away from that 
>> interpretation. My wording was intended to accomplish that, but it 
>> might have been too weak. The key word is "used" that is intended to 
>> mean that if a media stream is selected to be used for language 
>> communication then the agreed language is the one to be used.
>> So, I prefer my wording, or if you can create something even more 
>> clear that we are talking about alternatives to select from.
>>>
>>>>
>>>>  ----Change 2 in 5.2, third paragraph ------
>>>>  ----old text------
>>>>  In an answer, 'hlang-send' is the language the answerer will 
>>>> send if
>>>>  using the media for language (which in most cases is one of the
>>>>  languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>  language the answerer expects to receive in the media (which 
>>>> in most
>>>>  cases is one of the languages in the offer's 'hlang-send').
>>>>  -----new text----
>>>>  In an answer, 'hlang-send' is the language the answerer will 
>>>> send if
>>>>  using the media for language (which in most cases is one of the
>>>>  languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>  language the answerer expects to receive in the media if
>>>>  using the media for language (which in most
>>>>  cases is one of the languages in the offer's 'hlang-send').
>>>>  ----end of change 2-------------------------------
>>>
>>> I'm OK adding "if using the media for language" to the second clause.
>>>
>>>>
>>>>
>>>>  /Gunnar
>>>>
>>>>  --
>>>>  -----------------------------------------
>>>>  Gunnar Hellström
>>>>  Omnitor
>>>>  gunnar.hellstrom@omnitor.se
>>>>
>>>>  _______________________________________________
>>>>  SLIM mailing list
>>>>  SLIM@ietf.org
>>>>  https://www.ietf.org/mailman/listinfo/slim
>>>
>>>
>>
>> --
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>> +46 708 204 288
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------07C723E5D7BDB05DC82DB3FE
Content-Type: text/html; charset=windows-1252
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html;
      charset=windows-1252">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Change 2 is fine and solves part of the problem.</p>
    <p>But the current wording at my proposed change 1 still tells me
      that if I offer English text and English voice, it means that I
      have selected to use both, and even stronger if an answer contains
      English text and English voice, then both will be used in the
      session, exactly as you indicated was the problem with the Lang
      attribute. We need to get the possibility to select among
      alternatives clearly into the draft so that not next generation
      implementers also say that it is too vague about what it means.</p>
    <p>The current wording at change one still says that each
      interactive stream is used. <br>
    </p>
    <p>How about: "to negotiate which
      <br>
       human language is selected for<b> possible </b>use in each
      interactive media stream."</p>
    <p>/Gunnar<br>
    </p>
    <br>
    <div class="moz-cite-prefix">Den 2017-10-13 kl. 15:13, skrev Randall
      Gellens:<br>
    </div>
    <blockquote type="cite"
      cite="mid:p06240628d6066c091e76@[99.111.97.136]">I think we've
      addressed the concerns that existed with earlier versions of the
      draft.
      <br>
      <br>
      At 2:57 PM +0200 10/13/17, Gunnar Hellstrm wrote:
      <br>
      <br>
      <blockquote type="cite">Den 2017-10-13 kl. 13:51, skrev Randall
        Gellens:
        <br>
        <blockquote type="cite">At 12:06 AM +0200 7/29/17, Gunnar
          Hellstrm wrote:
          <br>
          <br>
          <blockquote type="cite"> We have dealt with this topic
            before, but rereading the draft indicates to me that we
            still need some tuning of the wording so that it is clear
            that the language indications for the same direction for
            different media are alternatives with no requirements that
            they need to be provided together, so that it is allowed to
            answer with just one media in each direction having language
            indication.
            <br>
            <br>
             Suggested wording changes to make this clear:
            <br>
            <br>
             ---Change 1 in 5.2, first paragraph----------------
            <br>
             ------old text---------
            <br>
             This document defines two media-level attributes starting
            with
            <br>
             'hlang' (short for "human interactive language") to
            negotiate which
            <br>
             human language is selected for use in each interactive
            media stream.
            <br>
             ------------new text--------------------
            <br>
             This document defines two media-level attributes starting
            with
            <br>
             'hlang' (short for "human interactive language") to
            negotiate which
            <br>
             human language is selected for use in each media stream
            used for interactive language communication.
            <br>
             -------end of change 1-------
            <br>
          </blockquote>
          <br>
          I don't see how changing "each interactive media stream" to
          "each media stream used for interactive language
          communication" improves anything. The term "interactive"
          implies human interaction.
          <br>
        </blockquote>
        &lt;GH&gt;Yes, but human interaction can be to show things in
        video without being language communication.
        <br>
        What I am aiming at is to clearly indicate that the language
        indications are alternatives to select from. The wording "use in
        each interactive media stream" sounds to me that you MUST use
        all the agreed languages. That is the same mistake that you
        initially blamed the Lang SDP attribute to mean. We need to get
        away from that interpretation. My wording was intended to
        accomplish that, but it might have been too weak. The key word
        is "used" that is intended to mean that if a media stream is
        selected to be used for language communication then the agreed
        language is the one to be used.
        <br>
        So, I prefer my wording, or if you can create something even
        more clear that we are talking about alternatives to select
        from.
        <br>
        <blockquote type="cite">
          <br>
          <blockquote type="cite">
            <br>
             ----Change 2 in 5.2, third paragraph ------
            <br>
             ----old text------
            <br>
             In an answer, 'hlang-send' is the language the answerer
            will send if
            <br>
             using the media for language (which in most cases is
            one of the
            <br>
             languages in the offer's 'hlang-recv'), and
            'hlang-recv' is the
            <br>
             language the answerer expects to receive in the media
            (which in most
            <br>
             cases is one of the languages in the offer's
            'hlang-send').
            <br>
             -----new text----
            <br>
             In an answer, 'hlang-send' is the language the answerer
            will send if
            <br>
             using the media for language (which in most cases is
            one of the
            <br>
             languages in the offer's 'hlang-recv'), and
            'hlang-recv' is the
            <br>
             language the answerer expects to receive in the media
            if
            <br>
             using the media for language (which in most
            <br>
             cases is one of the languages in the offer's
            'hlang-send').
            <br>
             ----end of change 2-------------------------------
            <br>
          </blockquote>
          <br>
          I'm OK adding "if using the media for language" to the second
          clause.
          <br>
          <br>
          <blockquote type="cite">
            <br>
            <br>
             /Gunnar
            <br>
            <br>
             --
            <br>
             -----------------------------------------
            <br>
             Gunnar Hellstrm
            <br>
             Omnitor
            <br>
             <a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
            <br>
            <br>
             _______________________________________________
            <br>
             SLIM mailing list
            <br>
             <a class="moz-txt-link-abbreviated" href="mailto:SLIM@ietf.org">SLIM@ietf.org</a>
            <br>
             <a class="moz-txt-link-freetext" href="https://www.ietf.org/mailman/listinfo/slim">https://www.ietf.org/mailman/listinfo/slim</a>
            <br>
          </blockquote>
          <br>
          <br>
        </blockquote>
        <br>
        --
        <br>
        -----------------------------------------
        <br>
        Gunnar Hellström
        <br>
        Omnitor
        <br>
        <a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
        <br>
        +46 708 204 288
        <br>
      </blockquote>
      <br>
      <br>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------07C723E5D7BDB05DC82DB3FE--


From nobody Fri Oct 13 07:58:47 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1F7E913295C for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 07:58:45 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HL8UBpTM-fhv for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 07:58:42 -0700 (PDT)
Received: from mail-vk0-x231.google.com (mail-vk0-x231.google.com [IPv6:2607:f8b0:400c:c05::231]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 77B3E132026 for <slim@ietf.org>; Fri, 13 Oct 2017 07:58:42 -0700 (PDT)
Received: by mail-vk0-x231.google.com with SMTP id d12so4457912vkf.1 for <slim@ietf.org>; Fri, 13 Oct 2017 07:58:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc; bh=vpF2TD8iK7d3ldLhlxsTew1qlEZ25YGtMY7IQmqioNo=; b=iqVL+7BzegubC6q4HOiyzr+OmRBmrr75O1Si6jCpgf6yMyb2emocoKMylJCIFVj0vc 0dxJ76idetr+MCMe4o06pXJ+Wms790f14pNEGebOVMgMxVBpjLqsRhff27SiOfsGIvkj 8kFtGkg7k4uZhoRclXd2dl1ECTfcMhOBggG/F1z+wVr229HxQlJ2IIauFXPAE/kmjIp/ JHyCuk+k0pyYCUrE1f/HYJ9Og1ciSvR8RJMZyXs6YMBmmhI8fyoZq8eqjXU/y7WPkIB3 DwZboljA2NWUYpLRU6IXUMdsV1xDoO2YkVxOMrS7qm5WbrsArXBC9CdaMY8WXcBpKLIW Qd3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc; bh=vpF2TD8iK7d3ldLhlxsTew1qlEZ25YGtMY7IQmqioNo=; b=lsaYr5DgnTwGoCLjb2hR0JnGLeZn8VeMc2vrnMq/CUDIU6vhaUBXWluLdLGohcp85G AMS4e0SELHxRLv+K18K9hAxORWUcxtvBuldKpWf7Arnmd8kjbq1cpmIUW21/lPJ9qYhJ iLm4ZEhGLEnSGj0BQ25o9A9qmqSjAhttPQdwC8JOIDK+dooZZ4qNBOvf+cS7ebTKMcPk 10FYy7KAHaBblcS42n0tE6ZZarH0dfGPrdQnkC2se7LtFt3OB+32Pdv+Tl5573Mx1Zam kOIMkF7HNC1RrGIrsVy5Zk3AUjFz2j6X7Gld6PfBBfGNhka6hnOkHBf5ZSi7W5nWtI5e yN/A==
X-Gm-Message-State: AMCzsaVBkwC5Z1i5K8EqAyrnkvvMVUBsfHzPsFUkF89gPmazCvr91/3k PD2Cqc0Nmps2Lqzy9diaJ5418ZsShonhRrcvMue7Kg==
X-Google-Smtp-Source: AOwi7QAE9NMHbjD7Uw6UbaoS8kLExAFW2ERJ77PKOqzfNRSjraTm7Bz0y807vnsvy6YVQnoNqw7MlJryE5baGH3VmDg=
X-Received: by 10.31.62.76 with SMTP id l73mr1278204vka.107.1507906721076; Fri, 13 Oct 2017 07:58:41 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Fri, 13 Oct 2017 07:58:20 -0700 (PDT)
In-Reply-To: <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se>
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Fri, 13 Oct 2017 07:58:20 -0700
Message-ID: <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com>
To: =?UTF-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>
Cc: Randall Gellens <rg+ietf@randy.pensive.org>, "slim@ietf.org" <slim@ietf.org>
Content-Type: multipart/alternative; boundary="001a114476c276ebb8055b6ee4b0"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/fcqUxZSYCThG1Hwp3s60ZtDplHc>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 14:58:45 -0000

--001a114476c276ebb8055b6ee4b0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Gunnar said:

"to negotiate which human language is selected for possible use in each
interactive media stream"

[BA] Given that audio can be muted, video can be turned off, etc. aren't
media streams negotiated in SDP always for "possible" use?

On Fri, Oct 13, 2017 at 6:32 AM, Gunnar Hellstr=C3=B6m <
gunnar.hellstrom@omnitor.se> wrote:

> Change 2 is fine and solves part of the problem.
>
> But the current wording at my proposed change 1 still tells me that if I
> offer English text and English voice, it means that I have selected to us=
e
> both, and even stronger if an answer contains English text and English
> voice, then both will be used in the session, exactly as you indicated wa=
s
> the problem with the Lang attribute. We need to get the possibility to
> select among alternatives clearly into the draft so that not next
> generation implementers also say that it is too vague about what it means=
.
>
> The current wording at change one still says that each interactive stream
> is used.
>
> How about:  "to negotiate which
>      human language is selected for* possible *use in each interactive
> media stream."
>
> /Gunnar
>
> Den 2017-10-13 kl. 15:13, skrev Randall Gellens:
>
> I think we've addressed the concerns that existed with earlier versions o=
f
> the draft.
>
> At 2:57 PM +0200 10/13/17, Gunnar Hellstr=C3=B6m wrote:
>
>  Den 2017-10-13 kl. 13:51, skrev Randall Gellens:
>
>  At 12:06 AM +0200 7/29/17, Gunnar Hellstr=C3=B6m wrote:
>
>   We have dealt with this topic before, but rereading the draft indicates
> to me that we still need some tuning of the wording so that it is clear
> that the language indications for the same direction for different media
> are alternatives with no requirements that they need to be provided
> together, so that it is allowed to answer with just one media in each
> direction having language indication.
>
>   Suggested wording changes to make this clear:
>
>   ---Change 1 in 5.2, first paragraph----------------
>   ------old text---------
>   This document defines two media-level attributes starting with
>      'hlang' (short for "human interactive language") to negotiate which
>      human language is selected for use in each interactive media stream.
>   ------------new text--------------------
>   This document defines two media-level attributes starting with
>      'hlang' (short for "human interactive language") to negotiate which
>      human language is selected for use in each media stream used for
> interactive language communication.
>   -------end of change 1-------
>
>
>  I don't see how changing "each interactive media stream" to "each media
> stream used for interactive language communication" improves anything.  T=
he
> term "interactive" implies human interaction.
>
>  <GH>Yes, but human interaction can be to show things in video without
> being language communication.
>  What I am aiming at is to clearly indicate that the language indications
> are alternatives to select from. The wording "use in each interactive med=
ia
> stream" sounds to me that you MUST use all the agreed languages. That is
> the same mistake that you initially blamed the Lang SDP attribute to mean=
.
> We need to get away from that interpretation. My wording was intended to
> accomplish that, but it might have been too weak. The key word is "used"
> that is intended to mean that if a media stream is selected to be used fo=
r
> language communication then the agreed language is the one to be used.
>  So, I prefer my wording, or if you can create something even more clear
> that we are talking about alternatives to select from.
>
>
>
>   ----Change 2 in 5.2, third paragraph ------
>   ----old text------
>     In an answer, 'hlang-send' is the language the answerer will send if
>      using the media for language (which in most cases is one of the
>      languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>      language the answerer expects to receive in the media (which in most
>      cases is one of the languages in the offer's 'hlang-send').
>   -----new text----
>     In an answer, 'hlang-send' is the language the answerer will send if
>      using the media for language (which in most cases is one of the
>      languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>      language the answerer expects to receive in the media if
>      using the media for language (which in most
>      cases is one of the languages in the offer's 'hlang-send').
>   ----end of change 2-------------------------------
>
>
>  I'm OK adding "if using the media for language" to the second clause.
>
>
>
>   /Gunnar
>
>   --
>   -----------------------------------------
>   Gunnar Hellstr=C3=B6m
>   Omnitor
>   gunnar.hellstrom@omnitor.se
>
>   _______________________________________________
>   SLIM mailing list
>   SLIM@ietf.org
>   https://www.ietf.org/mailman/listinfo/slim
>
>
>
>
>  --
>  -----------------------------------------
>  Gunnar Hellstr=C3=B6m
>  Omnitor
>  gunnar.hellstrom@omnitor.se
>  +46 708 204 288
>
>
>
>
> --
> -----------------------------------------
> Gunnar Hellstr=C3=B6m
> Omnitorgunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim
>
>

--001a114476c276ebb8055b6ee4b0
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Gunnar said:=C2=A0<div><br></div><div>&quot;to negotiate w=
hich human language is selected for possible use in each interactive media =
stream&quot;</div><div><br></div><div>[BA] Given that audio can be muted, v=
ideo can be turned off, etc. aren&#39;t media streams negotiated in SDP alw=
ays for &quot;possible&quot; use?</div></div><div class=3D"gmail_extra"><br=
><div class=3D"gmail_quote">On Fri, Oct 13, 2017 at 6:32 AM, Gunnar Hellstr=
=C3=B6m <span dir=3D"ltr">&lt;<a href=3D"mailto:gunnar.hellstrom@omnitor.se=
" target=3D"_blank">gunnar.hellstrom@omnitor.se</a>&gt;</span> wrote:<br><b=
lockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px =
#ccc solid;padding-left:1ex">
 =20
   =20
 =20
  <div text=3D"#000000" bgcolor=3D"#FFFFFF">
    <p>Change 2 is fine and solves part of the problem.</p>
    <p>But the current wording at my proposed change 1 still tells me
      that if I offer English text and English voice, it means that I
      have selected to use both, and even stronger if an answer contains
      English text and English voice, then both will be used in the
      session, exactly as you indicated was the problem with the Lang
      attribute. We need to get the possibility to select among
      alternatives clearly into the draft so that not next generation
      implementers also say that it is too vague about what it means.</p>
    <p>The current wording at change one still says that each
      interactive stream is used. <br>
    </p>
    <p>How about:=C2=A0 &quot;to negotiate which
      <br>
      =C2=A0=C2=A0=C2=A0=C2=A0 human language is selected for<b> possible <=
/b>use in each
      interactive media stream.&quot;</p><span class=3D"HOEnZb"><font color=
=3D"#888888">
    <p>/Gunnar<br>
    </p></font></span><div><div class=3D"h5">
    <br>
    <div class=3D"m_5473125126441079136moz-cite-prefix">Den 2017-10-13 kl. =
15:13, skrev Randall
      Gellens:<br>
    </div>
    <blockquote type=3D"cite">I think we&#39;ve
      addressed the concerns that existed with earlier versions of the
      draft.
      <br>
      <br>
      At 2:57 PM +0200 10/13/17, Gunnar Hellstr=C3=B6m wrote:
      <br>
      <br>
      <blockquote type=3D"cite">=C2=A0Den 2017-10-13 kl. 13:51, skrev Randa=
ll
        Gellens:
        <br>
        <blockquote type=3D"cite">=C2=A0At 12:06 AM +0200 7/29/17, Gunnar
          Hellstr=C3=B6m wrote:
          <br>
          <br>
          <blockquote type=3D"cite">=C2=A0 We have dealt with this topic
            before, but rereading the draft indicates to me that we
            still need some tuning of the wording so that it is clear
            that the language indications for the same direction for
            different media are alternatives with no requirements that
            they need to be provided together, so that it is allowed to
            answer with just one media in each direction having language
            indication.
            <br>
            <br>
            =C2=A0 Suggested wording changes to make this clear:
            <br>
            <br>
            =C2=A0 ---Change 1 in 5.2, first paragraph----------------
            <br>
            =C2=A0 ------old text---------
            <br>
            =C2=A0 This document defines two media-level attributes startin=
g
            with
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 &#39;hlang&#39; (short for &quot;human=
 interactive language&quot;) to
            negotiate which
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 human language is selected for use in =
each interactive
            media stream.
            <br>
            =C2=A0 ------------new text--------------------
            <br>
            =C2=A0 This document defines two media-level attributes startin=
g
            with
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 &#39;hlang&#39; (short for &quot;human=
 interactive language&quot;) to
            negotiate which
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 human language is selected for use in =
each media stream
            used for interactive language communication.
            <br>
            =C2=A0 -------end of change 1-------
            <br>
          </blockquote>
          <br>
          =C2=A0I don&#39;t see how changing &quot;each interactive media s=
tream&quot; to
          &quot;each media stream used for interactive language
          communication&quot; improves anything.=C2=A0 The term &quot;inter=
active&quot;
          implies human interaction.
          <br>
        </blockquote>
        =C2=A0&lt;GH&gt;Yes, but human interaction can be to show things in
        video without being language communication.
        <br>
        =C2=A0What I am aiming at is to clearly indicate that the language
        indications are alternatives to select from. The wording &quot;use =
in
        each interactive media stream&quot; sounds to me that you MUST use
        all the agreed languages. That is the same mistake that you
        initially blamed the Lang SDP attribute to mean. We need to get
        away from that interpretation. My wording was intended to
        accomplish that, but it might have been too weak. The key word
        is &quot;used&quot; that is intended to mean that if a media stream=
 is
        selected to be used for language communication then the agreed
        language is the one to be used.
        <br>
        =C2=A0So, I prefer my wording, or if you can create something even
        more clear that we are talking about alternatives to select
        from.
        <br>
        <blockquote type=3D"cite">
          <br>
          <blockquote type=3D"cite">
            <br>
            =C2=A0 ----Change 2 in 5.2, third paragraph ------
            <br>
            =C2=A0 ----old text------
            <br>
            =C2=A0=C2=A0=C2=A0 In an answer, &#39;hlang-send&#39; is the la=
nguage the answerer
            will send if
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 using the media for language (which in=
 most cases is
            one of the
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 languages in the offer&#39;s &#39;hlan=
g-recv&#39;), and
            &#39;hlang-recv&#39; is the
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 language the answerer expects to recei=
ve in the media
            (which in most
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 cases is one of the languages in the o=
ffer&#39;s
            &#39;hlang-send&#39;).
            <br>
            =C2=A0 -----new text----
            <br>
            =C2=A0=C2=A0=C2=A0 In an answer, &#39;hlang-send&#39; is the la=
nguage the answerer
            will send if
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 using the media for language (which in=
 most cases is
            one of the
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 languages in the offer&#39;s &#39;hlan=
g-recv&#39;), and
            &#39;hlang-recv&#39; is the
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 language the answerer expects to recei=
ve in the media
            if
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 using the media for language (which in=
 most
            <br>
            =C2=A0=C2=A0=C2=A0=C2=A0 cases is one of the languages in the o=
ffer&#39;s
            &#39;hlang-send&#39;).
            <br>
            =C2=A0 ----end of change 2-----------------------------<wbr>--
            <br>
          </blockquote>
          <br>
          =C2=A0I&#39;m OK adding &quot;if using the media for language&quo=
t; to the second
          clause.
          <br>
          <br>
          <blockquote type=3D"cite">
            <br>
            <br>
            =C2=A0 /Gunnar
            <br>
            <br>
            =C2=A0 --
            <br>
            =C2=A0 ------------------------------<wbr>-----------
            <br>
            =C2=A0 Gunnar Hellstr=C3=B6m
            <br>
            =C2=A0 Omnitor
            <br>
            =C2=A0 <a class=3D"m_5473125126441079136moz-txt-link-abbreviate=
d" href=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D"_blank">gunnar.hel=
lstrom@omnitor.se</a>
            <br>
            <br>
            =C2=A0 ______________________________<wbr>_________________
            <br>
            =C2=A0 SLIM mailing list
            <br>
            =C2=A0 <a class=3D"m_5473125126441079136moz-txt-link-abbreviate=
d" href=3D"mailto:SLIM@ietf.org" target=3D"_blank">SLIM@ietf.org</a>
            <br>
            =C2=A0 <a class=3D"m_5473125126441079136moz-txt-link-freetext" =
href=3D"https://www.ietf.org/mailman/listinfo/slim" target=3D"_blank">https=
://www.ietf.org/mailman/<wbr>listinfo/slim</a>
            <br>
          </blockquote>
          <br>
          <br>
        </blockquote>
        <br>
        =C2=A0--
        <br>
        =C2=A0-----------------------------<wbr>------------
        <br>
        =C2=A0Gunnar Hellstr=C3=B6m
        <br>
        =C2=A0Omnitor
        <br>
        =C2=A0<a class=3D"m_5473125126441079136moz-txt-link-abbreviated" hr=
ef=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D"_blank">gunnar.hellstro=
m@omnitor.se</a>
        <br>
        =C2=A0+46 708 204 288
        <br>
      </blockquote>
      <br>
      <br>
    </blockquote>
    <br>
    <pre class=3D"m_5473125126441079136moz-signature" cols=3D"72">--=20
------------------------------<wbr>-----------
Gunnar Hellstr=C3=B6m
Omnitor
<a class=3D"m_5473125126441079136moz-txt-link-abbreviated" href=3D"mailto:g=
unnar.hellstrom@omnitor.se" target=3D"_blank">gunnar.hellstrom@omnitor.se</=
a>
+46 708 204 288</pre>
  </div></div></div>

<br>______________________________<wbr>_________________<br>
SLIM mailing list<br>
<a href=3D"mailto:SLIM@ietf.org">SLIM@ietf.org</a><br>
<a href=3D"https://www.ietf.org/mailman/listinfo/slim" rel=3D"noreferrer" t=
arget=3D"_blank">https://www.ietf.org/mailman/<wbr>listinfo/slim</a><br>
<br></blockquote></div><br></div>

--001a114476c276ebb8055b6ee4b0--


From nobody Fri Oct 13 11:21:52 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 622C513420B for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 11:21:51 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 7iaEm8tW1ROF for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 11:21:48 -0700 (PDT)
Received: from bin-vsp-out-03.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id E0B6813420C for <slim@ietf.org>; Fri, 13 Oct 2017 11:21:47 -0700 (PDT)
X-Halon-ID: 5ae52968-b043-11e7-83a7-0050569116f7
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-03.atm.binero.net (Halon) with ESMTPSA id 5ae52968-b043-11e7-83a7-0050569116f7; Fri, 13 Oct 2017 20:21:40 +0200 (CEST)
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Randall Gellens <rg+ietf@randy.pensive.org>, "slim@ietf.org" <slim@ietf.org>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se>
Date: Fri, 13 Oct 2017 20:21:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------396026F4BA1D3205996955A1"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/sb7HWuoJ6--g9q1lUBxFqtn_KAs>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 18:21:51 -0000

This is a multi-part message in MIME format.
--------------396026F4BA1D3205996955A1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Den 2017-10-13 kl. 16:58, skrev Bernard Aboba:
> Gunnar said:
>
> "to negotiate which human language is selected for possible use in 
> each interactive media stream"
>
> [BA] Given that audio can be muted, video can be turned off, etc. 
> aren't media streams negotiated in SDP always for "possible" use?
<GH>That may be true, but we are not talking about the media flow in 
the streams; we are talking about their use for language. Our draft 
must clearly reflect what the language negotiation result really 
means. To me, "is selected for use in each interactive media stream" 
sounds like a promise that a negotiated language will actually be 
used. That means that if two media streams end up with negotiated 
languages in the same direction, then both must be provided together. 
According to the discussions in the WG, that is not the desired 
result. The desired result is that the users can select between the 
negotiated languages and usually use just one in each direction. We 
introduced "selected" some time ago, but it did not have the right 
effect.

I will try to come up with new wording proposals.
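
In the meantime, a hypothetical offer/answer fragment (my own 
illustration, not text from the draft; the '#' lines are annotations 
rather than SDP syntax, and the ports and payload types are made up) 
shows the intended "alternatives" reading: English is offered for 
language use in both text and audio, and the answer keeps language 
indications on only the one stream actually selected for language:

```text
# Offer: English offered for language use in both streams (alternatives)
m=text 45678 RTP/AVP 98
a=hlang-send:en
a=hlang-recv:en
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en

# Answer: text is the one stream the answerer will use for language
m=text 45678 RTP/AVP 98
a=hlang-send:en
a=hlang-recv:en
m=audio 49170 RTP/AVP 0
```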

/Gunnar

>
> On Fri, Oct 13, 2017 at 6:32 AM, Gunnar Hellström 
> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>
>     Change 2 is fine and solves part of the problem.
>
>     But the current wording at my proposed change 1 still tells me
>     that if I offer English text and English voice, it means that I
>     have selected to use both, and even stronger if an answer contains
>     English text and English voice, then both will be used in the
>     session, exactly as you indicated was the problem with the Lang
>     attribute. We need to get the possibility to select among
>     alternatives clearly into the draft so that not next generation
>     implementers also say that it is too vague about what it means.
>
>     The current wording at change one still says that each interactive
>     stream is used.
>
>     How about:  "to negotiate which
>          human language is selected for*possible *use in each
>     interactive media stream."
>
>     /Gunnar
>
>
>     Den 2017-10-13 kl. 15:13, skrev Randall Gellens:
>>     I think we've addressed the concerns that existed with earlier
>>     versions of the draft.
>>
>>     At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:
>>
>>>      Den 2017-10-13 kl. 13:51, skrev Randall Gellens:
>>>>      At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>>>
>>>>>       We have dealt with this topic before, but rereading the
>>>>>     draft indicates to me that we still need some tuning of the
>>>>>     wording so that it is clear that the language indications for
>>>>>     the same direction for different media are alternatives with
>>>>>     no requirements that they need to be provided together, so
>>>>>     that it is allowed to answer with just one media in each
>>>>>     direction having language indication.
>>>>>
>>>>>       Suggested wording changes to make this clear:
>>>>>
>>>>>       ---Change 1 in 5.2, first paragraph----------------
>>>>>       ------old text---------
>>>>>       This document defines two media-level attributes starting with
>>>>>          'hlang' (short for "human interactive language") to
>>>>>     negotiate which
>>>>>          human language is selected for use in each interactive
>>>>>     media stream.
>>>>>       ------------new text--------------------
>>>>>       This document defines two media-level attributes starting with
>>>>>          'hlang' (short for "human interactive language") to
>>>>>     negotiate which
>>>>>          human language is selected for use in each media stream
>>>>>     used for interactive language communication.
>>>>>       -------end of change 1-------
>>>>
>>>>      I don't see how changing "each interactive media stream" to
>>>>     "each media stream used for interactive language communication"
>>>>     improves anything.  The term "interactive" implies human
>>>>     interaction.
>>>      <GH>Yes, but human interaction can be to show things in video
>>>     without being language communication.
>>>      What I am aiming at is to clearly indicate that the language
>>>     indications are alternatives to select from. The wording "use in
>>>     each interactive media stream" sounds to me that you MUST use
>>>     all the agreed languages. That is the same mistake that you
>>>     initially blamed the Lang SDP attribute to mean. We need to get
>>>     away from that interpretation. My wording was intended to
>>>     accomplish that, but it might have been too weak. The key word
>>>     is "used" that is intended to mean that if a media stream is
>>>     selected to be used for language communication then the agreed
>>>     language is the one to be used.
>>>      So, I prefer my wording, or if you can create something even
>>>     more clear that we are talking about alternatives to select from.
>>>>
>>>>>
>>>>>       ----Change 2 in 5.2, third paragraph ------
>>>>>       ----old text------
>>>>>         In an answer, 'hlang-send' is the language the answerer
>>>>>     will send if
>>>>>          using the media for language (which in most cases is one
>>>>>     of the
>>>>>          languages in the offer's 'hlang-recv'), and 'hlang-recv'
>>>>>     is the
>>>>>          language the answerer expects to receive in the media
>>>>>     (which in most
>>>>>          cases is one of the languages in the offer's 'hlang-send').
>>>>>       -----new text----
>>>>>         In an answer, 'hlang-send' is the language the answerer
>>>>>     will send if
>>>>>          using the media for language (which in most cases is one
>>>>>     of the
>>>>>          languages in the offer's 'hlang-recv'), and 'hlang-recv'
>>>>>     is the
>>>>>          language the answerer expects to receive in the media if
>>>>>          using the media for language (which in most
>>>>>          cases is one of the languages in the offer's 'hlang-send').
>>>>>       ----end of change 2-------------------------------
>>>>
>>>>      I'm OK adding "if using the media for language" to the second
>>>>     clause.
>>>>
>>>>>
>>>>>
>>>>>       /Gunnar
>>>>>
>>>>>       --
>>>>>       -----------------------------------------
>>>>>       Gunnar Hellström
>>>>>       Omnitor
>>>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>>>
>>>>>       _______________________________________________
>>>>>       SLIM mailing list
>>>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>>     https://www.ietf.org/mailman/listinfo/slim
>>>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>>
>>>>
>>>
>>>      --
>>>      -----------------------------------------
>>>      Gunnar Hellström
>>>      Omnitor
>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>      +46 708 204 288
>>
>>
>
>     -- 
>     -----------------------------------------
>     Gunnar Hellström
>     Omnitor
>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>     +46 708 204 288
>
>
>     _______________________________________________
>     SLIM mailing list
>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>     https://www.ietf.org/mailman/listinfo/slim
>     <https://www.ietf.org/mailman/listinfo/slim>
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288
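
The "select among alternatives" behaviour argued for in this thread 
can be sketched in a few lines of Python (my own illustration; the 
function name and data layout are invented and nothing here is 
defined by the draft): given the languages negotiated per media 
stream for one direction, at most one stream is picked for actual 
language use rather than all of them.

```python
def pick_language_media(negotiated, preferred):
    """Pick one (media, language) pair to use for language communication.

    negotiated: dict mapping media type -> list of negotiated language
    tags for one direction; preferred: the user's language tags in
    priority order. The negotiated entries are treated as alternatives,
    so at most one stream is selected rather than all of them.
    """
    for lang in preferred:
        for media, langs in negotiated.items():
            if lang in langs:
                return media, lang
    return None  # no negotiated stream matches a preferred language

# English was negotiated in both text and audio; they are alternatives,
# so only one stream (here the first match, text) is used for language.
print(pick_language_media({"text": ["en"], "audio": ["en"]}, ["en"]))
# -> ('text', 'en')
```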



--------------396026F4BA1D3205996955A1--


From nobody Fri Oct 13 11:32:01 2017
Return-Path: <br@brianrosen.net>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3E07E134216 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 11:32:00 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.589
X-Spam-Level: 
X-Spam-Status: No, score=-2.589 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, T_SPF_PERMERROR=0.01] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=brianrosen-net.20150623.gappssmtp.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id o9koUKsTOQ4F for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 11:31:57 -0700 (PDT)
Received: from mail-qk0-x235.google.com (mail-qk0-x235.google.com [IPv6:2607:f8b0:400d:c09::235]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id BA89F132F8F for <slim@ietf.org>; Fri, 13 Oct 2017 11:31:56 -0700 (PDT)
Received: by mail-qk0-x235.google.com with SMTP id q83so3003557qke.6 for <slim@ietf.org>; Fri, 13 Oct 2017 11:31:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=brianrosen-net.20150623.gappssmtp.com; s=20150623; h=from:message-id:mime-version:subject:date:in-reply-to:cc:to :references; bh=8fdyS2kkhOY3AU8/5aqXXziUGPLHNPzJWFRW4iNZmtQ=; b=tLEUULBxk1JBT1dTBv6nFlk1SqYoziwEhFjN64/0TLWy1vK6Bg3+/+SBE0j8ibj2ja ovyL+0+B7oi9zhO7uHiQqpx1j9FK11KSoNvq2X5UhB+2qcrVCPUPp7ee5R86DshVgt/3 Y4fNlcCHU2/HKGS75JZgrbYCsESqmKknnLLLZ6HkPq+yjJ9/Yer2avJTXH6ZQJdDPxMF Sdi6Z4eeQrKRTMCO6GfG8uwg0m4SId/djHt0ZfGEYFtW7uRk32Wis9d2cZT8U491jdUv iJN/0W09bjxFfJaEHM63f1BdTIckZjET96GIiiEQbhVYEKQML2aUaTIa7Cnd6JCOuwIw YQuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:message-id:mime-version:subject:date :in-reply-to:cc:to:references; bh=8fdyS2kkhOY3AU8/5aqXXziUGPLHNPzJWFRW4iNZmtQ=; b=pfdEi3fySEw9F0UHlimp2OU0Y+qcNwxZagcNH38/AF4JzMVpr1MyQEFr8YcK1GsYrK ZwP52LolcR2lg4DWrZ1S8XH7uUUrNa9C604jP1/0jKbE0/z2smihLewvphsaGJMfCttK zZO/c2SuU/xiJZe4AI55hYlbT+Xkf9VHP+d2hyriJ794lvuFrekJoE5ug0TfVY1sGCcL YIFbPqoIpZz6mJ98w9KNNUfyJpkz9PuLN6LeQqGlXUwoYsPpxDMOOyB3pwOj0PYUDr7w zBp0neheaGBLMALwPBRFBjSOS+6HM6ImlQn7IQYgfIwiImonA5bm6J/HMw2ChTa5dLye LLLA==
X-Gm-Message-State: AMCzsaUis6+8RgO/uuwUhU0bqg7jDqSD1XhDZ2QQYFveNqqcbEIWjP90 xBkqMI8+V3e+MSw/GatKPqm1/g==
X-Google-Smtp-Source: ABhQp+Rbxs93UwfqF4GuYvaxB5+1iKoNg6R4uK2Csqd6fF+/9vOA/vJ008HYmv8HLHzo/MtsXlzhqg==
X-Received: by 10.233.232.8 with SMTP id a8mr3403343qkg.263.1507919515749; Fri, 13 Oct 2017 11:31:55 -0700 (PDT)
Received: from [10.33.193.3] (neustar-sthide-nat1.neustar.biz. [156.154.81.54]) by smtp.gmail.com with ESMTPSA id z187sm905538qke.0.2017.10.13.11.31.53 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 13 Oct 2017 11:31:54 -0700 (PDT)
From: Brian Rosen <br@brianrosen.net>
Message-Id: <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net>
Content-Type: multipart/alternative; boundary="Apple-Mail=_2B5EC9A8-03DF-471E-8FBA-D961983499DD"
Mime-Version: 1.0 (Mac OS X Mail 10.3 \(3273\))
Date: Fri, 13 Oct 2017 14:31:51 -0400
In-Reply-To: <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se>
Cc: Bernard Aboba <bernard.aboba@gmail.com>, "slim@ietf.org" <slim@ietf.org>,  Randall Gellens <rg+ietf@randy.pensive.org>
To: =?utf-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se>
X-Mailer: Apple Mail (2.3273)
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/fqngSQWrHwwhI9meIBrPg1Ylky8>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 18:32:00 -0000

--Apple-Mail=_2B5EC9A8-03DF-471E-8FBA-D961983499DD
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
	charset=utf-8

Gunnar

Protocol documents are for engineers to write software/create hardware.
They don’t try to control user behavior.  I think in this case, you are
trying to get the document to describe user behavior and not
implementation software/hardware.

Although we do sometimes describe how we expect the protocol to be used
by people, that is not normative, and we should be careful to not
proscribe behavior.

Brian

> On Oct 13, 2017, at 2:21 PM, Gunnar Hellström
> <gunnar.hellstrom@omnitor.se> wrote:
> 
> On 2017-10-13 at 16:58, Bernard Aboba wrote:
>> Gunnar said:
>> 
>> "to negotiate which human language is selected for possible use in
>> each interactive media stream"
>> 
>> [BA] Given that audio can be muted, video can be turned off, etc.,
>> aren't media streams negotiated in SDP always for "possible" use?
> <GH>That may be true, but we are not talking about the media flow in
> the streams. We are talking about the use for language. Our draft must
> clearly reflect what the language negotiation result really means. To
> me, "is selected for use in each interactive media stream" sounds like
> a promise that a negotiated language will actually be used. That means
> that if two media streams end up with negotiated languages in the same
> direction, then both must be provided together. According to the
> discussions in the WG, that is not the desired result. The desired
> result should be that the users can select between the negotiated
> languages and usually use just one in each direction.  We introduced
> "selected" some time ago, but it did not have the right effect.
> 
> I will try to come up with new wording proposals.
> 
> /Gunnar
> 
>> 
>> On Fri, Oct 13, 2017 at 6:32 AM, Gunnar Hellström
>> <gunnar.hellstrom@omnitor.se> wrote:
>> Change 2 is fine and solves part of the problem.
>> 
>> But the current wording at my proposed change 1 still tells me that
>> if I offer English text and English voice, I have selected to use
>> both; and, even more strongly, if an answer contains English text and
>> English voice, then both will be used in the session, exactly as you
>> indicated was the problem with the Lang attribute. We need to get the
>> possibility to select among alternatives clearly into the draft, so
>> that the next generation of implementers does not also say that it is
>> too vague about what it means.
>> 
>> The current wording at change one still says that each interactive
>> stream is used.
>> 
>> How about:  "to negotiate which
>>      human language is selected for possible use in each interactive
>>      media stream."
>> 
>> /Gunnar
>> 
>> 
>> On 2017-10-13 at 15:13, Randall Gellens wrote:
>>> I think we've addressed the concerns that existed with earlier
>>> versions of the draft.
>>> 
>>> At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:
>>> 
>>>>  On 2017-10-13 at 13:51, Randall Gellens wrote:
>>>>>  At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>>>> 
>>>>>>   We have dealt with this topic before, but rereading the draft
>>>>>> indicates to me that we still need some tuning of the wording so
>>>>>> that it is clear that the language indications for the same
>>>>>> direction in different media are alternatives, with no
>>>>>> requirement that they be provided together, so that it is allowed
>>>>>> to answer with just one media in each direction having a language
>>>>>> indication.
>>>>>> 
>>>>>>   Suggested wording changes to make this clear:
>>>>>> 
>>>>>>   ---Change 1 in 5.2, first paragraph----------------
>>>>>>   ------old text---------
>>>>>>   This document defines two media-level attributes starting with
>>>>>>      'hlang' (short for "human interactive language") to
>>>>>>      negotiate which human language is selected for use in each
>>>>>>      interactive media stream.
>>>>>>   ------------new text--------------------
>>>>>>   This document defines two media-level attributes starting with
>>>>>>      'hlang' (short for "human interactive language") to
>>>>>>      negotiate which human language is selected for use in each
>>>>>>      media stream used for interactive language communication.
>>>>>>   -------end of change 1-------
>>>>> 
>>>>>  I don't see how changing "each interactive media stream" to "each
>>>>> media stream used for interactive language communication" improves
>>>>> anything.  The term "interactive" implies human interaction.
>>>>  <GH>Yes, but human interaction can be showing things in video
>>>> without being language communication.
>>>>  What I am aiming at is to clearly indicate that the language
>>>> indications are alternatives to select from. The wording "use in
>>>> each interactive media stream" sounds to me as if you MUST use all
>>>> the agreed languages. That is the same mistake that you initially
>>>> blamed the Lang SDP attribute for. We need to get away from that
>>>> interpretation. My wording was intended to accomplish that, but it
>>>> might have been too weak. The key word is "used", which is intended
>>>> to mean that if a media stream is selected to be used for language
>>>> communication, then the agreed language is the one to be used.
>>>>  So, I prefer my wording, unless you can create something even
>>>> clearer about these being alternatives to select from.
>>>>> 
>>>>>> 
>>>>>>   ----Change 2 in 5.2, third paragraph ------
>>>>>>   ----old text------
>>>>>>     In an answer, 'hlang-send' is the language the answerer will
>>>>>>      send if using the media for language (which in most cases is
>>>>>>      one of the languages in the offer's 'hlang-recv'), and
>>>>>>      'hlang-recv' is the language the answerer expects to receive
>>>>>>      in the media (which in most cases is one of the languages in
>>>>>>      the offer's 'hlang-send').
>>>>>>   -----new text----
>>>>>>     In an answer, 'hlang-send' is the language the answerer will
>>>>>>      send if using the media for language (which in most cases is
>>>>>>      one of the languages in the offer's 'hlang-recv'), and
>>>>>>      'hlang-recv' is the language the answerer expects to receive
>>>>>>      in the media if using the media for language (which in most
>>>>>>      cases is one of the languages in the offer's 'hlang-send').
>>>>>>   ----end of change 2-------------------------------
>>>>> 
>>>>>  I'm OK adding "if using the media for language" to the second
>>>>> clause.
>>>>> 
>>>>>> 
>>>>>> 
>>>>>>   /Gunnar
>>>>>> 
>>>>>>   --
>>>>>>   -----------------------------------------
>>>>>>   Gunnar Hellström
>>>>>>   Omnitor
>>>>>>   gunnar.hellstrom@omnitor.se
>>>>>> 
>>>>>>   _______________________________________________
>>>>>>   SLIM mailing list
>>>>>>   SLIM@ietf.org
>>>>>>   https://www.ietf.org/mailman/listinfo/slim
>>>>> 
>>>>> 
>>>> 
>>>>  --
>>>>  -----------------------------------------
>>>>  Gunnar Hellström
>>>>  Omnitor
>>>>  gunnar.hellstrom@omnitor.se
>>>>  +46 708 204 288
>>> 
>>> 
>> 
>> --
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>> +46 708 204 288
>> 
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org
>> https://www.ietf.org/mailman/listinfo/slim
>> 
>> 
> 
> --
> -----------------------------------------
> Gunnar Hellström
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim


--Apple-Mail=_2B5EC9A8-03DF-471E-8FBA-D961983499DD
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html;
	charset=utf-8

<html><head><meta http-equiv=3D"Content-Type" content=3D"text/html =
charset=3Dutf-8"></head><body style=3D"word-wrap: break-word; =
-webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" =
class=3D"">Gunnar<div class=3D""><br class=3D""></div><div =
class=3D"">Protocol documents are for engineers to write software/create =
hardware. &nbsp;They don=E2=80=99t try to control user behavior. &nbsp;I =
think in this case, you are trying to get the document to describe user =
behavior and not implementation software/hardware.</div><div =
class=3D""><br class=3D""></div><div class=3D"">Although we do sometimes =
describe how we expect the protocol to be used by people, that is not =
normative, and we should be careful to not proscribe behavior.</div><div =
class=3D""><br class=3D""></div><div class=3D"">Brian</div><div =
class=3D""><br class=3D""></div><div class=3D""><div><blockquote =
type=3D"cite" class=3D""><div class=3D"">On Oct 13, 2017, at 2:21 PM, =
Gunnar Hellstr=C3=B6m &lt;<a href=3D"mailto:gunnar.hellstrom@omnitor.se" =
class=3D"">gunnar.hellstrom@omnitor.se</a>&gt; wrote:</div><br =
class=3D"Apple-interchange-newline"><div class=3D""><span =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255); float: none; display: inline =
!important;" class=3D"">Den 2017-10-13 kl. 16:58, skrev Bernard =
Aboba:</span><br style=3D"font-family: Helvetica; font-size: 12px; =
font-style: normal; font-variant-caps: normal; font-weight: normal; =
letter-spacing: normal; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; word-spacing: 0px; =
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);" =
class=3D""><blockquote type=3D"cite" =
cite=3D"mid:CAOW+2dubW_Pc-JKtTOZjSGeCWw=3D3bSwd1tqvObSwf4fyzs4Eig@mail.gma=
il.com" style=3D"font-family: Helvetica; font-size: 12px; font-style: =
normal; font-variant-caps: normal; font-weight: normal; letter-spacing: =
normal; orphans: auto; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; widows: auto; word-spacing: =
0px; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><div dir=3D"ltr" =
class=3D"">Gunnar said:&nbsp;<div class=3D""><br class=3D""></div><div =
class=3D"">"to negotiate which human language is selected for possible =
use in each interactive media stream"</div><div class=3D""><br =
class=3D""></div><div class=3D"">[BA] Given that audio can be muted, =
video can be turned off, etc. aren't media streams negotiated in SDP =
always for "possible" use?</div></div></blockquote><span =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255); float: none; display: inline =
!important;" class=3D"">&lt;GH&gt;That may be true, but we are not =
talking about the media flow in the streams. We are talking about the =
use for language. Our draft must reflect clearly what the language =
negotiation result really means. To me,&nbsp; "is selected for use in =
each interactive media stream" sounds as a promise that a negotiated =
language will actually be used. That means that if two media streams end =
up with negotiated languages in the same direction, then both must be =
provided together. According to the discussions in the WG, that is not =
the desired result. The desired result should be that the users can =
select between use of the negotiated languages and usually use just one =
in each direction.&nbsp; We introduced "selected" some time ago, but it =
did not have the right effect. &nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span></span><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><span =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255); float: none; display: inline =
!important;" class=3D"">I will try to come up with new wording =
proposals.</span><br style=3D"font-family: Helvetica; font-size: 12px; =
font-style: normal; font-variant-caps: normal; font-weight: normal; =
letter-spacing: normal; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; word-spacing: 0px; =
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);" =
class=3D""><br style=3D"font-family: Helvetica; font-size: 12px; =
font-style: normal; font-variant-caps: normal; font-weight: normal; =
letter-spacing: normal; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; word-spacing: 0px; =
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);" =
class=3D""><span style=3D"font-family: Helvetica; font-size: 12px; =
font-style: normal; font-variant-caps: normal; font-weight: normal; =
letter-spacing: normal; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; word-spacing: 0px; =
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255); =
float: none; display: inline !important;" class=3D"">/Gunnar</span><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><span =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255); float: none; display: inline =
!important;" class=3D"">&nbsp;</span><br style=3D"font-family: =
Helvetica; font-size: 12px; font-style: normal; font-variant-caps: =
normal; font-weight: normal; letter-spacing: normal; text-align: start; =
text-indent: 0px; text-transform: none; white-space: normal; =
word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: =
rgb(255, 255, 255);" class=3D""><blockquote type=3D"cite" =
cite=3D"mid:CAOW+2dubW_Pc-JKtTOZjSGeCWw=3D3bSwd1tqvObSwf4fyzs4Eig@mail.gma=
il.com" style=3D"font-family: Helvetica; font-size: 12px; font-style: =
normal; font-variant-caps: normal; font-weight: normal; letter-spacing: =
normal; orphans: auto; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; widows: auto; word-spacing: =
0px; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><div =
class=3D"gmail_extra"><br class=3D""><div class=3D"gmail_quote">On Fri, =
Oct 13, 2017 at 6:32 AM, Gunnar Hellstr=C3=B6m<span =
class=3D"Apple-converted-space">&nbsp;</span><span dir=3D"ltr" =
class=3D"">&lt;<a href=3D"mailto:gunnar.hellstrom@omnitor.se" =
target=3D"_blank" moz-do-not-send=3D"true" =
class=3D"">gunnar.hellstrom@omnitor.se</a>&gt;</span><span =
class=3D"Apple-converted-space">&nbsp;</span>wrote:<br =
class=3D""><blockquote class=3D"gmail_quote" style=3D"margin: 0px 0px =
0px 0.8ex; border-left-width: 1px; border-left-style: solid; =
border-left-color: rgb(204, 204, 204); padding-left: 1ex;"><div =
text=3D"#000000" bgcolor=3D"#FFFFFF" class=3D""><p class=3D"">Change 2 =
is fine and solves part of the problem.</p><p class=3D"">But the current =
wording at my proposed change 1 still tells me that if I offer English =
text and English voice, it means that I have chosen to use both; even =
more strongly, if an answer contains English text and English voice, then =
both will be used in the session, exactly as you indicated was the =
problem with the Lang attribute. We need to get the possibility to =
select among alternatives clearly into the draft so that the next =
generation of implementers does not also say that it is too vague about what it =
means.</p><p class=3D"">The current wording at change one still says =
that each interactive stream is used.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""></p><p =
class=3D"">How about:&nbsp; "to negotiate which<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>human language is selected =
for<b class=3D""><span =
class=3D"Apple-converted-space">&nbsp;</span>possible<span =
class=3D"Apple-converted-space">&nbsp;</span></b>use in each interactive =
media stream."</p><span class=3D"HOEnZb"><font color=3D"#888888" =
class=3D""><p class=3D"">/Gunnar<br class=3D""></p></font></span><div =
class=3D""><div class=3D"h5"><br class=3D""><div =
class=3D"m_5473125126441079136moz-cite-prefix">Den 2017-10-13 kl. 15:13, =
skrev Randall Gellens:<br class=3D""></div><blockquote type=3D"cite" =
class=3D"">I think we've addressed the concerns that existed with =
earlier versions of the draft.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><br =
class=3D"">At 2:57 PM +0200 10/13/17, Gunnar Hellstr=C3=B6m wrote:<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><br =
class=3D""><blockquote type=3D"cite" class=3D"">&nbsp;Den 2017-10-13 kl. =
13:51, skrev Randall Gellens:<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><blockquote =
type=3D"cite" class=3D"">&nbsp;At 12:06 AM +0200 7/29/17, Gunnar =
Hellstr=C3=B6m wrote:<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><br =
class=3D""><blockquote type=3D"cite" class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>We have dealt with this =
topic before, but rereading the draft indicates to me that we still need =
some tuning of the wording so that it is clear that the language =
indications for the same direction for different media are alternatives =
with no requirements that they need to be provided together, so that it =
is allowed to answer with just one media in each direction having =
language indication.<span class=3D"Apple-converted-space">&nbsp;</span><br=
 class=3D""><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>Suggested wording changes =
to make this clear:<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D""><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>---Change 1 in 5.2, first =
paragraph----------------<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>------old =
text---------<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;<span class=3D"Apple-converted-space">&nbsp;</span>This =
document defines two media-level attributes starting with<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>'hlang' (short for "human =
interactive language") to negotiate which<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>human language is selected =
for use in each interactive media stream.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>------------new =
text--------------------<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>This document defines two =
media-level attributes starting with<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>'hlang' (short for "human =
interactive language") to negotiate which<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>human language is selected =
for use in each media stream used for interactive language =
communication.<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>-------end of change =
1-------<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D""></blockquote><br class=3D"">&nbsp;I don't see how changing =
"each interactive media stream" to "each media stream used for =
interactive language communication" improves anything.&nbsp; The term =
"interactive" implies human interaction.<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D""></blockquote>&nbsp;&lt;GH&gt;Yes, but human interaction can =
consist of showing things in video without being language communication.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;What I =
am aiming at is to clearly indicate that the language indications are =
alternatives to select from. The wording "use in each interactive media =
stream" sounds to me as if you MUST use all the agreed languages. That is =
the same misinterpretation that you initially attributed to the Lang SDP =
attribute. We need to get away from that interpretation. My wording was =
intended to accomplish that, but it might have been too weak. The key =
word is "used" that is intended to mean that if a media stream is =
selected to be used for language communication then the agreed language =
is the one to be used.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;So, I =
prefer my wording, unless you can create something that makes it even =
clearer that we are talking about alternatives to select from.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><blockquote =
type=3D"cite" class=3D""><br class=3D""><blockquote type=3D"cite" =
class=3D""><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>----Change 2 in 5.2, third =
paragraph ------<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>----old text------<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>In an answer, 'hlang-send' =
is the language the answerer will send if<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>using the media for =
language (which in most cases is one of the<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>languages in the offer's =
'hlang-recv'), and 'hlang-recv' is the<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>language the answerer =
expects to receive in the media (which in most<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>cases is one of the =
languages in the offer's 'hlang-send').<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>-----new text----<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>In an answer, 'hlang-send' =
is the language the answerer will send if<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>using the media for =
language (which in most cases is one of the<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>languages in the offer's =
'hlang-recv'), and 'hlang-recv' is the<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>language the answerer =
expects to receive in the media if<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>using the media for =
language (which in most<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;&nbsp;&nbsp;&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>cases is one of the =
languages in the offer's 'hlang-send').<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>----end of change =
2-----------------------------<wbr class=3D"">--<span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D""></blockquote><br class=3D"">&nbsp;I'm OK adding "if using the =
media for language" to the second clause.<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><br =
class=3D""><blockquote type=3D"cite" class=3D""><br class=3D""><br =
class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>/Gunnar<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><br =
class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>--<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>-----------------------------=
-<wbr class=3D"">-----------<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>Gunnar Hellstr=C3=B6m<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>Omnitor<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span><a =
class=3D"m_5473125126441079136moz-txt-link-abbreviated" =
href=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D"_blank" =
moz-do-not-send=3D"true">gunnar.hellstrom@omnitor.se</a><span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D""><br =
class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>_____________________________=
_<wbr class=3D"">_________________<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span>SLIM mailing list<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span><a =
class=3D"m_5473125126441079136moz-txt-link-abbreviated" =
href=3D"mailto:SLIM@ietf.org" target=3D"_blank" =
moz-do-not-send=3D"true">SLIM@ietf.org</a><span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<span =
class=3D"Apple-converted-space">&nbsp;</span><a =
class=3D"m_5473125126441079136moz-txt-link-freetext" =
href=3D"https://www.ietf.org/mailman/listinfo/slim" target=3D"_blank" =
moz-do-not-send=3D"true">https://www.ietf.org/mailman/<wbr =
class=3D"">listinfo/slim</a><span =
class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D""></blockquote><br class=3D""><br class=3D""></blockquote><br =
class=3D"">&nbsp;--<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;-----------------------------<wbr =
class=3D"">------------<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;Gunnar =
Hellstr=C3=B6m<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D"">&nbsp;Omnitor<span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;<a =
class=3D"m_5473125126441079136moz-txt-link-abbreviated" =
href=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D"_blank" =
moz-do-not-send=3D"true">gunnar.hellstrom@omnitor.se</a><span =
class=3D"Apple-converted-space">&nbsp;</span><br class=3D"">&nbsp;+46 =
708 204 288<span class=3D"Apple-converted-space">&nbsp;</span><br =
class=3D""></blockquote><br class=3D""><br class=3D""></blockquote><br =
class=3D""><pre class=3D"m_5473125126441079136moz-signature" =
cols=3D"72">--=20
------------------------------<wbr class=3D"">-----------
Gunnar Hellstr=C3=B6m
Omnitor
<a class=3D"m_5473125126441079136moz-txt-link-abbreviated" =
href=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D"_blank" =
moz-do-not-send=3D"true">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre></div></div></div><br =
class=3D"">______________________________<wbr =
class=3D"">_________________<br class=3D"">SLIM mailing list<br =
class=3D""><a href=3D"mailto:SLIM@ietf.org" moz-do-not-send=3D"true" =
class=3D"">SLIM@ietf.org</a><br class=3D""><a =
href=3D"https://www.ietf.org/mailman/listinfo/slim" rel=3D"noreferrer" =
target=3D"_blank" moz-do-not-send=3D"true" =
class=3D"">https://www.ietf.org/mailman/<wbr =
class=3D"">listinfo/slim</a><br class=3D""><br =
class=3D""></blockquote></div><br class=3D""></div></blockquote><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><pre =
class=3D"moz-signature" cols=3D"72" style=3D"font-size: 12px; =
font-style: normal; font-variant-caps: normal; font-weight: normal; =
letter-spacing: normal; text-align: start; text-indent: 0px; =
text-transform: none; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);">--=20
-----------------------------------------
Gunnar Hellstr=C3=B6m
Omnitor
<a class=3D"moz-txt-link-abbreviated" =
href=3D"mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a=
>
+46 708 204 288</pre><span style=3D"font-family: Helvetica; font-size: =
12px; font-style: normal; font-variant-caps: normal; font-weight: =
normal; letter-spacing: normal; text-align: start; text-indent: 0px; =
text-transform: none; white-space: normal; word-spacing: 0px; =
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255); =
float: none; display: inline !important;" =
class=3D"">_______________________________________________</span><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><span =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255); float: none; display: inline =
!important;" class=3D"">SLIM mailing list</span><br style=3D"font-family: =
Helvetica; font-size: 12px; font-style: normal; font-variant-caps: =
normal; font-weight: normal; letter-spacing: normal; text-align: start; =
text-indent: 0px; text-transform: none; white-space: normal; =
word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: =
rgb(255, 255, 255);" class=3D""><a href=3D"mailto:SLIM@ietf.org" =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
orphans: auto; text-align: start; text-indent: 0px; text-transform: =
none; white-space: normal; widows: auto; word-spacing: 0px; =
-webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D"">SLIM@ietf.org</a><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" class=3D""><a =
href=3D"https://www.ietf.org/mailman/listinfo/slim" style=3D"font-family: =
Helvetica; font-size: 12px; font-style: normal; font-variant-caps: =
normal; font-weight: normal; letter-spacing: normal; orphans: auto; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; widows: auto; word-spacing: 0px; -webkit-text-size-adjust: auto; =
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);" =
class=3D"">https://www.ietf.org/mailman/listinfo/slim</a><br =
style=3D"font-family: Helvetica; font-size: 12px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
background-color: rgb(255, 255, 255);" =
class=3D""></div></blockquote></div><br class=3D""></div></body></html>=

--Apple-Mail=_2B5EC9A8-03DF-471E-8FBA-D961983499DD--


From nobody Fri Oct 13 13:22:47 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3AD231331BA for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:22:46 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id j4lEm8aVdOvz for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:22:44 -0700 (PDT)
Received: from mail-ua0-x230.google.com (mail-ua0-x230.google.com [IPv6:2607:f8b0:400c:c08::230]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 35DD2133039 for <slim@ietf.org>; Fri, 13 Oct 2017 13:22:44 -0700 (PDT)
Received: by mail-ua0-x230.google.com with SMTP id z4so6100406uaz.5 for <slim@ietf.org>; Fri, 13 Oct 2017 13:22:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:from:date:message-id:subject:to; bh=pTWjbk1C51NiKu3M2BjCWiiQ1cveX9y92mmXGikbxlE=; b=REOSKy1cR+IbbRlRFBSwsXInekJN4GdnMYly/TGh63iK+jq7ssgM9OuWfxqv0TfbIx Wz4hJfTjSVK7YoeA5uh3GXMGjlJGaxKYsmTsaMUFMwu35r62wItbddLt4qbrVKmxMIdr 7Iuwj6qq36WiZwQIXwKV4dlLTNqm1wuu0wJaHwQ87laUvBNK0VfShiRDkg4n88awWfkM v8cttxSKjkIV3xXV5RzfyBzfkFC0jkmvdTLhlwJHXNIu+eOy/+YGgelXKAi7iXdO2FPU 4fU3mt7IsRFaWOwelCsDj0Tm6gWyG/MfBRhpQDhjXhVbr/hpGfSmsfc7gw07KroXVyF6 3zLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:from:date:message-id:subject:to; bh=pTWjbk1C51NiKu3M2BjCWiiQ1cveX9y92mmXGikbxlE=; b=eY3/6w6K675g7wV8uf2i3W/IJtXmL6FKKohmNuZ57Rcl9WPtjjPEcS4gashCKx+6sC oO7HxjCwlK8KWywzRloTzVZ1X7rM+byCaqq1upaR3LF1n4W0jABGw9wXgOMJKb4BfmD/ d3AKEIJUTtpSEVLYOnx/qKZX2K4wXDucFCBJ+DnNouBCMi28WUTxzIjLKCT4rFeCuof+ XxU1FlMJeLVGJf8ZwbxTyHHi0+0XVwx5FZ5Wv4j8F1AEMFQ1DVDubgfPEnntyAZBiZds 2h0fEPrMbptAdzJfe6PS1ARQnlO6wPrB0BQjKeMqRyulII96h/T95J+39R34TSP5FOof rkFQ==
X-Gm-Message-State: AMCzsaXReaLhqioQxv0Zw/c37QM852Cli01hcOFOPRXg/tT6oMTcf1G7 p1kuHITkHIbeRsabQrn2b1R1khBe/JaWUx9wCCDquCIj
X-Google-Smtp-Source: AOwi7QBv+5+HGHiKl58gGjxkBPoJzxQr3FA05NTndwmiMDEfz8qEInzTVd86HRH/yINywfxx4iOKghR3UNJYFrPKspw=
X-Received: by 10.176.81.68 with SMTP id f4mr1937846uaa.52.1507926162702; Fri, 13 Oct 2017 13:22:42 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Fri, 13 Oct 2017 13:22:22 -0700 (PDT)
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Fri, 13 Oct 2017 13:22:22 -0700
Message-ID: <CAOW+2dvYsCXY-eSNBm5U5gWhWzc9Q2a_bx+PkA3bG74eBXsQqg@mail.gmail.com>
To: slim@ietf.org
Content-Type: multipart/alternative; boundary="94eb2c1913184685f4055b736b6e"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/geuNd0gA5X0MYE3ILrNCxSg_-ls>
Subject: [Slim] Issue 41: Allow sign languages in the text stream for text notations of sign language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 20:22:46 -0000

--94eb2c1913184685f4055b736b6e
Content-Type: text/plain; charset="UTF-8"

Issue 41 (see: https://trac.ietf.org/trac/slim/ticket/41 ) relates to the
potential future use of sign language within a text stream, such as Formal
Signwriting, described here:
https://tools.ietf.org/html/draft-slevinski-formal-signwriting

In the Issue, the following change is suggested:

Therefore, I suggest this minimal change:
---------------------------old text 1 in
5.4-------------------------------------
the behavior when specifying a spoken/written language tag for a video
media stream, or a signed language tag for an audio or text media stream,
is not defined.
--------------------------new text---------------------------------

the behavior when specifying a spoken/written language tag for a video
media stream, or a signed language tag for an audio media stream, is not
defined.
--------------------------end of change 1---------------------------

Since draft-slevinski has not been widely implemented, it probably cannot
be assumed that negotiation of a signed language tag for a text media
stream implies use of this (or any other) sign language textual encoding
mechanism.  So it would not be correct to imply that use of a signed
language tag for a text media stream has a well-defined meaning.

One way to resolve this would be to keep the existing text but add a
sentence, such as:

"Note that mechanisms for encoding signed language in a text media stream
have been proposed
[draft-slevinski] but are not yet well developed enough for incorporation
within the negotiation mechanism described in this document."

Would such a resolution make sense?
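
To make the combination concrete, the case under discussion is a signed
language tag negotiated on a text stream, e.g. (a hypothetical SDP
fragment; 'ase' is the BCP 47 tag for American Sign Language, and the
port and payload type are illustrative):

```
m=text 45020 RTP/AVP 103
a=hlang-recv:ase
```

Under the old text this combination is explicitly undefined; the proposed
change would simply leave it unaddressed.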

--94eb2c1913184685f4055b736b6e--


From nobody Fri Oct 13 13:27:16 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B5BB71331DD for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:27:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.1
X-Spam-Level: 
X-Spam-Status: No, score=-0.1 tagged_above=-999 required=5 tests=[BAYES_20=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id yLRmy-UB6GUw for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:27:14 -0700 (PDT)
Received: from mail-ua0-x235.google.com (mail-ua0-x235.google.com [IPv6:2607:f8b0:400c:c08::235]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 4C42A133210 for <slim@ietf.org>; Fri, 13 Oct 2017 13:27:13 -0700 (PDT)
Received: by mail-ua0-x235.google.com with SMTP id s41so6088931uab.10 for <slim@ietf.org>; Fri, 13 Oct 2017 13:27:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:from:date:message-id:subject:to; bh=dXQIdntaRbevPm/HR8yqEMBR2o9DehXfkijKPfVSIFI=; b=oIv+nryxA5HZ5zrj9/Gex3WW/0mSpFu4ELdMeKnc0tcBQRqnyEFF1E4WtMwC0Vf4L8 MKBJFBHcImKpabsOGmHQsV7bvCbxsHK0JwNuxoDKSpGUe7Y7Ni0Ccvt1Y0hRk0HIBMUn 3qm4nxS8QzvA3qXonzOc3GVXt9yOAquhkUSQIFF80QInKFQCQ6UNKR7OP2B+Azv56MG6 J4+ZzjwTyjeI7hBMRI/NLtWRRxtfaukH4W5lVTLm+pwCNEKUVIGAdg61+2O6eRtZBa+X D1eVMc3a8o1C9De6NmJYnI7rFfpfqYMM5GVPz0ENyXZEu5BSXeY6M2y6D8LpUPhmZ3ZO em6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:from:date:message-id:subject:to; bh=dXQIdntaRbevPm/HR8yqEMBR2o9DehXfkijKPfVSIFI=; b=g4TYuwii1lHtW7Y++t1tAImoy2SZOn/TGo88q0XS41sxDi9PZ/qH5yALUPCsen0CwG kn8kkUUXZoa9TxMkobmRyuwoX/igCb+UXZa+O4wSZV7wbKnbnGTYtiG52aoi6ivXchJL +D2XV/bw2ojHNcB+qb20CrAt+Ne+4Vx9spmJ3+0e8RCtoR9jJQmVIexQLe1GnyK/lXSt ZR4kktI7lV0UxT4Fu/oPpDIZvTDpuXBht+ck/px+SJXQdVZa95meOYF80SjgYtzTnXlq Gfo3LHXOfig+MQ6ta7tM0UGqXw4bLSVfxoeBr6QQC+Q7VB9plL0/+SsyvnuAUEkWlPCs mA+Q==
X-Gm-Message-State: AMCzsaW2Tnopy6Z+RW8ahu187nTSV3Rz79o9lGv4iiOWDY8cjTd+M699 3DVUVm4NT/mtvBVlorakly3Of9ChekchyfGnMSxFGavs
X-Google-Smtp-Source: ABhQp+TH0wCLfVjnLzMUyGY4ziBQACa4SzXhK3X6o3r2Ao7r5mOeEeNSVR0CQXKQiYMyfoFaIiTqnR8qxozuEA/t1FI=
X-Received: by 10.176.74.211 with SMTP id t19mr2236279uae.83.1507926431969; Fri, 13 Oct 2017 13:27:11 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Fri, 13 Oct 2017 13:26:51 -0700 (PDT)
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Fri, 13 Oct 2017 13:26:51 -0700
Message-ID: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com>
To: slim@ietf.org
Content-Type: multipart/alternative; boundary="f403045f8af2533603055b737b81"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/CRzWl920_PjccmSMrAeFG6rz-Q4>
Subject: [Slim] Issue 47
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 20:27:16 -0000

--f403045f8af2533603055b737b81
Content-Type: text/plain; charset="UTF-8"

Issue 47 ( https://trac.ietf.org/trac/slim/ticket/47 ) suggests the
following change to the text:

Change:

""Another example would be a user who is able to speak but is deaf or
hard-of-hearing and requires a voice stream plus a text stream.""

To:

"Another example would be a user who is able to speak but is deaf or
hard-of-hearing and requires to send spoken language in a voice stream and
receive written language in a text stream."


Can we accept the proposed resolution?

--f403045f8af2533603055b737b81
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Issue 47 ( <a href=3D"https://trac.ietf.org/trac/slim/tick=
et/47">https://trac.ietf.org/trac/slim/ticket/47</a> ) suggest the followin=
g change to the text:=C2=A0<div><br></div><div>Change:=C2=A0</div><div><br>=
</div><div>&quot;<span style=3D"background-color:rgb(255,255,221);color:rgb=
(0,0,0);font-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,Helvetica=
,sans-serif;font-size:13px">&quot;Another example would be a user who is ab=
le to spea</span><span style=3D"background-color:rgb(255,255,221);color:rgb=
(0,0,0);font-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,Helvetica=
,sans-serif;font-size:13px">k but is deaf or hard-of-hearing and requires a=
 voice stream plus=C2=A0</span><span style=3D"font-size:13px;background-col=
or:rgb(255,255,221);color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstr=
eam Vera Sans&quot;,Helvetica,sans-serif">a text stream.&quot;</span>&quot;=
</div><div><br></div><div>To:=C2=A0</div><div><br></div><div><p style=3D"co=
lor:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,He=
lvetica,sans-serif;font-size:13px;background-color:rgb(255,255,221)">&quot;=
Another example would be a user who is able to speak but is deaf or hard-of=
-hearing and requires to send spoken language in a voice stream and receive=
 written language in a text stream.&quot;</p><p style=3D"color:rgb(0,0,0);f=
ont-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,Helvetica,sans-ser=
if;font-size:13px;background-color:rgb(255,255,221)"><br></p><p style=3D"co=
lor:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,He=
lvetica,sans-serif;font-size:13px;background-color:rgb(255,255,221)">Can we=
 Accept the proposed resolution?</p></div></div>

--f403045f8af2533603055b737b81--


From nobody Fri Oct 13 13:38:00 2017
Return-Path: <br@brianrosen.net>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9584313330C for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:37:58 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.589
X-Spam-Level: 
X-Spam-Status: No, score=-2.589 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, T_SPF_PERMERROR=0.01] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=brianrosen-net.20150623.gappssmtp.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id eJyPcV1Y5vpq for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:37:56 -0700 (PDT)
Received: from mail-qk0-x22f.google.com (mail-qk0-x22f.google.com [IPv6:2607:f8b0:400d:c09::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 65B47132620 for <slim@ietf.org>; Fri, 13 Oct 2017 13:37:56 -0700 (PDT)
Received: by mail-qk0-x22f.google.com with SMTP id o187so6483909qke.7 for <slim@ietf.org>; Fri, 13 Oct 2017 13:37:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=brianrosen-net.20150623.gappssmtp.com; s=20150623; h=from:message-id:mime-version:subject:date:in-reply-to:cc:to :references; bh=/+/ZIUStn6aAvxrcIkcEsaT6j6mItRoUCfu2BQr75lM=; b=HeMG44cpo9AEFLkBvhI+SMqtgwAstEqGBbAviiQCFR7TCzmv3O4nrQX/qT2+/aRQvL k8KdIORGdmyj9IiS36GxdQOpDIC+Pmn4hOCWoHg2vtqfXHvgJcampTsKJPPHecw3tIo+ bzA3XKRRve6H5tECEMLSf5H9mdlXfFpzQPzGbfh3fi9tZfCyxXqWWymwuYkbbRqlfwOR sfx1mLR7G3Txs+f/aLYUaxu3/q+gMcpAbCQOw/uONDoGRg6yNKrUlsa5vFlKiyI1qcvi sNDSeVyamBfI6YHLEOSm1dQIdRdCxmiRGTDKt5bd0X54kTjsweV7qGgaf4utX4so0HtU xg7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:message-id:mime-version:subject:date :in-reply-to:cc:to:references; bh=/+/ZIUStn6aAvxrcIkcEsaT6j6mItRoUCfu2BQr75lM=; b=WFQnyE14V+cthMR5NA6UqXwfWZt6NtldqyPFllk02F19kzPB2jDB0z/1EPnnMhR88v d7vKwUY5JpKdGmXCUaI8YlOqnklBKmjXkTU1rWnTYjkibETLJIFPKVeBi85rk1NXJ+KW dfgccV/pKJogimuLWZMYwQARnMv42AUnFtNNslWQunDN0y2yS+R++aQqBY1Ey3Txpg/5 f4Z6khSDhCJHt6TfxzAAdZBlyT/azAgcWvJaGXgNisKr6E+hLp78JrffgrzU1+u86V7E Dmc4QKJX8NoG895hEl97AMcy7KoyqL0GWbruiu1PpxZLhhzoQWNAtSDDVaJEpVAGeSOS WKnA==
X-Gm-Message-State: AMCzsaVHF7Fgkq8ir2Io4Q1FtFHLDIndsxsdF8I5RZ0e9iw+VbSSu5Kh E9TWfqB5hIYivfnvVveV0dT24A==
X-Google-Smtp-Source: ABhQp+Td0FP8q8IcqBETeKH2a1vlT9AnA5rovNObhLicWq+hwnjMF6Py658auCAXAlbTUSWugA9L9Q==
X-Received: by 10.55.137.65 with SMTP id l62mr3582457qkd.257.1507927075400; Fri, 13 Oct 2017 13:37:55 -0700 (PDT)
Received: from [10.33.193.3] (neustar-sthide-nat1.neustar.biz. [156.154.81.54]) by smtp.gmail.com with ESMTPSA id s22sm1109439qta.67.2017.10.13.13.37.53 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 13 Oct 2017 13:37:54 -0700 (PDT)
From: Brian Rosen <br@brianrosen.net>
Message-Id: <A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net>
Content-Type: multipart/alternative; boundary="Apple-Mail=_321DAA97-D7CC-4AC3-AE42-ED808878B686"
Mime-Version: 1.0 (Mac OS X Mail 10.3 \(3273\))
Date: Fri, 13 Oct 2017 16:37:52 -0400
In-Reply-To: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com>
Cc: slim@ietf.org
To: Bernard Aboba <bernard.aboba@gmail.com>
References: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com>
X-Mailer: Apple Mail (2.3273)
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/0HopAMcfzThT0eWegIQlrsJiE68>
Subject: Re: [Slim] Issue 47
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 20:37:59 -0000

--Apple-Mail=_321DAA97-D7CC-4AC3-AE42-ED808878B686
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

=E2=80=9Crequires to=E2=80=9D is awkward in my dialect of English.  =
Perhaps =E2=80=9Cdesires to=E2=80=9D?


> On Oct 13, 2017, at 4:26 PM, Bernard Aboba <bernard.aboba@gmail.com> =
wrote:
>=20
> Issue 47 ( https://trac.ietf.org/trac/slim/ticket/47 =
<https://trac.ietf.org/trac/slim/ticket/47> ) suggest the following =
change to the text:=20
>=20
> Change:=20
>=20
> ""Another example would be a user who is able to speak but is deaf or =
hard-of-hearing and requires a voice stream plus a text stream.""
>=20
> To:=20
>=20
> "Another example would be a user who is able to speak but is deaf or =
hard-of-hearing and requires to send spoken language in a voice stream =
and receive written language in a text stream."
>=20
>=20
>=20
> Can we Accept the proposed resolution?
>=20
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim


--Apple-Mail=_321DAA97-D7CC-4AC3-AE42-ED808878B686
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html;
	charset=utf-8

<html><head><meta http-equiv=3D"Content-Type" content=3D"text/html =
charset=3Dutf-8"></head><body style=3D"word-wrap: break-word; =
-webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" =
class=3D"">=E2=80=9Crequires to=E2=80=9D is awkward in my dialect of =
English. &nbsp;Perhaps =E2=80=9Cdesires to=E2=80=9D?<div class=3D""><br =
class=3D""></div><div class=3D""><br class=3D""><div><blockquote =
type=3D"cite" class=3D""><div class=3D"">On Oct 13, 2017, at 4:26 PM, =
Bernard Aboba &lt;<a href=3D"mailto:bernard.aboba@gmail.com" =
class=3D"">bernard.aboba@gmail.com</a>&gt; wrote:</div><br =
class=3D"Apple-interchange-newline"><div class=3D""><div dir=3D"ltr" =
class=3D"">Issue 47 ( <a =
href=3D"https://trac.ietf.org/trac/slim/ticket/47" =
class=3D"">https://trac.ietf.org/trac/slim/ticket/47</a> ) suggest the =
following change to the text:&nbsp;<div class=3D""><br =
class=3D""></div><div class=3D"">Change:&nbsp;</div><div class=3D""><br =
class=3D""></div><div class=3D"">"<span style=3D"background-color: =
rgb(255, 255, 221); font-family: Verdana, Arial, 'Bitstream Vera Sans', =
Helvetica, sans-serif; font-size: 13px;" class=3D"">"Another example =
would be a user who is able to spea</span><span style=3D"background-color:=
 rgb(255, 255, 221); font-family: Verdana, Arial, 'Bitstream Vera Sans', =
Helvetica, sans-serif; font-size: 13px;" class=3D"">k but is deaf or =
hard-of-hearing and requires a voice stream plus&nbsp;</span><span =
style=3D"font-size: 13px; background-color: rgb(255, 255, 221); =
font-family: Verdana, Arial, 'Bitstream Vera Sans', Helvetica, =
sans-serif;" class=3D"">a text stream."</span>"</div><div class=3D""><br =
class=3D""></div><div class=3D"">To:&nbsp;</div><div class=3D""><br =
class=3D""></div><div class=3D""><p style=3D"font-family: Verdana, =
Arial, 'Bitstream Vera Sans', Helvetica, sans-serif; font-size: 13px; =
background-color: rgb(255, 255, 221);" class=3D"">"Another example would =
be a user who is able to speak but is deaf or hard-of-hearing and =
requires to send spoken language in a voice stream and receive written =
language in a text stream."</p><p style=3D"font-family: Verdana, Arial, =
'Bitstream Vera Sans', Helvetica, sans-serif; font-size: 13px; =
background-color: rgb(255, 255, 221);" class=3D""><br class=3D""></p><p =
style=3D"font-family: Verdana, Arial, 'Bitstream Vera Sans', Helvetica, =
sans-serif; font-size: 13px; background-color: rgb(255, 255, 221);" =
class=3D"">Can we Accept the proposed resolution?</p></div></div>
_______________________________________________<br class=3D"">SLIM =
mailing list<br class=3D""><a href=3D"mailto:SLIM@ietf.org" =
class=3D"">SLIM@ietf.org</a><br =
class=3D"">https://www.ietf.org/mailman/listinfo/slim<br =
class=3D""></div></blockquote></div><br class=3D""></div></body></html>=

--Apple-Mail=_321DAA97-D7CC-4AC3-AE42-ED808878B686--


From nobody Fri Oct 13 13:47:09 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3707A133188 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:47:08 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id I7UCXpV6OodW for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:47:06 -0700 (PDT)
Received: from mail-ua0-x232.google.com (mail-ua0-x232.google.com [IPv6:2607:f8b0:400c:c08::232]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 1913C12895E for <slim@ietf.org>; Fri, 13 Oct 2017 13:47:06 -0700 (PDT)
Received: by mail-ua0-x232.google.com with SMTP id b11so6140005uae.12 for <slim@ietf.org>; Fri, 13 Oct 2017 13:47:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:from:date:message-id:subject:to; bh=CdlbDIx9Ce862n0k88cqU7KWfKRdDRxysAFEwn4BcCU=; b=sEj7lB9w2Gtf+5LNFe/UFTSlY13zrKlMBh1G9Dcz2gCrAxXkDj5+L/Jdkj3v18ga1O x1mtRSgmuDjkl6aiOHEj1QDDqepaQtHlxWxjgDQmGBcLUupI8X1QPusLd1ETDi2qDLZn RO/Y2MAzZaRrhWE2rk4xtBYQXQQXn/lw5NeOj0hec7rtMfMxbBQA9p8RtOqH+oIXc9mU 7G5dw4plR6QnrL+8i0735vCCO/QxcLWBmDGn+sGKywzLjMbvEAqZB9C35KNT/tHAIedO vLdU8lM7mNfPugPikfv3WO7d8sq/yj4MiW/mYbrKnzl+JN7QPf/2SgN0Vqo9NR4IbsCj mVVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:from:date:message-id:subject:to; bh=CdlbDIx9Ce862n0k88cqU7KWfKRdDRxysAFEwn4BcCU=; b=BgSqVc0aM8r4gJgqUqJtdKwy1DKHXNyspQhqFcUciUROhH4nbXy0rbaIueJqCgox3g dyaQlOeIhYn7FCSW+RpDIbVzct8HQZ+RhCoWIVWD+BK0VVt+es9/Lo4rTpou2OakFso7 iDOfiPsWkVa2kH3ksfxFxbpv2Q4OPf90dSEZOwbOY3tJW/YyX1DWqMmuM5Cu4Kzi8dy6 +qU0lsa1dodM7KdhBt+DpvR1qORdhmITbaRKJzY4xUtIuvt+EdpvkGMYvhHpHwLR/oVU RmIV08NV1Btq7pbGkOrNN1rC4MUG4CPHJJ7NLrLkyHKsx30Z6kJsDBnCYNoIwmsfp/y2 PnDQ==
X-Gm-Message-State: AMCzsaXKLWacIrVjUDkSyKHHuJwnc4z2CoqR5fxd486VBkp9AC085Zlf IcgcSjRpC3L47LGBRVI4U39k5/pWns0YE5iCEiEo8p5V
X-Google-Smtp-Source: AOwi7QAYwTrqmYgMzEYhGEa+0bqYYgMNoN9QG64ms4x3xtX3brxPDj31IVnUZw4qOGaQ861iy7TmUueGUKUY/Iqug3A=
X-Received: by 10.159.35.226 with SMTP id 89mr2217987uao.195.1507927624720; Fri, 13 Oct 2017 13:47:04 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Fri, 13 Oct 2017 13:46:44 -0700 (PDT)
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Fri, 13 Oct 2017 13:46:44 -0700
Message-ID: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com>
To: slim@ietf.org
Content-Type: multipart/alternative; boundary="94eb2c03da366b25c9055b73c2cb"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/oE-p2SPSFkVlihd3ig9oN4aEkXk>
Subject: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 20:47:08 -0000

--94eb2c03da366b25c9055b73c2cb
Content-Type: text/plain; charset="UTF-8"

Issue 43 ( https://trac.ietf.org/trac/slim/ticket/43 ) results from a
review comment stating that a simple way is needed to decide whether a
language tag denotes a sign language or a written or spoken language.

Some applications scan the IANA language registry at startup for the word
"Sign" in the tag description:
https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry

Currently, there are 319 language subtags that include "Sign Language" in
their description.

Given the current layout of the language subtag registry, it is not clear
to me that there is an easier way to determine which tags represent sign
languages.  Nor is it within the SLIM WG charter to develop a modification
to the language subtag registry to address this concern.

So I am wondering whether we might resolve this with a Note outlining the
problem but not offering a solution.
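[Editor's note, not part of the original message: the registry-scanning approach described above can be sketched as follows. This assumes the record-jar layout of the IANA language-subtag-registry file (stanzas separated by "%%" lines, "Field: value" pairs, continuation lines indented); the sample data and function names are illustrative, and multi-valued fields such as repeated Description lines are simplified to keep only the last value.]

```python
def parse_registry(text):
    """Parse the IANA registry's record-jar format into a list of dicts.

    Simplification: a repeated field (e.g. multiple Description lines in
    one record) keeps only its last value.
    """
    records = []
    for stanza in text.split("%%"):
        record = {}
        key = None
        for line in stanza.splitlines():
            if line.startswith((" ", "\t")) and key:
                # Indented continuation of the previous field's value.
                record[key] += " " + line.strip()
            elif ":" in line:
                key, _, value = line.partition(":")
                key = key.strip()
                record[key] = value.strip()
        if record:
            records.append(record)
    return records


def sign_language_subtags(records):
    """Return language subtags whose description mentions "Sign Language"."""
    return [r["Subtag"] for r in records
            if r.get("Type") == "language"
            and "Sign Language" in r.get("Description", "")]


# Illustrative excerpt in the registry's format (not the full file,
# which applications would fetch from iana.org at startup).
sample = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
"""

print(sign_language_subtags(parse_registry(sample)))  # ['ase']
```

Run against the full registry file, the same scan would surface the roughly 319 subtags mentioned above; the fragility Bernard notes is that this keys off free-text descriptions rather than any machine-readable modality field.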

--94eb2c03da366b25c9055b73c2cb
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Issue 43 ( <a href=3D"https://trac.ietf.org/trac/slim/tick=
et/43">https://trac.ietf.org/trac/slim/ticket/43</a> ) results from a revie=
w comment that=C2=A0<span style=3D"color:rgb(0,0,0);font-family:Verdana,Ari=
al,&quot;Bitstream Vera Sans&quot;,Helvetica,sans-serif;font-size:13px;back=
ground-color:rgb(255,255,221)">said that a simple way is required to decide=
 if a language tag is a sign language or a written or spoken language.</spa=
n><div><span style=3D"color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bits=
tream Vera Sans&quot;,Helvetica,sans-serif;font-size:13px;background-color:=
rgb(255,255,221)"><br></span></div><div><span style=3D"color:rgb(0,0,0);fon=
t-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,Helvetica,sans-serif=
;font-size:13px;background-color:rgb(255,255,221)">Some applications scan</=
span><span style=3D"background-color:rgb(255,255,221);color:rgb(0,0,0);font=
-family:Verdana,Arial,&quot;Bitstream Vera Sans&quot;,Helvetica,sans-serif;=
font-size:13px">=C2=A0the IANA language registry </span><span style=3D"font=
-size:13px;background-color:rgb(255,255,221);color:rgb(0,0,0);font-family:V=
erdana,Arial,&quot;Bitstream Vera Sans&quot;,Helvetica,sans-serif">at start=
up for the word &quot;Sign&quot; in the tag description:</span></div><div><=
span style=3D"background-color:rgb(255,255,221)"><font color=3D"#000000" fa=
ce=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif"><a href=
=3D"https://www.iana.org/assignments/language-subtag-registry/language-subt=
ag-registry">https://www.iana.org/assignments/language-subtag-registry/lang=
uage-subtag-registry</a></font><br></span></div><div><span style=3D"backgro=
und-color:rgb(255,255,221)"><font color=3D"#000000" face=3D"Verdana, Arial,=
 Bitstream Vera Sans, Helvetica, sans-serif"><br></font></span></div><div><=
span style=3D"background-color:rgb(255,255,221)"><font color=3D"#000000" fa=
ce=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif">Currently=
, there are 319 language subtags that include &quot;Sign Language&quot; in =
their description.=C2=A0</font></span></div><div><span style=3D"background-=
color:rgb(255,255,221)"><font color=3D"#000000" face=3D"Verdana, Arial, Bit=
stream Vera Sans, Helvetica, sans-serif"><br></font></span></div><div><span=
 style=3D"background-color:rgb(255,255,221)"><font color=3D"#000000" face=
=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif">Given the c=
urrent layout of the language subtag registry, it is not clear to me that t=
here is an easier way to determine which tags represent sign languages.=C2=
=A0 Nor is it within the SLIM WG charter to develop a modification to the l=
anguage subtag registry to address this concern.=C2=A0</font></span></div><=
div><span style=3D"background-color:rgb(255,255,221)"><font color=3D"#00000=
0" face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif"><br>=
</font></span></div><div><span style=3D"background-color:rgb(255,255,221)">=
<font color=3D"#000000" face=3D"Verdana, Arial, Bitstream Vera Sans, Helvet=
ica, sans-serif">So I am wondering whether we might resolve this with a Not=
e outlining the problem but not offering a solution.=C2=A0</font></span></d=
iv><div><span style=3D"background-color:rgb(255,255,221)"><font color=3D"#0=
00000" face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif">=
<br></font></span></div></div>

--94eb2c03da366b25c9055b73c2cb--


From nobody Fri Oct 13 13:58:12 2017
Return-Path: <br@brianrosen.net>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 23222134216 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:58:10 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.589
X-Spam-Level: 
X-Spam-Status: No, score=-2.589 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, T_SPF_PERMERROR=0.01] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=brianrosen-net.20150623.gappssmtp.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id N751PVJ5O-GU for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 13:58:07 -0700 (PDT)
Received: from mail-qk0-x22c.google.com (mail-qk0-x22c.google.com [IPv6:2607:f8b0:400d:c09::22c]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id C3E6B1331D7 for <slim@ietf.org>; Fri, 13 Oct 2017 13:58:06 -0700 (PDT)
Received: by mail-qk0-x22c.google.com with SMTP id n5so6531070qke.11 for <slim@ietf.org>; Fri, 13 Oct 2017 13:58:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=brianrosen-net.20150623.gappssmtp.com; s=20150623; h=from:message-id:mime-version:subject:date:in-reply-to:cc:to :references; bh=DiNEd9zRTtBYGLNweK8iUt8TdcDJAK3OsG7kF5RCPts=; b=BoHT5Y+kbxWyyohmDrqjwZxLhVHlg673McAJsCFyYqqvxp+4leqvApMlcp2lXH42YB NKe5gKFobK0RoPOKhK1yFTRnGLfd+gXMEKGWVz63+457wiqINFrPK3ibWpmsU9SFe6/3 dQqG933s6A9AfjwHxhSCX98/NpN0dl65f3He3turzkDLsuXW8Ae57J7geZPAQYwWUiw2 WGUqv4RfRzENS6kdHOUVZnWCTdMuW7q2ByTnfeS4/nonbNr+zA0WTK0rSdJAmPvIyvSm 2eFzPUF+FejkJt1nbcHFu59ymt3B6cn87I0J/cVFgjlLLnPOqiD3aSBhbdgqvEW+NjW/ +tyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:message-id:mime-version:subject:date :in-reply-to:cc:to:references; bh=DiNEd9zRTtBYGLNweK8iUt8TdcDJAK3OsG7kF5RCPts=; b=TwbRn1wZAw7L1Gez2sZ0FPBTETepoOxrePH/v5+OQ4PkEiXfTQbc1Kodl5l0tJpVq/ pLe8CP4nWuvKYBqnL0A+rrpaRbG0UZIUK5xn+U/S/JfXDt2S5glmmTNZJCWz0XupY5Kn lMXYAgkFpbcTQhU2PgkqE7GZz2Vrj+xsr50MiWUV9/wEUJqwLiB5IeU5qxQ3NvGiDEF0 3+6/sEokCky0mBZerLTcWSxZPU7JZ9g5MMGtlHJL8wGs+E+6u7s1FoxZa26t0ayeTjpT CWBZQlX5FOkIbbu4UidNPc+n7vnev6zUL41/982k1h6SbxPFZaaTW8s/NIgH2fHgGTC0 MU4w==
X-Gm-Message-State: AMCzsaUAipJFu7N/HDPtls3N/QlSpzeNjO7kDgoPB/yrgk1pptXEIzZR WpxdYATtCwWFSkSVA0qZzZIyYg==
X-Google-Smtp-Source: AOwi7QC5MIhGFa5/Db04ri1fJjR4I8j4Uw6u4UusWE3jj1P+FY0HyylU8fP3qFHO7JxtuTwAdGhj5Q==
X-Received: by 10.55.163.138 with SMTP id m132mr4002053qke.60.1507928285911; Fri, 13 Oct 2017 13:58:05 -0700 (PDT)
Received: from [10.33.193.3] (neustar-sthide-nat1.neustar.biz. [156.154.81.54]) by smtp.gmail.com with ESMTPSA id v40sm1120247qtj.81.2017.10.13.13.58.04 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 13 Oct 2017 13:58:04 -0700 (PDT)
From: Brian Rosen <br@brianrosen.net>
Message-Id: <F099DEE5-0D77-45B9-B10E-ACDB4667B8ED@brianrosen.net>
Content-Type: multipart/alternative; boundary="Apple-Mail=_6B827CB9-D4AA-43FD-A9CE-2DC21B3D3826"
Mime-Version: 1.0 (Mac OS X Mail 10.3 \(3273\))
Date: Fri, 13 Oct 2017 16:58:02 -0400
In-Reply-To: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com>
Cc: slim@ietf.org
To: Bernard Aboba <bernard.aboba@gmail.com>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com>
X-Mailer: Apple Mail (2.3273)
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/turTwP_Qxn6q8PHn33s3yBosbwk>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 20:58:10 -0000

--Apple-Mail=_6B827CB9-D4AA-43FD-A9CE-2DC21B3D3826
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii

Sounds good to me.

Brian

> On Oct 13, 2017, at 4:46 PM, Bernard Aboba <bernard.aboba@gmail.com> =
wrote:
>=20
> Issue 43 ( https://trac.ietf.org/trac/slim/ticket/43 =
<https://trac.ietf.org/trac/slim/ticket/43> ) results from a review =
comment that said that a simple way is required to decide if a language =
tag is a sign language or a written or spoken language.
>=20
> Some applications scan the IANA language registry at startup for the =
word "Sign" in the tag description:
> =
https://www.iana.org/assignments/language-subtag-registry/language-subtag-=
registry =
<https://www.iana.org/assignments/language-subtag-registry/language-subtag=
-registry>
>=20
> Currently, there are 319 language subtags that include "Sign Language" =
in their description.=20
>=20
> Given the current layout of the language subtag registry, it is not =
clear to me that there is an easier way to determine which tags =
represent sign languages.  Nor is it within the SLIM WG charter to =
develop a modification to the language subtag registry to address this =
concern.=20
>=20
> So I am wondering whether we might resolve this with a Note outlining =
the problem but not offering a solution.=20
>=20
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim


--Apple-Mail=_6B827CB9-D4AA-43FD-A9CE-2DC21B3D3826
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html;
	charset=us-ascii

<html><head><meta http-equiv=3D"Content-Type" content=3D"text/html =
charset=3Dus-ascii"></head><body style=3D"word-wrap: break-word; =
-webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" =
class=3D"">Sounds good to me.<div class=3D""><br class=3D""></div><div =
class=3D"">Brian</div><div class=3D""><br class=3D""><div><blockquote =
type=3D"cite" class=3D""><div class=3D"">On Oct 13, 2017, at 4:46 PM, =
Bernard Aboba &lt;<a href=3D"mailto:bernard.aboba@gmail.com" =
class=3D"">bernard.aboba@gmail.com</a>&gt; wrote:</div><br =
class=3D"Apple-interchange-newline"><div class=3D""><div dir=3D"ltr" =
class=3D"">Issue 43 ( <a =
href=3D"https://trac.ietf.org/trac/slim/ticket/43" =
class=3D"">https://trac.ietf.org/trac/slim/ticket/43</a> ) results from =
a review comment that&nbsp;<span style=3D"font-family: Verdana, Arial, =
'Bitstream Vera Sans', Helvetica, sans-serif; font-size: 13px; =
background-color: rgb(255, 255, 221);" class=3D"">said that a simple way =
is required to decide if a language tag is a sign language or a written =
or spoken language.</span><div class=3D""><span style=3D"font-family: =
Verdana, Arial, 'Bitstream Vera Sans', Helvetica, sans-serif; font-size: =
13px; background-color: rgb(255, 255, 221);" class=3D""><br =
class=3D""></span></div><div class=3D""><span style=3D"font-family: =
Verdana, Arial, 'Bitstream Vera Sans', Helvetica, sans-serif; font-size: =
13px; background-color: rgb(255, 255, 221);" class=3D"">Some =
applications scan</span><span style=3D"background-color: rgb(255, 255, =
221); font-family: Verdana, Arial, 'Bitstream Vera Sans', Helvetica, =
sans-serif; font-size: 13px;" class=3D"">&nbsp;the IANA language =
registry </span><span style=3D"font-size: 13px; background-color: =
rgb(255, 255, 221); font-family: Verdana, Arial, 'Bitstream Vera Sans', =
Helvetica, sans-serif;" class=3D"">at startup for the word "Sign" in the =
tag description:</span></div><div class=3D""><span =
style=3D"background-color:rgb(255,255,221)" class=3D""><font =
face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif" =
class=3D""><a =
href=3D"https://www.iana.org/assignments/language-subtag-registry/language=
-subtag-registry" =
class=3D"">https://www.iana.org/assignments/language-subtag-registry/langu=
age-subtag-registry</a></font><br class=3D""></span></div><div =
class=3D""><span style=3D"background-color:rgb(255,255,221)" =
class=3D""><font face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, =
sans-serif" class=3D""><br class=3D""></font></span></div><div =
class=3D""><span style=3D"background-color:rgb(255,255,221)" =
class=3D""><font face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, =
sans-serif" class=3D"">Currently, there are 319 language subtags that =
include "Sign Language" in their =
description.&nbsp;</font></span></div><div class=3D""><span =
style=3D"background-color:rgb(255,255,221)" class=3D""><font =
face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif" =
class=3D""><br class=3D""></font></span></div><div class=3D""><span =
style=3D"background-color:rgb(255,255,221)" class=3D""><font =
face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif" =
class=3D"">Given the current layout of the language subtag registry, it =
is not clear to me that there is an easier way to determine which tags =
represent sign languages.&nbsp; Nor is it within the SLIM WG charter to =
develop a modification to the language subtag registry to address this =
concern.&nbsp;</font></span></div><div class=3D""><span =
style=3D"background-color:rgb(255,255,221)" class=3D""><font =
face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif" =
class=3D""><br class=3D""></font></span></div><div class=3D""><span =
style=3D"background-color:rgb(255,255,221)" class=3D""><font =
face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif" =
class=3D"">So I am wondering whether we might resolve this with a Note =
outlining the problem but not offering a =
solution.&nbsp;</font></span></div><div class=3D""><span =
style=3D"background-color:rgb(255,255,221)" class=3D""><font =
face=3D"Verdana, Arial, Bitstream Vera Sans, Helvetica, sans-serif" =
class=3D""><br class=3D""></font></span></div></div>
_______________________________________________<br class=3D"">SLIM =
mailing list<br class=3D""><a href=3D"mailto:SLIM@ietf.org" =
class=3D"">SLIM@ietf.org</a><br =
class=3D"">https://www.ietf.org/mailman/listinfo/slim<br =
class=3D""></div></blockquote></div><br class=3D""></div></body></html>=

--Apple-Mail=_6B827CB9-D4AA-43FD-A9CE-2DC21B3D3826--


From nobody Fri Oct 13 14:06:02 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E170B127517 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 14:06:01 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 5Fimq2SFPdmN for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 14:06:00 -0700 (PDT)
Received: from mail-ua0-x234.google.com (mail-ua0-x234.google.com [IPv6:2607:f8b0:400c:c08::234]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id E828E126B6E for <slim@ietf.org>; Fri, 13 Oct 2017 14:05:59 -0700 (PDT)
Received: by mail-ua0-x234.google.com with SMTP id n38so6153661uai.11 for <slim@ietf.org>; Fri, 13 Oct 2017 14:05:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:from:date:message-id:subject:to; bh=jepLmvmm6EOIwFjmT6Qk/3s0QKySP5dAhDU2Sehx8zQ=; b=iCynG14QQvauxUwVl+zAz9rlZo4amA7iz3xw5lzh9VPP5S/P0Uq4gRKz6pLJjhnujt buGRd/K7L2IKgOPCbs1qfUudmOmzgUbeaKbY4AnluAOggM8ttHmo+Ic+qZ99MGABwRB8 MUrRzZoR9nlxovLfOTPhk21HhVe7KGO/8aKZ7W01NUvcU3PEyByQVKXFM38xMfSv0lth wcb1E7McB79FAIbHOOvLD1VM+v+sz2frEDTe73z7gCLK/gD8BNNV5oBguQmz8muGIy5w hsYCddBUvNvSR3ZkdAGtdwDgwkGk/a66GC1FC4GOwrR9HoEjOcOO0BuEHXzkn3oATt5d 3yow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:from:date:message-id:subject:to; bh=jepLmvmm6EOIwFjmT6Qk/3s0QKySP5dAhDU2Sehx8zQ=; b=KlAUHt24HWmK2W9cCIp2CKlkgXxSEwWyHfyTjgj0FEA+laWL9sFvWq9+BXmEiGmkoP lkjnPavigESwxoYLgM3bIS6cnzigg9AgzDFiL2bABcophubRo0X+p9lIpIu6NSmuda8R Bv9TyGCPlcsG9E9hh0wtVh9LQnUllLAC/ZVPvjBngqc4+rr25FKoImrawnw80R/nRghI lkFfcsYBXd2xfNYNPJsZQWvQbS3b1cD6i0CfkFsCjA+Rsl63AvBJK+cAvdAcSf3JUBqY SLB4M+XUqjZ+Asg635AWOJINztlnmby5bA1dYdfcMS2AsePXYFKMeVezXL87OI3xCjzr xlug==
X-Gm-Message-State: AMCzsaW+QJaB7zUo/UMuizXOVvBIewcE3BsIOw7fi26rx/ae+5kJkoRs nAmoiXdg+K2Vm1BpHdMHH2zgI5l3OyJuzkijXaWIhjKn
X-Google-Smtp-Source: AOwi7QCN3EadacWqrTOLF2RLZaqEO6cy1sX7k9c4grPR3IU5kGItm7NFOf+z3m7x/XvWz7y4HRD4in+DGZ1H5JNbpyE=
X-Received: by 10.176.20.225 with SMTP id f30mr2330429uae.66.1507928758494; Fri, 13 Oct 2017 14:05:58 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Fri, 13 Oct 2017 14:05:38 -0700 (PDT)
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Fri, 13 Oct 2017 14:05:38 -0700
Message-ID: <CAOW+2dvguu3FkzTYZGDid5+aJrB8hX70Zv9aVTcvQGGtsGme5Q@mail.gmail.com>
To: slim@ietf.org
Content-Type: multipart/alternative; boundary="001a1145ab00ff2c3f055b740553"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Wcs6kS6zk54RI6LSbRlzijD3aKk>
Subject: [Slim] Status of draft-ietf-slim-negotiating-human-language-14
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 21:06:02 -0000

--001a1145ab00ff2c3f055b740553
Content-Type: text/plain; charset="UTF-8"

Looking at TRAC, it appears that 3 issues remain open against the document
(see: https://trac.ietf.org/trac/slim/report/1 ):

#47  Unsupported simultaneity requirement
     <https://trac.ietf.org/trac/slim/ticket/47>
     Component: negotiating-human-language | Type: defect
     Owner: draft-ietf-slim-negotiating-human-language@ietf.org
     Status: new | Created: Jul 31, 2017

#43  How to know the modality of a language indication?
     <https://trac.ietf.org/trac/slim/ticket/43>
     Component: negotiating-human-language | Type: defect
     Owner: draft-ietf-slim-negotiating-human-language@ietf.org
     Status: new | Created: Jul 31, 2017

#41  Allow sign languages in the text stream for text notations of sign language
     <https://trac.ietf.org/trac/slim/ticket/41>
     Component: negotiating-human-language | Type: enhancement
     Owner: draft-ietf-slim-negotiating-human-language@ietf.org
     Status: new | Created: Jun 29, 2017

--001a1145ab00ff2c3f055b740553
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Looking at TRAC, it appears that 3 issues remain open agai=
nst the document (see:=C2=A0<a href=3D"https://trac.ietf.org/trac/slim/repo=
rt/1">https://trac.ietf.org/trac/slim/report/1</a> ):<div><br></div><div><t=
able class=3D"gmail-listing gmail-tickets" style=3D"clear:both;border-botto=
m:1px solid rgb(215,215,215);border-collapse:collapse;margin-top:1em;width:=
1878px;color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream Vera Sans=
&quot;,Helvetica,sans-serif;font-size:13px"><thead><tr class=3D"gmail-trac-=
columns" style=3D"font-stretch:normal;line-height:normal;background:rgb(247=
,247,240)"><th style=3D"font-stretch:normal;font-size:11px;line-height:norm=
al;text-align:left;padding:2px 0.5em;border-width:1px;border-style:solid;bo=
rder-color:rgb(215,215,215) rgb(215,215,215) rgb(153,153,153);vertical-alig=
n:bottom;white-space:nowrap;background-image:initial;background-position:in=
itial;background-size:initial;background-repeat:initial;background-origin:i=
nitial;background-clip:initial;text-transform:capitalize"><a href=3D"https:=
//trac.ietf.org/trac/slim/report/1?sort=3Dticket&amp;asc=3D1&amp;page=3D1" =
style=3D"text-decoration-line:none;color:rgb(187,0,0);border:none;padding-r=
ight:12px">Ticket</a></th><th style=3D"font-stretch:normal;font-size:11px;l=
ine-height:normal;text-align:left;padding:2px 0.5em;border-width:1px;border=
-style:solid;border-color:rgb(215,215,215) rgb(215,215,215) rgb(153,153,153=
);vertical-align:bottom;white-space:nowrap;background-image:initial;backgro=
und-position:initial;background-size:initial;background-repeat:initial;back=
ground-origin:initial;background-clip:initial;text-transform:capitalize"><a=
 href=3D"https://trac.ietf.org/trac/slim/report/1?sort=3Dsummary&amp;asc=3D=
1&amp;page=3D1" style=3D"text-decoration-line:none;color:rgb(187,0,0);borde=
r:none;padding-right:12px">Summary</a></th><th style=3D"font-stretch:normal=
;font-size:11px;line-height:normal;text-align:left;padding:2px 0.5em;border=
-width:1px;border-style:solid;border-color:rgb(215,215,215) rgb(215,215,215=
) rgb(153,153,153);vertical-align:bottom;white-space:nowrap;background-imag=
e:initial;background-position:initial;background-size:initial;background-re=
peat:initial;background-origin:initial;background-clip:initial;text-transfo=
rm:capitalize"><a href=3D"https://trac.ietf.org/trac/slim/report/1?sort=3Dc=
omponent&amp;asc=3D1&amp;page=3D1" style=3D"text-decoration-line:none;color=
:rgb(187,0,0);border:none;padding-right:12px">Component</a></th><th style=
=3D"font-stretch:normal;font-size:11px;line-height:normal;text-align:left;p=
adding:2px 0.5em;border-width:1px;border-style:solid;border-color:rgb(215,2=
15,215) rgb(215,215,215) rgb(153,153,153);vertical-align:bottom;white-space=
:nowrap;background-image:initial;background-position:initial;background-siz=
e:initial;background-repeat:initial;background-origin:initial;background-cl=
ip:initial;text-transform:capitalize"><a href=3D"https://trac.ietf.org/trac=
/slim/report/1?sort=3Dversion&amp;asc=3D1&amp;page=3D1" style=3D"text-decor=
ation-line:none;color:rgb(187,0,0);border:none;padding-right:12px">Version<=
/a></th><th style=3D"font-stretch:normal;font-size:11px;line-height:normal;=
text-align:left;padding:2px 0.5em;border-width:1px;border-style:solid;borde=
r-color:rgb(215,215,215) rgb(215,215,215) rgb(153,153,153);vertical-align:b=
ottom;white-space:nowrap;background-image:initial;background-position:initi=
al;background-size:initial;background-repeat:initial;background-origin:init=
ial;background-clip:initial;text-transform:capitalize"><a href=3D"https://t=
rac.ietf.org/trac/slim/report/1?sort=3Dmilestone&amp;asc=3D1&amp;page=3D1" =
style=3D"text-decoration-line:none;color:rgb(187,0,0);border:none;padding-r=
ight:12px">Milestone</a></th><th style=3D"font-stretch:normal;font-size:11p=
x;line-height:normal;text-align:left;padding:2px 0.5em;border-width:1px;bor=
der-style:solid;border-color:rgb(215,215,215) rgb(215,215,215) rgb(153,153,=
153);vertical-align:bottom;white-space:nowrap;background-image:initial;back=
ground-position:initial;background-size:initial;background-repeat:initial;b=
ackground-origin:initial;background-clip:initial;text-transform:capitalize"=
><a href=3D"https://trac.ietf.org/trac/slim/report/1?sort=3Dtype&amp;asc=3D=
1&amp;page=3D1" style=3D"text-decoration-line:none;color:rgb(187,0,0);borde=
r:none;padding-right:12px">Type</a></th><th style=3D"font-stretch:normal;fo=
nt-size:11px;line-height:normal;text-align:left;padding:2px 0.5em;border-wi=
dth:1px;border-style:solid;border-color:rgb(215,215,215) rgb(215,215,215) r=
gb(153,153,153);vertical-align:bottom;white-space:nowrap;background-image:i=
nitial;background-position:initial;background-size:initial;background-repea=
t:initial;background-origin:initial;background-clip:initial;text-transform:=
capitalize"><a href=3D"https://trac.ietf.org/trac/slim/report/1?sort=3Downe=
r&amp;asc=3D1&amp;page=3D1" style=3D"text-decoration-line:none;color:rgb(18=
7,0,0);border:none;padding-right:12px">Owner</a></th><th style=3D"font-stre=
tch:normal;font-size:11px;line-height:normal;text-align:left;padding:2px 0.=
5em;border-width:1px;border-style:solid;border-color:rgb(215,215,215) rgb(2=
15,215,215) rgb(153,153,153);vertical-align:bottom;white-space:nowrap;backg=
round-image:initial;background-position:initial;background-size:initial;bac=
kground-repeat:initial;background-origin:initial;background-clip:initial;te=
xt-transform:capitalize"><a href=3D"https://trac.ietf.org/trac/slim/report/=
1?sort=3Dstatus&amp;asc=3D1&amp;page=3D1" style=3D"text-decoration-line:non=
e;color:rgb(187,0,0);border:none;padding-right:12px">Status</a></th><th sty=
le=3D"font-stretch:normal;font-size:11px;line-height:normal;text-align:left=
;padding:2px 0.5em;border-width:1px;border-style:solid;border-color:rgb(215=
,215,215) rgb(215,215,215) rgb(153,153,153);vertical-align:bottom;white-spa=
ce:nowrap;background-image:initial;background-position:initial;background-s=
ize:initial;background-repeat:initial;background-origin:initial;background-=
clip:initial;text-transform:capitalize"><a href=3D"https://trac.ietf.org/tr=
ac/slim/report/1?sort=3Dcreated&amp;asc=3D1&amp;page=3D1" style=3D"text-dec=
oration-line:none;color:rgb(187,0,0);border:none;padding-right:12px">Create=
d</a></th></tr></thead><tbody><tr class=3D"gmail-color3-even" style=3D"font=
-stretch:normal;line-height:normal;border-bottom:1px solid rgb(204,204,204)=
;border-top:1px solid rgb(204,204,204);background:rgb(246,246,246);border-r=
ight-color:rgb(204,204,204);border-left-color:rgb(204,204,204);color:rgb(51=
,51,51)"><td class=3D"gmail-ticket" style=3D"padding:0.3em 0.5em;border:1px=
 dotted rgb(221,221,221);vertical-align:top"><a title=3D"View ticket" href=
=3D"https://trac.ietf.org/trac/slim/ticket/47" style=3D"text-decoration-lin=
e:none;color:rgb(187,0,0);border-bottom:none">#47</a></td><td class=3D"gmai=
l-summary" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);=
vertical-align:top"><a title=3D"View ticket" href=3D"https://trac.ietf.org/=
trac/slim/ticket/47" style=3D"text-decoration-line:none;color:rgb(187,0,0);=
border-bottom:none">Unsupported simultaneity requirement</a></td><td class=
=3D"gmail-component" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221=
,221,221);vertical-align:top">negotiating-human-language</td><td class=3D"g=
mail-version" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,22=
1);vertical-align:top"></td><td class=3D"gmail-milestone" style=3D"padding:=
0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:top"></td><td=
 class=3D"gmail-type" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(22=
1,221,221);vertical-align:top">defect</td><td class=3D"gmail-owner" style=
=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:t=
op"><a href=3D"mailto:draft-ietf-slim-negotiating-human-language@ietf.org">=
draft-ietf-slim-negotiating-human-language@ietf.org</a></td><td class=3D"gm=
ail-status" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221)=
;vertical-align:top">new</td><td class=3D"gmail-date" style=3D"padding:0.3e=
m 0.5em;border:1px dotted rgb(221,221,221);vertical-align:top">Jul 31, 2017=
</td></tr><tr class=3D"gmail-color4-odd" style=3D"font-stretch:normal;line-=
height:normal;border-bottom:1px solid rgb(204,238,238);border-top:1px solid=
 rgb(204,238,238);background:rgb(231,255,255);border-right-color:rgb(204,23=
8,238);border-left-color:rgb(204,238,238);color:rgb(0,153,153)"><td class=
=3D"gmail-ticket" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,22=
1,221);vertical-align:top"><a title=3D"View ticket" href=3D"https://trac.ie=
tf.org/trac/slim/ticket/43" style=3D"text-decoration-line:none;color:rgb(18=
7,0,0);border-bottom:none">#43</a></td><td class=3D"gmail-summary" style=3D=
"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:top"=
><a title=3D"View ticket" href=3D"https://trac.ietf.org/trac/slim/ticket/43=
" style=3D"text-decoration-line:none;color:rgb(187,0,0);border-bottom:none"=
>How to know the modality of a language indication?</a></td><td class=3D"gm=
ail-component" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,2=
21);vertical-align:top">negotiating-human-language</td><td class=3D"gmail-v=
ersion" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);ver=
tical-align:top"></td><td class=3D"gmail-milestone" style=3D"padding:0.3em =
0.5em;border:1px dotted rgb(221,221,221);vertical-align:top"></td><td class=
=3D"gmail-type" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,=
221);vertical-align:top">defect</td><td class=3D"gmail-owner" style=3D"padd=
ing:0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:top"><a h=
ref=3D"mailto:draft-ietf-slim-negotiating-human-language@ietf.org">draft-ie=
tf-slim-negotiating-human-language@ietf.org</a></td><td class=3D"gmail-stat=
us" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);vertica=
l-align:top">new</td><td class=3D"gmail-date" style=3D"padding:0.3em 0.5em;=
border:1px dotted rgb(221,221,221);vertical-align:top">Jul 31, 2017</td></t=
r><tr class=3D"gmail-color4-even" style=3D"font-stretch:normal;line-height:=
normal;border-bottom:1px solid rgb(187,238,238);border-top:1px solid rgb(18=
7,238,238);background:rgb(221,255,255);border-right-color:rgb(187,238,238);=
border-left-color:rgb(187,238,238);color:rgb(0,153,153)"><td class=3D"gmail=
-ticket" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);ve=
rtical-align:top"><a title=3D"View ticket" href=3D"https://trac.ietf.org/tr=
ac/slim/ticket/41" style=3D"text-decoration-line:none;color:rgb(187,0,0);bo=
rder-bottom:none">#41</a></td><td class=3D"gmail-summary" style=3D"padding:=
0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:top"><a title=
=3D"View ticket" href=3D"https://trac.ietf.org/trac/slim/ticket/41" style=
=3D"text-decoration-line:none;color:rgb(187,0,0);border-bottom:none">Allow =
sign languages in the text stream for text notations of sign language</a></=
td><td class=3D"gmail-component" style=3D"padding:0.3em 0.5em;border:1px do=
tted rgb(221,221,221);vertical-align:top">negotiating-human-language</td><t=
d class=3D"gmail-version" style=3D"padding:0.3em 0.5em;border:1px dotted rg=
b(221,221,221);vertical-align:top"></td><td class=3D"gmail-milestone" style=
=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:t=
op"></td><td class=3D"gmail-type" style=3D"padding:0.3em 0.5em;border:1px d=
otted rgb(221,221,221);vertical-align:top">enhancement</td><td class=3D"gma=
il-owner" style=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);v=
ertical-align:top"><a href=3D"mailto:draft-ietf-slim-negotiating-human-lang=
uage@ietf.org">draft-ietf-slim-negotiating-human-language@ietf.org</a></td>=
<td class=3D"gmail-status" style=3D"padding:0.3em 0.5em;border:1px dotted r=
gb(221,221,221);vertical-align:top">new</td><td class=3D"gmail-date" style=
=3D"padding:0.3em 0.5em;border:1px dotted rgb(221,221,221);vertical-align:t=
op">Jun 29, 2017<br><br></td></tr></tbody></table><div><br></div></div></di=
v>

--001a1145ab00ff2c3f055b740553--


From nobody Fri Oct 13 15:03:49 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 735D11320CF for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 15:03:44 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id K8JJEC22Wf1d for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 15:03:42 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 70E5B124239 for <slim@ietf.org>; Fri, 13 Oct 2017 15:03:42 -0700 (PDT)
X-Halon-ID: 533a0060-b062-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id 533a0060-b062-11e7-99c0-005056917f90; Sat, 14 Oct 2017 00:03:21 +0200 (CEST)
To: slim@ietf.org
References: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <69f14269-9600-6536-dcdd-421fa06d823a@omnitor.se>
Date: Sat, 14 Oct 2017 00:03:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------4CFB04BAE34399F8AFEEFBFE"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/c98Y21eU4JDbpd69mZGMGu64Y1s>
Subject: Re: [Slim] Issue 47
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 22:03:44 -0000

This is a multi-part message in MIME format.
--------------4CFB04BAE34399F8AFEEFBFE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Den 2017-10-13 kl. 22:26, skrev Bernard Aboba:
> Issue 47 ( https://trac.ietf.org/trac/slim/ticket/47 ) suggests the 
> following change to the text:
>
> Change:
>
> ""Another example would be a user who is able to speak but is deaf or 
> hard-of-hearing and requires a voice stream plus a text stream.""
>
> To:
>
> "Another example would be a user who is able to speak but is deaf or 
> hard-of-hearing and requires to send spoken language in a voice stream 
> and receive written language in a text stream."
>
>
> Can we accept the proposed resolution?
>
<GH>I accept it.

>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------4CFB04BAE34399F8AFEEFBFE
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Den 2017-10-13 kl. 22:26, skrev Bernard Aboba:<br>
    <blockquote type="cite"
cite="mid:CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com">
      <div dir="ltr">Issue 47 ( <a
          href="https://trac.ietf.org/trac/slim/ticket/47"
          moz-do-not-send="true">https://trac.ietf.org/trac/slim/ticket/47</a>
        ) suggests the following change to the text: 
        <div><br>
        </div>
        <div>Change: </div>
        <div><br>
        </div>
        <div>"<span
style="background-color:rgb(255,255,221);color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream
            Vera Sans&quot;,Helvetica,sans-serif;font-size:13px">"Another
            example would be a user who is able to spea</span><span
style="background-color:rgb(255,255,221);color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream
            Vera Sans&quot;,Helvetica,sans-serif;font-size:13px">k but
            is deaf or hard-of-hearing and requires a voice stream plus </span><span
style="font-size:13px;background-color:rgb(255,255,221);color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream
            Vera Sans&quot;,Helvetica,sans-serif">a text stream."</span>"</div>
        <div><br>
        </div>
        <div>To: </div>
        <div><br>
        </div>
        <div>
          <p
            style="color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream
            Vera
Sans&quot;,Helvetica,sans-serif;font-size:13px;background-color:rgb(255,255,221)">"Another
            example would be a user who is able to speak but is deaf or
            hard-of-hearing and requires to send spoken language in a
            voice stream and receive written language in a text stream."</p>
          <p
            style="color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream
            Vera
Sans&quot;,Helvetica,sans-serif;font-size:13px;background-color:rgb(255,255,221)"><br>
          </p>
          <p
            style="color:rgb(0,0,0);font-family:Verdana,Arial,&quot;Bitstream
            Vera
Sans&quot;,Helvetica,sans-serif;font-size:13px;background-color:rgb(255,255,221)">Can
            we accept the proposed resolution?</p>
        </div>
      </div>
    </blockquote>
    &lt;GH&gt;I accept it.<br>
    <br>
    <blockquote type="cite"
cite="mid:CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com"><br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
SLIM mailing list
<a class="moz-txt-link-abbreviated" href="mailto:SLIM@ietf.org">SLIM@ietf.org</a>
<a class="moz-txt-link-freetext" href="https://www.ietf.org/mailman/listinfo/slim">https://www.ietf.org/mailman/listinfo/slim</a>
</pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------4CFB04BAE34399F8AFEEFBFE--


From nobody Fri Oct 13 15:05:19 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7F819124239 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 15:05:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bL_Wmz23-x1m for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 15:05:13 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 94E1A1320CF for <slim@ietf.org>; Fri, 13 Oct 2017 15:05:13 -0700 (PDT)
X-Halon-ID: 8a50ba7e-b062-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id 8a50ba7e-b062-11e7-99c0-005056917f90; Sat, 14 Oct 2017 00:04:53 +0200 (CEST)
To: slim@ietf.org
References: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com> <A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <687a1a59-c72a-275b-6573-0f92a1f3de96@omnitor.se>
Date: Sat, 14 Oct 2017 00:05:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net>
Content-Type: multipart/alternative; boundary="------------095F36FD523241F388652F9E"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/eun8dbCWoSv0dFhXblH8qysoYRQ>
Subject: Re: [Slim] Issue 47
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 22:05:15 -0000

This is a multi-part message in MIME format.
--------------095F36FD523241F388652F9E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit



Den 2017-10-13 kl. 22:37, skrev Brian Rosen:
> “requires to” is awkward in my dialect of English.  Perhaps “desires to”?
<GH> I can accept that too.
>
>
>> On Oct 13, 2017, at 4:26 PM, Bernard Aboba <bernard.aboba@gmail.com 
>> <mailto:bernard.aboba@gmail.com>> wrote:
>>
>> Issue 47 ( https://trac.ietf.org/trac/slim/ticket/47 ) suggests the 
>> following change to the text:
>>
>> Change:
>>
>> ""Another example would be a user who is able to speak but is deaf or 
>> hard-of-hearing and requires a voice stream plus a text stream.""
>>
>> To:
>>
>> "Another example would be a user who is able to speak but is deaf or 
>> hard-of-hearing and requires to send spoken language in a voice 
>> stream and receive written language in a text stream."
>>
>>
>> Can we accept the proposed resolution?
>>
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org <mailto:SLIM@ietf.org>
>> https://www.ietf.org/mailman/listinfo/slim
>
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------095F36FD523241F388652F9E
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p><br>
    </p>
    <br>
    <div class="moz-cite-prefix">Den 2017-10-13 kl. 22:37, skrev Brian
      Rosen:<br>
    </div>
    <blockquote type="cite"
      cite="mid:A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      “requires to” is awkward in my dialect of English.  Perhaps
      “desires to”?</blockquote>
    &lt;GH<font size="-1">&gt; I can accept that too.</font><br>
    <blockquote type="cite"
      cite="mid:A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net">
      <div class=""><br class="">
      </div>
      <div class=""><br class="">
        <div>
          <blockquote type="cite" class="">
            <div class="">On Oct 13, 2017, at 4:26 PM, Bernard Aboba
              &lt;<a href="mailto:bernard.aboba@gmail.com" class=""
                moz-do-not-send="true">bernard.aboba@gmail.com</a>&gt;
              wrote:</div>
            <br class="Apple-interchange-newline">
            <div class="">
              <div dir="ltr" class="">Issue 47 ( <a
                  href="https://trac.ietf.org/trac/slim/ticket/47"
                  class="" moz-do-not-send="true">https://trac.ietf.org/trac/slim/ticket/47</a>
                ) suggests the following change to the text: 
                <div class=""><br class="">
                </div>
                <div class="">Change: </div>
                <div class=""><br class="">
                </div>
                <div class="">"<span style="background-color: rgb(255,
                    255, 221); font-family: Verdana, Arial, 'Bitstream
                    Vera Sans', Helvetica, sans-serif; font-size: 13px;"
                    class="">"Another example would be a user who is
                    able to spea</span><span style="background-color:
                    rgb(255, 255, 221); font-family: Verdana, Arial,
                    'Bitstream Vera Sans', Helvetica, sans-serif;
                    font-size: 13px;" class="">k but is deaf or
                    hard-of-hearing and requires a voice stream plus </span><span
                    style="font-size: 13px; background-color: rgb(255,
                    255, 221); font-family: Verdana, Arial, 'Bitstream
                    Vera Sans', Helvetica, sans-serif;" class="">a text
                    stream."</span>"</div>
                <div class=""><br class="">
                </div>
                <div class="">To: </div>
                <div class=""><br class="">
                </div>
                <div class="">
                  <p style="font-family: Verdana, Arial, 'Bitstream Vera
                    Sans', Helvetica, sans-serif; font-size: 13px;
                    background-color: rgb(255, 255, 221);" class="">"Another
                    example would be a user who is able to speak but is
                    deaf or hard-of-hearing and requires to send spoken
                    language in a voice stream and receive written
                    language in a text stream."</p>
                  <p style="font-family: Verdana, Arial, 'Bitstream Vera
                    Sans', Helvetica, sans-serif; font-size: 13px;
                    background-color: rgb(255, 255, 221);" class=""><br
                      class="">
                  </p>
                  <p style="font-family: Verdana, Arial, 'Bitstream Vera
                    Sans', Helvetica, sans-serif; font-size: 13px;
                    background-color: rgb(255, 255, 221);" class="">Can
                    we accept the proposed resolution?</p>
                </div>
              </div>
              _______________________________________________<br
                class="">
              SLIM mailing list<br class="">
              <a href="mailto:SLIM@ietf.org" class=""
                moz-do-not-send="true">SLIM@ietf.org</a><br class="">
              <a class="moz-txt-link-freetext" href="https://www.ietf.org/mailman/listinfo/slim">https://www.ietf.org/mailman/listinfo/slim</a><br class="">
            </div>
          </blockquote>
        </div>
        <br class="">
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
SLIM mailing list
<a class="moz-txt-link-abbreviated" href="mailto:SLIM@ietf.org">SLIM@ietf.org</a>
<a class="moz-txt-link-freetext" href="https://www.ietf.org/mailman/listinfo/slim">https://www.ietf.org/mailman/listinfo/slim</a>
</pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------095F36FD523241F388652F9E--


From nobody Fri Oct 13 15:21:57 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8D1801321B6 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 15:21:55 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.34
X-Spam-Level: 
X-Spam-Status: No, score=-1.34 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, HTML_OBFUSCATE_05_10=0.26, MANY_SPAN_IN_TEXT=1, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=no autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id DpmFy_KOObPg for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 15:21:52 -0700 (PDT)
Received: from bin-vsp-out-01.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 3233013219F for <slim@ietf.org>; Fri, 13 Oct 2017 15:21:52 -0700 (PDT)
X-Halon-ID: d339c165-b064-11e7-9c60-005056917a89
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-01.atm.binero.net (Halon) with ESMTPSA id d339c165-b064-11e7-9c60-005056917a89; Sat, 14 Oct 2017 00:21:15 +0200 (CEST)
To: Brian Rosen <br@brianrosen.net>
Cc: Bernard Aboba <bernard.aboba@gmail.com>, "slim@ietf.org" <slim@ietf.org>,  Randall Gellens <rg+ietf@randy.pensive.org>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se>
Date: Sat, 14 Oct 2017 00:21:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net>
Content-Type: multipart/alternative; boundary="------------CECC3D3CA34D7329D5CCE966"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Cbb9bcq-WewKLMPKuy215__9wqw>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Oct 2017 22:21:55 -0000

This is a multi-part message in MIME format.
--------------CECC3D3CA34D7329D5CCE966
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Den 2017-10-13 kl. 20:31, skrev Brian Rosen:
> Gunnar
>
> Protocol documents are for engineers to write software/create 
> hardware.  They don’t try to control user behavior.  I think in this 
> case, you are trying to get the document to describe user behavior and 
> not implementation software/hardware.
>
> Although we do sometimes describe how we expect the protocol to be 
> used by people, that is not normative, and we should be careful to not 
> proscribe behavior.
<GH>Our protocol needs to be well defined regardless of whether the 
source and sink of language are automata or humans.
As long as we can each read the specification differently, it is not well defined.
When I ask what the result of the negotiation really means, you usually 
say that it is alternative languages that the users are supposed to 
select from, using one or more in each direction.
I agree that that is a good result.
I think the wording still implies that all negotiated languages should be 
used, and I want to avoid that interpretation.

(There would also be use for a result saying that two or more languages 
are desired together in the same direction, but it has been said many 
times that that is not the intention of the current draft, so that 
requires separate work.)

We are in reasonably good shape now with change 2 implemented. The first 
section of 5.2, which change 1 aims at, may be seen as introductory and 
perhaps does not need to be completely clarifying, if that would require 
too complicated wording. But it must not contradict the intention of the 
protocol.

Gunnar
>
> Brian
>
>> On Oct 13, 2017, at 2:21 PM, Gunnar Hellström 
>> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>>
>> Den 2017-10-13 kl. 16:58, skrev Bernard Aboba:
>>> Gunnar said:
>>>
>>> "to negotiate which human language is selected for possible use in 
>>> each interactive media stream"
>>>
>>> [BA] Given that audio can be muted, video can be turned off, etc. 
>>> aren't media streams negotiated in SDP always for "possible" use?
>> <GH>That may be true, but we are not talking about the media flow in 
>> the streams. We are talking about the use for language. Our draft 
>> must reflect clearly what the language negotiation result really 
>> means. To me,  "is selected for use in each interactive media stream" 
>> sounds as a promise that a negotiated language will actually be used. 
>> That means that if two media streams end up with negotiated languages 
>> in the same direction, then both must be provided together. According 
>> to the discussions in the WG, that is not the desired result. The 
>> desired result should be that the users can select between use of the 
>> negotiated languages and usually use just one in each direction.  We 
>> introduced "selected" some time ago, but it did not have the right 
>> effect.
>>
>> I will try to come up with new wording proposals.
>>
>> /Gunnar
>>
>>>
>>> On Fri, Oct 13, 2017 at 6:32 AM, Gunnar 
>>> Hellström <gunnar.hellstrom@omnitor.se 
>>> <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>>>
>>>     Change 2 is fine and solves part of the problem.
>>>
>>>     But the current wording at my proposed change 1 still tells me
>>>     that if I offer English text and English voice, it means that I
>>>     have selected to use both; and even more strongly, if an answer
>>>     contains English text and English voice, then both will be used
>>>     in the session, exactly as you indicated was the problem with
>>>     the Lang attribute. We need to get the possibility of selecting
>>>     among alternatives clearly into the draft, so that the next
>>>     generation of implementers does not also say that it is too
>>>     vague about what it means.
>>>
>>>     The current wording at change one still says that each
>>>     interactive stream is used.
>>>
>>>     How about: "to negotiate which
>>>     human language is selected for *possible* use in each interactive
>>>     media stream."
>>>
>>>     /Gunnar
>>>
>>>
>>>     Den 2017-10-13 kl. 15:13, skrev Randall Gellens:
>>>>     I think we've addressed the concerns that existed with earlier
>>>>     versions of the draft.
>>>>
>>>>     At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:
>>>>
>>>>>      Den 2017-10-13 kl. 13:51, skrev Randall Gellens:
>>>>>>      At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>>>>>
>>>>>>>     We have dealt with this topic before, but rereading the
>>>>>>>     draft indicates to me that we still need some tuning of the
>>>>>>>     wording so that it is clear that the language indications
>>>>>>>     for the same direction for different media are alternatives
>>>>>>>     with no requirement that they be provided together,
>>>>>>>     so that it is allowed to answer with just one media in each
>>>>>>>     direction having a language indication.
>>>>>>>
>>>>>>>     Suggested wording changes to make this clear:
>>>>>>>
>>>>>>>     ---Change 1 in 5.2, first paragraph----------------
>>>>>>>     ------old text---------
>>>>>>>     This document defines two media-level attributes starting with
>>>>>>>     'hlang' (short for "human interactive language") to
>>>>>>>     negotiate which
>>>>>>>     human language is selected for use in each interactive media
>>>>>>>     stream.
>>>>>>>     ------------new text--------------------
>>>>>>>     This document defines two media-level attributes starting with
>>>>>>>     'hlang' (short for "human interactive language") to
>>>>>>>     negotiate which
>>>>>>>     human language is selected for use in each media stream used
>>>>>>>     for interactive language communication.
>>>>>>>     -------end of change 1-------
>>>>>>
>>>>>>      I don't see how changing "each interactive media stream" to
>>>>>>     "each media stream used for interactive language
>>>>>>     communication" improves anything.  The term "interactive"
>>>>>>     implies human interaction.
>>>>>      <GH>Yes, but human interaction can be showing things in video
>>>>>     without being language communication.
>>>>>      What I am aiming at is to clearly indicate that the language
>>>>>     indications are alternatives to select from. The wording "use
>>>>>     in each interactive media stream" sounds to me as if you MUST
>>>>>     use all the agreed languages. That is the same misreading that
>>>>>     you initially attributed to the Lang SDP attribute. We need
>>>>>     to get away from that interpretation. My wording was intended
>>>>>     to accomplish that, but it might have been too weak. The key
>>>>>     word is "used" that is intended to mean that if a media stream
>>>>>     is selected to be used for language communication then the
>>>>>     agreed language is the one to be used.
>>>>>      So, I prefer my wording, unless you can create something that
>>>>>     makes it even clearer that we are talking about alternatives to
>>>>>     select from.
>>>>>>
>>>>>>>
>>>>>>>     ----Change 2 in 5.2, third paragraph ------
>>>>>>>     ----old text------
>>>>>>>     In an answer, 'hlang-send' is the language the answerer will
>>>>>>>     send if
>>>>>>>     using the media for language (which in most cases is one of the
>>>>>>>     languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>>>>     language the answerer expects to receive in the media (which
>>>>>>>     in most
>>>>>>>     cases is one of the languages in the offer's 'hlang-send').
>>>>>>>     -----new text----
>>>>>>>     In an answer, 'hlang-send' is the language the answerer will
>>>>>>>     send if
>>>>>>>     using the media for language (which in most cases is one of the
>>>>>>>     languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>>>>     language the answerer expects to receive in the media if
>>>>>>>     using the media for language (which in most
>>>>>>>     cases is one of the languages in the offer's 'hlang-send').
>>>>>>>     ----end of change 2-------------------------------
>>>>>>
>>>>>>      I'm OK adding "if using the media for language" to the
>>>>>>     second clause.
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>     /Gunnar
>>>>>>>
>>>>>>>     --
>>>>>>>     -----------------------------------------
>>>>>>>     Gunnar Hellström
>>>>>>>     Omnitor
>>>>>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>>>>>
>>>>>>>     _______________________________________________
>>>>>>>     SLIM mailing list
>>>>>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>>>>     https://www.ietf.org/mailman/listinfo/slim
>>>>>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>>>>
>>>>>>
>>>>>
>>>>>      --
>>>>>      -----------------------------------------
>>>>>      Gunnar Hellström
>>>>>      Omnitor
>>>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>>>      +46 708 204 288
>>>>
>>>>
>>>
>>>     -- 
>>>     -----------------------------------------
>>>     Gunnar Hellström
>>>     Omnitor
>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>     +46 708 204 288
>>>
>>>
>>>     _______________________________________________
>>>     SLIM mailing list
>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>     https://www.ietf.org/mailman/listinfo/slim
>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>
>>>
>>
>> -- 
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>> +46 708 204 288
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org <mailto:SLIM@ietf.org>
>> https://www.ietf.org/mailman/listinfo/slim
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------CECC3D3CA34D7329D5CCE966
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Den 2017-10-13 kl. 20:31, skrev Brian Rosen:<br>
    <blockquote type="cite"
      cite="mid:ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      Gunnar
      <div class=""><br class="">
      </div>
      <div class="">Protocol documents are for engineers to write
        software/create hardware.  They don’t try to control user
        behavior.  I think in this case, you are trying to get the
        document to describe user behavior and not implementation
        software/hardware.</div>
      <div class=""><br class="">
      </div>
      <div class="">Although we do sometimes describe how we expect the
        protocol to be used by people, that is not normative, and we
        should be careful to not proscribe behavior.</div>
    </blockquote>
    &lt;GH&gt;Our protocol needs to be well defined regardless of
    whether the source and sink of language are automata or humans.<br>
    As long as we can each read the specification differently, it is not
    well defined.<br>
    When I ask what the result of the negotiation really means, you
    usually say that it is alternative languages that the users are
    supposed to select from, using one or more in each direction. <br>
    I agree that that is a good result.<br>
    I think the wording still implies that all negotiated languages
    should be used, and I want to avoid that interpretation.<br>
    <br>
    (There would also be use for a result saying that two or more
    languages are desired together in the same direction, but it has
    been said many times that that is not the intention of the current
    draft, so that requires separate work.) <br>
    <br>
    We are in reasonably good shape now with change 2 implemented. The
    first section of 5.2, which change 1 aims at, may be seen as
    introductory and perhaps does not need to be completely clarifying,
    if that would require too complicated wording. But it must not
    contradict the intention of the protocol.<br>
    <br>
    Gunnar<br>
    <blockquote type="cite"
      cite="mid:ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net">
      <div class=""><br class="">
      </div>
      <div class="">Brian</div>
      <div class=""><br class="">
      </div>
      <div class="">
        <div>
          <blockquote type="cite" class="">
            <div class="">On Oct 13, 2017, at 2:21 PM, Gunnar Hellström
              &lt;<a href="mailto:gunnar.hellstrom@omnitor.se" class=""
                moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;
              wrote:</div>
            <br class="Apple-interchange-newline">
            <div class=""><span style="font-family: Helvetica;
                font-size: 12px; font-style: normal; font-variant-caps:
                normal; font-weight: normal; letter-spacing: normal;
                text-align: start; text-indent: 0px; text-transform:
                none; white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255); float: none; display: inline
                !important;" class="">Den 2017-10-13 kl. 16:58, skrev
                Bernard Aboba:</span><br style="font-family: Helvetica;
                font-size: 12px; font-style: normal; font-variant-caps:
                normal; font-weight: normal; letter-spacing: normal;
                text-align: start; text-indent: 0px; text-transform:
                none; white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
              <blockquote type="cite"
cite="mid:CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com"
                style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; orphans:
                auto; text-align: start; text-indent: 0px;
                text-transform: none; white-space: normal; widows: auto;
                word-spacing: 0px; -webkit-text-size-adjust: auto;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
                <div dir="ltr" class="">Gunnar said: 
                  <div class=""><br class="">
                  </div>
                  <div class="">"to negotiate which human language is
                    selected for possible use in each interactive media
                    stream"</div>
                  <div class=""><br class="">
                  </div>
                  <div class="">[BA] Given that audio can be muted,
                    video can be turned off, etc. aren't media streams
                    negotiated in SDP always for "possible" use?</div>
                </div>
              </blockquote>
              <span style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255); float: none; display: inline
                !important;" class="">&lt;GH&gt;That may be true, but we
                are not talking about the media flow in the streams. We
                are talking about the use for language. Our draft must
                reflect clearly what the language negotiation result
                really means. To me,  "is selected for use in each
                interactive media stream" sounds like a promise that a
                negotiated language will actually be used. That means
                that if two media streams end up with negotiated
                languages in the same direction, then both must be
                provided together. According to the discussions in the
                WG, that is not the desired result. The desired result
                should be that the users can select between use of the
                negotiated languages and usually use just one in each
                direction.  We introduced "selected" some time ago, but
                it did not have the right effect.  <span
                  class="Apple-converted-space"> </span></span><br
                style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
              <br style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
              <span style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255); float: none; display: inline
                !important;" class="">I will try to come up with new
                wording proposals.</span><br style="font-family:
                Helvetica; font-size: 12px; font-style: normal;
                font-variant-caps: normal; font-weight: normal;
                letter-spacing: normal; text-align: start; text-indent:
                0px; text-transform: none; white-space: normal;
                word-spacing: 0px; -webkit-text-stroke-width: 0px;
                background-color: rgb(255, 255, 255);" class="">
              <br style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
              <span style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255); float: none; display: inline
                !important;" class="">/Gunnar</span><br
                style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
              <span style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; text-align:
                start; text-indent: 0px; text-transform: none;
                white-space: normal; word-spacing: 0px;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255); float: none; display: inline
                !important;" class=""> </span><br style="font-family:
                Helvetica; font-size: 12px; font-style: normal;
                font-variant-caps: normal; font-weight: normal;
                letter-spacing: normal; text-align: start; text-indent:
                0px; text-transform: none; white-space: normal;
                word-spacing: 0px; -webkit-text-stroke-width: 0px;
                background-color: rgb(255, 255, 255);" class="">
              <blockquote type="cite"
cite="mid:CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com"
                style="font-family: Helvetica; font-size: 12px;
                font-style: normal; font-variant-caps: normal;
                font-weight: normal; letter-spacing: normal; orphans:
                auto; text-align: start; text-indent: 0px;
                text-transform: none; white-space: normal; widows: auto;
                word-spacing: 0px; -webkit-text-size-adjust: auto;
                -webkit-text-stroke-width: 0px; background-color:
                rgb(255, 255, 255);" class="">
                <div class="gmail_extra"><br class="">
                  <div class="gmail_quote">On Fri, Oct 13, 2017 at 6:32
                    AM, Gunnar Hellström<span
                      class="Apple-converted-space"> </span><span
                      dir="ltr" class="">&lt;<a
                        href="mailto:gunnar.hellstrom@omnitor.se"
                        target="_blank" moz-do-not-send="true" class="">gunnar.hellstrom@omnitor.se</a>&gt;</span><span
                      class="Apple-converted-space"> </span>wrote:<br
                      class="">
                    <blockquote class="gmail_quote" style="margin: 0px
                      0px 0px 0.8ex; border-left-width: 1px;
                      border-left-style: solid; border-left-color:
                      rgb(204, 204, 204); padding-left: 1ex;">
                      <div text="#000000" bgcolor="#FFFFFF" class="">
                        <p class="">Change 2 is fine and solves part of
                          the problem.</p>
                        <p class="">But the current wording at my
                          proposed change 1 still tells me that if I
                          offer English text and English voice, it means
                          that I have selected to use both; and even
                          more strongly, if an answer contains English
                          text and English voice, then both will be used
                          in the session, exactly as you indicated was
                          the problem with the Lang attribute. We need
                          to get the possibility of selecting among
                          alternatives clearly into the draft, so that
                          the next generation of implementers does not
                          also say that it is too vague about what it
                          means.</p>
                        <p class="">The current wording at change one
                          still says that each interactive stream is
                          used.<span class="Apple-converted-space"> </span><br
                            class="">
                        </p>
                        <p class="">How about:  "to negotiate which<span
                            class="Apple-converted-space"> </span><br
                            class="">
                              <span class="Apple-converted-space"> </span>human
                          language is selected for<b class=""><span
                              class="Apple-converted-space"> </span>possible<span
                              class="Apple-converted-space"> </span></b>use
                          in each interactive media stream."</p>
                        <span class="HOEnZb"><font class=""
                            color="#888888">
                            <p class="">/Gunnar<br class="">
                            </p>
                          </font></span>
                        <div class="">
                          <div class="h5"><br class="">
                            <div
                              class="m_5473125126441079136moz-cite-prefix">Den
                              2017-10-13 kl. 15:13, skrev Randall
                              Gellens:<br class="">
                            </div>
                            <blockquote type="cite" class="">I think
                              we've addressed the concerns that existed
                              with earlier versions of the draft.<span
                                class="Apple-converted-space"> </span><br
                                class="">
                              <br class="">
                              At 2:57 PM +0200 10/13/17, Gunnar
                              Hellström wrote:<span
                                class="Apple-converted-space"> </span><br
                                class="">
                              <br class="">
                              <blockquote type="cite" class=""> Den
                                2017-10-13 kl. 13:51, skrev Randall
                                Gellens:<span
                                  class="Apple-converted-space"> </span><br
                                  class="">
                                <blockquote type="cite" class=""> At
                                  12:06 AM +0200 7/29/17, Gunnar
                                  Hellström wrote:<span
                                    class="Apple-converted-space"> </span><br
                                    class="">
                                  <br class="">
                                  <blockquote type="cite" class=""> <span
                                      class="Apple-converted-space"> </span>We
                                    have dealt with this topic before,
                                    but rereading the draft indicates to
                                    me that we still need some tuning of
                                    the wording so that it is clear that
                                    the language indications for the
                                    same direction for different media
                                    are alternatives with no
                                     requirement that they be
                                    provided together, so that it is
                                    allowed to answer with just one
                                    media in each direction having
                                     a language indication.<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                    <br class="">
                                     <span class="Apple-converted-space"> </span>Suggested
                                    wording changes to make this clear:<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                    <br class="">
                                     <span class="Apple-converted-space"> </span>---Change
                                    1 in 5.2, first
                                    paragraph----------------<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                     <span class="Apple-converted-space"> </span>------old
                                    text---------<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                     <span class="Apple-converted-space"> </span>This
                                    document defines two media-level
                                    attributes starting with<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                        <span
                                      class="Apple-converted-space"> </span>'hlang'
                                    (short for "human interactive
                                    language") to negotiate which<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                        <span
                                      class="Apple-converted-space"> </span>human
                                    language is selected for use in each
                                    interactive media stream.<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                     <span class="Apple-converted-space"> </span>------------new
                                    text--------------------<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                     <span class="Apple-converted-space"> </span>This
                                    document defines two media-level
                                    attributes starting with<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                        <span
                                      class="Apple-converted-space"> </span>'hlang'
                                    (short for "human interactive
                                    language") to negotiate which<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                        <span
                                      class="Apple-converted-space"> </span>human
                                    language is selected for use in each
                                    media stream used for interactive
                                    language communication.<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                     <span class="Apple-converted-space"> </span>-------end
                                    of change 1-------<span
                                      class="Apple-converted-space"> </span><br
                                      class="">
                                  </blockquote>
                                  <br class="">
                                  I don't see how changing "each interactive media stream" to "each
                                  media stream used for interactive language communication" improves
                                  anything.  The term "interactive" implies human interaction.<br class="">
                                </blockquote>
                                 &lt;GH&gt;Yes, but human interaction can be to show things in video
                                 without being language communication.<br class="">
                                 What I am aiming at is to clearly indicate that the language
                                 indications are alternatives to select from. The wording "use in
                                 each interactive media stream" sounds to me that you MUST use all
                                 the agreed languages. That is the same mistake that you initially
                                 blamed the Lang SDP attribute to mean. We need to get away from
                                 that interpretation. My wording was intended to accomplish that,
                                 but it might have been too weak. The key word is "used" that is
                                 intended to mean that if a media stream is selected to be used for
                                 language communication then the agreed language is the one to be
                                 used.<br class="">
                                 So, I prefer my wording, or if you can create something even more
                                 clear that we are talking about alternatives to select from.<br class="">
                                <blockquote type="cite" class=""><br
                                    class="">
                                  <blockquote type="cite" class=""><br
                                      class="">
                                    ----Change 2 in 5.2, third paragraph ------<br class="">
                                    ----old text------<br class="">
                                    In an answer, 'hlang-send' is the language the answerer will send if<br class="">
                                    using the media for language (which in most cases is one of the<br class="">
                                    languages in the offer's 'hlang-recv'), and 'hlang-recv' is the<br class="">
                                    language the answerer expects to receive in the media (which in most<br class="">
                                    cases is one of the languages in the offer's 'hlang-send').<br class="">
                                    -----new text----<br class="">
                                    In an answer, 'hlang-send' is the language the answerer will send if<br class="">
                                    using the media for language (which in most cases is one of the<br class="">
                                    languages in the offer's 'hlang-recv'), and 'hlang-recv' is the<br class="">
                                    language the answerer expects to receive in the media if<br class="">
                                    using the media for language (which in most<br class="">
                                    cases is one of the languages in the offer's 'hlang-send').<br class="">
                                    ----end of change 2-------------------------------<br class="">
                                  </blockquote>
                                  <br class="">
                                  I'm OK adding "if using the media for language" to the second
                                  clause.<br class="">
                                  <br class="">
                                  <blockquote type="cite" class=""><br
                                      class="">
                                    <br class="">
                                    <br class="">
                                    /Gunnar<br class="">
                                    <br class="">
                                    --<br class="">
                                    -----------------------------------------<br class="">
                                    Gunnar Hellström<br class="">
                                    Omnitor<br class="">
                                    <a class="m_5473125126441079136moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se" target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><br class="">
                                    <br class="">
                                    _______________________________________________<br class="">
                                    SLIM mailing list<br class="">
                                    <a class="m_5473125126441079136moz-txt-link-abbreviated" href="mailto:SLIM@ietf.org" target="_blank" moz-do-not-send="true">SLIM@ietf.org</a><br class="">
                                    <a class="m_5473125126441079136moz-txt-link-freetext" href="https://www.ietf.org/mailman/listinfo/slim" target="_blank" moz-do-not-send="true">https://www.ietf.org/mailman/listinfo/slim</a><br class="">
                                  </blockquote>
                                  <br class="">
                                  <br class="">
                                </blockquote>
                                <br class="">
                              </blockquote>
                              <br class="">
                              <br class="">
                            </blockquote>
                            <br class="">
                          </div>
                        </div>
                      </div>
                    </blockquote>
                  </div>
                  <br class="">
                </div>
              </blockquote>
            </div>
          </blockquote>
        </div>
        <br class="">
      </div>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------CECC3D3CA34D7329D5CCE966--


From nobody Fri Oct 13 19:13:26 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D4315132944 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:13:24 -0700 (PDT)
X-Quarantine-ID: <K5M1sMS-dKk7>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level: 
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id K5M1sMS-dKk7 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:13:22 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id B7C261326ED for <slim@ietf.org>; Fri, 13 Oct 2017 19:13:22 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 19:17:27 -0700
Mime-Version: 1.0
Message-Id: <p06240602d60722ddf848@[172.20.60.54]>
In-Reply-To: <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 19:13:16 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, Bernard Aboba <bernard.aboba@gmail.com>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: "slim@ietf.org" <slim@ietf.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/3DigQRd-BZXMFu8Ifa4ShI2fFAo>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 02:13:25 -0000

At 8:21 PM +0200 10/13/17, Gunnar Hellström wrote:

>  On 2017-10-13 at 16:58, Bernard Aboba wrote:
>
>>  Gunnar said:
>>
>>  "to negotiate which human language is selected
>> for possible use in each interactive media
>> stream"
>>
>>  [BA] Given that audio can be muted, video can
>> be turned off, etc. aren't media streams
>> negotiated in SDP always for "possible" use?
>>
>  <GH>That may be true, but we are not talking
> about the media flow in the streams. We are
> talking about the use for language. Our draft
> must reflect clearly what the language
> negotiation result really means. To me,  "is
> selected for use in each interactive media
> stream" sounds as a promise that a negotiated
> language will actually be used.

I don't think it reads that way at all.

Further, the draft says, in 5.2:

    Note that media and language negotiation might result in more media
    streams being accepted than are needed by the users (e.g., if more
    preferred and less preferred combinations of media and language are
    all accepted).  This is not a problem.

This explicitly states that not all negotiated media will necessarily be used.
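
The offer/answer behavior being debated is easier to see in a concrete SDP sketch. This is purely an illustration: the port numbers, payload types, and m-line details are invented, and the attribute lines assume the draft's 'a=hlang-send:'/'a=hlang-recv:' syntax.

```
--- offer: English by voice or by real-time text, as alternatives ---
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=text 49172 RTP/AVP 98
a=hlang-send:en
a=hlang-recv:en

--- answer: both streams accepted, language indicated on only one ---
m=audio 49230 RTP/AVP 0
m=text 49232 RTP/AVP 98
a=hlang-send:en
a=hlang-recv:en
```

Read as alternatives, such an answer commits the parties to English on
whichever stream is actually selected for language communication, without
promising that every accepted stream will carry language.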

>  That means that if two media streams end up
> with negotiated languages in the same
> direction, then both must be provided together.
> According to the discussions in the WG, that is
> not the desired result. The desired result
> should be that the users can select between use
> of the negotiated languages and usually use
> just one in each direction.  We introduced
> "selected" some time ago, but it did not have
> the right effect.
>
>  I will try to come up with new wording proposals.

I think the wording is fine as it is.

>
>>
>>  On Fri, Oct 13, 2017 at 6:32 AM, Gunnar
>> Hellström
>> <<mailto:gunnar.hellstrom@omnitor.se>gunnar.hellstrom@omnitor.se>
>> wrote:
>>
>>  Change 2 is fine and solves part of the problem.
>>
>>  But the current wording at my proposed change
>> 1 still tells me that if I offer English text
>> and English voice, it means that I have
>> selected to use both, and even stronger if an
>> answer contains English text and English
>> voice, then both will be used in the session,
>> exactly as you indicated was the problem with
>> the Lang attribute. We need to get the
>> possibility to select among alternatives
>> clearly into the draft so that not next
>> generation implementers also say that it is
>> too vague about what it means.
>>
>>  The current wording at change one still says
>> that each interactive stream is used.
>>
>>  How about:  "to negotiate which
>>       human language is selected for possible
>> use in each interactive media stream."
>>
>>  /Gunnar
>>
>>
>>  On 2017-10-13 at 15:13, Randall Gellens wrote:
>>
>>>  I think we've addressed the concerns that
>>> existed with earlier versions of the draft.
>>>
>>>  At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:
>>>
>>>>   On 2017-10-13 at 13:51, Randall Gellens wrote:
>>>>
>>>>>   At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>>>>
>>>>>>    We have dealt with this topic before,
>>>>>> but rereading the draft indicates to me
>>>>>> that we still need some tuning of the
>>>>>> wording so that it is clear that the
>>>>>> language indications for the same
>>>>>> direction for different media are
>>>>>> alternatives with no requirements that
>>>>>> they need to be provided together, so that
>>>>>> it is allowed to answer with just one
>>>>>> media in each direction having language
>>>>>> indication.
>>>>>>
>>>>>>    Suggested wording changes to make this clear:
>>>>>>
>>>>>>    ---Change 1 in 5.2, first paragraph----------------
>>>>>>    ------old text---------
>>>>>>    This document defines two media-level attributes starting with
>>>>>>       'hlang' (short for "human interactive language") to negotiate which
>>>>>>       human language is selected for use in
>>>>>> each interactive media stream.
>>>>>>    ------------new text--------------------
>>>>>>    This document defines two media-level attributes starting with
>>>>>>       'hlang' (short for "human interactive language") to negotiate which
>>>>>>       human language is selected for use in
>>>>>> each media stream used for interactive
>>>>>> language communication.
>>>>>>    -------end of change 1-------
>>>>>>
>>>>
>>>>   I don't see how changing "each interactive
>>>> media stream" to "each media stream used for
>>>> interactive language communication" improves
>>>> anything.  The term "interactive" implies
>>>> human interaction.
>>>>
>>>   <GH>Yes, but human interaction can be to
>>> show things in video without being language
>>> communication.
>>>   What I am aiming at is to clearly indicate
>>> that the language indications are
>>> alternatives to select from. The wording "use
>>> in each interactive media stream" sounds to
>>> me that you MUST use all the agreed
>>> languages. That is the same mistake that you
>>> initially blamed the Lang SDP attribute to
>>> mean. We need to get away from that
>>> interpretation. My wording was intended to
>>> accomplish that, but it might have been too
>>> weak. The key word is "used" that is intended
>>> to mean that if a media stream is selected to
>>> be used for language communication then the
>>> agreed language is the one to be used.
>>>   So, I prefer my wording, or if you can
>>> create something even more clear that we are
>>> talking about alternatives to select from.
>>>
>>>>
>>>>>
>>>>>    ----Change 2 in 5.2, third paragraph ------
>>>>>    ----old text------
>>>>>      In an answer, 'hlang-send' is the language the answerer will send if
>>>>>       using the media for language (which in most cases is one of the
>>>>>       languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>>       language the answerer expects to receive in the media (which in most
>>>>>       cases is one of the languages in the offer's 'hlang-send').
>>>>>    -----new text----
>>>>>      In an answer, 'hlang-send' is the language the answerer will send if
>>>>>       using the media for language (which in most cases is one of the
>>>>>       languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>>       language the answerer expects to receive in the media if
>>>>>       using the media for language (which in most
>>>>>       cases is one of the languages in the offer's 'hlang-send').
>>>>>    ----end of change 2-------------------------------
>>>>>
>>>
>>>   I'm OK adding "if using the media for language" to the second clause.
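
[Editorial illustration, not part of the original thread.] To make the
change-2 semantics concrete, consider a hypothetical offer whose audio
stream carries the draft's attributes (the language tags and port are
invented here):

```
m=audio 49170 RTP/AVP 0
a=hlang-send:es en
a=hlang-recv:es en
```

An answer of 'a=hlang-send:en' plus 'a=hlang-recv:en' on the matching
media line then means: if this stream is used for language at all,
English is what the answerer will send and expects to receive, which is
exactly what the clause added in change 2 spells out for 'hlang-recv'.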
>>>
>>>>
>>>>
>>>>    /Gunnar
>>>>
>>>>    --
>>>>    -----------------------------------------
>>>>    Gunnar Hellström
>>>>    Omnitor
>>>>    gunnar.hellstrom@omnitor.se
>>>>
>>>>    _______________________________________________
>>>>    SLIM mailing list
>>>>    SLIM@ietf.org
>>>>    https://www.ietf.org/mailman/listinfo/slim
>>>>
>>>


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
And still I persist in wondering if folly must always
be man's nemesis.                    --Edgar Pangborn


From nobody Fri Oct 13 19:20:27 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 23B4F132944 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:20:27 -0700 (PDT)
X-Quarantine-ID: <13ML2vpeKb1l>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 13ML2vpeKb1l for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:20:26 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 505721321DF for <slim@ietf.org>; Fri, 13 Oct 2017 19:20:26 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 19:24:31 -0700
Mime-Version: 1.0
Message-Id: <p06240603d607242e4735@[172.20.60.54]>
In-Reply-To: <2b75cef5-296e-359d-433f-b113bce7a540@omnitor.se>
References: <5833ea9b-c7fe-1cfa-2015-21e42b5c3d55@omnitor.se> <p0624061fd606562afe15@[99.111.97.136]> <2b75cef5-296e-359d-433f-b113bce7a540@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 19:20:22 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/-NHaXP4yF0VkiuPjFwjVWTKJhNA>
Subject: Re: [Slim] Simultaneity requirement in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 02:20:27 -0000

Gunnar,

Your proposal is to change:

    and requires a voice stream plus
    a text stream

to:

    and requires a voice stream to send spoken language plus a text
    stream to receive written language

I don't have a problem with this change, but I don't see how it helps.
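
[Editorial illustration, not from the thread.] The scenario under
discussion, sending spoken language on a voice stream while receiving
written language on a text stream, would be expressed with the draft's
attributes roughly as follows (ports and payload types are invented):

```
m=audio 49170 RTP/AVP 0
a=hlang-send:en
m=text 49172 RTP/AVP 98
a=hlang-recv:en
```

Only 'hlang-send' appears on the audio stream and only 'hlang-recv' on
the text stream, which is what makes the directionality explicit.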

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
The last good thing written in C was Franz Schubert's Ninth Symphony.


From nobody Fri Oct 13 19:22:29 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 24567132944 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:22:28 -0700 (PDT)
X-Quarantine-ID: <QTBuq0Q_4GfE>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level: 
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id QTBuq0Q_4GfE for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:22:27 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 1E9F21241F3 for <slim@ietf.org>; Fri, 13 Oct 2017 19:22:27 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 19:26:32 -0700
Mime-Version: 1.0
Message-Id: <p06240604d607253083b1@[172.20.60.54]>
In-Reply-To: <A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net>
References: <CAOW+2dsVjhvT7tWrvPgz1Rp14v4Td+u8Pe_UTG4WqmtoeFmhiA@mail.gmail.com> <A1444985-1CB3-443B-BB90-9E0A6B83EEE5@brianrosen.net>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 19:22:21 -0700
To: Brian Rosen <br@brianrosen.net>, Bernard Aboba <bernard.aboba@gmail.com>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: slim@ietf.org
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Vs0VzbcjFDXKy3thhwhNfsWRJwE>
Subject: Re: [Slim] Issue 47
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 02:22:28 -0000

At 4:37 PM -0400 10/13/17, Brian Rosen wrote:

>  "requires to" is awkward in my dialect of English.  Perhaps "desires to"?

In separate email, I suggested:

    and requires a voice stream to send spoken language plus a text
    stream to receive written language

As I said in that email, I have no objection to this change, but I 
also don't see how it helps.

--Randall

>
>
>>  On Oct 13, 2017, at 4:26 PM, Bernard Aboba
>> <bernard.aboba@gmail.com> wrote:
>>
>>  Issue 47 (https://trac.ietf.org/trac/slim/ticket/47)
>>  suggests the following change to the text:
>>
>>  Change:
>>
>>  "Another example would be a user who is able to speak but is deaf 
>> or hard-of-hearing and requires a voice stream plus a text 
>> stream."
>>
>>  To:
>>
>>  "Another example would be a user who is able to speak but is deaf 
>> or hard-of-hearing and requires to send spoken language in a voice 
>> stream and receive written language in a text stream."
>>
>>
>>  Can we Accept the proposed resolution?
>>
>>  _______________________________________________
>>  SLIM mailing list
>>  SLIM@ietf.org
>>  https://www.ietf.org/mailman/listinfo/slim
>>
>
>
>  _______________________________________________
>  SLIM mailing list
>  SLIM@ietf.org
>  https://www.ietf.org/mailman/listinfo/slim


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Conservatives are not necessarily stupid, but most stupid people
are conservatives.
    --John Stuart Mill (1806-1873)


From nobody Fri Oct 13 19:26:07 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 48D44132944 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:26:06 -0700 (PDT)
X-Quarantine-ID: <AS0PBtz60Z0T>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level: 
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id AS0PBtz60Z0T for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 19:26:04 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id B67E91241F3 for <slim@ietf.org>; Fri, 13 Oct 2017 19:26:04 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 19:30:09 -0700
Mime-Version: 1.0
Message-Id: <p06240606d607257c9584@[172.20.60.54]>
In-Reply-To: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 19:25:59 -0700
To: Bernard Aboba <bernard.aboba@gmail.com>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Iz8pYlymg8MDN35Nk80BqFED-M4>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 02:26:06 -0000

At 1:46 PM -0700 10/13/17, Bernard Aboba wrote:

>  Issue 43 (https://trac.ietf.org/trac/slim/ticket/43)
> results from a review comment that said that a simple way is 
> required to decide if a language tag is a sign language or a 
> written or spoken language.
>
>  Some applications scan the IANA language registry at startup for 
> the word "Sign" in the tag description:
> 
> https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
>
>
>  Currently, there are 319 language subtags that include "Sign 
> Language" in their description. 
>
>  Given the current layout of the language subtag registry, it is not 
> clear to me that there is an easier way to determine which tags 
> represent sign languages.  Nor is it within the SLIM WG charter to 
> develop a modification to the language subtag registry to address 
> this concern. 
>
>  So I am wondering whether we might resolve this with a Note 
> outlining the problem but not offering a solution. 

I think the wording in -14 addresses the comment by accepting Dale's 
suggestion that, rather than needing to know which tags are 
non-signed, it is the use of the exact same tag in both an audio and 
a video stream that is the indicator.  That both tightens up the 
technical issue and simplifies it greatly.

The only other instance where we might add such a note would be in 5.4:

5.4.  Undefined Combinations

    With the exception of the case mentioned in Section 5.2 (an audio
    stream in parallel with a video stream with the exact same (spoken)
    language tag), the behavior when specifying a non-signed language tag
    for a video media stream, or a signed language tag for an audio or
    text media stream, is not defined.

We could add your suggested note to 5.4.

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Leisure for men of business, and business for men of leisure would
cure many complaints.                         --Esther L.S. Thrale


From nobody Fri Oct 13 20:56:37 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 32CAB132944 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 20:56:36 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 7UM5NYObUcsX for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 20:56:35 -0700 (PDT)
Received: from mail-pf0-x236.google.com (mail-pf0-x236.google.com [IPv6:2607:f8b0:400e:c00::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id CA1451321C9 for <slim@ietf.org>; Fri, 13 Oct 2017 20:56:34 -0700 (PDT)
Received: by mail-pf0-x236.google.com with SMTP id b85so11794121pfj.13 for <slim@ietf.org>; Fri, 13 Oct 2017 20:56:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=E/cCPKgBcWn++czUzonad7AFAMUdX94pkH/5ArzP6MM=; b=owByAYBqD39idea2CpdSyMSfB/qxYC/GaKICvWT4/wMSCmV0UFJWpkHn+peJTOM4yo AJxaAuygvoFTJ4ZvKAMgfgay52ae1nmVXSnmHc7sZSzBS0uTvGBEESc3ucDrIXwq+oYP rHK5FM9/WfrDv+rz2qxsPxM0B8MrG191cTWwIJLue/7Xw/CcaEK7QQZyRM28paOcq0WK jwtIKw4YKqAGL9quDTdVe1xQE8AoYDT1MuPljUVxHfeux1Az3CyPXEmon3iwnZWGF0bi 1OS3fwH7AjT8Xx8gBQeAcmDAJFk1L0Z5qV+CqOHceKLWniuAALyLiXGdlU7iL/icn4tJ e00Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=E/cCPKgBcWn++czUzonad7AFAMUdX94pkH/5ArzP6MM=; b=nyOW38hmJs9ePyDDCUle9ZEZeUf6+dcfaAJ2t//74+G3znHd7YWBChRW1sYa7/+cGy b07QQ3LA1+mNWrArrI72ICJoFJYwQ5JUK1obCbgamoDJv/lxf2YEzN2Fd4IOrvb+bf10 tj9kfcbZH2xlL7JRtRmGHbyBahWJgCzioIuDHBoSoRwEEBjEczex0K4CV1J57ed+uzGx ACbsckYuL4clCRjn5/gkLgjuRl8xTe6Vk7LN57Menx6jb33M64JiwIGDydDvpCbwoQDJ SBJE/dODyWOqOLQAiEzQI99eXxxLXHxTm/xqfWcAN/WyPtxhW+8pvNzzatP/bAx2p4xT th/g==
X-Gm-Message-State: AMCzsaVZbNuwgbR8Guqp179Cm94qhbjCj2t1njIzAp3iVYT7rPSaWfn7 KrcRzHrc3syDDCWWqVQkIHB0OpKl
X-Google-Smtp-Source: AOwi7QAdqIO6sMOXgN2ClCCNgwQ1bT1C1txhZN4XUW8WRPCRVWlbXcHOrq+76I8RDVyXZ6n5ku3lxg==
X-Received: by 10.98.211.220 with SMTP id z89mr3045348pfk.99.1507953394186; Fri, 13 Oct 2017 20:56:34 -0700 (PDT)
Received: from [192.168.1.101] (c-98-225-39-241.hsd1.wa.comcast.net. [98.225.39.241]) by smtp.gmail.com with ESMTPSA id f3sm4571428pfd.82.2017.10.13.20.56.32 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 13 Oct 2017 20:56:32 -0700 (PDT)
Content-Type: text/plain; charset=us-ascii
Mime-Version: 1.0 (1.0)
From: Bernard Aboba <bernard.aboba@gmail.com>
X-Mailer: iPad Mail (15A421)
In-Reply-To: <p06240602d60722ddf848@[172.20.60.54]>
Date: Fri, 13 Oct 2017 20:56:28 -0700
Cc: =?utf-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <38B0F09F-5EB4-4208-AA8E-F8691738CE41@gmail.com>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <p06240602d60722ddf848@[172.20.60.54]>
To: Randall Gellens <rg+ietf@randy.pensive.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/uXWcI42K2Gdu50Ay0TvlSAPQb_k>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 03:56:36 -0000

On Oct 13, 2017, at 19:13, Randall Gellens <rg+ietf@randy.pensive.org> wrote:
>
> This explicitly states that not all negotiated media will necessarily be used.

[BA] The text (and O/A SDP docs such as RFC 3264) does seem clear about this.

From nobody Fri Oct 13 22:23:36 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C8093126D0C for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 22:23:34 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 7H2UqjjWOKW4 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 22:23:33 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id F04541320D9 for <slim@ietf.org>; Fri, 13 Oct 2017 22:23:32 -0700 (PDT)
X-Halon-ID: c4a2db66-b09f-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id c4a2db66-b09f-11e7-99c0-005056917f90; Sat, 14 Oct 2017 07:23:10 +0200 (CEST)
To: slim@ietf.org
References: <5833ea9b-c7fe-1cfa-2015-21e42b5c3d55@omnitor.se> <p0624061fd606562afe15@[99.111.97.136]> <2b75cef5-296e-359d-433f-b113bce7a540@omnitor.se> <p06240603d607242e4735@[172.20.60.54]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <d8590f98-e11c-d255-19aa-f93d5157ec8b@omnitor.se>
Date: Sat, 14 Oct 2017 07:23:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p06240603d607242e4735@[172.20.60.54]>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Wa80K5D-DWudGxU_5mmUBYTzwAo>
Subject: Re: [Slim] Simultaneity requirement in draft-ietf-slim-negotiating-human-language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 05:23:35 -0000

Den 2017-10-14 kl. 04:20, skrev Randall Gellens:
> Gunnar,
>
> Your proposal is to change:
>
>  and requires a voice stream plus
>  a text stream
>
> to:
>
>  and requires a voice stream to send spoken language plus a text
>  stream to receive written language
>
> I don't have a problem with this change, but I don't see how it helps.
>
<GH>The original wording can be read as if both streams would be 
used for receiving language. The "plus" then tells me that both are 
provided simultaneously. Many users with low hearing prefer that. But we 
have no means to indicate that they are desired together.
This is in the introduction, so we can take another example; the 
change addresses a case where the languages are used in different 
directions, which the spec does cover.

Gunnar

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Fri Oct 13 22:37:46 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0B9E6132F6C for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 22:37:45 -0700 (PDT)
X-Quarantine-ID: <RBwrvQoMDxe0>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RBwrvQoMDxe0 for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 22:37:43 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id D1220132F30 for <slim@ietf.org>; Fri, 13 Oct 2017 22:37:43 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 22:41:49 -0700
Mime-Version: 1.0
Message-Id: <p06240600d60752b4c0fd@[172.20.60.54]>
In-Reply-To: <2e32f32c-7631-47ad-499b-f97beb8e8d66@omnitor.se>
References: <150789769465.23923.10838316479776071981@ietfa.amsl.com> <2e32f32c-7631-47ad-499b-f97beb8e8d66@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 22:37:39 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/xDp9TMSFPFv4dXrtQkJ8iZTBozE>
Subject: Re: [Slim] I-D Action: draft-ietf-slim-negotiating-human-language-14.txt
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 05:37:45 -0000

At 3:15 PM +0200 10/13/17, Gunnar Hellström wrote:

>  Thanks Randall, good to see progress.
>
>  Apart from the comments I made to the separate
> issues, I also noticed that you without reason
> deleted this sentence from the introduction.
>
>  "Note that separate work may introduce additional information
>  regarding language/modality preferences among media."
>
>  The separate work is going on, and it is not related to the asterisk anymore.

Hi Gunnar,

The text was added, along with an informative
reference to your draft, to explain that the
asterisk was an interim measure and that other
work was extending it.  With the deletion of the
asterisk, there isn't a need to mention your
documents in this one.

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
The militarization of the rhetoric supporting the war on drugs rots
the public debate with a corrosive silence.  The political weather
turns gray and pinched.  People who become accustomed to the
arbitrary intrusions of the police also learn to speak more softly
in the presence of political authority, to bow and smile and fill
out the printed forms with cowed obsequiousness of musicians
playing waltzes at a Mafia wedding.
     --Lewis Lapham, "A Political Opiate" (Harper's, December 1989)


From nobody Fri Oct 13 22:42:12 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DD2DA13306F for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 22:42:10 -0700 (PDT)
X-Quarantine-ID: <qxcXY-rTNz8V>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level: 
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qxcXY-rTNz8V for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 22:42:09 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 7F23B126D0C for <slim@ietf.org>; Fri, 13 Oct 2017 22:42:09 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 22:46:14 -0700
Mime-Version: 1.0
Message-Id: <p06240601d60753a3f914@[172.20.60.54]>
In-Reply-To: <CAOW+2dvYsCXY-eSNBm5U5gWhWzc9Q2a_bx+PkA3bG74eBXsQqg@mail.gmail.com>
References: <CAOW+2dvYsCXY-eSNBm5U5gWhWzc9Q2a_bx+PkA3bG74eBXsQqg@mail.gmail.com>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 22:42:04 -0700
To: Bernard Aboba <bernard.aboba@gmail.com>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/VpIcDPjU5EWlWcPYMp2vXOkqAK4>
Subject: Re: [Slim] Issue 41: Allow sign languages in the text stream for text notations of sign language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 05:42:11 -0000

At 1:22 PM -0700 10/13/17, Bernard Aboba wrote:

>  Issue 41 
> (see: https://trac.ietf.org/trac/slim/ticket/41
> ) relates to the potential future use of sign language within a 
> text stream, such as Formal Signwriting, described here:
> 
> https://tools.ietf.org/html/draft-slevinski-formal-signwriting
>
>  In the Issue, the following change is suggested:
>
>  Therefore, I suggest this minimal change:
>  ---------------------------old text 1 in 
> 5.4-------------------------------------
>  the behavior when specifying a spoken/written language tag for a 
> video media stream, or a signed language tag for an audio or text 
> media stream, is not defined.
>  --------------------------new text---------------------------------
>
>  the behavior when specifying a spoken/written language tag for a 
> video media stream, or a signed language tag for an audio media 
> stream, is not defined.
>  --------------------------end of change 1---------------------------
>
>  Since draft-slevinski has not been widely implemented, it probably 
> cannot be assumed that negotiation of a signed language tag for a 
> text media stream implies use of this (or any other) sign language 
> textual encoding mechanism.  So it would not be correct to imply 
> that use of a signed language tag for an text media stream has a 
> well defined meaning.
>
>  One way to resolve this would be to keep the existing text but add 
> a sentence, such as:
>
>  "Note that mechanisms for encoding signed language in a text media 
> stream have been proposed
>  [draft-slevinski] but are not yet well developed enough for 
> incorporation within the negotiation mechanism described in this 
> document."
>
>  Would such a resolution make sense?

How about simply changing "is not defined" to "is not defined in this 
document"?

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Don't worry over what other people are thinking about you.  They're too
busy worrying over what you are thinking about them.


From nobody Fri Oct 13 22:46:58 2017
Return-Path: <internet-drafts@ietf.org>
X-Original-To: slim@ietf.org
Delivered-To: slim@ietfa.amsl.com
Received: from ietfa.amsl.com (localhost [IPv6:::1]) by ietfa.amsl.com (Postfix) with ESMTP id 29A93126D0C; Fri, 13 Oct 2017 22:46:52 -0700 (PDT)
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
From: internet-drafts@ietf.org
To: <i-d-announce@ietf.org>
Cc: slim@ietf.org
X-Test-IDTracker: no
X-IETF-IDTracker: 6.63.1
Auto-Submitted: auto-generated
Precedence: bulk
Message-ID: <150796001212.5174.4211904958812522841@ietfa.amsl.com>
Date: Fri, 13 Oct 2017 22:46:52 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/XHQHEVUrXQKzhWRapfHib3NtsPU>
Subject: [Slim] I-D Action: draft-ietf-slim-negotiating-human-language-15.txt
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 05:46:52 -0000

A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Selection of Language for Internet Media WG of the IETF.

        Title           : Negotiating Human Language in Real-Time Communications
        Author          : Randall Gellens
	Filename        : draft-ietf-slim-negotiating-human-language-15.txt
	Pages           : 16
	Date            : 2017-10-13

Abstract:
   Users have various human (natural) language needs, abilities, and
   preferences regarding spoken, written, and signed languages.  This
   document adds new SDP media-level attributes so that when
   establishing interactive communication sessions ("calls"), it is
   possible to negotiate (communicate and match) the caller's language
   and media needs with the capabilities of the called party.  This is
   especially important with emergency calls, where a call can be
   handled by a call taker capable of communicating with the user, or a
   translator or relay operator can be bridged into the call during
   setup, but this applies to non-emergency calls as well (as an
   example, when calling a company call center).

   This document describes the need and a solution using new SDP media
   attributes.


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-slim-negotiating-human-language/

There are also htmlized versions available at:
https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-15
https://datatracker.ietf.org/doc/html/draft-ietf-slim-negotiating-human-language-15

A diff from the previous version is available at:
https://www.ietf.org/rfcdiff?url2=draft-ietf-slim-negotiating-human-language-15


Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/


From nobody Fri Oct 13 23:00:27 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EBE2A13306F for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 23:00:26 -0700 (PDT)
X-Quarantine-ID: <vEpeuN_Yy0nl>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level: 
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vEpeuN_Yy0nl for <slim@ietfa.amsl.com>; Fri, 13 Oct 2017 23:00:25 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id CF0AE126DFE for <slim@ietf.org>; Fri, 13 Oct 2017 23:00:25 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Fri, 13 Oct 2017 23:04:31 -0700
Mime-Version: 1.0
Message-Id: <p06240604d60758591393@[172.20.60.54]>
In-Reply-To: <CAOW+2dvguu3FkzTYZGDid5+aJrB8hX70Zv9aVTcvQGGtsGme5Q@mail.gmail.com>
References: <CAOW+2dvguu3FkzTYZGDid5+aJrB8hX70Zv9aVTcvQGGtsGme5Q@mail.gmail.com>
X-Mailer: Eudora for Mac OS X
Date: Fri, 13 Oct 2017 23:00:20 -0700
To: Bernard Aboba <bernard.aboba@gmail.com>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/PFgGrK881dMDfCcIifh03mWInE0>
Subject: Re: [Slim] Status of draft-ietf-slim-negotiating-human-language-14
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 06:00:27 -0000

At 2:05 PM -0700 10/13/17, Bernard Aboba wrote:

>  Looking at TRAC, it appears that 3 issues remain open against the 
> document (see: https://trac.ietf.org/trac/slim/report/1):
>
>  #47  Unsupported simultaneity requirement
>       (defect, new, Jul 31, 2017)
>       https://trac.ietf.org/trac/slim/ticket/47
>
>  #43  How to know the modality of a language indication?
>       (defect, new, Jul 31, 2017)
>       https://trac.ietf.org/trac/slim/ticket/43
>
>  #41  Allow sign languages in the text stream for text notations of 
>       sign language (enhancement, new, Jun 29, 2017)
>       https://trac.ietf.org/trac/slim/ticket/41
>
>  (All tickets: component negotiating-human-language, owner 
>  draft-ietf-slim-negotiating-human-language@ietf.org.)

I believe that -15 addresses these.

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Our ability to reach unity in diversity will be the beauty and
test of our civilization.                     --Mahatma Gandhi


From nobody Sat Oct 14 01:21:45 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 98A50132CE7 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 01:21:43 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0DG7FAHJLEvi for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 01:21:41 -0700 (PDT)
Received: from bin-vsp-out-01.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id E80B4128D0D for <slim@ietf.org>; Sat, 14 Oct 2017 01:21:40 -0700 (PDT)
X-Halon-ID: 9dca845d-b0b8-11e7-9c60-005056917a89
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-01.atm.binero.net (Halon) with ESMTPSA id 9dca845d-b0b8-11e7-9c60-005056917a89; Sat, 14 Oct 2017 10:21:02 +0200 (CEST)
To: slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@[172.20.60.54]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se>
Date: Sat, 14 Oct 2017 10:21:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p06240606d607257c9584@[172.20.60.54]>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/VBVdp45y0XISPL2iB_T1QwF0eOg>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 08:21:44 -0000

On 2017-10-14 at 04:25, Randall Gellens wrote:
> At 1:46 PM -0700 10/13/17, Bernard Aboba wrote:
>
>> Issue 43 ( 
>> <https://trac.ietf.org/trac/slim/ticket/43>https://trac.ietf.org/trac/slim/ticket/43 
>> ) results from a review comment that said that a simple way is 
>> required to decide if a language tag is a sign language or a written 
>> or spoken language.
>>
>> Some applications scan the IANA language registry at startup for the 
>> word "Sign" in the tag description:
>>
>> <https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry>https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry 
>>
>>
>>
>> Currently, there are 319 language subtags that include "Sign 
>> Language" in their description.
>> Given the current layout of the language subtag registry, it is not 
>> clear to me that there is an easier way to determine which tags 
>> represent sign languages. Nor is it within the SLIM WG charter to 
>> develop a modification to the language subtag registry to address 
>> this concern.
>> So I am wondering whether we might resolve this with a Note 
>> outlining the problem but not offering a solution. 
>
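The registry scan Bernard describes above can be sketched as follows. This is a minimal illustration, not anything from the draft: the record-jar format is the one defined in RFC 5646 (BCP 47), but the sample records are abbreviated, and real registry entries can contain continuation lines and repeated Description fields, which this sketch ignores.

```python
# Minimal sketch: find sign-language subtags in the IANA language subtag
# registry (record-jar format per RFC 5646, section 3.1.1).  The real
# registry lives at:
# https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
# Here a few abbreviated sample records stand in for the real file.

SAMPLE_REGISTRY = """\
Type: language
Subtag: ase
Description: American Sign Language
Added: 2009-07-29
%%
Type: language
Subtag: en
Description: English
Added: 2005-10-16
%%
Type: extlang
Subtag: bfi
Description: British Sign Language
Added: 2009-07-29
Prefix: sgn
"""

def sign_language_subtags(registry_text):
    """Return subtags whose Description mentions 'Sign Language'."""
    result = []
    for record in registry_text.split("%%"):
        fields = {}
        for line in record.strip().splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                # Keep only the first occurrence of each field name.
                fields.setdefault(key.strip(), value.strip())
        if "Sign Language" in fields.get("Description", ""):
            result.append(fields.get("Subtag"))
    return result

print(sign_language_subtags(SAMPLE_REGISTRY))  # ['ase', 'bfi']
```

Against the full registry file, the same scan yields the roughly three hundred sign-language subtags Bernard mentions.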
> I think the wording in -14 addresses the comment by accepting Dale's 
> suggestion that, rather than know non-signed tags, it's the use of the 
> exact same tag in both an audio and a video stream that is the 
> indicator. That both tightens up the technical issue and simplifies 
> it greatly.
>
> The only other instance where we might add such a note would be in 5.4:
>
> 5.4. Undefined Combinations
>
>  With the exception of the case mentioned in Section 5.2 (an audio
>  stream in parallel with a video stream with the exact same (spoken)
>  language tag), the behavior when specifying a non-signed language tag
>  for a video media stream, or a signed language tag for an audio or
>  text media stream, is not defined.
>
> We could add your suggested note to 5.4.
>
<GH>We can replace 5.4 with a more explicit section guiding applications 
on how to make the deduction simple. So, instead of a note, I suggest 
that we replace 5.4 with:

5.4 Relations between media and modality
There is no easy way to deduce the intended modality from a language 
tag. Other specifications may introduce specific notations for the 
modality used in a medium or in relation to a language tag. 
Applications not implementing such specific notations may use the 
following simple deductions:
- A language tag in audio media is assumed to indicate spoken modality.
- A language tag in text media is assumed to indicate written modality.
- A language tag in video media is assumed to indicate visual sign 
language modality, except in the case mentioned in Section 5.2 (a view 
of a speaking person), characterized by the exact same language tag 
also appearing in an audio media description.
- A language tag in media where the modality is obvious, or is 
specified in the media subtype definition, is assumed to indicate that 
modality.
- A language tag in any other media description has undefined modality.
---------------------------------------------------------------------------------------------------------------------
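The deduction rules above can be sketched as a small helper function. This is a hypothetical illustration, not proposed draft text: the names `media` (the SDP top-level media type carrying the tag) and `audio_tags` (the set of language tags appearing in audio media descriptions of the same session) are invented here.

```python
# Sketch of the proposed modality deductions (hypothetical helper, not
# part of the draft).  `media` is the SDP top-level media type of the
# stream carrying the language tag; `audio_tags` is the set of language
# tags appearing in audio media descriptions of the same session.

def deduce_modality(media, tag, audio_tags=frozenset()):
    if media == "audio":
        return "spoken"
    if media == "text":
        return "written"
    if media == "video":
        # Section 5.2 exception: the exact same tag also appearing in an
        # audio description indicates a view of a speaking person, not
        # sign language.
        if tag in audio_tags:
            return "spoken (view of speaker)"
        return "signed"
    # message, application, etc.: undefined unless the subtype says more.
    return "undefined"

print(deduce_modality("video", "ase"))         # signed
print(deduce_modality("video", "en", {"en"}))  # spoken (view of speaker)
print(deduce_modality("message", "en"))        # undefined
```

The last branch corresponds to the exclusions listed in the note below it: message media and most application media end up with undefined modality.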

My note: by this we currently consciously exclude the following uses, 
and I am OK with that:
- text in mp4 video
- audio in mp4 video (or is that only allowed in application/mp4?)
- any modality in message media
- most application media; however, some may have explicit descriptions 
in their subtype specifications.

The exception for a view of a speaker stands out as very odd now: it 
requires comparing language tags used in different media descriptions, 
and it requires simultaneous use of language in two different media, 
which is otherwise out of scope for this draft. It was introduced while 
I still hoped that we could introduce other dependencies between 
language use in different media. It is not the most urgent 
media/language combination to specify, and it is also handled in 
draft-hellstrom-slim-modality-grouping. So, assuming that we can make 
progress on that draft, we could clean up the current draft by deleting 
the exception. I suggest that we delete the exception.

/Gunnar



-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Sat Oct 14 01:58:46 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 09F401320B5 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 01:58:45 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.339
X-Spam-Level: 
X-Spam-Status: No, score=-1.339 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, HTML_OBFUSCATE_05_10=0.26, MANY_SPAN_IN_TEXT=1, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=no autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id lJ2LCwT21Gm7 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 01:58:42 -0700 (PDT)
Received: from bin-vsp-out-01.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 967C3126B7E for <slim@ietf.org>; Sat, 14 Oct 2017 01:58:41 -0700 (PDT)
X-Halon-ID: c82c5cba-b0bd-11e7-9c60-005056917a89
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-01.atm.binero.net (Halon) with ESMTPSA id c82c5cba-b0bd-11e7-9c60-005056917a89; Sat, 14 Oct 2017 10:58:01 +0200 (CEST)
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
To: Brian Rosen <br@brianrosen.net>
Cc: Bernard Aboba <bernard.aboba@gmail.com>, "slim@ietf.org" <slim@ietf.org>,  Randall Gellens <rg+ietf@randy.pensive.org>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se>
Message-ID: <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se>
Date: Sat, 14 Oct 2017 10:58:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se>
Content-Type: multipart/alternative; boundary="------------8849E2DC05D0FA661721EF08"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/V_iI1A4hKL_2nWS457ptgHHcAIk>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 08:58:45 -0000

This is a multi-part message in MIME format.
--------------8849E2DC05D0FA661721EF08
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

In order not to create complicated sentences while still having the 
wording match our intentions, I want to change the proposed resolution 
for Issue #46, Change 1, to:

---Change 1 in 5.2, first paragraph----------------
------old text---------
This document defines two media-level attributes starting with
'hlang' (short for "human interactive language") to negotiate which
human language is selected for use in each interactive media stream.
------------new text--------------------
This document defines two media-level attributes starting with
'hlang' (short for "human interactive language") to negotiate which
human language is selected for *potential* use in each media stream.
-------end of change 1-------

That matches the "if" in paragraph 3, and it is also valid for both 
offers and answers, while paragraph 3 covers only the answer.
Please accept it; it is important for a proper understanding of our 
intentions.
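As an illustration of Change 1 in context, an offer using these attributes might look like the following hypothetical SDP fragment ('hlang-send'/'hlang-recv' are the attributes defined in the draft; the ports, payload types, and choice of English are invented for the example):

```
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=text 49172 RTP/AVP 98
a=rtpmap:98 t140/1000
a=hlang-send:en
a=hlang-recv:en
```

Under the "potential use" wording, an answerer accepting both streams has not promised that both will be used for language; the two indications are alternatives to select from.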

/Gunnar

On 2017-10-14 at 00:21, Gunnar Hellström wrote:
> On 2017-10-13 at 20:31, Brian Rosen wrote:
>> Gunnar
>>
>> Protocol documents are for engineers to write software/create 
>> hardware.  They don’t try to control user behavior.  I think in this 
>> case, you are trying to get the document to describe user behavior 
>> and not implementation software/hardware.
>>
>> Although we do sometimes describe how we expect the protocol to be 
>> used by people, that is not normative, and we should be careful to 
>> not proscribe behavior.
> <GH>Our protocol needs to be well defined regardless of whether the 
> source and sink of language are automata or humans.
> As long as we can read the specification differently, it is not well 
> defined.
> When I ask what the result of the negotiation really means, you 
> usually say that it is alternative languages that the users are 
> supposed to select from, using one or more in each direction.
> I agree that that is a good result.
> I think the wording still means that all negotiated languages should 
> be used, and I want to avoid that interpretation.
>
> (There is also use for a result saying that a couple of languages are 
> desired together in the same direction, but it has been said many 
> times that that is not the intention of the current draft, so that 
> requires separate work. )
>
> We are reasonably good now with change 2 implemented. The first 
> paragraph of 5.2, which change 1 aims at, may be seen as introductory 
> and maybe does not need to be completely clarifying, if that would 
> require too-complicated wording. But it must not contradict the 
> intention of the protocol.
>
> Gunnar
>>
>> Brian
>>
>>> On Oct 13, 2017, at 2:21 PM, Gunnar Hellström 
>>> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> 
>>> wrote:
>>>
>>> On 2017-10-13 at 16:58, Bernard Aboba wrote:
>>>> Gunnar said:
>>>>
>>>> "to negotiate which human language is selected for possible use in 
>>>> each interactive media stream"
>>>>
>>>> [BA] Given that audio can be muted, video can be turned off, etc. 
>>>> aren't media streams negotiated in SDP always for "possible" use?
>>> <GH>That may be true, but we are not talking about the media flow in 
>>> the streams. We are talking about the use for language. Our draft 
>>> must reflect clearly what the language negotiation result really 
>>> means. To me,  "is selected for use in each interactive media 
>>> stream" sounds as a promise that a negotiated language will actually 
>>> be used. That means that if two media streams end up with negotiated 
>>> languages in the same direction, then both must be provided 
>>> together. According to the discussions in the WG, that is not the 
>>> desired result. The desired result should be that the users can 
>>> select between use of the negotiated languages and usually use just 
>>> one in each direction.  We introduced "selected" some time ago, but 
>>> it did not have the right effect.
>>>
>>> I will try to come up with new wording proposals.
>>>
>>> /Gunnar
>>>
>>>>
>>>> On Fri, Oct 13, 2017 at 6:32 AM, Gunnar 
>>>> Hellström<gunnar.hellstrom@omnitor.se 
>>>> <mailto:gunnar.hellstrom@omnitor.se>>wrote:
>>>>
>>>>     Change 2 is fine and solves part of the problem.
>>>>
>>>>     But the current wording at my proposed change 1 still tells me
>>>>     that if I offer English text and English voice, it means that I
>>>>     have selected to use both, and even stronger if an answer
>>>>     contains English text and English voice, then both will be used
>>>>     in the session, exactly as you indicated was the problem with
>>>>     the Lang attribute. We need to get the possibility to select
>>>>     among alternatives clearly into the draft so that not next
>>>>     generation implementers also say that it is too vague about
>>>>     what it means.
>>>>
>>>>     The current wording at change one still says that each
>>>>     interactive stream is used.
>>>>
>>>>     How about:  "to negotiate which
>>>>     human language is selected for*possible*use in each interactive
>>>>     media stream."
>>>>
>>>>     /Gunnar
>>>>
>>>>
>>>>     On 2017-10-13 at 15:13, Randall Gellens wrote:
>>>>>     I think we've addressed the concerns that existed with earlier
>>>>>     versions of the draft.
>>>>>
>>>>>     At 2:57 PM +0200 10/13/17, Gunnar Hellström wrote:
>>>>>
>>>>>>      On 2017-10-13 at 13:51, Randall Gellens wrote:
>>>>>>>      At 12:06 AM +0200 7/29/17, Gunnar Hellström wrote:
>>>>>>>
>>>>>>>>     We have dealt with this topic before, but rereading the
>>>>>>>>     draft indicates to me that we still need some tuning of the
>>>>>>>>     wording so that it is clear that the language indications
>>>>>>>>     for the same direction for different media are alternatives
>>>>>>>>     with no requirements that they need to be provided
>>>>>>>>     together, so that it is allowed to answer with just one
>>>>>>>>     media in each direction having language indication.
>>>>>>>>
>>>>>>>>     Suggested wording changes to make this clear:
>>>>>>>>
>>>>>>>>     ---Change 1 in 5.2, first paragraph----------------
>>>>>>>>     ------old text---------
>>>>>>>>     This document defines two media-level attributes starting with
>>>>>>>>     'hlang' (short for "human interactive language") to
>>>>>>>>     negotiate which
>>>>>>>>     human language is selected for use in each interactive
>>>>>>>>     media stream.
>>>>>>>>     ------------new text--------------------
>>>>>>>>     This document defines two media-level attributes starting with
>>>>>>>>     'hlang' (short for "human interactive language") to
>>>>>>>>     negotiate which
>>>>>>>>     human language is selected for use in each media stream
>>>>>>>>     used for interactive language communication.
>>>>>>>>     -------end of change 1-------
>>>>>>>
>>>>>>>      I don't see how changing "each interactive media stream" to
>>>>>>>     "each media stream used for interactive language
>>>>>>>     communication" improves anything.  The term "interactive"
>>>>>>>     implies human interaction.
>>>>>>      <GH>Yes, but human interaction can be to show things in
>>>>>>     video without being language communication.
>>>>>>      What I am aiming at is to clearly indicate that the language
>>>>>>     indications are alternatives to select from. The wording "use
>>>>>>     in each interactive media stream" sounds to me as if you MUST
>>>>>>     use all the agreed languages. That is the same misinterpretation
>>>>>>     that you initially attributed to the Lang SDP attribute. We need
>>>>>>     to get away from that interpretation. My wording was intended
>>>>>>     to accomplish that, but it might have been too weak. The key
>>>>>>     word is "used" that is intended to mean that if a media
>>>>>>     stream is selected to be used for language communication then
>>>>>>     the agreed language is the one to be used.
>>>>>>      So, I prefer my wording, or if you can create something even
>>>>>>     more clear that we are talking about alternatives to select from.
>>>>>>>
>>>>>>>>
>>>>>>>>     ----Change 2 in 5.2, third paragraph ------
>>>>>>>>     ----old text------
>>>>>>>>     In an answer, 'hlang-send' is the language the answerer
>>>>>>>>     will send if
>>>>>>>>     using the media for language (which in most cases is one of the
>>>>>>>>     languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>>>>>     language the answerer expects to receive in the media
>>>>>>>>     (which in most
>>>>>>>>     cases is one of the languages in the offer's 'hlang-send').
>>>>>>>>     -----new text----
>>>>>>>>     In an answer, 'hlang-send' is the language the answerer
>>>>>>>>     will send if
>>>>>>>>     using the media for language (which in most cases is one of the
>>>>>>>>     languages in the offer's 'hlang-recv'), and 'hlang-recv' is the
>>>>>>>>     language the answerer expects to receive in the media if
>>>>>>>>     using the media for language (which in most
>>>>>>>>     cases is one of the languages in the offer's 'hlang-send').
>>>>>>>>     ----end of change 2-------------------------------
>>>>>>>
>>>>>>>      I'm OK adding "if using the media for language" to the
>>>>>>>     second clause.
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>     /Gunnar
>>>>>>>>
>>>>>>>>     --
>>>>>>>>     -----------------------------------------
>>>>>>>>     Gunnar Hellström
>>>>>>>>     Omnitor
>>>>>>>>     gunnar.hellstrom@omnitor.se
>>>>>>>>     <mailto:gunnar.hellstrom@omnitor.se>
>>>>>>>>
>>>>>>>>     _______________________________________________
>>>>>>>>     SLIM mailing list
>>>>>>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>>>>>     https://www.ietf.org/mailman/listinfo/slim
>>>>>>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>      --
>>>>>>      -----------------------------------------
>>>>>>      Gunnar Hellström
>>>>>>      Omnitor
>>>>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>>>>      +46 708 204 288
>>>>>
>>>>>
>>>>
>>>>     -- 
>>>>     -----------------------------------------
>>>>     Gunnar Hellström
>>>>     Omnitor
>>>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>>>     +46 708 204 288
>>>>
>>>>
>>>>     _______________________________________________
>>>>     SLIM mailing list
>>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>     https://www.ietf.org/mailman/listinfo/slim
>>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>>
>>>>
>>>
>>> -- 
>>> -----------------------------------------
>>> Gunnar Hellström
>>> Omnitor
>>> gunnar.hellstrom@omnitor.se
>>> +46 708 204 288
>>> _______________________________________________
>>> SLIM mailing list
>>> SLIM@ietf.org <mailto:SLIM@ietf.org>
>>> https://www.ietf.org/mailman/listinfo/slim
>>
>
> -- 
> -----------------------------------------
> Gunnar Hellström
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------8849E2DC05D0FA661721EF08
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>In order to not create complicated sentences but still having the
      wording match our intentions, I want to change the proposed
      resolution for Issue # 46 Change 1 to:</p>
    <p><span class="Apple-converted-space"></span>---Change 1 in 5.2,
      first paragraph----------------<span class="Apple-converted-space"> </span><br
        class="">
       <span class="Apple-converted-space"> </span>------old
      text---------<span class="Apple-converted-space"> </span><br
        class="">
       <span class="Apple-converted-space"> </span>This document defines
      two media-level attributes starting with<span
        class="Apple-converted-space"> </span><br class="">
          <span class="Apple-converted-space"> </span>'hlang' (short for
      "human interactive language") to negotiate which<span
        class="Apple-converted-space"> </span><br class="">
          <span class="Apple-converted-space"> </span>human language is
      selected for use in each interactive media stream.<span
        class="Apple-converted-space"> </span><br class="">
       <span class="Apple-converted-space"> </span>------------new
      text--------------------<span class="Apple-converted-space"> </span><br
        class="">
       <span class="Apple-converted-space"> </span>This document defines
      two media-level attributes starting with<span
        class="Apple-converted-space"> </span><br class="">
          <span class="Apple-converted-space"> </span>'hlang' (short for
      "human interactive language") to negotiate which<span
        class="Apple-converted-space"> </span><br class="">
          <span class="Apple-converted-space"> </span>human language is
      selected for <b>potential</b> use in each media stream.<span
        class="Apple-converted-space"> </span><br class="">
       <span class="Apple-converted-space"> </span>-------end of change
      1-------<span class="Apple-converted-space"> <br>
      </span></p>
    That matches the "if" in paragraph 3, and it is also valid for both
    the offers and answers, while paragraph 3 is only for the answer.<br>
    Please accept it, it is of importance for proper understanding of
    our intentions.<br>
    <br>
    /Gunnar<br>
    <br>
    <div class="moz-cite-prefix">On 2017-10-14 at 00:21, Gunnar
      Hellström wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      On 2017-10-13 at 20:31, Brian Rosen wrote:<br>
      <blockquote type="cite"
        cite="mid:ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net">
        <meta http-equiv="Content-Type" content="text/html;
          charset=utf-8">
        Gunnar
        <div class=""><br class="">
        </div>
        <div class="">Protocol documents are for engineers writing
          software and creating hardware.  They don’t try to control user
          behavior.  I think in this case you are trying to get the
          document to describe user behavior rather than implementation
          in software/hardware.</div>
        <div class=""><br class="">
        </div>
        <div class="">Although we do sometimes describe how we expect
          the protocol to be used by people, that is not normative, and
          we should be careful not to prescribe behavior.</div>
      </blockquote>
      &lt;GH&gt;Our protocol needs to be well defined regardless of whether
      the source and sink of language are automata or humans.<br>
      As long as the specification can be read in different ways, it is not
      well defined.<br>
      When I ask what the result of the negotiation really means, you
      usually say that it is a set of alternative languages that the users
      are supposed to select from, using one or more in each direction.<br>
      I agree that that is a good result.<br>
      I think the current wording still means that all negotiated languages
      should be used, and I want to avoid that interpretation.<br>
      <br>
      (There is also a use for a result saying that a couple of languages
      are desired together in the same direction, but it has been said
      many times that that is not the intention of the current draft, so
      that requires separate work.)<br>
      <br>
      We are in reasonably good shape now with change 2 implemented. The
      first paragraph of 5.2, which change 1 aims at, may be seen as
      introductory and perhaps does not need to be completely clarifying if
      that would require too complicated wording. But it must not contradict
      the intention of the protocol.<br>
      <br>
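      <p>A sketch of the desired negotiation result (example values only):
      if an answer confirms English in the same direction on both streams,</p>
      <pre>
m=audio 49172 RTP/AVP 0
a=hlang-send:en
m=text 45024 RTP/AVP 103
a=hlang-send:en
      </pre>
      <p>the intention is that the answerer has English available in both
      media as alternatives to select from, not that it is obliged to send
      language in both.</p>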
      Gunnar<br>
      <blockquote type="cite"
        cite="mid:ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net">
        <div class=""><br class="">
        </div>
        <div class="">Brian</div>
        <div class=""><br class="">
        </div>
        <div class="">
          <div>
            <blockquote type="cite" class="">
              <div class="">On Oct 13, 2017, at 2:21 PM, Gunnar
                Hellström &lt;<a
                  href="mailto:gunnar.hellstrom@omnitor.se" class=""
                  moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;
                wrote:</div>
              <br class="Apple-interchange-newline">
              <div class=""><span class="">On 2017-10-13 at 16:58, Bernard Aboba wrote:</span><br class="">
                <blockquote type="cite"
cite="mid:CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com"
                  class="">
                  <div dir="ltr" class="">Gunnar said: 
                    <div class=""><br class="">
                    </div>
                    <div class="">"to negotiate which human language is
                      selected for possible use in each interactive
                      media stream"</div>
                    <div class=""><br class="">
                    </div>
                    <div class="">[BA] Given that audio can be muted,
                      video can be turned off, etc. aren't media streams
                      negotiated in SDP always for "possible" use?</div>
                  </div>
                </blockquote>
                <span class="">&lt;GH&gt;That may be true, but we are not
                  talking about the media flow in the streams; we are
                  talking about their use for language. Our draft must
                  state clearly what the language negotiation result really
                  means. To me, "is selected for use in each interactive
                  media stream" sounds like a promise that a negotiated
                  language will actually be used. That would mean that if
                  two media streams end up with negotiated languages in the
                  same direction, both must be provided together. According
                  to the discussions in the WG, that is not the desired
                  result. The desired result is that the users can select
                  among the negotiated languages and usually use just one
                  in each direction.  We introduced "selected" some time
                  ago, but it did not have the right effect.</span><br class="">
                <br class="">
                <span class="">I will try to come up with new wording
                  proposals.</span><br class="">
                <br class="">
                <span class="">/Gunnar</span><br class="">
                <br class="">
                <blockquote type="cite"
cite="mid:CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com"
                  class="">
                  <div class="gmail_extra"><br class="">
                    <div class="gmail_quote">On Fri, Oct 13, 2017 at
                      6:32 AM, Gunnar Hellström <span dir="ltr" class="">&lt;<a
                          href="mailto:gunnar.hellstrom@omnitor.se"
                          target="_blank" moz-do-not-send="true"
                          class="">gunnar.hellstrom@omnitor.se</a>&gt;</span> wrote:<br class="">
                      <blockquote class="gmail_quote" style="margin: 0px
                        0px 0px 0.8ex; border-left-width: 1px;
                        border-left-style: solid; border-left-color:
                        rgb(204, 204, 204); padding-left: 1ex;">
                        <div text="#000000" bgcolor="#FFFFFF" class="">
                          <p class="">Change 2 is fine and solves part
                            of the problem.</p>
                          <p class="">But the current wording at my
                            proposed change 1 still tells me that if I
                            offer English text and English voice, I have
                            selected to use both; and, even stronger, if an
                            answer contains English text and English voice,
                            then both will be used in the session, exactly
                            as you indicated was the problem with the
                            'lang' attribute. We need to get the
                            possibility of selecting among alternatives
                            clearly into the draft, so that the next
                            generation of implementers does not also say
                            that it is too vague about what it means.</p>
                          <p class="">The current wording at change 1
                            still says that each interactive stream is
                            used.<br class="">
                          </p>
                          <p class="">How about: "to negotiate which<br class="">
                            human language is selected for <b class="">possible</b> use
                            in each interactive media stream."</p>
                          <span class="HOEnZb"><font class=""
                              color="#888888">
                              <p class="">/Gunnar<br class="">
                              </p>
                            </font></span>
                          <div class="">
                            <div class="h5"><br class="">
                              <div
                                class="m_5473125126441079136moz-cite-prefix">On
                                2017-10-13 at 15:13, Randall Gellens wrote:<br class="">
                              </div>
                              <blockquote type="cite" class="">I think
                                we've addressed the concerns that
                                existed with earlier versions of the
                                draft.<br class="">
                                <br class="">
                                At 2:57 PM +0200 10/13/17, Gunnar
                                Hellström wrote:<br class="">
                                <br class="">
                                <blockquote type="cite" class="">On
                                  2017-10-13 at 13:51, Randall Gellens
                                  wrote:<br class="">
                                  <blockquote type="cite" class="">At
                                    12:06 AM +0200 7/29/17, Gunnar
                                    Hellström wrote:<br class="">
                                    <br class="">
                                    <blockquote type="cite" class="">We
                                      have dealt with this topic before,
                                      but rereading the draft indicates
                                      to me that we still need some
                                      tuning of the wording, so that it
                                      is clear that the language
                                      indications for the same direction
                                      in different media are
                                      alternatives, with no requirement
                                      that they be provided together,
                                      and so that it is allowed to
                                      answer with just one medium in
                                      each direction carrying a language
                                      indication.<br class="">
                                      <br class="">
                                      Suggested wording changes to make
                                      this clear:<br class="">
                                      <br class="">
                                      ---Change 1 in 5.2, first
                                      paragraph----------------<br class="">
                                      ------old text---------<br class="">
                                      This document defines two media-level
                                      attributes starting with<br class="">
                                      'hlang' (short for "human interactive
                                      language") to negotiate which<br class="">
                                      human language is selected for use in
                                      each interactive media stream.<br class="">
                                      ------------new
                                      text--------------------<br class="">
                                      This document defines two media-level
                                      attributes starting with<br class="">
                                      'hlang' (short for "human interactive
                                      language") to negotiate which<br class="">
                                      human language is selected for use in
                                      each media stream used for
                                      interactive language
                                      communication.<br class="">
                                      -------end of change 1-------<br class="">
                                    </blockquote>
                                    <br class="">
                                    I don't see how changing "each
                                    interactive media stream" to "each
                                    media stream used for interactive
                                    language communication" improves
                                    anything.  The term "interactive"
                                    implies human interaction.<br class="">
                                  </blockquote>
                                  &lt;GH&gt;Yes, but human interaction
                                  can be showing things in video without
                                  being language communication.<br class="">
                                  What I am aiming at is to clearly
                                  indicate that the language indications
                                  are alternatives to select from. The
                                  wording "use in each interactive media
                                  stream" sounds to me as if you MUST use
                                  all the agreed languages. That is the
                                  same mistake you initially said the
                                  'lang' SDP attribute made. We need to
                                  get away from that interpretation. My
                                  wording was intended to accomplish
                                  that, but it may have been too weak.
                                  The key word is "used", which is
                                  intended to mean that if a media stream
                                  is selected to be used for language
                                  communication, then the agreed language
                                  is the one to be used.<br class="">
                                  So, I prefer my wording, unless you can
                                  create something that makes it even
                                  clearer that we are talking about
                                  alternatives to select from.<br class="">
                                  <blockquote type="cite" class=""><br
                                      class="">
                                    <blockquote type="cite" class=""><br
                                        class="">
                                      ----Change 2 in 5.2, third
                                      paragraph ------<br class="">
                                      ----old text------<br class="">
                                      In an answer, 'hlang-send' is the
                                      language the answerer will send if<br class="">
                                      using the media for language (which in
                                      most cases is one of the<br class="">
                                      languages in the offer's 'hlang-recv'),
                                      and 'hlang-recv' is the<br class="">
                                      language the answerer expects to
                                      receive in the media (which in most<br class="">
                                      cases is one of the languages in the
                                      offer's 'hlang-send').<br class="">
                                      -----new text----<br class="">
                                      In an answer, 'hlang-send' is the
                                      language the answerer will send if<br class="">
                                      using the media for language (which in
                                      most cases is one of the<br class="">
                                      languages in the offer's 'hlang-recv'),
                                      and 'hlang-recv' is the<br class="">
                                      language the answerer expects to
                                      receive in the media if<br class="">
                                      using the media for language (which in
                                      most<br class="">
                                      cases is one of the languages in the
                                      offer's 'hlang-send').<br class="">
                                      ----end of change
                                      2-------------------------------<br class="">
                                    </blockquote>
                                    <br class="">
                                    I'm OK adding "if using the media
                                    for language" to the second clause.<br class="">
                                    <br class="">
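                                    <p>With that addition, a hypothetical
                                    answer (example values only) reads the
                                    same way in both directions:</p>
                                    <pre>
m=audio 49172 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
                                    </pre>
                                    <p>'hlang-send' is the language the
                                    answerer will send if using the audio
                                    for language, and 'hlang-recv' is the
                                    language it expects to receive if the
                                    audio is used for language.</p>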
                                    <blockquote type="cite" class=""><br class="">
                                      <br class="">
                                      /Gunnar<br class="">
                                      <br class="">
                                      --<br class="">
                                      -----------------------------------------<br class="">
                                      Gunnar Hellström<br class="">
                                      Omnitor<br class="">
                                      <a
class="m_5473125126441079136moz-txt-link-abbreviated"
                                        href="mailto:gunnar.hellstrom@omnitor.se"
                                        target="_blank"
                                        moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><br class="">
                                      <br class="">
                                      _______________________________________________<br class="">
                                      SLIM mailing list<br class="">
                                      <a
class="m_5473125126441079136moz-txt-link-abbreviated"
                                        href="mailto:SLIM@ietf.org"
                                        target="_blank"
                                        moz-do-not-send="true">SLIM@ietf.org</a><br class="">
                                       <span
                                        class="Apple-converted-space"> </span><a
class="m_5473125126441079136moz-txt-link-freetext"
                                        href="https://www.ietf.org/mailman/listinfo/slim"
                                        target="_blank"
                                        moz-do-not-send="true">https://www.ietf.org/mailman/<wbr
                                          class="">listinfo/slim</a><span
                                        class="Apple-converted-space"> </span><br
                                        class="">
                                    </blockquote>
                                    <br class="">
                                    <br class="">
                                  </blockquote>
                                  <br class="">
                                   --<span class="Apple-converted-space"> </span><br
                                    class="">
                                   -----------------------------<wbr
                                    class="">------------<span
                                    class="Apple-converted-space"> </span><br
                                    class="">
                                   Gunnar Hellström<span
                                    class="Apple-converted-space"> </span><br
                                    class="">
                                   Omnitor<span
                                    class="Apple-converted-space"> </span><br
                                    class="">
                                   <a
                                    class="m_5473125126441079136moz-txt-link-abbreviated"
href="mailto:gunnar.hellstrom@omnitor.se" target="_blank"
                                    moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><span
                                    class="Apple-converted-space"> </span><br
                                    class="">
                                   +46 708 204 288<span
                                    class="Apple-converted-space"> </span><br
                                    class="">
                                </blockquote>
                                <br class="">
                                <br class="">
                              </blockquote>
                              <br class="">
                              <pre class="m_5473125126441079136moz-signature" cols="72">-- 
------------------------------<wbr class="">-----------
Gunnar Hellström
Omnitor
<a class="m_5473125126441079136moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se" target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
                            </div>
                          </div>
                        </div>
                        <br class="">
                        _______________________________________________<br class="">
                        SLIM mailing list<br class="">
                        <a href="mailto:SLIM@ietf.org" class="">SLIM@ietf.org</a><br class="">
                        <a href="https://www.ietf.org/mailman/listinfo/slim" rel="noreferrer" target="_blank" class="">https://www.ietf.org/mailman/listinfo/slim</a><br class="">
                        <br class="">
                      </blockquote>
                    </div>
                    <br class="">
                  </div>
                </blockquote>
                <br class="">
                <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
                _______________________________________________<br class="">
                SLIM mailing list<br class="">
                <a href="mailto:SLIM@ietf.org" class="">SLIM@ietf.org</a><br class="">
                <a href="https://www.ietf.org/mailman/listinfo/slim" class="">https://www.ietf.org/mailman/listinfo/slim</a><br class="">
              </div>
            </blockquote>
          </div>
          <br class="">
        </div>
      </blockquote>
      <br>
      <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------8849E2DC05D0FA661721EF08--


From nobody Sat Oct 14 02:09:50 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3AC15132CE7 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 02:09:49 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ChJllMbV0_Dp for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 02:09:47 -0700 (PDT)
Received: from bin-vsp-out-03.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 465DF132143 for <slim@ietf.org>; Sat, 14 Oct 2017 02:09:47 -0700 (PDT)
X-Halon-ID: 6b32e14d-b0bf-11e7-83a8-0050569116f7
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-03.atm.binero.net (Halon) with ESMTPSA id 6b32e14d-b0bf-11e7-83a8-0050569116f7; Sat, 14 Oct 2017 11:09:44 +0200 (CEST)
To: Randall Gellens <rg+ietf@randy.pensive.org>, Bernard Aboba <bernard.aboba@gmail.com>, slim@ietf.org
References: <CAOW+2dvYsCXY-eSNBm5U5gWhWzc9Q2a_bx+PkA3bG74eBXsQqg@mail.gmail.com> <p06240601d60753a3f914@[172.20.60.54]>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <5fe48710-0b54-4bb2-c16f-a097862f4423@omnitor.se>
Date: Sat, 14 Oct 2017 11:09:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <p06240601d60753a3f914@[172.20.60.54]>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/aJJ-mdn4Qf-k0EJge97MR1AvW-k>
Subject: Re: [Slim] Issue 41: Allow sign languages in the text stream for text notations of sign language
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 09:09:49 -0000

On 2017-10-14 at 07:42, Randall Gellens wrote:
> At 1:22 PM -0700 10/13/17, Bernard Aboba wrote:
>
>> Issue 41 (see: https://trac.ietf.org/trac/slim/ticket/41) relates to 
>> the potential future use of sign language within a text stream, such 
>> as Formal Signwriting, described here:
>>
>> https://tools.ietf.org/html/draft-slevinski-formal-signwriting
>>
>>
>> In the Issue, the following change is suggested:
>>
>> Therefore, I suggest this minimal change:
>> ---------------------------old text 1 in 
>> 5.4-------------------------------------
>> the behavior when specifying a spoken/written language tag for a 
>> video media stream, or a signed language tag for an audio or text 
>> media stream, is not defined.
>> --------------------------new text---------------------------------
>>
>> the behavior when specifying a spoken/written language tag for a 
>> video media stream, or a signed language tag for an audio media 
>> stream, is not defined.
>> --------------------------end of change 1---------------------------
>>
>> Since draft-slevinski has not been widely implemented, it probably 
>> cannot be assumed that negotiation of a signed language tag for a 
>> text media stream implies use of this (or any other) sign language 
>> textual encoding mechanism. So it would not be correct to imply that 
>> use of a signed language tag for a text media stream has a 
>> well-defined meaning.
>>
>> One way to resolve this would be to keep the existing text but add a 
>> sentence, such as:
>>
>> "Note that mechanisms for encoding signed language in a text media 
>> stream have been proposed
>> [draft-slevinski] but are not yet well developed enough for 
>> incorporation within the negotiation mechanism described in this 
>> document."
>>
>> Would such a resolution make sense?
>
> How about simply changing "is not defined" to "is not defined in this 
> document"?
<GH>Yes, "is not defined in this document", as already introduced in 
draft -15, is good and can be accepted. However, see another proposal I 
just made for issue #43, which replaces the whole of section 5.4 with 
real guidance for implementations. That also solves this issue #41, 
since it allows new specifications to add usage.

/Gunnar



-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Sat Oct 14 04:56:13 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EC5CC13306B for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 04:56:11 -0700 (PDT)
X-Quarantine-ID: <kjG3hSaU1QCh>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level: 
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id kjG3hSaU1QCh for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 04:56:10 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id E367F1201F8 for <slim@ietf.org>; Sat, 14 Oct 2017 04:56:09 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Sat, 14 Oct 2017 05:00:14 -0700
Mime-Version: 1.0
Message-Id: <p06240607d60785dabdff@[172.20.60.54]>
In-Reply-To: <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@[172.20.60.54]> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Sat, 14 Oct 2017 04:56:02 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, slim@ietf.org
From: Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/RPi2QOYbZqr7oHILmBOWqsBPwY8>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 11:56:12 -0000

At 10:21 AM +0200 10/14/17, Gunnar Hellström wrote:

>  On 2017-10-14 at 04:25, Randall Gellens wrote:
>>  At 1:46 PM -0700 10/13/17, Bernard Aboba wrote:
>>
>>>   Issue 43 (https://trac.ietf.org/trac/slim/ticket/43) results from
>>> a review comment that said that a simple way is required to decide
>>> if a language tag is a sign language or a written or spoken
>>> language.
>>>
>>>   Some applications scan the IANA language registry at startup for
>>> the word "Sign" in the tag description:
>>>
>>> https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
>>>
>>>   Currently, there are 319 language subtags that include "Sign
>>> Language" in their description.
>>>   Given the current layout of the language subtag registry, it is
>>> not clear to me that there is an easier way to determine which tags
>>> represent sign languages.  Nor is it within the SLIM WG charter to
>>> develop a modification to the language subtag registry to address
>>> this concern.
>>>   So I am wondering whether we might resolve this with a Note
>>> outlining the problem but not offering a solution.
>>
>>  I think the wording in -14 addresses the comment by accepting
>> Dale's suggestion that, rather than knowing the non-signed tags, it
>> is the use of the exact same tag in both an audio and a video stream
>> that is the indicator.  That both tightens up the technical issue
>> and simplifies it greatly.
>>
>>  The only other instance where we might add such a note would be in 5.4:
>>
>>  5.4.  Undefined Combinations
>>
>>     With the exception of the case mentioned in Section 5.2 (an audio
>>     stream in parallel with a video stream with the exact same (spoken)
>>     language tag), the behavior when specifying a non-signed language tag
>>     for a video media stream, or a signed language tag for an audio or
>>     text media stream, is not defined.
>>
>>  We could add your suggested note to 5.4.
>>
>  <GH>We can replace 5.4 with a more explicit section guiding
> applications toward making the deduction simple.  So, instead of a
> note, I suggest that we replace 5.4 with:
>
>  5.4 Relations between media and modality
>  There is no easy way to deduce the intended modality from a language
> tag.  Other specifications may introduce specific notations for
> modality used in a media or in relation to a language tag.
> Applications not implementing such specific notations may use the
> following simple deductions.
>  - A language tag in audio media is supposed to indicate spoken modality.
>  - A language tag in text media is supposed to indicate written modality.
>  - A language tag in video media is supposed to indicate visual sign
> language modality, except for the case when it is supposed to
> indicate a view of a speaking person mentioned in section 5.2,
> characterized by the exact same language tag also appearing in an
> audio media specification.
>  - A language tag in media where the modality is obvious or specified
> for the media subtype definition is supposed to indicate that
> modality.
>  - A language tag in other media descriptions than above has
> undefined modality.
>
> ---------------------------------------------------------------------
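The deduction rules proposed above are simple enough to sketch in code. This is a hypothetical illustration only; the function name and return strings are not from the draft, and the "view of speaker" branch implements the section 5.2 exception (same spoken tag on both an audio and a video stream):

```python
def deduce_modality(media, tag, audio_tags=()):
    """Apply the proposed section 5.4 deductions for a language tag on a
    media stream.  audio_tags is the set of tags on parallel audio streams,
    used to detect the section 5.2 'view of a speaking person' exception."""
    if media == "audio":
        return "spoken"
    if media == "text":
        return "written"
    if media == "video":
        # Same tag also on an audio stream: supplemental view of the speaker.
        if tag in audio_tags:
            return "view of speaker"
        return "signed"
    # Message media, most application media, etc.: no defined modality.
    return "undefined"

print(deduce_modality("audio", "en"))                     # spoken
print(deduce_modality("video", "ase"))                    # signed
print(deduce_modality("video", "en", audio_tags={"en"}))  # view of speaker
```

Note how the speaker-view exception is the only rule that requires comparing tags across media descriptions, which is the complexity objected to later in this thread.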

The suggested text makes a very different point than what's there now:

    the behavior when specifying a non-signed language tag
    for a video media stream, or a signed language tag for an audio or
    text media stream, is not defined in this document.

    The problem of knowing which language tags are signed and which are
    not is out of scope of this document.

An implementation could, for example, have a table (even a static
table that's fixed) of language tags it knows to be for signed
languages.  It could treat other tags as for non-signed languages.
This would be an imperfect approach (yielding incorrect results if a
new signed language tag is introduced, e.g.), but might be good
enough.
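The two approaches in this thread (scanning the IANA registry for "Sign Language" in the Description field, or keeping a static table) might be sketched as follows. This is an illustrative assumption, not code from any draft: the parser assumes the registry's record-jar format, where records are separated by "%%" lines and carry "Subtag:" and "Description:" fields.

```python
def signed_subtags_from_registry(registry_text):
    """Return the set of subtags whose Description mentions 'Sign Language'."""
    signed = set()
    for record in registry_text.split("%%"):
        subtag = None
        is_signed = False
        for line in record.splitlines():
            if line.startswith("Subtag:"):
                subtag = line.split(":", 1)[1].strip()
            elif line.startswith("Description:") and "Sign Language" in line:
                is_signed = True
        if subtag and is_signed:
            signed.add(subtag)
    return signed

# Static fallback table, as suggested above: imperfect (it misses newly
# registered sign languages) but possibly good enough.  Contents are
# examples only.
STATIC_SIGNED = {"ase", "bfi", "bzs", "csl", "ssp"}

sample = """%%
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
%%"""

print(signed_subtags_from_registry(sample))  # {'ase'}
```

In practice an application would fetch the registry once at startup, as some applications reportedly already do, and fall back to the static table if the fetch fails.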


>  My note:  by this we currently consciously exclude the following
> use and I am ok with that:
>  -text in mp4 video
>  -audio in mp4 video (or is that only allowed in application/mp4 ??)
>  -any modality in message media
>  -most application media, however some may have explicit descriptions
> in subtype specifications.
>
>  The exception with a view of a speaker stands out as very odd now,
> requiring comparison of language tags used in different media
> descriptions, and requiring simultaneous use of language in two
> different media that is otherwise out of scope for this draft.  It
> was introduced while I still hoped that we could introduce other
> dependencies between language use in different media.  It is not the
> most urgent media/language combination to specify.  It is also
> handled in draft-hellstrom-slim-modality-grouping.  So, assuming that
> we can get progress on that draft, we could clean up the current
> draft by deleting the exception.  I suggest that we delete the
> exception.

I like the suggestion to delete the exception.

The exception is in 5.2:

    Note that while signed language tags are used with a video stream to
    indicate sign language, a video stream in parallel with an audio
    stream, both using the exact same (spoken) language tag, indicates a
    request for a supplemental video stream to see the speaker.

I agree that deleting this exception makes for a simpler draft.

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
    The highlight of the annual Computer Bowl occurred when Bill Gates,
who was a judge, posed the following question to the contestants:
    "What contest, held via Usenet, is dedicated to examples of weird,
obscure, bizarre, and really bad programming?"
    After a moment of silence, Jean-Louis Gassee (ex-honcho at Apple)
hit his buzzer and answered "Windows."
                                          --Recounted by Adam C. Engst


From nobody Sat Oct 14 05:00:45 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2BC7C133052 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 05:00:44 -0700 (PDT)
X-Quarantine-ID: <0h90mFgqYTSE>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0h90mFgqYTSE for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 05:00:43 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 220921201F8 for <slim@ietf.org>; Sat, 14 Oct 2017 05:00:43 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Sat, 14 Oct 2017 05:04:48 -0700
Mime-Version: 1.0
Message-Id: <p06240608d607ac1cb56d@[172.20.60.54]>
In-Reply-To: <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se> <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Sat, 14 Oct 2017 05:00:38 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, Brian Rosen <br@brianrosen.net>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: Bernard Aboba <bernard.aboba@gmail.com>, "slim@ietf.org" <slim@ietf.org>,  Randall Gellens <rg+ietf@randy.pensive.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/wWg43mtDSDdXYiwXlfvoJoFBrG8>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 12:00:44 -0000

At 10:58 AM +0200 10/14/17, Gunnar Hellström wrote:

>  In order not to create complicated sentences but still have the
> wording match our intentions, I want to change the proposed
> resolution for Issue #46, Change 1, to:
>
>  ---Change 1 in 5.2, first paragraph----------------
>    ------old text---------
>    This document defines two media-level attributes starting with
>       'hlang' (short for "human interactive language") to negotiate which
>       human language is selected for use in each interactive media stream.
>    ------------new text--------------------
>    This document defines two media-level attributes starting with
>       'hlang' (short for "human interactive language") to negotiate which
>       human language is selected for potential use in each media stream.
>    -------end of change 1-------
>
>  That matches the "if" in paragraph 3, and it is also valid for both
> offers and answers, while paragraph 3 is only about the answer.
>  Please accept it; it is of importance for proper understanding of
> our intentions.

The existing text is talking about which language is selected for use
in a media stream should that media stream be used for interactive
communication; the proposed wording instead talks about a language
that may or may not be used in a media stream, which doesn't seem
correct to me.  Since we already have text (as noted earlier) that
explicitly says that not all negotiated media streams need be used, I
don't see a problem with leaving the text as is.
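For context, the attributes under discussion are media-level SDP attributes; assuming the draft's hlang-send and hlang-recv names, a minimal illustrative offer fragment (ports, payload types, and language tags are examples only) might look like:

```
m=audio 49170 RTP/AVP 0
a=hlang-send:es en
a=hlang-recv:es en
m=video 51372 RTP/AVP 31
a=hlang-send:ase
a=hlang-recv:ase
```

Here the offerer proposes spoken Spanish or English on the audio stream and American Sign Language on the video stream; whether each negotiated stream is actually used is, as noted above, left to the endpoints.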


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
ondinnonk (ON-din-onk; Iroquoian; noun): the soul's innermost
benevolent desires.


From nobody Sat Oct 14 05:06:09 2017
Return-Path: <internet-drafts@ietf.org>
X-Original-To: slim@ietf.org
Delivered-To: slim@ietfa.amsl.com
Received: from ietfa.amsl.com (localhost [IPv6:::1]) by ietfa.amsl.com (Postfix) with ESMTP id 49273133052; Sat, 14 Oct 2017 05:06:08 -0700 (PDT)
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
From: internet-drafts@ietf.org
To: <i-d-announce@ietf.org>
Cc: slim@ietf.org
X-Test-IDTracker: no
X-IETF-IDTracker: 6.63.1
Auto-Submitted: auto-generated
Precedence: bulk
Message-ID: <150798276751.5159.5535601365943836790@ietfa.amsl.com>
Date: Sat, 14 Oct 2017 05:06:08 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/0LaJAa4hpRGpAZSBw_hnyYWYCxk>
Subject: [Slim] I-D Action: draft-ietf-slim-negotiating-human-language-16.txt
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 12:06:08 -0000

A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Selection of Language for Internet Media WG of the IETF.

        Title           : Negotiating Human Language in Real-Time Communications
        Author          : Randall Gellens
	Filename        : draft-ietf-slim-negotiating-human-language-16.txt
	Pages           : 16
	Date            : 2017-10-14

Abstract:
   Users have various human (natural) language needs, abilities, and
   preferences regarding spoken, written, and signed languages.  This
   document adds new SDP media-level attributes so that when
   establishing interactive communication sessions ("calls"), it is
   possible to negotiate (communicate and match) the caller's language
   and media needs with the capabilities of the called party.  This is
   especially important with emergency calls, where a call can be
   handled by a call taker capable of communicating with the user, or a
   translator or relay operator can be bridged into the call during
   setup, but this applies to non-emergency calls as well (as an
   example, when calling a company call center).

   This document describes the need and a solution using new SDP media
   attributes.
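
As a concrete, non-normative illustration of what the abstract describes, the negotiation is carried in per-media-stream SDP attributes. The fragment below is a sketch only: the attribute names ('hlang-send', 'hlang-recv') follow the draft's 'hlang' family, and the tag values ('es' for spoken Spanish, 'ase' for American Sign Language) are example choices; consult the draft itself for the normative syntax.

```
m=audio 49170 RTP/AVP 0
a=hlang-send:es
a=hlang-recv:es
m=video 51372 RTP/AVP 31
a=hlang-send:ase
a=hlang-recv:ase
```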


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-slim-negotiating-human-language/

There are also htmlized versions available at:
https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-16
https://datatracker.ietf.org/doc/html/draft-ietf-slim-negotiating-human-language-16

A diff from the previous version is available at:
https://www.ietf.org/rfcdiff?url2=draft-ietf-slim-negotiating-human-language-16


Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/


From nobody Sat Oct 14 10:53:05 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 66CC5132D18 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 10:53:04 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.998
X-Spam-Level: 
X-Spam-Status: No, score=-1.998 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id mdMvsFzON-Mp for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 10:53:03 -0700 (PDT)
Received: from mail-vk0-x234.google.com (mail-vk0-x234.google.com [IPv6:2607:f8b0:400c:c05::234]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id B80A3126E64 for <slim@ietf.org>; Sat, 14 Oct 2017 10:53:02 -0700 (PDT)
Received: by mail-vk0-x234.google.com with SMTP id j2so5754402vki.4 for <slim@ietf.org>; Sat, 14 Oct 2017 10:53:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc; bh=ti5KR3EMhw5JUCVL0Jm8iR2aJMoth6kHBz+8DrxdJ20=; b=CDp7ya5dAvPf1v9dB/z4PJEAseJbF6b7l/n73eVfJUQYBKYrOOI62O/v9xc9ybLj2A l1u0tAMZ1U4WvNZJq+f/7poDCkSPuiEudDI6Sbefs+8RXSZPgeBuxgRu/3r8S05SowJl 3Xox3tiVUHcySsANmw+lPmN0G7HR2xo/ErwkQ6S+U/Z92l0QEFPJJrlQFHcZp8bnhB8p THmdPUYdGiLYvvj3vdk06LjRN7366fjdUIZhvJcLWDTEl88xkQ6AOZ1uQrhA1qxuEavp 9JBGx/Te2vA9VYdK1NlzccQ1WEAwui5SUD42Wj7MAP80XBz9U1B+P01ujabSc5qeB2fg TtpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc; bh=ti5KR3EMhw5JUCVL0Jm8iR2aJMoth6kHBz+8DrxdJ20=; b=HaYlunUtDSKVkJ6poZPdfEVEdmxC4301A+ZYrmor/4Y+Xefs1mCjgzRsoh5OJEb+BU jxLtdWW+7wfGd1xicmrF9A03L1r70dt79UTYaxv7xyQ6xG4LAVzLompRSBBPIWd+TTF2 q9PWYskiITjWdss0+/Nk3MHIZuApQc3mUuFxLeLO+AAmzDT6sEXCj1RfFt+0ubY76F03 ReoglWUKZzm+EhTVWJRtOxG5Y08PE11i/cI9QlQsf3Rq2Mm4d9aTwUbZmcqlYS57eTF4 Kla803MHF48+/rTl27xamb/a9BHaUe8o2ARqfprWl5b+ZJ7UEDXnQUElMY87Wto+8KQX Ed9g==
X-Gm-Message-State: AMCzsaWud2Ld0ggJPCvW83SxI1w9vPi0TDCwdvhUsixrxbTRkGb/mQrA K2aohfLYViVnOlxW+K62btjke6vqk60eqk9QAzQ=
X-Google-Smtp-Source: AOwi7QC+c4GjBaGpY10UYjlRYRe3rOS5rky2J13moUG9Yu1bezuHHLN5svXL/wZK+jdsdTLbTR48Jvx4u4e/3fjq6Rk=
X-Received: by 10.31.82.3 with SMTP id g3mr3653249vkb.76.1508003581570; Sat, 14 Oct 2017 10:53:01 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Sat, 14 Oct 2017 10:52:41 -0700 (PDT)
In-Reply-To: <p06240608d607ac1cb56d@172.20.60.54>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se> <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se> <p06240608d607ac1cb56d@172.20.60.54>
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Sat, 14 Oct 2017 10:52:41 -0700
Message-ID: <CAOW+2du_AMEuU4up==8D=MutY9hz8Vs7J463riZ7WRTS=qUyxw@mail.gmail.com>
To: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: =?UTF-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>,  Brian Rosen <br@brianrosen.net>, "slim@ietf.org" <slim@ietf.org>
Content-Type: multipart/alternative; boundary="001a114e509eccb436055b857137"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/TxBtANowF1_X922tSL-8w448swE>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 17:53:04 -0000

--001a114e509eccb436055b857137
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Randall said:

"The existing text is talking about which language is selected for use in a
media stream should that media stream be used for interactive
communication; the proposed wording instead talks about a language that may
or may not be used in a media stream, which doesn't seem correct to me."

[BA] Yes, that is how it came across to me as well.

On Sat, Oct 14, 2017 at 5:00 AM, Randall Gellens <rg+ietf@randy.pensive.org>
wrote:

> At 10:58 AM +0200 10/14/17, Gunnar Hellström wrote:
>
>  In order to not create complicated sentences but still having the wording
>> match our intentions, I want to change the proposed resolution for Issue #
>> 46 Change 1 to:
>>
>>  ---Change 1 in 5.2, first paragraph----------------    ------old
>> text---------    This document defines two media-level attributes starting
>> with       'hlang' (short for "human interactive language") to negotiate
>> which       human language is selected for use in each interactive media
>> stream.    ------------new text--------------------    This document
>> defines two media-level attributes starting with       'hlang' (short for
>> "human interactive language") to negotiate which       human language is
>> selected for potential use in each media stream.
>>    -------end of change 1-------
>>
>>  That matches the "if" in paragraph 3, and it is also valid for both the
>> offers and answers, while paragraph 3 is only for the answer.
>>  Please accept it, it is of importance for proper understanding of our
>> intentions.
>>
>
> The existing text is talking about which language is selected for use in a
> media stream should that media stream be used for interactive
> communication; the proposed wording instead talks about a language that may
> or may not be used in a media stream, which doesn't seem correct to me.
> Since we already have text (as noted earlier) that explicitly says that not
> all negotiated media streams need be used, I don't see a problem with
> leaving the text as is.
>
>
> --
> Randall Gellens
> Opinions are personal;    facts are suspect;    I speak for myself only
> -------------- Randomly selected tag: ---------------
> ondinnonk (ON-din-onk; Iroquoian; noun): the soul's innermost
> benevolent desires.
>

--001a114e509eccb436055b857137--


From nobody Sat Oct 14 11:03:57 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 59F36132331 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 11:03:55 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.698
X-Spam-Level: 
X-Spam-Status: No, score=-2.698 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0M0U0UlWNC2w for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 11:03:53 -0700 (PDT)
Received: from mail-vk0-x22f.google.com (mail-vk0-x22f.google.com [IPv6:2607:f8b0:400c:c05::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 13B94126E64 for <slim@ietf.org>; Sat, 14 Oct 2017 11:03:53 -0700 (PDT)
Received: by mail-vk0-x22f.google.com with SMTP id n70so5768719vkf.11 for <slim@ietf.org>; Sat, 14 Oct 2017 11:03:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc; bh=t5653NgFEANwJTxkJhJ0KdPwN44BTE1J3vfRdahKS+8=; b=quEmsAhaUvJ36q6Bm5coFxHViWHBh/AUO/9KhhdtKXzrP6S1w/lNvrbGdBxbbd9xNv WMDjB29vmWuKiigVSgNtvMHpxSakWINGw1KTnNHyioF+XJ/EPogeu+s0FwGBHYUa9UPX fMVHud+c8MOk5demhGfAITm89hCJ+poKvUIspP2G0v4FmurXwdsP5IxPs+K5SrCJpC8+ awwP9l6gI8dyvx28ScH4LubeP9BgG5h31OzruBR4CaCjpkJTXNhH3giCdhvq6IEwatgw lkZpQheyuo8awW4SO32KPVHgbFLnFjXywAH7O2Hymr7kz/jWmgSgCjPU78VzN7NcNFyr NUAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc; bh=t5653NgFEANwJTxkJhJ0KdPwN44BTE1J3vfRdahKS+8=; b=cpzfS8LHNYEgI60FBCj6qB04DC6OuInDHmgLQEM4tJOLha7OzMAwjvvyh5QEJxD3wR HnthJgCJOT6ECH2eLpFic+ksxlGEg3iD2ZI5YGiv6TmT2yJkJV+XYaeP7vFYB1hiDrHO 2dBQP/eXaPfGR0/4qF/sUFqI11C2HLt15Wa/eU6ZA+8SJbm7MdfvHS1JNyqjUTwbp4ki alUgR8KdP+EtrZtFemDMlHPXlrlm1GEQL09mhN55w179ehcVsXOwIsT0qA05Jz6eUN6b bVdkqABJdcD12UoBichE+X5NRjzmJFhW62NcidwC1mjM15ZwHnw3xLp1MN6CAaL46Wku qGTg==
X-Gm-Message-State: AMCzsaWrSJUjRWYXbRGVnbh/SMZ+Ekn+S96kd12DlYeUqQfsYOqKvbQu wAwg1/SiYSB3Mw21RPu2RP4+Dr1SgYu4D0we3AWCQASC
X-Google-Smtp-Source: ABhQp+T03kpEsyZ1gAP6RSeydBr1emlTaPiYgYZSANtJ9cjAHGWOoCdhbLWuEdN4+Aoo4/FXeYucZzcRAZlkPW4fOP0=
X-Received: by 10.31.62.76 with SMTP id l73mr2386480vka.107.1508004231859; Sat, 14 Oct 2017 11:03:51 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Sat, 14 Oct 2017 11:03:31 -0700 (PDT)
In-Reply-To: <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se>
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Sat, 14 Oct 2017 11:03:31 -0700
Message-ID: <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com>
To: =?UTF-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>
Cc: slim@ietf.org
Content-Type: multipart/alternative; boundary="001a114476c28f58a2055b85986b"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Za4hVr1tvzDhTDBxTLIuzYqyklo>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 18:03:55 -0000

--001a114476c28f58a2055b85986b
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Gunnar said:

"Applications not implementing such specific notations may use the
following simple deductions.

- A language tag in audio media is supposed to indicate spoken modality.

[BA] Even a tag with "Sign Language" in the description??

- A language tag in text media is supposed to indicate written modality.

[BA] If the tag has "Sign Language" in the description, can this document
really say that?

- A language tag in video media is supposed to indicate visual sign
language modality except for the case when it is supposed to indicate a
view of a speaking person mentioned in section 5.2 characterized by the
exact same language tag also appearing in an audio media specification.

[BA] It seems like an over-reach to say that a spoken language tag in video
media should instead be interpreted as a request for Sign Language.  If
this were done, would it always be clear which Sign Language was intended?
And could we really assume that both sides, if negotiating a spoken
language tag in video media, were really indicating the desire to sign?  It
seems like this could easily result in interoperability failure.

- A language tag in media where the modality is obvious or specified for
the media subtype definition is supposed to indicate that modality.
- A language tag in other media descriptions than above has undefined
modality."
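
The deduction rules quoted above are mechanical enough to sketch in code. The following is a non-normative illustration of Gunnar's proposal only (not of the draft's actual text); the function name and the returned labels are invented for the example:

```python
def deduce_modality(media_type, language_tag, audio_tags):
    """Heuristic modality deduction per the proposed 5.4 text (sketch only).

    audio_tags: the set of language tags appearing in audio media
    descriptions of the same session, used for the "view of a speaking
    person" exception the proposal mentions.
    """
    if media_type == "audio":
        return "spoken"
    if media_type == "text":
        return "written"
    if media_type == "video":
        # Exception: the exact same tag also offered in audio indicates
        # a view of a speaking person rather than sign language.
        if language_tag in audio_tags:
            return "spoken (view of speaker)"
        return "signed"
    # Message media, application media, etc.: modality undefined unless
    # the media subtype definition specifies one.
    return "undefined"
```

Note that this sketch also makes Bernard's objection easy to see: a sign-language tag offered in audio media would come out as "spoken".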

On Sat, Oct 14, 2017 at 1:21 AM, Gunnar Hellström <
gunnar.hellstrom@omnitor.se> wrote:

> On 2017-10-14 at 04:25, Randall Gellens wrote:
>
>> At 1:46 PM -0700 10/13/17, Bernard Aboba wrote:
>>
>>  Issue 43 ( https://trac.ietf.org/trac/slim/ticket/43 ) results from a
>>> review comment that said that a simple way is required to decide if a
>>> language tag is a sign language or a written or spoken language.
>>>
>>>  Some applications scan the IANA language registry at startup for the
>>> word "Sign" in the tag description:
>>>
>>> https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
>>>
>>>
>>>  Currently, there are 319 language subtags that include "Sign Language"
>>> in their description.
>>>  Given the current layout of the language subtag registry, it is not
>>> clear to me that there is an easier way to determine which tags represent
>>> sign languages.  Nor is it within the SLIM WG charter to develop a
>>> modification to the language subtag registry to address this concern.
>>>  So I am wondering whether we might resolve this with a Note outlining
>>> the problem but not offering a solution.
>>>
>>
>> I think the wording in -14 addresses the comment by accepting Dale's
>> suggestion that, rather than know non-signed tags, it's the use of the
>> exact same tag in both an audio and a video stream that is the indicator.
>> That both tightens up the technical issue and simplifies it greatly.
>>
>> The only other instance where we might add such a note would be in 5.4:
>>
>> 5.4.  Undefined Combinations
>>
>>    With the exception of the case mentioned in Section 5.2 (an audio
>>    stream in parallel with a video stream with the exact same (spoken)
>>    language tag), the behavior when specifying a non-signed language tag
>>    for a video media stream, or a signed language tag for an audio or
>>    text media stream, is not defined.
>>
>> We could add your suggested note to 5.4.
>>
> <GH>We can replace 5.4 with a more explicit section guiding applications
> to how to make the deduction simple. So, instead of a note, I suggest that
> we replace 5.4 with:
>
> 5.4 Relations between media and modality
> There is no easy way to deduce the intended modality from a language tag.
> Other specifications may introduce specific notations for modality used in
> a media or in relation to a language tag. Applications not implementing
> such specific notations may use the following simple deductions.
> - A language tag in audio media is supposed to indicate spoken modality.
> - A language tag in text media is supposed to indicate written modality.
> - A language tag in video media is supposed to indicate visual sign
> language modality except for the case when it is supposed to indicate a
> view of a speaking person mentioned in section 5.2 characterized by the
> exact same language tag also appearing in an audio media specification.
> - A language tag in media where the modality is obvious or specified for
> the media subtype definition is supposed to indicate that modality.
> - A language tag in other media descriptions than above has undefined
> modality.
> ------------------------------------------------------------
> ---------------------------------------------------------
>
> My note:  by this we currently consciously exclude the following use and I
> am ok with that:
> -text in mp4 video
> -audio in mp4 video ( or is that only allowed in application/mp4 ??)
> -any modality in message media
> -most application media, however some may have explicit descriptions in
> subtype specifications.
>
> The exception with a view of a speaker stands out as very odd now,
> requiring comparison of language tags used in different media descriptions,
> and requiring simultaneous use of language in two different media that is
> otherwise out of scope for this draft. It was introduced while I still
> hoped that we could introduce other dependencies between language use in
> different media.  It is not the most urgent media/language combination to
> specify. It is also handled in draft-hellstrom-slim-modality-grouping.
> So, assuming that we can get progress on that draft, we could clean up the
> current draft by deleting the exception. I suggest that we delete the
> exception.
>
> /Gunnar
>
>
>
> --
> -----------------------------------------
> Gunnar Hellström
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim
>
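
The registry scan Bernard mentions above (looking for "Sign Language" in subtag descriptions) can be sketched against the registry's record-jar format, in which records are separated by lines of "%%" and each record carries fields such as Type, Subtag, and one or more Description lines. A minimal, non-normative illustration follows; field handling is simplified (continuation lines, which begin with whitespace in the real file, are skipped):

```python
def sign_language_subtags(registry_text):
    """Return subtags whose Description mentions "Sign Language".

    registry_text: contents of the IANA language-subtag-registry file,
    a record-jar document with records separated by "%%" lines.
    Continuation lines (leading whitespace) are ignored in this sketch.
    """
    results = []
    for record in registry_text.split("\n%%\n"):
        fields = {}
        for line in record.splitlines():
            if line[:1].isspace() or ":" not in line:
                continue  # skip continuation lines and blanks
            key, _, value = line.partition(":")
            # A record may repeat a field (e.g. several Descriptions)
            fields.setdefault(key.strip(), []).append(value.strip())
        if "Subtag" in fields and any(
            "Sign Language" in d for d in fields.get("Description", [])
        ):
            results.append(fields["Subtag"][0])
    return results
```

Note the fragility the thread hints at: a case-sensitive match on "Sign Language" misses entries described differently (e.g. "Sign languages" for the 'sgn' subtag), which is part of why a simple registry scan is an imperfect answer to Issue 43.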

--001a114476c28f58a2055b85986b--


From nobody Sat Oct 14 13:22:14 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D65A9127005 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 13:22:13 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id oBxwWXHtdWqe for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 13:22:09 -0700 (PDT)
Received: from bin-vsp-out-01.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 3898F12426E for <slim@ietf.org>; Sat, 14 Oct 2017 13:22:09 -0700 (PDT)
X-Halon-ID: 436abc51-b11d-11e7-9c60-005056917a89
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-01.atm.binero.net (Halon) with ESMTPSA id 436abc51-b11d-11e7-9c60-005056917a89; Sat, 14 Oct 2017 22:21:30 +0200 (CEST)
To: slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <1f70d1a8-08aa-6808-dbbc-c533291f71f0@omnitor.se>
Date: Sat, 14 Oct 2017 22:22:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------2B0B91D823EFA7BE14D8C3DE"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/zPhJacBHjlnuqMv8ZBTItMfKgVc>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 20:22:14 -0000

This is a multi-part message in MIME format.
--------------2B0B91D823EFA7BE14D8C3DE
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Den 2017-10-14 kl. 20:03, skrev Bernard Aboba:

> Gunnar said:
>
> "Applications not implementing such specific notations may use the 
> following simple deductions.
>
> - A language tag in audio media is supposed to indicate spoken modality.
<GH>This proposal was intended to express approximately the same as the 
current 5.4, but more strictly, so that it is easy for applications to 
use and allows for extensions.
We have now (in version -16) reduced the defined modalities to sign 
language in video, written in text, and spoken in audio. That is 
simplification enough to satisfy the review comment.
It seems too complicated to get agreement on this rewording, so we can 
drop the proposal. The rewording already made in 5.4 of version -16, 
saying that other use of language tags is not defined in this document, 
leaves room for other work to add new valid media/modality/language tag 
combinations.

I continue answering your questions anyway.
>
> [BA] Even a tag with "Sign Language" in the description??
<GH>Yes. The receiving application would trust the sending application 
and know that a language tag in audio is not a sign language tag. If one 
were present anyway, a match would be very unlikely.

>
> - A language tag in text media is supposed to indicate  written modality.
>
> [BA] If the tag has "Sign Language" in the description, can this 
> document really say that?
<GH>The idea is to have a simple rule to meet the needs indicated by the 
review comment.
>
> - A language tag in video media is supposed to indicate visual sign 
> language modality except for the case when it is supposed to indicate 
> a view of a speaking person mentioned in section 5.2 characterized by 
> the exact same language tag also appearing in an audio media 
> specification.
>
> [BA] It seems like an over-reach to say that a spoken language tag in 
> video media should instead be interpreted as a request for Sign 
> Language.  If this were done, would it always be clear which Sign 
> Language was intended?  And could we really assume that both sides, if 
> negotiating a spoken language tag in video media, were really 
> indicating the desire to sign?  It seems like this could easily result 
> in interoperability failure.
<GH>Yes, it would result in interoperability failure; a match would be 
very unlikely. But the sentence means that the sending application 
should only put sign language tags in the video description, so the 
receiving application can trust that the tags in video descriptions are 
sign language tags (except for the exception that we are now prepared 
to drop if all agree; it is already gone in version -16).
>
> - A language tag in media where the modality is obvious or specified 
> for the media subtype definition is supposed to indicate that modality.
> - A language tag in other media descriptions than above has undefined 
> modality."
<GH>These two were perhaps the more important additions, together with 
the mention at the beginning that further work could add an indication 
of modality to a media specification or a language specification.

/Gunnar
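[Editor's note: the "simple deductions" discussed in this thread can be sketched as a small lookup, non-normatively. The function name, the modality labels, and the "undefined" fallback are illustrative assumptions, not wording from the draft.]

```python
# Non-normative sketch of the proposed 5.4 deduction rules: map an SDP
# media type to the modality a language tag in that media is supposed
# to indicate. Names and the "undefined" result are assumptions for
# illustration only.

def assumed_modality(media):
    """Deduce the assumed modality of a language tag found in `media`."""
    deductions = {
        "audio": "spoken",   # a tag in audio media indicates spoken modality
        "text": "written",   # a tag in text media indicates written modality
        "video": "signed",   # a tag in video media indicates sign language
    }
    # Any other media (message, application, ...) has undefined modality
    # unless the media subtype definition specifies one.
    return deductions.get(media, "undefined")
```

Under these rules, a receiver never inspects the tag itself to decide modality; the media description alone carries that information.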



> On Sat, Oct 14, 2017 at 1:21 AM, Gunnar Hellström 
> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>
>     Den 2017-10-14 kl. 04:25, skrev Randall Gellens:
>
>         At 1:46 PM -0700 10/13/17, Bernard Aboba wrote:
>
>              Issue 43 ( <https://trac.ietf.org/trac/slim/ticket/43
>             <https://trac.ietf.org/trac/slim/ticket/43>>https://trac.ietf.org/trac/slim/ticket/43
>             <https://trac.ietf.org/trac/slim/ticket/43> ) results from
>             a review comment that said that a simple way is required
>             to decide if a language tag is a sign language or a
>             written or spoken language.
>
>              Some applications scan the IANA language registry at
>             startup for the word "Sign" in the tag description:
>
>             <https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
>             <https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry>>https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
>             <https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry>
>
>
>
>              Currently, there are 319 language subtags that include
>             "Sign Language" in their description.
>              Given the current layout of the language subtag registry,
>             it is not clear to me that there is an easier way to
>             determine which tags represent sign languages.  Nor is it
>             within the SLIM WG charter to develop a modification to
>             the language subtag registry to address this concern.
>              So I am wondering whether we might resolve this with a
>             Note outlining the problem but not offering a solution.
>
>
>         I think the wording in -14 addresses the comment by accepting
>         Dale's suggestion that, rather than know non-signed tags, it's
>         the use of the exact same tag in both an audio and a video
>         stream that is the indicator. That both tightens up the
>         technical issue and simplifies it greatly.
>
>         The only other instance where we might add such a note would
>         be in 5.4:
>
>         5.4.  Undefined Combinations
>
>            With the exception of the case mentioned in Section 5.2 (an
>         audio
>            stream in parallel with a video stream with the exact same
>         (spoken)
>            language tag), the behavior when specifying a non-signed
>         language tag
>            for a video media stream, or a signed language tag for an
>         audio or
>            text media stream, is not defined.
>
>         We could add your suggested note to 5.4.
>
>     <GH>We can replace 5.4 with a more explicit section guiding
>     applications on how to make the deduction simple. So, instead of a
>     note, I suggest that we replace 5.4 with:
>
>     5.4 Relations between media and modality
>     There is no easy way to deduce the intended modality from a
>     language tag. Other specifications may introduce specific
>     notations for modality used in a media or in relation to a
>     language tag. Applications not implementing such specific
>     notations may use the following simple deductions.
>     - A language tag in audio media is supposed to indicate spoken
>     modality.
>     - A language tag in text media is supposed to indicate written
>     modality.
>     - A language tag in video media is supposed to indicate visual
>     sign language modality except for the case when it is supposed to
>     indicate a view of a speaking person mentioned in section 5.2
>     characterized by the exact same language tag also appearing in an
>     audio media specification.
>     - A language tag in media where the modality is obvious or
>     specified for the media subtype definition is supposed to indicate
>     that modality.
>     - A language tag in other media descriptions than above has
>     undefined modality.
>     ---------------------------------------------------------------------------------------------------------------------
>
>     My note:  by this we currently consciously exclude the following
>     use and I am ok with that:
>     -text in mp4 video
>     -audio in mp4 video ( or is that only allowed in application/mp4 ??)
>     -any modality in message media
>     -most application media, however some may have explicit
>     descriptions in subtype specifications.
>
>     The exception with a view of a speaker stands out as very odd now,
>     requiring comparison of language tags used in different media
>     descriptions, and requiring simultaneous use of language in two
>     different media that is otherwise out of scope for this draft. It
>     was introduced while I still hoped that we could introduce other
>     dependencies between language use in different media.  It is not
>     the most urgent media/language combination to specify. It is also
>     handled in draft-hellstrom-slim-modality-grouping. So, assuming
>     that we can get progress on that draft, we could clean up the
>     current draft by deleting the exception. I suggest that we delete
>     the exception.
>
>     /Gunnar
>
>
>
>     -- 
>     -----------------------------------------
>     Gunnar Hellström
>     Omnitor
>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>     +46 708 204 288 <tel:%2B46%20708%20204%20288>
>
>
>     _______________________________________________
>     SLIM mailing list
>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>     https://www.ietf.org/mailman/listinfo/slim
>     <https://www.ietf.org/mailman/listinfo/slim>
>
>
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288
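[Editor's note: the startup scan of the IANA language-subtag registry mentioned in the quoted discussion can be sketched as below. The parsing is a simplified, assumed reading of the registry's record-jar format (records separated by "%%", "Field: value" lines, continuation lines indented), and the sample is a tiny excerpt rather than the full registry.]

```python
# Hedged sketch: collect subtags whose Description mentions
# "Sign Language", as some applications do at startup.

def sign_language_subtags(registry_text):
    """Return subtags of language records whose first Description
    contains 'Sign Language'."""
    tags = []
    for record in registry_text.split("%%"):
        fields = {}
        for line in record.strip().splitlines():
            # Skip indented continuation lines in this simplified parser.
            if ":" in line and not line.startswith(" "):
                key, _, value = line.partition(":")
                fields.setdefault(key.strip(), value.strip())
        if (fields.get("Type") == "language"
                and "Sign Language" in fields.get("Description", "")):
            tags.append(fields["Subtag"])
    return tags

# Tiny illustrative excerpt in the registry's record-jar style.
SAMPLE = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
%%
Type: language
Subtag: bfi
Description: British Sign Language
"""
```

Run against the real registry file, a scan like this is what yields the several hundred "Sign Language" subtags discussed above.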


--------------2B0B91D823EFA7BE14D8C3DE--


From nobody Sat Oct 14 14:47:13 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7CB371320BD for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 14:47:11 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id J5KVllspDUvB for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 14:47:09 -0700 (PDT)
Received: from bin-vsp-out-03.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 1250E127005 for <slim@ietf.org>; Sat, 14 Oct 2017 14:47:08 -0700 (PDT)
X-Halon-ID: 37c59000-b129-11e7-83a9-0050569116f7
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-03.atm.binero.net (Halon) with ESMTPSA id 37c59000-b129-11e7-83a9-0050569116f7; Sat, 14 Oct 2017 23:47:05 +0200 (CEST)
To: Bernard Aboba <bernard.aboba@gmail.com>, Randall Gellens <rg+ietf@randy.pensive.org>
Cc: "slim@ietf.org" <slim@ietf.org>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se> <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se> <p06240608d607ac1cb56d@172.20.60.54> <CAOW+2du_AMEuU4up==8D=MutY9hz8Vs7J463riZ7WRTS=qUyxw@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <7489472d-894a-4bd5-c589-7dd0a49dee3c@omnitor.se>
Date: Sat, 14 Oct 2017 23:47:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2du_AMEuU4up==8D=MutY9hz8Vs7J463riZ7WRTS=qUyxw@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------A258FCCD2C5E915D3399710F"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/7Q8tKyVgJLS8s4x2H-05EBWvUq4>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 21:47:11 -0000

This is a multi-part message in MIME format.
--------------A258FCCD2C5E915D3399710F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

I looked back to where we discussed this topic earlier.

It was on 1st of June this year.

At that time I proposed:

"This document defines two media-level attributes starting with
     'hlang' (short for "human interactive language") to negotiate which
     human language alternative(s) the users are prepared to use in each 
interactive media stream."

You (Randall) did not like "alternative(s)" and instead inserted the 
wording with "selected". But you omitted "users are prepared to use" 
from the new version. As a result, it still looks as if the users must 
use all languages the negotiation results in. You have agreed that that 
is not the intention. It is true that later paragraphs in 5.2 clarify 
it, but I think it is important that the first paragraph also gives a 
true picture of what the negotiation is about.

It is the freedom to not use all negotiated languages that I want to 
have evident already from the first paragraph.

So, a new proposed wording for the sentence under discussion is:

"This document defines two media-level attributes starting with
     'hlang' (short for "human interactive language") to negotiate which
     human language the users are prepared to use in each interactive 
media stream."

If you want, you might fit in "selected" somewhere. The important point 
is that already here, through the word "prepared", we say that they 
negotiate what they might use, not what they must use.

OK?

Gunnar
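[Editor's note: the "prepared to use" semantics argued for above can be illustrated with a small, hypothetical matcher. The function, the first-match preference order, and the None result are illustrative assumptions, not text from the draft; the actual 'hlang' attribute syntax and answer rules are defined by the draft itself.]

```python
# Illustrative sketch only: each side lists languages it is *prepared*
# to use in a media stream, and a common one is selected; nothing
# obliges either side to actually use the selected language.

def select_language(offered, supported):
    """Return the first offered language tag the answerer is also
    prepared to use, or None when there is no overlap."""
    for tag in offered:
        if tag in supported:
            return tag
    return None  # no match; handling is left to the endpoints
```

In these terms, the negotiation establishes a language the parties might use in the stream, which is exactly the distinction the proposed first-paragraph wording tries to make explicit.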



Den 2017-10-14 kl. 19:52, skrev Bernard Aboba:
> Randall said:
>
> "The existing text is talking about which language is selected for use 
> in a media stream should that media stream be used for interactive 
> communication; the proposed wording instead talks about a language 
> that may or may not be used in a media stream, which doesn't seem 
> correct to me."
>
> [BA] Yes, that is how it came across to me as well.
>
> On Sat, Oct 14, 2017 at 5:00 AM, Randall Gellens 
> <rg+ietf@randy.pensive.org <mailto:rg+ietf@randy.pensive.org>> wrote:
>
>     At 10:58 AM +0200 10/14/17, Gunnar Hellström wrote:
>
>          In order to not create complicated sentences but still having
>         the wording match our intentions, I want to change the
>         proposed resolution for Issue # 46 Change 1 to:
>
>          ---Change 1 in 5.2, first paragraph---------------- ------old
>         text---------    This document defines two media-level
>         attributes starting with       'hlang' (short for "human
>         interactive language") to negotiate which       human language
>         is selected for use in each interactive media stream.   
>         ------------new text--------------------    This document
>         defines two media-level attributes starting with       'hlang'
>         (short for "human interactive language") to negotiate which   
>            human language is selected for potential use in each media
>         stream.
>            -------end of change 1-------
>
>          That matches the "if" in paragraph 3, and it is also valid
>         for both the offers and answers, while paragraph 3 is only for
>         the answer.
>          Please accept it, it is of importance for proper
>         understanding of our intentions.
>
>
>     The existing text is talking about which language is selected for
>     use in a media stream should that media stream be used for
>     interactive communication; the proposed wording instead talks
>     about a language that may or may not be used in a media stream,
>     which doesn't seem correct to me. Since we already have text (as
>     noted earlier) that explicitly says that not all negotiated media
>     streams need be used, I don't see a problem with leaving the text
>     as is.
>
>
>     -- 
>     Randall Gellens
>     Opinions are personal;    facts are suspect;    I speak for myself
>     only
>     -------------- Randomly selected tag: ---------------
>     ondinnonk (ON-din-onk; Iroquoian; noun): the soul's innermost
>     benevolent desires.
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------A258FCCD2C5E915D3399710F--


From nobody Sat Oct 14 16:20:03 2017
Return-Path: <pkyzivat@alum.mit.edu>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7EBFC1321B6 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 16:20:01 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.801
X-Spam-Level: 
X-Spam-Status: No, score=-2.801 tagged_above=-999 required=5 tests=[BAYES_05=-0.5, RCVD_IN_DNSWL_MED=-2.3, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ab6AinYmVyxO for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 16:20:00 -0700 (PDT)
Received: from alum-mailsec-scanner-4.mit.edu (alum-mailsec-scanner-4.mit.edu [18.7.68.15]) by ietfa.amsl.com (Postfix) with ESMTP id 299E112895E for <slim@ietf.org>; Sat, 14 Oct 2017 16:19:59 -0700 (PDT)
X-AuditID: 1207440f-a5bff70000007960-64-59e29b9dda3a
Received: from outgoing-alum.mit.edu (OUTGOING-ALUM.MIT.EDU [18.7.68.33]) (using TLS with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by alum-mailsec-scanner-4.mit.edu (Symantec Messaging Gateway) with SMTP id 1F.5D.31072.D9B92E95; Sat, 14 Oct 2017 19:19:58 -0400 (EDT)
Received: from PaulKyzivatsMBP.localdomain (c-24-62-227-142.hsd1.ma.comcast.net [24.62.227.142]) (authenticated bits=0) (User authenticated as pkyzivat@ALUM.MIT.EDU) by outgoing-alum.mit.edu (8.13.8/8.12.4) with ESMTP id v9ENJtSC007055 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NOT) for <slim@ietf.org>; Sat, 14 Oct 2017 19:19:56 -0400
To: slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com>
From: Paul Kyzivat <pkyzivat@alum.mit.edu>
Message-ID: <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu>
Date: Sat, 14 Oct 2017 19:19:55 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFvrKIsWRmVeSWpSXmKPExsUixO6iqDtv9qNIg1kXNS1mfuhkc2D0WLLk J1MAYxSXTUpqTmZZapG+XQJXxsrZfYwFvXwVjZO/sTUwTuHuYuTkkBAwkVi28wt7FyMXh5DA DiaJ7m3fWCGcr0wSj1fvZwOpEhYIkth0+gSYLSIgKPG9ZwYTiC0k8IlR4s6KIhCbTUBLYs6h /ywgNq+AvcTUFd8YQWwWAVWJGwf62UFsUYE0iTszHjJB1AhKnJz5BKyeUyBQ4nbva7A4s4CZ xLzND5khbHGJW0/mQ8XlJZq3zmaewMg/C0n7LCQts5C0zELSsoCRZRWjXGJOaa5ubmJmTnFq sm5xcmJeXmqRrolebmaJXmpK6SZGSFjy72DsWi9ziFGAg1GJh1cg41GkEGtiWXFl7iFGSQ4m JVHec60PI4X4kvJTKjMSizPii0pzUosPMUpwMCuJ8LI5ApXzpiRWVqUW5cOkpDlYlMR51Zeo +wkJpCeWpGanphakFsFkZTg4lCR4T8wCahQsSk1PrUjLzClBSDNxcIIM5wEavgOkhre4IDG3 ODMdIn+K0Zijp+fGHyaORzfu/mESYsnLz0uVEue9PxOoVACkNKM0D24aLLW8YhQHek6YdxPI QB5gWoKb9wpoFRPQqncRD0BWlSQipKQaGJeu3jWbY07JzHneWSavJln1ZR0TKmt9urDox33D rdapltu+vn2mde5u2S1FwcJTWafuqDUfthW8cnP1VM1bouHVNguDhMTeJf+88fSHX/esrxYR Px9srqsIP2fSLfWA4ZvBdAm56JfLDea9lOWd/vtlz+VfPybuXPddrumDSG7MQdEJk5w/ucQp sRRnJBpqMRcVJwIA5+sWewgDAAA=
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/X3EUFG_7KNzeJaLOtc8RCO5-ny4>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Oct 2017 23:20:01 -0000

On 10/14/17 2:03 PM, Bernard Aboba wrote:
> Gunnar said:
> 
> "Applications not implementing such specific notations may use the 
> following simple deductions.
> 
> - A language tag in audio media is supposed to indicate spoken modality.
> 
> [BA] Even a tag with "Sign Language" in the description??
> 
> - A language tag in text media is supposed to indicate  written modality.
> 
> [BA] If the tag has "Sign Language" in the description, can this 
> document really say that?
> 
> - A language tag in video media is supposed to indicate visual sign 
> language modality except for the case when it is supposed to indicate a 
> view of a speaking person mentioned in section 5.2 characterized by the 
> exact same language tag also appearing in an audio media specification.
> 
> [BA] It seems like an over-reach to say that a spoken language tag in 
> video media should instead be interpreted as a request for Sign 
> Language.  If this were done, would it always be clear which Sign 
> Language was intended?  And could we really assume that both sides, if 
> negotiating a spoken language tag in video media, were really indicating 
> the desire to sign?  It seems like this could easily result 
> interoperability failure.

IMO the right way to indicate that two (or more) media streams are 
conveying alternative representations of the same language content is by 
grouping them with a new grouping attribute. That can tie together an 
audio with a video and/or text. A language tag for sign language on the 
video stream then clarifies to the recipient that it is sign language. 
The grouping attribute by itself can indicate that these streams are 
conveying language.
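As a sketch of what such a grouping could look like in SDP, assuming a hypothetical "LANG" semantics token (invented here purely for illustration; it is not registered, and ports, payload types, and language tags are likewise illustrative):

```
a=group:LANG 1 2
m=audio 49170 RTP/AVP 0
a=mid:1
a=hlang-send:en
a=hlang-recv:en
m=video 51372 RTP/AVP 31
a=mid:2
a=hlang-send:ase
a=hlang-recv:ase
```

Here the group line ties the audio and video m-lines together as carriers of the same language content, and the sign language tag on the video stream identifies its modality.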

(IIRC I suggested something along these lines a long time ago.)

	Thanks,
	Paul


From nobody Sat Oct 14 17:25:27 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4AFCB12895E for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 17:25:25 -0700 (PDT)
X-Quarantine-ID: <CdkwF1xUOjH5>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id CdkwF1xUOjH5 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 17:25:24 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id 3C47D124239 for <slim@ietf.org>; Sat, 14 Oct 2017 17:25:24 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Sat, 14 Oct 2017 17:29:29 -0700
Mime-Version: 1.0
Message-Id: <p06240602d6085a5a8c0f@[172.20.60.54]>
In-Reply-To: <7489472d-894a-4bd5-c589-7dd0a49dee3c@omnitor.se>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se> <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se> <p06240608d607ac1cb56d@172.20.60.54> <CAOW+2du_AMEuU4up==8D=MutY9hz8Vs7J463riZ7WRTS=qUyxw@mail.gmail.com> <7489472d-894a-4bd5-c589-7dd0a49dee3c@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Sat, 14 Oct 2017 17:25:18 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, Bernard Aboba <bernard.aboba@gmail.com>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: "slim@ietf.org" <slim@ietf.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/MP4SvRgbKq-_cLFVJdcJ4xeSf8c>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 00:25:25 -0000

I really think the wording is fine as it is.  I don't see any 
implication that all streams will be used.  However, in an effort to 
move beyond what seems to me a silly argument, rather than try to 
come up with fancy wording that may say the wrong thing, we can 
simply add a very clear and direct note.  I propose we add "(Note that 
not all streams will necessarily be used.)" to the text.  That is, we 
change the section from:

    This document defines two media-level attributes starting with
    'hlang' (short for "human interactive language") to negotiate which
    human language is selected for use in each interactive media stream.
    There are two attributes, one ending in "-send" and the other in
    "-recv", registered in Section 6.  Each can appear in offers and
    answers for media streams.

to:

    This document defines two media-level attributes starting with
    'hlang' (short for "human interactive language") to negotiate which
    human language is selected for use in each interactive media stream.
    (Note that not all streams will necessarily be used.)  There are two
    attributes, one ending in "-send" and the other in "-recv",
    registered in Section 6.  Each can appear in offers and answers for
    media streams.
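
For concreteness, a fragment of an offer using these two attributes 
might look like the following (ports, payload types, and language tags 
are only illustrative; the attribute syntax is as defined in the 
draft):

```
m=audio 49170 RTP/AVP 0
a=hlang-send:en es
a=hlang-recv:en es
m=text 45020 RTP/AVP 103
a=hlang-send:en
a=hlang-recv:en
```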


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Premature optimization is the root of all evil
                              --C. A. R. Hoare


From nobody Sat Oct 14 20:35:48 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9E555132031 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 20:35:47 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2
X-Spam-Level: 
X-Spam-Status: No, score=-2 tagged_above=-999 required=5 tests=[BAYES_00=-1.9,  DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id da4FcofOWGLI for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 20:35:46 -0700 (PDT)
Received: from mail-pf0-x234.google.com (mail-pf0-x234.google.com [IPv6:2607:f8b0:400e:c00::234]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 44E2B1321F5 for <slim@ietf.org>; Sat, 14 Oct 2017 20:35:46 -0700 (PDT)
Received: by mail-pf0-x234.google.com with SMTP id t188so10793835pfd.10 for <slim@ietf.org>; Sat, 14 Oct 2017 20:35:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=UQnpj8allBSeaCsz+Mk2M6KvHj/CqlhrFK1GfDaJg6I=; b=rP56MbIMoOeKE7d1WXv1Rs7+ra4ygUD+XNc8s2B0teWgqLnkHIHFeG2dunG02Pvcl6 Mo3SboMa+/9kTWNtajrLiLiPmKCWtmmK8TZb+nXBcJ2ohdpzVaUvuMiObgaBwgQsPivH PIHqzU8+eF2MLBC7hwd2j1yek71036q/jJmd8Ecy/GedidZocjsTybxW3qQ1lm++R/H7 ol4C3WrsGxN/nh4EjEiknfMTiHeBIc342soHNDMX9wM8VfuIc2toOarxFslwiMIWWXJu RBea0dUTp6v9EKafqTli/da2f5a2u/QvIaKYnXCcZvgcB/5LA9x3XtCzKOdR6mk5wAHq j2Ng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=UQnpj8allBSeaCsz+Mk2M6KvHj/CqlhrFK1GfDaJg6I=; b=pKtdmFRaK7qAWcvHux0MYk3fxw2fsQq8axM6cEqrXAoTvtkkiZlL6nr0wDdQ8gLg32 W3491iA/AI1fp3lMeK/WYYKo47c1VT2fWY0Ur5zrTZ9J8yf3SEnNejZNQAiFpXx+43xR yG7mtyG/dgIrGvG+JQFW6qwsx5q6RRILR2GJEr2edKD2lLszia8w8is+UNTDsEKBQOb+ 9a9ZszjPsigEmoovGHiiFqgQ2FIXcmbMOAKXeTrPuNX77etZk/E0coh2oN5rb6nGqJvy /vLsm/HM57d0neQcXLJsYRAi7JV6KsU3fBIDPDDcelEGhII7Bh0l9q/owIx87fwliGBz moUQ==
X-Gm-Message-State: AMCzsaVddwGivVX5mfqHpaC3jlnbkZZCS5MU4M+P0JORbxzCky2RR8X4 OS7VNKs5fctE1lo7onI7WNHDVU/3
X-Google-Smtp-Source: ABhQp+SkgxK6Xyg5JYr2Fl6kIK2w3bZOhQnYCZioEZRrbCHczS7OJelBv2cYwxoZJFd2XfSEMioP7g==
X-Received: by 10.84.128.9 with SMTP id 9mr128608pla.332.1508038545346; Sat, 14 Oct 2017 20:35:45 -0700 (PDT)
Received: from [10.201.14.158] (c-73-42-175-112.hsd1.wa.comcast.net. [73.42.175.112]) by smtp.gmail.com with ESMTPSA id m9sm8191008pgt.49.2017.10.14.20.35.43 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 14 Oct 2017 20:35:44 -0700 (PDT)
Content-Type: text/plain; charset=us-ascii
Mime-Version: 1.0 (1.0)
From: Bernard Aboba <bernard.aboba@gmail.com>
X-Mailer: iPhone Mail (15A421)
In-Reply-To: <p06240602d6085a5a8c0f@[172.20.60.54]>
Date: Sat, 14 Oct 2017 20:35:43 -0700
Cc: =?utf-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <E3D1744B-C0DC-4382-A1EC-B336CE5F74E7@gmail.com>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se> <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se> <p06240608d607ac1cb56d@172.20.60.54> <CAOW+2du_AMEuU4up==8D=MutY9hz8Vs7J463riZ7WRTS=qUyxw@mail.gmail.com> <7489472d-894a-4bd5-c589-7dd0a49dee3c@omnitor.se> <p06240602d6085a5a8c0f@[172.20.60.54]>
To: Randall Gellens <rg+ietf@randy.pensive.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/NSEAg8WxzLZIQexWXfOwZPHIsEg>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 03:35:48 -0000

Fine with me (though I agree it is not necessary).

> On Oct 14, 2017, at 5:25 PM, Randall Gellens <rg+ietf@randy.pensive.org> wrote:
> 
> I really think the wording is fine as it is.  I don't see any implication that all streams will be used.  However, in an effort to move beyond what seems to me a silly argument, rather than try to come up with fancy wording that may say the wrong thing, we can simply add a very clear and direct note.  I propose we add "(Note that not all streams will necessarily be used.)" to the text.  That is, we change the section from:
> 
>   This document defines two media-level attributes starting with
>   'hlang' (short for "human interactive language") to negotiate which
>   human language is selected for use in each interactive media stream.
>   There are two attributes, one ending in "-send" and the other in
>   "-recv", registered in Section 6.  Each can appear in offers and
>   answers for media streams.
> 
> to:
> 
>   This document defines two media-level attributes starting with
>   'hlang' (short for "human interactive language") to negotiate which
>   human language is selected for use in each interactive media stream.
>   (Note that not all streams will necessarily be used.)  There are two
>   attributes, one ending in "-send" and the other in "-recv",
>   registered in Section 6.  Each can appear in offers and answers for
>   media streams.
> 
> 
> -- 
> Randall Gellens
> Opinions are personal;    facts are suspect;    I speak for myself only
> -------------- Randomly selected tag: ---------------
> Premature optimization is the root of all evil
>                             --C. A. R. Hoare


From nobody Sat Oct 14 23:24:28 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 21FB71326FE for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 23:24:27 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.601
X-Spam-Level: 
X-Spam-Status: No, score=-2.601 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id jpAv4tTuDTg8 for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 23:24:25 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id E13061320D9 for <slim@ietf.org>; Sat, 14 Oct 2017 23:24:24 -0700 (PDT)
X-Halon-ID: 6f5ad0ab-b171-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id 6f5ad0ab-b171-11e7-99c0-005056917f90; Sun, 15 Oct 2017 08:24:02 +0200 (CEST)
To: Paul Kyzivat <pkyzivat@alum.mit.edu>, slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se>
Date: Sun, 15 Oct 2017 08:24:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/lXK879wo9WLDGV10YOowg0-uEHc>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 06:24:27 -0000

Paul,
Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
> On 10/14/17 2:03 PM, Bernard Aboba wrote:
>> Gunnar said:
>>
>> "Applications not implementing such specific notations may use the 
>> following simple deductions.
>>
>> - A language tag in audio media is supposed to indicate spoken modality.
>>
>> [BA] Even a tag with "Sign Language" in the description??
>>
>> - A language tag in text media is supposed to indicate  written 
>> modality.
>>
>> [BA] If the tag has "Sign Language" in the description, can this 
>> document really say that?
>>
>> - A language tag in video media is supposed to indicate visual sign 
>> language modality except for the case when it is supposed to indicate 
>> a view of a speaking person mentioned in section 5.2 characterized by 
>> the exact same language tag also appearing in an audio media 
>> specification.
>>
>> [BA] It seems like an over-reach to say that a spoken language tag in 
>> video media should instead be interpreted as a request for Sign 
>> Language.  If this were done, would it always be clear which Sign 
>> Language was intended?  And could we really assume that both sides, 
>> if negotiating a spoken language tag in video media, were really 
>> indicating the desire to sign?  It seems like this could easily 
>> result in interoperability failure.
>
> IMO the right way to indicate that two (or more) media streams are 
> conveying alternative representations of the same language content is 
> by grouping them with a new grouping attribute. That can tie together 
> an audio with a video and/or text. A language tag for sign language on 
> the video stream then clarifies to the recipient that it is sign 
> language. The grouping attribute by itself can indicate that these 
> streams are conveying language.
<GH>Yes, and that is proposed in 
draft-hellstrom-slim-modality-grouping, with two kinds of grouping. 
One kind of grouping says that two or more languages in different 
streams are alternatives with the same content, and a priority order is 
assigned to them to guide the selection of which one to use during the 
call. The other kind of grouping says that two or more languages in 
different streams are desired together, with the same language content 
but different modalities (such as captioned telephony, with the same 
content provided in both speech and text; sign language interpretation, 
where you see the interpreter; or possibly spoken language 
interpretation, with the languages provided in different audio 
streams). I hope that that draft can be progressed. I see it as a 
needed complement to the pure language indications per media.

The discussion in this thread is more about how an application would 
easily know that e.g. "ase" is a sign language and "en" is a spoken (or 
written) language, and also about what kinds of languages are allowed 
and indicated by default in each media type. It was not at all about 
falsely using language tags in the wrong media type, as Bernard 
understood my wording. It was rather about limiting which modalities 
are used in each media type, and about how to know the modality in 
cases that are not evident, e.g. the "application" and "message" media 
types.

Right now we have returned to a very simple rule: we define only the 
use of spoken language in audio media, written language in text media, 
and sign language in video media.
We have discussed other uses, such as a view of a speaking person in 
video, text overlay on video, a sign language notation in text media, 
written language in message media, written language in WebRTC data 
channels, or signed, written, and spoken language in bucket media, 
perhaps declared as application media. We do not define these cases. 
They are just not defined, not forbidden. They may be defined in the 
future.

My proposed wording in section 5.4 caused too many misunderstandings, so 
I gave it up. I think we can live with 5.4 as it is in version -16.

Thanks,
Gunnar


>
> (IIRC I suggested something along these lines a long time ago.)
>
>     Thanks,
>     Paul
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Sat Oct 14 23:52:15 2017
Return-Path: <internet-drafts@ietf.org>
X-Original-To: slim@ietf.org
Delivered-To: slim@ietfa.amsl.com
Received: from ietfa.amsl.com (localhost [IPv6:::1]) by ietfa.amsl.com (Postfix) with ESMTP id BA64E126D0C; Sat, 14 Oct 2017 23:52:13 -0700 (PDT)
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
From: internet-drafts@ietf.org
To: <i-d-announce@ietf.org>
Cc: slim@ietf.org
X-Test-IDTracker: no
X-IETF-IDTracker: 6.63.1
Auto-Submitted: auto-generated
Precedence: bulk
Message-ID: <150805033374.12283.17013757061340540827@ietfa.amsl.com>
Date: Sat, 14 Oct 2017 23:52:13 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/UTA-_nbGmKbDfi7Eu92NyRMLSJQ>
Subject: [Slim] I-D Action: draft-ietf-slim-negotiating-human-language-17.txt
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 06:52:14 -0000

A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Selection of Language for Internet Media WG of the IETF.

        Title           : Negotiating Human Language in Real-Time Communications
        Author          : Randall Gellens
        Filename        : draft-ietf-slim-negotiating-human-language-17.txt
        Pages           : 16
        Date            : 2017-10-14

Abstract:
   Users have various human (natural) language needs, abilities, and
   preferences regarding spoken, written, and signed languages.  This
   document adds new SDP media-level attributes so that when
   establishing interactive communication sessions ("calls"), it is
   possible to negotiate (communicate and match) the caller's language
   and media needs with the capabilities of the called party.  This is
   especially important with emergency calls, where a call can be
   handled by a call taker capable of communicating with the user, or a
   translator or relay operator can be bridged into the call during
   setup, but this applies to non-emergency calls as well (as an
   example, when calling a company call center).

   This document describes the need and a solution using new SDP media
   attributes.
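
As an illustration of the negotiation described above, an offer from a 
party who prefers to sign in American Sign Language and to fall back to 
written English might carry media-level attributes along these lines. 
The "a=hlang-send"/"a=hlang-recv" value syntax shown is inferred from the 
attribute names the draft registers and may differ in detail from the 
draft's ABNF:

```
m=video 51372 RTP/AVP 99
a=hlang-send:ase
a=hlang-recv:ase
m=text 45020 RTP/AVP 103
a=hlang-send:en
a=hlang-recv:en
```

The answerer would select, per stream, a language it can support from 
those offered.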


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-slim-negotiating-human-language/

There are also htmlized versions available at:
https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17
https://datatracker.ietf.org/doc/html/draft-ietf-slim-negotiating-human-language-17

A diff from the previous version is available at:
https://www.ietf.org/rfcdiff?url2=draft-ietf-slim-negotiating-human-language-17


Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/


From nobody Sat Oct 14 23:53:14 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BF867126D0C for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 23:53:11 -0700 (PDT)
X-Quarantine-ID: <4wp6bHWty06c>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4wp6bHWty06c for <slim@ietfa.amsl.com>; Sat, 14 Oct 2017 23:53:10 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id AA3C71320D9 for <slim@ietf.org>; Sat, 14 Oct 2017 23:53:10 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Sat, 14 Oct 2017 23:57:16 -0700
Mime-Version: 1.0
Message-Id: <p06240600d608b6256c16@[172.20.60.54]>
In-Reply-To: <E3D1744B-C0DC-4382-A1EC-B336CE5F74E7@gmail.com>
References: <3e945827-8310-56aa-b2e5-7a9405ff85c4@omnitor.se> <p06240621d606585e823d@99.111.97.136> <57690f3d-faa2-18d8-f270-8ae179f39e68@omnitor.se> <p06240628d6066c091e76@99.111.97.136> <fea21ce6-398a-ebbb-5881-abe732c8983b@omnitor.se> <CAOW+2dubW_Pc-JKtTOZjSGeCWw=3bSwd1tqvObSwf4fyzs4Eig@mail.gmail.com> <9dafe618-8d7d-76ba-91e2-41e3b5ce1f3b@omnitor.se> <ABDCB89A-4BF0-494C-A729-3EB6529DA618@brianrosen.net> <59f36c7d-41fc-68f5-1395-b0450689f5ca@omnitor.se> <7750ee16-18a0-3f44-5d79-d50967447d8e@omnitor.se> <p06240608d607ac1cb56d@172.20.60.54> <CAOW+2du_AMEuU4up==8D=MutY9hz8Vs7J463riZ7WRTS=qUyxw@mail.gmail.com> <7489472d-894a-4bd5-c589-7dd0a49dee3c@omnitor.se> <p06240602d6085a5a8c0f@[172.20.60.54]> <E3D1744B-C0DC-4382-A1EC-B336CE5F74E7@gmail.com>
X-Mailer: Eudora for Mac OS X
Date: Sat, 14 Oct 2017 23:53:05 -0700
To: Bernard Aboba <bernard.aboba@gmail.com>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: =?utf-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>, "slim@ietf.org" <slim@ietf.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii" ; format="flowed"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/NROawB6JHbU_QAS7B7Zzc5SDQSY>
Subject: Re: [Slim] Indication of modality alternatives in draft-ietf-slim-negotiating-human-language -Issue #46
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 06:53:12 -0000

-17 uploaded (with this change).

I think this closes all issues.

--Randall

At 8:35 PM -0700 10/14/17, Bernard Aboba wrote:

>  Fine with me (though I agree it is not necessary).
>
>>  On Oct 14, 2017, at 5:25 PM, Randall Gellens 
>> <rg+ietf@randy.pensive.org> wrote:
>>
>>  I really think the wording is fine as it is.  I don't see any 
>> implication that all streams will be used.  However, in an effort 
>> to move beyond what seems to me a silly argument, rather than try 
>> to come up with fancy wording that may say the wrong thing, we can 
>> simply ad a very clear and direct note.  I propose we add "(Note 
>> that not all streams will necessarily be used.)" to the text. 
>> That is, we change the section from:
>>
>>    This document defines two media-level attributes starting with
>>    'hlang' (short for "human interactive language") to negotiate which
>>    human language is selected for use in each interactive media stream.
>>    There are two attributes, one ending in "-send" and the other in
>>    "-recv", registered in Section 6.  Each can appear in offers and
>>    answers for media streams.
>>
>>  to:
>>
>>    This document defines two media-level attributes starting with
>>    'hlang' (short for "human interactive language") to negotiate which
>>    human language is selected for use in each interactive media stream.
>>    (Note that not all streams will necessarily be used.)  There are two
>>    attributes, one ending in "-send" and the other in "-recv",
>>    registered in Section 6.  Each can appear in offers and answers for
>>    media streams.
>>
>>
>>  --
>>  Randall Gellens
>>  Opinions are personal;    facts are suspect;    I speak for myself only
>>  -------------- Randomly selected tag: ---------------
>>  Premature optimization is the root of all evil
>>                              --C. A. R. Hoare


-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
Stolen Painting Found by Tree
--Newspaper headline


From nobody Sun Oct 15 00:08:29 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 34E53126D0C for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 00:08:28 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.1
X-Spam-Level: 
X-Spam-Status: No, score=-0.1 tagged_above=-999 required=5 tests=[BAYES_40=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id dZJoWkQ4e_Za for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 00:08:27 -0700 (PDT)
Received: from mail-ua0-x231.google.com (mail-ua0-x231.google.com [IPv6:2607:f8b0:400c:c08::231]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id C4C401320D9 for <slim@ietf.org>; Sun, 15 Oct 2017 00:08:26 -0700 (PDT)
Received: by mail-ua0-x231.google.com with SMTP id l40so7793155uah.2 for <slim@ietf.org>; Sun, 15 Oct 2017 00:08:26 -0700 (PDT)
X-Received: by 10.176.20.225 with SMTP id f30mr5218748uae.66.1508051305583; Sun, 15 Oct 2017 00:08:25 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Sun, 15 Oct 2017 00:08:05 -0700 (PDT)
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Sun, 15 Oct 2017 00:08:05 -0700
Message-ID: <CAOW+2dvXSs-xzKknqWjVP7W8H4QWkHR0vrK2X=X1hZedHJbZMQ@mail.gmail.com>
To: slim@ietf.org
Content-Type: multipart/alternative; boundary="001a1145ab005f60de055b908e80"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/usRyQjDzTLDdvuzy3cp35tIJZhw>
Subject: [Slim] Status of draft-ietf-slim-negotiating-human-language-17
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 07:08:28 -0000

--001a1145ab005f60de055b908e80
Content-Type: text/plain; charset="UTF-8"

Many thanks to Randall for the progress made over the last days.

In the progression from -14 to -17, it appears to me that Issues 41 and 47
have been resolved, and as a result I have closed these Issues in
Datatracker.  If this impression is mistaken, please speak up now.

At this point, I am leaving Issue 43 open, pending verification from the
reviewer (Dale) and the WG that the resolution in -17 is satisfactory.

Opinions?

--001a1145ab005f60de055b908e80--


From nobody Sun Oct 15 10:13:27 2017
Return-Path: <pkyzivat@alum.mit.edu>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 177881331D2 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 10:13:26 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.201
X-Spam-Level: 
X-Spam-Status: No, score=-4.201 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_MED=-2.3, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id z_yn9FHwZVo9 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 10:13:24 -0700 (PDT)
Received: from alum-mailsec-scanner-3.mit.edu (alum-mailsec-scanner-3.mit.edu [18.7.68.14]) by ietfa.amsl.com (Postfix) with ESMTP id BF99E1270AB for <slim@ietf.org>; Sun, 15 Oct 2017 10:13:23 -0700 (PDT)
X-AuditID: 1207440e-bf9ff70000007085-95-59e397309328
Received: from outgoing-alum.mit.edu (OUTGOING-ALUM.MIT.EDU [18.7.68.33]) (using TLS with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by alum-mailsec-scanner-3.mit.edu (Symantec Messaging Gateway) with SMTP id FB.A3.28805.03793E95; Sun, 15 Oct 2017 13:13:21 -0400 (EDT)
Received: from PaulKyzivatsMBP.localdomain (c-24-62-227-142.hsd1.ma.comcast.net [24.62.227.142]) (authenticated bits=0) (User authenticated as pkyzivat@ALUM.MIT.EDU) by outgoing-alum.mit.edu (8.13.8/8.12.4) with ESMTP id v9FHDIJa016801 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NOT); Sun, 15 Oct 2017 13:13:19 -0400
To: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>, slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se>
From: Paul Kyzivat <pkyzivat@alum.mit.edu>
Message-ID: <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu>
Date: Sun, 15 Oct 2017 13:13:18 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/UVANZiQoDE9zjSlETLWYdZfO9oY>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 17:13:26 -0000

On 10/15/17 2:24 AM, Gunnar Hellström wrote:
> Paul,
> Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>> On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>> Gunnar said:
>>>
>>> "Applications not implementing such specific notations may use the 
>>> following simple deductions.
>>>
>>> - A language tag in audio media is supposed to indicate spoken modality.
>>>
>>> [BA] Even a tag with "Sign Language" in the description??
>>>
>>> - A language tag in text media is supposed to indicate  written 
>>> modality.
>>>
>>> [BA] If the tag has "Sign Language" in the description, can this 
>>> document really say that?
>>>
>>> - A language tag in video media is supposed to indicate visual sign 
>>> language modality except for the case when it is supposed to indicate 
>>> a view of a speaking person mentioned in section 5.2 characterized by 
>>> the exact same language tag also appearing in an audio media 
>>> specification.
>>>
>>> [BA] It seems like an over-reach to say that a spoken language tag in 
>>> video media should instead be interpreted as a request for Sign 
>>> Language.  If this were done, would it always be clear which Sign 
>>> Language was intended?  And could we really assume that both sides, 
>>> if negotiating a spoken language tag in video media, were really 
>>> indicating the desire to sign?  It seems like this could easily 
>>> result in interoperability failure.
>>
>> IMO the right way to indicate that two (or more) media streams are 
>> conveying alternative representations of the same language content is 
>> by grouping them with a new grouping attribute. That can tie together 
>> an audio with a video and/or text. A language tag for sign language on 
>> the video stream then clarifies to the recipient that it is sign 
>> language. The grouping attribute by itself can indicate that these 
>> streams are conveying language.
> <GH>Yes, and that is proposed in 
> draft-hellstrom-slim-modality-grouping    with two kinds of grouping: 
> One kind of grouping to tell that two or more languages in different 
> streams are alternatives with the same content and a priority order is 
> assigned to them to guide the selection of which one to use during the 
> call. The other kind of grouping telling that two or more languages in 
> different streams are desired together with the same language content 
> but different modalities ( such as the use for captioned telephony with 
> the same content provided in both speech and text, or sign language 
> interpretation where you see the interpreter,  or possibly spoken 
> language interpretation with the languages provided in different audio 
> streams ). I hope that that draft can be progressed. I see it as a 
> needed complement to the pure language indications per media.

Oh, sorry. I did read that draft but forgot about it.

> The discussion in this thread is more about how an application would 
> easily know that e.g. "ase" is a sign language and "en" is a spoken (or 
> written) language, and also a discussion about what kinds of languages 
> are allowed and indicated by default in each media type. It was not at 
> all about falsely using language tags in the wrong media type as Bernard 
> understood my wording. It was rather a limitation to what modalities are 
> used in each media type and how to know the modality with cases that are 
> not evident, e.g. "application" and "message" media types.

What do you mean by "know"? Is it for the *UA* software to know, or for 
the human user of the UA to know? Presumably a human user that cares 
will understand this if presented with the information in some way. But 
typically this isn't presented to the user.

For the software to know must mean that it will behave differently for a 
tag that represents a sign language than for one that represents a 
spoken or written language. What is it that it will do differently?

	Thanks,
	Paul

> Right now we have returned to a very simple rule: we define only use of 
> spoken language in audio media, written language in text media and sign 
> language in video media.
> We have discussed other use, such as a view of a speaking person in 
> video, text overlay on video, a sign language notation in text media, 
> written language in message media, written language in WebRTC data 
> channels, sign written and spoken in bucket media maybe declared as 
> application media. We do not define these cases. They are just not 
> defined, not forbidden. They may be defined in the future.
> 
> My proposed wording in section 5.4 got too many misunderstandings so I 
> gave up with it. I think we can live with 5.4 as it is in version -16.
> 
> Thanks,
> Gunnar
> 
> 
>>
>> (IIRC I suggested something along these lines a long time ago.)
>>
>>     Thanks,
>>     Paul
>>
>> _______________________________________________
>> SLIM mailing list
>> SLIM@ietf.org
>> https://www.ietf.org/mailman/listinfo/slim
> 


From nobody Sun Oct 15 10:49:42 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9EFD413292A for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 10:49:39 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id S8wCV7EBtOkP for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 10:49:36 -0700 (PDT)
Received: from mail-ua0-x236.google.com (mail-ua0-x236.google.com [IPv6:2607:f8b0:400c:c08::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 56B6F126B71 for <slim@ietf.org>; Sun, 15 Oct 2017 10:49:36 -0700 (PDT)
Received: by mail-ua0-x236.google.com with SMTP id v27so8291833uav.7 for <slim@ietf.org>; Sun, 15 Oct 2017 10:49:36 -0700 (PDT)
X-Received: by 10.176.73.72 with SMTP id a8mr5980076uad.65.1508089775129; Sun, 15 Oct 2017 10:49:35 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Sun, 15 Oct 2017 10:49:14 -0700 (PDT)
In-Reply-To: <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu>
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Sun, 15 Oct 2017 10:49:14 -0700
Message-ID: <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com>
To: Paul Kyzivat <pkyzivat@alum.mit.edu>
Cc: =?UTF-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>,  slim@ietf.org
Content-Type: multipart/alternative; boundary="001a11453754560d29055b99832e"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/wulQfKTEIthfmmMxC2Anx2vlV-E>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 17:49:40 -0000

--001a11453754560d29055b99832e
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Paul said:

"For the software to know must mean that it will behave differently for a
tag that represents a sign language than for one that represents a spoken
or written language. What is it that it will do differently?"

[BA] In terms of behavior based on the signed/non-signed distinction, in
-17 the only reference appears to be in Section 5.4, stating that certain
combinations are not defined in the document (but that definition of those
combinations was out of scope):

5.4 <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>.
Undefined Combinations

   The behavior when specifying a non-signed language tag for a video
   media stream, or a signed language tag for an audio or text media
   stream, is not defined in this document.

   The problem of knowing which language tags are signed and which are
   not is out of scope of this document.
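
Section 5.4 leaves "which language tags are signed" out of scope, but the 
thread's question (how software would know that "ase" is a sign language) 
can be answered in practice from the IANA Language Subtag Registry, where 
sign-language subtags carry descriptions such as "American Sign 
Language". Below is a minimal sketch of that heuristic; the embedded 
records are short excerpts of the real registry format, and matching on 
the description text is an illustrative assumption, not something the 
draft or the registry mandates:

```python
# Heuristic check for sign-language subtags, using records in the format
# of the IANA Language Subtag Registry. The three records below are
# excerpts; a real implementation would parse the full registry file.
REGISTRY_EXCERPT = """\
%%
Type: language
Subtag: ase
Description: American Sign Language
Added: 2009-07-29
%%
Type: language
Subtag: en
Description: English
Added: 2005-10-16
%%
Type: language
Subtag: ssp
Description: Spanish Sign Language
Added: 2009-07-29
"""

def parse_registry(text):
    """Parse '%%'-separated registry records into {subtag: description}."""
    records = {}
    for chunk in text.split("%%"):
        fields = dict(
            line.split(": ", 1)
            for line in chunk.strip().splitlines()
            if ": " in line
        )
        if fields.get("Type") == "language":
            records[fields["Subtag"]] = fields.get("Description", "")
    return records

def is_sign_language(tag, registry):
    """True if the tag's primary subtag is described as a sign language."""
    primary = tag.split("-")[0].lower()
    return "sign language" in registry.get(primary, "").lower()

registry = parse_registry(REGISTRY_EXCERPT)
print(is_sign_language("ase", registry))   # -> True
print(is_sign_language("en", registry))    # -> False
```

A production implementation would load and cache the full registry file 
published by IANA, since subtags and descriptions change over time.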



On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat <pkyzivat@alum.mit.edu>
wrote:

> On 10/15/17 2:24 AM, Gunnar Hellstr=C3=B6m wrote:
>
>> Paul,
>> Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>
>>> On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>
>>>> Gunnar said:
>>>>
>>>> "Applications not implementing such specific notations may use the
>>>> following simple deductions.
>>>>
>>>> - A language tag in audio media is supposed to indicate spoken modalit=
y.
>>>>
>>>> [BA] Even a tag with "Sign Language" in the description??
>>>>
>>>> - A language tag in text media is supposed to indicate  written
>>>> modality.
>>>>
>>>> [BA] If the tag has "Sign Language" in the description, can this
>>>> document really say that?
>>>>
>>>> - A language tag in video media is supposed to indicate visual sign
>>>> language modality except for the case when it is supposed to indicate =
a
>>>> view of a speaking person mentioned in section 5.2 characterized by th=
e
>>>> exact same language tag also appearing in an audio media specification=
.
>>>>
>>>> [BA] It seems like an over-reach to say that a spoken language tag in
>>>> video media should instead be interpreted as a request for Sign Langua=
ge.
>>>> If this were done, would it always be clear which Sign Language was
>>>> intended?  And could we really assume that both sides, if negotiating =
a
>>>> spoken language tag in video media, were really indicating the desire =
to
>>>> sign?  It seems like this could easily result interoperability failure=
.
>>>>
>>>
>>> IMO the right way to indicate that two (or more) media streams are
>>> conveying alternative representations of the same language content is b=
y
>>> grouping them with a new grouping attribute. That can tie together an a=
udio
>>> with a video and/or text. A language tag for sign language on the video
>>> stream then clarifies to the recipient that it is sign language. The
>>> grouping attribute by itself can indicate that these streams are convey=
ing
>>> language.
>>>
>> <GH>Yes, and that is proposed in draft-hellstrom-slim-modality-grouping
>> with two kinds of grouping: one kind of grouping to tell that two or more
>> languages in different streams are alternatives with the same content, and a
>> priority order is assigned to them to guide the selection of which one to
>> use during the call. The other kind of grouping tells that two or more
>> languages in different streams are desired together, with the same language
>> content but different modalities (such as the use for captioned telephony
>> with the same content provided in both speech and text, or sign language
>> interpretation where you see the interpreter, or possibly spoken language
>> interpretation with the languages provided in different audio streams). I
>> hope that that draft can be progressed. I see it as a needed complement to
>> the pure language indications per media.
>>
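For illustration, such grouping could look like the following SDP sketch. The `a=group:LANG` semantics token here is purely hypothetical (a new grouping attribute of the kind discussed, not defined in any published draft); the `a=hlang-send` attribute is from draft-ietf-slim-negotiating-human-language, and `a=group`/`a=mid` are the standard SDP grouping framework:

```
v=0
o=- 0 0 IN IP4 198.51.100.1
s=-
c=IN IP4 198.51.100.1
t=0 0
a=group:LANG 1 2
m=audio 49170 RTP/AVP 0
a=mid:1
a=hlang-send:en
m=video 49174 RTP/AVP 31
a=mid:2
a=hlang-send:ase
```

The hypothetical LANG group would tie the audio stream (spoken English, "en") and the video stream (American Sign Language, "ase") together as representations of the same language content.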
>
> Oh, sorry. I did read that draft but forgot about it.
>
> The discussion in this thread is more about how an application would
>> easily know that e.g. "ase" is a sign language and "en" is a spoken (or
>> written) language, and also a discussion about what kinds of languages are
>> allowed and indicated by default in each media type. It was not at all
>> about falsely using language tags in the wrong media type, as Bernard
>> understood my wording. It was rather a limitation to what modalities are
>> used in each media type and how to know the modality in cases that are
>> not evident, e.g. "application" and "message" media types.
>>
>
> What do you mean by "know"? Is it for the *UA* software to know, or for
> the human user of the UA to know? Presumably a human user that cares will
> understand this if presented with the information in some way. But
> typically this isn't presented to the user.
>
> For the software to know must mean that it will behave differently for a
> tag that represents a sign language than for one that represents a spoken
> or written language. What is it that it will do differently?
>
>         Thanks,
>         Paul
>
>
> Right now we have returned to a very simple rule: we define only use of
>> spoken language in audio media, written language in text media, and sign
>> language in video media.
>> We have discussed other uses, such as a view of a speaking person in
>> video, text overlay on video, a sign language notation in text media,
>> written language in message media, written language in WebRTC data
>> channels, and sign, written, and spoken language in bucket media, maybe
>> declared as application media. We do not define these cases. They are
>> just not defined, not forbidden. They may be defined in the future.
>>
>> My proposed wording in section 5.4 got too many misunderstandings, so I
>> gave up on it. I think we can live with 5.4 as it is in version -16.
>>
>> Thanks,
>> Gunnar
>>
>>
>>
>>> (IIRC I suggested something along these lines a long time ago.)
>>>
>>>     Thanks,
>>>     Paul
>>>
>>> _______________________________________________
>>> SLIM mailing list
>>> SLIM@ietf.org
>>> https://www.ietf.org/mailman/listinfo/slim
>>>
>>
>>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim
>

--001a11453754560d29055b99832e--


From nobody Sun Oct 15 12:27:48 2017
From: Paul Kyzivat <pkyzivat@alum.mit.edu>
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Gunnar Hellström <gunnar.hellstrom@omnitor.se>, slim@ietf.org
Date: Sun, 15 Oct 2017 15:27:40 -0400
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?

On 10/15/17 1:49 PM, Bernard Aboba wrote:
> Paul said:
> 
> "For the software to know must mean that it will behave differently for 
> a tag that represents a sign language than for one that represents a 
> spoken or written language. What is it that it will do differently?"
> 
> [BA] In terms of behavior based on the signed/non-signed distinction, in 
> -17 the only reference appears to be in Section 5.4, stating that 
> certain combinations are not defined in the document (but that 
> definition of those combinations was out of scope):

I'm asking whether this is a distinction without a difference. I'm not 
asking whether this makes a difference in the *protocol*, but whether in 
the end it benefits the participants in the call in any way. For instance:

- does it help the UA to decide how to alert the callee, so that the
   callee can better decide whether to accept the call or instruct the
   UA about how to handle the call?

- does it allow the UA to make a decision whether to accept the media?

- can the UA use this information to change how to render the media?

And if there is something like this, will the UA be able to do this 
generically based on whether the media is sign language or not, or will 
the UA need to already understand *specific* sign language tags?

E.g., a UA serving a deaf person might automatically introduce a sign 
language interpreter into an incoming audio-only call. If the incoming 
call has both audio and video then the video *might* be for conveying 
sign language, or not. If not then the UA will still want to bring in a 
sign language interpreter. But is knowing the call generically contains 
sign language sufficient to decide against bringing in an interpreter? 
Or must that depend on it being a sign language that the user can use? 
If the UA is configured for all the specific sign languages that the 
user can deal with then there is no need to recognize other sign 
languages generically.
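Paul's interpreter scenario can be sketched in a few lines (a minimal sketch under stated assumptions: the tag set and function name are illustrative, not taken from any draft):

```python
# Sketch: should a UA serving a deaf user bring a sign language
# interpreter into a call? The decision is based only on negotiated
# language tags on video media. The tag set is an illustrative sample.

USER_SIGN_LANGUAGES = {"ase"}  # sign languages the user can use (assumed)

def needs_interpreter(video_language_tags):
    """Return True if no offered video language is one the user signs.

    video_language_tags is empty for an audio-only call, or when the
    video carries no language indication.
    """
    return not any(tag in USER_SIGN_LANGUAGES for tag in video_language_tags)
```

An audio-only call (`needs_interpreter([])`) yields True; a call whose video offers "ase" yields False; a call offering only "bfi" (British Sign Language) still yields True unless the UA recognizes sign languages generically, which is exactly Paul's question.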

	Thanks,
	Paul

>       5.4
>       <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>.
>       Undefined Combinations
> 
> 
> 
>     The behavior when specifying a non-signed language tag for a video
>     media stream, or a signed language tag for an audio or text media
>     stream, is not defined in this document.
> 
>     The problem of knowing which language tags are signed and which are
>     not is out of scope of this document.
> 
> 
> 
> On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat <pkyzivat@alum.mit.edu 
> <mailto:pkyzivat@alum.mit.edu>> wrote:
> 
>     On 10/15/17 2:24 AM, Gunnar Hellström wrote:
> 
>         Paul,
>         On 2017-10-15 at 01:19, Paul Kyzivat wrote:
> 
>             On 10/14/17 2:03 PM, Bernard Aboba wrote:
> 
>                 Gunnar said:
> 
>                 "Applications not implementing such specific notations
>                 may use the following simple deductions.
> 
>                 - A language tag in audio media is supposed to indicate
>                 spoken modality.
> 
>                 [BA] Even a tag with "Sign Language" in the description??
> 
>                 - A language tag in text media is supposed to indicate 
>                 written modality.
> 
>                 [BA] If the tag has "Sign Language" in the description,
>                 can this document really say that?
> 
>                 - A language tag in video media is supposed to indicate
>                 visual sign language modality except for the case when
>                 it is supposed to indicate a view of a speaking person
>                 mentioned in section 5.2 characterized by the exact same
>                 language tag also appearing in an audio media specification.
> 
>                 [BA] It seems like an over-reach to say that a spoken
>                 language tag in video media should instead be
>                 interpreted as a request for Sign Language.  If this
>                 were done, would it always be clear which Sign Language
>                 was intended?  And could we really assume that both
>                 sides, if negotiating a spoken language tag in video
>                 media, were really indicating the desire to sign?  It
>                 seems like this could easily result in interoperability
>                 failure.
> 
> 
>             IMO the right way to indicate that two (or more) media
>             streams are conveying alternative representations of the
>             same language content is by grouping them with a new
>             grouping attribute. That can tie together an audio with a
>             video and/or text. A language tag for sign language on the
>             video stream then clarifies to the recipient that it is sign
>             language. The grouping attribute by itself can indicate that
>             these streams are conveying language.
> 
>         <GH>Yes, and that is proposed in
>         draft-hellstrom-slim-modality-grouping with two kinds of
>         grouping: One kind of grouping to tell that two or more
>         languages in different streams are alternatives with the same
>         content and a priority order is assigned to them to guide the
>         selection of which one to use during the call. The other kind of
>         grouping telling that two or more languages in different streams
>         are desired together with the same language content but
>         different modalities ( such as the use for captioned telephony
>         with the same content provided in both speech and text, or sign
>         language interpretation where you see the interpreter,  or
>         possibly spoken language interpretation with the languages
>         provided in different audio streams ). I hope that that draft
>         can be progressed. I see it as a needed complement to the pure
>         language indications per media.
> 
> 
>     Oh, sorry. I did read that draft but forgot about it.
> 
>         The discussion in this thread is more about how an application
>         would easily know that e.g. "ase" is a sign language and "en" is
>         a spoken (or written) language, and also a discussion about what
>         kinds of languages are allowed and indicated by default in each
>         media type. It was not at all about falsely using language tags
>         in the wrong media type as Bernard understood my wording. It was
>         rather a limitation to what modalities are used in each media
>         type and how to know the modality with cases that are not
>         evident, e.g. "application" and "message" media types.
> 
> 
>     What do you mean by "know"? Is it for the *UA* software to know, or
>     for the human user of the UA to know? Presumably a human user that
>     cares will understand this if presented with the information in some
>     way. But typically this isn't presented to the user.
> 
>     For the software to know must mean that it will behave differently
>     for a tag that represents a sign language than for one that
>     represents a spoken or written language. What is it that it will do
>     differently?
> 
>              Thanks,
>              Paul
> 
> 
>         Right now we have returned to a very simple rule: we define only
>         use of spoken language in audio media, written language in text
>         media and sign language in video media.
>         We have discussed other use, such as a view of a speaking person
>         in video, text overlay on video, a sign language notation in
>         text media, written language in message media, written language
>         in WebRTC data channels, sign written and spoken in bucket media
>         maybe declared as application media. We do not define these
>         cases. They are just not defined, not forbidden. They may be
>         defined in the future.
> 
>         My proposed wording in section 5.4 got too many
>         misunderstandings so I gave up with it. I think we can live with
>         5.4 as it is in version -16.
> 
>         Thanks,
>         Gunnar
> 
> 
> 
>             (IIRC I suggested something along these lines a long time ago.)
> 
>                  Thanks,
>                  Paul
> 
>             _______________________________________________
>             SLIM mailing list
>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>             https://www.ietf.org/mailman/listinfo/slim
>             <https://www.ietf.org/mailman/listinfo/slim>
> 
> 
> 
>     _______________________________________________
>     SLIM mailing list
>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>     https://www.ietf.org/mailman/listinfo/slim
>     <https://www.ietf.org/mailman/listinfo/slim>
> 
> 


From nobody Sun Oct 15 14:22:20 2017
From: Gunnar Hellström <gunnar.hellstrom@omnitor.se>
To: Paul Kyzivat <pkyzivat@alum.mit.edu>, Bernard Aboba <bernard.aboba@gmail.com>
Cc: slim@ietf.org
Date: Sun, 15 Oct 2017 23:22:08 +0200
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?

On 2017-10-15 at 21:27, Paul Kyzivat wrote:
> On 10/15/17 1:49 PM, Bernard Aboba wrote:
>> Paul said:
>>
>> "For the software to know must mean that it will behave differently 
>> for a tag that represents a sign language than for one that 
>> represents a spoken or written language. What is it that it will do 
>> differently?"
>>
>> [BA] In terms of behavior based on the signed/non-signed distinction, 
>> in -17 the only reference appears to be in Section 5.4, stating that 
>> certain combinations are not defined in the document (but that 
>> definition of those combinations was out of scope):
>
> I'm asking whether this is a distinction without a difference. I'm not 
> asking whether this makes a difference in the *protocol*, but whether 
> in the end it benefits the participants in the call in any way. 
<GH>Good point, I was on my way to make a similar comment earlier today. 
The difference it makes for applications to "know" what modality a 
language tag represents in the position where it is used seems to matter 
only for imagined functions that are out of scope for the protocol 
specification.
> For instance:
>
> - does it help the UA to decide how to alert the callee, so that the
>   callee can better decide whether to accept the call or instruct the
>   UA about how to handle the call?
<GH>Yes, for a regular human user-to-user call, the result of the 
negotiation must be presented to the participants, so that they can 
start the call with an agreed language and modality.
That presentation could be exactly the description from the language tag 
registry, in which case no "knowledge" is needed from the application. But 
it is more likely that the application has its own string for presenting 
the negotiated language and modality, so that is what will be shown. But 
it is still found by a table lookup from language tag to language-name 
string, so no real knowledge is needed.
We have said many times that the way the application tells the user the 
result of the negotiation is out of scope for the draft, but it is good 
to discuss and know that it can be done.
A similar mechanism is also needed for configuring the user's 
language preference profile, discussed further below.
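That table lookup can be sketched directly (the table contents are illustrative; a real application would carry localized strings for its supported subset of the registry):

```python
# Sketch: map a negotiated language tag to a display string by plain
# table lookup, with no modality "knowledge" in the application itself.
# Entries are an illustrative sample, not the IANA registry contents.

TAG_NAMES = {
    "en": "English (spoken)",
    "sv": "Swedish (spoken)",
    "ase": "American Sign Language",
}

def display_name(tag):
    # Unknown tags fall back to the raw tag itself.
    return TAG_NAMES.get(tag, tag)
```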
>
> - does it allow the UA to make a decision whether to accept the media?
<GH>No, the media should be accepted regardless of the result of the 
language negotiation.
>
> - can the UA use this information to change how to render the media?
<GH>Yes, for the specialized text notation of sign language that we have 
discussed but currently placed out of scope, a very special rendering 
application is needed. The modality would be recognized by a script 
subtag on a sign language tag used in text media. However, I think it 
would be best to also use a specific text subtype, so that the 
rendering can be controlled by invocation of a "codec" for that rendering.
>
> And if there is something like this, will the UA be able to do this 
> generically based on whether the media is sign language or not, or 
> will the UA need to already understand *specific* sign language tags?
<GH>Applications will need to have localized versions of the names for 
the different sign languages, and also for spoken and written 
languages, to be used when setting preferences and announcing the 
results of the negotiation. It might be overkill to have such localized 
names for all languages in the IANA language registry, so the application 
will need to be able to handle localized names of a subset of the 
registry. With good design, however, this is just an automatic translation 
between a language tag and a corresponding name, so it does not in fact 
require any "knowledge" of what modality is used with each language tag.
The application can ask during configuration:
"Which languages do you want to offer to send in video?"
"Which languages do you want to offer to send in text?"
"Which languages do you want to offer to send in audio?"
"Which languages do you want to be prepared to receive in video?"
"Which languages do you want to be prepared to receive in text?"
"Which languages do you want to be prepared to receive in audio?"

And for each question provide a list of language names to select from. 
When the selection is made, the corresponding language tag is placed in 
the profile for negotiation.

If the application provides the whole IANA language registry to the user 
for each question, then there is a possibility that the user by mistake 
selects a language that requires another modality than the question was 
about. If the application is to limit the lists provided for each 
question, then it will need a kind of knowledge about which language 
tags suit each modality (and media).
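The kind of knowledge needed for such limiting can be sketched as a set membership test (the signed-tag sample below is illustrative, not the registry's actual contents):

```python
# Sketch: limit the language list offered for each configuration
# question to tags that suit the media type's default modality.
# SIGNED_TAGS is an illustrative sample of sign-language tags.

SIGNED_TAGS = {"ase", "bfi", "sgn"}

def tags_for_media(all_tags, media):
    """Return the subset of all_tags suited to one media type."""
    if media == "video":
        # video: sign languages, by the draft's simple rule
        return [t for t in all_tags if t in SIGNED_TAGS]
    # audio and text: spoken/written languages
    return [t for t in all_tags if t not in SIGNED_TAGS]
```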


>
> E.g., A UA serving a deaf person might automatically introduce a sign 
> language interpreter into an incoming audio-only call. If the incoming 
> call has both audio and video then the video *might* be for conveying 
> sign language, or not. If not then the UA will still want to bring in 
> a sign language interpreter. But is knowing the call generically 
> contains sign language sufficient to decide against bringing in an 
> interpreter? Or must that depend on it being a sign language that the 
> user can use? If the UA is configured for all the specific sign 
> languages that the user can deal with then there is no need to 
> recognize other sign languages generically.
<GH>We are talking about specific language tags here and knowing what 
modality they are used for. The user needs to specify which sign 
languages they prefer to use. The callee application can be made to look 
for gaps between what the caller offers and what the callee can accept, 
and from that deduce which type of conversion, and between which 
languages, is needed, and invoke that as a relay service. That invocation can be made 
completely table driven and have corresponding translation profiles for 
available relay services. But it is more likely that it is done by 
having some knowledge about which languages are sign languages and which 
are spoken languages and sending the call to the relay service to try to 
sort out if they can handle the translation.
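The table-driven gap analysis and relay invocation described above might 
look roughly like this (an illustrative Python sketch; the relay table, 
its URI, and the matching policy are all invented for the example):

```python
# Hypothetical table of available relay services, keyed by the
# (offered-language, accepted-language) pair they can translate
# between. The URI is invented.
RELAY_SERVICES = {
    ("ase", "en"): "sip:vrs@relay.example.com",
    ("en", "ase"): "sip:vrs@relay.example.com",
}

def find_relay(caller_offers, callee_accepts):
    """Return None when a direct language match exists; otherwise look
    up a relay service able to bridge the gap, if any."""
    if set(caller_offers) & set(callee_accepts):
        return None  # direct match, no relay needed
    for offered in caller_offers:
        for accepted in callee_accepts:
            uri = RELAY_SERVICES.get((offered, accepted))
            if uri is not None:
                return uri
    return None  # no suitable relay known
```

As the text notes, a real deployment would more likely just send the 
call to a relay service and let it decide whether it can handle the 
translation.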
>
>
So, the answer is - no, the application does not really have any 
knowledge about which modality a language tag represents in its used 
position. If the user chooses to indicate a very unusual language tag 
for a medium, then a match will simply become very unlikely.

Where does this discussion take us? Should we modify section 5.4 again?

Thanks
Gunnar
>     Thanks,
>     Paul
>
>>       5.4
>> <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>.
>>       Undefined Combinations
>>
>>
>>
>>     The behavior when specifying a non-signed language tag for a video
>>     media stream, or a signed language tag for an audio or text media
>>     stream, is not defined in this document.
>>
>>     The problem of knowing which language tags are signed and which are
>>     not is out of scope of this document.
>>
>>
>>
>> On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat <pkyzivat@alum.mit.edu 
>> <mailto:pkyzivat@alum.mit.edu>> wrote:
>>
>>     On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>
>>         Paul,
>>         Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>
>>             On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>
>>                 Gunnar said:
>>
>>                 "Applications not implementing such specific notations
>>                 may use the following simple deductions.
>>
>>                 - A language tag in audio media is supposed to indicate
>>                 spoken modality.
>>
>>                 [BA] Even a tag with "Sign Language" in the 
>> description??
>>
>>                 - A language tag in text media is supposed to indicate
>>                 written modality.
>>
>>                 [BA] If the tag has "Sign Language" in the description,
>>                 can this document really say that?
>>
>>                 - A language tag in video media is supposed to indicate
>>                 visual sign language modality except for the case when
>>                 it is supposed to indicate a view of a speaking person
>>                 mentioned in section 5.2 characterized by the exact same
>>                 language tag also appearing in an audio media 
>> specification.
>>
>>                 [BA] It seems like an over-reach to say that a spoken
>>                 language tag in video media should instead be
>>                 interpreted as a request for Sign Language.  If this
>>                 were done, would it always be clear which Sign Language
>>                 was intended?  And could we really assume that both
>>                 sides, if negotiating a spoken language tag in video
>>                 media, were really indicating the desire to sign?  It
>>                 seems like this could easily result in interoperability
>>                 failure.
>>
>>
>>             IMO the right way to indicate that two (or more) media
>>             streams are conveying alternative representations of the
>>             same language content is by grouping them with a new
>>             grouping attribute. That can tie together an audio with a
>>             video and/or text. A language tag for sign language on the
>>             video stream then clarifies to the recipient that it is sign
>>             language. The grouping attribute by itself can indicate that
>>             these streams are conveying language.
>>
>>         <GH>Yes, and that is proposed in
>>         draft-hellstrom-slim-modality-grouping    with two kinds of
>>         grouping: One kind of grouping to tell that two or more
>>         languages in different streams are alternatives with the same
>>         content and a priority order is assigned to them to guide the
>>         selection of which one to use during the call. The other kind of
>>         grouping telling that two or more languages in different streams
>>         are desired together with the same language content but
>>         different modalities ( such as the use for captioned telephony
>>         with the same content provided in both speech and text, or sign
>>         language interpretation where you see the interpreter, or
>>         possibly spoken language interpretation with the languages
>>         provided in different audio streams ). I hope that that draft
>>         can be progressed. I see it as a needed complement to the pure
>>         language indications per media.
>>
>>
>>     Oh, sorry. I did read that draft but forgot about it.
>>
>>         The discussion in this thread is more about how an application
>>         would easily know that e.g. "ase" is a sign language and "en" is
>>         a spoken (or written) language, and also a discussion about what
>>         kinds of languages are allowed and indicated by default in each
>>         media type. It was not at all about falsely using language tags
>>         in the wrong media type as Bernard understood my wording. It was
>>         rather a limitation to what modalities are used in each media
>>         type and how to know the modality with cases that are not
>>         evident, e.g. "application" and "message" media types.
>>
>>
>>     What do you mean by "know"? Is it for the *UA* software to know, or
>>     for the human user of the UA to know? Presumably a human user that
>>     cares will understand this if presented with the information in some
>>     way. But typically this isn't presented to the user.
>>
>>     For the software to know must mean that it will behave differently
>>     for a tag that represents a sign language than for one that
>>     represents a spoken or written language. What is it that it will do
>>     differently?
>>
>>              Thanks,
>>              Paul
>>
>>
>>         Right now we have returned to a very simple rule: we define only
>>         use of spoken language in audio media, written language in text
>>         media and sign language in video media.
>>         We have discussed other use, such as a view of a speaking person
>>         in video, text overlay on video, a sign language notation in
>>         text media, written language in message media, written language
>>         in WebRTC data channels, sign written and spoken in bucket media
>>         maybe declared as application media. We do not define these
>>         cases. They are just not defined, not forbidden. They may be
>>         defined in the future.
>>
>>         My proposed wording in section 5.4 got too many
>>         misunderstandings so I gave up with it. I think we can live with
>>         5.4 as it is in version -16.
>>
>>         Thanks,
>>         Gunnar
>>
>>
>>
>>             (IIRC I suggested something along these lines a long time 
>> ago.)
>>
>>                  Thanks,
>>                  Paul
>>
>>             _______________________________________________
>>             SLIM mailing list
>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>             https://www.ietf.org/mailman/listinfo/slim
>>             <https://www.ietf.org/mailman/listinfo/slim>
>>
>>
>>
>>     _______________________________________________
>>     SLIM mailing list
>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>     https://www.ietf.org/mailman/listinfo/slim
>>     <https://www.ietf.org/mailman/listinfo/slim>
>>
>>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Sun Oct 15 14:58:14 2017
Return-Path: <pkyzivat@alum.mit.edu>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id ED75A133039 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 14:58:12 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.201
X-Spam-Level: 
X-Spam-Status: No, score=-4.201 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_MED=-2.3, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id dgTW3G6iAvKU for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 14:58:10 -0700 (PDT)
Received: from alum-mailsec-scanner-5.mit.edu (alum-mailsec-scanner-5.mit.edu [18.7.68.17]) by ietfa.amsl.com (Postfix) with ESMTP id D7D0E132351 for <slim@ietf.org>; Sun, 15 Oct 2017 14:58:09 -0700 (PDT)
X-AuditID: 12074411-f7dff70000007f0a-9a-59e3d9ef3eb8
Received: from outgoing-alum.mit.edu (OUTGOING-ALUM.MIT.EDU [18.7.68.33]) (using TLS with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by alum-mailsec-scanner-5.mit.edu (Symantec Messaging Gateway) with SMTP id 48.1D.32522.FE9D3E95; Sun, 15 Oct 2017 17:58:08 -0400 (EDT)
Received: from PaulKyzivatsMBP.localdomain (c-24-62-227-142.hsd1.ma.comcast.net [24.62.227.142]) (authenticated bits=0) (User authenticated as pkyzivat@ALUM.MIT.EDU) by outgoing-alum.mit.edu (8.13.8/8.12.4) with ESMTP id v9FLw6dN030678 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NOT); Sun, 15 Oct 2017 17:58:07 -0400
To: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>, Bernard Aboba <bernard.aboba@gmail.com>
Cc: slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se>
From: Paul Kyzivat <pkyzivat@alum.mit.edu>
Message-ID: <f72b4f64-8675-341b-8c30-7024f4297098@alum.mit.edu>
Date: Sun, 15 Oct 2017 17:58:06 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprNKsWRmVeSWpSXmKPExsUixO6iqPvh5uNIg6593BYb9v1nttjx/gyL xcwPnWwOzB47Z91l91iy5CeTx8TFn5gDmKO4bFJSczLLUov07RK4MjZfe8xe0Fxf8f/ff7YG xqvpXYycHBICJhIrtxxi72Lk4hAS2MEk0TnlPxuE85BJYm3XEWaQKmGBIIlNp0+wgdgiAqUS XYv/s4PYzAKCEns67zFBNBxmkej73ccEkmAT0JKYc+g/SxcjBwevgL3E+3U8IGEWAVWJSU+W gc0UFUiTuDPjIVg5L9CckzOfsIDYnAJ2ElduzmCDmG8mMW/zQ2YIW1zi1pP5TBC2vETz1tnM ExgFZiFpn4WkZRaSlllIWhYwsqxilEvMKc3VzU3MzClOTdYtTk7My0st0jXVy80s0UtNKd3E CAlrwR2MM07KHWIU4GBU4uFdce5xpBBrYllxZe4hRkkOJiVR3nOtDyOF+JLyUyozEosz4otK c1KLDzFKcDArifDOaQAq501JrKxKLcqHSUlzsCiJ8/ItUfcTEkhPLEnNTk0tSC2CycpwcChJ 8H67AdQoWJSanlqRlplTgpBm4uAEGc4DNPwLSA1vcUFibnFmOkT+FKMlR0/PjT9MHDvu3waS j27c/cMkxJKXn5cqJc7rCkw0QgIgDRmleXAzYWnqFaM40IvCvDIgVTzAFAc39RXQQiaghe8i HoAsLElESEk1MPr0T3l1zpLpgYt/1kGjfXMeCIoeW3OG/9M8xWAJg2eORdraSjrZVi+uza34 8/hz8MvmWvcHBS2q/fcmt7xmC3NhNdm9rVbbJM1kv3/prjXVAnecHnlk7VPknrE69V7yi/xJ Pw/6zaz9via55LSqktbWV5wF6w3KXbn/ObL4haW7vuGbo2sQrMRSnJFoqMVcVJwIAH1LHiMu AwAA
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/iVSua2iyp03unO8y62OBaLtiuP8>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 21:58:13 -0000

On 10/15/17 5:22 PM, Gunnar Hellström wrote:
> Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:
>> On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>> Paul said:
>>>
>>> "For the software to know must mean that it will behave differently 
>>> for a tag that represents a sign language than for one that 
>>> represents a spoken or written language. What is it that it will do 
>>> differently?"
>>>
>>> [BA] In terms of behavior based on the signed/non-signed distinction, 
>>> in -17 the only reference appears to be in Section 5.4, stating that 
>>> certain combinations are not defined in the document (but that 
>>> definition of those combinations was out of scope):
>>
>> I'm asking whether this is a distinction without a difference. I'm not 
>> asking whether this makes a difference in the *protocol*, but whether 
>> in the end it benefits the participants in the call in any way. 
> <GH>Good point, I was on my way to make a similar comment earlier today. 
> The difference it makes for applications to "know" what modality a 
> language tag represents in its used position seems to be only for 
> imagined functions that are out of scope for the protocol specification.
>> For instance:
>>
>> - does it help the UA to decide how to alert the callee, so that the
>>   callee can better decide whether to accept the call or instruct the
>>   UA about how to handle the call?
> <GH>Yes, for a regular human user-to-user call, the result of the 
> negotiation must be presented to the participants, so that they can 
> start the call with a language and modality that is agreed.

*Today* we don't do this. We leave it for the end users to negotiate the 
language they will use by other means - typically in-band by informal 
negotiation. Most UAs don't have any provision to provide the specified 
language to the end user.

This of course will also work for deaf users with sign language in the 
absence of any interpreters in the call. When it doesn't work is when 
there is a deaf user and a hearing user, and video isn't present at both 
ends. But in that case there isn't much hope.

> That presentation could be exactly the description from the language tag 
> registry, and then no "knowledge" is needed from the application. But it 
> is more likely that the application has its own string for presentation 
> of the negotiated language and modality. So that will be presented. But 
> it is still found by a table lookup between language tag and string for 
> a language name, so no real knowledge is needed.

Clearly presenting the raw language tags won't be useful to most users. 
So some sort of translation is needed. Such a hypothetical translation 
table could also have properties, such as whether the language is a sign 
language. Hence this doesn't seem to be a problem we need to solve.
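Such a translation table, extended with a per-tag property, might be 
sketched as follows (illustrative Python; the entries and the `is_sign` 
flag are invented, not part of the draft or of the IANA registry 
format):

```python
# Hypothetical per-application lookup table: each known tag carries a
# display name plus a modality property the UA can branch on.
LANGUAGE_TABLE = {
    "en":  {"name": "English", "is_sign": False},
    "ase": {"name": "American Sign Language", "is_sign": True},
}

def describe(tag):
    """Return (display name, is_sign) for a tag; unknown tags are
    shown raw, with an unknown modality."""
    entry = LANGUAGE_TABLE.get(tag)
    if entry is None:
        return tag, None
    return entry["name"], entry["is_sign"]
```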

> We have said many times that the way the application tells the user the 
> result of the negotiation is out of scope for the draft, but it is good 
> to discuss and know that it can be done.
> A similar mechanism is also needed for configuration of the user's 
> language preference profile further discussed below.
>>
>> - does it allow the UA to make a decision whether to accept the media?
> <GH>No, the media should be accepted regardless of the result of the 
> language negotiation.
>>
>> - can the UA use this information to change how to render the media?
> <GH>Yes, for the specialized text notation of sign language we have 
> discussed but currently placed out of scope, a very special rendering 
> application is needed. The modality would be recognized by a script 
> subtag to a sign language tag used in text media. However, I think that 
> would be best to also use it with a specific text subtype, so that the 
> rendering can be controlled by invocation of a "codec" for that rendering.

But won't the rendering also be in some way language specific? If so, 
the device will need to be configured for the specific languages it 
supports. Hence again a generic mechanism isn't needed.
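A language-specific rendering dispatch of the kind discussed above might 
be sketched like this (illustrative Python; `Sgnw` is the registered 
script subtag for SignWriting, but the renderer names and the naive 
subtag matching are invented):

```python
# Hypothetical renderer registry keyed by a script subtag appearing in
# a language tag used for text media. Renderer names are invented.
RENDERERS = {
    "sgnw": "signwriting-renderer",  # SignWriting notation
}

def pick_renderer(tag):
    """Choose a text renderer from the tag's script subtag, falling
    back to plain text. Matching is naive (case-insensitive, any
    non-initial subtag) for the sake of the sketch."""
    for subtag in tag.lower().split("-")[1:]:
        if subtag in RENDERERS:
            return RENDERERS[subtag]
    return "plain-text-renderer"
```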

>> And if there is something like this, will the UA be able to do this 
>> generically based on whether the media is sign language or not, or 
>> will the UA need to already understand *specific* sign language tags?
> <GH>Applications will need to have localized versions of the names for 
> the different sign languages and also for spoken languages and written 
> languages, to be used in setting of preferences and announcing the 
> results of the negotiation. It might be overkill to have such localized 
> names for all languages in the IANA language registry, so it will need 
> to be able to handle localized names for a subset of the registry. With 
> good design, however, this is just an automatic translation between a 
> language tag and a corresponding name, so it does not in fact require 
> any "knowledge" of what modality is used with each language tag.

My earlier comment applies to the above.

> The application can ask for the configuration:
> "Which languages do you want to offer to send in video"
> "Which languages do you want to offer to send in text"
> "Which languages do you want to offer to send in audio"
> "Which languages do you want to be prepared to receive in video"
> "Which languages do you want to be prepared to receive in text"
> "Which languages do you want to be prepared to receive in audio"
> 
> And for each question provide a list of language names to select from. 
> When the selection is made, the corresponding language tag is placed in 
> the profile for negotiation.

Hence, there need not be any *generic* mechanism, since this is based on 
configuration.

> If the application provides the whole IANA language registry to the user 
> for each question, then there is a possibility that the user by mistake 
> selects a language that requires another modality than the question was 
> about. If the application is to limit the lists provided for each 
> question, then it will need some knowledge about which language 
> tags suit each modality (and media).
> 
> 
>>
>> E.g., A UA serving a deaf person might automatically introduce a sign 
>> language interpreter into an incoming audio-only call. If the incoming 
>> call has both audio and video then the video *might* be for conveying 
>> sign language, or not. If not then the UA will still want to bring in 
>> a sign language interpreter. But is knowing the call generically 
>> contains sign language sufficient to decide against bringing in an 
>> interpreter? Or must that depend on it being a sign language that the 
>> user can use? If the UA is configured for all the specific sign 
>> languages that the user can deal with then there is no need to 
>> recognize other sign languages generically.
> <GH>We are talking about specific language tags here and knowing what 
> modality they are used for. The user needs to specify which sign 
> languages they prefer to use. The callee application can be made to look 
> for gaps between what the caller offers and what the callee can accept, 
> and from that deduce which type of conversion, and between which 
> languages, is needed, and invoke that as a relay service. That invocation can be made 
> completely table driven and have corresponding translation profiles for 
> available relay services. But it is more likely that it is done by 
> having some knowledge about which languages are sign languages and which 
> are spoken languages and sending the call to the relay service to try to 
> sort out if they can handle the translation.
>>
>>
> So, the answer is - no, the application does not really have any 
> knowledge about which modality a language tag represents in its used 
> position. If the user chooses to indicate a very unusual language tag 
> for a medium, then a match will simply become very unlikely.
> 
> Where does this discussion take us? Should we modify section 5.4 again?

Frankly, I see no need for section 5.4.

	Thanks,
	Paul

> Thanks
> Gunnar
>>     Thanks,
>>     Paul
>>
>>>       5.4
>>> <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>. 
>>>
>>>       Undefined Combinations
>>>
>>>
>>>
>>>     The behavior when specifying a non-signed language tag for a video
>>>     media stream, or a signed language tag for an audio or text media
>>>     stream, is not defined in this document.
>>>
>>>     The problem of knowing which language tags are signed and which are
>>>     not is out of scope of this document.
>>>
>>>
>>>
>>> On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat <pkyzivat@alum.mit.edu 
>>> <mailto:pkyzivat@alum.mit.edu>> wrote:
>>>
>>>     On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>>
>>>         Paul,
>>>         Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>>
>>>             On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>
>>>                 Gunnar said:
>>>
>>>                 "Applications not implementing such specific notations
>>>                 may use the following simple deductions.
>>>
>>>                 - A language tag in audio media is supposed to indicate
>>>                 spoken modality.
>>>
>>>                 [BA] Even a tag with "Sign Language" in the 
>>> description??
>>>
>>>                 - A language tag in text media is supposed to indicate
>>>                 written modality.
>>>
>>>                 [BA] If the tag has "Sign Language" in the description,
>>>                 can this document really say that?
>>>
>>>                 - A language tag in video media is supposed to indicate
>>>                 visual sign language modality except for the case when
>>>                 it is supposed to indicate a view of a speaking person
>>>                 mentioned in section 5.2 characterized by the exact same
>>>                 language tag also appearing in an audio media 
>>> specification.
>>>
>>>                 [BA] It seems like an over-reach to say that a spoken
>>>                 language tag in video media should instead be
>>>                 interpreted as a request for Sign Language.  If this
>>>                 were done, would it always be clear which Sign Language
>>>                 was intended?  And could we really assume that both
>>>                 sides, if negotiating a spoken language tag in video
>>>                 media, were really indicating the desire to sign?  It
>>>                 seems like this could easily result in interoperability
>>>                 failure.
>>>
>>>
>>>             IMO the right way to indicate that two (or more) media
>>>             streams are conveying alternative representations of the
>>>             same language content is by grouping them with a new
>>>             grouping attribute. That can tie together an audio with a
>>>             video and/or text. A language tag for sign language on the
>>>             video stream then clarifies to the recipient that it is sign
>>>             language. The grouping attribute by itself can indicate that
>>>             these streams are conveying language.
>>>
>>>         <GH>Yes, and that is proposed in
>>>         draft-hellstrom-slim-modality-grouping    with two kinds of
>>>         grouping: One kind of grouping to tell that two or more
>>>         languages in different streams are alternatives with the same
>>>         content and a priority order is assigned to them to guide the
>>>         selection of which one to use during the call. The other kind of
>>>         grouping telling that two or more languages in different streams
>>>         are desired together with the same language content but
>>>         different modalities ( such as the use for captioned telephony
>>>         with the same content provided in both speech and text, or sign
>>>         language interpretation where you see the interpreter, or
>>>         possibly spoken language interpretation with the languages
>>>         provided in different audio streams ). I hope that that draft
>>>         can be progressed. I see it as a needed complement to the pure
>>>         language indications per media.
>>>
>>>
>>>     Oh, sorry. I did read that draft but forgot about it.
>>>
>>>         The discussion in this thread is more about how an application
>>>         would easily know that e.g. "ase" is a sign language and "en" is
>>>         a spoken (or written) language, and also a discussion about what
>>>         kinds of languages are allowed and indicated by default in each
>>>         media type. It was not at all about falsely using language tags
>>>         in the wrong media type as Bernard understood my wording. It was
>>>         rather a limitation to what modalities are used in each media
>>>         type and how to know the modality with cases that are not
>>>         evident, e.g. "application" and "message" media types.
>>>
>>>
>>>     What do you mean by "know"? Is it for the *UA* software to know, or
>>>     for the human user of the UA to know? Presumably a human user that
>>>     cares will understand this if presented with the information in some
>>>     way. But typically this isn't presented to the user.
>>>
>>>     For the software to know must mean that it will behave differently
>>>     for a tag that represents a sign language than for one that
>>>     represents a spoken or written language. What is it that it will do
>>>     differently?
>>>
>>>              Thanks,
>>>              Paul
>>>
>>>
>>>         Right now we have returned to a very simple rule: we define only
>>>         use of spoken language in audio media, written language in text
>>>         media and sign language in video media.
>>>         We have discussed other use, such as a view of a speaking person
>>>         in video, text overlay on video, a sign language notation in
>>>         text media, written language in message media, written language
>>>         in WebRTC data channels, sign written and spoken in bucket media
>>>         maybe declared as application media. We do not define these
>>>         cases. They are just not defined, not forbidden. They may be
>>>         defined in the future.
>>>
>>>         My proposed wording in section 5.4 got too many
>>>         misunderstandings so I gave up with it. I think we can live with
>>>         5.4 as it is in version -16.
>>>
>>>         Thanks,
>>>         Gunnar
>>>
>>>
>>>
>>>             (IIRC I suggested something along these lines a long time 
>>> ago.)
>>>
>>>                  Thanks,
>>>                  Paul
>>>
>>>             _______________________________________________
>>>             SLIM mailing list
>>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>             https://www.ietf.org/mailman/listinfo/slim
>>>             <https://www.ietf.org/mailman/listinfo/slim>
>>>
>>>
>>>
>>>     _______________________________________________
>>>     SLIM mailing list
>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>     https://www.ietf.org/mailman/listinfo/slim
>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>
>>>
>>
> 


From nobody Sun Oct 15 15:37:11 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 192EE133018 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 15:37:10 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.601
X-Spam-Level: 
X-Spam-Status: No, score=-2.601 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id fvTx3PUK4DVs for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 15:37:07 -0700 (PDT)
Received: from bin-vsp-out-03.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id D06FA13304E for <slim@ietf.org>; Sun, 15 Oct 2017 15:37:06 -0700 (PDT)
X-Halon-ID: 5b959e30-b1f9-11e7-83a9-0050569116f7
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-03.atm.binero.net (Halon) with ESMTPSA id 5b959e30-b1f9-11e7-83a9-0050569116f7; Mon, 16 Oct 2017 00:37:01 +0200 (CEST)
To: Paul Kyzivat <pkyzivat@alum.mit.edu>
Cc: slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se> <f72b4f64-8675-341b-8c30-7024f4297098@alum.mit.edu>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <35281f05-2359-9eee-2e94-fbd4a2ffbcf5@omnitor.se>
Date: Mon, 16 Oct 2017 00:36:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <f72b4f64-8675-341b-8c30-7024f4297098@alum.mit.edu>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/QOuuZ1QoFWyvGqRdkGMBaMYKOZ0>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 22:37:10 -0000

Den 2017-10-15 kl. 23:58, skrev Paul Kyzivat:
> On 10/15/17 5:22 PM, Gunnar Hellström wrote:
>> Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:
>>> On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>>> Paul said:
>>>>
>>>> "For the software to know must mean that it will behave differently 
>>>> for a tag that represents a sign language than for one that 
>>>> represents a spoken or written language. What is it that it will do 
>>>> differently?"
>>>>
>>>> [BA] In terms of behavior based on the signed/non-signed 
>>>> distinction, in -17 the only reference appears to be in Section 
>>>> 5.4, stating that certain combinations are not defined in the 
>>>> document (but that definition of those combinations was out of scope):
>>>
>>> I'm asking whether this is a distinction without a difference. I'm 
>>> not asking whether this makes a difference in the *protocol*, but 
>>> whether in the end it benefits the participants in the call in any way. 
>> <GH>Good point, I was on my way to make a similar comment earlier 
>> today. The difference it makes for applications to "know" what 
>> modality a language tag represents in its used position seems to be 
>> only for imagined functions that are out of scope for the protocol 
>> specification.
>>> For instance:
>>>
>>> - does it help the UA to decide how to alert the callee, so that the
>>>   callee can better decide whether to accept the call or instruct the
>>>   UA about how to handle the call?
>> <GH>Yes, for a regular human user-to-user call, the result of the 
>> negotiation must be presented to the participants, so that they can 
>> start the call with a language and modality that is agreed.
>
> *Today* we don't do this. We leave it for the end users to negotiate 
> the language they will use by other means - typically in-band by 
> informal negotiation. Most UAs don't have any provision to provide the 
> specified language to the end user.
<GH>Yes, and the main purpose of this work is to improve on that: to 
make sure that the users are informed so they can start the call in an 
appropriate language, and even invoke support if needed.
>
> This of course will also work for deaf users with sign language in the 
> absence of any interpreters in the call. When it doesn't work is when 
> there is a deaf user and a hearing user, and video isn't present at 
> both ends. But in that case there isn't much hope.
>
>> That presentation could be exactly the description from the language 
>> tag registry, and then no "knowledge" is needed from the application. 
>> But it is more likely that the application has its own string for 
>> presentation of the negotiated language and modality. So that will be 
>> presented. But it is still found by a table lookup between language 
>> tag and string for a language name, so no real knowledge is needed.
>
> Clearly presenting the raw language tags won't be useful to most 
> users. So some sort of translation is needed. Such a hypothetical 
> translation table could also have properties, such as whether the 
> language is a sign language. Hence this doesn't seem to be a problem 
> we need to solve.
<GH>Right, we provide the information in language tag format to the 
devices, and how the devices collect configuration information from the 
users and provide negotiation result to the users is not specified in 
this draft.
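Paul's hypothetical translation table with a signed/non-signed property can be sketched as follows. This is only an illustrative sketch: the table contents, field names, and the describe() helper are assumptions for this example, not anything defined by the draft (the tags themselves, such as "ase" for American Sign Language, are real registry entries).

```python
# Minimal sketch of a tag-to-presentation lookup table, as discussed
# above: the application "knows" the modality only through a table
# entry, not through any deeper understanding of the tag.

LANGUAGE_TABLE = {
    "en":  {"name": "English",                "signed": False},
    "es":  {"name": "Spanish",                "signed": False},
    "ase": {"name": "American Sign Language", "signed": True},
    "ssp": {"name": "Spanish Sign Language",  "signed": True},
}

def describe(tag: str) -> str:
    """Translate a negotiated language tag into a user-facing string."""
    entry = LANGUAGE_TABLE.get(tag)
    if entry is None:
        return tag  # fall back to the raw tag for unknown entries
    modality = "sign language" if entry["signed"] else "spoken/written language"
    return f"{entry['name']} ({modality})"
```

A real application would localize the names and cover only the subset of the registry it needs, as discussed further in the thread.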
>
>> We have said many times that the way the application tells the user 
>> the result of the negotiation is out of scope for the draft, but it 
>> is good to discuss and know that it can be done.
>> A similar mechanism is also needed for configuration of the user's 
>> language preference profile further discussed below.
>>>
>>> - does it allow the UA to make a decision whether to accept the media?
>> <GH>No, the media should be accepted regardless of the result of the 
>> language negotiation.
>>>
>>> - can the UA use this information to change how to render the media?
>> <GH>Yes, for the specialized text notation of sign language we have 
>> discussed but currently placed out of scope, a very special rendering 
>> application is needed. The modality would be recognized by a script 
>> subtag to a sign language tag used in text media. However, I think 
>> that would be best to also use it with a specific text subtype, so 
>> that the rendering can be controlled by invocation of a "codec" for 
>> that rendering.
>
> But won't the rendering also be in some way language specific? If so, 
> the device will need to be configured for the specific languages it 
> supports. Hence again a generic mechanism isn't needed.
<GH>I got lost here in the reasoning. We were looking for reasons for 
the device to do things differently for different modalities, and 
therefore to know the modality from the language tag and its position 
in the media description.
We have the unsolved case of a language tag for a spoken or written 
language used in video media. Since both cases use the same language 
tag, we do not know what is meant when it is provided in a video media 
description. It can mean a view of a speaker, or it can mean text 
overlaid on video, e.g. with MP4 coding carrying a text component 
together with the video (if I remember the use of the subtypes right). 
This problem is why we for a while had a complex rule allowing just one 
of these cases.

Rendering will be modality specific. Text embedded in video requires 
different rendering from a view of a speaking person. All the ways we 
have proposed to indicate the difference between these two modalities 
have been rejected.
>
>>> And if there is something like this, will the UA be able to do this 
>>> generically based on whether the media is sign language or not, or 
>>> will the UA need to already understand *specific* sign language tags?
>> <GH>Applications will need to have localized versions of the names 
>> for the different sign languages and also for spoken languages and 
>> written languages, to be used in setting of preferences and 
>> announcing the results of the negotiation. It might be overkill to 
>> have such localized names for all languages in the IANA language 
>> registry, so it will need to be able to handle localized names of a 
>> subset of the registry. With good design however, this is just an 
>> automatic translation between a language tag and a corresponding 
>> name, so it does in fact not require any "knowledge" of what modality 
>> is used with each language tag.
>
> My earlier comment applies to the above.
>
>> The application can ask for the configuration:
>> "Which languages do you want to offer to send in video"
>> "Which languages do you want to offer to send in text"
>> "Which languages do you want to offer to send in audio"
>> "Which languages do you want to be prepared to receive in video"
>> "Which languages do you want to be prepared to receive in text"
>> "Which languages do you want to be prepared to receive in audio"
>>
>> And for each question provide a list of language names to select 
>> from. When the selection is made, the corresponding language tag is 
>> placed in the profile for negotiation.
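The configuration flow quoted above (six questions, each offering a filtered list of language names) can be sketched like this. The simple modality rule (signed in video, spoken in audio, written in text) follows the draft; the table contents and function names are assumptions for the sketch.

```python
# Sketch: limit the language list offered for each configuration
# question to tags whose modality suits the media type, then store the
# selected tags in a negotiation preference profile.

LANGUAGE_TABLE = {
    "en":  {"name": "English",                "signed": False},
    "ase": {"name": "American Sign Language", "signed": True},
    "swl": {"name": "Swedish Sign Language",  "signed": True},
}

def choices_for(media: str) -> list:
    """Tags suitable for the given media type under the simple rule."""
    signed = (media == "video")
    return sorted(t for t, e in LANGUAGE_TABLE.items() if e["signed"] == signed)

def configure(profile: dict, media: str, direction: str, tag: str) -> None:
    """Record a selected tag, rejecting modality/media mismatches."""
    if tag not in choices_for(media):
        raise ValueError(f"{tag} does not suit {media} media")
    profile.setdefault((media, direction), []).append(tag)
```

This also illustrates the point made further on in the thread: an application that limits the lists per question needs exactly this kind of table knowledge about which tags suit each modality.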
>
> Hence, there need not be any *generic* mechanism, since this is based 
> on configuration.
>
>> If the application provides the whole IANA language registry to the 
>> user for each question, then there is a possibility that the user by 
>> mistake selects a language that requires another modality than the 
>> question was about. If the application shall limit the lists provided 
>> for each question, then it will need a kind of knowledge about which 
>> language tags suit each modality (and media)
>>
>>
>>>
>>> E.g., A UA serving a deaf person might automatically introduce a 
>>> sign language interpreter into an incoming audio-only call. If the 
>>> incoming call has both audio and video then the video *might* be for 
>>> conveying sign language, or not. If not then the UA will still want 
>>> to bring in a sign language interpreter. But is knowing the call 
>>> generically contains sign language sufficient to decide against 
>>> bringing in an interpreter? Or must that depend on it being a sign 
>>> language that the user can use? If the UA is configured for all the 
>>> specific sign languages that the user can deal with then there is no 
>>> need to recognize other sign languages generically.
>> <GH>We are talking about specific language tags here and knowing what 
>> modality they are used for. The user needs to specify which sign 
>> languages they prefer to use. The callee application can be made to 
>> look for gaps between what the caller offers and what the callee can 
>> accept, and from that deduce which type of conversion and which 
>> languages are needed, and invoke a relay service accordingly. That 
>> invocation can be made completely table driven, with corresponding 
>> translation profiles for the available relay services. But it is more 
>> likely done by having some knowledge about which languages are sign 
>> languages and which are spoken languages, and sending the call to the 
>> relay service to try to sort out whether they can handle the 
>> translation.
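The gap analysis just described can be sketched as pure set logic. This is a simplified, assumption-laden sketch: SIGNED_TAGS stands in for the table lookup sketched earlier, and real relay selection would involve specific language pairs, not just a signed/non-signed flag.

```python
# Sketch of the relay decision: no common language between the parties,
# but a signed/spoken conversion could bridge the gap.

SIGNED_TAGS = {"ase", "bfi", "swl"}  # tiny illustrative subset

def needs_relay(offered, accepted) -> bool:
    """True when no direct match exists but one side signs and the
    other does not, so a relay service might bridge the call."""
    if set(offered) & set(accepted):
        return False  # direct language match, no relay needed
    offers_signed = any(t in SIGNED_TAGS for t in offered)
    accepts_signed = any(t in SIGNED_TAGS for t in accepted)
    return offers_signed != accepts_signed
```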
>>>
>>>
>> So, the answer is: no, the application does not really have any 
>> knowledge about which modality a language tag represents in its used 
>> position. If the user selects very rare language tag indications for 
>> a medium, then a match will just become very unlikely.
>>
>> Where does this discussion take us? Should we modify section 5.4 again?
>
> Frankly, I see no need for section 5.4.
<GH>This discussion at least reduces the need for section 5.4 dramatically.
What we still might need to say is that we have no agreed way to 
differentiate between a view of a speaking person and text embedded in 
video with the defined notation. And we could warn against using unusual 
language-tag/media combinations that will rarely be matched.
>
>     Thanks,
>     Paul
>
>> Thanks
>> Gunnar
>>>     Thanks,
>>>     Paul
>>>
>>>>       5.4
>>>> <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>. 
>>>>
>>>>       Undefined Combinations
>>>>
>>>>
>>>>
>>>>     The behavior when specifying a non-signed language tag for a video
>>>>     media stream, or a signed language tag for an audio or text media
>>>>     stream, is not defined in this document.
>>>>
>>>>     The problem of knowing which language tags are signed and which 
>>>> are
>>>>     not is out of scope of this document.
>>>>
>>>>
>>>>
>>>> On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat 
>>>> <pkyzivat@alum.mit.edu <mailto:pkyzivat@alum.mit.edu>> wrote:
>>>>
>>>>     On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>>>
>>>>         Paul,
>>>>         Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>>>
>>>>             On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>>
>>>>                 Gunnar said:
>>>>
>>>>                 "Applications not implementing such specific notations
>>>>                 may use the following simple deductions.
>>>>
>>>>                 - A language tag in audio media is supposed to 
>>>> indicate
>>>>                 spoken modality.
>>>>
>>>>                 [BA] Even a tag with "Sign Language" in the 
>>>> description??
>>>>
>>>>                 - A language tag in text media is supposed to 
>>>> indicate                 written modality.
>>>>
>>>>                 [BA] If the tag has "Sign Language" in the 
>>>> description,
>>>>                 can this document really say that?
>>>>
>>>>                 - A language tag in video media is supposed to 
>>>> indicate
>>>>                 visual sign language modality except for the case when
>>>>                 it is supposed to indicate a view of a speaking person
>>>>                 mentioned in section 5.2 characterized by the exact 
>>>> same
>>>>                 language tag also appearing in an audio media 
>>>> specification.
>>>>
>>>>                 [BA] It seems like an over-reach to say that a spoken
>>>>                 language tag in video media should instead be
>>>>                 interpreted as a request for Sign Language. If this
>>>>                 were done, would it always be clear which Sign 
>>>> Language
>>>>                 was intended?  And could we really assume that both
>>>>                 sides, if negotiating a spoken language tag in video
>>>>                 media, were really indicating the desire to sign?  It
>>>>                 seems like this could easily result interoperability
>>>>                 failure.
>>>>
>>>>
>>>>             IMO the right way to indicate that two (or more) media
>>>>             streams are conveying alternative representations of the
>>>>             same language content is by grouping them with a new
>>>>             grouping attribute. That can tie together an audio with a
>>>>             video and/or text. A language tag for sign language on the
>>>>             video stream then clarifies to the recipient that it is 
>>>> sign
>>>>             language. The grouping attribute by itself can indicate 
>>>> that
>>>>             these streams are conveying language.
>>>>
>>>>         <GH>Yes, and that is proposed in
>>>>         draft-hellstrom-slim-modality-grouping    with two kinds of
>>>>         grouping: One kind of grouping to tell that two or more
>>>>         languages in different streams are alternatives with the same
>>>>         content and a priority order is assigned to them to guide the
>>>>         selection of which one to use during the call. The other 
>>>> kind of
>>>>         grouping telling that two or more languages in different 
>>>> streams
>>>>         are desired together with the same language content but
>>>>         different modalities ( such as the use for captioned telephony
>>>>         with the same content provided in both speech and text, or 
>>>> sign
>>>>         language interpretation where you see the interpreter, or
>>>>         possibly spoken language interpretation with the languages
>>>>         provided in different audio streams ). I hope that that draft
>>>>         can be progressed. I see it as a needed complement to the pure
>>>>         language indications per media.
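For concreteness, the per-media language indications discussed in this thread look roughly like the following SDP fragment (attribute names as in the negotiating-human-language draft; ports and payload types are arbitrary). The grouping attribute proposed in draft-hellstrom-slim-modality-grouping would add a session-level line tying the two m-lines together; its exact syntax was still under discussion, so it is not shown.

```
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=video 51372 RTP/AVP 31
a=hlang-send:ase
a=hlang-recv:ase
```

Here "en" in audio indicates spoken English and "ase" in video indicates American Sign Language, following the simple rule of one default modality per media type.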
>>>>
>>>>
>>>>     Oh, sorry. I did read that draft but forgot about it.
>>>>
>>>>         The discussion in this thread is more about how an application
>>>>         would easily know that e.g. "ase" is a sign language and 
>>>> "en" is
>>>>         a spoken (or written) language, and also a discussion about 
>>>> what
>>>>         kinds of languages are allowed and indicated by default in 
>>>> each
>>>>         media type. It was not at all about falsely using language 
>>>> tags
>>>>         in the wrong media type as Bernard understood my wording. 
>>>> It was
>>>>         rather a limitation to what modalities are used in each media
>>>>         type and how to know the modality with cases that are not
>>>>         evident, e.g. "application" and "message" media types.
>>>>
>>>>
>>>>     What do you mean by "know"? Is it for the *UA* software to 
>>>> know, or
>>>>     for the human user of the UA to know? Presumably a human user that
>>>>     cares will understand this if presented with the information in 
>>>> some
>>>>     way. But typically this isn't presented to the user.
>>>>
>>>>     For the software to know must mean that it will behave differently
>>>>     for a tag that represents a sign language than for one that
>>>>     represents a spoken or written language. What is it that it 
>>>> will do
>>>>     differently?
>>>>
>>>>              Thanks,
>>>>              Paul
>>>>
>>>>
>>>>         Right now we have returned to a very simple rule: we define 
>>>> only
>>>>         use of spoken language in audio media, written language in 
>>>> text
>>>>         media and sign language in video media.
>>>>         We have discussed other use, such as a view of a speaking 
>>>> person
>>>>         in video, text overlay on video, a sign language notation in
>>>>         text media, written language in message media, written 
>>>> language
>>>>         in WebRTC data channels, sign written and spoken in bucket 
>>>> media
>>>>         maybe declared as application media. We do not define these
>>>>         cases. They are just not defined, not forbidden. They may be
>>>>         defined in the future.
>>>>
>>>>         My proposed wording in section 5.4 got too many
>>>>         misunderstandings so I gave up with it. I think we can live 
>>>> with
>>>>         5.4 as it is in version -16.
>>>>
>>>>         Thanks,
>>>>         Gunnar
>>>>
>>>>
>>>>
>>>>             (IIRC I suggested something along these lines a long 
>>>> time ago.)
>>>>
>>>>                  Thanks,
>>>>                  Paul
>>>>
>>>>             _______________________________________________
>>>>             SLIM mailing list
>>>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>             https://www.ietf.org/mailman/listinfo/slim
>>>> <https://www.ietf.org/mailman/listinfo/slim>
>>>>
>>>>
>>>>
>>>>     _______________________________________________
>>>>     SLIM mailing list
>>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>     https://www.ietf.org/mailman/listinfo/slim
>>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>>
>>>>
>>>
>>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


From nobody Sun Oct 15 15:50:28 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9A10613321F for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 15:50:26 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id n2ZAcF_lBqn7 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 15:50:24 -0700 (PDT)
Received: from bin-vsp-out-01.atm.binero.net (bin-mail-out-06.binero.net [195.74.38.229]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 760B2127005 for <slim@ietf.org>; Sun, 15 Oct 2017 15:50:24 -0700 (PDT)
X-Halon-ID: 230fe3e6-b1fb-11e7-9c60-005056917a89
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-01.atm.binero.net (Halon) with ESMTPSA id 230fe3e6-b1fb-11e7-9c60-005056917a89; Mon, 16 Oct 2017 00:49:45 +0200 (CEST)
To: Bernard Aboba <bernard.aboba@gmail.com>, slim@ietf.org
References: <CAOW+2dvXSs-xzKknqWjVP7W8H4QWkHR0vrK2X=X1hZedHJbZMQ@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <3cf98c54-27b0-f595-fa96-767516142cde@omnitor.se>
Date: Mon, 16 Oct 2017 00:50:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dvXSs-xzKknqWjVP7W8H4QWkHR0vrK2X=X1hZedHJbZMQ@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------ECB1F552EB89B88A5A85102B"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/NsIxM4QaRiqIX60qkTpGz6GxRrM>
Subject: Re: [Slim] Status of draft-ietf-slim-negotiating-human-language-17
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 22:50:26 -0000

This is a multi-part message in MIME format.
--------------ECB1F552EB89B88A5A85102B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Den 2017-10-15 kl. 09:08, skrev Bernard Aboba:

> Many thanks to Randall for the progress made over the last days.
Yes, I am also glad that we have had rapid progress.
>
> In the progression from -14 to -17, it appears to me that Issues 41 
> and 47 have been resolved, and as a result I have closed these Issues 
> in Datatracker.  If this impression is mistaken, please speak up now.
>
> At this point, I am leaving Issue 43 open, pending verification from 
> the reviewer (Dale) and the WG that the resolution in -17 is 
> satisfactory.
#46 was supposed to be solved by -17. The new wording still has risks 
of misinterpretation, but I am tired of trying to get it right. So I 
accept the current wording of the first paragraph of 5.2 and hope that 
readers read the whole of section 5.2 and form an impression that 
matches our intentions. We need to move on.

Gunnar.
> Opinions?
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------ECB1F552EB89B88A5A85102B--


From nobody Sun Oct 15 16:22:02 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3DA02133226 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 16:22:00 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.999
X-Spam-Level: 
X-Spam-Status: No, score=-1.999 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id CiQqGVEhu5TN for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 16:21:56 -0700 (PDT)
Received: from mail-ua0-x232.google.com (mail-ua0-x232.google.com [IPv6:2607:f8b0:400c:c08::232]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 5DA9D133229 for <slim@ietf.org>; Sun, 15 Oct 2017 16:21:56 -0700 (PDT)
Received: by mail-ua0-x232.google.com with SMTP id w45so8545006uac.3 for <slim@ietf.org>; Sun, 15 Oct 2017 16:21:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc; bh=VDGYRpaW9UyneteoxgoL1QzeTukj4M11Kt/ixlnZURs=; b=qVb1cgHy7lS+RgjxuhNwaI6D9xiXsvSa53MVR6Ti5yzSTEgvQsKjgCMlaFmoD8bBV6 Jb5DQ+6YUqoZLsUpV0NCbzM+kx+9wLXwqPF6obbrRq3bXxFTx4qEx4cY+7AWyjnxW5Bx PnGzqQ3YI8/LiuyvopmwoeuKaxgaFz2QdlgRehjjH5QhzZWPBi24ukHgje/cGjG5xIYi wM7ctk3kU2WMRoVVFop3jRf6r9hwx21ES1+ZvlCXGmtPcYQeozVY0ZObaMkx/TJHfy8f mYJdR/H7JLNrVQwvyRbRmYEMRnSwXKsLBYdWpfLG5omnNoKVxAJFu/cMY1eWNKe7/J24 1rmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc; bh=VDGYRpaW9UyneteoxgoL1QzeTukj4M11Kt/ixlnZURs=; b=Yno1gOsH+7WhSXlkJ0C6WfpbmIdGdO7EboztOjUpeFQx9SG5fpYf9vuOYKetURCBeK jP6s9Q/yE6szd94hTRRk2vn8tROckPACSbs75vshyKLd3lTQcx1IgKmA+64jqqH7AAD3 F55JnuxH0kM67CWucoj/jAzg1Pse1F4gCBphpTimDGtRAhGNiC85DtW/wEr/UOF11C5D FyWWjm/QhA/leVnizpUVmL8NYyQzGifweD27zidaRJn128oiirAPeJx2IHHLK2cj8fHu CkP7KJViQG/Ncua+f+z6Qqi/3rnP6mpARDMzGlPF6qMcZymWPdPVLNAxty/ILtQRyG9b 4W0g==
X-Gm-Message-State: AMCzsaUYqaTtngytpvI34QYbXg+RBiCJONUCkuSm8az2/tMUeMuulPZC pwpZQe7kjDKJ+40+yzC9CMGpcHKuEIzK4IgzYNM=
X-Google-Smtp-Source: ABhQp+S4BJIiB1NspnZdg3KFeNJ9t8IwtdGNKzimMeQsZBjovHqTgwI07jWxX+uD5HQ2CTRv54KphKNjN6wsC3HhsqA=
X-Received: by 10.176.64.131 with SMTP id i3mr1637094uad.195.1508109715148; Sun, 15 Oct 2017 16:21:55 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.32.76 with HTTP; Sun, 15 Oct 2017 16:21:34 -0700 (PDT)
In-Reply-To: <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se>
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Sun, 15 Oct 2017 16:21:34 -0700
Message-ID: <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com>
To: =?UTF-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>
Cc: Paul Kyzivat <pkyzivat@alum.mit.edu>, slim@ietf.org
Content-Type: multipart/alternative; boundary="94eb2c12370eda99f7055b9e274e"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/nud9Rtu7qO210bFMl0I18jrrntg>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 15 Oct 2017 23:22:00 -0000

--94eb2c12370eda99f7055b9e274e
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Paul said:

"- can the UA use this information to change how to render the media?"

[BA]  If the video is used for signing, an application might infer an
encoder preference for frame rate over resolution (e.g. in WebRTC,
RTCRtpParameters.degradationPreference = "maintain-framerate")

See:
https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference
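Bernard's frame-rate inference can be expressed as a small piece of decision logic. The sketch below is Python purely for illustration: in a real WebRTC application the preference would be applied in JavaScript through RTCRtpSender.setParameters(), and the SIGNED_TAGS set stands in for a proper tag table.

```python
# Sketch of the inference: prefer frame rate over resolution when the
# negotiated language for a video stream is a sign language, mirroring
# the WebRTC RTCRtpParameters.degradationPreference values.

SIGNED_TAGS = {"ase", "bfi", "swl"}  # tiny illustrative subset

def degradation_preference(video_language_tag: str) -> str:
    """Pick a degradationPreference value for a video sender."""
    if video_language_tag in SIGNED_TAGS:
        return "maintain-framerate"  # smooth motion matters for signing
    return "balanced"                # the WebRTC default
```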

On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellstr=C3=B6m <
gunnar.hellstrom@omnitor.se> wrote:

> Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:
>
>> On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>
>>> Paul said:
>>>
>>> "For the software to know must mean that it will behave differently for
>>> a tag that represents a sign language than for one that represents a sp=
oken
>>> or written language. What is it that it will do differently?"
>>>
>>> [BA] In terms of behavior based on the signed/non-signed distinction, i=
n
>>> -17 the only reference appears to be in Section 5.4, stating that certa=
in
>>> combinations are not defined in the document (but that definition of th=
ose
>>> combinations was out of scope):
>>>
>>
>> I'm asking whether this is a distinction without a difference. I'm not
>> asking whether this makes a difference in the *protocol*, but whether in
>> the end it benefits the participants in the call in any way.
>>
> <GH>Good point, I was on my way to make a similar comment earlier today.
> The difference it makes for applications to "know" what modality a langua=
ge
> tag represents in its used position seems to be only for imagined functio=
ns
> that are out of scope for the protocol specification.
>
>> For instance:
>>
>> - does it help the UA to decide how to alert the callee, so that the
>>   callee can better decide whether to accept the call or instruct the
>>   UA about how to handle the call?
>>
> <GH>Yes, for a regular human user -to-user call, the result of the
> negotiation must be presented to the participants, so that they can start
> the call with a language and modality that is agreed.
> That presentation could be exactly the description from the language tag
> registry, and then no "knowledge" is needed from the application. But it =
is
> more likely that the application has its own string for presentation of t=
he
> negotiated language and modality. So that will be presented. But it is
> still found by a table lookup between language tag and string for a
> language name, so no real knowledge is needed.
> We have said many times that the way the application tells the user the
> result of the negotiation is out of scope for the draft, but it is good t=
o
> discuss and know that it can be done.
> A similar mechanism is also needed for configuration of the user's
> language preference profile further discussed below.
>
>>
>> - does it allow the UA to make a decision whether to accept the media?
>>
> <GH>No, the media should be accepted regardless of the result of the
> language negotiation.
>
>>
>> - can the UA use this information to change how to render the media?
>>
> <GH>Yes, for the specialized text notation of sign language we have
> discussed but currently placed out of scope, a very special rendering
> application is needed. The modality would be recognized by a script subta=
g
> to a sign language tag used in text media. However, I think that would be
> best to also use it with a specific text subtype, so that the rendering c=
an
> be controlled by invocation of a "codec" for that rendering.
>
>>
>> And if there is something like this, will the UA be able to do this
>> generically based on whether the media is sign language or not, or will =
the
>> UA need to already understand *specific* sign language tags?
>>
> <GH>Applications will need to have localized versions of the names for
> the different sign languages and also for spoken languages and written
> languages, to be used in setting of preferences and announcing the
> results of the negotiation. It might be overkill to have such localized
> names for all languages in the IANA language registry, so it will need
> to be able to handle localized names of a subset of the registry. With
> good design, however, this is just an automatic translation between a
> language tag and a corresponding name, so it does not in fact require
> any "knowledge" of what modality is used with each language tag.
> The application can ask for the configuration:
> "Which languages do you want to offer to send in video"
> "Which languages do you want to offer to send in text"
> "Which languages do you want to offer to send in audio"
> "Which languages do you want to be prepared to receive in video"
> "Which languages do you want to be prepared to receive in text"
> "Which languages do you want to be prepared to receive in audio"
>
> And for each question provide a list of language names to select from.
> When the selection is made, the corresponding language tag is placed in
> the profile for negotiation.
>
> If the application provides the whole IANA language registry to the user
> for each question, then there is a possibility that the user by mistake
> selects a language that requires another modality than the question was
> about. If the application is to limit the lists provided for each
> question, then it will need some kind of knowledge about which language
> tags suit each modality (and media).
>
>
>
>> E.g., a UA serving a deaf person might automatically introduce a sign
>> language interpreter into an incoming audio-only call. If the incoming
>> call has both audio and video then the video *might* be for conveying
>> sign language, or not. If not, then the UA will still want to bring in
>> a sign language interpreter. But is knowing that the call generically
>> contains sign language sufficient to decide against bringing in an
>> interpreter? Or must that depend on it being a sign language that the
>> user can use? If the UA is configured for all the specific sign
>> languages that the user can deal with, then there is no need to
>> recognize other sign languages generically.
>>
> <GH>We are talking about specific language tags here and knowing what
> modality they are used for. The user needs to specify which sign
> languages they prefer to use. The callee application can be made to look
> for gaps between what the caller offers and what the callee can accept,
> and from that deduce which type of conversion and which languages are
> needed, and invoke that as a relay service. That invocation can be made
> completely table driven, with corresponding translation profiles for the
> available relay services. But it is more likely that it is done by
> having some knowledge about which languages are sign languages and which
> are spoken languages, and sending the call to the relay service to try
> to sort out whether they can handle the translation.
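A minimal sketch of the gap check described above, assuming a hypothetical per-medium preference structure (none of these names come from the draft):

```python
# Illustrative gap check: if the caller's offered languages and the
# callee's acceptable languages for a medium do not overlap, a relay
# (interpreting) service may be needed for the call.
def needs_relay(offered, acceptable):
    """True when no offered language matches what the callee accepts."""
    return not (set(offered) & set(acceptable))

caller_offer = {"video": ["ase"]}   # caller offers ASL in video
callee_accept = {"audio": ["en"]}   # callee handles spoken English only

gap = needs_relay(caller_offer.get("video", []),
                  callee_accept.get("video", []))
# gap is True here, so the application would look for a relay service
# with a translation profile covering "ase" and "en".
```

As noted above, the invocation itself can be table driven: a table of relay services keyed by (from-language, to-language) pairs, consulted whenever a gap is found.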
>
>>
>>
> So, the answer is: no, the application does not really have any
> knowledge about which modality a language tag represents in its used
> position. If the user selects very rare language tag indications for a
> media, then a match will just become very unlikely.
>
> Where does this discussion take us? Should we modify section 5.4 again?
>
> Thanks
> Gunnar
>
>>     Thanks,
>>     Paul
>>
>>>       5.4 Undefined Combinations
>>>       <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>
>>>
>>>
>>>
>>>     The behavior when specifying a non-signed language tag for a video
>>>     media stream, or a signed language tag for an audio or text media
>>>     stream, is not defined in this document.
>>>
>>>     The problem of knowing which language tags are signed and which are
>>>     not is out of scope of this document.
>>>
>>>
>>>
>>> On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat <pkyzivat@alum.mit.edu
>>> <mailto:pkyzivat@alum.mit.edu>> wrote:
>>>
>>>     On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>>
>>>         Paul,
>>>         On 2017-10-15 at 01:19, Paul Kyzivat wrote:
>>>
>>>             On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>
>>>                 Gunnar said:
>>>
>>>                 "Applications not implementing such specific notations
>>>                 may use the following simple deductions.
>>>
>>>                 - A language tag in audio media is supposed to indicate
>>>                 spoken modality.
>>>
>>>                 [BA] Even a tag with "Sign Language" in the
>>>                 description??
>>>
>>>                 - A language tag in text media is supposed to indicate
>>>                 written modality.
>>>
>>>                 [BA] If the tag has "Sign Language" in the description,
>>>                 can this document really say that?
>>>
>>>                 - A language tag in video media is supposed to indicate
>>>                 visual sign language modality except for the case when
>>>                 it is supposed to indicate a view of a speaking person
>>>                 mentioned in section 5.2 characterized by the exact
>>>                 same language tag also appearing in an audio media
>>>                 specification.
>>>
>>>                 [BA] It seems like an over-reach to say that a spoken
>>>                 language tag in video media should instead be
>>>                 interpreted as a request for Sign Language.  If this
>>>                 were done, would it always be clear which Sign Language
>>>                 was intended?  And could we really assume that both
>>>                 sides, if negotiating a spoken language tag in video
>>>                 media, were really indicating the desire to sign?  It
>>>                 seems like this could easily result in
>>>                 interoperability failure.
>>>
>>>
>>>             IMO the right way to indicate that two (or more) media
>>>             streams are conveying alternative representations of the
>>>             same language content is by grouping them with a new
>>>             grouping attribute. That can tie together an audio with a
>>>             video and/or text. A language tag for sign language on the
>>>             video stream then clarifies to the recipient that it is
>>>             sign language. The grouping attribute by itself can
>>>             indicate that these streams are conveying language.
>>>
>>>         <GH>Yes, and that is proposed in
>>>         draft-hellstrom-slim-modality-grouping    with two kinds of
>>>         grouping: One kind of grouping to tell that two or more
>>>         languages in different streams are alternatives with the same
>>>         content and a priority order is assigned to them to guide the
>>>         selection of which one to use during the call. The other kind
>>>         of grouping tells that two or more languages in different
>>>         streams are desired together, with the same language content
>>>         but different modalities (such as the use for captioned
>>>         telephony with the same content provided in both speech and
>>>         text, or sign language interpretation where you see the
>>>         interpreter, or possibly spoken language interpretation with
>>>         the languages provided in different audio streams). I hope
>>>         that draft can be progressed. I see it as a needed complement
>>>         to the pure language indications per media.
>>>
>>>
>>>     Oh, sorry. I did read that draft but forgot about it.
>>>
>>>         The discussion in this thread is more about how an application
>>>         would easily know that e.g. "ase" is a sign language and "en"
>>>         is a spoken (or written) language, and also a discussion about
>>>         what kinds of languages are allowed and indicated by default
>>>         in each media type. It was not at all about falsely using
>>>         language tags in the wrong media type, as Bernard understood
>>>         my wording. It was rather a limitation to what modalities are
>>>         used in each media type, and how to know the modality in cases
>>>         that are not evident, e.g. "application" and "message" media
>>>         types.
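For the question of how an application would "know" that "ase" is a sign language: one possible approach, assuming the application carries a local snapshot of the IANA Language Subtag Registry, is to check the registry Description field, as the "Sign Language" examples in this thread hint. A sketch (the registry excerpt below is a tiny illustrative subset):

```python
# Illustrative heuristic: decide whether a primary language subtag
# denotes a sign language by checking the Description field from a local
# snapshot of the IANA Language Subtag Registry. A real application
# would parse the full registry file rather than this tiny excerpt.
REGISTRY_DESCRIPTIONS = {
    "ase": "American Sign Language",
    "ssp": "Spanish Sign Language",
    "sgn": "Sign languages",
    "en": "English",
}

def is_sign_language(tag: str) -> bool:
    """True if the tag's primary subtag is described as a sign language."""
    primary = tag.split("-")[0].lower()
    desc = REGISTRY_DESCRIPTIONS.get(primary, "")
    return "sign language" in desc.lower()
```

This is only a heuristic over registry text, not a normative rule; as discussed in the thread, the draft deliberately leaves the signed/non-signed classification of tags out of scope.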
>>>
>>>
>>>     What do you mean by "know"? Is it for the *UA* software to know, or
>>>     for the human user of the UA to know? Presumably a human user that
>>>     cares will understand this if presented with the information in
>>>     some way. But typically this isn't presented to the user.
>>>
>>>     For the software to know must mean that it will behave differently
>>>     for a tag that represents a sign language than for one that
>>>     represents a spoken or written language. What is it that it will do
>>>     differently?
>>>
>>>              Thanks,
>>>              Paul
>>>
>>>
>>>         Right now we have returned to a very simple rule: we define
>>>         only use of spoken language in audio media, written language
>>>         in text media, and sign language in video media.
>>>         We have discussed other uses, such as a view of a speaking
>>>         person in video, text overlay on video, a sign language
>>>         notation in text media, written language in message media,
>>>         written language in WebRTC data channels, and signed, written
>>>         and spoken language in bucket media, maybe declared as
>>>         application media. We do not define these cases. They are
>>>         just not defined, not forbidden. They may be defined in the
>>>         future.
>>>
>>>         My proposed wording in section 5.4 got too many
>>>         misunderstandings, so I gave up on it. I think we can live
>>>         with 5.4 as it is in version -16.
>>>
>>>         Thanks,
>>>         Gunnar
>>>
>>>
>>>
>>>             (IIRC I suggested something along these lines a long time
>>> ago.)
>>>
>>>                  Thanks,
>>>                  Paul
>>>
>>>             _______________________________________________
>>>             SLIM mailing list
>>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>             https://www.ietf.org/mailman/listinfo/slim
>>>             <https://www.ietf.org/mailman/listinfo/slim>
>>>
>>>
>>>
>>>
>>>
>>>
>>
> --
> -----------------------------------------
> Gunnar Hellström
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>

--94eb2c12370eda99f7055b9e274e
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Paul said:=C2=A0<div><br></div><div>&quot;&quot;<span styl=
e=3D"color:rgb(80,0,80);font-size:12.8px">- can the UA use this information=
 to change how to render the media?&quot;</span></div><div><span style=3D"c=
olor:rgb(80,0,80);font-size:12.8px"><br></span></div><div><span style=3D"co=
lor:rgb(80,0,80);font-size:12.8px">[BA]=C2=A0 If the video is used for sign=
ing, an application might infer an encoder preference for frame rate over r=
esolution (e.g. in WebRTC, RTCRtpParameters.degradationPreference =3D &quot=
;maintain-framerate&quot; )</span></div><div><span style=3D"color:rgb(80,0,=
80);font-size:12.8px"><br></span></div><div><span style=3D"color:rgb(80,0,8=
0);font-size:12.8px">See:=C2=A0=C2=A0</span><font color=3D"#500050"><span s=
tyle=3D"font-size:12.8px"><a href=3D"https://rawgit.com/w3c/webrtc-pc/maste=
r/webrtc.html#dom-rtcrtpparameters-degradationpreference">https://rawgit.co=
m/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreferen=
ce</a></span></font></div></div><div class=3D"gmail_extra"><br><div class=
=3D"gmail_quote">On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellstr=C3=B6m <sp=
an dir=3D"ltr">&lt;<a href=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D=
"_blank">gunnar.hellstrom@omnitor.se</a>&gt;</span> wrote:<br><blockquote c=
lass=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;=
padding-left:1ex"><span class=3D"">Den 2017-10-15 kl. 21:27, skrev Paul Kyz=
ivat:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
On 10/15/17 1:49 PM, Bernard Aboba wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
Paul said:<br>
<br>
&quot;For the software to know must mean that it will behave differently fo=
r a tag that represents a sign language than for one that represents a spok=
en or written language. What is it that it will do differently?&quot;<br>
<br>
[BA] In terms of behavior based on the signed/non-signed distinction, in -1=
7 the only reference appears to be in Section 5.4, stating that certain com=
binations are not defined in the document (but that definition of those com=
binations was out of scope):<br>
</blockquote>
<br>
I&#39;m asking whether this is a distinction without a difference. I&#39;m =
not asking whether this makes a difference in the *protocol*, but whether i=
n the end it benefits the participants in the call in any way. <br>
</blockquote></span>
&lt;GH&gt;Good point, I was on my way to make a similar comment earlier tod=
ay. The difference it makes for applications to &quot;know&quot; what modal=
ity a language tag represents in its used position seems to be only for ima=
gined functions that are out of scope for the protocol specification.<span =
class=3D""><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
For instance:<br>
<br>
- does it help the UA to decide how to alert the callee, so that the<br>
=C2=A0 callee can better decide whether to accept the call or instruct the<=
br>
=C2=A0 UA about how to handle the call?<br>
</blockquote></span>
&lt;GH&gt;Yes, for a regular human user -to-user call, the result of the ne=
gotiation must be presented to the participants, so that they can start the=
 call with a language and modality that is agreed.<br>
That presentation could be exactly the description from the language tag re=
gistry, and then no &quot;knowledge&quot; is needed from the application. B=
ut it is more likely that the application has its own string for presentati=
on of the negotiated language and modality. So that will be presented. But =
it is still found by a table lookup between language tag and string for a l=
anguage name, so no real knowledge is needed.<br>
We have said many times that the way the application tells the user the res=
ult of the negotiation is out of scope for the draft, but it is good to dis=
cuss and know that it can be done.<br>
A similar mechanism is also needed for configuration of the user&#39;s lang=
uage preference profile further discussed below.<span class=3D""><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
<br>
- does it allow the UA to make a decision whether to accept the media?<br>
</blockquote></span>
&lt;GH&gt;No, the media should be accepted regardless of the result of the =
language negotiation.<span class=3D""><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
<br>
- can the UA use this information to change how to render the media?<br>
</blockquote></span>
&lt;GH&gt;Yes, for the specialized text notation of sign language we have d=
iscussed but currently placed out of scope, a very special rendering applic=
ation is needed. The modality would be recognized by a script subtag to a s=
ign language tag used in text media. However, I think that would be best to=
 also use it with a specific text subtype, so that the rendering can be con=
trolled by invocation of a &quot;codec&quot; for that rendering.<span class=
=3D""><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
<br>
And if there is something like this, will the UA be able to do this generic=
ally based on whether the media is sign language or not, or will the UA nee=
d to already understand *specific* sign language tags?<br>
</blockquote></span>
&lt;GH&gt;Applications will need to have localized versions of the names fo=
r the different sign languages and also for spoken languages and written la=
nguages, to be used in setting of preferences and announcing the results of=
 the negotiation. It might be overkill to have such localized names for all=
 languages in the IANA language registry, so it will need to be able to han=
dle localized names of a subset och the registry. With good design however,=
 this is just an automatic translation between a language tag and a corresp=
onding name, so it does in fact not require any &quot;knowledge&quot; of wh=
at modality is used with each language tag.<br>
The application can ask for the configuration:<br>
&quot;Which languages do you want to offer to send in video&quot;<br>
&quot;Which languages do you want to offer to send in text&quot;<br>
&quot;Which languages do you want to offer to send in audio&quot;<br>
&quot;Which languages do you want to be prepared to receive in video&quot;<=
br>
&quot;Which languages do you want to be prepared to receive in text&quot;<b=
r>
&quot;Which languages do you want to be prepared to receive in audio&quot;<=
br>
<br>
And for each question provide a list of language names to select from. When=
 the selection is made, the corresponding language tag is placed in the pro=
file for negotiation.<br>
<br>
If the application provides the whole IANA language registry to the user fo=
r each question, then there is a possibility that the user by mistake selec=
ts a language that requires another modality than the question was about. I=
f the application shall limit the lists provided for each question, then it=
 will need a kind of knowledge about which language tags suit each modality=
 (and media)<span class=3D""><br>
<br>
<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
<br>
E.g., A UA serving a deaf person might automatically introduce a sign langu=
age interpreter into an incoming audio-only call. If the incoming call has =
both audio and video then the video *might* be for conveying sign language,=
 or not. If not then the UA will still want to bring in a sign language int=
erpreter. But is knowing the call generically contains sign language suffic=
ient to decide against bringing in an interpreter? Or must that depend on i=
t being a sign language that the user can use? If the UA is configured for =
all the specific sign languages that the user can deal with then there is n=
o need to recognize other sign languages generically.<br>
</blockquote></span>
&lt;GH&gt;We are talking about specific language tags here and knowing what=
 modality they are used for. The user needs to specify which sign languages=
 they prefer to use. The callee application can be made to look for gaps be=
tween what the caller offers and what the callee can accept, and from that =
deduct which type and languages for a conversion that is needed, and invoke=
 that as a relay service. That invocation can be made completely table driv=
en and have corresponding translation profiles for available relay services=
. But it is more likely that it is done by having some knowledge about whic=
h languages are sign languages and which are spoken languages and sending t=
he call to the relay service to try to sort out if they can handle the tran=
slation.<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
<br>
<br>
</blockquote>
So, the answer is - no, the application does not really have any knowledge =
about which modality a language tag represents in its used position. If the=
 user selects to indicate very rare language tag indications for a media, t=
hen a match will just become very unlikely.<br>
<br>
Where does this discussion take us? Should we modify section 5.4 again?<br>
<br>
Thanks<span class=3D"HOEnZb"><font color=3D"#888888"><br>
Gunnar</font></span><div class=3D"HOEnZb"><div class=3D"h5"><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
=C2=A0=C2=A0=C2=A0=C2=A0Thanks,<br>
=C2=A0=C2=A0=C2=A0=C2=A0Paul<br>
<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 5.4<br>
&lt;<a href=3D"https://tools.ietf.org/html/draft-ietf-slim-negotiating-huma=
n-language-17#section-5.4" rel=3D"noreferrer" target=3D"_blank">https://too=
ls.ietf.org/html/d<wbr>raft-ietf-slim-negotiating-hum<wbr>an-language-17#se=
ction-5.4</a>&gt;.<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Undefined Combinations<br>
<br>
<br>
<br>
=C2=A0=C2=A0=C2=A0 The behavior when specifying a non-signed language tag f=
or a video<br>
=C2=A0=C2=A0=C2=A0 media stream, or a signed language tag for an audio or t=
ext media<br>
=C2=A0=C2=A0=C2=A0 stream, is not defined in this document.<br>
<br>
=C2=A0=C2=A0=C2=A0 The problem of knowing which language tags are signed an=
d which are<br>
=C2=A0=C2=A0=C2=A0 not is out of scope of this document.<br>
<br>
<br>
<br>
On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat &lt;<a href=3D"mailto:pkyziv=
at@alum.mit.edu" target=3D"_blank">pkyzivat@alum.mit.edu</a> &lt;mailto:<a =
href=3D"mailto:pkyzivat@alum.mit.edu" target=3D"_blank">pkyzivat@alum.mit.e=
du</a>&gt;<wbr>&gt; wrote:<br>
<br>
=C2=A0=C2=A0=C2=A0 On 10/15/17 2:24 AM, Gunnar Hellstr=C3=B6m wrote:<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Paul,<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Den 2017-10-15 kl. 01:19, skrev =
Paul Kyzivat:<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 On 10/14=
/17 2:03 PM, Bernard Aboba wrote:<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 Gunnar said:<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 &quot;Applications not implementing such specific notations=
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 may use the following simple deductions.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 - A language tag in audio media is supposed to indicate<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 spoken modality.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 [BA] Even a tag with &quot;Sign Language&quot; in the descr=
iption??<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 - A language tag in text media is supposed to indicate =C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0 written modality.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 [BA] If the tag has &quot;Sign Language&quot; in the descri=
ption,<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 can this document really say that?<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 - A language tag in video media is supposed to indicate<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 visual sign language modality except for the case when<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 it is supposed to indicate a view of a speaking person<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 mentioned in section 5.2 characterized by the exact same<br=
>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 language tag also appearing in an audio media specification=
.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 [BA] It seems like an over-reach to say that a spoken<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 language tag in video media should instead be<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 interpreted as a request for Sign Language.=C2=A0 If this<b=
r>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 were done, would it always be clear which Sign Language<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 was intended?=C2=A0 And could we really assume that both<br=
>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 sides, if negotiating a spoken language tag in video<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 media, were really indicating the desire to sign?=C2=A0 It<=
br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 seems like this could easily result interoperability<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 failure.<br>
<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 IMO the =
right way to indicate that two (or more) media<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 streams =
are conveying alternative representations of the<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 same lan=
guage content is by grouping them with a new<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 grouping=
 attribute. That can tie together an audio with a<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 video an=
d/or text. A language tag for sign language on the<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 video st=
ream then clarifies to the recipient that it is sign<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 language=
. The grouping attribute by itself can indicate that<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 these st=
reams are conveying language.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 &lt;GH&gt;Yes, and that is propo=
sed in<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 draft-hellstrom-slim-modality-<w=
br>grouping=C2=A0=C2=A0=C2=A0 with two kinds of<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 grouping: One kind of grouping t=
o tell that two or more<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 languages in different streams a=
re alternatives with the same<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 content and a priority order is =
assigned to them to guide the<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 selection of which one to use du=
ring the call. The other kind of<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 grouping telling that two or mor=
e languages in different streams<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 are desired together with the sa=
me language content but<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 different modalities ( such as t=
he use for captioned telephony<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 with the same content provided i=
n both speech and text, or sign<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 language interpretation where yo=
u see the interpreter, or<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 possibly spoken language interpr=
etation with the languages<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 provided in different audio stre=
ams ). I hope that that draft<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 can be progressed. I see it as a=
 needed complement to the pure<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 language indications per media.<=
br>
<br>
<br>
=C2=A0=C2=A0=C2=A0 Oh, sorry. I did read that draft but forgot about it.<br=
>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 The discussion in this thread is=
 more about how an application<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 would easily know that e.g. &quo=
t;ase&quot; is a sign language and &quot;en&quot; is<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 a spoken (or written) language, =
and also a discussion about what<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 kinds of languages are allowed a=
nd indicated by default in each<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 media type. It was not at all ab=
out falsely using language tags<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 in the wrong media type as Berna=
rd understood my wording. It was<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 rather a limitation to what moda=
lities are used in each media<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 type and how to know the modalit=
y with cases that are not<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 evident, e.g. &quot;application&=
quot; and &quot;message&quot; media types.<br>
<br>
<br>
=C2=A0=C2=A0=C2=A0 What do you mean by &quot;know&quot;? Is it for the *UA*=
 software to know, or<br>
=C2=A0=C2=A0=C2=A0 for the human user of the UA to know? Presumably a human=
 user that<br>
=C2=A0=C2=A0=C2=A0 cares will understand this if presented with the informa=
tion in some<br>
=C2=A0=C2=A0=C2=A0 way. But typically this isn&#39;t presented to the user.=
<br>
<br>
=C2=A0=C2=A0=C2=A0 For the software to know must mean that it will behave d=
ifferently<br>
=C2=A0=C2=A0=C2=A0 for a tag that represents a sign language than for one t=
hat<br>
=C2=A0=C2=A0=C2=A0 represents a spoken or written language. What is it that=
 it will do<br>
=C2=A0=C2=A0=C2=A0 differently?<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 Thanks,<br>
=C2=A0=C2=A0=C2=A0=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 Paul<br>
<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Right now we have returned to a =
very simple rule: we define only<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 use of spoken language in audio =
media, written language in text<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 media and sign language in video=
 media.<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 We have discussed other use, suc=
h as a view of a speaking person<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 in video, text overlay on video,=
 a sign language notation in<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 text media, written language in =
message media, written language<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 in WebRTC data channels, sign wr=
itten and spoken in bucket media<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 maybe declared as application me=
dia. We do not define these<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 cases. They are just not defined=
, not forbidden. They may be<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 defined in the future.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 My proposed wording in section 5=
.4 got too many<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 misunderstandings so I gave up w=
ith it. I think we can live with<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 5.4 as it is in version -16.<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Thanks,<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Gunnar<br>
<br>
<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 (IIRC I =
suggested something along these lines a long time ago.)<br>
<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 =
=C2=A0=C2=A0=C2=A0=C2=A0Thanks,<br>
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 =
=C2=A0=C2=A0=C2=A0=C2=A0Paul<br>
<br>
           _______________________________________________<br>
           SLIM mailing list<br>
           <a href="mailto:SLIM@ietf.org" target="_blank">SLIM@ietf.org</a> &lt;mailto:<a href="mailto:SLIM@ietf.org" target="_blank">SLIM@ietf.org</a>&gt;<br>
           <a href="https://www.ietf.org/mailman/listinfo/slim" rel="noreferrer" target="_blank">https://www.ietf.org/mailman/listinfo/slim</a><br>
           &lt;<a href="https://www.ietf.org/mailman/listinfo/slim" rel="noreferrer" target="_blank">https://www.ietf.org/mailman/listinfo/slim</a>&gt;<br>
<br>
<br>
<br>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br></div></div><div class=3D"HOEnZb"><div class=3D"h5">
-- <br>
------------------------------<wbr>-----------<br>
Gunnar Hellstr=C3=B6m<br>
Omnitor<br>
<a href=3D"mailto:gunnar.hellstrom@omnitor.se" target=3D"_blank">gunnar.hel=
lstrom@omnitor.se</a><br>
<a href=3D"tel:%2B46%20708%20204%20288" value=3D"+46708204288" target=3D"_b=
lank">+46 708 204 288</a><br>
<br>
</div></div></blockquote></div><br></div>

--94eb2c12370eda99f7055b9e274e--


From nobody Sun Oct 15 18:09:48 2017
Return-Path: <rg+ietf@randy.pensive.org>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A8BCE1332DA for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 18:09:46 -0700 (PDT)
X-Quarantine-ID: <f48Q7JsAnmq6>
X-Virus-Scanned: amavisd-new at amsl.com
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "MIME-Version"
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level: 
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id f48Q7JsAnmq6 for <slim@ietfa.amsl.com>; Sun, 15 Oct 2017 18:09:45 -0700 (PDT)
Received: from turing.pensive.org (turing.pensive.org [99.111.97.161]) by ietfa.amsl.com (Postfix) with ESMTP id C5D961332D4 for <slim@ietf.org>; Sun, 15 Oct 2017 18:09:45 -0700 (PDT)
Received: from [172.20.60.54] (99.111.97.161) by turing.pensive.org with ESMTP (EIMS X 3.3.9); Sun, 15 Oct 2017 18:13:52 -0700
Mime-Version: 1.0
Message-Id: <p06240600d609b742af25@[172.20.60.54]>
In-Reply-To: <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se>
X-Mailer: Eudora for Mac OS X
Date: Sun, 15 Oct 2017 18:09:40 -0700
To: Gunnar =?iso-8859-1?Q?Hellstr=F6m?=  <gunnar.hellstrom@omnitor.se>, Paul Kyzivat <pkyzivat@alum.mit.edu>, Bernard Aboba <bernard.aboba@gmail.com>
From: Randall Gellens <rg+ietf@randy.pensive.org>
Cc: slim@ietf.org
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1" ; format="flowed"
Content-Transfer-Encoding: quoted-printable
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/Y-MK5bWuvPcvt3fSTou16iVbEVo>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 16 Oct 2017 01:09:46 -0000

At 11:22 PM +0200 10/15/17, Gunnar Hellström wrote:

>  Where does this discussion take us? Should we modify section 5.4 again?

No.  That should be for follow-on work.

-- 
Randall Gellens
Opinions are personal;    facts are suspect;    I speak for myself only
-------------- Randomly selected tag: ---------------
The First Amendment is often inconvenient. But that is besides the
point.  Inconvenience does not absolve the government of its
obligation to tolerate speech. --Justice Anthony Kennedy, in 91-155


From nobody Mon Oct 16 15:21:16 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4ADE213247A for <slim@ietfa.amsl.com>; Mon, 16 Oct 2017 15:21:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id z0VXXnwHjjCK for <slim@ietfa.amsl.com>; Mon, 16 Oct 2017 15:21:10 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 4F1881321A7 for <slim@ietf.org>; Mon, 16 Oct 2017 15:21:09 -0700 (PDT)
X-Halon-ID: 407bb653-b2c0-11e7-99c0-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id 407bb653-b2c0-11e7-99c0-005056917f90; Tue, 17 Oct 2017 00:20:44 +0200 (CEST)
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Paul Kyzivat <pkyzivat@alum.mit.edu>, slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se> <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se>
Date: Tue, 17 Oct 2017 00:21:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------C331B49A7F658633C5C6AB18"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/qbszXwYxP66BOpjUWQo0W8YYlQo>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 16 Oct 2017 22:21:15 -0000

This is a multi-part message in MIME format.
--------------C331B49A7F658633C5C6AB18
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2017-10-16 at 01:21, Bernard Aboba wrote:
> Paul said:
>
> "- can the UA use this information to change how to render the media?"
>
> [BA]  If the video is used for signing, an application might infer an 
> encoder preference for frame rate over resolution (e.g. in WebRTC, 
> RTCRtpParameters.degradationPreference = "maintain-framerate" )
<GH>Right, that is a valid example of how real "knowledge" of the 
modality can be used by the application.
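A minimal sketch of that inference (the helper name `pickDegradationPreference` is my own; `degradationPreference` and the `"maintain-framerate"` value are from the WebRTC RTCRtpParameters API, and the commented lines show how it would be applied to a browser RTCRtpSender):

```javascript
// Sketch: when the negotiated video language is a sign language, prefer
// frame rate over resolution under bandwidth pressure; otherwise let the
// encoder balance the two. Both strings are standard
// RTCDegradationPreference enum values.
function pickDegradationPreference(isSignLanguage) {
  return isSignLanguage ? "maintain-framerate" : "balanced";
}

// In a browser it would be applied to the video sender roughly like this:
//   const params = sender.getParameters();
//   params.degradationPreference = pickDegradationPreference(true);
//   await sender.setParameters(params);
```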


And, as a response to issue #43,

A simple way is to say:

Video media descriptions shall only contain sign language tags.
Audio media descriptions shall only contain language tags for spoken language.
Text media descriptions shall only contain language tags for written language.
Use of other media descriptions, such as message and application, with language indications requires other specifications for how to assess the modality for non-signed languages.
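Under that simple rule, an SDP offer using the draft's hlang-send/hlang-recv attributes might look like the following sketch (ports and payload types are illustrative): spoken English in audio, American Sign Language ("ase") in video, written English in text.

```
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=video 51372 RTP/AVP 31
a=hlang-send:ase
a=hlang-recv:ase
m=text 45020 RTP/AVP 103
a=hlang-send:en
a=hlang-recv:en
```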

The current 5.4 does not mention our main problem with the language 
tags: there is no difference between a tag used for spoken language and 
one used for written language. We should have made better efforts to 
solve that problem long ago, but we have not.

5.4 can be modified to specify the simple limited case and the problems 
that block us from specifying other cases:


      5.4. Media and Modality Combination Problems
      <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>




    The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.

    The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.

    The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.

    Which language tags are signed and which are not can be deduced
    from the IANA language tag registry. How this is done is out of scope for this document.


--------------------------------------------------------


But if we want to allow more cases, we need to consider the following 
complications:


1. To assess whether a language tag represents a sign language, the 
application can look for the word "sign" in the description in the IANA 
language registry, or a copy thereof, as Randall already indicated.

2. For written language used as a text component in a video stream, it 
is possible to code this for languages requiring a script subtag, but 
not for languages with suppressed script subtags.

3. We have also discussed proposals for how to code written language in 
a video stream for languages not requiring a script subtag, but our 
proposals did not gain acceptance. So we need to say that this is 
currently undefined.

4. We also discussed how to code a view of a speaking person in video, 
and said that it could be done by using the "definitively not written" 
script subtag on a non-signed language tag in video. But that was not 
appreciated by the language experts. Another option was to not allow 
written language overlaid on video, and that is the option used lately 
(up to version -16 or so).

5. For audio media, spoken language is the only case we have for 
language tags, so that is easy to code and assess.

6. For written language in text media, a check can be made whether 
"sign" is part of the language tag description; if it is not, it is a 
written language.

7. For signed language in text media, a check can be made whether 
"sign" is part of the language tag description; if it is, it is a 
signed language in text notation (extremely unusual).

8. For language tags in media other than audio, video and text, a 
description of how to assess the modality, especially for non-signed 
languages, is needed before such use.


We can construct a section 5.4 to describe this situation, but I doubt 
that it is worth the effort.
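Points 1, 6 and 7 above amount to a registry lookup. A minimal sketch (the registry excerpt is hardcoded for illustration, and `isSignLanguageTag` is a hypothetical helper name; a real application would parse the full IANA Language Subtag Registry file):

```javascript
// Sketch of the "look for 'sign' in the Description" rule. Hardcoded
// excerpt of the IANA Language Subtag Registry descriptions; a real
// application would parse the registry file itself.
const REGISTRY_DESCRIPTIONS = {
  ase: "American Sign Language",
  bfi: "British Sign Language",
  sgn: "Sign languages", // collection subtag, used as a prefix
  en: "English",
  sv: "Swedish",
};

// A tag is treated as signed if any of its subtags has a registry
// description mentioning "sign language(s)", which also covers tags
// using the "sgn" prefix such as "sgn-ase".
function isSignLanguageTag(tag) {
  return tag.toLowerCase().split("-").some((subtag) => {
    const desc = REGISTRY_DESCRIPTIONS[subtag];
    return desc !== undefined && /sign language/i.test(desc);
  });
}
```

As the thread notes, this is pure table lookup: the application gains no real "knowledge" of modality, it only maps tags through the registry text.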


>
> See: 
> https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference
>
> On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellström 
> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>
>     On 2017-10-15 at 21:27, Paul Kyzivat wrote:
>
>         On 10/15/17 1:49 PM, Bernard Aboba wrote:
>
>             Paul said:
>
>             "For the software to know must mean that it will behave
>             differently for a tag that represents a sign language than
>             for one that represents a spoken or written language. What
>             is it that it will do differently?"
>
>             [BA] In terms of behavior based on the signed/non-signed
>             distinction, in -17 the only reference appears to be in
>             Section 5.4, stating that certain combinations are not
>             defined in the document (but that definition of those
>             combinations was out of scope):
>
>
>         I'm asking whether this is a distinction without a difference.
>         I'm not asking whether this makes a difference in the
>         *protocol*, but whether in the end it benefits the
>         participants in the call in any way.
>
>     <GH>Good point, I was on my way to make a similar comment earlier
>     today. The difference it makes for applications to "know" what
>     modality a language tag represents in its used position seems to
>     be only for imagined functions that are out of scope for the
>     protocol specification.
>
>         For instance:
>
>         - does it help the UA to decide how to alert the callee, so
>         that the
>           callee can better decide whether to accept the call or
>         instruct the
>           UA about how to handle the call?
>
>     <GH>Yes, for a regular human user -to-user call, the result of the
>     negotiation must be presented to the participants, so that they
>     can start the call with a language and modality that is agreed.
>     That presentation could be exactly the description from the
>     language tag registry, and then no "knowledge" is needed from the
>     application. But it is more likely that the application has its
>     own string for presentation of the negotiated language and
>     modality. So that will be presented. But it is still found by a
>     table lookup between language tag and string for a language name,
>     so no real knowledge is needed.
>     We have said many times that the way the application tells the
>     user the result of the negotiation is out of scope for the draft,
>     but it is good to discuss and know that it can be done.
>     A similar mechanism is also needed for configuration of the user's
>     language preference profile further discussed below.
>
>
>         - does it allow the UA to make a decision whether to accept
>         the media?
>
>     <GH>No, the media should be accepted regardless of the result of
>     the language negotiation.
>
>
>         - can the UA use this information to change how to render the
>         media?
>
>     <GH>Yes, for the specialized text notation of sign language we
>     have discussed but currently placed out of scope, a very special
>     rendering application is needed. The modality would be recognized
>     by a script subtag to a sign language tag used in text media.
>     However, I think that would be best to also use it with a specific
>     text subtype, so that the rendering can be controlled by
>     invocation of a "codec" for that rendering.
>
>
>         And if there is something like this, will the UA be able to do
>         this generically based on whether the media is sign language
>         or not, or will the UA need to already understand *specific*
>         sign language tags?
>
>     <GH>Applications will need to have localized versions of the names
>     for the different sign languages and also for spoken languages and
>     written languages, to be used in setting of preferences and
>     announcing the results of the negotiation. It might be overkill to
>     have such localized names for all languages in the IANA language
>     registry, so it will need to be able to handle localized names of
>     a subset of the registry. With good design however, this is just
>     an automatic translation between a language tag and a
>     corresponding name, so it does in fact not require any "knowledge"
>     of what modality is used with each language tag.
>     The application can ask for the configuration:
>     "Which languages do you want to offer to send in video"
>     "Which languages do you want to offer to send in text"
>     "Which languages do you want to offer to send in audio"
>     "Which languages do you want to be prepared to receive in video"
>     "Which languages do you want to be prepared to receive in text"
>     "Which languages do you want to be prepared to receive in audio"
>
>     And for each question provide a list of language names to select
>     from. When the selection is made, the corresponding language tag
>     is placed in the profile for negotiation.
>
>     If the application provides the whole IANA language registry to
>     the user for each question, then there is a possibility that the
>     user by mistake selects a language that requires another modality
>     than the question was about. If the application shall limit the
>     lists provided for each question, then it will need a kind of
>     knowledge about which language tags suit each modality (and media)
>
>
>
>         E.g., A UA serving a deaf person might automatically introduce
>         a sign language interpreter into an incoming audio-only call.
>         If the incoming call has both audio and video then the video
>         *might* be for conveying sign language, or not. If not then
>         the UA will still want to bring in a sign language
>         interpreter. But is knowing the call generically contains sign
>         language sufficient to decide against bringing in an
>         interpreter? Or must that depend on it being a sign language
>         that the user can use? If the UA is configured for all the
>         specific sign languages that the user can deal with then there
>         is no need to recognize other sign languages generically.
>
>     <GH>We are talking about specific language tags here and knowing
>     what modality they are used for. The user needs to specify which
>     sign languages they prefer to use. The callee application can be
>     made to look for gaps between what the caller offers and what the
>     callee can accept, and from that deduce which type and languages
>     for a conversion that is needed, and invoke that as a relay
>     service. That invocation can be made completely table driven and
>     have corresponding translation profiles for available relay
>     services. But it is more likely that it is done by having some
>     knowledge about which languages are sign languages and which are
>     spoken languages and sending the call to the relay service to try
>     to sort out if they can handle the translation.
>
>
>
>     So, the answer is - no, the application does not really have any
>     knowledge about which modality a language tag represents in its
>     used position. If the user selects to indicate very rare language
>     tag indications for a media, then a match will just become very
>     unlikely.
>
>     Where does this discussion take us? Should we modify section 5.4
>     again?
>
>     Thanks
>     Gunnar
>
>             Thanks,
>             Paul
>
>                   5.4
>             <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4
>             <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>>.
>                   Undefined Combinations
>
>
>
>                 The behavior when specifying a non-signed language tag
>             for a video
>                 media stream, or a signed language tag for an audio or
>             text media
>                 stream, is not defined in this document.
>
>                 The problem of knowing which language tags are signed
>             and which are
>                 not is out of scope of this document.
>
>
>
>             On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
>             <pkyzivat@alum.mit.edu <mailto:pkyzivat@alum.mit.edu>
>             <mailto:pkyzivat@alum.mit.edu
>             <mailto:pkyzivat@alum.mit.edu>>> wrote:
>
>                 On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>
>                     Paul,
>                     On 2017-10-15 at 01:19, Paul Kyzivat wrote:
>
>                         On 10/14/17 2:03 PM, Bernard Aboba wrote:
>
>                             Gunnar said:
>
>                             "Applications not implementing such
>             specific notations
>                             may use the following simple deductions.
>
>                             - A language tag in audio media is
>             supposed to indicate
>                             spoken modality.
>
>                             [BA] Even a tag with "Sign Language" in
>             the description??
>
>                             - A language tag in text media is supposed
>             to indicate                 written modality.
>
>                             [BA] If the tag has "Sign Language" in the
>             description,
>                             can this document really say that?
>
>                             - A language tag in video media is
>             supposed to indicate
>                             visual sign language modality except for
>             the case when
>                             it is supposed to indicate a view of a
>             speaking person
>                             mentioned in section 5.2 characterized by
>             the exact same
>                             language tag also appearing in an audio
>             media specification.
>
>                             [BA] It seems like an over-reach to say
>             that a spoken
>                             language tag in video media should instead be
>                             interpreted as a request for Sign
>             Language.  If this
>                             were done, would it always be clear which
>             Sign Language
>                             was intended?  And could we really assume
>             that both
>                             sides, if negotiating a spoken language
>             tag in video
>                             media, were really indicating the desire
>             to sign?  It
>                             seems like this could easily result
>             interoperability
>                             failure.
>
>
>                         IMO the right way to indicate that two (or
>             more) media
>                         streams are conveying alternative
>             representations of the
>                         same language content is by grouping them with
>             a new
>                         grouping attribute. That can tie together an
>             audio with a
>                         video and/or text. A language tag for sign
>             language on the
>                         video stream then clarifies to the recipient
>             that it is sign
>                         language. The grouping attribute by itself can
>             indicate that
>                         these streams are conveying language.
>
>                     <GH>Yes, and that is proposed in
>                     draft-hellstrom-slim-modality-grouping with two
>             kinds of
>                     grouping: One kind of grouping to tell that two or
>             more
>                     languages in different streams are alternatives
>             with the same
>                     content and a priority order is assigned to them
>             to guide the
>                     selection of which one to use during the call. The
>             other kind of
>                     grouping telling that two or more languages in
>             different streams
>                     are desired together with the same language
>             content but
>                     different modalities ( such as the use for
>             captioned telephony
>                     with the same content provided in both speech and
>             text, or sign
>                     language interpretation where you see the
>             interpreter, or
>                     possibly spoken language interpretation with the
>             languages
>                     provided in different audio streams ). I hope that
>             that draft
>                     can be progressed. I see it as a needed complement
>             to the pure
>                     language indications per media.
>
>
>                 Oh, sorry. I did read that draft but forgot about it.
>
>                     The discussion in this thread is more about how an
>             application
>                     would easily know that e.g. "ase" is a sign
>             language and "en" is
>                     a spoken (or written) language, and also a
>             discussion about what
>                     kinds of languages are allowed and indicated by
>             default in each
>                     media type. It was not at all about falsely using
>             language tags
>                     in the wrong media type as Bernard understood my
>             wording. It was
>                     rather a limitation to what modalities are used in
>             each media
>                     type and how to know the modality with cases that
>             are not
>                     evident, e.g. "application" and "message" media types.
>
>
>                 What do you mean by "know"? Is it for the *UA*
>             software to know, or
>                 for the human user of the UA to know? Presumably a
>             human user that
>                 cares will understand this if presented with the
>             information in some
>                 way. But typically this isn't presented to the user.
>
>                 For the software to know must mean that it will behave
>             differently
>                 for a tag that represents a sign language than for one
>             that
>                 represents a spoken or written language. What is it
>             that it will do
>                 differently?
>
>                          Thanks,
>                          Paul
>
>
>                     Right now we have returned to a very simple rule:
>             we define only
>                     use of spoken language in audio media, written
>             language in text
>                     media and sign language in video media.
>                     We have discussed other use, such as a view of a
>             speaking person
>                     in video, text overlay on video, a sign language
>             notation in
>                     text media, written language in message media,
>             written language
>                     in WebRTC data channels, sign written and spoken
>             in bucket media
>                     maybe declared as application media. We do not
>             define these
>                     cases. They are just not defined, not forbidden.
>             They may be
>                     defined in the future.
>
>                     My proposed wording in section 5.4 got too many
>                     misunderstandings so I gave up with it. I think we
>             can live with
>                     5.4 as it is in version -16.
>
>                     Thanks,
>                     Gunnar
>
>
>
>                         (IIRC I suggested something along these lines
>             a long time ago.)
>
>                              Thanks,
>                              Paul
>
>                         _______________________________________________
>                         SLIM mailing list
>             SLIM@ietf.org <mailto:SLIM@ietf.org> <mailto:SLIM@ietf.org
>             <mailto:SLIM@ietf.org>>
>             https://www.ietf.org/mailman/listinfo/slim
>             <https://www.ietf.org/mailman/listinfo/slim>
>                         <https://www.ietf.org/mailman/listinfo/slim
>             <https://www.ietf.org/mailman/listinfo/slim>>
>
>
>
>                 _______________________________________________
>                 SLIM mailing list
>             SLIM@ietf.org <mailto:SLIM@ietf.org> <mailto:SLIM@ietf.org
>             <mailto:SLIM@ietf.org>>
>             https://www.ietf.org/mailman/listinfo/slim
>             <https://www.ietf.org/mailman/listinfo/slim>
>                 <https://www.ietf.org/mailman/listinfo/slim
>             <https://www.ietf.org/mailman/listinfo/slim>>
>
>
>
>
>     -- 
>     -----------------------------------------
>     Gunnar Hellström
>     Omnitor
>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>     +46 708 204 288 <tel:%2B46%20708%20204%20288>
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------C331B49A7F658633C5C6AB18
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    On 2017-10-16 at 01:21, Bernard Aboba wrote:<br>
    <blockquote type="cite"
cite="mid:CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com">
      <div dir="ltr">Paul said: 
        <div><br>
        </div>
        <div>"<span style="color:rgb(80,0,80);font-size:12.8px">- can
            the UA use this information to change how to render the
            media?"</span></div>
        <div><span style="color:rgb(80,0,80);font-size:12.8px"><br>
          </span></div>
        <div><span style="color:rgb(80,0,80);font-size:12.8px">[BA]  If
            the video is used for signing, an application might infer an
            encoder preference for frame rate over resolution (e.g. in
            WebRTC, RTCRtpParameters.degradationPreference =
            "maintain-framerate" )</span></div>
      </div>
    </blockquote>
    &lt;GH&gt;Right, that is a valid example of how real "knowledge" of
    the modality can be used by the application. <br>
    <br>
    <br>
    And, as a response to issue #43,<br>
    <br>
    A simple way is to say:<br>
    <br>
    Video media descriptions shall only contain sign language tags.<br>
    Audio media descriptions shall only contain language tags for spoken
    language.<br>
    Text media descriptions shall only contain language tags for written
    language.<br>
    Use of other media descriptions, such as message and application, with
    language indications requires other specifications for how to assess
    the modality for non-signed languages.<br>
    <br>
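    As a sketch only (not proposed draft text), the simple per-media rule
    above could be expressed in application code as a plain lookup; media
    names follow SDP ("audio", "video", "text"), and anything else is left
    undefined:<br>
    <br>

```python
# Illustrative sketch of the simple rule: each of the three defined
# media types implies one modality for its language tags. Other media
# (e.g. "message", "application") are left undefined by the rule.

EXPECTED_MODALITY = {
    "audio": "spoken",
    "video": "signed",
    "text": "written",
}

def modality_for(media):
    """Return the assumed modality for a language tag in this media,
    or None when the combination is left undefined."""
    return EXPECTED_MODALITY.get(media)
```

    <br>
    With this rule, modality_for("audio") yields "spoken", while
    modality_for("message") yields nothing, signaling that a further
    specification is needed.<br>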
    The current 5.4 does not mention our main problem with the language
    tags: a tag itself makes no difference between use for spoken
    language and use for written language. We should have made better
    efforts to solve that problem long ago, but we have not.<br>
    <br>
    5.4 can be modified to specify the simple limited case and the
    problems that block us from specifying other cases:<br>
    <br>
    <pre class="newpage" style="font-size: 13.3333px; margin-top: 0px; margin-bottom: 0px; break-before: page; color: rgb(0, 0, 0); font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-style: initial; text-decoration-color: initial;"><span class="h3" style="line-height: 0pt; display: inline; white-space: pre; font-family: monospace; font-size: 1em; font-weight: bold;"><h3 style="line-height: 0pt; display: inline; white-space: pre; font-family: monospace; font-size: 1em; font-weight: bold;"><a class="selflink" name="section-5.4" href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4" style="color: black; text-decoration: none;">5.4</a>.  Media and modality Combination problems</h3></span>


   The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.

   The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.

   The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.

   Which language tags are signed and which are not can be deduced
   from the IANA language tag registry. How this is done is out of scope of this document.
</pre>
    <br>
    --------------------------------------------------------<br>
    <br>
    <br>
    But if we want to allow more cases, we need to consider the
    following complications:<br>
      <br>
    <br>
    1. To assess whether a language tag represents a sign language, the
    application can look for the word "sign" in the tag's description in
    the IANA language registry, or a copy thereof, as Randall already
    indicated.<br>
    <br>
    2. For written languages used as a text component in a video stream,
    it is possible to code this for languages requiring a script subtag,
    but not for languages with suppressed script subtags.<br>
    <br>
    3. We have also discussed proposals for how to code written language
    in a video stream for languages not requiring a script subtag, but
    our proposals did not get acceptance. So we need to say that this is
    currently undefined.<br>
    <br>
    4. We also discussed how to code a view of a speaking person in
    video and said that that could be done by using the "definitively
    not written" script subtag on a non-signed language tag in video.
    But that was not appreciated by the language experts. Another option
    was to not allow written language overlaid on video, and that is
    the option used lately (up to version -16 or so).<br>
    <br>
    5. For spoken language, audio is the only media in which we use
    that modality for language tags, so it is easy to code and assess.<br>
    <br>
    6. For written language in text media, a check can be made as to
    whether "sign" is part of the language tag description; if it is
    not, it is a written language.<br>
    <br>
    7. For signed language in text media, a check can be made as to
    whether "sign" is part of the language tag description; if it is, it
    is a signed language in text notation (extremely unusual).<br>
    <br>
    8. For language tags in media other than audio, video and text, a
    description of how to assess the modality, especially for non-signed
    languages, is needed before such use.<br>
    <br>
    <br>
    We can construct a section 5.4 to describe this situation, but I
    doubt that it is worth the effort.<br>
    <br>
    <br>
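    To make points 1, 6 and 7 concrete, here is a small, purely
    illustrative sketch of the "look for 'sign' in the registry
    description" heuristic, run against a tiny hand-copied excerpt of
    the IANA Language Subtag Registry (a real implementation would parse
    the full registry file, whose records are separated by "%%"):<br>
    <br>

```python
# Illustrative only: the naive "word 'sign' in the description"
# heuristic discussed above, applied to a hand-copied registry excerpt.

REGISTRY_EXCERPT = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
%%
Type: language
Subtag: sgn
Description: Sign languages
"""

def parse_registry(text):
    """Parse record-jar style registry records into a dict keyed by Subtag."""
    records = {}
    for chunk in text.split("%%"):
        fields = dict(
            line.split(": ", 1)
            for line in chunk.strip().splitlines()
            if ": " in line
        )
        if "Subtag" in fields:
            records[fields["Subtag"]] = fields
    return records

def looks_signed(subtag, registry):
    """True when the registry description suggests a sign language."""
    record = registry.get(subtag, {})
    return "sign" in record.get("Description", "").lower()
```

    <br>
    With this excerpt, "ase" and "sgn" are reported as signed and "en"
    as not signed. It remains only a heuristic, as the thread notes:
    it "knows" nothing about modality beyond the registry wording.<br>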
    <blockquote type="cite"
cite="mid:CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com">
      <div dir="ltr">
        <div><span style="color:rgb(80,0,80);font-size:12.8px"><br>
          </span></div>
        <div><span style="color:rgb(80,0,80);font-size:12.8px">See:  </span><font
            color="#500050"><span style="font-size:12.8px"><a
href="https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference"
                moz-do-not-send="true">https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference</a></span></font></div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Sun, Oct 15, 2017 at 2:22 PM, Gunnar
          Hellström <span dir="ltr">&lt;<a
              href="mailto:gunnar.hellstrom@omnitor.se" target="_blank"
              moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
              class="">Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:<br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex">
                On 10/15/17 1:49 PM, Bernard Aboba wrote:<br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex">
                  Paul said:<br>
                  <br>
                  "For the software to know must mean that it will
                  behave differently for a tag that represents a sign
                  language than for one that represents a spoken or
                  written language. What is it that it will do
                  differently?"<br>
                  <br>
                  [BA] In terms of behavior based on the
                  signed/non-signed distinction, in -17 the only
                  reference appears to be in Section 5.4, stating that
                  certain combinations are not defined in the document
                  (but that definition of those combinations was out of
                  scope):<br>
                </blockquote>
                <br>
                I'm asking whether this is a distinction without a
                difference. I'm not asking whether this makes a
                difference in the *protocol*, but whether in the end it
                benefits the participants in the call in any way. <br>
              </blockquote>
            </span>
            &lt;GH&gt;Good point, I was on my way to make a similar
            comment earlier today. The difference it makes for
            applications to "know" what modality a language tag
            represents in its used position seems to be only for
            imagined functions that are out of scope for the protocol
            specification.<span class=""><br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex">
                For instance:<br>
                <br>
                - does it help the UA to decide how to alert the callee,
                so that the<br>
                  callee can better decide whether to accept the call or
                instruct the<br>
                  UA about how to handle the call?<br>
              </blockquote>
            </span>
            &lt;GH&gt;Yes, for a regular human user-to-user call, the
            result of the negotiation must be presented to the
            participants, so that they can start the call with an agreed
            language and modality.<br>
            That presentation could be exactly the description from the
            language tag registry, and then no "knowledge" is needed
            from the application. But it is more likely that the
            application has its own string for presentation of the
            negotiated language and modality. So that will be presented.
            But it is still found by a table lookup between language tag
            and string for a language name, so no real knowledge is
            needed.<br>
            We have said many times that the way the application tells
            the user the result of the negotiation is out of scope for
            the draft, but it is good to discuss and know that it can be
            done.<br>
            A similar mechanism is also needed for configuration of the
            user's language preference profile further discussed below.<span
              class=""><br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex">
                <br>
                - does it allow the UA to make a decision whether to
                accept the media?<br>
              </blockquote>
            </span>
            &lt;GH&gt;No, the media should be accepted regardless of the
            result of the language negotiation.<span class=""><br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex">
                <br>
                - can the UA use this information to change how to
                render the media?<br>
              </blockquote>
            </span>
            &lt;GH&gt;Yes, for the specialized text notation of sign
            language we have discussed but currently placed out of
            scope, a very special rendering application is needed. The
            modality would be recognized by a script subtag to a sign
            language tag used in text media. However, I think that would
            be best to also use it with a specific text subtype, so that
            the rendering can be controlled by invocation of a "codec"
            for that rendering.<span class=""><br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex">
                <br>
                And if there is something like this, will the UA be able
                to do this generically based on whether the media is
                sign language or not, or will the UA need to already
                understand *specific* sign language tags?<br>
              </blockquote>
            </span>
            &lt;GH&gt;Applications will need to have localized versions
            of the names for the different sign languages and also for
            spoken languages and written languages, to be used in
            setting of preferences and announcing the results of the
            negotiation. It might be overkill to have such localized
            names for all languages in the IANA language registry, so it
            will need to be able to handle localized names of a subset
            of the registry. With good design, however, this is just an
            automatic translation between a language tag and a
            corresponding name, so it does in fact not require any
            "knowledge" of what modality is used with each language tag.<br>
            The application can ask for the configuration:<br>
            "Which languages do you want to offer to send in video"<br>
            "Which languages do you want to offer to send in text"<br>
            "Which languages do you want to offer to send in audio"<br>
            "Which languages do you want to be prepared to receive in
            video"<br>
            "Which languages do you want to be prepared to receive in
            text"<br>
            "Which languages do you want to be prepared to receive in
            audio"<br>
            <br>
            And for each question provide a list of language names to
            select from. When the selection is made, the corresponding
            language tag is placed in the profile for negotiation.<br>
            <br>
            If the application provides the whole IANA language registry
            to the user for each question, then there is a possibility
            that the user by mistake selects a language that requires
            another modality than the question was about. If the
            application shall limit the lists provided for each
            question, then it will need a kind of knowledge about which
            language tags suit each modality (and media)<span class=""><br>
              <br>
              <br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex">
                <br>
                E.g., A UA serving a deaf person might automatically
                introduce a sign language interpreter into an incoming
                audio-only call. If the incoming call has both audio and
                video then the video *might* be for conveying sign
                language, or not. If not then the UA will still want to
                bring in a sign language interpreter. But is knowing the
                call generically contains sign language sufficient to
                decide against bringing in an interpreter? Or must that
                depend on it being a sign language that the user can
                use? If the UA is configured for all the specific sign
                languages that the user can deal with then there is no
                need to recognize other sign languages generically.<br>
              </blockquote>
            </span>
            &lt;GH&gt;We are talking about specific language tags here
            and knowing what modality they are used for. The user needs
            to specify which sign languages they prefer to use. The
            callee application can be made to look for gaps between what
            the caller offers and what the callee can accept, and from
            that deduce which type and languages of conversion are
            needed, and invoke that as a relay service. That
            invocation can be made completely table driven and have
            corresponding translation profiles for available relay
            services. But it is more likely that it is done by having
            some knowledge about which languages are sign languages and
            which are spoken languages and sending the call to the relay
            service to try to sort out if they can handle the
            translation.<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">
              <br>
              <br>
            </blockquote>
            So, the answer is: no, the application does not really have
            any knowledge about which modality a language tag represents
            in its used position. If the user selects very rare language
            tag indications for a media, a match will just become very
            unlikely.<br>
            <br>
            Where does this discussion take us? Should we modify section
            5.4 again?<br>
            <br>
            Thanks<span class="HOEnZb"><font color="#888888"><br>
                Gunnar</font></span>
            <div class="HOEnZb">
              <div class="h5"><br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      Thanks,<br>
                      Paul<br>
                  <br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                          5.4<br>
                    &lt;<a
href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4"
                      rel="noreferrer" target="_blank"
                      moz-do-not-send="true">https://tools.ietf.org/html/d<wbr>raft-ietf-slim-negotiating-hum<wbr>an-language-17#section-5.4</a>&gt;.<br>
                          Undefined Combinations<br>
                    <br>
                    <br>
                    <br>
                        The behavior when specifying a non-signed
                    language tag for a video<br>
                        media stream, or a signed language tag for an
                    audio or text media<br>
                        stream, is not defined in this document.<br>
                    <br>
                        The problem of knowing which language tags are
                    signed and which are<br>
                        not is out of scope of this document.<br>
                    <br>
                    <br>
                    <br>
                    On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat &lt;<a
                      href="mailto:pkyzivat@alum.mit.edu"
                      target="_blank" moz-do-not-send="true">pkyzivat@alum.mit.edu</a>
                    &lt;mailto:<a href="mailto:pkyzivat@alum.mit.edu"
                      target="_blank" moz-do-not-send="true">pkyzivat@alum.mit.edu</a>&gt;<wbr>&gt;
                    wrote:<br>
                    <br>
                        On 10/15/17 2:24 AM, Gunnar Hellström wrote:<br>
                    <br>
                            Paul,<br>
                            Den 2017-10-15 kl. 01:19, skrev Paul
                    Kyzivat:<br>
                    <br>
                                On 10/14/17 2:03 PM, Bernard Aboba
                    wrote:<br>
                    <br>
                                    Gunnar said:<br>
                    <br>
                                    "Applications not implementing such
                    specific notations<br>
                                    may use the following simple
                    deductions.<br>
                    <br>
                                    - A language tag in audio media is
                    supposed to indicate<br>
                                    spoken modality.<br>
                    <br>
                                    [BA] Even a tag with "Sign Language"
                    in the description??<br>
                    <br>
                                    - A language tag in text media is
                    supposed to indicate                 written
                    modality.<br>
                    <br>
                                    [BA] If the tag has "Sign Language"
                    in the description,<br>
                                    can this document really say that?<br>
                    <br>
                                    - A language tag in video media is
                    supposed to indicate<br>
                                    visual sign language modality except
                    for the case when<br>
                                    it is supposed to indicate a view of
                    a speaking person<br>
                                    mentioned in section 5.2
                    characterized by the exact same<br>
                                    language tag also appearing in an
                    audio media specification.<br>
                    <br>
                                    [BA] It seems like an over-reach to
                    say that a spoken<br>
                                    language tag in video media should
                    instead be<br>
                                    interpreted as a request for Sign
                    Language.  If this<br>
                                    were done, would it always be clear
                    which Sign Language<br>
                                    was intended?  And could we really
                    assume that both<br>
                                    sides, if negotiating a spoken
                    language tag in video<br>
                                    media, were really indicating the
                    desire to sign?  It<br>
                                    seems like this could easily result
                    interoperability<br>
                                    failure.<br>
                    <br>
                    <br>
                                IMO the right way to indicate that two
                    (or more) media<br>
                                streams are conveying alternative
                    representations of the<br>
                                same language content is by grouping
                    them with a new<br>
                                grouping attribute. That can tie
                    together an audio with a<br>
                                video and/or text. A language tag for
                    sign language on the<br>
                                video stream then clarifies to the
                    recipient that it is sign<br>
                                language. The grouping attribute by
                    itself can indicate that<br>
                                these streams are conveying language.<br>
                    <br>
                            &lt;GH&gt;Yes, and that is proposed in<br>
                            draft-hellstrom-slim-modality-<wbr>grouping   
                    with two kinds of<br>
                            grouping: One kind of grouping to tell that
                    two or more<br>
                            languages in different streams are
                    alternatives with the same<br>
                            content and a priority order is assigned to
                    them to guide the<br>
                            selection of which one to use during the
                    call. The other kind of<br>
                            grouping telling that two or more languages
                    in different streams<br>
                            are desired together with the same language
                    content but<br>
                            different modalities ( such as the use for
                    captioned telephony<br>
                            with the same content provided in both
                    speech and text, or sign<br>
                            language interpretation where you see the
                    interpreter, or<br>
                            possibly spoken language interpretation with
                    the languages<br>
                            provided in different audio streams ). I
                    hope that that draft<br>
                            can be progressed. I see it as a needed
                    complement to the pure<br>
                            language indications per media.<br>
                    <br>
                    <br>
                        Oh, sorry. I did read that draft but forgot
                    about it.<br>
                    <br>
                            The discussion in this thread is more about
                    how an application<br>
                            would easily know that e.g. "ase" is a sign
                    language and "en" is<br>
                            a spoken (or written) language, and also a
                    discussion about what<br>
                            kinds of languages are allowed and indicated
                    by default in each<br>
                            media type. It was not at all about falsely
                    using language tags<br>
                            in the wrong media type as Bernard
                    understood my wording. It was<br>
                            rather a limitation to what modalities are
                    used in each media<br>
                            type and how to know the modality with cases
                    that are not<br>
                            evident, e.g. "application" and "message"
                    media types.<br>
                    <br>
                    <br>
                        What do you mean by "know"? Is it for the *UA*
                    software to know, or<br>
                        for the human user of the UA to know? Presumably
                    a human user that<br>
                        cares will understand this if presented with the
                    information in some<br>
                        way. But typically this isn't presented to the
                    user.<br>
                    <br>
                        For the software to know must mean that it will
                    behave differently<br>
                        for a tag that represents a sign language than
                    for one that<br>
                        represents a spoken or written language. What is
                    it that it will do<br>
                        differently?<br>
                    <br>
                                 Thanks,<br>
                                 Paul<br>
                    <br>
                    <br>
                            Right now we have returned to a very simple
                    rule: we define only<br>
                            use of spoken language in audio media,
                    written language in text<br>
                            media and sign language in video media.<br>
                            We have discussed other use, such as a view
                    of a speaking person<br>
                            in video, text overlay on video, a sign
                    language notation in<br>
                            text media, written language in message
                    media, written language<br>
                            in WebRTC data channels, sign written and
                    spoken in bucket media<br>
                            maybe declared as application media. We do
                    not define these<br>
                            cases. They are just not defined, not
                    forbidden. They may be<br>
                            defined in the future.<br>
                    <br>
                            My proposed wording in section 5.4 got too
                    many<br>
                            misunderstandings so I gave up with it. I
                    think we can live with<br>
                            5.4 as it is in version -16.<br>
                    <br>
                            Thanks,<br>
                            Gunnar<br>
                    <br>
                    <br>
                    <br>
                                (IIRC I suggested something along these
                    lines a long time ago.)<br>
                    <br>
                                     Thanks,<br>
                                     Paul<br>
                    <br>
                                ______________________________<wbr>_________________<br>
                                SLIM mailing list<br>
                                <a href="mailto:SLIM@ietf.org"
                      target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>
                    &lt;mailto:<a href="mailto:SLIM@ietf.org"
                      target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                                <a
                      href="https://www.ietf.org/mailman/listinfo/slim"
                      rel="noreferrer" target="_blank"
                      moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                                &lt;<a
                      href="https://www.ietf.org/mailman/listinfo/slim"
                      rel="noreferrer" target="_blank"
                      moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                    <br>
                    <br>
                    <br>
                        ______________________________<wbr>_________________<br>
                        SLIM mailing list<br>
                        <a href="mailto:SLIM@ietf.org" target="_blank"
                      moz-do-not-send="true">SLIM@ietf.org</a>
                    &lt;mailto:<a href="mailto:SLIM@ietf.org"
                      target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                        <a
                      href="https://www.ietf.org/mailman/listinfo/slim"
                      rel="noreferrer" target="_blank"
                      moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                        &lt;<a
                      href="https://www.ietf.org/mailman/listinfo/slim"
                      rel="noreferrer" target="_blank"
                      moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                    <br>
                    <br>
                  </blockquote>
                  <br>
                </blockquote>
                <br>
              </div>
            </div>
            <div class="HOEnZb">
              <div class="h5">
                -- <br>
                ------------------------------<wbr>-----------<br>
                Gunnar Hellström<br>
                Omnitor<br>
                <a href="mailto:gunnar.hellstrom@omnitor.se"
                  target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><br>
                <a href="tel:%2B46%20708%20204%20288"
                  value="+46708204288" target="_blank"
                  moz-do-not-send="true">+46 708 204 288</a><br>
                <br>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------C331B49A7F658633C5C6AB18--


From nobody Tue Oct 17 02:03:14 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id CB0BB1332D7 for <slim@ietfa.amsl.com>; Tue, 17 Oct 2017 02:03:13 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id XTqh6dSD35wH for <slim@ietfa.amsl.com>; Tue, 17 Oct 2017 02:03:09 -0700 (PDT)
Received: from bin-vsp-out-03.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id D22D113331E for <slim@ietf.org>; Tue, 17 Oct 2017 02:03:04 -0700 (PDT)
X-Halon-ID: f5a29f73-b319-11e7-83a9-0050569116f7
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-03.atm.binero.net (Halon) with ESMTPSA id f5a29f73-b319-11e7-83a9-0050569116f7; Tue, 17 Oct 2017 11:02:54 +0200 (CEST)
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Paul Kyzivat <pkyzivat@alum.mit.edu>, slim@ietf.org
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se> <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com> <49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se>
Message-ID: <7d20fee8-fcb0-1f50-049b-82f0c2491f50@omnitor.se>
Date: Tue, 17 Oct 2017 11:02:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se>
Content-Type: multipart/alternative; boundary="------------AFBAE38C0049BF1C725DFCC7"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/kxN_VHGMMjqRakYKwxxP2fDZ5gc>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 17 Oct 2017 09:03:14 -0000

This is a multi-part message in MIME format.
--------------AFBAE38C0049BF1C725DFCC7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

An even more general way to express what section 5.4 tries to say is:

------------------------------------------------------------------------------------------------------------------------------------------

5.4 Combinations of Language Tags and Media Descriptions

Language tags and other information in the media descriptions should be 
combined so that the intended modality can be deduced by the negotiating 
parties.


----------------------------------------------------------------------------------------

That spares us from investigating what is possible today, and what 
further attributes or coding rules may be added in the future.

There is a risk that implementers start using some insufficient coding, 
which can cause interop issues. On the other hand, we do not need to limit 
valid uses that we simply have not thought of by declaring specific 
combinations out of scope or undefined. It is up to implementers to check 
that the combinations they use result in an unambiguous modality.

It also opens the door to possible new attributes, e.g. a=modality:spoken 
or a=modality:written, to complement the undefined case where a 
non-signed language tag without a script subtag is used in video media, 
and to explain any use of m=application or m=message media in 
interactive communication.
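To make the idea concrete, here is a minimal sketch of how an answerer could read such a hypothetical attribute. Note the assumptions: a=modality is only the proposal above, not part of the draft; the SDP fragment and function name are invented for illustration, with only the hlang-send line following the draft's language-negotiation attribute.

```python
# Sketch: reading the hypothetical "a=modality" attribute from an SDP
# media section. "a=modality" is only the proposal in this mail; the
# hlang-send line follows the SLIM draft's attribute, the rest is invented.

SDP_VIDEO = """\
m=video 49170 RTP/AVP 99
a=hlang-send:en
a=modality:written
"""

def modality_of(media_section):
    """Return the value of a=modality in a media section, or None."""
    for line in media_section.splitlines():
        if line.startswith("a=modality:"):
            return line.split(":", 1)[1].strip()
    return None

print(modality_of(SDP_VIDEO))  # written
```

An answerer that finds no a=modality line would fall back to whatever default the media type implies, which is exactly the ambiguity the attribute is meant to remove.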

It does not directly answer Issue #43 by explaining HOW to assess the 
modality, but it does require implementers to make sure that assessment 
is possible.

And deducing the intended modality is the key to successful negotiation 
and communication.

Do you think this would be clear enough, or do we need to spell out the 
clear cases we have?

Gunnar



Den 2017-10-17 kl. 00:21, skrev Gunnar Hellström:
> Den 2017-10-16 kl. 01:21, skrev Bernard Aboba:
>> Paul said:
>>
>> ""- can the UA use this information to change how to render the media?"
>>
>> [BA] If the video is used for signing, an application might infer an 
>> encoder preference for frame rate over resolution (e.g. in WebRTC, 
>> RTCRtpParameters.degradationPreference = "maintain-framerate" )
> <GH>Right, that is a valid example of how real "knowledge" of the 
> modality can be used by the application.
>
>
> And, as a response to issue #43,
>
> A simple way is to say
>
> Video media descriptions shall only contain sign language tags
> Audio media descriptions shall only contain language tags for spoken 
> language
> Text media descriptions shall only contain language tags for written 
> language
> Use of other media descriptions, such as message and application, with 
> language indications requires other specifications on how to assess the 
> modality for non-signed languages.
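The strict per-media rule quoted above can be sketched as a lookup table. The mapping is this mail's proposal, not agreed draft text; media types outside audio/text/video map to None because their modality needs a separate specification:

```python
# Sketch of the strict per-media rule quoted above. The mapping is the
# proposal in this mail, not agreed draft text; media types outside
# audio/text/video map to None because their modality needs a separate spec.

DEFAULT_MODALITY = {
    "audio": "spoken",
    "text": "written",
    "video": "signed",
}

def default_modality(media_type):
    """Modality implied by the media type alone, or None if undefined."""
    return DEFAULT_MODALITY.get(media_type)

print(default_modality("video"))    # signed
print(default_modality("message"))  # None
```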
>
> The current 5.4 does not mention our main problem with the language 
> tags: that there is no difference in them whether we mean use for 
> spoken language or written language. We should have made better 
> efforts to solve that problem long ago, but we have not.
>
> 5.4 can be modified to specify the simple limited case and the 
> problems that block us from specifying other cases:
>
>
>       5.4
>       <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>.
>       Media and modality Combination problems
>
>
>
>
>     The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.
>
>     The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.
>
>     The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.
>
>     Which language tags are signed and which are not can be deduced
>     from the IANA language tag registry. How this is done is out of scope of this document.
>
> --------------------------------------------------------
>
>
> But if we want to allow more cases, we need to consider the following 
> complications:
>
>
> 1. To assess whether a language tag represents a sign language, the 
> application can look for the word "sign" in its description in the 
> IANA language registry, or a copy thereof, as Randall already indicated.
>
> 2. For written languages used as a text component in a video stream, 
> it is possible to code this for languages requiring a script subtag, 
> but not for languages with suppressed script subtags.
>
> 3. We have also discussed proposals for how to code written language 
> in a video stream for languages not requiring a script subtag, but did 
> not get acceptance for our proposals. So we need to say that this is 
> currently undefined.
>
> 4. We also discussed how to code a view of a speaking person in video 
> and said that it could be done by using the "definitively not 
> written" script subtag on a non-signed language tag in video. But that 
> was not appreciated by the language experts. Another option was to not 
> allow written language overlaid on video, and that is the option used 
> lately (up to version -16 or so).
>
> 5. For spoken language in audio media, that is the only case we have 
> for language tags in audio. So it is easy to code and assess.
>
> 6. For written language in text media, a check can be made as to 
> whether "sign" is part of the language tag description; if not, it is 
> a written language.
>
> 7. For signed language in text media, a check can be made as to 
> whether "sign" is part of the language tag description; if it is, it 
> is a signed language in text notation (extremely unusual).
>
> 8. For use of language tags in other media than audio, video and 
> text, a description of how to assess the modality is needed, 
> especially for non-signed languages, before such use.
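The registry check in points 1, 6 and 7 can be sketched as below. The entries are a tiny hand-copied excerpt of the IANA Language Subtag Registry; a real application would parse the full registry file rather than hard-code descriptions:

```python
# Sketch of the "look for 'sign' in the description" check (points 1, 6
# and 7 above). The entries below are a tiny hand-copied excerpt of the
# IANA Language Subtag Registry; a real application would parse the full
# registry file rather than hard-code descriptions.

REGISTRY_DESCRIPTIONS = {
    "en":  "English",
    "fr":  "French",
    "ase": "American Sign Language",
    "swl": "Swedish Sign Language",
}

def is_sign_language(tag, registry=REGISTRY_DESCRIPTIONS):
    """True if the primary subtag's registry description mentions 'sign'."""
    primary = tag.split("-")[0].lower()
    return "sign" in registry.get(primary, "").lower()

print(is_sign_language("ase"))    # True
print(is_sign_language("en-US"))  # False
```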
>
>
> We can construct a section 5.4 to describe this situation, but I doubt 
> that it is worth the effort.
>
>
>>
>> See: 
>> https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference
>>
>> On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellström 
>> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>>
>>     Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:
>>
>>         On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>
>>             Paul said:
>>
>>             "For the software to know must mean that it will behave
>>             differently for a tag that represents a sign language
>>             than for one that represents a spoken or written
>>             language. What is it that it will do differently?"
>>
>>             [BA] In terms of behavior based on the signed/non-signed
>>             distinction, in -17 the only reference appears to be in
>>             Section 5.4, stating that certain combinations are not
>>             defined in the document (but that definition of those
>>             combinations was out of scope):
>>
>>
>>         I'm asking whether this is a distinction without a
>>         difference. I'm not asking whether this makes a difference in
>>         the *protocol*, but whether in the end it benefits the
>>         participants in the call in any way.
>>
>>     <GH>Good point, I was on my way to make a similar comment earlier
>>     today. The difference it makes for applications to "know" what
>>     modality a language tag represents in its used position seems to
>>     be only for imagined functions that are out of scope for the
>>     protocol specification.
>>
>>         For instance:
>>
>>         - does it help the UA to decide how to alert the callee, so
>>         that the
>>           callee can better decide whether to accept the call or
>>         instruct the
>>           UA about how to handle the call?
>>
>>     <GH>Yes, for a regular human user-to-user call, the result of
>>     the negotiation must be presented to the participants, so that
>>     they can start the call with a language and modality that is agreed.
>>     That presentation could be exactly the description from the
>>     language tag registry, and then no "knowledge" is needed from the
>>     application. But it is more likely that the application has its
>>     own string for presentation of the negotiated language and
>>     modality. So that will be presented. But it is still found by a
>>     table lookup between language tag and string for a language name,
>>     so no real knowledge is needed.
>>     We have said many times that the way the application tells the
>>     user the result of the negotiation is out of scope for the draft,
>>     but it is good to discuss and know that it can be done.
>>     A similar mechanism is also needed for configuration of the
>>     user's language preference profile further discussed below.
>>
>>
>>         - does it allow the UA to make a decision whether to accept
>>         the media?
>>
>>     <GH>No, the media should be accepted regardless of the result of
>>     the language negotiation.
>>
>>
>>         - can the UA use this information to change how to render the
>>         media?
>>
>>     <GH>Yes, for the specialized text notation of sign language we
>>     have discussed but currently placed out of scope, a very special
>>     rendering application is needed. The modality would be recognized
>>     by a script subtag to a sign language tag used in text media.
>>     However, I think that would be best to also use it with a
>>     specific text subtype, so that the rendering can be controlled by
>>     invocation of a "codec" for that rendering.
>>
>>
>>         And if there is something like this, will the UA be able to
>>         do this generically based on whether the media is sign
>>         language or not, or will the UA need to already understand
>>         *specific* sign language tags?
>>
>>     <GH>Applications will need to have localized versions of the
>>     names for the different sign languages and also for spoken
>>     languages and written languages, to be used in setting of
>>     preferences and announcing the results of the negotiation. It
>>     might be overkill to have such localized names for all languages
>>     in the IANA language registry, so it will need to be able to
>>     handle localized names of a subset of the registry. With good
>>     design however, this is just an automatic translation between a
>>     language tag and a corresponding name, so it does in fact not
>>     require any "knowledge" of what modality is used with each
>>     language tag.
>>     The application can ask for the configuration:
>>     "Which languages do you want to offer to send in video"
>>     "Which languages do you want to offer to send in text"
>>     "Which languages do you want to offer to send in audio"
>>     "Which languages do you want to be prepared to receive in video"
>>     "Which languages do you want to be prepared to receive in text"
>>     "Which languages do you want to be prepared to receive in audio"
>>
>>     And for each question provide a list of language names to select
>>     from. When the selection is made, the corresponding language tag
>>     is placed in the profile for negotiation.
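The configuration flow described above can be sketched as a table-driven profile builder; the name-to-tag table and function names are invented for illustration, and a real application would cover its supported subset of the IANA registry:

```python
# Sketch of the configuration flow described above: the user picks
# language names per question, and the corresponding tags go into the
# negotiation profile. The name-to-tag table and structure are invented
# for illustration.

NAME_TO_TAG = {
    "American Sign Language": "ase",
    "English": "en",
    "Spanish": "es",
}

def build_profile(selections):
    """selections maps (direction, media) to language names in priority order."""
    return {key: [NAME_TO_TAG[name] for name in names]
            for key, names in selections.items()}

profile = build_profile({
    ("send", "video"): ["American Sign Language"],
    ("recv", "text"): ["English", "Spanish"],
})
print(profile[("recv", "text")])  # ['en', 'es']
```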
>>
>>     If the application provides the whole IANA language registry to
>>     the user for each question, then there is a possibility that the
>>     user by mistake selects a language that requires another modality
>>     than the question was about. If the application shall limit the
>>     lists provided for each question, then it will need a kind of
>>     knowledge about which language tags suit each modality (and media)
>>
>>
>>
>>         E.g., A UA serving a deaf person might automatically
>>         introduce a sign language interpreter into an incoming
>>         audio-only call. If the incoming call has both audio and
>>         video then the video *might* be for conveying sign language,
>>         or not. If not then the UA will still want to bring in a sign
>>         language interpreter. But is knowing the call generically
>>         contains sign language sufficient to decide against bringing
>>         in an interpreter? Or must that depend on it being a sign
>>         language that the user can use? If the UA is configured for
>>         all the specific sign languages that the user can deal with
>>         then there is no need to recognize other sign languages
>>         generically.
>>
>>     <GH>We are talking about specific language tags here and knowing
>>     what modality they are used for. The user needs to specify which
>>     sign languages they prefer to use. The callee application can be
>>     made to look for gaps between what the caller offers and what the
>>     callee can accept, and from that deduce which type and languages
>>     for a conversion that is needed, and invoke that as a relay
>>     service. That invocation can be made completely table driven and
>>     have corresponding translation profiles for available relay
>>     services. But it is more likely that it is done by having some
>>     knowledge about which languages are sign languages and which are
>>     spoken languages and sending the call to the relay service to try
>>     to sort out if they can handle the translation.
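The gap check and relay selection described above can be sketched as follows; the relay profile table is invented for illustration:

```python
# Sketch of the gap check described above: if the caller's and callee's
# languages have no direct match, look up a relay service whose
# translation profile bridges them. The relay table is invented for
# illustration.

RELAY_PROFILES = {
    ("ase", "en"): "sign-to-speech relay",
    ("en", "ase"): "speech-to-sign relay",
}

def pick_relay(caller_langs, callee_langs):
    """Return ((from_lang, to_lang), service) bridging the two sides, or None."""
    if set(caller_langs) & set(callee_langs):
        return None  # direct match, no relay needed
    for c in caller_langs:
        for d in callee_langs:
            if (c, d) in RELAY_PROFILES:
                return (c, d), RELAY_PROFILES[(c, d)]
    return None

print(pick_relay(["ase"], ["en"]))  # (('ase', 'en'), 'sign-to-speech relay')
```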
>>
>>
>>
>>     So, the answer is - no, the application does not really have any
>>     knowledge about which modality a language tag represents in its
>>     used position. If the user selects very rare language tag
>>     indications for a medium, then a match will just become very
>>     unlikely.
>>
>>     Where does this discussion take us? Should we modify section 5.4
>>     again?
>>
>>     Thanks
>>     Gunnar
>>
>>             Thanks,
>>             Paul
>>
>>                   5.4
>>             <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4
>>             <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>>.
>>                   Undefined Combinations
>>
>>
>>
>>                 The behavior when specifying a non-signed language
>>             tag for a video
>>                 media stream, or a signed language tag for an audio
>>             or text media
>>                 stream, is not defined in this document.
>>
>>                 The problem of knowing which language tags are signed
>>             and which are
>>                 not is out of scope of this document.
>>
>>
>>
>>             On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
>>             <pkyzivat@alum.mit.edu <mailto:pkyzivat@alum.mit.edu>
>>             <mailto:pkyzivat@alum.mit.edu
>>             <mailto:pkyzivat@alum.mit.edu>>> wrote:
>>
>>                 On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>
>>                     Paul,
>>                     Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>
>>                         On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>
>>                             Gunnar said:
>>
>>                             "Applications not implementing such
>>             specific notations
>>                             may use the following simple deductions.
>>
>>                             - A language tag in audio media is
>>             supposed to indicate
>>                             spoken modality.
>>
>>                             [BA] Even a tag with "Sign Language" in
>>             the description??
>>
>>                             - A language tag in text media is
>>             supposed to indicate                 written modality.
>>
>>                             [BA] If the tag has "Sign Language" in
>>             the description,
>>                             can this document really say that?
>>
>>                             - A language tag in video media is
>>             supposed to indicate
>>                             visual sign language modality except for
>>             the case when
>>                             it is supposed to indicate a view of a
>>             speaking person
>>                             mentioned in section 5.2 characterized by
>>             the exact same
>>                             language tag also appearing in an audio
>>             media specification.
>>
>>                             [BA] It seems like an over-reach to say
>>             that a spoken
>>                             language tag in video media should instead be
>>                             interpreted as a request for Sign
>>             Language.  If this
>>                             were done, would it always be clear which
>>             Sign Language
>>                             was intended?  And could we really assume
>>             that both
>>                             sides, if negotiating a spoken language
>>             tag in video
>>                             media, were really indicating the desire
>>             to sign?  It
>>                             seems like this could easily result
>>             interoperability
>>                             failure.
>>
>>
>>                         IMO the right way to indicate that two (or
>>             more) media
>>                         streams are conveying alternative
>>             representations of the
>>                         same language content is by grouping them
>>             with a new
>>                         grouping attribute. That can tie together an
>>             audio with a
>>                         video and/or text. A language tag for sign
>>             language on the
>>                         video stream then clarifies to the recipient
>>             that it is sign
>>                         language. The grouping attribute by itself
>>             can indicate that
>>                         these streams are conveying language.
>>
>>                     <GH>Yes, and that is proposed in
>>                     draft-hellstrom-slim-modality-grouping with two
>>             kinds of
>>                     grouping: One kind of grouping to tell that two
>>             or more
>>                     languages in different streams are alternatives
>>             with the same
>>                     content and a priority order is assigned to them
>>             to guide the
>>                     selection of which one to use during the call.
>>             The other kind of
>>                     grouping telling that two or more languages in
>>             different streams
>>                     are desired together with the same language
>>             content but
>>                     different modalities ( such as the use for
>>             captioned telephony
>>                     with the same content provided in both speech and
>>             text, or sign
>>                     language interpretation where you see the
>>             interpreter, or
>>                     possibly spoken language interpretation with the
>>             languages
>>                     provided in different audio streams ). I hope
>>             that that draft
>>                     can be progressed. I see it as a needed
>>             complement to the pure
>>                     language indications per media.
>>
>>
>>                 Oh, sorry. I did read that draft but forgot about it.
>>
>>                     The discussion in this thread is more about how
>>             an application
>>                     would easily know that e.g. "ase" is a sign
>>             language and "en" is
>>                     a spoken (or written) language, and also a
>>             discussion about what
>>                     kinds of languages are allowed and indicated by
>>             default in each
>>                     media type. It was not at all about falsely using
>>             language tags
>>                     in the wrong media type as Bernard understood my
>>             wording. It was
>>                     rather a limitation to what modalities are used
>>             in each media
>>                     type and how to know the modality with cases that
>>             are not
>>                     evident, e.g. "application" and "message" media
>>             types.
>>
>>
>>                 What do you mean by "know"? Is it for the *UA*
>>             software to know, or
>>                 for the human user of the UA to know? Presumably a
>>             human user that
>>                 cares will understand this if presented with the
>>             information in some
>>                 way. But typically this isn't presented to the user.
>>
>>                 For the software to know must mean that it will
>>             behave differently
>>                 for a tag that represents a sign language than for
>>             one that
>>                 represents a spoken or written language. What is it
>>             that it will do
>>                 differently?
>>
>>                          Thanks,
>>                          Paul
>>
>>
>>                     Right now we have returned to a very simple rule:
>>             we define only
>>                     use of spoken language in audio media, written
>>             language in text
>>                     media and sign language in video media.
>>                     We have discussed other use, such as a view of a
>>             speaking person
>>                     in video, text overlay on video, a sign language
>>             notation in
>>                     text media, written language in message media,
>>             written language
>>                     in WebRTC data channels, sign written and spoken
>>             in bucket media
>>                     maybe declared as application media. We do not
>>             define these
>>                     cases. They are just not defined, not forbidden.
>>             They may be
>>                     defined in the future.
>>
>>                     My proposed wording in section 5.4 got too many
>>                     misunderstandings so I gave up with it. I think
>>             we can live with
>>                     5.4 as it is in version -16.
>>
>>                     Thanks,
>>                     Gunnar
>>
>>
>>
>>                         (IIRC I suggested something along these lines
>>             a long time ago.)
>>
>>                              Thanks,
>>                              Paul
>>
>>                         _______________________________________________
>>                         SLIM mailing list
>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>             <mailto:SLIM@ietf.org <mailto:SLIM@ietf.org>>
>>             https://www.ietf.org/mailman/listinfo/slim
>>             <https://www.ietf.org/mailman/listinfo/slim>
>>                         <https://www.ietf.org/mailman/listinfo/slim
>>             <https://www.ietf.org/mailman/listinfo/slim>>
>>
>>
>>
>>                 _______________________________________________
>>                 SLIM mailing list
>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>             <mailto:SLIM@ietf.org <mailto:SLIM@ietf.org>>
>>             https://www.ietf.org/mailman/listinfo/slim
>>             <https://www.ietf.org/mailman/listinfo/slim>
>>                 <https://www.ietf.org/mailman/listinfo/slim
>>             <https://www.ietf.org/mailman/listinfo/slim>>
>>
>>
>>
>>
>>     -- 
>>     -----------------------------------------
>>     Gunnar Hellström
>>     Omnitor
>>     gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>
>>     +46 708 204 288 <tel:%2B46%20708%20204%20288>
>>
>>
>
> -- 
> -----------------------------------------
> Gunnar Hellström
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------AFBAE38C0049BF1C725DFCC7
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>An even more general way to express what section 5.4 tries to say
      is:</p>
    <p>------------------------------------------------------------------------------------------------------------------------------------------<br>
    </p>
    <p>5.4 Combinations of Language Tags and Media Descriptions</p>
    <p>Language tags and other information in the media descriptions
      should be combined so that the intended modality can be deduced by
      the negotiating parties.</p>
    <p><br>
    </p>
    <p>
----------------------------------------------------------------------------------------</p>
    <p>That spares us from investigating what is possible today, and
      what further attributes or coding rules may be added in the
      future. <br>
    </p>
    <p>There is a risk that implementers start using some insufficient
      coding, which can cause interop issues. On the other hand, we do
      not need to limit valid uses that we simply have not thought of by
      declaring specific combinations out of scope or undefined. It is
      up to implementers to check that the combinations they use result
      in an unambiguous modality.<br>
    </p>
    <p>It also opens the door to possible new attributes, e.g.
      a=modality:spoken or a=modality:written, to complement the
      undefined case where a non-signed language tag without a script
      subtag is used in video media, and to explain any use of
      m=application or m=message media in interactive communication. <br>
    </p>
    <p>It does not directly answer Issue #43 by explaining HOW to assess
      the modality, but it does require implementers to make sure that
      assessment is possible.</p>
    <p>And deducing the intended modality is the key to successful
      negotiation and communication.<br>
    </p>
    <p>Do you think this would be clear enough, or do we need to spell
      out the clear cases we have?<br>
    </p>
    <p>Gunnar<br>
    </p>
    <p><br>
    </p>
    <br>
    <div class="moz-cite-prefix">Den 2017-10-17 kl. 00:21, skrev Gunnar
      Hellström:<br>
    </div>
    <blockquote type="cite"
      cite="mid:49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      Den 2017-10-16 kl. 01:21, skrev Bernard Aboba:<br>
      <blockquote type="cite"
cite="mid:CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com">
        <div dir="ltr">Paul said: 
          <div><br>
          </div>
          <div>""<span style="color:rgb(80,0,80);font-size:12.8px">- can
              the UA use this information to change how to render the
              media?"</span></div>
          <div><span style="color:rgb(80,0,80);font-size:12.8px"><br>
            </span></div>
          <div><span style="color:rgb(80,0,80);font-size:12.8px">[BA] 
              If the video is used for signing, an application might
              infer an encoder preference for frame rate over resolution
              (e.g. in WebRTC, RTCRtpParameters.degradationPreference =
              "maintain-framerate" )</span></div>
        </div>
      </blockquote>
      &lt;GH&gt;Right, that is a valid example of how real "knowledge"
      of the modality can be used by the application. <br>
      <br>
      <br>
      And, as a response to issue #43,<br>
      <br>
      A simple way is to say<br>
      <br>
      Video media descriptions shall only contain sign language tags<br>
      Audio media descriptions shall only contain language tags for
      spoken language<br>
      Text media descriptions shall only contain language tags for
      written language<br>
      Use of other media descriptions, such as message and application,
      with language indications requires other specifications on how to
      assess the modality for non-signed languages.<br>
      <br>
      The current 5.4 does not mention our main problem with the
      language tags: that there is no difference in them whether we mean
      use for spoken language or written language. We should have made
      better efforts to solve that problem long ago, but we have not.<br>
      <br>
      5.4 can be modified to specify the simple limited case and the
      problems that block us from specifying other cases:<br>
      <br>
      <pre class="newpage" style="font-size: 13.3333px; margin-top: 0px; margin-bottom: 0px; break-before: page; color: rgb(0, 0, 0); font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-style: initial; text-decoration-color: initial;"><span class="h3" style="line-height: 0pt; display: inline; white-space: pre; font-family: monospace; font-size: 1em; font-weight: bold;"><h3 style="line-height: 0pt; display: inline; white-space: pre; font-family: monospace; font-size: 1em; font-weight: bold;"><a class="selflink" name="section-5.4" href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4" style="color: black; text-decoration: none;" moz-do-not-send="true">5.4</a>.  Media and modality Combination problems</h3></span>


   The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.

   The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.

   The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.

   Which language tags are signed and which are not can be deduced
   from the IANA language tag registry. How this is done is out of scope of this document.
</pre>
      <br>
      --------------------------------------------------------<br>
      <br>
      <br>
      But if we want to allow more cases, we need to consider the
      following complications:<br>
        <br>
      <br>
      1. To assess whether a language tag represents a sign language,
      the application can look for the word "sign" in its description in
      the IANA language registry, or a copy thereof, as Randall already
      indicated. <br>
      <br>
      2. For written languages used as a text component in a video
      stream, it is possible to code this for languages requiring a
      script subtag, but not for languages with suppressed script
      subtags. <br>
      <br>
      3. We have also discussed proposals for how to code written
      language in a video stream for languages not requiring a script
      subtag, but did not get acceptance for our proposals. So we need
      to say that this is currently undefined.<br>
      <br>
      4. We also discussed how to code a view of a speaking person in
      video and said that it could be done by using the "definitively
      not written" script subtag on a non-signed language tag in video.
      But that was not appreciated by the language experts. Another
      option was to not allow written language overlaid on video, and
      that is the option used most recently (up to version -16 or so). <br>
      <br>
      5. For spoken language between talking and hearing users, audio is
      the only media in which we use language tags. So that is easy to
      code and assess.<br>
      <br>
      6. For written language in text media, a check can be made
      whether "sign" is part of the language tag description; if not,
      it is a written language. <br>
      <br>
      7. For signed language in text media, a check can be made whether
      "sign" is part of the language tag description; if it is, it is a
      signed language in text notation. (extremely unusual)<br>
      <br>
      8. For use of language tags in other media than audio, video and
      text, a description is needed of how to assess the modality,
      especially for non-signed languages, before such use is defined.<br>
      <br>
      <br>
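      Point 1 above can be sketched in a few lines. This is a minimal,
      illustrative Python sketch only: the record layout follows the
      registry syntax of RFC 5646 (fields of the form "Name: value",
      records separated by "%%"), but the two-record sample and the
      helper names are my own, and continuation lines in multi-line
      Description fields are ignored for brevity.<br>
      <br>

```python
# Minimal sketch: classify a language subtag as signed or not by checking
# whether "sign" appears in its Description in the IANA registry.
# REGISTRY_SAMPLE is an illustrative two-record excerpt, not the real file.

REGISTRY_SAMPLE = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
"""

def parse_registry(text):
    """Map each language Subtag to its list of Description values."""
    records = {}
    for chunk in text.split("%%"):
        fields = {}
        for line in chunk.strip().splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields.setdefault(key.strip(), []).append(value.strip())
        if fields.get("Type") == ["language"] and "Subtag" in fields:
            records[fields["Subtag"][0]] = fields.get("Description", [])
    return records

def is_sign_language(subtag, registry):
    """True if any Description of the subtag contains the word 'sign'."""
    return any("sign" in d.lower() for d in registry.get(subtag, []))

registry = parse_registry(REGISTRY_SAMPLE)
print(is_sign_language("ase", registry))  # True
print(is_sign_language("en", registry))   # False
```

      As noted in the thread, this is a pure table lookup: no real
      "knowledge" of modality is built into the application.<br>
      <br>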
      We can construct a section 5.4 to describe this situation, but I
      doubt that it is worth the effort.<br>
      <br>
      <br>
      <blockquote type="cite"
cite="mid:CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com">
        <div dir="ltr">
          <div><span style="color:rgb(80,0,80);font-size:12.8px"><br>
            </span></div>
          <div><span style="color:rgb(80,0,80);font-size:12.8px">See:  </span><font
              color="#500050"><span style="font-size:12.8px"><a
href="https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference"
                  moz-do-not-send="true">https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference</a></span></font></div>
        </div>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Sun, Oct 15, 2017 at 2:22 PM,
            Gunnar Hellström <span dir="ltr">&lt;<a
                href="mailto:gunnar.hellstrom@omnitor.se"
                target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
                class="">Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:<br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> On
                  10/15/17 1:49 PM, Bernard Aboba wrote:<br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                    Paul said:<br>
                    <br>
                    "For the software to know must mean that it will
                    behave differently for a tag that represents a sign
                    language than for one that represents a spoken or
                    written language. What is it that it will do
                    differently?"<br>
                    <br>
                    [BA] In terms of behavior based on the
                    signed/non-signed distinction, in -17 the only
                    reference appears to be in Section 5.4, stating that
                    certain combinations are not defined in the document
                    (but that definition of those combinations was out
                    of scope):<br>
                  </blockquote>
                  <br>
                  I'm asking whether this is a distinction without a
                  difference. I'm not asking whether this makes a
                  difference in the *protocol*, but whether in the end
                  it benefits the participants in the call in any way. <br>
                </blockquote>
              </span> &lt;GH&gt;Good point, I was on my way to make a
              similar comment earlier today. The difference it makes for
              applications to "know" what modality a language tag
              represents in its used position seems to be only for
              imagined functions that are out of scope for the protocol
              specification.<span class=""><br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> For
                  instance:<br>
                  <br>
                  - does it help the UA to decide how to alert the
                  callee, so that the<br>
                    callee can better decide whether to accept the call
                  or instruct the<br>
                    UA about how to handle the call?<br>
                </blockquote>
              </span> &lt;GH&gt;Yes, for a regular human user-to-user
              call, the result of the negotiation must be presented to
              the participants, so that they can start the call with a
              language and modality that is agreed.<br>
              That presentation could be exactly the description from
              the language tag registry, and then no "knowledge" is
              needed from the application. But it is more likely that
              the application has its own string for presentation of the
              negotiated language and modality. So that will be
              presented. But it is still found by a table lookup between
              language tag and string for a language name, so no real
              knowledge is needed.<br>
              We have said many times that the way the application tells
              the user the result of the negotiation is out of scope for
              the draft, but it is good to discuss and know that it can
              be done.<br>
              A similar mechanism is also needed for configuration of
              the user's language preference profile further discussed
              below.<span class=""><br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                  - does it allow the UA to make a decision whether to
                  accept the media?<br>
                </blockquote>
              </span> &lt;GH&gt;No, the media should be accepted
              regardless of the result of the language negotiation.<span
                class=""><br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                  - can the UA use this information to change how to
                  render the media?<br>
                </blockquote>
              </span> &lt;GH&gt;Yes, for the specialized text notation
              of sign language we have discussed but currently placed
              out of scope, a very special rendering application is
              needed. The modality would be recognized by a script
              subtag to a sign language tag used in text media. However,
              I think it would be best to also use it with a specific
              text subtype, so that the rendering can be controlled by
              invocation of a "codec" for that rendering.<span class=""><br>
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                  And if there is something like this, will the UA be
                  able to do this generically based on whether the media
                  is sign language or not, or will the UA need to
                  already understand *specific* sign language tags?<br>
                </blockquote>
              </span> &lt;GH&gt;Applications will need to have localized
              versions of the names for the different sign languages and
              also for spoken languages and written languages, to be
              used in setting of preferences and announcing the results
              of the negotiation. It might be overkill to have such
              localized names for all languages in the IANA language
              registry, so it will need to be able to handle localized
              names of a subset of the registry. With good design
              however, this is just an automatic translation between a
              language tag and a corresponding name, so it does in fact
              not require any "knowledge" of what modality is used with
              each language tag.<br>
              The application can ask for the configuration:<br>
              "Which languages do you want to offer to send in video"<br>
              "Which languages do you want to offer to send in text"<br>
              "Which languages do you want to offer to send in audio"<br>
              "Which languages do you want to be prepared to receive in
              video"<br>
              "Which languages do you want to be prepared to receive in
              text"<br>
              "Which languages do you want to be prepared to receive in
              audio"<br>
              <br>
              And for each question provide a list of language names to
              select from. When the selection is made, the corresponding
              language tag is placed in the profile for negotiation.<br>
              <br>
              If the application provides the whole IANA language
              registry to the user for each question, then there is a
              possibility that the user by mistake selects a language
              that requires another modality than the question was
              about. If the application is to limit the lists provided
              for each question, then it will need some knowledge
              about which language tags suit each modality (and media).<span
                class=""><br>
                <br>
                <br>
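                The table-driven filtering described above could look
                roughly like this. A hypothetical sketch only: the
                localized-name table, the sign-language flag, and the
                audio/text-vs-video rule are illustrative assumptions,
                not anything the draft specifies.<br>
                <br>

```python
# Hypothetical sketch: per-media selection lists for the preference
# questions above. The application ships localized names for a subset of
# the IANA registry, each paired with a sign-language flag (illustrative
# data; in practice both could be derived from the registry itself).

LOCALIZED_NAMES = {  # tag -> (UI name for this locale, is_sign_language)
    "en":  ("English", False),
    "sv":  ("Swedish", False),
    "ase": ("American Sign Language", True),
    "ssp": ("Spanish Sign Language", True),
}

def choices_for_media(media):
    """Offer only tags whose modality suits the media: video gets signed
    languages, audio and text get non-signed ones."""
    want_sign = (media == "video")
    return {tag: name for tag, (name, is_sign) in LOCALIZED_NAMES.items()
            if is_sign == want_sign}

print(choices_for_media("video"))  # sign languages only
print(choices_for_media("audio"))  # spoken/written languages only
```

                When the user picks a name from such a list, the
                corresponding tag goes into the negotiation profile, so
                a mistaken cross-modality selection cannot occur.<br>
                <br>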
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                  E.g., A UA serving a deaf person might automatically
                  introduce a sign language interpreter into an incoming
                  audio-only call. If the incoming call has both audio
                  and video then the video *might* be for conveying sign
                  language, or not. If not then the UA will still want
                  to bring in a sign language interpreter. But is
                  knowing the call generically contains sign language
                  sufficient to decide against bringing in an
                  interpreter? Or must that depend on it being a sign
                  language that the user can use? If the UA is
                  configured for all the specific sign languages that
                  the user can deal with then there is no need to
                  recognize other sign languages generically.<br>
                </blockquote>
              </span> &lt;GH&gt;We are talking about specific language
              tags here and knowing what modality they are used for. The
              user needs to specify which sign languages they prefer to
              use. The callee application can be made to look for gaps
              between what the caller offers and what the callee can
              accept, and from that deduce which type of conversion and
              which languages are needed, and invoke that as a relay
              service. That invocation can be made completely table
              driven and have corresponding translation profiles for
              available relay services. But it is more likely that it is
              done by having some knowledge about which languages are
              sign languages and which are spoken languages and sending
              the call to the relay service to try to sort out if they
              can handle the translation.<br>
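              The gap-finding idea could be sketched as below. This is
              an illustrative sketch of my own, not part of the draft:
              the media/tag structure and the helper name are
              assumptions, and a real application would rank multiple
              candidates rather than pick the parties' first choices.<br>
              <br>

```python
# Illustrative sketch: compare what the caller offers with what the
# callee accepts, per media, and report a language pair that a relay
# service (e.g. sign-to-spoken interpretation) would need to bridge.

def needed_translation(offered, accepted):
    """offered/accepted: dict mapping media -> set of language tags.
    Returns None if a direct match exists, else a (caller_tag,
    callee_tag) pair for which a relay would be invoked."""
    direct = {m: offered.get(m, set()) & accepted.get(m, set())
              for m in offered}
    if any(direct.values()):
        return None  # a common language exists: no relay needed
    # No match: pair the parties' first-listed languages for the relay.
    caller = next(iter(next(iter(offered.values()))))
    callee = next(iter(next(iter(accepted.values()))))
    return (caller, callee)

offer = {"video": {"ase"}}   # caller signs American Sign Language
answer = {"audio": {"en"}}   # callee speaks English
print(needed_translation(offer, answer))  # ('ase', 'en') -> invoke relay
```

              Whether the relay lookup is fully table driven or the
              call is simply handed to a relay service to sort out is,
              as noted above, an implementation choice.<br>
              <br>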
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                <br>
              </blockquote>
              So, the answer is - no, the application does not really
              need any knowledge about which modality a language tag
              represents in its used position. If the user chooses very
              rare language tag indications for a media, then a match
              will just become very unlikely.<br>
              <br>
              Where does this discussion take us? Should we modify
              section 5.4 again?<br>
              <br>
              Thanks<span class="HOEnZb"><font color="#888888"><br>
                  Gunnar</font></span>
              <div class="HOEnZb">
                <div class="h5"><br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                        Thanks,<br>
                        Paul<br>
                    <br>
                    <blockquote class="gmail_quote" style="margin:0 0 0
                      .8ex;border-left:1px #ccc solid;padding-left:1ex">
                            5.4<br>
                      &lt;<a
href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4"
                        rel="noreferrer" target="_blank"
                        moz-do-not-send="true">https://tools.ietf.org/html/d<wbr>raft-ietf-slim-negotiating-hum<wbr>an-language-17#section-5.4</a>&gt;.<br>
                            Undefined Combinations<br>
                      <br>
                      <br>
                      <br>
                          The behavior when specifying a non-signed
                      language tag for a video<br>
                          media stream, or a signed language tag for an
                      audio or text media<br>
                          stream, is not defined in this document.<br>
                      <br>
                          The problem of knowing which language tags are
                      signed and which are<br>
                          not is out of scope of this document.<br>
                      <br>
                      <br>
                      <br>
                      On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
                      &lt;<a href="mailto:pkyzivat@alum.mit.edu"
                        target="_blank" moz-do-not-send="true">pkyzivat@alum.mit.edu</a>
                      &lt;mailto:<a href="mailto:pkyzivat@alum.mit.edu"
                        target="_blank" moz-do-not-send="true">pkyzivat@alum.mit.edu</a>&gt;<wbr>&gt;
                      wrote:<br>
                      <br>
                          On 10/15/17 2:24 AM, Gunnar Hellström wrote:<br>
                      <br>
                              Paul,<br>
                              Den 2017-10-15 kl. 01:19, skrev Paul
                      Kyzivat:<br>
                      <br>
                                  On 10/14/17 2:03 PM, Bernard Aboba
                      wrote:<br>
                      <br>
                                      Gunnar said:<br>
                      <br>
                                      "Applications not implementing
                      such specific notations<br>
                                      may use the following simple
                      deductions.<br>
                      <br>
                                      - A language tag in audio media is
                      supposed to indicate<br>
                                      spoken modality.<br>
                      <br>
                                      [BA] Even a tag with "Sign
                      Language" in the description??<br>
                      <br>
                                      - A language tag in text media is
                      supposed to indicate                 written
                      modality.<br>
                      <br>
                                      [BA] If the tag has "Sign
                      Language" in the description,<br>
                                      can this document really say that?<br>
                      <br>
                                      - A language tag in video media is
                      supposed to indicate<br>
                                      visual sign language modality
                      except for the case when<br>
                                      it is supposed to indicate a view
                      of a speaking person<br>
                                      mentioned in section 5.2
                      characterized by the exact same<br>
                                      language tag also appearing in an
                      audio media specification.<br>
                      <br>
                                      [BA] It seems like an over-reach
                      to say that a spoken<br>
                                      language tag in video media should
                      instead be<br>
                                      interpreted as a request for Sign
                      Language.  If this<br>
                                      were done, would it always be
                      clear which Sign Language<br>
                                      was intended?  And could we really
                      assume that both<br>
                                      sides, if negotiating a spoken
                      language tag in video<br>
                                      media, were really indicating the
                      desire to sign?  It<br>
                                      seems like this could easily
                      result in interoperability<br>
                                      failure.<br>
                      <br>
                      <br>
                                  IMO the right way to indicate that two
                      (or more) media<br>
                                  streams are conveying alternative
                      representations of the<br>
                                  same language content is by grouping
                      them with a new<br>
                                  grouping attribute. That can tie
                      together an audio with a<br>
                                  video and/or text. A language tag for
                      sign language on the<br>
                                  video stream then clarifies to the
                      recipient that it is sign<br>
                                  language. The grouping attribute by
                      itself can indicate that<br>
                                  these streams are conveying language.<br>
                      <br>
                              &lt;GH&gt;Yes, and that is proposed in<br>
                              draft-hellstrom-slim-modality-<wbr>grouping   
                      with two kinds of<br>
                              grouping: One kind of grouping to tell
                      that two or more<br>
                              languages in different streams are
                      alternatives with the same<br>
                              content and a priority order is assigned
                      to them to guide the<br>
                              selection of which one to use during the
                      call. The other kind of<br>
                              grouping telling that two or more
                      languages in different streams<br>
                              are desired together with the same
                      language content but<br>
                              different modalities ( such as the use for
                      captioned telephony<br>
                              with the same content provided in both
                      speech and text, or sign<br>
                              language interpretation where you see the
                      interpreter, or<br>
                              possibly spoken language interpretation
                      with the languages<br>
                              provided in different audio streams ). I
                      hope that that draft<br>
                              can be progressed. I see it as a needed
                      complement to the pure<br>
                              language indications per media.<br>
                      <br>
                      <br>
                          Oh, sorry. I did read that draft but forgot
                      about it.<br>
                      <br>
                              The discussion in this thread is more
                      about how an application<br>
                              would easily know that e.g. "ase" is a
                      sign language and "en" is<br>
                              a spoken (or written) language, and also a
                      discussion about what<br>
                              kinds of languages are allowed and
                      indicated by default in each<br>
                              media type. It was not at all about
                      falsely using language tags<br>
                              in the wrong media type as Bernard
                      understood my wording. It was<br>
                              rather a limitation to what modalities are
                      used in each media<br>
                              type and how to know the modality with
                      cases that are not<br>
                              evident, e.g. "application" and "message"
                      media types.<br>
                      <br>
                      <br>
                          What do you mean by "know"? Is it for the *UA*
                      software to know, or<br>
                          for the human user of the UA to know?
                      Presumably a human user that<br>
                          cares will understand this if presented with
                      the information in some<br>
                          way. But typically this isn't presented to the
                      user.<br>
                      <br>
                          For the software to know must mean that it
                      will behave differently<br>
                          for a tag that represents a sign language than
                      for one that<br>
                          represents a spoken or written language. What
                      is it that it will do<br>
                          differently?<br>
                      <br>
                                   Thanks,<br>
                                   Paul<br>
                      <br>
                      <br>
                              Right now we have returned to a very
                      simple rule: we define only<br>
                              use of spoken language in audio media,
                      written language in text<br>
                              media and sign language in video media.<br>
                              We have discussed other use, such as a
                      view of a speaking person<br>
                              in video, text overlay on video, a sign
                      language notation in<br>
                              text media, written language in message
                      media, written language<br>
                              in WebRTC data channels, sign written and
                      spoken in bucket media<br>
                              maybe declared as application media. We do
                      not define these<br>
                              cases. They are just not defined, not
                      forbidden. They may be<br>
                              defined in the future.<br>
                      <br>
                              My proposed wording in section 5.4 got too
                      many<br>
                              misunderstandings so I gave up with it. I
                      think we can live with<br>
                              5.4 as it is in version -16.<br>
                      <br>
                              Thanks,<br>
                              Gunnar<br>
                      <br>
                      <br>
                      <br>
                                  (IIRC I suggested something along
                      these lines a long time ago.)<br>
                      <br>
                                       Thanks,<br>
                                       Paul<br>
                      <br>
                                  ______________________________<wbr>_________________<br>
                                  SLIM mailing list<br>
                                  <a href="mailto:SLIM@ietf.org"
                        target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>
                      &lt;mailto:<a href="mailto:SLIM@ietf.org"
                        target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                                  <a
                        href="https://www.ietf.org/mailman/listinfo/slim"
                        rel="noreferrer" target="_blank"
                        moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                                  &lt;<a
                        href="https://www.ietf.org/mailman/listinfo/slim"
                        rel="noreferrer" target="_blank"
                        moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                      <br>
                      <br>
                      <br>
                          ______________________________<wbr>_________________<br>
                          SLIM mailing list<br>
                          <a href="mailto:SLIM@ietf.org"
                        target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>
                      &lt;mailto:<a href="mailto:SLIM@ietf.org"
                        target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                          <a
                        href="https://www.ietf.org/mailman/listinfo/slim"
                        rel="noreferrer" target="_blank"
                        moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                          &lt;<a
                        href="https://www.ietf.org/mailman/listinfo/slim"
                        rel="noreferrer" target="_blank"
                        moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                      <br>
                      <br>
                    </blockquote>
                    <br>
                  </blockquote>
                  <br>
                </div>
              </div>
              <div class="HOEnZb">
                <div class="h5"> -- <br>
                  ------------------------------<wbr>-----------<br>
                  Gunnar Hellström<br>
                  Omnitor<br>
                  <a href="mailto:gunnar.hellstrom@omnitor.se"
                    target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><br>
                  <a href="tel:%2B46%20708%20204%20288"
                    value="+46708204288" target="_blank"
                    moz-do-not-send="true">+46 708 204 288</a><br>
                  <br>
                </div>
              </div>
            </blockquote>
          </div>
          <br>
        </div>
      </blockquote>
      <br>
      <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------AFBAE38C0049BF1C725DFCC7--


From nobody Mon Oct 23 14:17:53 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 6F40E13A5CB for <slim@ietfa.amsl.com>; Mon, 23 Oct 2017 14:17:51 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.6
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id loxteeIvgrK2 for <slim@ietfa.amsl.com>; Mon, 23 Oct 2017 14:17:46 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (vsp-unauthed02.binero.net [195.74.38.227]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 4AB8A1394F8 for <slim@ietf.org>; Mon, 23 Oct 2017 14:17:39 -0700 (PDT)
X-Halon-ID: 864dced7-b837-11e7-99c7-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id 864dced7-b837-11e7-99c7-005056917f90; Mon, 23 Oct 2017 23:17:07 +0200 (CEST)
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: slim@ietf.org, Paul Kyzivat <pkyzivat@alum.mit.edu>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se> <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com> <49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se> <7d20fee8-fcb0-1f50-049b-82f0c2491f50@omnitor.se>
Message-ID: <65f7d728-10b0-b8f9-3d82-8de13c5e7c67@omnitor.se>
Date: Mon, 23 Oct 2017 23:17:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <7d20fee8-fcb0-1f50-049b-82f0c2491f50@omnitor.se>
Content-Type: multipart/alternative; boundary="------------509E6EDBBE7A51BE5A97733A"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/IT_1Ls2dizXGbva0yRMnofeecPE>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Oct 2017 21:17:51 -0000

This is a multi-part message in MIME format.
--------------509E6EDBBE7A51BE5A97733A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Issue #43 is the only issue we have left now. I do not want to see the 
discussion stop again until we have a solution on it that seems acceptable.

Section 5.4 seems to be a good place to handle Issue #43.

Currently, section 5.4 is more aimed at limiting what kinds of coding for 
languages and modalities are acceptable.

Some viewpoints said that such limitations are not needed and that 5.4 
can be deleted.

I think we can do something in between these extremes. We can introduce 
explanations for what is required from an acceptable coding of a 
combination of media, languages, directions, and other parameters and 
explain basic ways to assess what is the resulting modality, and also 
explain that the more common media and language combinations that are 
used, the higher chance there is for a match. Thus unusual combinations 
are discouraged but not forbidden as long as the modality can be 
assessed from them. They can be used in specific applications.
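As a rough illustration of the kind of assessment meant above, an application 
could combine the SDP media type with a lookup in the IANA Language Subtag 
Registry. This is only my own simplified sketch, not proposed wording for the 
draft; the parsing and the modality rules are deliberately minimal:

```python
# Hypothetical sketch: deduce the modality of a language tag used in an
# SDP media description, using a local copy of the IANA Language Subtag
# Registry (a plain-text file of records separated by "%%").

def parse_registry(text):
    """Map primary language subtags to their Description fields."""
    descriptions = {}
    for record in text.split("%%"):
        fields = {}
        for line in record.strip().splitlines():
            key, sep, value = line.partition(":")
            if sep:
                fields.setdefault(key.strip(), []).append(value.strip())
        if fields.get("Type") == ["language"]:
            descriptions[fields["Subtag"][0]] = fields.get("Description", [])
    return descriptions

def is_sign_language(tag, descriptions):
    # The heuristic Randall suggested in the thread: look for "sign"
    # in the registry description of the primary language subtag.
    primary = tag.split("-")[0].lower()
    return any("sign" in d.lower() for d in descriptions.get(primary, []))

def modality(media, tag, descriptions):
    """Simple rule set: audio -> spoken, text -> written, video -> signed;
    everything else (m=application, m=message, a non-signed tag in video,
    a signed tag in audio or text) stays undefined."""
    signed = is_sign_language(tag, descriptions)
    if media == "audio" and not signed:
        return "spoken"
    if media == "text" and not signed:
        return "written"
    if media == "video" and signed:
        return "signed"
    return "undefined"
```

The point of the sketch is that only the registry lookup, not hard-coded 
knowledge in the application, distinguishes signed from non-signed tags; the 
unusual combinations are not forbidden, they simply come out as "undefined".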

I might continue tomorrow with a wording proposal for the reasoning above, 
hoping that we can close issue #43 and the discussions around 5.4 soon.

/Gunnar


On 2017-10-17 at 11:02, Gunnar Hellström wrote:
>
> An even more general way to express what section 5.4 tries to say is:
>
> ------------------------------------------------------------------------------------------------------------------------------------------
>
> 5.4 Combinations of Language tags and Media descriptions
>
> The combination of Language tags and other information in the media 
> descriptions should be made so that the intended modality can be 
> concluded by the negotiating parties.
>
>
> ----------------------------------------------------------------------------------------
>
> That means we do not need to investigate what is possible today, and 
> what further attributes or coding rules may be added in the future.
>
> There is a risk that implementers start using some insufficient 
> coding, which can cause interop issues. On the other hand, we avoid 
> limiting valid use that we just have not thought about, since we no 
> longer need to say that specific combinations are out of scope or not 
> defined. It is up to implementers to check that the combinations they 
> use result in an unambiguous modality.
>
> And it opens the way for possible new attributes, e.g. 
> a=modality:spoken or a=modality:written, to complement the 
> undefined case when a non-signed language tag without script subtag is 
> used in video media, and also for explaining any use of m=application 
> or m=message media in interactive communication.
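For illustration only: an offer disambiguating a non-signed tag in video with 
the hypothetical attribute quoted above might look like the fragment below. 
Note that a=modality is not defined anywhere, and the a=hlang-send attribute 
name is assumed from later versions of the slim draft.

```
m=video 49170 RTP/AVP 99
a=hlang-send:en
a=modality:spoken
```

Here "en" in video would otherwise be undefined; the hypothetical modality 
attribute would mark it as a view of a speaking person rather than signing.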
>
> It does not really answer Issue #43 by explaining HOW to assess the 
> modality easily, but it requires the implementers to make sure that it 
> is possible.
>
> And deducing the intended modality is the key to successful 
> negotiation and communication.
>
> Do you think this would be clear enough, or do we need to go into what 
> clear cases we have?
>
> Gunnar
>
>
>
> On 2017-10-17 at 00:21, Gunnar Hellström wrote:
>> On 2017-10-16 at 01:21, Bernard Aboba wrote:
>>> Paul said:
>>>
>>> ""- can the UA use this information to change how to render the media?"
>>>
>>> [BA] If the video is used for signing, an application might infer an 
>>> encoder preference for frame rate over resolution (e.g. in WebRTC, 
>>> RTCRtpParameters.degradationPreference = "maintain-framerate" )
>> <GH>Right, that is a valid example of how real "knowledge" of the 
>> modality can be used by the application.
>>
>>
>> And, as a response on issue #43,
>>
>> A simple way is to say
>>
>> Video media descriptions shall only contain sign language tags
>> Audio media descriptions shall only contain language tags for spoken 
>> language
>> Text media descriptions shall only contain language tags for written 
>> language
>> Use of other media descriptions, such as message and application, with 
>> language indications requires other specifications on how to assess 
>> the modality for non-signed languages.
>>
>> The current 5.4 does not mention our main problem with the language 
>> tags: that there is no difference in them whether we mean use for 
>> spoken language or for written language. We should have made better 
>> efforts to solve that problem long ago, but we have not.
>>
>> 5.4 can be modified to specify the simple limited case and the 
>> problems that block us from specifying other cases:
>>
>>
>>       5.4.  Media and modality Combination problems
>>       <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>
>>
>>     The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.
>>
>>     The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.
>>
>>     The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.
>>
>>     Which language tags are signed and which are not can be deduced
>>     from the IANA language tag registry. How this is done is out of scope of this document.
>>
>> --------------------------------------------------------
>>
>>
>> But if we want to allow more cases, we need to consider the following 
>> complications:
>>
>>
>> 1. To assess if a language represents a Sign Language, the 
>> application can look for the word "sign" in the description in the 
>> IANA language registry or a copy thereof, as Randall already indicated.
>>
>> 2. For written languages used as a text component in a video stream, 
>> it is possible to code this for languages requiring a script subtag, 
>> but not for languages with suppressed script subtags.
>>
>> 3. We have also discussed proposals for how to code written language 
>> in a video stream for languages not requiring a script subtag, but 
>> did not get acceptance for our proposals. So we need to say that this 
>> is currently undefined.
>>
>> 4. We also discussed how to code a view of a speaking person in video 
>> and said that that could be done by using the "definitively not 
>> written" script subtag on a non-signed language tag in video. But 
>> that was not appreciated by the language experts. Another option was 
>> to not allow written language overlaid on video, and that is the 
>> option used lately (up to version -16 or so).
>>
>> 5. For spoken language in audio media, that is the only case we have 
>> for language tags in audio. So it is easy to code and assess.
>>
>> 6. For written language in text media, a check can be made of whether 
>> "sign" is part of the language tag description, and if not, it is a 
>> written language.
>>
>> 7. For signed language in text media, a check can be made of whether 
>> "sign" is part of the language tag description, and if it is, it is a 
>> signed language in text notation (extremely unusual).
>>
>> 8. For use of language tags in other media than audio, video, and 
>> text, there is a need for a description of how to assess the 
>> modality, especially for non-signed languages, before such use.
>>
>>
>> We can construct a section 5.4 to describe this situation, but I 
>> doubt that it is worth the effort.
>>
>>
>>>
>>> See: 
>>> https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference
>>>
>>> On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellström 
>>> <gunnar.hellstrom@omnitor.se> wrote:
>>>
>>>     On 2017-10-15 at 21:27, Paul Kyzivat wrote:
>>>
>>>         On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>>
>>>             Paul said:
>>>
>>>             "For the software to know must mean that it will behave
>>>             differently for a tag that represents a sign language
>>>             than for one that represents a spoken or written
>>>             language. What is it that it will do differently?"
>>>
>>>             [BA] In terms of behavior based on the signed/non-signed
>>>             distinction, in -17 the only reference appears to be in
>>>             Section 5.4, stating that certain combinations are not
>>>             defined in the document (but that definition of those
>>>             combinations was out of scope):
>>>
>>>
>>>         I'm asking whether this is a distinction without a
>>>         difference. I'm not asking whether this makes a difference
>>>         in the *protocol*, but whether in the end it benefits the
>>>         participants in the call in any way.
>>>
>>>     <GH>Good point, I was on my way to make a similar comment
>>>     earlier today. The difference it makes for applications to
>>>     "know" what modality a language tag represents in its used
>>>     position seems to be only for imagined functions that are out of
>>>     scope for the protocol specification.
>>>
>>>         For instance:
>>>
>>>         - does it help the UA to decide how to alert the callee, so
>>>         that the
>>>           callee can better decide whether to accept the call or
>>>         instruct the
>>>           UA about how to handle the call?
>>>
>>>     <GH>Yes, for a regular human user-to-user call, the result of
>>>     the negotiation must be presented to the participants, so that
>>>     they can start the call with a language and modality that is agreed.
>>>     That presentation could be exactly the description from the
>>>     language tag registry, and then no "knowledge" is needed from
>>>     the application. But it is more likely that the application has
>>>     its own string for presentation of the negotiated language and
>>>     modality. So that will be presented. But it is still found by a
>>>     table lookup between language tag and string for a language
>>>     name, so no real knowledge is needed.
>>>     We have said many times that the way the application tells the
>>>     user the result of the negotiation is out of scope for the
>>>     draft, but it is good to discuss and know that it can be done.
>>>     A similar mechanism is also needed for configuration of the
>>>     user's language preference profile further discussed below.
>>>
>>>
>>>         - does it allow the UA to make a decision whether to accept
>>>         the media?
>>>
>>>     <GH>No, the media should be accepted regardless of the result of
>>>     the language negotiation.
>>>
>>>
>>>         - can the UA use this information to change how to render
>>>         the media?
>>>
>>>     <GH>Yes, for the specialized text notation of sign language we
>>>     have discussed but currently placed out of scope, a very special
>>>     rendering application is needed. The modality would be
>>>     recognized by a script subtag to a sign language tag used in
>>>     text media. However, I think that would be best to also use it
>>>     with a specific text subtype, so that the rendering can be
>>>     controlled by invocation of a "codec" for that rendering.
>>>
>>>
>>>         And if there is something like this, will the UA be able to
>>>         do this generically based on whether the media is sign
>>>         language or not, or will the UA need to already understand
>>>         *specific* sign language tags?
>>>
>>>     <GH>Applications will need to have localized versions of the
>>>     names for the different sign languages and also for spoken
>>>     languages and written languages, to be used in setting of
>>>     preferences and announcing the results of the negotiation. It
>>>     might be overkill to have such localized names for all languages
>>>     in the IANA language registry, so it will need to be able to
>>>     handle localized names of a subset of the registry. With good
>>>     design, however, this is just an automatic translation between a
>>>     language tag and a corresponding name, so it in fact does not
>>>     require any "knowledge" of what modality is used with each
>>>     language tag.
>>>     The application can ask for the configuration:
>>>     "Which languages do you want to offer to send in video"
>>>     "Which languages do you want to offer to send in text"
>>>     "Which languages do you want to offer to send in audio"
>>>     "Which languages do you want to be prepared to receive in video"
>>>     "Which languages do you want to be prepared to receive in text"
>>>     "Which languages do you want to be prepared to receive in audio"
>>>
>>>     And for each question provide a list of language names to select
>>>     from. When the selection is made, the corresponding language tag
>>>     is placed in the profile for negotiation.
>>>
>>>     If the application provides the whole IANA language registry to
>>>     the user for each question, then there is a possibility that the
>>>     user by mistake selects a language that requires another
>>>     modality than the question was about. If the application is to
>>>     limit the lists provided for each question, then it will need a
>>>     kind of knowledge about which language tags suit each modality
>>>     (and media).
>>>
>>>
>>>
>>>         E.g., A UA serving a deaf person might automatically
>>>         introduce a sign language interpreter into an incoming
>>>         audio-only call. If the incoming call has both audio and
>>>         video then the video *might* be for conveying sign language,
>>>         or not. If not then the UA will still want to bring in a
>>>         sign language interpreter. But is knowing the call
>>>         generically contains sign language sufficient to decide
>>>         against bringing in an interpreter? Or must that depend on
>>>         it being a sign language that the user can use? If the UA is
>>>         configured for all the specific sign languages that the user
>>>         can deal with then there is no need to recognize other sign
>>>         languages generically.
>>>
>>>     <GH>We are talking about specific language tags here and knowing
>>>     what modality they are used for. The user needs to specify which
>>>     sign languages they prefer to use. The callee application can be
>>>     made to look for gaps between what the caller offers and what
>>>     the callee can accept, and from that deduce which type and
>>>     language of conversion is needed, and invoke that as a
>>>     relay service. That invocation can be made completely table
>>>     driven and have corresponding translation profiles for available
>>>     relay services. But it is more likely that it is done by having
>>>     some knowledge about which languages are sign languages and
>>>     which are spoken languages and sending the call to the relay
>>>     service to try to sort out if they can handle the translation.
>>>
>>>
>>>
>>>     So, the answer is - no, the application does not really have any
>>>     knowledge about which modality a language tag represents in its
>>>     used position. If the user chooses to indicate very rare
>>>     language tags for a medium, then a match will just
>>>     become very unlikely.
>>>
>>>     Where does this discussion take us? Should we modify section 5.4
>>>     again?
>>>
>>>     Thanks
>>>     Gunnar
>>>
>>>             Thanks,
>>>             Paul
>>>
>>>                   5.4.  Undefined Combinations
>>>             <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>
>>>
>>>
>>>
>>>                 The behavior when specifying a non-signed language
>>>             tag for a video
>>>                 media stream, or a signed language tag for an audio
>>>             or text media
>>>                 stream, is not defined in this document.
>>>
>>>                 The problem of knowing which language tags are
>>>             signed and which are
>>>                 not is out of scope of this document.
>>>
>>>
>>>
>>>             On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
>>>             <pkyzivat@alum.mit.edu> wrote:
>>>
>>>                 On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>>
>>>                     Paul,
>>>                     On 2017-10-15 at 01:19, Paul Kyzivat wrote:
>>>
>>>                         On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>
>>>                             Gunnar said:
>>>
>>>                             "Applications not implementing such
>>>             specific notations
>>>                             may use the following simple deductions.
>>>
>>>                             - A language tag in audio media is
>>>             supposed to indicate
>>>                             spoken modality.
>>>
>>>                             [BA] Even a tag with "Sign Language" in
>>>             the description??
>>>
>>>                             - A language tag in text media is
>>>             supposed to indicate                 written modality.
>>>
>>>                             [BA] If the tag has "Sign Language" in
>>>             the description,
>>>                             can this document really say that?
>>>
>>>                             - A language tag in video media is
>>>             supposed to indicate
>>>                             visual sign language modality except for
>>>             the case when
>>>                             it is supposed to indicate a view of a
>>>             speaking person
>>>                             mentioned in section 5.2 characterized
>>>             by the exact same
>>>                             language tag also appearing in an audio
>>>             media specification.
>>>
>>>                             [BA] It seems like an over-reach to say
>>>             that a spoken
>>>                             language tag in video media should
>>>             instead be
>>>                             interpreted as a request for Sign
>>>             Language.  If this
>>>                             were done, would it always be clear
>>>             which Sign Language
>>>                             was intended?  And could we really
>>>             assume that both
>>>                             sides, if negotiating a spoken language
>>>             tag in video
>>>                             media, were really indicating the desire
>>>             to sign?  It
>>>                             seems like this could easily result in
>>>             interoperability
>>>                             failure.
>>>
>>>
>>>                         IMO the right way to indicate that two (or
>>>             more) media
>>>                         streams are conveying alternative
>>>             representations of the
>>>                         same language content is by grouping them
>>>             with a new
>>>                         grouping attribute. That can tie together an
>>>             audio with a
>>>                         video and/or text. A language tag for sign
>>>             language on the
>>>                         video stream then clarifies to the recipient
>>>             that it is sign
>>>                         language. The grouping attribute by itself
>>>             can indicate that
>>>                         these streams are conveying language.
>>>
>>>                     <GH>Yes, and that is proposed in
>>>                     draft-hellstrom-slim-modality-grouping with two
>>>             kinds of
>>>                     grouping: One kind of grouping to tell that two
>>>             or more
>>>                     languages in different streams are alternatives
>>>             with the same
>>>                     content and a priority order is assigned to them
>>>             to guide the
>>>                     selection of which one to use during the call.
>>>             The other kind of
>>>                     grouping telling that two or more languages in
>>>             different streams
>>>                     are desired together with the same language
>>>             content but
>>>                     different modalities ( such as the use for
>>>             captioned telephony
>>>                     with the same content provided in both speech
>>>             and text, or sign
>>>                     language interpretation where you see the
>>>             interpreter, or
>>>                     possibly spoken language interpretation with the
>>>             languages
>>>                     provided in different audio streams ). I hope
>>>             that that draft
>>>                     can be progressed. I see it as a needed
>>>             complement to the pure
>>>                     language indications per media.
>>>
>>>
>>>                 Oh, sorry. I did read that draft but forgot about it.
>>>
>>>                     The discussion in this thread is more about how
>>>             an application
>>>                     would easily know that e.g. "ase" is a sign
>>>             language and "en" is
>>>                     a spoken (or written) language, and also a
>>>             discussion about what
>>>                     kinds of languages are allowed and indicated by
>>>             default in each
>>>                     media type. It was not at all about falsely
>>>             using language tags
>>>                     in the wrong media type as Bernard understood my
>>>             wording. It was
>>>                     rather a limitation to what modalities are used
>>>             in each media
>>>                     type and how to know the modality with cases
>>>             that are not
>>>                     evident, e.g. "application" and "message" media
>>>             types.
>>>
>>>
>>>                 What do you mean by "know"? Is it for the *UA*
>>>             software to know, or
>>>                 for the human user of the UA to know? Presumably a
>>>             human user that
>>>                 cares will understand this if presented with the
>>>             information in some
>>>                 way. But typically this isn't presented to the user.
>>>
>>>                 For the software to know must mean that it will
>>>             behave differently
>>>                 for a tag that represents a sign language than for
>>>             one that
>>>                 represents a spoken or written language. What is it
>>>             that it will do
>>>                 differently?
>>>
>>>                          Thanks,
>>>                          Paul
>>>
>>>
>>>                     Right now we have returned to a very simple
>>>             rule: we define only
>>>                     use of spoken language in audio media, written
>>>             language in text
>>>                     media and sign language in video media.
>>>                     We have discussed other use, such as a view of a
>>>             speaking person
>>>                     in video, text overlay on video, a sign language
>>>             notation in
>>>                     text media, written language in message media,
>>>             written language
>>>                     in WebRTC data channels, sign written and spoken
>>>             in bucket media
>>>                     maybe declared as application media. We do not
>>>             define these
>>>                     cases. They are just not defined, not forbidden.
>>>             They may be
>>>                     defined in the future.
>>>
>>>                     My proposed wording in section 5.4 got too many
>>>                     misunderstandings so I gave up with it. I think
>>>             we can live with
>>>                     5.4 as it is in version -16.
>>>
>>>                     Thanks,
>>>                     Gunnar
>>>
>>>
>>>
>>>                         (IIRC I suggested something along these
>>>             lines a long time ago.)
>>>
>>>                              Thanks,
>>>                              Paul
>>>
>>>                         _______________________________________________
>>>                         SLIM mailing list
>>>                         SLIM@ietf.org
>>>                         https://www.ietf.org/mailman/listinfo/slim
>>>
>>>
>>>
>>>                 _______________________________________________
>>>                 SLIM mailing list
>>>                 SLIM@ietf.org
>>>                 https://www.ietf.org/mailman/listinfo/slim
>>>
>>>
>>>
>>>
>>>     -- 
>>>     -----------------------------------------
>>>     Gunnar Hellström
>>>     Omnitor
>>>     gunnar.hellstrom@omnitor.se
>>>     +46 708 204 288
>>>
>>>
>>
>> -- 
>> -----------------------------------------
>> Gunnar Hellström
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>> +46 708 204 288
>
> -- 
> -----------------------------------------
> Gunnar Hellström
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------509E6EDBBE7A51BE5A97733A
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Issue #43 is the only issue we have left now. I do not want to
      see the discussion stop again until we have a solution on it that
      seems acceptable. <br>
    </p>
    <p>Section 5.4 seems to be a good place to handle Issue #43. <br>
    </p>
    <p>Currently, section 5.4 is more aimed at limiting what kinds of
      coding for languages and modalities are acceptable. <br>
    </p>
    <p>Some viewpoints said that such limitations are not needed and
      that 5.4 can be deleted. <br>
    </p>
    <p>I think we can do something in between these extremes. We can
      introduce explanations for what is required from an acceptable
      coding of a combination of media, languages, directions, and other
      parameters and explain basic ways to assess what is the resulting
      modality, and also explain that the more common media and language
      combinations that are used, the higher chance there is for a
      match. Thus unusual combinations are discouraged but not forbidden
      as long as the modality can be assessed from them. They can be
      used in specific applications. <br>
    </p>
    <p>I might continue tomorrow with a wording proposal for the reasoning
      above, hoping that we can close issue #43 and the discussions
      around 5.4 soon.</p>
    <p>/Gunnar<br>
    </p>
    <br>
    <div class="moz-cite-prefix">On 2017-10-17 at 11:02, Gunnar
      Hellström wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:7d20fee8-fcb0-1f50-049b-82f0c2491f50@omnitor.se">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <p>An even more general way to express what section 5.4 tries to
        say is:</p>
      <p>------------------------------------------------------------------------------------------------------------------------------------------<br>
      </p>
      <p>5.4 Combinations of Language tags and Media descriptions</p>
      <p>The combination of Language tags and other information in the
        media descriptions should be made so that the intended modality
        can be concluded by the negotiating parties.</p>
      <p><br>
      </p>
      <p>
----------------------------------------------------------------------------------------</p>
      <p>That means we do not need to investigate what is possible today,
        and what further attributes or coding rules may be added in the
        future. <br>
      </p>
      <p>There is a risk that implementers start using some insufficient
        coding, which can cause interop issues. On the other hand, we
        avoid limiting valid use that we just have not thought about,
        since we no longer need to say that specific combinations are
        out of scope or not defined. It is up to implementers to check
        that the combinations they use result in an unambiguous
        modality.<br>
      </p>
      <p>And it opens the way for possible new attributes, e.g.
        a=modality:spoken or a=modality:written, to complement the
        undefined case when a non-signed language tag without script
        subtag is used in video media, and also for explaining any use
        of m=application or m=message media in interactive
        communication. <br>
      </p>
      <p>It does not really answer Issue #43 by explaining HOW to assess
        the modality easily, but it requires the implementers to make
        sure that it is possible.</p>
      <p>And deducing the intended modality is the key to successful
        negotiation and communication.<br>
      </p>
      <p>Do you think this would be clear enough, or do we need to go
        into what clear cases we have?<br>
      </p>
      <p>Gunnar<br>
      </p>
      <p><br>
      </p>
      <br>
      <div class="moz-cite-prefix">On 2017-10-17 at 00:21,
        Gunnar Hellström wrote:<br>
      </div>
      <blockquote type="cite"
        cite="mid:49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se">
        <meta http-equiv="Content-Type" content="text/html;
          charset=utf-8">
        On 2017-10-16 at 01:21, Bernard Aboba wrote:<br>
        <blockquote type="cite"
cite="mid:CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com">
          <div dir="ltr">Paul said: 
            <div><br>
            </div>
            <div>""<span style="color:rgb(80,0,80);font-size:12.8px">-
                can the UA use this information to change how to render
                the media?"</span></div>
            <div><span style="color:rgb(80,0,80);font-size:12.8px"><br>
              </span></div>
            <div><span style="color:rgb(80,0,80);font-size:12.8px">[BA] 
                If the video is used for signing, an application might
                infer an encoder preference for frame rate over
                resolution (e.g. in WebRTC,
                RTCRtpParameters.degradationPreference =
                "maintain-framerate" )</span></div>
          </div>
        </blockquote>
        &lt;GH&gt;Right, that is a valid example of how real "knowledge"
        of the modality can be used by the application. <br>
        <br>
        <br>
        And, as a response on issue #43,<br>
        <br>
        A simple way is to say<br>
        <br>
        Video media descriptions shall only contain sign language tags<br>
        Audio media descriptions shall only contain language tags for
        spoken language<br>
        Text media descriptions shall only contain language tags for
        written language<br>
        Use of other media descriptions such as message and application
        with language indications require other specifications on how to
        assess the modality for non-signed languages.<br>
        <br>
        The current 5.4 does not mention our main problem with the
        language tags: there is no difference in them between use for
        spoken language and use for written language. We should have made
        better efforts to solve that problem long ago, but we have not.<br>
        <br>
        5.4 can be modified to specify the simple limited case and the
        problems that block us from specifying other cases:<br>
        <br>
        <pre class="newpage" style="font-size: 13.3333px; margin-top: 0px; margin-bottom: 0px; break-before: page; color: rgb(0, 0, 0); font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: normal; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-style: initial; text-decoration-color: initial;"><span class="h3" style="line-height: 0pt; display: inline; white-space: pre; font-family: monospace; font-size: 1em; font-weight: bold;"><h3 style="line-height: 0pt; display: inline; white-space: pre; font-family: monospace; font-size: 1em; font-weight: bold;"><a class="selflink" name="section-5.4" href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4" style="color: black; text-decoration: none;" moz-do-not-send="true">5.4</a>.  Media and modality Combination problems</h3></span>


   The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.

   The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.

   The use of language tags for negotiation of languages in media other than audio, video and text is not defined in this document.

   Which language tags are signed and which are not can be deduced 
   from the IANA language tag registry. How this is done is out of scope of this document.
</pre>
        <br>
        --------------------------------------------------------<br>
        <br>
        <br>
        But if we want to allow more cases, we need to consider the
        following complications:<br>
          <br>
        <br>
        1. To assess whether a language tag represents a sign language,
        the application can look for the word "sign" in the description in
        the IANA language registry, or a copy thereof, as Randall already
        indicated. <br>
        <br>
        2. For written languages used as a text component in a video
        stream, it is possible to code this for languages requiring a
        script subtag, but not for languages with suppressed script
        subtags <br>
        <br>
        3. We have also discussed proposals for how to code written
        language in a video stream for languages not requiring a script
        subtag, but our proposals were not accepted. So we need to
        say that that is currently undefined.<br>
        <br>
        4. We also discussed how to code a view of a speaking person in
        video and said that that could be done by using the
        "definitively not written" script subtag on a non-signed
        language tag in video. But that was not appreciated by the
        language experts. Another option was to not allow written
        language overlaid on video, and that is the option used lately
        (up to version -16 or so). <br>
        <br>
        5. For audio media, spoken language is the only case we have for
        language tags, so that is easy to code and assess.<br>
        <br>
        6. For written language in text media, a check can be made as to
        whether "sign" is part of the language tag description; if it is
        not, it is a written language. <br>
        <br>
        7. For signed language in text media, a check can be made as to
        whether "sign" is part of the language tag description; if it is,
        it is a signed language in text notation (extremely unusual).<br>
        <br>
        8. For use of language tags in media other than audio, video
        and text, a description of how to assess the modality, especially
        for non-signed languages, is needed before they are used.<br>
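        <br>
        The registry check described in points 1, 6 and 7 could be
        sketched as follows. This is an assumption-laden sketch: the
        parser handles only the single-line record fields shown in this
        abbreviated excerpt of the IANA Language Subtag Registry, and the
        function names are illustrative:

```python
# Sketch of the "sign" check: decide whether a tag's primary language
# subtag denotes a sign language by looking for "sign" in its Description
# field. The excerpt below is abbreviated; a real application would load
# the full IANA Language Subtag Registry (which also has multi-line
# Description fields that this simple parser does not handle).

REGISTRY_EXCERPT = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
%%
Type: language
Subtag: swl
Description: Swedish Sign Language
"""

def parse_registry(text: str) -> dict:
    """Map each language subtag to its description."""
    records = {}
    for block in text.split("%%"):
        fields = dict(
            line.split(": ", 1)
            for line in block.strip().splitlines()
            if ": " in line
        )
        if fields.get("Type") == "language" and "Subtag" in fields:
            records[fields["Subtag"]] = fields.get("Description", "")
    return records

def is_sign_language(tag: str, registry: dict) -> bool:
    """True if the tag's primary subtag is described as a sign language."""
    primary = tag.split("-")[0].lower()
    return "sign" in registry.get(primary, "").lower()

registry = parse_registry(REGISTRY_EXCERPT)
print(is_sign_language("ase", registry))  # True
print(is_sign_language("en", registry))   # False
```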
        <br>
        <br>
        We can construct a section 5.4 to describe this situation, but I
        doubt that it is worth the effort.<br>
        <br>
        <br>
        <blockquote type="cite"
cite="mid:CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com">
          <div dir="ltr">
            <div><span style="color:rgb(80,0,80);font-size:12.8px"><br>
              </span></div>
            <div><span style="color:rgb(80,0,80);font-size:12.8px">See:  </span><font
                color="#500050"><span style="font-size:12.8px"><a
href="https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference"
                    moz-do-not-send="true">https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference</a></span></font></div>
          </div>
          <div class="gmail_extra"><br>
            <div class="gmail_quote">On Sun, Oct 15, 2017 at 2:22 PM,
              Gunnar Hellström <span dir="ltr">&lt;<a
                  href="mailto:gunnar.hellstrom@omnitor.se"
                  target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;</span>
              wrote:<br>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
                  class="">Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:<br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                    On 10/15/17 1:49 PM, Bernard Aboba wrote:<br>
                    <blockquote class="gmail_quote" style="margin:0 0 0
                      .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      Paul said:<br>
                      <br>
                      "For the software to know must mean that it will
                      behave differently for a tag that represents a
                      sign language than for one that represents a
                      spoken or written language. What is it that it
                      will do differently?"<br>
                      <br>
                      [BA] In terms of behavior based on the
                      signed/non-signed distinction, in -17 the only
                      reference appears to be in Section 5.4, stating
                      that certain combinations are not defined in the
                      document (but that definition of those
                      combinations was out of scope):<br>
                    </blockquote>
                    <br>
                    I'm asking whether this is a distinction without a
                    difference. I'm not asking whether this makes a
                    difference in the *protocol*, but whether in the end
                    it benefits the participants in the call in any way.
                    <br>
                  </blockquote>
                </span> &lt;GH&gt;Good point, I was on my way to make a
                similar comment earlier today. The difference it makes
                for applications to "know" what modality a language tag
                represents in its used position seems to be only for
                imagined functions that are out of scope for the
                protocol specification.<span class=""><br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                    For instance:<br>
                    <br>
                    - does it help the UA to decide how to alert the
                    callee, so that the<br>
                      callee can better decide whether to accept the
                    call or instruct the<br>
                      UA about how to handle the call?<br>
                  </blockquote>
                </span> &lt;GH&gt;Yes, for a regular human user-to-user
                call, the result of the negotiation must be presented to
                the participants, so that they can start the call with a
                language and modality that is agreed.<br>
                That presentation could be exactly the description from
                the language tag registry, and then no "knowledge" is
                needed from the application. But it is more likely that
                the application has its own string for presentation of
                the negotiated language and modality. So that will be
                presented. But it is still found by a table lookup
                between language tag and string for a language name, so
                no real knowledge is needed.<br>
                We have said many times that the way the application
                tells the user the result of the negotiation is out of
                scope for the draft, but it is good to discuss and know
                that it can be done.<br>
                A similar mechanism is also needed for configuration of
                the user's language preference profile further discussed
                below.<span class=""><br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    - does it allow the UA to make a decision whether to
                    accept the media?<br>
                  </blockquote>
                </span> &lt;GH&gt;No, the media should be accepted
                regardless of the result of the language negotiation.<span
                  class=""><br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    - can the UA use this information to change how to
                    render the media?<br>
                  </blockquote>
                </span> &lt;GH&gt;Yes, for the specialized text notation
                of sign language we have discussed but currently placed
                out of scope, a very special rendering application is
                needed. The modality would be recognized by a script
                subtag to a sign language tag used in text media.
                However, I think it would be best to also use it with
                a specific text subtype, so that the rendering can be
                controlled by invocation of a "codec" for that
                rendering.<span class=""><br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    And if there is something like this, will the UA be
                    able to do this generically based on whether the
                    media is sign language or not, or will the UA need
                    to already understand *specific* sign language tags?<br>
                  </blockquote>
                </span> &lt;GH&gt;Applications will need to have
                localized versions of the names for the different sign
                languages and also for spoken languages and written
                languages, to be used in setting of preferences and
                announcing the results of the negotiation. It might be
                overkill to have such localized names for all languages
                in the IANA language registry, so it will need to be
                 able to handle localized names of a subset of the
                registry. With good design however, this is just an
                automatic translation between a language tag and a
                 corresponding name, so it does not in fact require any
                "knowledge" of what modality is used with each language
                tag.<br>
                The application can ask for the configuration:<br>
                "Which languages do you want to offer to send in video"<br>
                "Which languages do you want to offer to send in text"<br>
                "Which languages do you want to offer to send in audio"<br>
                "Which languages do you want to be prepared to receive
                in video"<br>
                "Which languages do you want to be prepared to receive
                in text"<br>
                "Which languages do you want to be prepared to receive
                in audio"<br>
                <br>
                And for each question provide a list of language names
                to select from. When the selection is made, the
                corresponding language tag is placed in the profile for
                negotiation.<br>
                <br>
                If the application provides the whole IANA language
                registry to the user for each question, then there is a
                possibility that the user by mistake selects a language
                that requires another modality than the question was
                 about. If the application is to limit the lists provided
                 for each question, then it will need some knowledge of
                 which language tags suit each modality (and media).<span
                  class=""><br>
                  <br>
                  <br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    E.g., A UA serving a deaf person might automatically
                    introduce a sign language interpreter into an
                    incoming audio-only call. If the incoming call has
                    both audio and video then the video *might* be for
                    conveying sign language, or not. If not then the UA
                    will still want to bring in a sign language
                    interpreter. But is knowing the call generically
                    contains sign language sufficient to decide against
                    bringing in an interpreter? Or must that depend on
                    it being a sign language that the user can use? If
                    the UA is configured for all the specific sign
                    languages that the user can deal with then there is
                    no need to recognize other sign languages
                    generically.<br>
                  </blockquote>
                </span> &lt;GH&gt;We are talking about specific language
                tags here and knowing what modality they are used for.
                The user needs to specify which sign languages they
                prefer to use. The callee application can be made to
                look for gaps between what the caller offers and what
                 the callee can accept, and from that deduce which type
                 of conversion, and which languages, are needed, and
                 invoke that as a relay service. That invocation can be
                made completely table driven and have corresponding
                translation profiles for available relay services. But
                it is more likely that it is done by having some
                knowledge about which languages are sign languages and
                which are spoken languages and sending the call to the
                relay service to try to sort out if they can handle the
                translation.<br>
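                 The gap check described above could be sketched as
                 follows (all names are illustrative; deciding which
                 conversions a relay service can actually provide is
                 left out):

```python
# Illustrative sketch: compare the language tags a caller offers per
# media with the tags the callee accepts, and report the unmatched
# offers, for which a relay/interpretation service might be invoked.

def language_gaps(offered: dict, accepted: dict) -> dict:
    """Return, per media, the offered tags the callee cannot accept."""
    gaps = {}
    for media, tags in offered.items():
        unmatched = tags - accepted.get(media, set())
        if unmatched:
            gaps[media] = unmatched
    return gaps

offered = {"video": {"ase"}, "audio": {"en"}}
accepted = {"audio": {"en", "es"}}
print(language_gaps(offered, accepted))  # {'video': {'ase'}}
```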
                <blockquote class="gmail_quote" style="margin:0 0 0
                  .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                  <br>
                </blockquote>
                So, the answer is - no, the application does not really
                have any knowledge about which modality a language tag
                 represents in its used position. If the user chooses
                 very rare language tag indications for a media,
                 then a match will just become very unlikely.<br>
                <br>
                Where does this discussion take us? Should we modify
                section 5.4 again?<br>
                <br>
                Thanks<span class="HOEnZb"><font color="#888888"><br>
                    Gunnar</font></span>
                <div class="HOEnZb">
                  <div class="h5"><br>
                    <blockquote class="gmail_quote" style="margin:0 0 0
                      .8ex;border-left:1px #ccc solid;padding-left:1ex">
                          Thanks,<br>
                          Paul<br>
                      <br>
                      <blockquote class="gmail_quote" style="margin:0 0
                        0 .8ex;border-left:1px #ccc
                        solid;padding-left:1ex">       5.4<br>
                        &lt;<a
href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4"
                          rel="noreferrer" target="_blank"
                          moz-do-not-send="true">https://tools.ietf.org/html/d<wbr>raft-ietf-slim-negotiating-hum<wbr>an-language-17#section-5.4</a>&gt;.<br>
                              Undefined Combinations<br>
                        <br>
                        <br>
                        <br>
                            The behavior when specifying a non-signed
                        language tag for a video<br>
                            media stream, or a signed language tag for
                        an audio or text media<br>
                            stream, is not defined in this document.<br>
                        <br>
                            The problem of knowing which language tags
                        are signed and which are<br>
                            not is out of scope of this document.<br>
                        <br>
                        <br>
                        <br>
                        On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
                        &lt;<a href="mailto:pkyzivat@alum.mit.edu"
                          target="_blank" moz-do-not-send="true">pkyzivat@alum.mit.edu</a>
                        &lt;mailto:<a
                          href="mailto:pkyzivat@alum.mit.edu"
                          target="_blank" moz-do-not-send="true">pkyzivat@alum.mit.edu</a>&gt;<wbr>&gt;
                        wrote:<br>
                        <br>
                            On 10/15/17 2:24 AM, Gunnar Hellström wrote:<br>
                        <br>
                                Paul,<br>
                                Den 2017-10-15 kl. 01:19, skrev Paul
                        Kyzivat:<br>
                        <br>
                                    On 10/14/17 2:03 PM, Bernard Aboba
                        wrote:<br>
                        <br>
                                        Gunnar said:<br>
                        <br>
                                        "Applications not implementing
                        such specific notations<br>
                                        may use the following simple
                        deductions.<br>
                        <br>
                                        - A language tag in audio media
                        is supposed to indicate<br>
                                        spoken modality.<br>
                        <br>
                                        [BA] Even a tag with "Sign
                        Language" in the description??<br>
                        <br>
                                        - A language tag in text media
                        is supposed to indicate                 written
                        modality.<br>
                        <br>
                                        [BA] If the tag has "Sign
                        Language" in the description,<br>
                                        can this document really say
                        that?<br>
                        <br>
                                        - A language tag in video media
                        is supposed to indicate<br>
                                        visual sign language modality
                        except for the case when<br>
                                        it is supposed to indicate a
                        view of a speaking person<br>
                                        mentioned in section 5.2
                        characterized by the exact same<br>
                                        language tag also appearing in
                        an audio media specification.<br>
                        <br>
                                        [BA] It seems like an over-reach
                        to say that a spoken<br>
                                        language tag in video media
                        should instead be<br>
                                        interpreted as a request for
                        Sign Language.  If this<br>
                                        were done, would it always be
                        clear which Sign Language<br>
                                        was intended?  And could we
                        really assume that both<br>
                                        sides, if negotiating a spoken
                        language tag in video<br>
                                        media, were really indicating
                        the desire to sign?  It<br>
                                        seems like this could easily
                         result in interoperability<br>
                                        failure.<br>
                        <br>
                        <br>
                                    IMO the right way to indicate that
                        two (or more) media<br>
                                    streams are conveying alternative
                        representations of the<br>
                                    same language content is by grouping
                        them with a new<br>
                                    grouping attribute. That can tie
                        together an audio with a<br>
                                    video and/or text. A language tag
                        for sign language on the<br>
                                    video stream then clarifies to the
                        recipient that it is sign<br>
                                    language. The grouping attribute by
                        itself can indicate that<br>
                                    these streams are conveying
                        language.<br>
                        <br>
                                &lt;GH&gt;Yes, and that is proposed in<br>
                                draft-hellstrom-slim-modality-<wbr>grouping   
                        with two kinds of<br>
                                grouping: One kind of grouping to tell
                        that two or more<br>
                                languages in different streams are
                        alternatives with the same<br>
                                content and a priority order is assigned
                        to them to guide the<br>
                                selection of which one to use during the
                        call. The other kind of<br>
                                grouping telling that two or more
                        languages in different streams<br>
                                are desired together with the same
                        language content but<br>
                                different modalities ( such as the use
                        for captioned telephony<br>
                                with the same content provided in both
                        speech and text, or sign<br>
                                language interpretation where you see
                        the interpreter, or<br>
                                possibly spoken language interpretation
                        with the languages<br>
                                provided in different audio streams ). I
                        hope that that draft<br>
                                can be progressed. I see it as a needed
                        complement to the pure<br>
                                language indications per media.<br>
                        <br>
                        <br>
                            Oh, sorry. I did read that draft but forgot
                        about it.<br>
                        <br>
                                The discussion in this thread is more
                        about how an application<br>
                                would easily know that e.g. "ase" is a
                        sign language and "en" is<br>
                                a spoken (or written) language, and also
                        a discussion about what<br>
                                kinds of languages are allowed and
                        indicated by default in each<br>
                                media type. It was not at all about
                        falsely using language tags<br>
                                in the wrong media type as Bernard
                        understood my wording. It was<br>
                                rather a limitation to what modalities
                        are used in each media<br>
                                type and how to know the modality with
                        cases that are not<br>
                                evident, e.g. "application" and
                        "message" media types.<br>
                        <br>
                        <br>
                            What do you mean by "know"? Is it for the
                        *UA* software to know, or<br>
                            for the human user of the UA to know?
                        Presumably a human user that<br>
                            cares will understand this if presented with
                        the information in some<br>
                            way. But typically this isn't presented to
                        the user.<br>
                        <br>
                            For the software to know must mean that it
                        will behave differently<br>
                            for a tag that represents a sign language
                        than for one that<br>
                            represents a spoken or written language.
                        What is it that it will do<br>
                            differently?<br>
                        <br>
                                     Thanks,<br>
                                     Paul<br>
                        <br>
                        <br>
                                Right now we have returned to a very
                        simple rule: we define only<br>
                                use of spoken language in audio media,
                        written language in text<br>
                                media and sign language in video media.<br>
                                We have discussed other use, such as a
                        view of a speaking person<br>
                                in video, text overlay on video, a sign
                        language notation in<br>
                                text media, written language in message
                        media, written language<br>
                         in WebRTC data channels, signed, written<br>
                        and spoken in bucket media<br>
                                maybe declared as application media. We
                        do not define these<br>
                                cases. They are just not defined, not
                        forbidden. They may be<br>
                                defined in the future.<br>
                        <br>
                                My proposed wording in section 5.4 got
                        too many<br>
                                misunderstandings so I gave up with it.
                        I think we can live with<br>
                                5.4 as it is in version -16.<br>
                        <br>
                                Thanks,<br>
                                Gunnar<br>
                        <br>
                        <br>
                        <br>
                                    (IIRC I suggested something along
                        these lines a long time ago.)<br>
                        <br>
                                         Thanks,<br>
                                         Paul<br>
                        <br>
                                    ______________________________<wbr>_________________<br>
                                    SLIM mailing list<br>
                                    <a href="mailto:SLIM@ietf.org"
                          target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>
                        &lt;mailto:<a href="mailto:SLIM@ietf.org"
                          target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                                    <a
                          href="https://www.ietf.org/mailman/listinfo/slim"
                          rel="noreferrer" target="_blank"
                          moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                                    &lt;<a
                          href="https://www.ietf.org/mailman/listinfo/slim"
                          rel="noreferrer" target="_blank"
                          moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                        <br>
                        <br>
                        <br>
                            ______________________________<wbr>_________________<br>
                            SLIM mailing list<br>
                            <a href="mailto:SLIM@ietf.org"
                          target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>
                        &lt;mailto:<a href="mailto:SLIM@ietf.org"
                          target="_blank" moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                            <a
                          href="https://www.ietf.org/mailman/listinfo/slim"
                          rel="noreferrer" target="_blank"
                          moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                            &lt;<a
                          href="https://www.ietf.org/mailman/listinfo/slim"
                          rel="noreferrer" target="_blank"
                          moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                        <br>
                        <br>
                      </blockquote>
                      <br>
                    </blockquote>
                    <br>
                  </div>
                </div>
                <div class="HOEnZb">
                  <div class="h5"> -- <br>
                    ------------------------------<wbr>-----------<br>
                    Gunnar Hellström<br>
                    Omnitor<br>
                    <a href="mailto:gunnar.hellstrom@omnitor.se"
                      target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><br>
                    <a href="tel:%2B46%20708%20204%20288"
                      value="+46708204288" target="_blank"
                      moz-do-not-send="true">+46 708 204 288</a><br>
                    <br>
                  </div>
                </div>
              </blockquote>
            </div>
            <br>
          </div>
        </blockquote>
        <br>
        <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
      </blockquote>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
SLIM mailing list
<a class="moz-txt-link-abbreviated" href="mailto:SLIM@ietf.org">SLIM@ietf.org</a>
<a class="moz-txt-link-freetext" href="https://www.ietf.org/mailman/listinfo/slim">https://www.ietf.org/mailman/listinfo/slim</a>
</pre>
    </blockquote>
  </body>
</html>

--------------509E6EDBBE7A51BE5A97733A--


From nobody Mon Oct 23 18:51:57 2017
Return-Path: <bernard.aboba@gmail.com>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E2D47139F67 for <slim@ietfa.amsl.com>; Mon, 23 Oct 2017 18:51:55 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.998
X-Spam-Level: 
X-Spam-Status: No, score=-1.998 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, FREEMAIL_FROM=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=gmail.com
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id b18QxmtuVTOg for <slim@ietfa.amsl.com>; Mon, 23 Oct 2017 18:51:50 -0700 (PDT)
Received: from mail-ua0-x236.google.com (mail-ua0-x236.google.com [IPv6:2607:f8b0:400c:c08::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 59F2113942F for <slim@ietf.org>; Mon, 23 Oct 2017 18:51:50 -0700 (PDT)
Received: by mail-ua0-x236.google.com with SMTP id f46so14378901uae.1 for <slim@ietf.org>; Mon, 23 Oct 2017 18:51:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;  h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc; bh=Oe5+bxVFStoqMXzN5xO4L/U8TxNZwyQvymgXWyTrCzM=; b=KBxby2U7nrsE5xkjHNDGLUelL2Y2HFrYzGPxHqqLGNH5wEq0FIR1ppRu1ik+bolr3E S2YSSQ7bRA5hVT6bBvqVn60xz1k3SEhvyGXKondlE963APZ0CEMNMEY4ZoryK8d2J9hZ XTKt9NxRLkI2t71LhDCOx8TehxXCRGA8n0WOs/9utsFJa2ntH1sLr9EKpJrzDcUj1Csp sPq3131Nl0B4UDWw65crHAVvAr8bg+t+KJ1aETt309cnuGD+WfOH1vEBk6o/p3fCRoUr o3J7C8daEYJQTWnrYX3F4JVD6e3ijdDwQ56PidMBcnnFJ7Yn6a6CpNV+Z9Gr4Jv+wAi/ dJVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc; bh=Oe5+bxVFStoqMXzN5xO4L/U8TxNZwyQvymgXWyTrCzM=; b=a77O8OY/vC/Xa4w+Zninv44op2VNmAcXarRP6WQfljKhVpwWRqzGYR5zlTbjnn44hd VhyK3TSfMrUOWyDZ5vUQ172pDu59CxVrPtWdRCA/4vhaCluzLItf/LEAZmDODZFYqJj3 /1fz+exG7sTjMk8KeHxQQhJ3pnp0hMIEqg3clPnbSKVCz7krwbCXVqa4+6ogGoFAfGNn SaCZNVv5c2RmZ6PQmsUj3hMUDpw4jLp9LADEis/+3FkHBupi/zJ9YifCwAQHiaf39DGS chepN0decfeTq0mrE7t6+Hd3kMN6GkhCDtgeqSHMTC2utJjEBCUkinCKXHNXlcgEnaTe ZCcA==
X-Gm-Message-State: AMCzsaVjwb9JBc7c1ev2Vpm3h1OTpOXZzxODbWiiDMoUB102q2iwBxtW eE1UaipebB9LxALZpmvbMVGYNQ3/Lcawt9Ju5qHJ7Q==
X-Google-Smtp-Source: ABhQp+Ro9h4kn1+KyYKMRO0q+Lurhw4hPV/XXsggSk1Tm7TnsAzWH1utAQGx4Hylqd/67yFGSKvrJdapLL3GLp5vl9U=
X-Received: by 10.176.73.72 with SMTP id a8mr12551229uad.65.1508809908916; Mon, 23 Oct 2017 18:51:48 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.159.38.130 with HTTP; Mon, 23 Oct 2017 18:51:28 -0700 (PDT)
In-Reply-To: <65f7d728-10b0-b8f9-3d82-8de13c5e7c67@omnitor.se>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se> <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com> <49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se> <7d20fee8-fcb0-1f50-049b-82f0c2491f50@omnitor.se> <65f7d728-10b0-b8f9-3d82-8de13c5e7c67@omnitor.se>
From: Bernard Aboba <bernard.aboba@gmail.com>
Date: Mon, 23 Oct 2017 18:51:28 -0700
Message-ID: <CAOW+2dv-Pob1DPVXDe81hyeM8k7hEpT-9BaRte706_J+Snv60g@mail.gmail.com>
To: =?UTF-8?Q?Gunnar_Hellstr=C3=B6m?= <gunnar.hellstrom@omnitor.se>
Cc: slim@ietf.org, Paul Kyzivat <pkyzivat@alum.mit.edu>
Content-Type: multipart/alternative; boundary="001a11453754a798f9055c412e38"
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/qCUpYyyBEHjd4nO1L0PioEZ9VHs>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 24 Oct 2017 01:51:56 -0000

--001a11453754a798f9055c412e38
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Thanks for suggesting a way forward, Gunnar. I too would like to get Issue
43 resolved so we can move forward in the process.
Please send your thoughts to the mailing list (preferably before the
October 30 submission deadline so we can spin a new draft version).

In thinking about the issue, a question Paul asked has stuck in my mind:
What difference does it/should it make?

Let us presume that the user agents are configured to signal the language
preferences.

In a pure peer-to-peer case (me calling you, no intermediaries), I
configure my UA to indicate a preference for Mandarin on the text modality,
French sign language on the audio modality and Swahili on the video
modality.

You (knowing my background and weird sense of humour) after being notified
of my preference, realize I am kidding and agree to accept the call,
knowing that whatever language preference you indicate, we will most likely
communicate in English since I do not speak Mandarin or Swahili and am
barely conversant in French, let alone French sign language.

Would this scenario have worked out better with rules that mandated that my
odd choices for audio and video languages be labelled "undefined"? I think
not.

In a scenario where the call is between me and a call center (not a PSAP)
my flippant UA configuration might result in the call being rejected due to
a lack of Mandarin, Swahili or French sign language resources within the
call center.  But it's not clear that labelling my odd choices as
"undefined" should play a role in that decision.  For example, if the call
center did have someone who spoke Swahili, and connected me to them
(perhaps under the theory that my declared preference might indicate an
ability to lip-read Swahili), this might have improved the chance of
communication had my UA configuration been based on genuine expertise
rather than a warped sense of humour.

In other words, it is not clear to me how Section 5.4's discussion of scope
improves or clarifies the situation in any way - and there is some
possibility that it could cause problems.
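
For concreteness, the offer in the peer-to-peer scenario above might look like the following SDP fragment. This is only an illustration of the deliberately odd configuration, assuming the hlang-send attribute used in recent versions of the draft; the subtags cmn (Mandarin), fsl (French Sign Language) and sw (Swahili) are taken from the IANA registry:

```
m=text 45020 RTP/AVP 103
a=hlang-send:cmn
m=audio 49170 RTP/AVP 0
a=hlang-send:fsl
m=video 51372 RTP/AVP 31
a=hlang-send:sw
```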

On Mon, Oct 23, 2017 at 2:17 PM, Gunnar Hellstr=C3=B6m <
gunnar.hellstrom@omnitor.se> wrote:

> Issue #43 is the only issue we have left now. I do not want to see the
> discussion stop again until we have a solution on it that seems acceptabl=
e.
>
> Section 5.4 seems to be a good place to handle Issue #43.
>
> Currently, section 5.4 is more aimed at limiting what kind of coding for
> languages and modalities are acceptable.
>
> Some viewpoints said that such limitations are not needed and that 5.4 ca=
n
> be deleted.
>
> I think we can do something in between these extremes. We can introduce
> explanations for what is required from an acceptable coding of a
> combination of media, languages, directions, and other parameters and
> explain basic ways to assess what is the resulting modality, and also
> explain that the more common media and language combinations that are use=
d,
> the higher chance there is for a match. Thus unusual combinations are
> discouraged but not forbidden as long as the modality can be assessed fro=
m
> them. They can be used in specific applications.
>
> I might continue tomorrow with a wording proposal for the reasoning above,
> hoping that we can close issue #43 and the discussions around 5.4 soon.
>
> /Gunnar
>
> Den 2017-10-17 kl. 11:02, skrev Gunnar Hellstr=C3=B6m:
>
> An even more general way to express what section 5.4 tries to say is:
>
> ------------------------------------------------------------
> ------------------------------------------------------------
> ------------------
>
> 5.4 Combinations of Language tags and Media descriptions
>
> The combination of Language tags and other information in the media
> descriptions should be made so that the intended modality can be conclude=
d
> by the negotiating parties.
>
>
> ------------------------------------------------------------
> ----------------------------
>
> That way we do not need to investigate what is possible today, and what
> further attributes or coding rules may be added in the future.
>
> There is a risk that implementers start using some insufficient coding,
> which can cause interop issues. On the other hand, we do not need to
> limit valid use that we just have not thought about by saying that specific
> combinations are out of scope or not defined. It is up to implementers to
> check that the combinations they use result in unambiguous modality.
>
> And it opens the way for possible new attributes, e.g.  a=3Dmodality:spoken
> or a=3Dmodality:written  etc, to complement the undefined case when a
> non-signed language tag without script subtag is used in video media, and
> also for explaining any use of m=3Dapplication or m=3Dmessage media in
> interactive communication.
>
> It does not really answer Issue #43 by explaining HOW to assess the
> modality easily, but it requires the implementers to make sure that it is
> possible.
>
> And deducing the intended modality is the key to successful negotiation
> and communication.
>
> Do you think this would be clear enough, or do we need to go into what
> clear cases we have?
>
> Gunnar
>
>
>
> Den 2017-10-17 kl. 00:21, skrev Gunnar Hellstr=C3=B6m:
>
> Den 2017-10-16 kl. 01:21, skrev Bernard Aboba:
>
> Paul said:
>
> "- can the UA use this information to change how to render the media?"
>
> [BA]  If the video is used for signing, an application might infer an
> encoder preference for frame rate over resolution (e.g. in WebRTC,
> RTCRtpParameters.degradationPreference =3D "maintain-framerate" )
>
> <GH>Right, that is a valid example of how real "knowledge" of the modalit=
y
> can be used by the application.
>
>
> And, as a response on issue #43,
>
> A simple way is to say
>
> Video media descriptions shall only contain sign language tags
> Audio media descriptions shall only contain language tags for spoken
> language
> Text media descriptions shall only contain language tags for written
> language
> Use of other media descriptions such as message and application with
> language indications require other specifications on how to assess the
> modality for non-signed languages.
>
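
The four rules above amount to a small lookup from media type to presumed modality. A minimal sketch (not part of the draft; the SIGNED set below is an illustrative stand-in for a real check against the IANA registry):

```python
# Sketch of the "simple deductions" above: map the SDP media type
# carrying a language tag to the modality it is presumed to indicate.
# The tiny SIGNED set stands in for a real IANA-registry lookup.

SIGNED = {"ase", "bfi", "fsl", "ssp", "sgn"}  # illustrative subset only

def presumed_modality(media: str, tag: str) -> str:
    primary = tag.split("-")[0].lower()  # primary language subtag
    if media == "audio":
        return "spoken"
    if media == "text":
        return "written"
    if media == "video":
        return "signed" if primary in SIGNED else "undefined"
    return "undefined"  # message, application, ... need other specifications

print(presumed_modality("video", "ase"))    # signed
print(presumed_modality("audio", "en"))     # spoken
print(presumed_modality("video", "en-US"))  # undefined
```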
> The current 5.4 does not mention our main problem with the language tags,
> that there is no difference in them whether we mean use for spoken language or
> written language. We should have made better efforts to solve that proble=
m
> long ago, but we have not.
>
> 5.4 can be modified to specify the simple limited case and the problems
> that block us from specifying other cases:
>
> 5.4 <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-langua=
ge-17#section-5.4>.  Media and modality Combination problems
>
>
>    The problem of indicating a language tag for the view of a speaking pe=
rson in a video stream is out of scope for this document.
>
>    The problem of indicating a language tag for use of written language c=
oded as a component in a video stream is out of scope for this document.
>
>    The use of language tags for negotiation of languages in other media t=
han audio, video and text is not defined in this document.
>
>    Which language tags are signed and which are not can be deduced
>    from the IANA language tag registry. How this is done is out of scope =
of this document.
>
>
> --------------------------------------------------------
>
>
> But if we want to allow more cases, we need to consider the following
> complications:
>
>
> 1. to assess if a language represents a Sign Language, the application ca=
n
> look for the word "sign" in the description in the  IANA language registr=
y
> or a copy thereof as Randall already indicated.
>
> 2. For written languages used as a text component in a video stream, it i=
s
> possible to code this for languages requiring a script subtag, but not fo=
r
> languages with suppressed script subtags
>
> 3. We have also discussed proposals for how to code written language in
> video stream for languages not requiring a script subtag, but not got
> acceptance for our proposals. So we need to say that that is currently
> undefined.
>
> 4. We also discussed how to code a view of a speaking person in video and
> said that that could be done by using the "definitively not written" scri=
pt
> subtag on a non-signed language tag in video. But that was not appreciate=
d
> by the language experts. Another option was to not allow written language
> overlayed on video, and that is the lately used option. ( up to version -=
16
> or so)
>
> 5. For talking and hearing audio media, we only have that case for
> language-tags in Audio. So that is easy to code and assess.
>
> 6. For written language in text media, a check can be made about if "sign=
"
> is part of the language tag description, and if not, it is a written
> language.
>
> 7. For signed language in text media, a check can be made about if "sign"
> is part of the language tag description, and if it is, it is a signed
> language in text notation. (extremely unusual)
>
> 8. For use with language tags in other media than audio, video and text,
> there is a need for a description on how to assess the modality, especial=
ly
> for non-signed languages before it is used.
>
>
> We can construct a section 5.4 to describe this situation, but I doubt
> that it is worth the effort.
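
Point 1 in the list above (Randall's suggestion) can be sketched as a scan of a local copy of the IANA Language Subtag Registry, whose records are separated by "%%" lines. The parsing below is deliberately simplified (continuation lines, which begin with whitespace, are skipped), so treat it as an illustration rather than a robust parser:

```python
# Scan a copy of the IANA Language Subtag Registry and collect the
# language subtags whose Description mentions "Sign Language".
# Records are separated by "%%"; each record is "Field: value" lines.

def signed_subtags(registry_text: str) -> set[str]:
    signed = set()
    for record in registry_text.split("%%"):
        fields = {}
        for line in record.splitlines():
            if ":" in line and not line.startswith(" "):
                key, _, value = line.partition(":")
                fields.setdefault(key.strip(), []).append(value.strip())
        if fields.get("Type") == ["language"] and any(
            "sign language" in d.lower() for d in fields.get("Description", [])
        ):
            signed.update(fields.get("Subtag", []))
    return signed

sample = """Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
"""
print(signed_subtags(sample))  # {'ase'}
```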
>
>
>
> See:  https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#
> dom-rtcrtpparameters-degradationpreference
>
> On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellstr=C3=B6m <
> gunnar.hellstrom@omnitor.se> wrote:
>
>> Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:
>>
>>> On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>>
>>>> Paul said:
>>>>
>>>> "For the software to know must mean that it will behave differently fo=
r
>>>> a tag that represents a sign language than for one that represents a s=
poken
>>>> or written language. What is it that it will do differently?"
>>>>
>>>> [BA] In terms of behavior based on the signed/non-signed distinction,
>>>> in -17 the only reference appears to be in Section 5.4, stating that
>>>> certain combinations are not defined in the document (but that definit=
ion
>>>> of those combinations was out of scope):
>>>>
>>>
>>> I'm asking whether this is a distinction without a difference. I'm not
>>> asking whether this makes a difference in the *protocol*, but whether i=
n
>>> the end it benefits the participants in the call in any way.
>>>
>> <GH>Good point, I was on my way to make a similar comment earlier today.
>> The difference it makes for applications to "know" what modality a langu=
age
>> tag represents in its used position seems to be only for imagined functi=
ons
>> that are out of scope for the protocol specification.
>>
>>> For instance:
>>>
>>> - does it help the UA to decide how to alert the callee, so that the
>>>   callee can better decide whether to accept the call or instruct the
>>>   UA about how to handle the call?
>>>
>> <GH>Yes, for a regular human user -to-user call, the result of the
>> negotiation must be presented to the participants, so that they can star=
t
>> the call with a language and modality that is agreed.
>> That presentation could be exactly the description from the language tag
>> registry, and then no "knowledge" is needed from the application. But it=
 is
>> more likely that the application has its own string for presentation of =
the
>> negotiated language and modality. So that will be presented. But it is
>> still found by a table lookup between language tag and string for a
>> language name, so no real knowledge is needed.
>> We have said many times that the way the application tells the user the
>> result of the negotiation is out of scope for the draft, but it is good =
to
>> discuss and know that it can be done.
>> A similar mechanism is also needed for configuration of the user's
>> language preference profile further discussed below.
>>
>>>
>>> - does it allow the UA to make a decision whether to accept the media?
>>>
>> <GH>No, the media should be accepted regardless of the result of the
>> language negotiation.
>>
>>>
>>> - can the UA use this information to change how to render the media?
>>>
>> <GH>Yes, for the specialized text notation of sign language we have
>> discussed but currently placed out of scope, a very special rendering
>> application is needed. The modality would be recognized by a script subt=
ag
>> to a sign language tag used in text media. However, I think that would b=
e
>> best to also use it with a specific text subtype, so that the rendering =
can
>> be controlled by invocation of a "codec" for that rendering.
>>
>>>
>>> And if there is something like this, will the UA be able to do this
>>> generically based on whether the media is sign language or not, or will=
 the
>>> UA need to already understand *specific* sign language tags?
>>>
>> <GH>Applications will need to have localized versions of the names for
>> the different sign languages and also for spoken languages and written
>> languages, to be used in setting of preferences and announcing the resul=
ts
>> of the negotiation. It might be overkill to have such localized names fo=
r
>> all languages in the IANA language registry, so it will need to be able =
to
>> handle localized names of a subset of the registry. With good design
>> however, this is just an automatic translation between a language tag an=
d a
>> corresponding name, so it does in fact not require any "knowledge" of wh=
at
>> modality is used with each language tag.
>> The application can ask for the configuration:
>> "Which languages do you want to offer to send in video"
>> "Which languages do you want to offer to send in text"
>> "Which languages do you want to offer to send in audio"
>> "Which languages do you want to be prepared to receive in video"
>> "Which languages do you want to be prepared to receive in text"
>> "Which languages do you want to be prepared to receive in audio"
>>
>> And for each question provide a list of language names to select from.
>> When the selection is made, the corresponding language tag is placed in =
the
>> profile for negotiation.
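
The configuration flow described here is purely table driven: each question is keyed by media and direction, and the chosen language names map to tags with a lookup, so the application itself needs no modality knowledge. A minimal sketch with illustrative names and tags:

```python
# Sketch of the table-driven configuration described above: answers to
# the per-media, per-direction questions are translated to language
# tags via a simple name-to-tag table and stored in the negotiation
# profile.  The table contents here are illustrative only.

NAME_TO_TAG = {
    "English": "en",
    "Swedish": "sv",
    "American Sign Language": "ase",
}

def build_profile(answers: dict[tuple[str, str], list[str]]) -> dict:
    # answers: (media, direction) -> list of selected language names
    return {
        key: [NAME_TO_TAG[name] for name in names]
        for key, names in answers.items()
    }

profile = build_profile({
    ("video", "send"): ["American Sign Language"],
    ("text", "recv"): ["English", "Swedish"],
})
print(profile[("video", "send")])  # ['ase']
```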
>>
>> If the application provides the whole IANA language registry to the user
>> for each question, then there is a possibility that the user by mistake
>> selects a language that requires another modality than the question was
>> about. If the application shall limit the lists provided for each questi=
on,
>> then it will need a kind of knowledge about which language tags suit eac=
h
>> modality (and media)
>>
>>
>>
>>> E.g., A UA serving a deaf person might automatically introduce a sign
>>> language interpreter into an incoming audio-only call. If the incoming =
call
>>> has both audio and video then the video *might* be for conveying sign
>>> language, or not. If not then the UA will still want to bring in a sign
>>> language interpreter. But is knowing the call generically contains sign
>>> language sufficient to decide against bringing in an interpreter? Or mu=
st
>>> that depend on it being a sign language that the user can use? If the U=
A is
>>> configured for all the specific sign languages that the user can deal w=
ith
>>> then there is no need to recognize other sign languages generically.
>>>
>> <GH>We are talking about specific language tags here and knowing what
>> modality they are used for. The user needs to specify which sign languag=
es
>> they prefer to use. The callee application can be made to look for gaps
>> between what the caller offers and what the callee can accept, and from
>> that deduce which type and languages for a conversion that is needed, an=
d
>> invoke that as a relay service. That invocation can be made completely
>> table driven and have corresponding translation profiles for available
>> relay services. But it is more likely that it is done by having some
>> knowledge about which languages are sign languages and which are spoken
>> languages and sending the call to the relay service to try to sort out i=
f
>> they can handle the translation.
>>
>>>
>>>
>>> So, the answer is - no, the application does not really have any
>> knowledge about which modality a language tag represents in its used
>> position. If the user selects to indicate very rare language tag
>> indications for a media, then a match will just become very unlikely.
>>
>> Where does this discussion take us? Should we modify section 5.4 again?
>>
>> Thanks
>> Gunnar
>>
>>     Thanks,
>>>     Paul
>>>
>>>       5.4
>>>> <https://tools.ietf.org/html/draft-ietf-slim-negotiating-hum
>>>> an-language-17#section-5.4>.
>>>>       Undefined Combinations
>>>>
>>>>
>>>>
>>>>     The behavior when specifying a non-signed language tag for a video
>>>>     media stream, or a signed language tag for an audio or text media
>>>>     stream, is not defined in this document.
>>>>
>>>>     The problem of knowing which language tags are signed and which ar=
e
>>>>     not is out of scope of this document.
>>>>
>>>>
>>>>
>>>> On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat <pkyzivat@alum.mit.edu
>>>> <mailto:pkyzivat@alum.mit.edu>> wrote:
>>>>
>>>>     On 10/15/17 2:24 AM, Gunnar Hellstr=C3=B6m wrote:
>>>>
>>>>         Paul,
>>>>         Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>>>
>>>>             On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>>
>>>>                 Gunnar said:
>>>>
>>>>                 "Applications not implementing such specific notations
>>>>                 may use the following simple deductions.
>>>>
>>>>                 - A language tag in audio media is supposed to indicat=
e
>>>>                 spoken modality.
>>>>
>>>>                 [BA] Even a tag with "Sign Language" in the
>>>> description??
>>>>
>>>>                 - A language tag in text media is supposed to indicate
>>>>                 written modality.
>>>>
>>>>                 [BA] If the tag has "Sign Language" in the description=
,
>>>>                 can this document really say that?
>>>>
>>>>                 - A language tag in video media is supposed to indicat=
e
>>>>                 visual sign language modality except for the case when
>>>>                 it is supposed to indicate a view of a speaking person
>>>>                 mentioned in section 5.2 characterized by the exact sa=
me
>>>>                 language tag also appearing in an audio media
>>>> specification.
>>>>
>>>>                 [BA] It seems like an over-reach to say that a spoken
>>>>                 language tag in video media should instead be
>>>>                 interpreted as a request for Sign Language.  If this
>>>>                 were done, would it always be clear which Sign Languag=
e
>>>>                 was intended?  And could we really assume that both
>>>>                 sides, if negotiating a spoken language tag in video
>>>>                 media, were really indicating the desire to sign?  It
>>>>                 seems like this could easily result interoperability
>>>>                 failure.
>>>>
>>>>
>>>>             IMO the right way to indicate that two (or more) media
>>>>             streams are conveying alternative representations of the
>>>>             same language content is by grouping them with a new
>>>>             grouping attribute. That can tie together an audio with a
>>>>             video and/or text. A language tag for sign language on the
>>>>             video stream then clarifies to the recipient that it is si=
gn
>>>>             language. The grouping attribute by itself can indicate th=
at
>>>>             these streams are conveying language.
>>>>
>>>>         <GH>Yes, and that is proposed in
>>>>         draft-hellstrom-slim-modality-grouping    with two kinds of
>>>>         grouping: One kind of grouping to tell that two or more
>>>>         languages in different streams are alternatives with the same
>>>>         content and a priority order is assigned to them to guide the
>>>>         selection of which one to use during the call. The other kind =
of
>>>>         grouping telling that two or more languages in different strea=
ms
>>>>         are desired together with the same language content but
>>>>         different modalities ( such as the use for captioned telephony
>>>>         with the same content provided in both speech and text, or sig=
n
>>>>         language interpretation where you see the interpreter, or
>>>>         possibly spoken language interpretation with the languages
>>>>         provided in different audio streams ). I hope that that draft
>>>>         can be progressed. I see it as a needed complement to the pure
>>>>         language indications per media.
>>>>
>>>>
>>>>     Oh, sorry. I did read that draft but forgot about it.
>>>>
>>>>         The discussion in this thread is more about how an application
>>>>         would easily know that e.g. "ase" is a sign language and "en" =
is
>>>>         a spoken (or written) language, and also a discussion about wh=
at
>>>>         kinds of languages are allowed and indicated by default in eac=
h
>>>>         media type. It was not at all about falsely using language tag=
s
>>>>         in the wrong media type as Bernard understood my wording. It w=
as
>>>>         rather a limitation to what modalities are used in each media
>>>>         type and how to know the modality with cases that are not
>>>>         evident, e.g. "application" and "message" media types.
>>>>
>>>>
>>>>     What do you mean by "know"? Is it for the *UA* software to know, o=
r
>>>>     for the human user of the UA to know? Presumably a human user that
>>>>     cares will understand this if presented with the information in so=
me
>>>>     way. But typically this isn't presented to the user.
>>>>
>>>>     For the software to know must mean that it will behave differently
>>>>     for a tag that represents a sign language than for one that
>>>>     represents a spoken or written language. What is it that it will d=
o
>>>>     differently?
>>>>
>>>>              Thanks,
>>>>              Paul
>>>>
>>>>
>>>>         Right now we have returned to a very simple rule: we define on=
ly
>>>>         use of spoken language in audio media, written language in tex=
t
>>>>         media and sign language in video media.
>>>>         We have discussed other use, such as a view of a speaking pers=
on
>>>>         in video, text overlay on video, a sign language notation in
>>>>         text media, written language in message media, written languag=
e
>>>>         in WebRTC data channels, sign written and spoken in bucket med=
ia
>>>>         maybe declared as application media. We do not define these
>>>>         cases. They are just not defined, not forbidden. They may be
>>>>         defined in the future.
>>>>
>>>>         My proposed wording in section 5.4 got too many
>>>>         misunderstandings so I gave up with it. I think we can live wi=
th
>>>>         5.4 as it is in version -16.
>>>>
>>>>         Thanks,
>>>>         Gunnar
>>>>
>>>>
>>>>
>>>>             (IIRC I suggested something along these lines a long time
>>>> ago.)
>>>>
>>>>                  Thanks,
>>>>                  Paul
>>>>
>>>>             _______________________________________________
>>>>             SLIM mailing list
>>>>             SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>             https://www.ietf.org/mailman/listinfo/slim
>>>>             <https://www.ietf.org/mailman/listinfo/slim>
>>>>
>>>>
>>>>
>>>>     _______________________________________________
>>>>     SLIM mailing list
>>>>     SLIM@ietf.org <mailto:SLIM@ietf.org>
>>>>     https://www.ietf.org/mailman/listinfo/slim
>>>>     <https://www.ietf.org/mailman/listinfo/slim>
>>>>
>>>>
>>>>
>>>
>> --
>> -----------------------------------------
>> Gunnar Hellstr=C3=B6m
>> Omnitor
>> gunnar.hellstrom@omnitor.se
>> +46 708 204 288
>>
>>
>
> --
> -----------------------------------------
> Gunnar Hellstr=C3=B6m
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>
> --
> -----------------------------------------
> Gunnar Hellstr=C3=B6m
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>
>
> _______________________________________________
> SLIM mailing list
> SLIM@ietf.org
> https://www.ietf.org/mailman/listinfo/slim
>
>
> --
> -----------------------------------------
> Gunnar Hellstr=C3=B6m
> Omnitor
> gunnar.hellstrom@omnitor.se
> +46 708 204 288
>
>

--001a11453754a798f9055c412e38
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Thanks for suggesting a way forward, Gunnar. I too would l=
ike to get Issue 43 resolved so we can move forward in the process.=C2=A0<d=
iv>Please send your thoughts to the mailing list (preferably before the Oc=
tober 30 submission deadline so we can spin a new draft version).=C2=A0<br>=
<div><br></div><div>In thinking about the issue, a question Paul asked has =
stuck in my mind:=C2=A0 What difference does it/should it make?=C2=A0</div>=
<div><br></div><div>Let us presume that the user agents are configured to s=
ignal the language preferences.=C2=A0</div><div><br></div><div>In a pure pe=
er-to-peer case (me calling you, no intermediaries), I configure my UA to i=
ndicate a preference for Mandarin on the text modality, French sign languag=
e on the audio modality and Swahili on the video modality.=C2=A0<br></div><=
div><br></div><div>You (knowing my background and weird sense of humour) af=
ter being notified of my preference, realize I am kidding and agree to acce=
pt the call, knowing that whatever language preference you indicate, we wil=
l most likely communicate in English since I do not speak Mandarin or Swahi=
li and am barely conversant in French, let alone French sign language.=C2=
=A0</div><div><br></div><div>Would this scenario have worked out better wit=
h rules that mandated that my odd choices for audio and video languages be =
labelled &quot;undefined&quot;? I think not.</div><div><br></div><div>In a =
scenario where the call is between me and a call center (not a PSAP) my fli=
ppant UA configuration might result in the call being rejected due to a lac=
k of Mandarin, Swahili or French sign language resources within the call ce=
nter.=C2=A0 But it&#39;s not clear that labelling my odd choices as &quot;u=
ndefined&quot; should play a role in that decision.=C2=A0 For example, if t=
he call center did have someone who spoke Swahili, and connected me to them=
 (perhaps under the theory that my declared preference might indicate an ab=
ility to lip-read Swahili), this might have improved the chance of communic=
ation had my UA configuration been based on genuine expertise rather than a=
 warped sense of humour.=C2=A0</div><div><br></div><div>In other words, it i=
s not clear to me how Section 5.4&#39;s discussion of scope improves or cla=
rifies the situation in any way - and there is some possibility that it cou=
ld cause problems.</div></div></div><div class=3D"gmail_extra"><br><div cla=
ss=3D"gmail_quote">On Mon, Oct 23, 2017 at 2:17 PM, Gunnar Hellstr=C3=B6m <=
span dir=3D"ltr">&lt;<a href=3D"mailto:gunnar.hellstrom@omnitor.se" target=
=3D"_blank">gunnar.hellstrom@omnitor.se</a>&gt;</span> wrote:<br><blockquot=
e class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc sol=
id;padding-left:1ex">
 =20
   =20
 =20
  <div text=3D"#000000" bgcolor=3D"#FFFFFF">
    <p>Issue #43 is the only issue we have left now. I do not want to
      see the discussion stop again until we have a solution on it that
      seems acceptable. <br>
    </p>
    <p>Section 5.4 seems to be a good place to handle Issue #43. <br>
    </p>
    <p>Currently, section 5.4 is more aimed at limiting what kind of
      coding for languages and modalities are acceptable. <br>
    </p>
    <p>Some viewpoints said that such limitations are not needed and
      that 5.4 can be deleted. <br>
    </p>
    <p>I think we can do something in between these extremes. We can
      introduce explanations for what is required from an acceptable
      coding of a combination of media, languages, directions, and other
      parameters, explain basic ways to assess the resulting
      modality, and also explain that the more common the media and language
      combination used, the higher the chance of a
      match. Thus unusual combinations are discouraged but not forbidden
      as long as the modality can be assessed from them. They can be
      used in specific applications. <br>
    </p>
    <p>I might continue tomorrow with wording proposal for the reasoning
      above, hoping that we can close issue #43 and the discussions
      around 5.4 soon.</p><span class=3D"HOEnZb"><font color=3D"#888888">
    <p>/Gunnar<br>
    </p></font></span><div><div class=3D"h5">
    <br>
    <div class=3D"m_-374027455181744526moz-cite-prefix">On 2017-10-17 at
      11:02, Gunnar Hellstr=C3=B6m wrote:<br>
    </div>
    <blockquote type=3D"cite">
     =20
      <p>An even more general way to express what section 5.4 tries to
        say is:</p>
      <p>------------------------------<wbr>------------------------------<=
wbr>------------------------------<wbr>------------------------------<wbr>-=
-----------------<br>
      </p>
      <p>5.4 Combinations of Language tags and Media descriptions</p>
      <p>The combination of Language tags and other information in the
        media descriptions should be made so that the intended modality
        can be concluded by the negotiating parties.</p>
      <p><br>
      </p>
      <p>
------------------------------<wbr>------------------------------<wbr>-----=
-----------------------</p>
      <p>That spares us from investigating what is possible today
        and what further attributes or coding rules may be added in the
        future. <br>
      </p>
      <p>There is a risk that implementers start using some insufficient
        coding, which can cause interop issues. On the other hand, we do not
        need to limit valid uses we simply have not thought of by
        saying that specific combinations are out of scope or not
        defined. It is up to implementers to check that the combinations
        they use result in an unambiguous modality.<br>
      </p>
      <p>And it opens the door to possible new attributes, e.g.=C2=A0
        a=3Dmodality:spoken=C2=A0 or a=3Dmodality:written=C2=A0 etc, to com=
plement the
        undefined case when a non-signed language tag without script
        subtag is used in video media, and also for explaining any use
        of m=3Dapplication or m=3Dmessage media in interactive
        communication. <br>
      </p>
      <p>It does not really answer Issue #43 by explaining HOW to assess
        the modality easily, but it requires the implementers to make
        sure that it is possible.</p>
      <p>And deducing the intended modality is the key to successful
        negotiation and communication.<br>
      </p>
      <p>Do you think this would be clear enough, or do we need to go
        into what clear cases we have?<br>
      </p>
      <p>Gunnar<br>
      </p>
      <p><br>
      </p>
      <br>
      <div class=3D"m_-374027455181744526moz-cite-prefix">On 2017-10-17 at
        00:21, Gunnar Hellstr=C3=B6m wrote:<br>
      </div>
      <blockquote type=3D"cite">
       =20
        On 2017-10-16 at 01:21, Bernard Aboba wrote:<br>
        <blockquote type=3D"cite">
          <div dir=3D"ltr">Paul said:=C2=A0
            <div><br>
            </div>
            <div>&quot;&quot;<span style=3D"color:rgb(80,0,80);font-size:12=
.8px">-
                can the UA use this information to change how to render
                the media?&quot;</span></div>
            <div><span style=3D"color:rgb(80,0,80);font-size:12.8px"><br>
              </span></div>
            <div><span style=3D"color:rgb(80,0,80);font-size:12.8px">[BA]=
=C2=A0
                If the video is used for signing, an application might
                infer an encoder preference for frame rate over
                resolution (e.g. in WebRTC,
                RTCRtpParameters.<wbr>degradationPreference =3D
                &quot;maintain-framerate&quot; )</span></div>
          </div>
        </blockquote>
        &lt;GH&gt;Right, that is a valid example of how real &quot;knowledg=
e&quot;
        of the modality can be used by the application. <br>
        <br>
        <br>
        And, as a response on issue #43,<br>
        <br>
        A simple way is to say<br>
        <br>
        Video media descriptions shall only contain sign language tags<br>
        Audio media descriptions shall only contain language tags for
        spoken language<br>
        Text media descriptions shall only contain language tags for
        written language<br>
        Use of other media descriptions such as message and application
        with language indications require other specifications on how to
        assess the modality for non-signed languages.<br>
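The simple per-media rules above could be sketched as a table lookup; this is a hypothetical helper illustrating the proposal, not text from the draft:

```python
# Sketch of the simple rules above: deduce the intended modality from
# the SDP media type alone. Media other than audio, video, and text
# need a separate specification, so they come back "undefined".

def deduce_modality(media: str) -> str:
    """Map an SDP media type to the modality implied by the rules."""
    rules = {
        "video": "signed",   # video descriptions carry sign language tags
        "audio": "spoken",   # audio descriptions carry spoken language tags
        "text": "written",   # text descriptions carry written language tags
    }
    return rules.get(media, "undefined")  # e.g. message, application
```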
        <br>
        The current 5.4 does not mention our main problem with the
        language tags, that there is no difference on them if we mean
        use for spoken language or written language. We should have made
        better efforts to solve that problem long ago, but we have not.<br>
        <br>
        5.4 can be modified to specify the simple limited case and the
        problems that block us from specifying other cases:<br>
        <br>
        <pre class=3D"m_-374027455181744526newpage" style=3D"font-size:13.3=
333px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-style:normal;f=
ont-variant-ligatures:normal;font-variant-caps:normal;font-weight:normal;le=
tter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;wo=
rd-spacing:0px;text-decoration-style:initial;text-decoration-color:initial"=
><span class=3D"m_-374027455181744526h3" style=3D"line-height:0pt;display:i=
nline;white-space:pre-wrap;font-family:monospace;font-size:1em;font-weight:=
bold"><h3 style=3D"line-height:0pt;display:inline;white-space:pre-wrap;font=
-family:monospace;font-size:1em;font-weight:bold"><a class=3D"m_-3740274551=
81744526selflink" name=3D"m_-374027455181744526_section-5.4" href=3D"https:=
//tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section=
-5.4" style=3D"color:black;text-decoration:none" target=3D"_blank">5.4</a>.=
  Media and modality Combination problems</h3></span>


   The problem of indicating a language tag for the view of a speaking pers=
on in a video stream is out of scope for this document.

   The problem of indicating a language tag for use of written language cod=
ed as a component in a video stream is out of scope for this document.

=C2=A0  The use of language tags for negotiation of languages in other medi=
a than audio, video and text is not defined in this document.

   Which language tags are signed and which are not can be deduced
   from the IANA language tag registry. How this is done is out of scope of
   this document.
</pre>
        <br>
        ------------------------------<wbr>--------------------------<br>
        <br>
        <br>
        But if we want to allow more cases, we need to consider the
        following complications:<br>
        =C2=A0 <br>
        <br>
        1. to assess if a language represents a Sign Language, the
        application can look for the word &quot;sign&quot; in the descripti=
on in
        the=C2=A0 IANA language registry or a copy thereof as Randall alrea=
dy
        indicated. <br>
        <br>
        2. For written languages used as a text component in a video
        stream, it is possible to code this for languages requiring a
        script subtag, but not for languages with suppressed script
        subtags <br>
        <br>
        3. We have also discussed proposals for how to code written
        language in video stream for languages not requiring a script
        subtag, but not got acceptance for our proposals. So we need to
        say that that is currently undefined.<br>
        <br>
        4. We also discussed how to code a view of a speaking person in
        video and said that that could be done by using the
        &quot;definitively not written&quot; script subtag on a non-signed
        language tag in video. But that was not appreciated by the
        language experts. Another option was to not allow written
        language overlaid on video, and that is the option used lately
        (up to version -16 or so). <br>
        <br>
        5. For talking and hearing audio media, we only have that case
        for language-tags in Audio. So that is easy to code and assess.<br>
        <br>
        6. For written language in text media, a check can be made about
        if &quot;sign&quot; is part of the language tag description, and if=
 not,
        it is a written language. <br>
        <br>
        7. For signed language in text media, a check can be made about
        if &quot;sign&quot; is part of the language tag description, and if=
 it is,
        it is a signed language in text notation. (extremely unusual)<br>
        <br>
        8. For use with language tags in other media than audio, video
        and text, there is a need for a description on how to assess the
        modality, especially for non-signed languages before it is used.<br=
>
        <br>
        <br>
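Point 1 above could be sketched as a lookup against (a copy of) the IANA Language Subtag Registry; the registry text below is an illustrative excerpt, and the simple parser ignores continuation lines that real registry records may contain:

```python
# Sketch of point 1: decide whether a tag denotes a sign language by
# looking for "Sign" in the Description field of its primary subtag's
# record. Registry records are separated by "%%" lines.

REGISTRY_EXCERPT = """\
Type: language
Subtag: ase
Description: American Sign Language
%%
Type: language
Subtag: en
Description: English
"""

def parse_registry(text):
    """Return {subtag: description} from "Key: Value" registry records."""
    records = {}
    for record in text.split("%%"):
        fields = dict(
            line.split(": ", 1)
            for line in record.strip().splitlines()
            if ": " in line
        )
        if "Subtag" in fields:
            records[fields["Subtag"]] = fields.get("Description", "")
    return records

def is_sign_language(tag, registry):
    """True when the primary subtag's description names a sign language."""
    primary = tag.split("-")[0]
    return "sign language" in registry.get(primary, "").lower()
```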
        We can construct a section 5.4 to describe this situation, but I
        doubt that it is worth the effort.<br>
        <br>
        <br>
        <blockquote type=3D"cite">
          <div dir=3D"ltr">
            <div><span style=3D"color:rgb(80,0,80);font-size:12.8px"><br>
              </span></div>
            <div><span style=3D"color:rgb(80,0,80);font-size:12.8px">See:=
=C2=A0=C2=A0</span><font color=3D"#500050"><span style=3D"font-size:12.8px"=
><a href=3D"https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpp=
arameters-degradationpreference" target=3D"_blank">https://rawgit.com/w3c/<=
wbr>webrtc-pc/master/webrtc.html#<wbr>dom-rtcrtpparameters-<wbr>degradation=
preference</a></span></font></div>
          </div>
          <div class=3D"gmail_extra"><br>
            <div class=3D"gmail_quote">On Sun, Oct 15, 2017 at 2:22 PM,
              Gunnar Hellstr=C3=B6m <span dir=3D"ltr">&lt;<a href=3D"mailto=
:gunnar.hellstrom@omnitor.se" target=3D"_blank">gunnar.hellstrom@omnitor.se=
</a>&gt;</span>
              wrote:<br>
              <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;=
border-left:1px #ccc solid;padding-left:1ex"><span>On 2017-10-15 at 21:27,
Paul Kyzivat wrote:<br>
                  <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .=
8ex;border-left:1px #ccc solid;padding-left:1ex">
                    On 10/15/17 1:49 PM, Bernard Aboba wrote:<br>
                    <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0=
 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      Paul said:<br>
                      <br>
                      &quot;For the software to know must mean that it will
                      behave differently for a tag that represents a
                      sign language than for one that represents a
                      spoken or written language. What is it that it
                      will do differently?&quot;<br>
                      <br>
                      [BA] In terms of behavior based on the
                      signed/non-signed distinction, in -17 the only
                      reference appears to be in Section 5.4, stating
                      that certain combinations are not defined in the
                      document (but that definition of those
                      combinations was out of scope):<br>
                    </blockquote>
                    <br>
                    I&#39;m asking whether this is a distinction without a
                    difference. I&#39;m not asking whether this makes a
                    difference in the *protocol*, but whether in the end
                    it benefits the participants in the call in any way.
                    <br>
                  </blockquote>
                </span> &lt;GH&gt;Good point, I was on my way to make a
                similar comment earlier today. The difference it makes
                for applications to &quot;know&quot; what modality a langua=
ge tag
                represents in its used position seems to be only for
                imagined functions that are out of scope for the
                protocol specification.<span><br>
                  <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .=
8ex;border-left:1px #ccc solid;padding-left:1ex">
                    For instance:<br>
                    <br>
                    - does it help the UA to decide how to alert the
                    callee, so that the<br>
                    =C2=A0 callee can better decide whether to accept the
                    call or instruct the<br>
                    =C2=A0 UA about how to handle the call?<br>
                  </blockquote>
                </span> &lt;GH&gt;Yes, for a regular human user -to-user
                call, the result of the negotiation must be presented to
                the participants, so that they can start the call with a
                language and modality that is agreed.<br>
                That presentation could be exactly the description from
                the language tag registry, and then no &quot;knowledge&quot=
; is
                needed from the application. But it is more likely that
                the application has its own string for presentation of
                the negotiated language and modality. So that will be
                presented. But it is still found by a table lookup
                between language tag and string for a language name, so
                no real knowledge is needed.<br>
                We have said many times that the way the application
                tells the user the result of the negotiation is out of
                scope for the draft, but it is good to discuss and know
                that it can be done.<br>
                A similar mechanism is also needed for configuration of
                the user&#39;s language preference profile further discusse=
d
                below.<span><br>
                  <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .=
8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    - does it allow the UA to make a decision whether to
                    accept the media?<br>
                  </blockquote>
                </span> &lt;GH&gt;No, the media should be accepted
                regardless of the result of the language negotiation.<span>=
<br>
                  <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .=
8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    - can the UA use this information to change how to
                    render the media?<br>
                  </blockquote>
                </span> &lt;GH&gt;Yes, for the specialized text notation
                of sign language we have discussed but currently placed
                out of scope, a very special rendering application is
                needed. The modality would be recognized by a script
                subtag to a sign language tag used in text media.
                However, I think that would be best to also use it with
                a specific text subtype, so that the rendering can be
                controlled by invocation of a &quot;codec&quot; for that
                rendering.<span><br>
                  <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .=
8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    And if there is something like this, will the UA be
                    able to do this generically based on whether the
                    media is sign language or not, or will the UA need
                    to already understand *specific* sign language tags?<br=
>
                  </blockquote>
                </span> &lt;GH&gt;Applications will need to have
                localized versions of the names for the different sign
                languages and also for spoken languages and written
                languages, to be used in setting of preferences and
                announcing the results of the negotiation. It might be
                overkill to have such localized names for all languages
                in the IANA language registry, so it will need to be
                able to handle localized names of a subset of the
                registry. With good design however, this is just an
                automatic translation between a language tag and a
                corresponding name, so it does in fact not require any
                &quot;knowledge&quot; of what modality is used with each la=
nguage
                tag.<br>
                The application can ask for the configuration:<br>
                &quot;Which languages do you want to offer to send in video=
&quot;<br>
                &quot;Which languages do you want to offer to send in text&=
quot;<br>
                &quot;Which languages do you want to offer to send in audio=
&quot;<br>
                &quot;Which languages do you want to be prepared to receive
                in video&quot;<br>
                &quot;Which languages do you want to be prepared to receive
                in text&quot;<br>
                &quot;Which languages do you want to be prepared to receive
                in audio&quot;<br>
                <br>
                And for each question provide a list of language names
                to select from. When the selection is made, the
                corresponding language tag is placed in the profile for
                negotiation.<br>
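The resulting profile could then be serialized into the SDP attributes the draft negotiates with; a minimal sketch assuming the draft's hlang-send/hlang-recv attribute names (language tags space-separated, most preferred first), with illustrative profile contents and the m= lines omitted:

```python
# Sketch: turn a per-media language preference profile into SDP
# attribute lines for the negotiation. Attribute names follow the
# draft under discussion; profile data is illustrative only.

def sdp_language_attributes(profile):
    """profile: {media: {"send": [tags...], "recv": [tags...]}}
    Returns {media: [attribute lines]} for inclusion under each m= line."""
    attrs = {}
    for media, prefs in profile.items():
        lines = []
        if prefs.get("send"):
            lines.append("a=hlang-send:" + " ".join(prefs["send"]))
        if prefs.get("recv"):
            lines.append("a=hlang-recv:" + " ".join(prefs["recv"]))
        attrs[media] = lines
    return attrs
```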
                <br>
                If the application provides the whole IANA language
                registry to the user for each question, then there is a
                possibility that the user by mistake selects a language
                that requires another modality than the question was
                about. If the application shall limit the lists provided
                for each question, then it will need a kind of knowledge
                about which language tags suit each modality (and media)<sp=
an><br>
                  <br>
                  <br>
                  <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .=
8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
                    E.g., A UA serving a deaf person might automatically
                    introduce a sign language interpreter into an
                    incoming audio-only call. If the incoming call has
                    both audio and video then the video *might* be for
                    conveying sign language, or not. If not then the UA
                    will still want to bring in a sign language
                    interpreter. But is knowing the call generically
                    contains sign language sufficient to decide against
                    bringing in an interpreter? Or must that depend on
                    it being a sign language that the user can use? If
                    the UA is configured for all the specific sign
                    languages that the user can deal with then there is
                    no need to recognize other sign languages
                    generically.<br>
                  </blockquote>
                </span> &lt;GH&gt;We are talking about specific language
                tags here and knowing what modality they are used for.
                The user needs to specify which sign languages they
                prefer to use. The callee application can be made to
                look for gaps between what the caller offers and what
                the callee can accept, and from that deduce which type
                and languages for a conversion that is needed, and
                invoke that as a relay service. That invocation can be
                made completely table driven and have corresponding
                translation profiles for available relay services. But
                it is more likely that it is done by having some
                knowledge about which languages are sign languages and
                which are spoken languages and sending the call to the
                relay service to try to sort out if they can handle the
                translation.<br>
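The table-driven gap check described here could be as simple as intersecting the two sides' tag sets per media; a minimal sketch with simplified structures (relay selection itself left abstract):

```python
# Sketch of the gap analysis above: if caller and callee share no
# language tag in any media, a relay (translation) service is needed.
# Offer/answer are simplified to {media: set of language tags}.

def needs_relay(caller, callee):
    """Return True when no media has at least one common language tag."""
    for media, offered in caller.items():
        if offered & callee.get(media, set()):
            return False  # direct match found; no relay needed
    return True
```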
                <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8e=
x;border-left:1px #ccc solid;padding-left:1ex"> <br>
                  <br>
                </blockquote>
                So, the answer is - no, the application does not really
                have any knowledge about which modality a language tag
                represents in its used position. If the user selects to
                indicate very rare language tag indications for a media,
                then a match will just become very unlikely.<br>
                <br>
                Where does this discussion take us? Should we modify
                section 5.4 again?<br>
                <br>
                Thanks<span class=3D"m_-374027455181744526HOEnZb"><font col=
or=3D"#888888"><br>
                    Gunnar</font></span>
                <div class=3D"m_-374027455181744526HOEnZb">
                  <div class=3D"m_-374027455181744526h5"><br>
                    <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0=
 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      =C2=A0=C2=A0=C2=A0=C2=A0Thanks,<br>
                      =C2=A0=C2=A0=C2=A0=C2=A0Paul<br>
                      <br>
                      <blockquote class=3D"gmail_quote" style=3D"margin:0 0=
 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> =C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0 5.4<br>
                        &lt;<a href=3D"https://tools.ietf.org/html/draft-ie=
tf-slim-negotiating-human-language-17#section-5.4" rel=3D"noreferrer" targe=
t=3D"_blank">https://tools.ietf.org/html/d<wbr>raft-ietf-slim-negotiating-h=
um<wbr>an-language-17#section-5.4</a>&gt;.<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Undefined Combinatio=
ns<br>
                        <br>
                        <br>
                        <br>
                        =C2=A0=C2=A0=C2=A0 The behavior when specifying a n=
on-signed
                        language tag for a video<br>
                        =C2=A0=C2=A0=C2=A0 media stream, or a signed langua=
ge tag for
                        an audio or text media<br>
                        =C2=A0=C2=A0=C2=A0 stream, is not defined in this d=
ocument.<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0 The problem of knowing which lan=
guage tags
                        are signed and which are<br>
                        =C2=A0=C2=A0=C2=A0 not is out of scope of this docu=
ment.<br>
                        <br>
                        <br>
                        <br>
                        On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
                        &lt;<a href=3D"mailto:pkyzivat@alum.mit.edu" target=
=3D"_blank">pkyzivat@alum.mit.edu</a>
                        &lt;mailto:<a href=3D"mailto:pkyzivat@alum.mit.edu"=
 target=3D"_blank">pkyzivat@alum.mit.edu</a>&gt;<wbr>&gt;
                        wrote:<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0 On 10/15/17 2:24 AM, Gunnar Hell=
str=C3=B6m wrote:<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Paul,<br=
>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 On 2017-10-15
                        at 01:19, Paul Kyzivat wrote:<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 On 10/14/17 2:03 PM, Bernard Aboba
                        wrote:<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Gunnar said:<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 &quot;Applications not implementing
                        such specific notations<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 may use the following simple
                        deductions.<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 - A language tag in audio media
                        is supposed to indicate<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 spoken modality.<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 [BA] Even a tag with &quot;Sign
                        Language&quot; in the description??<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 - A language tag in text media
                        is supposed to indicate =C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 written
                        modality.<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 [BA] If the tag has &quot;Sign
                        Language&quot; in the description,<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 can this document really say
                        that?<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 - A language tag in video media
                        is supposed to indicate<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 visual sign language modality
                        except for the case when<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 it is supposed to indicate a
                        view of a speaking person<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 mentioned in section 5.2
                        characterized by the exact same<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 language tag also appearing in
                        an audio media specification.<br>
                        <br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 [BA] It seems like an over-reach
                        to say that a spoken<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 language tag in video media
                        should instead be<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 interpreted as a request for
                        Sign Language.=C2=A0 If this<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 were done, would it always be
                        clear which Sign Language<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 was intended?=C2=A0 And could we
                        really assume that both<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 sides, if negotiating a spoken
                        language tag in video<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 media, were really indicating
                        the desire to sign?=C2=A0 It<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 seems like this could easily
                        result in interoperability<br>
                        =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 failure.<br>
                        <br>
                        <br>
                        IMO the right way to indicate that two (or more) media<br>
                        streams are conveying alternative representations of<br>
                        the same language content is by grouping them with a<br>
                        new grouping attribute. That can tie together an audio<br>
                        with a video and/or text. A language tag for sign<br>
                        language on the video stream then clarifies to the<br>
                        recipient that it is sign language. The grouping<br>
                        attribute by itself can indicate that these streams<br>
                        are conveying language.<br>
                        <br>
                        &lt;GH&gt; Yes, and that is proposed in<br>
                        draft-hellstrom-slim-modality-grouping, with two kinds<br>
                        of grouping: one kind to tell that two or more<br>
                        languages in different streams are alternatives with<br>
                        the same content, with a priority order assigned to<br>
                        them to guide the selection of which one to use during<br>
                        the call; the other kind telling that two or more<br>
                        languages in different streams are desired together,<br>
                        with the same language content but different<br>
                        modalities (such as captioned telephony with the same<br>
                        content provided in both speech and text, or sign<br>
                        language interpretation where you see the interpreter,<br>
                        or possibly spoken language interpretation with the<br>
                        languages provided in different audio streams). I hope<br>
                        that that draft can be progressed. I see it as a<br>
                        needed complement to the pure language indications per<br>
                        media.<br>
                        <br>
                        <br>
                        Oh, sorry. I did read that draft but forgot about it.<br>
                        <br>
                        The discussion in this thread is more about how an<br>
                        application would easily know that e.g. &quot;ase&quot; is a<br>
                        sign language and &quot;en&quot; is a spoken (or written)<br>
                        language, and also a discussion about what kinds of<br>
                        languages are allowed and indicated by default in each<br>
                        media type. It was not at all about falsely using<br>
                        language tags in the wrong media type, as Bernard<br>
                        understood my wording. It was rather a limitation to<br>
                        what modalities are used in each media type and how to<br>
                        know the modality in cases that are not evident, e.g.<br>
                        the &quot;application&quot; and &quot;message&quot; media types.<br>
                        <br>
                        <br>
                        What do you mean by &quot;know&quot;? Is it for the *UA*<br>
                        software to know, or for the human user of the UA to<br>
                        know? Presumably a human user that cares will<br>
                        understand this if presented with the information in<br>
                        some way. But typically this isn&#39;t presented to the<br>
                        user.<br>
                        <br>
                        For the software to know must mean that it will behave<br>
                        differently for a tag that represents a sign language<br>
                        than for one that represents a spoken or written<br>
                        language. What is it that it will do differently?<br>
                        <br>
                        Thanks,<br>
                        Paul<br>
                        <br>
                        <br>
                        Right now we have returned to a very simple rule: we<br>
                        define only use of spoken language in audio media,<br>
                        written language in text media and sign language in<br>
                        video media.<br>
                        We have discussed other uses, such as a view of a<br>
                        speaking person in video, text overlay on video, a<br>
                        sign language notation in text media, written language<br>
                        in message media, written language in WebRTC data<br>
                        channels, and sign, written and spoken language in<br>
                        bucket media, maybe declared as application media. We<br>
                        do not define these cases. They are just not defined,<br>
                        not forbidden. They may be defined in the future.<br>
                        <br>
                        My proposed wording in section 5.4 drew too many<br>
                        misunderstandings, so I gave up on it. I think we can<br>
                        live with 5.4 as it is in version -16.<br>
                        <br>
                        Thanks,<br>
                        Gunnar<br>
                        <br>
                        <br>
                        <br>
                        (IIRC I suggested something along these lines a long<br>
                        time ago.)<br>
                        <br>
                        Thanks,<br>
                        Paul<br>
                        <br>
                        _________________________________________<br>
                        SLIM mailing list<br>
                        <a href="mailto:SLIM@ietf.org" target="_blank">SLIM@ietf.org</a>
                        &lt;mailto:<a href="mailto:SLIM@ietf.org" target="_blank">SLIM@ietf.org</a>&gt;<br>
                        <a href="https://www.ietf.org/mailman/listinfo/slim" rel="noreferrer" target="_blank">https://www.ietf.org/mailman/listinfo/slim</a><br>
                        &lt;<a href="https://www.ietf.org/mailman/listinfo/slim" rel="noreferrer" target="_blank">https://www.ietf.org/mailman/listinfo/slim</a>&gt;<br>
                        <br>
                        <br>
                        <br>
                        <br>
                        <br>
                      </blockquote>
                      <br>
                    </blockquote>
                    <br>
                  </div>
                </div>
                <div class=3D"m_-374027455181744526HOEnZb">
                  <div class=3D"m_-374027455181744526h5"> -- <br>
                    -----------------------------------------<br>
                    Gunnar Hellström<br>
                    Omnitor<br>
                    <a href="mailto:gunnar.hellstrom@omnitor.se" target="_blank">gunnar.hellstrom@omnitor.se</a><br>
                    <a href="tel:%2B46%20708%20204%20288" value="+46708204288" target="_blank">+46 708 204 288</a><br>
                    <br>
                  </div>
                </div>
              </blockquote>
            </div>
            <br>
          </div>
        </blockquote>
        <br>
      </blockquote>
      <br>
      <br>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div>

--001a11453754a798f9055c412e38--


From nobody Tue Oct 24 07:52:00 2017
Return-Path: <gunnar.hellstrom@omnitor.se>
X-Original-To: slim@ietfa.amsl.com
Delivered-To: slim@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id AFFB0138BDB for <slim@ietfa.amsl.com>; Tue, 24 Oct 2017 07:51:57 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-0.7, SPF_PASS=-0.001, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Y_9Fpd8pJWl9 for <slim@ietfa.amsl.com>; Tue, 24 Oct 2017 07:51:51 -0700 (PDT)
Received: from bin-vsp-out-02.atm.binero.net (bin-mail-out-05.binero.net [195.74.38.228]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id AD8CA13F4BE for <slim@ietf.org>; Tue, 24 Oct 2017 07:51:43 -0700 (PDT)
X-Halon-ID: c8c4425a-b8ca-11e7-99c7-005056917f90
Authorized-sender: gunnar.hellstrom@omnitor.se
Received: from [192.168.2.136] (unknown [87.96.178.34]) by bin-vsp-out-02.atm.binero.net (Halon) with ESMTPSA id c8c4425a-b8ca-11e7-99c7-005056917f90; Tue, 24 Oct 2017 16:51:15 +0200 (CEST)
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: slim@ietf.org, Paul Kyzivat <pkyzivat@alum.mit.edu>
References: <CAOW+2dtSOgp3JeiSVAttP+t0ZZ-k3oJK++TS71Xn7sCOzMZNVQ@mail.gmail.com> <p06240606d607257c9584@172.20.60.54> <fb9e6b79-7bdd-9933-e72e-a47bc8c93b58@omnitor.se> <CAOW+2dtteOadptCT=yvfmk01z-+USfE4a7JO1+u_fkTp72ygNA@mail.gmail.com> <da5cfaea-75f8-3fe1-7483-d77042bd9708@alum.mit.edu> <b2611e82-2133-0e77-b72b-ef709b1bba3c@omnitor.se> <1b0380ef-b57d-3cc7-c649-5351dc61f878@alum.mit.edu> <CAOW+2dtVE5BDmD2qy_g-asXvxntif4fVC8LYO4j7QLQ5Kq2E+g@mail.gmail.com> <3fc6d055-08a0-2bdb-f6e9-99b94efc49df@alum.mit.edu> <84fb37bd-5c7a-90ea-81fd-d315faabfd96@omnitor.se> <CAOW+2dvPSUGA_7tye+KqR1TGs1kYL43TdxBCDOHVEmWOFHud0Q@mail.gmail.com> <49cb3e25-6d65-1773-2803-dc667cd5890c@omnitor.se> <7d20fee8-fcb0-1f50-049b-82f0c2491f50@omnitor.se> <65f7d728-10b0-b8f9-3d82-8de13c5e7c67@omnitor.se> <CAOW+2dv-Pob1DPVXDe81hyeM8k7hEpT-9BaRte706_J+Snv60g@mail.gmail.com>
From: =?UTF-8?Q?Gunnar_Hellstr=c3=b6m?= <gunnar.hellstrom@omnitor.se>
Message-ID: <440b9987-44fb-0952-90ec-16d61b1961f8@omnitor.se>
Date: Tue, 24 Oct 2017 16:51:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <CAOW+2dv-Pob1DPVXDe81hyeM8k7hEpT-9BaRte706_J+Snv60g@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------05FF34FA2EE7BBD75AE804E2"
Content-Language: en-US
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/A4b6Wpgh0Z0zpXKqpwF9bfdW35g>
Subject: Re: [Slim] Issue 43: How to know the modality of a language indication?
X-BeenThere: slim@ietf.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: Selection of Language for Internet Media <slim.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/slim>, <mailto:slim-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/slim/>
List-Post: <mailto:slim@ietf.org>
List-Help: <mailto:slim-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/slim>, <mailto:slim-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 24 Oct 2017 14:51:58 -0000

This is a multi-part message in MIME format.
--------------05FF34FA2EE7BBD75AE804E2
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Bernard and all,

Yes, I agree with your reasoning. I also understand how we ended up 
with a limiting section 5.4. We were mainly thinking of a globally 
interoperable multimedia communication system and wanted a high chance 
of a language and modality match between callers and callees. Limiting 
the choices then seems to increase the opportunities for a match. But 
the RFCs can also be used in much smaller application areas, where it 
can be of value to differentiate between less common combinations of 
media and languages. The current wording of 5.4 might discourage such 
applications from using the mechanism, while they could instead rely 
on an internal agreement about which media/language combinations are 
relevant, and possibly about what extra information is used to 
distinguish otherwise ambiguous combinations (such as a tag for 
written or spoken language in video, which may need further 
information to be resolved).

So, yet another proposal for Issue #43 and section 5.4, much more 
informative and less restrictive than the previous version.

-----Old text----


      5.4 Undefined Combinations



    The behavior when specifying a non-signed language tag for a video
    media stream, or a signed language tag for an audio or text media
    stream, is not defined in this document.

    The problem of knowing which language tags are signed and which are
    not is out of scope of this document.

-----New text------------
5.4 Media, Language and Modality Indications

The combination of language tags and other information in the media 
descriptions should be composed so that the intended modality can be 
concluded by the negotiating parties. For general use, with the best 
opportunity for finding matching languages, it is recommended to use 
the most apparent combinations of language tags and media: sign 
language tags in video media, spoken language tags in audio media, and 
written language tags in text media. The examples in this 
specification are all drawn from this set of three obvious 
language/media/modality combinations.

The following explains some factors in combining language tags, media 
types and other media description information to identify intended 
language modalities.

A specific sign language can be identified by finding the language 
subtag in at least two entries in the IANA registry of language 
subtags according to BCP 47 [RFC5646]: once with the Type field 
"language", and once with the Type field "extlang" combined with the 
Prefix field value "sgn".

Generic sign language competence or preference, without specifying 
exactly which sign language, can be indicated by using the value "sgn" 
as the language tag in the corresponding "hlang" attribute.

Sign language communication, in its usual visual modality, is most 
often conveyed in a "video" media stream. Application-specific use may 
appear in other media, such as "message" and "application". Certain 
textual notation modalities of sign language may appear in the "text" 
media stream.

A specific spoken or written language can be identified by finding 
that the language subtag exists in the IANA registry of language 
subtags according to BCP 47 [RFC5646] with the Type field "language", 
and that no entry with the Type field "extlang" and the value "sgn" in 
the Prefix field exists for that subtag. The spoken modality is 
usually conveyed in an "audio" media stream. The written modality in 
real time is usually conveyed in a "text" media stream.

Use of a language subtag for a written or spoken language in media 
streams other than "text" or "audio" requires further indications, or 
application agreements, for identification of the modality. A number 
of such further indications are available, and new ones may be added 
by further work. Use of the written modality in a media stream other 
than "text" may be discriminated by use of a script subtag in the 
language tag, where that is appropriate. Sending a visual view of a 
speaking person may be indicated by the value "speaker" in an SDP 
content attribute according to RFC 4796 [RFC4796], in a "video" media 
stream or another media stream carrying video (e.g. "message" or 
"application").
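As an illustrative SDP fragment (composed for this explanation, not taken from the draft), a video stream carrying the view of a speaking person could combine the RFC 4796 content attribute with a language attribute; the "hlang-send" attribute name is assumed from the draft under discussion, and the payload details are arbitrary:

```
m=video 49170 RTP/AVP 99
a=rtpmap:99 H264/90000
a=content:speaker
a=hlang-send:en
```

Here "a=content:speaker" marks the stream as a view of the speaker, so the non-signed tag "en" can be read as spoken rather than signed language.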

Use of the written modality in a media stream other than "text" may, 
in cases where the script subtag is suppressed, be discriminated by 
any other appropriate notation or application agreement. An 
appropriate notation may be use of a media subtype specific to the 
intended modality.
-------------------------------------------------------------------End 
of new text---------------------------------------

Gunnar








Den 2017-10-24 kl. 03:51, skrev Bernard Aboba:
> Thanks for suggesting a way forward, Gunnar. I too would like to get 
> Issue 43 resolved so we can move forward in the process.
> Please send your thoughts to the mailing list (preferably before the 
> October 30 submission deadline so we can spin a new draft version).
>
> In thinking about the issue, a question Paul asked has stuck in my 
> mind:  What difference does it/should it make?
>
> Let us presume that the user agents are configured to signal the 
> language preferences.
>
> In a pure peer-to-peer case (me calling you, no intermediaries), I 
> configure my UA to indicate a preference for Mandarin on the text 
> modality, French sign language on the audio modality and Swahili on 
> the video modality.
>
> You (knowing my background and weird sense of humour) after being 
> notified of my preference, realize I am kidding and agree to accept 
> the call, knowing that whatever language preference you indicate, we 
> will most likely communicate in English since I do not speak Mandarin 
> or Swahili and am barely conversant in French, let alone French sign 
> language.
>
> Would this scenario have worked out better with rules that mandated 
> that my odd choices for audio and video languages be labelled 
> "undefined"? I think not.
>
> In a scenario where the call is between me and a call center (not a 
> PSAP) my flippant UA configuration might result in the call being 
> rejected due to a lack of Mandarin, Swahili or French sign language 
> resources within the call center.  But it's not clear that labelling 
> my odd choices as "undefined" should play a role in that decision.  
> For example, if the call center did have someone who spoke Swahili, 
> and connected me to them (perhaps under the theory that my declared 
> preference might indicate an ability to lip-read Swahili), this might 
> have improved the chance of communication had my UA configuration been 
> based on genuine expertise rather than a warped sense of humour.
>
> In other words, it is not clear to me how Section 5.4's discussion of 
> scope improves or clarifies the situation in any way - and there is 
> some possibility that it could cause problems.
>
> On Mon, Oct 23, 2017 at 2:17 PM, Gunnar Hellström 
> <gunnar.hellstrom@omnitor.se <mailto:gunnar.hellstrom@omnitor.se>> wrote:
>
>     Issue #43 is the only issue we have left now. I do not want to see
>     the discussion stop again until we have a solution on it that
>     seems acceptable.
>
>     Section 5.4 seems to be a good place to handle Issue #43.
>
>     Currently, section 5.4 is more aimed at limiting what kind of
>     coding for languages and modalities are acceptable.
>
>     Some viewpoints said that such limitations are not needed and that
>     5.4 can be deleted.
>
>     I think we can do something in between these extremes. We can
>     introduce explanations for what is required from an acceptable
>     coding of a combination of media, languages, directions, and other
>     parameters and explain basic ways to assess what is the resulting
>     modality, and also explain that the more common media and language
>     combinations that are used, the higher chance there is for a
>     match. Thus unusual combinations are discouraged but not forbidden
>     as long as the modality can be assessed from them. They can be
>     used in specific applications.
>
>     I might continue tomorrow with a wording proposal for the reasoning
>     above, hoping that we can close issue #43 and the discussions
>     around 5.4 soon.
>
>     /Gunnar
>
>
>     Den 2017-10-17 kl. 11:02, skrev Gunnar Hellström:
>>
>>     An even more general way to express what section 5.4 tries to say is:
>>
>>     ------------------------------------------------------------------------------------------------------------------------------------------
>>
>>     5.4 Combinations of Language tags and Media descriptions
>>
>>     The combination of Language tags and other information in the
>>     media descriptions should be made so that the intended modality
>>     can be concluded by the negotiating parties.
>>
>>
>>     ----------------------------------------------------------------------------------------
>>
>>     That makes us not need to investigate what is possible today, and
>>     what further attributes or coding rules may be added in the future.
>>
>>     We have a risk that implementers start using some insufficient
>>     coding, that can cause interop issues. But instead we do not need
>>     to limit valid use that we just have not thought about by saying
>>     that specific combinations are out of scope or not defined. It is
>>     up to implementers to check that the combinations they use result
>>     in unambiguous modality.
>>
>>     And it opens for use of possible new attributes e.g. 
>>     a=modality:spoken  or a=modality:written etc, to complement the
>>     undefined case when a non-signed language tag without script
>>     subtag is used in video media, and also for explaining any use of
>>     m=application or m=message media in interactive communication.
>>
>>     It does not really answer Issue #43 by explaining HOW to assess
>>     the modality easily, but it requires the implementers to make
>>     sure that it is possible.
>>
>>     And deducing the intended modality is the key to successful
>>     negotiation and communication.
>>
>>     Do you think this would be clear enough, or do we need to go into
>>     what clear cases we have?
>>
>>     Gunnar
>>
>>
>>
>>     Den 2017-10-17 kl. 00:21, skrev Gunnar Hellström:
>>>     Den 2017-10-16 kl. 01:21, skrev Bernard Aboba:
>>>>     Paul said:
>>>>
>>>>     ""- can the UA use this information to change how to render the
>>>>     media?"
>>>>
>>>>     [BA] If the video is used for signing, an application might
>>>>     infer an encoder preference for frame rate over resolution
>>>>     (e.g. in WebRTC, RTCRtpParameters.degradationPreference =
>>>>     "maintain-framerate" )
>>>     <GH>Right, that is a valid example of how real "knowledge" of
>>>     the modality can be used by the application.
>>>
>>>
>>>     And, as a response on issue #43,
>>>
>>>     A simple way is to say
>>>
>>>     Video media descriptions shall only contain sign language tags
>>>     Audio media descriptions shall only contain language tags for
>>>     spoken language
>>>     Text media descriptions shall only contain language tags for
>>>     written language
>>>     Use of other media descriptions such as message and application
>>>     with language indications require other specifications on how to
>>>     assess the modality for non-signed languages.
>>>
>>>     The current 5.4 does not mention our main problem with the
>>>     language tags, that there is no difference on them if we mean
>>>     use for spoken language or written language. We should have made
>>>     better efforts to solve that problem long ago, but we have not.
>>>
>>>     5.4 can be modified to specify the simple limited case and the
>>>     problems that block us from specifying other cases:
>>>
>>>
>>>           5.4
>>>           <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>.
>>>           Media and modality Combination problems
>>>
>>>
>>>
>>>
>>>         The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.
>>>
>>>         The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.
>>>
>>>         The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.
>>>
>>>         The problem of knowing which language tags are signed and which are not can be deduced
>>>         from the IANA language tag registry. How this is done is out of scope of this document.
>>>
>>>     --------------------------------------------------------
>>>
>>>
>>>     But if we want to allow more cases, we need to consider the
>>>     following complications:
>>>
>>>
>>>     1. to assess if a language represents a Sign Language, the
>>>     application can look for the word "sign" in the description in
>>>     the  IANA language registry or a copy thereof as Randall already
>>>     indicated.
>>>
>>>     2. For written languages used as a text component in a video
>>>     stream, it is possible to code this for languages requiring a
>>>     script subtag, but not for languages with suppressed script subtags
>>>
>>>     3. We have also discussed proposals for how to code written
>>>     language in video stream for languages not requiring a script
>>>     subtag, but not got acceptance for our proposals. So we need to
>>>     say that that is currently undefined.
>>>
>>>     4. We also discussed how to code a view of a speaking person in
>>>     video and said that that could be done by using the
>>>     "definitively not written" script subtag on a non-signed
>>>     language tag in video. But that was not appreciated by the
>>>     language experts. Another option was to not allow written
>>>     language overlayed on video, and that is the lately used option.
>>>     ( up to version -16 or so)
>>>
>>>     5. For talking and hearing audio media, we only have that case
>>>     for language-tags in Audio. So that is easy to code and assess.
>>>
>>>     6. For written language in text media, a check can be made about
>>>     if "sign" is part of the language tag description, and if not,
>>>     it is a written language.
>>>
>>>     7. For signed language in text media, a check can be made about
>>>     if "sign" is part of the language tag description, and if it is,
>>>     it is a signed language in text notation. (extremely unusual)
>>>
>>>     8. For use with language tags in other media than audio, video
>>>     and text, there is a need for a description on how to assess the
>>>     modality, especially for non-signed languages before it is used.
>>>
>>>
>>>     We can construct a section 5.4 to describe this situation, but I
>>>     doubt that it is worth the effort.
>>>
>>>
>>>>
>>>>     See:
>>>>     https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference
>>>>
>>>>     On Sun, Oct 15, 2017 at 2:22 PM, Gunnar Hellström
>>>>     <gunnar.hellstrom@omnitor.se> wrote:
>>>>
>>>>         Den 2017-10-15 kl. 21:27, skrev Paul Kyzivat:
>>>>
>>>>             On 10/15/17 1:49 PM, Bernard Aboba wrote:
>>>>
>>>>                 Paul said:
>>>>
>>>>                 "For the software to know must mean that it will
>>>>                 behave differently for a tag that represents a sign
>>>>                 language than for one that represents a spoken or
>>>>                 written language. What is it that it will do
>>>>                 differently?"
>>>>
>>>>                 [BA] In terms of behavior based on the
>>>>                 signed/non-signed distinction, in -17 the only
>>>>                 reference appears to be in Section 5.4, stating
>>>>                 that certain combinations are not defined in the
>>>>                 document (but that definition of those combinations
>>>>                 was out of scope):
>>>>
>>>>
>>>>             I'm asking whether this is a distinction without a
>>>>             difference. I'm not asking whether this makes a
>>>>             difference in the *protocol*, but whether in the end it
>>>>             benefits the participants in the call in any way.
>>>>
>>>>         <GH>Good point, I was on my way to make a similar comment
>>>>         earlier today. The difference it makes for applications to
>>>>         "know" what modality a language tag represents in its used
>>>>         position seems to be only for imagined functions that are
>>>>         out of scope for the protocol specification.
>>>>
>>>>             For instance:
>>>>
>>>>             - does it help the UA to decide how to alert the
>>>>             callee, so that the
>>>>               callee can better decide whether to accept the call
>>>>             or instruct the
>>>>               UA about how to handle the call?
>>>>
>>>>         <GH>Yes, for a regular human user-to-user call, the result
>>>>         of the negotiation must be presented to the participants,
>>>>         so that they can start the call with a language and
>>>>         modality that is agreed.
>>>>         That presentation could be exactly the description from the
>>>>         language tag registry, and then no "knowledge" is needed
>>>>         from the application. But it is more likely that the
>>>>         application has its own string for presentation of the
>>>>         negotiated language and modality. So that will be
>>>>         presented. But it is still found by a table lookup between
>>>>         language tag and string for a language name, so no real
>>>>         knowledge is needed.
>>>>         We have said many times that the way the application tells
>>>>         the user the result of the negotiation is out of scope for
>>>>         the draft, but it is good to discuss and know that it can
>>>>         be done.
>>>>         A similar mechanism is also needed for configuration of the
>>>>         user's language preference profile further discussed below.
>>>>
>>>>
>>>>             - does it allow the UA to make a decision whether to
>>>>             accept the media?
>>>>
>>>>         <GH>No, the media should be accepted regardless of the
>>>>         result of the language negotiation.
>>>>
>>>>
>>>>             - can the UA use this information to change how to
>>>>             render the media?
>>>>
>>>>         <GH>Yes, for the specialized text notation of sign language
>>>>         we have discussed but currently placed out of scope, a very
>>>>         special rendering application is needed. The modality would
>>>>         be recognized by a script subtag to a sign language tag
>>>>         used in text media. However, I think it would be best to
>>>>         also use it with a specific text subtype, so that the
>>>>         rendering can be controlled by invocation of a "codec" for
>>>>         that rendering.
>>>>
>>>>
>>>>             And if there is something like this, will the UA be
>>>>             able to do this generically based on whether the media
>>>>             is sign language or not, or will the UA need to already
>>>>             understand *specific* sign language tags?
>>>>
>>>>         <GH>Applications will need to have localized versions of
>>>>         the names for the different sign languages and also for
>>>>         spoken languages and written languages, to be used in
>>>>         setting of preferences and announcing the results of the
>>>>         negotiation. It might be overkill to have such localized
>>>>         names for all languages in the IANA language registry, so
>>>>         the application will need to handle localized names for a
>>>>         subset of the registry. With good design, however, this is
>>>>         just an automatic translation between a language tag and a
>>>>         corresponding name, so in fact it requires no
>>>>         "knowledge" of what modality is used with each language tag.
>>>>         The application can ask for the configuration:
>>>>         "Which languages do you want to offer to send in video"
>>>>         "Which languages do you want to offer to send in text"
>>>>         "Which languages do you want to offer to send in audio"
>>>>         "Which languages do you want to be prepared to receive in
>>>>         video"
>>>>         "Which languages do you want to be prepared to receive in text"
>>>>         "Which languages do you want to be prepared to receive in
>>>>         audio"
>>>>
>>>>         And for each question provide a list of language names to
>>>>         select from. When the selection is made, the corresponding
>>>>         language tag is placed in the profile for negotiation.
>>>>
>>>>         If the application provides the whole IANA language
>>>>         registry to the user for each question, then there is a
>>>>         possibility that the user by mistake selects a language
>>>>         that requires another modality than the question was about.
>>>>         If the application is to limit the lists provided for each
>>>>         question, then it will need some knowledge about which
>>>>         language tags suit each modality (and media).
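The list-limiting described above can be sketched as a simple lookup. This is a hypothetical illustration: the media-to-modality mapping and the tiny tag subsets are assumptions, not registry data.

```python
# Hypothetical sketch of limiting the per-question language lists, as
# described above. The tag subsets and the mapping from media type to
# allowed modality are assumptions for illustration only.

SIGN_TAGS = {"ase", "bfi", "sgn"}          # assumed subset of the registry
SPOKEN_WRITTEN_TAGS = {"en", "fr", "sv"}   # assumed subset

def choices_for(media):
    """Offer only language tags that suit the modality implied by the
    media type: video -> sign languages; audio and text -> spoken or
    written languages; other media left empty (undefined)."""
    if media == "video":
        return sorted(SIGN_TAGS)
    if media in ("audio", "text"):
        return sorted(SPOKEN_WRITTEN_TAGS)
    return []  # other media: needs an application agreement

print(choices_for("video"))  # ['ase', 'bfi', 'sgn']
print(choices_for("text"))   # ['en', 'fr', 'sv']
```

With such a filter, the user cannot select by mistake a language requiring another modality than the question was about.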
>>>>
>>>>
>>>>
>>>>             E.g., A UA serving a deaf person might automatically
>>>>             introduce a sign language interpreter into an incoming
>>>>             audio-only call. If the incoming call has both audio
>>>>             and video then the video *might* be for conveying sign
>>>>             language, or not. If not then the UA will still want to
>>>>             bring in a sign language interpreter. But is knowing
>>>>             the call generically contains sign language sufficient
>>>>             to decide against bringing in an interpreter? Or must
>>>>             that depend on it being a sign language that the user
>>>>             can use? If the UA is configured for all the specific
>>>>             sign languages that the user can deal with then there
>>>>             is no need to recognize other sign languages generically.
>>>>
>>>>         <GH>We are talking about specific language tags here and
>>>>         knowing what modality they are used for. The user needs to
>>>>         specify which sign languages they prefer to use. The callee
>>>>         application can be made to look for gaps between what the
>>>>         caller offers and what the callee can accept, and from
>>>>         that deduce which type of conversion, and between which
>>>>         languages, is needed, and invoke a relay service
>>>>         accordingly. That invocation
>>>>         can be made completely table driven and have corresponding
>>>>         translation profiles for available relay services. But it
>>>>         is more likely that it is done by having some knowledge
>>>>         about which languages are sign languages and which are
>>>>         spoken languages and sending the call to the relay service
>>>>         to try to sort out if they can handle the translation.
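The table-driven gap analysis described above could, under assumptions, be sketched as follows. The profile structure and function name are hypothetical illustrations, not anything specified by the draft.

```python
# Hypothetical sketch of the gap analysis described above: compare the
# caller's offered languages with the callee's accepted ones per media,
# and flag media where no common language exists as candidates for
# relay/translation. The profile format is an assumption.

def needs_relay(offer, accept):
    """offer/accept: dicts mapping media ('audio', 'video', 'text') to
    sets of language tags. Returns the media with no common language,
    together with the two sides' tag sets for relay selection."""
    gaps = {}
    for media, offered in offer.items():
        common = offered & accept.get(media, set())
        if offered and not common:
            gaps[media] = (offered, accept.get(media, set()))
    return gaps

caller = {"video": {"ase"}, "text": {"en"}}
callee = {"audio": {"en"}, "text": {"en"}}
print(needs_relay(caller, callee))  # video has a gap: relay candidate
```

Whether the gap is bridged by a table of relay-service translation profiles or by simply handing the call to a relay service is, as noted above, an implementation choice.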
>>>>
>>>>
>>>>
>>>>         So, the answer is no: the application does not really
>>>>         have any knowledge about which modality a language tag
>>>>         represents in its used position. If the user chooses to
>>>>         indicate very rare language tags for a medium, then a
>>>>         match will just become very unlikely.
>>>>
>>>>         Where does this discussion take us? Should we modify
>>>>         section 5.4 again?
>>>>
>>>>         Thanks
>>>>         Gunnar
>>>>
>>>>                 Thanks,
>>>>                 Paul
>>>>
>>>>                       5.4
>>>>                 <https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4>.
>>>>                       Undefined Combinations
>>>>
>>>>
>>>>
>>>>                     The behavior when specifying a non-signed
>>>>                 language tag for a video
>>>>                     media stream, or a signed language tag for an
>>>>                 audio or text media
>>>>                     stream, is not defined in this document.
>>>>
>>>>                     The problem of knowing which language tags are
>>>>                 signed and which are
>>>>                     not is out of scope of this document.
>>>>
>>>>
>>>>
>>>>                 On Sun, Oct 15, 2017 at 10:13 AM, Paul Kyzivat
>>>>                 <pkyzivat@alum.mit.edu> wrote:
>>>>
>>>>                     On 10/15/17 2:24 AM, Gunnar Hellström wrote:
>>>>
>>>>                         Paul,
>>>>                         Den 2017-10-15 kl. 01:19, skrev Paul Kyzivat:
>>>>
>>>>                             On 10/14/17 2:03 PM, Bernard Aboba wrote:
>>>>
>>>>                                 Gunnar said:
>>>>
>>>>                                 "Applications not implementing such
>>>>                 specific notations
>>>>                                 may use the following simple
>>>>                 deductions.
>>>>
>>>>                                 - A language tag in audio media is
>>>>                 supposed to indicate
>>>>                                 spoken modality.
>>>>
>>>>                                 [BA] Even a tag with "Sign
>>>>                 Language" in the description??
>>>>
>>>>                                 - A language tag in text media is
>>>>                 supposed to indicate                 written modality.
>>>>
>>>>                                 [BA] If the tag has "Sign Language"
>>>>                 in the description,
>>>>                                 can this document really say that?
>>>>
>>>>                                 - A language tag in video media is
>>>>                 supposed to indicate
>>>>                                 visual sign language modality
>>>>                 except for the case when
>>>>                                 it is supposed to indicate a view
>>>>                 of a speaking person
>>>>                                 mentioned in section 5.2
>>>>                 characterized by the exact same
>>>>                                 language tag also appearing in an
>>>>                 audio media specification.
>>>>
>>>>                                 [BA] It seems like an over-reach to
>>>>                 say that a spoken
>>>>                                 language tag in video media should
>>>>                 instead be
>>>>                                 interpreted as a request for Sign
>>>>                 Language.  If this
>>>>                                 were done, would it always be clear
>>>>                 which Sign Language
>>>>                                 was intended?  And could we really
>>>>                 assume that both
>>>>                                 sides, if negotiating a spoken
>>>>                 language tag in video
>>>>                                 media, were really indicating the
>>>>                 desire to sign?  It
>>>>                                 seems like this could easily result
>>>>                 in interoperability failure.
>>>>
>>>>
>>>>                             IMO the right way to indicate that two
>>>>                 (or more) media
>>>>                             streams are conveying alternative
>>>>                 representations of the
>>>>                             same language content is by grouping
>>>>                 them with a new
>>>>                             grouping attribute. That can tie
>>>>                 together an audio with a
>>>>                             video and/or text. A language tag for
>>>>                 sign language on the
>>>>                             video stream then clarifies to the
>>>>                 recipient that it is sign
>>>>                             language. The grouping attribute by
>>>>                 itself can indicate that
>>>>                             these streams are conveying language.
>>>>
>>>>                         <GH>Yes, and that is proposed in
>>>>                 draft-hellstrom-slim-modality-grouping with two
>>>>                 kinds of
>>>>                         grouping: One kind of grouping to tell that
>>>>                 two or more
>>>>                         languages in different streams are
>>>>                 alternatives with the same
>>>>                         content and a priority order is assigned to
>>>>                 them to guide the
>>>>                         selection of which one to use during the
>>>>                 call. The other kind of
>>>>                         grouping tells that two or more languages
>>>>                 in different streams
>>>>                         are desired together, with the same language
>>>>                 content but
>>>>                         different modalities (such as the use for
>>>>                 captioned telephony
>>>>                         with the same content provided in both
>>>>                 speech and text, or sign
>>>>                         language interpretation where you see the
>>>>                 interpreter, or
>>>>                         possibly spoken language interpretation
>>>>                 with the languages
>>>>                         provided in different audio streams). I
>>>>                 hope that that draft
>>>>                         can be progressed. I see it as a needed
>>>>                 complement to the pure
>>>>                         language indications per media.
>>>>
>>>>
>>>>                     Oh, sorry. I did read that draft but forgot
>>>>                 about it.
>>>>
>>>>                         The discussion in this thread is more about
>>>>                 how an application
>>>>                         would easily know that e.g. "ase" is a sign
>>>>                 language and "en" is
>>>>                         a spoken (or written) language, and also a
>>>>                 discussion about what
>>>>                         kinds of languages are allowed and
>>>>                 indicated by default in each
>>>>                         media type. It was not at all about falsely
>>>>                 using language tags
>>>>                         in the wrong media type as Bernard
>>>>                 understood my wording. It was
>>>>                         rather a limitation to what modalities are
>>>>                 used in each media
>>>>                         type and how to know the modality with
>>>>                 cases that are not
>>>>                         evident, e.g. "application" and "message"
>>>>                 media types.
>>>>
>>>>
>>>>                     What do you mean by "know"? Is it for the *UA*
>>>>                 software to know, or
>>>>                     for the human user of the UA to know?
>>>>                 Presumably a human user that
>>>>                     cares will understand this if presented with
>>>>                 the information in some
>>>>                     way. But typically this isn't presented to the
>>>>                 user.
>>>>
>>>>                     For the software to know must mean that it will
>>>>                 behave differently
>>>>                     for a tag that represents a sign language than
>>>>                 for one that
>>>>                     represents a spoken or written language. What
>>>>                 is it that it will do
>>>>                     differently?
>>>>
>>>>                              Thanks,
>>>>                              Paul
>>>>
>>>>
>>>>                         Right now we have returned to a very simple
>>>>                 rule: we define only
>>>>                         use of spoken language in audio media,
>>>>                 written language in text
>>>>                         media and sign language in video media.
>>>>                         We have discussed other use, such as a view
>>>>                 of a speaking person
>>>>                         in video, text overlay on video, a sign
>>>>                 language notation in
>>>>                         text media, written language in message
>>>>                 media, written language
>>>>                         in WebRTC data channels, sign written and
>>>>                 spoken in bucket media
>>>>                         maybe declared as application media. We do
>>>>                 not define these
>>>>                         cases. They are just not defined, not
>>>>                 forbidden. They may be
>>>>                         defined in the future.
>>>>
>>>>                         My proposed wording in section 5.4 caused too
>>>>                         many misunderstandings, so I gave up on it. I
>>>>                 think we can live with
>>>>                         5.4 as it is in version -16.
>>>>
>>>>                         Thanks,
>>>>                         Gunnar
>>>>
>>>>
>>>>
>>>>                             (IIRC I suggested something along these
>>>>                 lines a long time ago.)
>>>>
>>>>                                  Thanks,
>>>>                                  Paul
>>>>
>>>>                 _______________________________________________
>>>>                             SLIM mailing list
>>>>                 SLIM@ietf.org
>>>>                 https://www.ietf.org/mailman/listinfo/slim
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>         -- 
>>>>         -----------------------------------------
>>>>         Gunnar Hellström
>>>>         Omnitor
>>>>         gunnar.hellstrom@omnitor.se
>>>>         +46 708 204 288
>>>>
>>>>
>>>
>>
>>
>>
>
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288


--------------05FF34FA2EE7BBD75AE804E2
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Bernard and all,<br>
    </p>
    <p>Yes, I agree with your reasoning. I also understand how we ended
      up with a limiting section 5.4. We were mainly thinking of a
      globally interoperable multimedia communication system and wanted
      a high chance of a language and modality match between callers
      and callees. Limiting the choices then seems to increase the
      opportunities for a match. But the RFCs can also be used in much
      smaller application areas, where it can be valuable to
      differentiate between some less common combinations of media and
      languages. The current wording of 5.4 might discourage such areas
      from using the mechanism, while they could instead rely on an
      internal agreement about which media and language combinations
      are relevant, and possibly about what extra information is used
      to distinguish otherwise ambiguous combinations. (For example, a
      tag for written or spoken language in video may need further
      information to be decided.)</p>
    <p>So, yet another proposal for Issue #43 and section 5.4, much more
      informative and less restrictive than the previous version. <br>
    </p>
    <p>-----Old text----</p>
    <pre class="newpage">5.4 Undefined Combinations

   The behavior when specifying a non-signed language tag for a video
   media stream, or a signed language tag for an audio or text media
   stream, is not defined in this document.

   The problem of knowing which language tags are signed and which are
   not is out of scope of this document.
</pre>
    -----New text------------<br>
    5.4 Media, Language, and Modality Indications<br>
    <br>
    The combination of language tags and other information in the media
    descriptions should be composed so that the intended modality can be
    concluded by the negotiating parties. For general use, with the best
    opportunity for finding matching languages, it is recommended to use
    the most apparent combinations of language tags and media: sign
    language tags in video media, spoken language tags in audio media,
    and written language tags in text media. The examples in this
    specification are all from this set of three obvious
    language/media/modality combinations.<br>
    <br>
    The following explains some factors in combining language tags,
    media types and other media description information to identify
    intended language modalities.  <br>
    <br>
    A specific sign language can be identified by checking the IANA
    registry of language subtags according to BCP 47 [RFC5646] and
    finding the language subtag in at least two entries: once with the
    Type field "language" and once with the Type field "extlang"
    combined with the Prefix field value "sgn". <br>
    <br>
    A generic identification of sign language competence or preference,
    without specifying exactly which sign language, can be indicated by
    use of the value "sgn" in the language tag of the corresponding
    "hlang" attribute.<br>
    <br>
    Sign language communication in its usual visual modality is most
    often conveyed in a "video" media stream. Application-specific use
    may appear in other media, such as "message" and "application".
    Certain textual notation modalities of sign language may appear in
    the "text" media stream. <br>
    <br>
    A specific spoken or written language can be identified by finding
    that the language subtag exists in the IANA registry of language
    subtags according to BCP 47 [RFC5646] with the Type field
    "language", while no entry with the Type field "extlang" and the
    value "sgn" in the Prefix field exists for that subtag. The spoken
    modality is usually conveyed in an "audio" media stream. The
    written modality in real time is usually conveyed in a "text" media
    stream.  <br>
    <br>
    Use of a language subtag for a written or spoken language in media
    streams other than "text" or "audio" requires further indications
    or application agreements for identification of the modality. A
    number of such further indications are available, and new ones may
    be added by further work. Use of the written modality in a media
    stream other than "text" may be indicated by use of a script subtag
    in the language tag, where that is appropriate. Use for sending a
    visual view of a speaking person may be indicated by the value
    "speaker" in an SDP Content attribute according to RFC 4796
    [RFC4796] in a "video" media stream or another media type carrying
    video (e.g. "message" or "application").<br>
    <br>
    Use of the written modality in a media stream other than "text"
    may, for cases when the script subtag is suppressed, be indicated
    by any other appropriate notation or application agreement. An
    appropriate notation may be the use of a media subtype specific to
    the intended modality.<br>
-------------------------------------------------------------------End
    of new text---------------------------------------<br>
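The registry check in the proposed text above can be sketched in code. This is an illustrative helper under assumptions, not normative text: the record parsing is a simplified reading of the IANA language-subtag-registry format ("%%"-separated records of "Field: value" lines), and the sample data is a tiny excerpt.

```python
# Sketch (assumed helper, not normative): detect a specific sign
# language per the rule in the proposed 5.4 text -- the subtag occurs
# once with Type "language" and once with Type "extlang" whose Prefix
# is "sgn". Records mimic the IANA language-subtag-registry format.

def parse_records(text):
    """Split a registry file into '%%'-separated records and return
    each record as a dict of its 'Field: value' lines."""
    records = []
    for chunk in text.split("%%"):
        rec = {}
        for line in chunk.strip().splitlines():
            if ":" in line:
                key, _, val = line.partition(":")
                rec[key.strip()] = val.strip()
        if rec:
            records.append(rec)
    return records

def is_specific_sign_language(subtag, records):
    as_language = any(r.get("Type") == "language" and
                      r.get("Subtag") == subtag for r in records)
    as_sgn_extlang = any(r.get("Type") == "extlang" and
                         r.get("Subtag") == subtag and
                         r.get("Prefix") == "sgn" for r in records)
    return as_language and as_sgn_extlang

SAMPLE = """Type: language
Subtag: ase
Description: American Sign Language
%%
Type: extlang
Subtag: ase
Prefix: sgn
%%
Type: language
Subtag: en
Description: English"""

recs = parse_records(SAMPLE)
print(is_specific_sign_language("ase", recs))  # True
print(is_specific_sign_language("en", recs))   # False
```

The corresponding spoken/written check is the complement: the subtag has a Type "language" entry and no "extlang" entry with Prefix "sgn".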
    <br>
    Gunnar<br>
    <br>
    <br>
    <br>
    <br>
     <br>
    <br>
    <br>
    <p>  </p>
    <br>
    <div class="moz-cite-prefix">Den 2017-10-24 kl. 03:51, skrev Bernard
      Aboba:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAOW+2dv-Pob1DPVXDe81hyeM8k7hEpT-9BaRte706_J+Snv60g@mail.gmail.com">
      <div dir="ltr">Thanks for suggesting a way forward, Gunnar. I too
        would like to get Issue 43 resolved so we can move forward in
        the process. 
        <div>Please send your thoughts to the mailing list (preferably
          before the October 30 submission deadline so we can spin a new
          draft version). <br>
          <div><br>
          </div>
          <div>In thinking about the issue, a question Paul asked has
            stuck in my mind:  What difference does it/should it make? </div>
          <div><br>
          </div>
          <div>Let us presume that the user agents are configured to
            signal the language preferences. </div>
          <div><br>
          </div>
          <div>In a pure peer-to-peer case (me calling you, no
            intermediaries), I configure my UA to indicate a preference
            for Mandarin on the text modality, French sign language on
            the audio modality and Swahili on the video modality. <br>
          </div>
          <div><br>
          </div>
          <div>You (knowing my background and weird sense of humour)
            after being notified of my preference, realize I am kidding
            and agree to accept the call, knowing that whatever language
            preference you indicate, we will most likely communicate in
            English since I do not speak Mandarin or Swahili and am
            barely conversant in French, let alone French sign
            language. </div>
          <div><br>
          </div>
          <div>Would this scenario have worked out better with rules
            that mandated that my odd choices for audio and video
            languages be labelled "undefined"? I think not.</div>
          <div><br>
          </div>
          <div>In a scenario where the call is between me and a call
            center (not a PSAP) my flippant UA configuration might
            result in the call being rejected due to a lack of Mandarin,
            Swahili or French sign language resources within the call
            center.  But it's not clear that labelling my odd choices as
            "undefined" should play a role in that decision.  For
            example, if the call center did have someone who spoke
            Swahili, and connected me to them (perhaps under the theory
            that my declared preference might indicate an ability to
            lip-read Swahili), this might have improved the chance of
            communication had my UA configuration been based on genuine
            expertise rather than a warped sense of humour. </div>
          <div><br>
          </div>
          <div>In other words, it is not clear to me how Section 5.4's
            discussion of scope improves or clarifies the situation in
            any way - and there is some possibility that it could cause
            problems.</div>
        </div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Mon, Oct 23, 2017 at 2:17 PM, Gunnar
          Hellström <span dir="ltr">&lt;<a
              href="mailto:gunnar.hellstrom@omnitor.se" target="_blank"
              moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div text="#000000" bgcolor="#FFFFFF">
              <p>Issue #43 is the only issue we have left now. I do not
                want to see the discussion stop again until we have a
                solution on it that seems acceptable. <br>
              </p>
              <p>Section 5.4 seems to be a good place to handle Issue
                #43. <br>
              </p>
              <p>Currently, section 5.4 is aimed more at limiting what
                kinds of coding for languages and modalities are
                acceptable. <br>
              </p>
              <p>Some viewpoints said that such limitations are not
                needed and that 5.4 can be deleted. <br>
              </p>
              <p>I think we can do something in between these extremes.
                We can introduce explanations for what is required from
                an acceptable coding of a combination of media,
                languages, directions, and other parameters and explain
                basic ways to assess what is the resulting modality, and
                also explain that the more common media and language
                combinations that are used, the higher chance there is
                for a match. Thus unusual combinations are discouraged
                but not forbidden as long as the modality can be
                assessed from them. They can be used in specific
                applications. <br>
              </p>
              <p>I might continue tomorrow with a wording proposal for
                the reasoning above, hoping that we can close issue #43
                and the discussions around 5.4 soon.</p>
              <span class="HOEnZb"><font color="#888888">
                  <p>/Gunnar<br>
                  </p>
                </font></span>
              <div>
                <div class="h5"> <br>
                  <div class="m_-374027455181744526moz-cite-prefix">Den
                    2017-10-17 kl. 11:02, skrev Gunnar Hellström:<br>
                  </div>
                  <blockquote type="cite">
                    <p>An even more general way to express what section
                      5.4 tries to say is:</p>
                    <p>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------<br>
                    </p>
                    <p>5.4 Combinations of Language tags and Media
                      descriptions</p>
                    <p>The combination of Language tags and other
                      information in the media descriptions should be
                      made so that the intended modality can be
                      concluded by the negotiating parties.</p>
                    <p><br>
                    </p>
                    <p>
                      ------------------------------<wbr>------------------------------<wbr>----------------------------</p>
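For concreteness, an offer consistent with that principle might look like the following sketch (assuming the a=hlang-send/a=hlang-recv attributes from the draft; "ase" is the subtag for American Sign Language, "en" for English; port numbers are placeholders):

```
m=video 49170 RTP/AVP 99
a=hlang-send:ase
a=hlang-recv:ase
m=audio 49172 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=text 49174 RTP/AVP 100
a=hlang-send:en
a=hlang-recv:en
```

Here the video description carries a sign language tag while audio and text carry the same non-signed tag, so the intended modality of each stream follows from the media type alone.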
                    <p>That spares us from investigating what is
                      possible today, and what further attributes or
                      coding rules may be added in the future. <br>
                    </p>
                    <p>There is a risk that implementers start using
                      some insufficient coding that can cause interop
                      issues. On the other hand, we do not limit valid
                      uses that we just have not thought about by
                      declaring specific combinations out of scope or
                      undefined. It is up to implementers to check that
                      the combinations they use result in an unambiguous
                      modality.<br>
                    </p>
                    <p>And it opens the door for possible new
                      attributes, e.g. a=modality:spoken or
                      a=modality:written, to complement the undefined
                      case where a non-signed language tag without a
                      script subtag is used in video media, and also to
                      explain any use of m=application or m=message
                      media in interactive communication. <br>
                    </p>
                    <p>It does not really answer Issue #43 by explaining
                      HOW to assess the modality easily, but it requires
                      the implementers to make sure that it is possible.</p>
                    <p>And deducing the intended modality is the key to
                      successful negotiation and communication.<br>
                    </p>
                    <p>Do you think this would be clear enough, or do we
                      need to go into what clear cases we have?<br>
                    </p>
                    <p>Gunnar<br>
                    </p>
                    <p><br>
                    </p>
                    <br>
                    <div class="m_-374027455181744526moz-cite-prefix">Den
                      2017-10-17 kl. 00:21, skrev Gunnar Hellström:<br>
                    </div>
                    <blockquote type="cite"> Den 2017-10-16 kl. 01:21,
                      skrev Bernard Aboba:<br>
                      <blockquote type="cite">
                        <div dir="ltr">Paul said: 
                          <div><br>
                          </div>
                          <div>""<span
                              style="color:rgb(80,0,80);font-size:12.8px">-
                              can the UA use this information to change
                              how to render the media?"</span></div>
                          <div><span
                              style="color:rgb(80,0,80);font-size:12.8px"><br>
                            </span></div>
                          <div><span
                              style="color:rgb(80,0,80);font-size:12.8px">[BA] 
                              If the video is used for signing, an
                              application might infer an encoder
                              preference for frame rate over resolution
                              (e.g. in WebRTC, RTCRtpParameters.<wbr>degradationPreference
                              = "maintain-framerate" )</span></div>
                        </div>
                      </blockquote>
                      &lt;GH&gt;Right, that is a valid example of how
                      real "knowledge" of the modality can be used by
                      the application. <br>
                      <br>
                      <br>
                      And, as a response on issue #43,<br>
                      <br>
                      A simple way is to say<br>
                      <br>
                      Video media descriptions shall only contain sign
                      language tags<br>
                      Audio media descriptions shall only contain
                      language tags for spoken language<br>
                      Text media descriptions shall only contain
                      language tags for written language<br>
                      Use of other media descriptions such as message
                      and application with language indications requires
                      other specifications on how to assess the modality
                      for non-signed languages.<br>
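The simple rules above could be sketched as follows (a hypothetical helper, not text from the draft; whether a tag is signed would come from a registry lookup):

```python
def assess_modality(media: str, tag_is_signed: bool) -> str:
    """Return the modality implied by a media type and language tag kind,
    following the simple rules above: video carries only sign language
    tags, audio only spoken, text only written."""
    if media == "video" and tag_is_signed:
        return "signed"
    if media == "audio" and not tag_is_signed:
        return "spoken"
    if media == "text" and not tag_is_signed:
        return "written"
    # message, application, or mismatched combinations need
    # other specifications, as noted above.
    return "undefined"
```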
                      <br>
                      The current 5.4 does not mention our main problem
                      with the language tags: they do not differ
                      depending on whether we mean use for spoken or
                      written language. We should have made better
                      efforts to solve that problem long ago, but we
                      have not.<br>
                      <br>
                      5.4 can be modified to specify the simple limited
                      case and the problems that block us from
                      specifying other cases:<br>
                      <br>
                      <pre class="m_-374027455181744526newpage" style="font-size:13.3333px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial"><span class="m_-374027455181744526h3" style="line-height:0pt;display:inline;white-space:pre-wrap;font-family:monospace;font-size:1em;font-weight:bold"><h3 style="line-height:0pt;display:inline;white-space:pre-wrap;font-family:monospace;font-size:1em;font-weight:bold"><a class="m_-374027455181744526selflink" name="m_-374027455181744526_section-5.4" href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4" style="color:black;text-decoration:none" target="_blank" moz-do-not-send="true">5.4</a>.  Media and modality Combination problems</h3></span>


   The problem of indicating a language tag for the view of a speaking person in a video stream is out of scope for this document.

   The problem of indicating a language tag for use of written language coded as a component in a video stream is out of scope for this document.

   The use of language tags for negotiation of languages in other media than audio, video and text is not defined in this document.

   Which language tags are signed and which are not can be deduced 
   from the IANA language tag registry. How this is done is out of scope of this document.
</pre>
                      <br>
                      ------------------------------<wbr>--------------------------<br>
                      <br>
                      <br>
                      But if we want to allow more cases, we need to
                      consider the following complications:<br>
                        <br>
                      <br>
                      1. To assess whether a language represents a sign
                      language, the application can look for the word
                      "sign" in the description in the IANA language
                      registry or a copy thereof, as Randall already
                      indicated. <br>
                      <br>
                      2. For written languages used as a text component
                      in a video stream, it is possible to code this for
                      languages requiring a script subtag, but not for
                      languages with suppressed script subtags <br>
                      <br>
                      3. We have also discussed proposals for how to
                      code written language in a video stream for
                      languages not requiring a script subtag, but did
                      not get acceptance for them. So we need to say
                      that that is currently undefined.<br>
                      <br>
                      4. We also discussed how to code a view of a
                      speaking person in video and said that that could
                      be done by using the "definitively not written"
                      script subtag on a non-signed language tag in
                      video. But that was not appreciated by the
                      language experts. Another option was to not allow
                      written language overlaid on video, and that is
                      the option used lately (up to version -16 or so).
                      <br>
                      <br>
                      5. For spoken language in audio media, that is the
                      only case we have for language tags in audio. So
                      it is easy to code and assess.<br>
                      <br>
                      6. For written language in text media, a check can
                      be made whether "sign" is part of the language tag
                      description; if not, it is a written language. <br>
                      <br>
                      7. For signed language in text media, a check can
                      be made whether "sign" is part of the language tag
                      description; if it is, it is a signed language in
                      text notation. (extremely unusual)<br>
                      <br>
                      8. For use of language tags in other media than
                      audio, video and text, a description of how to
                      assess the modality is needed, especially for
                      non-signed languages, before such use.<br>
                      <br>
                      <br>
                      We can construct a section 5.4 to describe this
                      situation, but I doubt that it is worth the
                      effort.<br>
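Point 1 above could be sketched as follows (a hypothetical helper; the IANA Language Subtag Registry is a plain-text file of records separated by "%%" lines, each record holding "Field: value" entries; a real implementation would also handle wrapped description lines):

```python
def signed_language_subtags(registry_text: str) -> set:
    """Collect subtags whose Description mentions "sign" (point 1 above).

    registry_text is the IANA Language Subtag Registry: records
    separated by "%%" lines, with "Subtag:" and "Description:" fields.
    """
    signed = set()
    for record in registry_text.split("%%"):
        subtag, descriptions = None, []
        for line in record.splitlines():
            if line.startswith("Subtag:"):
                subtag = line.split(":", 1)[1].strip()
            elif line.startswith("Description:"):
                descriptions.append(line.split(":", 1)[1].strip().lower())
        # A real check should match the word "sign" rather than the
        # substring, to avoid false hits in other descriptions.
        if subtag and any("sign" in d for d in descriptions):
            signed.add(subtag)
    return signed
```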
                      <br>
                      <br>
                      <blockquote type="cite">
                        <div dir="ltr">
                          <div><span
                              style="color:rgb(80,0,80);font-size:12.8px"><br>
                            </span></div>
                          <div><span
                              style="color:rgb(80,0,80);font-size:12.8px">See:  </span><font
                              color="#500050"><span
                                style="font-size:12.8px"><a
href="https://rawgit.com/w3c/webrtc-pc/master/webrtc.html#dom-rtcrtpparameters-degradationpreference"
                                  target="_blank" moz-do-not-send="true">https://rawgit.com/w3c/<wbr>webrtc-pc/master/webrtc.html#<wbr>dom-rtcrtpparameters-<wbr>degradationpreference</a></span></font></div>
                        </div>
                        <div class="gmail_extra"><br>
                          <div class="gmail_quote">On Sun, Oct 15, 2017
                            at 2:22 PM, Gunnar Hellström <span
                              dir="ltr">&lt;<a
                                href="mailto:gunnar.hellstrom@omnitor.se"
                                target="_blank" moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a>&gt;</span>
                            wrote:<br>
                            <blockquote class="gmail_quote"
                              style="margin:0 0 0 .8ex;border-left:1px
                              #ccc solid;padding-left:1ex"><span>Den
                                2017-10-15 kl. 21:27, skrev Paul
                                Kyzivat:<br>
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> On 10/15/17
                                  1:49 PM, Bernard Aboba wrote:<br>
                                  <blockquote class="gmail_quote"
                                    style="margin:0 0 0
                                    .8ex;border-left:1px #ccc
                                    solid;padding-left:1ex"> Paul said:<br>
                                    <br>
                                    "For the software to know must mean
                                    that it will behave differently for
                                    a tag that represents a sign
                                    language than for one that
                                    represents a spoken or written
                                    language. What is it that it will do
                                    differently?"<br>
                                    <br>
                                    [BA] In terms of behavior based on
                                    the signed/non-signed distinction,
                                    in -17 the only reference appears to
                                    be in Section 5.4, stating that
                                    certain combinations are not defined
                                    in the document (but that definition
                                    of those combinations was out of
                                    scope):<br>
                                  </blockquote>
                                  <br>
                                  I'm asking whether this is a
                                  distinction without a difference. I'm
                                  not asking whether this makes a
                                  difference in the *protocol*, but
                                  whether in the end it benefits the
                                  participants in the call in any way. <br>
                                </blockquote>
                              </span> &lt;GH&gt;Good point, I was on my
                              way to make a similar comment earlier
                              today. The difference it makes for
                              applications to "know" what modality a
                              language tag represents in its used
                              position seems to be only for imagined
                              functions that are out of scope for the
                              protocol specification.<span><br>
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> For instance:<br>
                                  <br>
                                  - does it help the UA to decide how to
                                  alert the callee, so that the<br>
                                    callee can better decide whether to
                                  accept the call or instruct the<br>
                                    UA about how to handle the call?<br>
                                </blockquote>
                              </span> &lt;GH&gt;Yes, for a regular human
                              user-to-user call, the result of the
                              negotiation must be presented to the
                              participants, so that they can start the
                              call with a language and modality that is
                              agreed.<br>
                              That presentation could be exactly the
                              description from the language tag
                              registry, and then no "knowledge" is
                              needed from the application. But it is
                              more likely that the application has its
                              own string for presentation of the
                              negotiated language and modality. So that
                              will be presented. But it is still found
                              by a table lookup between language tag and
                              string for a language name, so no real
                              knowledge is needed.<br>
                              We have said many times that the way the
                              application tells the user the result of
                              the negotiation is out of scope for the
                              draft, but it is good to discuss and know
                              that it can be done.<br>
                              A similar mechanism is also needed for
                              configuration of the user's language
                              preference profile further discussed
                              below.<span><br>
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> <br>
                                  - does it allow the UA to make a
                                  decision whether to accept the media?<br>
                                </blockquote>
                              </span> &lt;GH&gt;No, the media should be
                              accepted regardless of the result of the
                              language negotiation.<span><br>
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> <br>
                                  - can the UA use this information to
                                  change how to render the media?<br>
                                </blockquote>
                              </span> &lt;GH&gt;Yes, for the specialized
                              text notation of sign language we have
                              discussed but currently placed out of
                              scope, a very special rendering
                              application is needed. The modality would
                              be recognized by a script subtag to a sign
                              language tag used in text media. However,
                              I think that would be best to also use it
                              with a specific text subtype, so that the
                              rendering can be controlled by invocation
                              of a "codec" for that rendering.<span><br>
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> <br>
                                  And if there is something like this,
                                  will the UA be able to do this
                                  generically based on whether the media
                                  is sign language or not, or will the
                                  UA need to already understand
                                  *specific* sign language tags?<br>
                                </blockquote>
                              </span> &lt;GH&gt;Applications will need
                              to have localized versions of the names
                              for the different sign languages and also
                              for spoken languages and written
                              languages, to be used in setting of
                              preferences and announcing the results of
                              the negotiation. It might be overkill to
                              have such localized names for all
                              languages in the IANA language registry,
                              so it will need to be able to handle
                              localized names of a subset of the
                              registry. With good design however, this
                              is just an automatic translation between a
                              language tag and a corresponding name, so
                              it does in fact not require any
                              "knowledge" of what modality is used with
                              each language tag.<br>
                              The application can ask for the
                              configuration:<br>
                              "Which languages do you want to offer to
                              send in video"<br>
                              "Which languages do you want to offer to
                              send in text"<br>
                              "Which languages do you want to offer to
                              send in audio"<br>
                              "Which languages do you want to be
                              prepared to receive in video"<br>
                              "Which languages do you want to be
                              prepared to receive in text"<br>
                              "Which languages do you want to be
                              prepared to receive in audio"<br>
                              <br>
                              And for each question provide a list of
                              language names to select from. When the
                              selection is made, the corresponding
                              language tag is placed in the profile for
                              negotiation.<br>
                              <br>
                              If the application provides the whole IANA
                              language registry to the user for each
                              question, then there is a possibility that
                              the user by mistake selects a language
                              that requires another modality than the
                              question was about. If the application
                              shall limit the lists provided for each
                              question, then it will need a kind of
                              knowledge about which language tags suit
                              each modality (and media)<span><br>
                                <br>
                                <br>
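The per-question filtering just described could be sketched as (a hypothetical helper, assuming the set of signed-language tags has been derived from the registry beforehand):

```python
def candidates_for_question(media: str, all_tags: list, signed_tags: set) -> list:
    """Filter the tag list offered to the user for one configuration
    question, so a video question lists only sign languages and audio
    or text questions list only non-signed languages."""
    if media == "video":
        return [t for t in all_tags if t in signed_tags]
    # audio and text questions list only non-signed tags
    return [t for t in all_tags if t not in signed_tags]
```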
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> <br>
                                  E.g., A UA serving a deaf person might
                                  automatically introduce a sign
                                  language interpreter into an incoming
                                  audio-only call. If the incoming call
                                  has both audio and video then the
                                  video *might* be for conveying sign
                                  language, or not. If not then the UA
                                  will still want to bring in a sign
                                  language interpreter. But is knowing
                                  the call generically contains sign
                                  language sufficient to decide against
                                  bringing in an interpreter? Or must
                                  that depend on it being a sign
                                  language that the user can use? If the
                                  UA is configured for all the specific
                                  sign languages that the user can deal
                                  with then there is no need to
                                  recognize other sign languages
                                  generically.<br>
                                </blockquote>
                              </span> &lt;GH&gt;We are talking about
                              specific language tags here and knowing
                              what modality they are used for. The user
                              needs to specify which sign languages they
                              prefer to use. The callee application can
                              be made to look for gaps between what the
                              caller offers and what the callee can
                              accept, and from that deduce which type
                              and languages of conversion are needed,
                              and invoke that as a relay service. That
                              invocation can be made
                              completely table driven and have
                              corresponding translation profiles for
                              available relay services. But it is more
                              likely that it is done by having some
                              knowledge about which languages are sign
                              languages and which are spoken languages
                              and sending the call to the relay service
                              to try to sort out if they can handle the
                              translation.<br>
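The gap check described above could be sketched as (a hypothetical helper; real logic would also weigh modalities and the translation profiles of available relay services):

```python
def needed_translation(caller_langs: list, callee_langs: list):
    """Return a (caller_lang, callee_lang) pair needing relay
    translation when the offered and accepted languages do not
    overlap; return None when direct communication is possible."""
    common = set(caller_langs) & set(callee_langs)
    if common:
        return None  # shared language found, no relay needed
    # No overlap: pick one language from each side for a relay request.
    return (caller_langs[0], callee_langs[0])
```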
                              <blockquote class="gmail_quote"
                                style="margin:0 0 0 .8ex;border-left:1px
                                #ccc solid;padding-left:1ex"> <br>
                                <br>
                              </blockquote>
                              So, the answer is - no, the application
                              does not really have any knowledge about
                              which modality a language tag represents
                              in its used position. If the user selects
                              very rare language tag indications for a
                              medium, then a match will just become very
                              unlikely.<br>
                              <br>
                              Where does this discussion take us? Should
                              we modify section 5.4 again?<br>
                              <br>
                              Thanks<span
                                class="m_-374027455181744526HOEnZb"><font
                                  color="#888888"><br>
                                  Gunnar</font></span>
                              <div class="m_-374027455181744526HOEnZb">
                                <div class="m_-374027455181744526h5"><br>
                                  <blockquote class="gmail_quote"
                                    style="margin:0 0 0
                                    .8ex;border-left:1px #ccc
                                    solid;padding-left:1ex">     Thanks,<br>
                                        Paul<br>
                                    <br>
                                    <blockquote class="gmail_quote"
                                      style="margin:0 0 0
                                      .8ex;border-left:1px #ccc
                                      solid;padding-left:1ex">       5.4<br>
                                      &lt;<a
href="https://tools.ietf.org/html/draft-ietf-slim-negotiating-human-language-17#section-5.4"
                                        rel="noreferrer" target="_blank"
                                        moz-do-not-send="true">https://tools.ietf.org/html/d<wbr>raft-ietf-slim-negotiating-hum<wbr>an-language-17#section-5.4</a>&gt;.<br>
                                            Undefined Combinations<br>
                                      <br>
                                      <br>
                                      <br>
                                          The behavior when specifying a
                                      non-signed language tag for a
                                      video<br>
                                          media stream, or a signed
                                      language tag for an audio or text
                                      media<br>
                                          stream, is not defined in this
                                      document.<br>
                                      <br>
                                          The problem of knowing which
                                      language tags are signed and which
                                      are<br>
                                          not is out of scope of this
                                      document.<br>
                                      <br>
                                      <br>
                                      <br>
                                      On Sun, Oct 15, 2017 at 10:13 AM,
                                      Paul Kyzivat &lt;<a
                                        href="mailto:pkyzivat@alum.mit.edu"
                                        target="_blank"
                                        moz-do-not-send="true">pkyzivat@alum.mit.edu</a>
                                      &lt;mailto:<a
                                        href="mailto:pkyzivat@alum.mit.edu"
                                        target="_blank"
                                        moz-do-not-send="true">pkyzivat@alum.mit.edu</a>&gt;<wbr>&gt;
                                      wrote:<br>
                                      <br>
                                          On 10/15/17 2:24 AM, Gunnar
                                      Hellström wrote:<br>
                                      <br>
                                              Paul,<br>
                                              Den 2017-10-15 kl. 01:19,
                                      skrev Paul Kyzivat:<br>
                                      <br>
                                                  On 10/14/17 2:03 PM,
                                      Bernard Aboba wrote:<br>
                                      <br>
                                                      Gunnar said:<br>
                                      <br>
                                                      "Applications not
                                      implementing such specific
                                      notations<br>
                                                      may use the
                                      following simple deductions.<br>
                                      <br>
                                                      - A language tag
                                      in audio media is supposed to
                                      indicate<br>
                                                      spoken modality.<br>
                                      <br>
                                                      [BA] Even a tag
                                      with "Sign Language" in the
                                      description??<br>
                                      <br>
                                                      - A language tag
                                      in text media is supposed to
                                      indicate                 written
                                      modality.<br>
                                      <br>
                                                      [BA] If the tag
                                      has "Sign Language" in the
                                      description,<br>
                                                      can this document
                                      really say that?<br>
                                      <br>
                                                      - A language tag
                                      in video media is supposed to
                                      indicate<br>
                                                      visual sign
                                      language modality except for the
                                      case when<br>
                                                      it is supposed to
                                      indicate a view of a speaking
                                      person<br>
                                                      mentioned in
                                      section 5.2 characterized by the
                                      exact same<br>
                                                      language tag also
                                      appearing in an audio media
                                      specification.<br>
                                      <br>
                                                      [BA] It seems like
                                      an over-reach to say that a spoken<br>
                                                      language tag in
                                      video media should instead be<br>
                                                      interpreted as a
                                      request for Sign Language.  If
                                      this<br>
                                                      were done, would
                                      it always be clear which Sign
                                      Language<br>
                                                      was intended?  And
                                      could we really assume that both<br>
                                                      sides, if
                                      negotiating a spoken language tag
                                      in video<br>
                                                      media, were really
                                      indicating the desire to sign?  It<br>
                                                      seems like this
                                      could easily result in
                                      interoperability<br>
                                                      failure.<br>
                                      <br>
                                      <br>
                                                  IMO the right way to
                                      indicate that two (or more) media<br>
                                                  streams are conveying
                                      alternative representations of the<br>
                                                  same language content
                                      is by grouping them with a new<br>
                                                  grouping attribute.
                                      That can tie together an audio
                                      with a<br>
                                                  video and/or text. A
                                      language tag for sign language on
                                      the<br>
                                                  video stream then
                                      clarifies to the recipient that it
                                      is sign<br>
                                                  language. The grouping
                                      attribute by itself can indicate
                                      that<br>
                                                  these streams are
                                      conveying language.<br>
                                      <br>
                                              &lt;GH&gt;Yes, and that is
                                      proposed in<br>
                                             
                                      draft-hellstrom-slim-modality-<wbr>grouping   
                                      with two kinds of<br>
                                              grouping: one kind of
                                      grouping tells that two or more<br>
                                              languages in different
                                      streams are alternatives with the
                                      same<br>
                                              content, with a priority
                                      order assigned to them to guide
                                      the<br>
                                              selection of which one to
                                      use during the call. The other
                                      kind of<br>
                                              grouping tells that two
                                      or more languages in different
                                      streams<br>
                                              are desired together, with
                                      the same language content but<br>
                                              different modalities
                                      (such as captioned
                                      telephony<br>
                                              with the same content
                                      provided in both speech and text,
                                      or sign<br>
                                              language interpretation
                                      where you see the interpreter, or<br>
                                              possibly spoken language
                                      interpretation with the languages<br>
                                              provided in different
                                      audio streams). I hope that
                                      draft<br>
                                              can be progressed. I see
                                      it as a needed complement to the
                                      pure<br>
                                              language indications per
                                      media.<br>
                                      <br>
                                      <br>
                                          Oh, sorry. I did read that
                                      draft but forgot about it.<br>
                                      <br>
                                              The discussion in this
                                      thread is more about how an
                                      application<br>
                                              would easily know that
                                      e.g. "ase" is a sign language and
                                      "en" is<br>
                                              a spoken (or written)
                                      language, and also a discussion
                                      about what<br>
                                              kinds of languages are
                                      allowed and indicated by default
                                      in each<br>
                                              media type. It was not at
                                      all about using language
                                      tags<br>
                                              in the wrong media type,
                                      as Bernard understood my wording. It
                                      was<br>
                                              rather a limitation on
                                      what modalities are used in each
                                      media<br>
                                              type, and how to determine the
                                      modality in cases that are not<br>
                                              evident, e.g.
                                      "application" and "message" media
                                      types.<br>
                                      <br>
                                      <br>
                                          What do you mean by "know"? Is
                                      it for the *UA* software to know,
                                      or<br>
                                          for the human user of the UA
                                      to know? Presumably a human user
                                      that<br>
                                          cares will understand this if
                                      presented with the information in
                                      some<br>
                                          way. But typically this isn't
                                      presented to the user.<br>
                                      <br>
                                          For the software to know must
                                      mean that it will behave
                                      differently<br>
                                          for a tag that represents a
                                      sign language than for one that<br>
                                          represents a spoken or written
                                      language. What is it that it will
                                      do<br>
                                          differently?<br>
                                      <br>
                                                   Thanks,<br>
                                                   Paul<br>
                                      <br>
                                      <br>
                                              Right now we have returned
                                      to a very simple rule: we define
                                      only<br>
                                              use of spoken language in
                                      audio media, written language in
                                      text<br>
                                              media and sign language in
                                      video media.<br>
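                                              <br>
                                              As a sketch of this simple rule
                                      (assuming the hlang-send and hlang-recv
                                      attributes from
                                      draft-ietf-slim-negotiating-human-language;
                                      the port and payload numbers are
                                      illustrative only), an SDP offer
                                      could look like:<br>
                                              <pre>
m=audio 49170 RTP/AVP 0
a=hlang-send:en
a=hlang-recv:en
m=text 45020 RTP/AVP 103
a=hlang-send:en
a=hlang-recv:en
m=video 51372 RTP/AVP 31
a=hlang-send:ase
a=hlang-recv:ase
</pre>
                                              Here "en" in the audio and
                                      text media indicates spoken and
                                      written English respectively, and
                                      "ase" in the video media indicates
                                      American Sign Language.<br>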
                                              We have discussed other
                                      use, such as a view of a speaking
                                      person<br>
                                              in video, text overlay on
                                      video, a sign language notation in<br>
                                              text media, written
                                      language in message media, written
                                      language<br>
                                              in WebRTC data channels,
                                      and signed, written, and spoken
                                      language in bucket media,<br>
                                              perhaps declared as
                                      application media. We do not
                                      define these<br>
                                              cases. They are just not
                                      defined, not forbidden. They may
                                      be<br>
                                              defined in the future.<br>
                                      <br>
                                              My proposed wording in
                                      section 5.4 drew too many<br>
                                              misunderstandings, so I
                                      gave up on it. I think we can
                                      live with<br>
                                              5.4 as it is in version
                                      -16.<br>
                                      <br>
                                              Thanks,<br>
                                              Gunnar<br>
                                      <br>
                                      <br>
                                      <br>
                                                  (IIRC I suggested
                                      something along these lines a long
                                      time ago.)<br>
                                      <br>
                                                       Thanks,<br>
                                                       Paul<br>
                                      <br>
                                                 
                                      ______________________________<wbr>_________________<br>
                                                  SLIM mailing list<br>
                                                  <a
                                        href="mailto:SLIM@ietf.org"
                                        target="_blank"
                                        moz-do-not-send="true">SLIM@ietf.org</a>
                                      &lt;mailto:<a
                                        href="mailto:SLIM@ietf.org"
                                        target="_blank"
                                        moz-do-not-send="true">SLIM@ietf.org</a>&gt;<br>
                                                  <a
                                        href="https://www.ietf.org/mailman/listinfo/slim"
                                        rel="noreferrer" target="_blank"
                                        moz-do-not-send="true">https://www.ietf.org/mailman/l<wbr>istinfo/slim</a><br>
                                                  &lt;<a
                                        href="https://www.ietf.org/mailman/listinfo/slim"
                                        rel="noreferrer" target="_blank"
                                        moz-do-not-send="true">https://www.ietf.org/mailman/<wbr>listinfo/slim</a>&gt;<br>
                                      <br>
                                      <br>
                                      <br>
                                      <br>
                                      <br>
                                    </blockquote>
                                    <br>
                                  </blockquote>
                                  <br>
                                </div>
                              </div>
                              <div class="m_-374027455181744526HOEnZb">
                                <div class="m_-374027455181744526h5"> --
                                  <br>
                                  ------------------------------<wbr>-----------<br>
                                  Gunnar Hellström<br>
                                  Omnitor<br>
                                  <a
                                    href="mailto:gunnar.hellstrom@omnitor.se"
                                    target="_blank"
                                    moz-do-not-send="true">gunnar.hellstrom@omnitor.se</a><br>
                                  <a href="tel:%2B46%20708%20204%20288"
                                    value="+46708204288" target="_blank"
                                    moz-do-not-send="true">+46 708 204
                                    288</a><br>
                                  <br>
                                </div>
                              </div>
                            </blockquote>
                          </div>
                          <br>
                        </div>
                      </blockquote>
                      <br>
                    </blockquote>
                    <br>
                    <br>
                    <fieldset
                      class="m_-374027455181744526mimeAttachmentHeader"></fieldset>
                    <br>
                  </blockquote>
                  <br>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
-----------------------------------------
Gunnar Hellström
Omnitor
<a class="moz-txt-link-abbreviated" href="mailto:gunnar.hellstrom@omnitor.se">gunnar.hellstrom@omnitor.se</a>
+46 708 204 288</pre>
  </body>
</html>

--------------05FF34FA2EE7BBD75AE804E2--

