
Re: [ga] IDN issues (was: On Elections)

  • To: sotiris@xxxxxxxxxxxxxxxxx, ga@xxxxxxxxxxxxxx
  • Subject: Re: [ga] IDN issues (was: On Elections)
  • From: JFC Morfin <jefsey@xxxxxxxxxxxxxxxx>
  • Date: Mon, 08 Oct 2007 17:26:55 +0200

At 07:13 06/10/2007, Sotiris Sotiropoulos wrote:
> JFC wrote:

> > (6) IDNs, and further on LDNs (local/language Domain Names), raise a
> > large number of issues; none of the numerous ICANN committees has
> > discussed them or engaged in proper dialog with the concerned experts.

> I see no reason why subscribers to the GA cannot voice their opinions

GA members are not accepted in the ICANN IDN committees: these are reserved for Constituency members. In addition, some internal disputes may not help (I keep referring to the dispute between Danny and me over the NCUC membership of my organization because it is the only one somewhat known on the GA [not because of any feud with Danny :-)], but there are many other ones which are ignored).

> openly and candidly about their preference for "multilingualization" as
> opposed to "internationalization" of the IDNspace, you have done an
> admirable job of stating your concerns and preferences.  Thank you for
> sharing them with us...  Personally speaking, while I sympathize with your
> conception of the "multilingualization" of the IDNspace,

This is not really mine: it is a consensual worldwide requirement. And now the necessary standardization paradigm is confirmed.

> I am not sure
> that the IAB and the IETF have managed to work out all the problems of an
> English-based ASCII "internationalizing" standardization model,

After years of work, one sees that a fully successful internationalization is not possible in the TCP/IP model. There are many reasons why, but the blocking one is the lack of a presentation layer in the "DoD model". As a pure, non-secure e-system ("English inside"), the Internet architecture did not need one, which saved on prototyping complexity and cost. Now this layer has to be faked at the user application level, using various hybrid solutions such as XML, IDNA, the semantic web, etc.

There are three possibilities:
- inserting a presentation layer. This is a huge thing to do, but I am thinking of something.
- universally faking one (as we fake it today on a per-application/SSDO [like W3C] basis; a sketch of such faking follows below).
- building a virtual Internet atop the current one which would include one.
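
To make this concrete, here is a minimal sketch of my own (an illustration under assumptions, not an IETF mechanism) of what faking a presentation layer at the application level looks like: since the TCP/IP stack only carries opaque bytes, the application itself must declare and apply the text encoding, the way MIME and HTTP do with a charset parameter.

    # Minimal sketch of an application-level "presentation layer": the text
    # encoding is declared in an application header, because the network
    # layers below know nothing about text.
    def wrap(text, charset="utf-8"):
        body = text.encode(charset)
        header = f"Content-Type: text/plain; charset={charset}\r\n\r\n".encode("ascii")
        return header + body

    def unwrap(raw):
        header, _, body = raw.partition(b"\r\n\r\n")
        charset = header.decode("ascii").rsplit("charset=", 1)[1]
        return body.decode(charset)

    print(unwrap(wrap("Καλημέρα")))   # -> Καλημέρα (Greek survives the ASCII-era plumbing)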

If we do not want to waste time or duplicate work for a long while, I think this choice should be optimized against whatever the IETF comes up with (or not) for addressing and routing. It makes a major difference, for example, whether we end up with an IPv6 Internet or an "IPv8" Internet.

> so to
> introduce untold numbers of new complexities to the mix by trying to phase
> in a multilingual approach, should that even be possible at this point,
> may actually cause more confusion than benefit...

Actually, multilingualization is technically much simpler to consider than internationalization (we only know how the world wants it through the confirmation brought by the failure of Debbie's NWIP, and this is what has an impact). In the same way, the IETF has not yet explored John Klensin's proposition.

This being said, you are correct: the IAB/IETF, Open Source, W3C, etc. have had only internationalization in mind for more than a decade, and you cannot change that. This is why internationalization must continue to develop and stay abreast of multilingualization's parallel emergence. There is still a lot of work to be done at the internationalization layer, but no longer to the exclusion of the multilingualization layer.

To understand this, one has to understand that internationalization and multilingualization are not at the same layer of the language support model, and that we have to correct the current IETF layer violation. Once corrected, each layer should simplify and assist the other. There are five communication layers of language support:

(1) no worded language.
This is universalization by numbers, signs, signals, music, etc.

(2) lingualization.
You embed a language into your model. This should only happen at the application layers. This is not the case for the Internet, which only has a protocol stack.

(3) globalization (fathered by Mark Davis, the current Unicode President).
You increase (a) language script support (Unicode), (b) the end system's ability to support another language (localization), and (c) the use of a common reference and filtering grid (langtags).

If you do it at the network layer, you "lingually bias" the Internet, and if you do it at the application layer, you "lingually bias" the application.
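
As an illustration of the langtag "filtering grid" in (3)(c), here is a small sketch of my own, in the spirit of RFC 4647 "lookup" matching; the catalogue of available localizations is hypothetical:

    # BCP 47 langtags as a filtering grid: RFC 4647 "lookup" style matching
    # by progressive truncation of the requested tag.
    AVAILABLE = {"en", "en-US", "fr", "el", "zh-Hant"}   # hypothetical catalogue

    def lookup(requested):
        tag = requested
        while tag:
            if tag in AVAILABLE:
                return tag
            tag = tag.rpartition("-")[0]   # drop rightmost subtag: fr-CA -> fr
        return None

    print(lookup("fr-CA"))    # -> "fr"
    print(lookup("zh-Hant"))  # -> "zh-Hant"
    print(lookup("de-DE"))    # -> None (no German localization on offer)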

(3.1) This is why they accepted IDNA (Internationalized Domain Names in Applications): it posed no architectural problem. The problem is that ICANN does not want to manage it as an application layer issue (using xn--abcd formatted regular DNs and TLDs), but rather as a network issue, as if IDNs actually ran at the network level. This is complicated because IDNA is Unicode-based, and Unicode is definitely not part of a stable, crisp, secure network layer system the way ASCII is. There are solutions to that, but the main problem is to get people to accept that there is a problem (look at the IPv6 situation today).
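
A quick sketch (mine, not from the committees) of how application-level this is: the Unicode label exists only in the application, while the DNS only ever carries the ASCII xn-- form. Python's built-in "idna" codec, which implements the original IDNA 2003 algorithm, shows the round trip:

    # The Unicode form exists only in the application; the DNS sees ASCII.
    label = "bücher"               # what the user types
    ace = label.encode("idna")     # ASCII-Compatible Encoding
    print(ace)                     # b'xn--bcher-kva'
    print(ace.decode("idna"))      # 'bücher' -- decoded back for display only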

(3.2) The whole BCP 47 issue that I had at the IETF is that they documented the second approach (application) while trying to make people believe it was the first one (network), for marketing reasons (access to the IANA). They did not want to discuss IDNs because IDNs were an _other_ application, not a protocol.

(4) multilingualization
This is what I described before. Every language can be supported equally on the technical side (obviously, real support will depend on the way people use the tools: English, Chinese, and French are likely to obtain more supporting content than Inuit). To show that there is no contradiction between globalization and multilingualization, a good definition of multilingualization is "the localization of globalization", which means globalizing every language. This seems a huge and impossible task to centralizing American globalizers; they therefore block themselves at their globalization/internationalization layer. Debbie brought them the correct decentralizing way of an English globalizer, which should dramatically help them organize their transition, but she still confused it with a generalized distributed solution.

(5) metalingualization
The reason they think it is complex is that they start from one language (English, but it could be Chinese or any other single language), so they have no rule/method/know-how/model. They would proceed with a second language the same way as with the first one, in turn creating conflicts. This is what is feared regarding what could come from China and Chinese names.

If you change the paradigm (the way everyone considers something), you can more easily understand how all this works. But which one? As indicated above, the vote shows that ISO prefers to work on its existing distributed paradigm rather than on the decentralized one Debbie proposed; but there is interest in seeing what evolution Debbie's proposition could bring.

What is the current ISO paradigm? A paradigm is also obtained from an example that may serve as a model. ISO 3166 is the paradigm showing that we can agree to write (in languages and scripts) about languages and scripts, independently of languages and scripts. This expresses a more general axiom: metadata semantics is language- and script-invariant. This ultimately means that semantic decoders will allow you to speak in your language and me to receive _your_ semantics in mine (actually better than I do today without such an assistant, through the pragmatic screen, since I need to know the whole context). This can be achieved in at least two ways:

- using polynym (synonyms across every language) solutions such as translation memories,
- or conceiving conceptual languages such as IEML (by the Canadian Pierre Levy).
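
Here is a toy sketch of my own of the first way: the code acts as language- and script-invariant metadata, and each language merely supplies its own polynym for the same concept.

    # ISO 3166 paradigm in miniature: "GR" is the invariant metadata; the
    # polynym table lets each reader receive the semantics in their own language.
    POLYNYMS = {
        "GR": {"en": "Greece", "fr": "Grèce", "el": "Ελλάδα"},
        "FR": {"en": "France", "fr": "France", "el": "Γαλλία"},
    }

    def render(code, lang):
        return POLYNYMS[code][lang]

    print(render("GR", "el"))   # a Greek reader gets: Ελλάδα
    print(render("GR", "fr"))   # a French reader gets: Grèce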

John Klensin's major contribution is to have suggested considering only scripts in the IDNA process, without feeling bound by any linguistic limitation. This means that scripts would belong to the IETF, not languages: scripts are digitally coded information, while languages are the codes for semantics.
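
As a rough sketch of my own of what a script-based (rather than language-based) check could look like; the heuristic that the first word of a Unicode character name is its script holds for common alphabets, but it is not a true Unicode script property lookup:

    import unicodedata

    # Reject labels that mix scripts, without asking what *language* they
    # are in. Heuristic: LATIN/GREEK/CYRILLIC... from the character name.
    def scripts_of(label):
        return {unicodedata.name(ch).split()[0] for ch in label}

    print(scripts_of("παράδειγμα"))   # {'GREEK'} -- one script, acceptable
    print(scripts_of("pаypal"))       # {'LATIN', 'CYRILLIC'} -- the 'а' is Cyrillic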


Practical implementation
Internationalization and IPv6 are much alike: they fit the evolution of the Internet from the unilateral rough-consensus approach to the architecture towards a decentralized influence (this is the IETF framework recently detailed by Harald Alvestrand in RFC 3935). This can work for an international US network (internationalization was invented at IBM to keep in close touch with its foreign users). Multilingualization is like relativity: atoms did not change when Einstein discovered it, but they were considered differently.

Therefore, as with IPv6 and the current ROAP issue (the routing and addressing problem the IETF is now facing), the language problem lies in the way they consider the existing technology, resources, and users.

Languages are a key issue, but only one of several similarly key innovative issues that lead us to better perceive the Internet as distributed. Actually, the IETF's relation with Internet machines is the same as ICANN's relation with us, the users of those machines.

In the same way that ICANN wants to impose on us its old perception of users as people who can be led through influence, the IETF is tied to its old perception of the Internet as a decentralized place (a basic IETF core value) that can be influenced. Neither considers distributed spaces where interoperability and interintelligibility are the key. Both of them have to get real: the Internet is not only made of ICANN contracts and old RFCs.


Impact
The point you make is, therefore, rather important. Languages are a main part of externets' axiologies:

- externets can be called "external network look-alikes within the network", "closed gardens", "user groups", etc.; they are not very well known to Internet architects. John Klensin proposed to use DNS classes to support the Multilingual Internet that way (cf. ICANN ICP-3; see the sketch after this list).

- axiologies are sets of parameters and functions that may define a virtual environment [e.g. a standard] (and that, for example, an ontology may describe [e.g. a norm]).
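
Below is a purely speculative sketch of that class idea (nothing like it is deployed, and the class number is invented), using the third-party dnspython package: each externet would resolve names within its own DNS class instead of the single universal IN class.

    import dns.message, dns.rdataclass, dns.rdatatype

    # Hypothetical "Greek externet" DNS class; only class IN is deployed today.
    greek_class = dns.rdataclass.from_text("CLASS32770")
    query = dns.message.make_query("example.gr.", dns.rdatatype.A,
                                   rdclass=greek_class)
    print(query.question[0])   # example.gr. CLASS32770 A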

Multilingualization de facto brings up this concept (the English externet, or the French and Greek externets, where only one language is used). In order to implement externets, other concepts will emerge, such as "maps" (the real-time description of externet topologies). This leads us into considerations such as NIMROD [a key IETF project in terms of understanding the Internet], but on a multiple basis (one global Internet-wide map per externet). Now that multi-tier service is legal in the US, externets would offer an excellent cross-ISP architecture, etc.

However, at the same time, externets mean a big loss of legitimacy and technical capacity in controlling the "externets of the Internet of local nets" (networks of the network of networks). Practically, this means that the only network-wide, end-to-end continuity everything should be built upon would be "interoperability". This fully conforms to the "protocol on the wire" and "dumb network" Internet principles, but not to the way ICANN tries to govern it; moreover, the IETF has permitted black boxes to develop and is late with OPES, and ISO/IANA have not cooperated yet.

This means that interoperability can only be obtained today through an Open Standards and Open Governance approach. No more IETF standards, US dominance, ICANN control, or even the ITU: only an Interoperability Governance Forum and ISO norms (as explained above, a norm describes "normality" [the average situation] used as interoperable bottom-up standardization, while a standard describes the way one wants to change normality).

Can this work?

It has to. We have to remember that we, as persons, can stay alive and do something else if we become bored; computers cannot. If they are poorly used, they just stop delivering and the network dies.

Cheers.
jfc

"First they ignore you, then they laugh at you, then they fight you, then you win!" Gandhi.
