
  • From: Michael Sokolov <sokolov@i...>
  • To: David Lee <dlee@c...>
  • Date: Fri, 03 Jun 2011 12:20:29 -0400

On 6/3/2011 11:56 AM, David Lee wrote:

It's "trivial", yes, but it's not "right" IMHO :)

Nor is it necessarily efficient.

 

I wouldn't bet a case of beer that, for a large value of attribute x,

     points = fn:tokenize( $x , "[ ,]")

is more efficient than, for a node x with point children,

     points = $x/point

I can imagine that in some processors, for some size of $x, one or the other is more efficient.

But is that a reason to make the design decision for a (potentially) widely used standard schema?

This is a serious question, not rhetorical.

Hmm - I can imagine some cases where the document size would more than double using this scheme - not an unimportant consideration for high-performance worldwide document distribution.  Perhaps the ideal would have been to allow both encodings.
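The thread's examples are XQuery; as a rough illustration, here is a small Python sketch (hypothetical data, with `xml.etree.ElementTree` and `re.split` standing in for an XPath engine and `fn:tokenize`) showing both encodings and the serialized-size overhead of the element form:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical sample: the same point list in both encodings.
attr_doc = '<line points="1,2 3,4 5,6"/>'
elem_doc = ('<line>'
            '<point>1,2</point><point>3,4</point><point>5,6</point>'
            '</line>')

# Attribute encoding: one tokenize pass, analogous to fn:tokenize($x, "[ ,]").
line = ET.fromstring(attr_doc)
tokens = re.split(r'[ ,]', line.get('points'))
print(tokens)    # ['1', '2', '3', '4', '5', '6']

# Element encoding: each point is already a separate node, like $x/point.
line2 = ET.fromstring(elem_doc)
points = [p.text for p in line2.findall('point')]
print(points)    # ['1,2', '3,4', '5,6']

# The markup overhead shows up directly in the serialized sizes:
print(len(attr_doc), len(elem_doc))    # here the element form is over 2x larger
```

Which side wins on parse/query speed would still depend on the processor, but the size difference for this toy data already exceeds the doubling mentioned above.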

-Mike



