
  • From: Ihe Onwuka <ihe.onwuka@g...>
  • To: Dimitre Novatchev <dnovatchev@g...>
  • Date: Tue, 7 Oct 2014 08:48:57 +0100


On Thu, Oct 2, 2014 at 9:45 PM, Dimitre Novatchev <dnovatchev@g...> wrote:
On Thu, Oct 2, 2014 at 12:07 PM, Ihe Onwuka <ihe.onwuka@g...> wrote:
> Which brings me on to the last point. JSONiq is not an option if XSLT is
> part of the solution


So, what about using XSLT 3.0 native capabilities for processing JSON:

     http://www.w3.org/TR/2014/WD-xslt-30-20141002/#json


Sorry for the tardy reply, Dimitre. Good question.

The answer is that I probably could, but I don't think I want to.

The problem I am solving (and the source of the JSON) is described here:

http://en.wikibooks.org/wiki/XQuery/Freebase

but I had to reimplement the JSON-to-XML conversion in JSONiq because of the poor performance of the xqjson utility. I haven't yet looked at XSLT 3.0, but I have seen references to xsl:iterate floating around and I imagine that would be the mechanism for iterating over all the cursors.
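For what it's worth, a minimal sketch of how that conversion might look in XSLT 3.0, using the fn:json-to-xml() function from the draft linked above together with xsl:iterate. The variable name $json-text and the 'result' key are assumptions for illustration, not taken from the Freebase feed:

    <xsl:stylesheet version="3.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">

      <!-- $json-text is assumed to hold the raw JSON response as a string -->
      <xsl:param name="json-text" as="xs:string"
          xmlns:xs="http://www.w3.org/2001/XMLSchema"/>

      <xsl:template name="main">
        <!-- json-to-xml() gives the W3C XML representation of the JSON -->
        <xsl:variable name="doc" select="json-to-xml($json-text)"/>
        <!-- xsl:iterate drives the loop over each (hypothetical) result object -->
        <xsl:iterate select="$doc//fn:map[@key = 'result']">
          <xsl:copy-of select="."/>
        </xsl:iterate>
      </xsl:template>
    </xsl:stylesheet>

Whether this performs any better than xqjson I couldn't say without trying it.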

Since in this case a lossless conversion of the JSON can be done, the question I would ask is: what value is added by writing a program that entails data formats from more than one technology domain? The answer is none, so why then risk the disadvantages of an application that is dependent on bleeding-edge technology?

The fact that a cure (JSONiq, XSLT 3.0) exists for a disease (JSON) isn't a good reason to inflict the disease upon yourself. Yes, I called it a disease, because it pops up in places where it has no business being. If you are integrating corporate data feeds or XHTML (the natural format for data scraped from the web), you need someone giving you JSON like you need a hole in the head. The harder you look, the more it looks like the result of a misdiagnosis.




