
  • From: Chris Lovett <clovett@m...>
  • To: "'Steven E. Harris'" <steven.harris@t...>, xml-dev@l...
  • Date: Tue, 21 Nov 2000 02:08:50 -0800

I agree that a forward-only streamable subset of XSLT/XPath would be very
useful. Upon further investigation, streaming transformations may turn out to
be a completely different animal. When you change some fundamental
assumptions (like random access in XPath selections), it is wise to revisit
the entire design. I also like the idea of throwing in regular expressions
while you're at it. (Hey, the Schema guys got away with it -
http://www.w3.org/TR/xmlschema-2/#regexs :-)
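To make the forward-only idea concrete, here is a rough sketch (my own
illustration, not any proposed API - the function name, the event model, and
the regex are all assumptions) of a single-pass streaming transform in Python
that applies a regular expression to each element's text as the element
completes, discarding elements as it goes so memory stays roughly constant:

```python
import io
import re
import xml.etree.ElementTree as ET

def stream_transform(source, out, pattern=re.compile(r"\d+")):
    """Forward-only pass: emit a <hit> record for each element whose
    text matches the regex, clearing each element once processed so
    memory use does not grow with document size."""
    out.write("<hits>")
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.text and pattern.search(elem.text):
            out.write("<hit tag='%s'>%s</hit>" % (elem.tag, elem.text))
        elem.clear()  # release the subtree we just finished
    out.write("</hits>")

doc = io.StringIO("<root><a>item 42</a><b>none</b><c>7</c></root>")
result = io.StringIO()
stream_transform(doc, result)
print(result.getvalue())
```

Note that only forward-axis selections are expressible this way - exactly the
restriction a streamable XSLT profile would have to impose.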


Chris Lovett

> -----Original Message-----
> From: Steven E. Harris [mailto:steven.harris@t...]
> Sent: Monday, November 20, 2000 1:57 PM
> To: xml-dev@l...
> Subject: Re: transformations
> 
> 
> Paul Tchistopolskii <paul@q...> writes:
> 
> > When processing a document of 1 MB in size and producing a result
> > document of about 2 MB, the amount of RAM required is not 3 MB. It is
> > much bigger. *Much* bigger. When using key() 'for speed' (it seems
> > reasonable to use key() only for *large* documents, right?), add some
> > more RAM for building the in-memory index.
> 
> Agreed. I had worked on a project for a while with XML files that got
> up over 300MB. Anything other than stream-based processing with
> constant memory usage was impossible.
> 
> Whatever happened to that "stream-processing XSLT profile" thread from
> way back when? The closest thing to an implementation I've seen were
> my own Perl modules and the XML::Twig Perl module.
> 
> -- 
> Steven E. Harris        :: steven.harris@t...
> Tenzing                 :: http://www.tenzing.com
> 
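Paul's key() observation is worth isolating: in a forward-only design, only
the lookup index needs to live in memory, not the document tree. A minimal
sketch (the function name and the iterparse approach are my assumptions, not
anything proposed in the thread) of building a key()-style index in one
streaming pass:

```python
import io
import xml.etree.ElementTree as ET

def build_key_index(source, match_tag, use_attr):
    """One forward pass that builds only the key() lookup table
    (key value -> list of element texts), clearing elements as we
    go so RAM is bounded by the index size, not the document size."""
    index = {}
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == match_tag:
            key = elem.get(use_attr)
            if key is not None:
                index.setdefault(key, []).append(elem.text)
        elem.clear()
    return index

doc = io.StringIO(
    "<orders>"
    "<order cust='a'>pens</order>"
    "<order cust='b'>ink</order>"
    "<order cust='a'>paper</order>"
    "</orders>")
idx = build_key_index(doc, "order", "cust")
print(idx)
```

The trade-off is that the index only covers what has already streamed past,
which is precisely why random-access key() lookups don't fit a forward-only
profile.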
