Performance of XSLT

starlight

Hello,

there were some posts about this, but nothing I could find useful.
I have a large XML file (80MB) and need certain information out of it.
I thought I could use XSLT with a fairly simple transformation:
....
<xsl:for-each select="/values/STRING/item[I=10]">
  <tr class="own">
    <td><xsl:value-of select="A"/></td>
    <td><xsl:value-of select="B"/></td>
  </tr>
</xsl:for-each>
<tr class="header">
  <td><xsl:value-of select="format-number(sum(/values/STRING/item/A), '###,###')"/></td>
  <td><xsl:value-of select="format-number(sum(/values/STRING/item/B), '###,###')"/></td>
</tr>
....
but the performance is more than miserable (5-6 hours at least!).
How do I solve this problem? Is there a fast XML parser that can do
the work? After all, it's just a straightforward read of a file.

Kind Regards,
Chris
 
Bjoern Hoehrmann

* starlight wrote in comp.text.xml:
> How do I solve this problem? Is there a fast XML parser that can do
> the work? After all, it's just a straightforward read of a file.

Well, which processor did you use up until now? Generally speaking, you
might want to try MSXML and Saxon; Saxon and xsltproc also allow you to
do some tracing and performance analysis. If you reduce the input size
and let them analyze the transformation, you might find out why it's so
slow. Other than that, you've shown too little of the transformation and
the document to give better advice.
 
p.lepin

> there were some posts about this, but nothing I could
> find useful. I have a large XML file (80MB) and need
> certain information out of it. I thought I could use XSLT
> with a fairly simple transformation:
>
> [simple XSLT fragment]
>
> but the performance is more than miserable (5-6 hours at
> least!) How do I solve this problem? Is there a fast
> XML parser that can do the work? After all, it's just a
> straightforward read of a file.

What XSLT processor are you using? I ran a quick test on a
~50MB file filled with junk data (with a simple and regular
XML structure), copying one tenth of the records with
predicates and summing the values of all numeric fields,
using xsltproc (libxslt). It took about ten seconds.
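
For illustration, the stylesheet was roughly of this shape (a sketch
with placeholder names root/item/value, not the exact test stylesheet):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <!-- copy roughly one record in ten, selected by a predicate -->
    <xsl:for-each select="/root/item[position() mod 10 = 0]">
      <xsl:copy-of select="."/>
    </xsl:for-each>
    <!-- sum one of the numeric fields across all records -->
    <xsl:value-of select="sum(/root/item/value)"/>
  </xsl:template>
</xsl:stylesheet>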

I think it has to be one of three possible problems:

- either your XSLT processor is really slow;
- or the stuff you're doing is quite a bit more complex
than your example suggests;
- or you're doing something very inefficiently.
 
Joe Kesselman

XSLT is a programming language. Like any language, its performance
depends on a combination of how well your code is written and how well
the processor can optimize it.

Your example, as written, scans through the entire document three times
-- once in the for-each, then twice in calculating the sums. You didn't
show the context, but if the sequence you've shown us was itself
embedded in another loop...
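
If so, the first fix is to hoist the sums into top-level variables so
each one is computed exactly once, no matter how many times the row is
written out. A minimal sketch, reusing the names from your fragment:

<!-- computed once, at the stylesheet's top level -->
<xsl:variable name="sumA" select="sum(/values/STRING/item/A)"/>
<xsl:variable name="sumB" select="sum(/values/STRING/item/B)"/>
....
<tr class="header">
  <td><xsl:value-of select="format-number($sumA, '###,###')"/></td>
  <td><xsl:value-of select="format-number($sumB, '###,###')"/></td>
</tr>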

Also: Because XSLT supports random access to a document's contents, it
normally operates by reading the entire document into memory and
processing it there. With larger documents that can drive you into
swapping, at which point your PC's performance immediately falls through
the floor. Different processors use different in-memory models which can
exacerbate or reduce this problem. (A few, such as the custom XSLT
processor in the Datapower/IBM "network appliance", can perform some
streaming analysis and *seriously* reduce the read-process-write
overhead for a subset of XSLT; I'm not sure whether their algorithm
would stream this particular example or not.)

Outside of trying different processors in a search for one that's
happier with your example (I'd try Apache Xalan, but I'm biased...), the
thing I'd suggest is that you consider hand-coding this as a SAX
application. The example you've shown us, if that's all you're doing,
could indeed be fully streamed and would then be speed-limited only by
the parser, the serializer, and the rate at which you can get data into
and out of it... and ought to deliver the kind of performance you're
looking for. (Again, being biased, I'd suggest Apache Xerces as the
parser/serializer package if you're working in C or Java, but since SAX
is pretty well standardized you can fairly easily experiment with
different parsers if you want to spend the time on that.)
 
Joe Kesselman

Good point, Pavel. Only 80MB? Unless the querent has a massively
inadequate or overloaded machine (which is possible), that really
shouldn't be a problem; by today's bloated standards that's a smallish
file. We're missing some information.

My suggestion of switching to a SAX-based approach might still make
sense, but I
think it's appropriate to spend more time figuring out where the
bottleneck is in what was actually attempted.
 
p.lepin

> Only 80MB? Unless the querent has a massively inadequate
> or overloaded machine (which is possible), that really
> shouldn't be a problem

Well, I was running the test on one of our data-crunchers,
which is *both* inadequate and overloaded. Still wasn't a
problem.

> My suggestion of switching to a SAX-based approach might still make
> sense

It always does when the problem seems well-suited for
streaming solutions, doesn't it? Granted, typical XSLT
processors are not at all bad at that kind of stuff either,
but coding it in C using a fast SAX parser is probably
going to result in an order-of-magnitude increase in
performance. Which might be just what was needed for smooth
operation.
 
Richard Tobin

> Good point, Pavel. Only 80MB?

80MB is not huge, but there's a big difference between 80MB of
lightly-marked-up text and 80MB of <a>24</a><a>23.4</a><a>... In the
latter case, it could easily expand greatly when parsed.

-- Richard
 
Joseph Kesselman

Richard said:
> 80MB is not huge, but there's a big difference between 80MB of
> lightly-marked-up text and 80MB of <a>24</a><a>23.4</a><a>... In the
> latter case, it could easily expand greatly when parsed.

Depends on what the underlying data model is -- which is why we invented
DTM for the Xalan processor; making every node a Java object would
indeed have been hugely wasteful of memory.
 
starlight

> Depends on what the underlying data model is -- which is why we invented
> DTM for the Xalan processor; making every node a Java object would
> indeed have been hugely wasteful of memory.

Hi, sorry for the late reply!
Richard, that's exactly the problem!

I have 80MB of XML with the following structure:
Code:
....
<suppliers>
<item>
  <ID>21</ID>
  <N>Super Duper Computer store</N>
  <A>24</A>
  <B>18</B>
  <Z>1</Z>
</item>
<item>
  <ID>21</ID>
  <N>Get 1 Pay 2 Computer store</N>
  <A>24</A>
  <B>18</B>
  <Z>2</Z>
</item>
....
</suppliers>
....
<articles>
<item>
  <ID>3</ID>
  <SID>21</SID>
  <A>24</A>
  <B>18</B>
</item>
<item>
  <ID>4</ID>
  <SID>22</SID>
  <A>24</A>
  <B>16</B>
</item>
....
</articles>
....
I'm (ahem... was) using MSXML DOM.
The weird thing is that I don't know how to deal with the problem. Here
is what I am supposed to do:
- Find all suppliers (in 90% of cases only one) for an article. To do
this, use <SID> in "articles", which corresponds to <ID> in
"suppliers", but only for those where <Z> in "suppliers" has the value
2. (<A> and <B> in "articles" are the prices.)
I didn't invent the XML! It's weird!!

Now I don't know how to deal with the case. I tried SAX and DOM, but
the code got ugly so fast that I gave it up yesterday. XPath sounded
like a good option, but its performance is dreadful. ...and by the way,
when using DOM with Java, I got rid of the OutOfMemoryException when I
set the JVM max memory to 1024MB.
Any ideas?
 
Jürgen Kahrs

starlight said:
> - Find all suppliers (in 90% of cases only one) for an article. To do
> this, use <SID> in "articles", which corresponds to <ID> in
> "suppliers", but only for those where <Z> in "suppliers" has the value
> 2. (<A> and <B> in "articles" are the prices.)

The items in your XML data seem to be a simple
list of "records". This kind of data can be
processed efficiently with languages that map
the SAX approach to their internal control flow.
One example of this kind of XML processing is
described in a small booklet that we wrote for
XMLgawk, the XML extension of GNU Awk. XMLgawk
is known to process large numbers of large files
in a short time. But you won't get a DOM:

http://home.vrweb.de/~juergen.kahrs/gawk/XML/xmlgawk.html#Working-with-XML-paths

> ...and by the way, when using DOM with Java, I got rid of the
> OutOfMemoryException when I set the JVM max memory to 1024MB.
> Any ideas?

Try XMLgawk. 80 MB of the data that you describe should
be processed in less than one minute (assuming the
algorithm is as simple as what you described).
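
And if you do stay with XSLT: the usual cure for this kind of
cross-reference is xsl:key, which builds a hash index over the suppliers
in one pass instead of re-scanning them for every article. A minimal,
untested sketch against the structure you posted:

<!-- index the Z=2 suppliers by their ID -->
<xsl:key name="supp" match="suppliers/item[Z = 2]" use="ID"/>

<xsl:template match="articles/item">
  <!-- one hash lookup instead of a linear scan of all suppliers -->
  <xsl:for-each select="key('supp', SID)">
    <xsl:value-of select="N"/>
  </xsl:for-each>
</xsl:template>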
 
