Organizing and converting a large number of XML files


Donald Firesmith

We are converting the OPEN Process Framework Repository
(www.donald-firesmith.com) from HTML to XML. The repository contains
over 1,100 free, open-source, reusable process components for building
development methods for software-intensive systems. The current HTML
files are organized into a hierarchy of dozens of folders based on the
natural metamodel of process components on which the framework is
based. I have the following questions:
1) What is the appropriate way to organize and store the XML files?
Along the same lines as now, placing each XML file in the same folder
where the current HTML file resides and where the future generated
XHTML file will reside? We are a non-profit volunteer organization, so
we have little money for databases. Is there a free XML database that
we should use instead?
2) Our website is heavily cross-linked, so that each webpage (one per
reusable process component) links to all of the other process-component
webpages mentioned in it. Currently, our HTML files hardwire these
links to their current locations, making it almost impossible to change
the file structure if the metamodel changes. How can we take advantage
of the fact that the URL for each link should be an attribute of the
target process component, and should therefore be stored in that
component's XML file?
How can we make this work during an incremental transition to XML,
given that we are a volunteer organization with over 1,100 XML files to
generate, not to mention dozens and dozens of XSL and DTD files?

Any advice on how to practically make the transition and organize and
store the files, given our limited resources and the large number of
files, would be greatly appreciated.

By the way, browse the website and let us know what you think. If you
have any need for process on your projects, it is a great resource.

Don Firesmith
Chair, OPEN Process Framework Repository Organization
 

Peter Flynn

Donald said:
We are converting the OPEN Process Framework Repository
(www.donald-firesmith.com) from HTML to XML. The repository contains
over 1,100 free, open-source, reusable process components for building
development methods for software-intensive systems. The current HTML
files are organized into a hierarchy of dozens of folders based on the
natural metamodel of process components on which the framework is
based. I have the following questions:
1) What is the appropriate way to organize and store the XML files?
Along the same lines as now, placing each XML file in the same folder
where the current HTML file resides and where the future generated
XHTML file will reside? We are a non-profit volunteer organization, so
we have little money for databases. Is there a free XML database that
we should use instead?

There are a few, but I have found that for *file* storage, the hierarchical
directory structure of the file system is perfectly adequate, and much,
much faster. You do need to take care and be rigorous about naming, though.
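
For example (a purely illustrative layout; your actual folder and
component names would come from your metamodel, not from me), a naming
scheme that mirrors the metamodel directly might look like:

    repository/
      producers/
        roles/
          code-reviewer.xml
      work-products/
        documents/
          requirements-specification.xml
      work-units/
        tasks/
          code-review.xml

One file per component, named after the component itself, so the path
alone tells you what a file is and where it sits in the metamodel.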
2) Our website is heavily cross-linked, so that each webpage (one per
reusable process component) links to all of the other process-component
webpages mentioned in it. Currently, our HTML files hardwire these
links to their current locations, making it almost impossible to change
the file structure if the metamodel changes. How can we take advantage
of the fact that the URL for each link should be an attribute of the
target process component, and should therefore be stored in that
component's XML file?

If the data is stored in XML, and the link data is kept as (for example)
attributes of some element (they could also be element content, depending
on your XML design), then they can be accessed by whatever transformation
engine you use when generating the HTML, and the appropriate URI generated.
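
As a minimal sketch (the element and attribute names here are invented
for illustration, not taken from your design): suppose a generated
index file, say index.xml, maps each component's id to its published
URL,

    <index>
      <component id="code-reviewer"
                 url="/producers/roles/code-reviewer.html"/>
      <component id="code-review"
                 url="/work-units/tasks/code-review.html"/>
    </index>

and a cross-reference inside a component file carries only the id of
its target:

    <ref idref="code-reviewer">code reviewer</ref>

Then an XSLT 1.0 stylesheet can resolve each reference into a link at
generation time:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Turn each ref into an HTML link, looking its URL up
           in the shared index when the page is generated. -->
      <xsl:template match="ref">
        <xsl:variable name="target" select="@idref"/>
        <a href="{document('index.xml')/index/component[@id=$target]/@url}">
          <xsl:value-of select="."/>
        </a>
      </xsl:template>
    </xsl:stylesheet>

If the metamodel (and hence the folder structure) changes, you
regenerate the index and re-run the transformation; none of the 1,100
component files needs its links edited by hand.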

But you're right, this is a case where a database may be the answer,
simply because it's easier to manage this kind of metadata in bulk (for
example, when your metamodel changes) than to hand-edit the XML (even
though that would still be easier than hand-editing the HTML source).
How can we make this work during an incremental transition to XML,
given that we are a volunteer organization with over 1,100 XML files to
generate, not to mention dozens and dozens of XSL and DTD files?

Without studying it in more detail it's hard to say, but my gut feeling
is to make sure your HTML is utterly rigorous and consistent, and then
transform it to XHTML first. This gives you the opportunity to continue
serving it as HTML while you do it, but provides you with files which can
be machine-handled afterwards, when it comes to making your target XML.
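
For the HTML-to-XHTML step, HTML Tidy can do most of the mechanical
work, along these lines (assuming the input is reasonably clean; you
will still want to spot-check the output):

    tidy -asxhtml -numeric -utf8 -output component.xhtml component.html

run once per file, where component.html stands in for each of your
pages.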

///Peter
 
