Mark
We are running a script to load text files into 8 MySQL tables. Each
text file contains tens of thousands of lines, each line containing
pipe-separated values. The script loops until all 8 tables are loaded
and it has worked fine until now. For a new client we must upload
nearly 10 MB of data, and our script gets three-quarters of the way
through the file and then just hangs. Our ISP confirms that the script
has run out of memory.
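(For context, each pass of the loop is essentially the following
pattern; the table name, column list, and file name below are
placeholders, not our real schema:)

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=clientdb', 'user', 'pass',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare(
        'INSERT INTO some_table (col1, col2, col3) VALUES (?, ?, ?)');

    open my $fh, '<', 'some_table.txt' or die "open: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my @fields = split /\|/, $line;   # pipe-separated values
        $sth->execute(@fields[0 .. 2]);
    }
    close $fh;
    $dbh->disconnect;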
Does anyone know of any diagnostic tools that can help us to isolate
the component(s) that are not being cleared from memory? As we use a
shared hosting account through an ISP we don't have direct access to
the server. I have been looking at the Devel::Peek module, but this
seems to be for mod_perl only.
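(For reference, the basic usage, as I read the docs, would be
something like this; %row_cache is a made-up stand-in for one of our
data structures, not a real variable in our script:)

    use strict;
    use warnings;
    use Devel::Peek;    # Dump() prints an SV's internals to STDERR

    my %row_cache = (client => 'acme', rows => 42);   # hypothetical suspect
    Dump(\%row_cache);  # shows the reference's flags and reference count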
The server has produced a core file, but we don't have telnet access
to our shared server account. Are there any suitable core-file readers
(e.g. debuggers) that will run on Windows?
What is meant by including debug data in the application in order to
make the core file more readable?
Is there an alternative way of solving this problem? For example, if
for each text file we opened a piped process running another Perl
script, would memory be returned for further use once the piped
process had terminated?
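(That is, something along these lines; load_one.pl is a hypothetical
helper that loads a single file and is not part of our current
script:)

    use strict;
    use warnings;

    # One short-lived child per text file: when the child exits, the OS
    # reclaims all of its memory, so the parent process stays small.
    for my $file (glob 'data/*.txt') {
        open my $child, '-|', 'perl', 'load_one.pl', $file
            or die "cannot start loader for $file: $!";
        print while <$child>;    # relay the child's progress output
        close $child or warn "loader for $file exited with status $?";
    }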
Many thanks in advance,
Mark