Yash
The functionality required from my program is:
Text files containing around 2 million records in total should be read,
and every record should be split into its component fields. All
records have their fields separated by commas.
After splitting a line into fields, around 20 in-memory lookups have
to be applied to obtain around 20 new fields.
The original line, with the new fields appended to it, should be written
to a different file.
This has to be done within 5 minutes. The program would start just
once and process 2 million records every 5 minutes.
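To make the requirement concrete, the per-record processing described above might look roughly like the sketch below. The lookup tables (`%lookup_a`, `%lookup_b`) and the field positions are hypothetical stand-ins for the roughly 20 real lookups; the structure (split once, probe hashes, append, join) is the point.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical lookup tables; in practice there would be ~20 of these,
# loaded into memory once at startup.
my %lookup_a = ( foo => 'A1', bar => 'A2' );
my %lookup_b = ( x   => 'B1', y   => 'B2' );

# process_line: split one CSV record, apply the lookups, and return
# the original line with the new fields appended.
sub process_line {
    my ($line) = @_;
    chomp $line;
    my @fields = split /,/, $line, -1;    # -1 keeps trailing empty fields
    my @extra;
    push @extra, $lookup_a{ $fields[0] } // '';   # default to empty on a miss
    push @extra, $lookup_b{ $fields[1] } // '';
    return join( ',', $line, @extra ) . "\n";
}

print process_line("foo,x,42");   # foo,x,42,A1,B1
```

In a real run this subroutine body would sit inside a `while (<$in>)` loop writing to the output filehandle, so each record is processed in a single pass with no intermediate storage.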
Given that the program has to be in Perl, would you offer any
suggestions regarding performance optimization so that the speed
requirements can be met?
Thanks
Yash