TungstenCoil
Hello,
Has anyone encountered a problem with, or have any ideas about, the $MFT on NTFS becoming corrupted while using std::ofstream? I had some code that would run for a few days and then suddenly crash the system, and Windows XP would report that the MFT was corrupt. The only thing to do was reinstall Windows (it wouldn't boot, even in Safe Mode).
I've pared my code down to the offending portion: my code writes many millions of small text files. When I reach approximately 25 million files, the corruption happens. I am nowhere near filling the disk. I've eliminated short file names and increased the size of the MFT. This problem happens on many different hardware configurations: single-, hyper-, and multi-processor machines with single, multiple, and RAID drives, as well as flat-directory and nested-subdirectory layouts (I've tried a lot of things to narrow this down...).
A few more points:
- I am well under the NTFS limit of about 4 billion files per volume. As well, I've disabled short (8.3) file names and expanded the size of the MFT zone, as suggested by MSDN and http://www.ntfs.com.
- I know it seems unusual, but the project dictates that things be handled this way. In a nutshell, my module manages the placement of these small files within the directory structure; the requirements are such that many millions of small files will be placed within the structure.
- During testing, the hardware was crashing, not the software (though the software ceased doing anything useful). I slowly whittled the problem down to the point where I was simply outputting 25+ million plain ASCII text files of the appropriate size (4-16 KB).
- It's been tested on many hardware combinations; all crash. I agree that it should not happen, but it does, repeatedly and reproducibly.
#include <fstream>
#include <iostream>
#include <string>
#include <cstdlib>   // srand, rand, _itoa (MSVC)
#include <ctime>     // time
#include <direct.h>  // _mkdir (MSVC)

void WriteFlatFileOnly(const std::string &test_directory, const int iterations)
{
    srand((unsigned)time(NULL));
    std::ofstream dummy_file;
    std::string file_name;
    int file_size = 0;
    int error = _mkdir(test_directory.c_str()); // fails harmlessly if the directory exists
    (void)error;

    for (int i = 0; i < iterations; i++) { // create a grand total of `iterations` files
        file_name = test_directory + "/";
        char buffer[100];
        file_name += _itoa(i, buffer, 10);
        file_name += "DummyFile.txt";

        // START WRITE
        // Note: iostreams don't throw unless exceptions() is enabled on the
        // stream, so these catch blocks never actually fire as written.
        try {
            dummy_file.open(file_name.c_str(), std::ios::out | std::ios::binary);
        }
        catch (std::exception &error) {
            std::cerr << error.what() << std::endl;
            return;
        }

        file_size = ((rand() % 12000) + 4000); // random file size between ~4K and ~16K
        for (int j = 0; j < file_size; j++) {
            try {
                dummy_file << "A"; // output a byte
            }
            catch (std::exception &error) {
                std::cerr << error.what() << std::endl;
                return;
            }
        }

        try {
            dummy_file.close();
        }
        catch (std::exception &error) {
            std::cerr << error.what() << std::endl;
            return;
        }
        // END WRITE
    }
}
// END CODE
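One thing I noticed while cutting this down: std::ofstream doesn't actually throw on failure unless exceptions are enabled on the stream, so the catch blocks above never fire. A variant that would actually surface stream errors looks roughly like this (a sketch; the exceptions() mask and the single write() call are my changes, not the original code):

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Sketch: write one dummy file, with iostream exceptions enabled so a failed
// open/write/close is reported instead of silently setting failbit.
bool WriteOneDummyFile(const std::string &file_name, int file_size)
{
    std::ofstream dummy_file;
    // Make the stream throw std::ios_base::failure on failbit or badbit.
    dummy_file.exceptions(std::ofstream::failbit | std::ofstream::badbit);
    try {
        dummy_file.open(file_name.c_str(), std::ios::out | std::ios::binary);
        std::vector<char> payload(file_size, 'A');
        dummy_file.write(&payload[0], payload.size()); // one write instead of byte-by-byte
        dummy_file.close();
    }
    catch (std::exception &error) {
        std::cerr << file_name << ": " << error.what() << std::endl;
        return false;
    }
    return true;
}

If the runtime sees any error before the volume goes bad, this version should at least report it.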
This example writes everything to a single directory. It was the eventual "simplest example" that I worked down to; this, as well as the original code that creates a robust directory/subdirectory structure, eventually corrupts the MFT. Both do so at approximately the same point.
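For anyone curious what I mean by the nested case: the real module spreads the files over subdirectories rather than one flat directory. The scheme below is a simplified, hypothetical stand-in for the idea, not the original module (the 10,000-files-per-directory bucket size is just an illustration):

#include <string>
#include <cstdio>    // sprintf
#include <direct.h>  // _mkdir (MSVC)

// Hypothetical illustration of the nested layout: bucket files into
// subdirectories of at most files_per_dir entries each, e.g.
//   root/0/0DummyFile.txt ... root/0/9999DummyFile.txt, root/1/10000DummyFile.txt ...
std::string NestedFileName(const std::string &root, int i, int files_per_dir)
{
    char buffer[64];
    sprintf(buffer, "%d", i / files_per_dir);   // subdirectory index
    std::string dir = root + "/" + buffer;
    _mkdir(dir.c_str());                        // harmless if it already exists
    sprintf(buffer, "%d", i);
    return dir + "/" + buffer + "DummyFile.txt";
}

In the flat test above, file_name would simply be replaced with something like NestedFileName(test_directory, i, 10000).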