Immortal Nephi
The programmers want to create a big array with one million elements.
They need to decide how to store the big array: in the source code or
in a binary file.
If they choose compile time, they fill in the data for each element of
the big array in the source code.
If they choose dynamic run time, they load the data from a binary file
through fstream.
Please let me know: are both examples acceptable to the programmers'
preference? If you are against compile time or dynamic run time,
please explain your opinion.
For example:
// Compile Time
class ArrayName
{
public:
    ArrayName() {}
    ~ArrayName() {}

private:
    static const unsigned char s_kData[ 1000000 ];
};

const unsigned char ArrayName::s_kData[ 1000000 ] =
{
    0x12, 0x15, 0x45, // ...more data go into the elements
};
You can create multiple ArrayName objects; all ArrayName objects
share the single s_kData.
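Nobody types a million initializers by hand, so a table like s_kData is
usually emitted by a small generator program. Here is a minimal sketch
(the MakeInitializer name and the 12-bytes-per-line layout are my own,
not part of the question):

```cpp
#include <cstddef>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

// Sketch: turn raw bytes into the body of a C++ array initializer.
// The caller writes the surrounding declaration and braces.
std::string MakeInitializer( const std::vector<unsigned char>& data )
{
    std::ostringstream out;
    for( std::size_t i = 0; i < data.size(); ++i )
    {
        out << "0x" << std::hex << std::setw( 2 ) << std::setfill( '0' )
            << (unsigned) data[ i ] << ",";
        out << ( ( i % 12 == 11 ) ? "\n" : " " ); // 12 bytes per line
    }
    return out.str();
}
```

Feeding the binary file's bytes through this and pasting the result
between the braces gives a definition in the same shape as s_kData
above.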
// Dynamic Run Time
class ArrayName
{
public:
    ArrayName()
    {
        if( s_pData == 0 )
        {
            s_pData = new unsigned char[ 1000000 ];
            // Do something
            // Use fstream to open and read data from binary file
            // Copy and store data into s_pData memory
            // Close binary file
        }
        ++s_count; // count objects so only the last one deallocates
    }

    ~ArrayName()
    {
        if( --s_count == 0 && s_pData != 0 )
        {
            delete [] s_pData;
            s_pData = 0;
        }
    }

private:
    static unsigned char* s_pData;
    static unsigned       s_count;
};

unsigned char* ArrayName::s_pData = 0;
unsigned       ArrayName::s_count = 0;
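The three comment lines in the constructor can be sketched with
fstream like this (a sketch only: the LoadData helper and the file
name "data.bin" are my assumptions, and error handling is minimal):

```cpp
#include <cstddef>
#include <fstream>

// Sketch of the constructor's load step: open the binary file,
// read the bytes into a freshly allocated buffer, and let the
// ifstream close itself when it goes out of scope.
unsigned char* LoadData( const char* fileName, std::size_t size )
{
    unsigned char* pData = new unsigned char[ size ];

    std::ifstream file( fileName, std::ios::binary );
    if( file )
    {
        file.read( reinterpret_cast<char*>( pData ), size );
    }
    return pData;
}
```

Inside the constructor this would become something like
s_pData = LoadData( "data.bin", 1000000 );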
If you create more than one ArrayName object, the first object
allocates the memory; the second and later objects do not need to
reallocate anything and simply share s_pData.
After you are ready to destruct all ArrayName objects, the last object
tests whether the pointer is non-null before it deallocates the memory
and sets s_pData back to zero. The first object and the other objects
are prevented from deallocating s_pData.
Please share your opinions.