Randy Howard
Suppose you want to have a large number of items (as an array of structs)
wherein one field is "record-specific" and of variable length, without
violating standard C (C90, most likely, since C99 isn't available on the
platforms in question), so that other modules can get at the data at
run-time.
I could have the structure contain:
struct element_record
{
    ... /* some number of conventional entries */
    size_t blob_size;
    unsigned char *blob;
};
Then, manually declare individually named arrays with the raw "blob"
data of varying sizes, and manually put them and their size into an
array of structures like the one above.
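A minimal C90 sketch of that first approach, with invented names and blob
contents purely for illustration: each blob is a separately named array of
exactly the right size, and the record table stores a pointer plus the true
length.

```c
#include <stddef.h>

struct element_record
{
    size_t blob_size;
    const unsigned char *blob;
};

/* Individually named arrays, each exactly as large as its data.
   (Contents here are made up for the example.) */
static const unsigned char blob_a[] = { 0x01, 0x02 };
static const unsigned char blob_b[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0x10, 0x20 };

/* The table ties each record to its blob; sizeof on the named
   array gives the per-record size with no manual counting. */
static const struct element_record records[] = {
    { sizeof blob_a, blob_a },
    { sizeof blob_b, blob_b }
};
```

The obvious drawback, as noted, is the manual bookkeeping: every new blob
needs its own named array and its own entry in the table.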
Or,
#define WORST_CASE 2048

struct element_record
{
    ... /* some number of conventional entries */
    size_t blob_size;
    unsigned char blob[WORST_CASE];
};
and initialize them in place, or read the data in from a file, but this
winds up bloating the size of the binary (or memory usage) considerably.
For this application, there is a very wide range from smallest to largest
in this variable portion of the data (from 2 bytes up to a little under
2K), and several hundred instances.
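To put a rough number on that bloat: every record in the worst-case scheme
pays for the full 2K whether its blob is 2 bytes or 2000. A sketch, using a
hypothetical count of 400 records (the post says only "several hundred"):

```c
#include <stddef.h>

#define WORST_CASE 2048
#define NUM_RECORDS 400  /* illustrative; "several hundred instances" */

struct fixed_record
{
    size_t blob_size;
    unsigned char blob[WORST_CASE];
};

/* The whole table occupies at least NUM_RECORDS * WORST_CASE bytes
   (roughly 800K here) regardless of how small each blob really is,
   whether that space lands in the binary image or in memory. */
```

If the average blob is well under the 2K maximum, most of that space is
dead weight, which is exactly the objection raised above.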
This seems like it should be really obvious, but I haven't tripped over
this before and I'm hoping someone can point out what I'm missing without
using some compiler-specific extension.