Alexander Adam
Hi there!
I've got a few (fairly general) questions for the C++ experts out there.
1. I am trying to store arbitrary data (custom PODs, numbers, strings
etc.) in a kind of stream, which is basically an unsigned char* array
that gets resized whenever needed. Writing and reading are done with a
simple memcpy(). Is this the most efficient method, considering that a
single stream might contain up to 10 different data values and that
every read/write (which happens very often, especially the reading)
does such a memcpy? Or is there a better, more general approach for
this kind of thing? Specifically, this is for a rendering tree storage
structure for 2D graphics (a simplified sketch of what I mean is
below, after question 5).
2. What's the fastest and most efficient way to compress and
decompress such streams in memory?
3. I am using a custom stream implementation for reading which can
also do Unicode conversion on the fly. The problem is this: I have an
abstract base class that declares a virtual ReadWChar() function, and
I call this function once per character over the whole stream, which
can mean thousands of calls at a time. The reading itself is not much
of a problem since there is a buffer underneath, but I am concerned
about the overhead of the virtual function call. Any thoughts on that?
(A stripped-down sketch of this interface is also below, after
question 5.)
4. What is the general performance overhead of virtual classes /
functions? To be honest, I am already scared of declaring a virtual
destructor because I fear that all of the class's function calls will
then perform worse. Or is that simply not true?
5. Are there any up-to-date performance documents out there that give
some insight into what to watch out for from the start in modern C++
programming?
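
To make question 1 more concrete, here is a stripped-down sketch of
the kind of buffer I mean (names and details are made up, error
handling and the non-POD cases are left out):

#include <cstring>   // memcpy
#include <cstdlib>   // realloc, free
#include <cstddef>   // size_t

class ByteStream {
public:
    ByteStream() : data_(0), size_(0), capacity_(0), readPos_(0) {}
    ~ByteStream() { std::free(data_); }

    // Append a POD value by copying its raw bytes onto the end.
    template <typename T>
    void Write(const T& value) {
        Reserve(size_ + sizeof(T));
        std::memcpy(data_ + size_, &value, sizeof(T));
        size_ += sizeof(T);
    }

    // Read the next POD value from the current read position.
    template <typename T>
    void Read(T& value) {
        std::memcpy(&value, data_ + readPos_, sizeof(T));
        readPos_ += sizeof(T);
    }

private:
    // Grow the array whenever needed.
    void Reserve(std::size_t needed) {
        if (needed <= capacity_) return;
        std::size_t newCap = capacity_ ? capacity_ * 2 : 64;
        if (newCap < needed) newCap = needed;
        data_ = static_cast<unsigned char*>(std::realloc(data_, newCap));
        capacity_ = newCap;
    }

    unsigned char* data_;
    std::size_t size_, capacity_, readPos_;
};

So every Write()/Read() ends up as one memcpy() of a few bytes, and a
single stream typically holds around 10 such values.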
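
And for question 3, the reader interface looks roughly like this
(again simplified; the real derived class converts from UTF-8 and
friends instead of just casting):

#include <cstddef>  // size_t

class InputStream {
public:
    virtual ~InputStream() {}
    // Called once per character over the whole stream.
    virtual wchar_t ReadWChar() = 0;
};

// A trivial stand-in for my buffered reader: it reads from an
// in-memory buffer and does only a dummy "conversion".
class MemoryStream : public InputStream {
public:
    MemoryStream(const char* data, std::size_t size)
        : data_(data), size_(size), pos_(0) {}

    virtual wchar_t ReadWChar() {
        if (pos_ >= size_) return 0;                 // end of stream
        return static_cast<wchar_t>(data_[pos_++]);  // real code decodes here
    }

private:
    const char* data_;
    std::size_t size_, pos_;
};

// Typical use, i.e. thousands of virtual calls in a tight loop:
//   InputStream* s = new MemoryStream(buffer, length);
//   while (wchar_t c = s->ReadWChar()) { /* ... */ }
//   delete s;

It's that per-character virtual call in the loop whose cost I am
worried about.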
thanks!
Alex