The ctor is difficult to read: a pile of local variables, some odd-looking macros (FD_ZERO, FD_SET - can't you initialize your variables without those?), and above all an infinite loop where, after a minute of looking at it, it was *unclear how or where it terminates*.
The FD_* macros are part of the defined API for the select system call.
That said, the OP should probably use poll(2) instead of select(2).
Looking at your site, it seems that your software generates serialization code automatically ... but what code? In other words, I'm missing an easy explanation of exactly which serialization problems the generated code addresses, how, and in which of your "tiers". Also, which problems does it ignore, or fail to address and leave up to users? Our FAQ is a good source of inspiration for how to discuss this in depth (http://www.parashift.com/c++-faq-lite/serialization.html). Whether boost::serialization is 1.5 times slower or quicker doesn't really matter much, since I don't use it either.
The ctor has the event loop. Is it too long?
It has been several years since I considered using poll,
and I don't remember why I didn't use it. It may have
been that it wasn't as portable as select.
Personally, I don't think it's intuitive (and it's
counter to convention) to put the event loop in the
constructor. I would rather perform all
initialization in the constructor and have the
user call a member function to commence the service.
If you are talking about poll(2) (the system call) vs. select(2) (the system call), they
are roughly equivalent in that they perform the same operations in the kernel. poll(2) may
copyin() more bytes because the pollfd array may be larger (for some value of nfd) than the
bitmasks copied in by the select(2) system call, but once in the kernel they're roughly
the same. epoll(7) on Linux has additional capabilities that can be useful, but it is non-portable.
Dropping Windows support would allow me to clean up
the ctor as mentioned and make it easier to add
encryption between the middle and back tiers.
Apparently this enabled the compiler to inline some functions which it did
not previously. That ought to enhance performance, at some expense of
memory usage (and 492 bytes rounds to zero nowadays, to be honest). If you
are indeed wanting to optimize for size, there are other compiler options
for that (at the expense of speed, of course).
I tried a test comparing -O3 and -Os on the executable in
question. The version built with -O3 took 64 seconds and the
-Os version took 63 seconds. The -O3 version is 72,472 bytes
and the -Os version is 44,836 bytes. Also, the variation in
executable size that I mentioned above didn't show up when
using -Os. I'm going to start using -Os and see how that goes.
I recall someone advising me to optimize for size years ago,
but I needed more proof.
That's cool, but I'm curious why you want to go smaller. We have
DLLs of around 200 MB and nobody is complaining.