

woodbrian77

I feel that you avoid answering what it is that your
*CMW* *does* *better* than the competition, for example
Protocol Buffers. Where is the edge over what you
describe above?

Would someone be willing to supply me with the
Protocol Buffers pieces to serialize and deserialize
a ::std::vector<::std::deque<double> > ?

If so, I'll work on writing a test with CMW code
that compares these. Tia.
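(Roughly what I have in mind, so the request is concrete. The message
and field names below are only my guess at a layout -- I haven't built
or timed any of this:)

// matrix.proto (hypothetical):
//   message Row    { repeated double values = 1; }
//   message Matrix { repeated Row rows = 1; }

#include <deque>
#include <string>
#include <vector>
#include "matrix.pb.h"   // generated by protoc from the hypothetical .proto above

std::string toBytes (::std::vector<::std::deque<double> > const& in)
{
  Matrix msg;
  for (auto const& dq : in) {
    Row* row = msg.add_rows();
    for (double v : dq) row->add_values(v);
  }
  std::string bytes;
  msg.SerializeToString(&bytes);
  return bytes;
}

::std::vector<::std::deque<double> > fromBytes (std::string const& bytes)
{
  Matrix msg;
  msg.ParseFromString(bytes);
  ::std::vector<::std::deque<double> > out;
  for (int i = 0; i < msg.rows_size(); ++i) {
    out.push_back(::std::deque<double>(msg.rows(i).values().begin(),
                                       msg.rows(i).values().end()));
  }
  return out;
}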


Brian
Ebenezer Enterprises
http://webEbenezer.net
 

woodbrian77

Boost has a rather small number of executables (Build,
Wave and a few others) indeed, and code generation is
done mostly by meta-programming (which I don't like).

I forgot about Wave. Good point.

Having executables that you ship that are based
on a library you also ship is a form of eating your
own dog food. Our service is made up of 3 executables.
If any of them doesn't work right, the service itself
is affected. It comes down to fixing either an
executable or the library to get things working again.

So we are set up like an external user. We use the
same code (machine generated code and the library) to
build *programs* as a user does. Imo our examples of
how to use the code (the executables that I've been
talking about) are more detailed and realistic than
those given by the serialization library in Boost.
Here's an example:

localMessages.Marshal(localsendbuf, false, cmwBuf.GiveCharStar());

That line is from this file:

http://webEbenezer.net/misc/cmwAmbassador.cc

The ambassador (middle tier) is getting an error message
from the back tier and is in the process of sending it
to the front tier. GiveCharStar() just hands us a
pointer to a null-terminated string without anyone
having to copy it. You shouldn't expect to be able to
use the pointer indefinitely, and we don't need to, as
it is used right away by the Marshal function.

We are treating the user like we want to be treated --
like a grown up. Why would we do that? Because we
are users also.

That function's signature is:

char const* GiveCharStar ()

Is that possible with the serialization library in
Boost or other competitors?
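(To make the shape of it concrete, here is a stripped-down sketch of
that kind of accessor. This is not the actual CMW buffer class, just
the idea: the buffer keeps ownership and hands out a pointer into its
own storage, valid only until the buffer changes.)

#include <string>

class ReceiveBuffer {          // illustrative name only
  std::string data_;           // the received, null-terminated text
public:
  explicit ReceiveBuffer (std::string text) : data_(std::move(text)) {}

  // No copy is made; use the pointer right away, before the buffer
  // is modified or destroyed.
  char const* GiveCharStar () const { return data_.c_str(); }
};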

Brian
Ebenezer Enterprises
http://webEbenezer.net
 

Öö Tiib

Would someone be willing to supply me with the
Protocol Buffers pieces to serialize and deserialize
a ::std::vector<::std::deque<double> > ?

If so, I'll work on writing a test with CMW code
that compares these. Tia.

You sell a non-standard, language-specific alternative to
a wide set of well-established, open, language-neutral
solutions. You sell it to engineers. Yet you openly say
that you need help finding out whether it is outstanding
at anything at all?
 

woodbrian77

You sell a non-standard, language-specific alternative to

It's free.
a wide set of well-established, open, language-neutral
solutions. You sell it to engineers. Yet you openly say
that you need help finding out whether it is outstanding
at anything at all?

I have tests that compare with the serialization
library in Boost.

I haven't updated my site yet, but I'm updating
those tests using Boost 1.55. For the
vector<deque<double> > test, the Boost version is
over 2 times slower than the CMW version. The
Boost-based executable is over 3 times larger than
the CMW-based executable.
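(For anyone who wants to reproduce the Boost side, the round trip being
timed looks roughly like this -- a sketch, not the actual benchmark code:)

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/deque.hpp>
#include <boost/serialization/vector.hpp>
#include <deque>
#include <sstream>
#include <vector>

int main ()
{
  std::vector<std::deque<double> > original(10, std::deque<double>(100, 1.5));

  std::ostringstream os;
  {
    boost::archive::binary_oarchive oa(os);
    oa << original;                       // serialize
  }

  std::vector<std::deque<double> > restored;
  std::istringstream is(os.str());
  {
    boost::archive::binary_iarchive ia(is);
    ia >> restored;                       // deserialize
  }
  return restored == original ? 0 : 1;
}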

I'm not sure Protocol Buffers is worth comparing
against. If someone is interested in seeing that,
they can do a little bit to make it happen. I'm
happy to turn over my side of the test so they
can do the comparison at their location also.


Brian
Ebenezer Enterprises - A stone the builders refused
has become head of a corner. Psalms 118:22

http://webEbenezer.net
 

Öö Tiib

It's free.


I have tests that compare with the serialization
library in Boost.

Its usage is very narrow (being language- and version-specific):
basically clipboard operations (undo/redo/cut/paste) within a single
application, and dumping objects into a text or XML file or log for
debugging/problem reporting/code generation purposes.
For those things it is adequate; it does not matter if undo takes
25 or 50 milliseconds. CMW feels even narrower, in the sense that
it does not look suitable for most of the named use cases.

It is more likely that someone wanting to actually serialise
something between applications/services using boost uses
Boost.PropertyTree instead.
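For example something like this (a minimal sketch; the key names are
made up):

#include <boost/property_tree/json_parser.hpp>
#include <boost/property_tree/ptree.hpp>
#include <iostream>
#include <sstream>

int main ()
{
  boost::property_tree::ptree pt;
  pt.put("sensor.name", "thermometer");
  pt.put("sensor.value", 21.5);

  std::ostringstream out;
  boost::property_tree::write_json(out, pt);   // language-neutral text on the wire

  boost::property_tree::ptree back;
  std::istringstream in(out.str());
  boost::property_tree::read_json(in, back);
  std::cout << back.get<double>("sensor.value") << "\n";
}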
I haven't updated my site yet, but I'm updating
those tests using Boost 1.55. For the
vector<deque<double> > test, the Boost version is
over 2 times slower than the CMW version. The
Boost-based executable is over 3 times larger than
the CMW-based executable.

Yes, but what use case is it that you compare?

You push that 'std::vector<std::deque<double> >' reflection?
Typically it is an implementation detail that may change between
versions. It is likely that profiling in version 2.1 shows that
'std::valarray' trumps 'std::deque', but the change can't be made
since that would make version 2.2 of CMW incompatible with 2.1.
Saying that it is an indexed collection of indexed collections of
floating-point values gives it more longevity and is neutral with
respect to programming language.

If C++ reflection is a major property for you then there is cereal:
http://uscilab.github.io/cereal/index.html (it has JSON/XML and
various binary representations). There are even attempts at
making reflection libraries:
https://bitbucket.org/dwilliamson/clreflect
I'm not sure Protocol Buffers is worth comparing
against. If someone is interested in seeing that,
they can do a little bit to make it happen. I'm
happy to turn over my side of the test so they
can do the comparison at their location also.

I'm also not sure. Those seem to be on the same market,
between data small enough for text-based protocols
and data big enough for real binary formats (audio,
graphics, database etc.). Protobuf lacks reflection
and CMW lacks language neutrality.

In the current world it is hard to find a need for an entirely new
type of service. If it is not entirely new then you need
to collaborate with existing applications.
 

woodbrian77

Its usage is very narrow (being language- and version-specific):
basically clipboard operations (undo/redo/cut/paste) within a single
application, and dumping objects into a text or XML file or log for
debugging/problem reporting/code generation purposes.

For those things it is adequate; it does not matter if undo takes
25 or 50 milliseconds. CMW feels even narrower, in the sense that
it does not look suitable for most of the named use cases.

CMW doesn't support JSON or text formats, but I
don't think it's difficult for an application written
using CMW to work with an application written in
a language other than C++. I don't have examples
of that, but there aren't difficult technical
problems in doing that.
It is more likely that someone wanting to actually serialise
something between applications/services using boost uses
Boost.PropertyTree instead.



Yes, but what use case is it that you compare?

It's a low-level test. Applications are made up
of low-level operations ... I'm open to other
ideas of what to test.
You push that 'std::vector<std::deque<double> >' reflection?
Typically it is an implementation detail that may change between
versions. It is likely that profiling in version 2.1 shows that
'std::valarray' trumps 'std::deque', but the change can't be made
since that would make version 2.2 of CMW incompatible with 2.1.

From a protocol perspective deque and valarray are
the same. First we marshal the size of the container
and then the elements. So it should be possible to
make the change you mention without a compatibility
problem.
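(Sketching that rule with made-up buffer functions -- this isn't the CMW
code itself, just the wire layout: count first, then the elements, so any
indexed collection of doubles produces the same bytes.)

#include <cstdint>
#include <deque>
#include <valarray>

template <class Buffer, class Seq>
void marshalSequence (Buffer& buf, Seq const& seq)
{
  buf.writeUint32(static_cast<std::uint32_t>(seq.size()));  // count first
  for (double v : seq) buf.writeDouble(v);                  // then the elements
}

// marshalSequence(buf, std::deque<double>{1, 2, 3}) and
// marshalSequence(buf, std::valarray<double>{1, 2, 3}) write identical bytes.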

Saying that it is an indexed collection of indexed collections of
floating-point values gives it more longevity and is neutral with
respect to programming language.


If C++ reflection is a major property for you then there is cereal:
http://uscilab.github.io/cereal/index.html (it has JSON/XML and
various binary representations).

The following type is from that link:

struct SomeData
{
  int32_t id;
  std::shared_ptr<std::unordered_map<uint32_t, MyRecord>> data;

  template <class Archive>
  void save( Archive & ar ) const
  {
    ar( data );
  }

  template <class Archive>
  void load( Archive & ar )
  {
    static int32_t idGen = 0;
    id = idGen++;
    ar( data );
  }
};

The load function looks funny to me. In order
for something to be loaded, it has to have been
saved. How is id being set in code that's not
shown, and can it be coherent with the static in
the load function?

I agree with their requiring C++ 2011 or newer.
But cereal looks similar to the serialization
library in Boost: users have to maintain
serialization functions, and there is the dependence on iostream.

There are even attempts at
making reflection libraries:
https://bitbucket.org/dwilliamson/clreflect




I'm also not sure. Those seem to be on the same market,
between data small enough for text-based protocols
and data big enough for real binary formats (audio,
graphics, database etc.). Protobuf lacks reflection
and CMW lacks language neutrality.

I think Protobuf has some support for reflection.

I don't think support for other languages is hard
to reach. Assuming we stick with byte-level
marshalling and IEEE 754 floating point, I don't
think there are difficult problems in making
applications written with CMW work with
applications written in another language.
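(The assumption amounts to something like this -- helper names made up --
so any language that can shift bytes can read the value back:)

#include <cstdint>
#include <cstring>

void putDouble (unsigned char* out, double d)
{
  std::uint64_t bits;
  std::memcpy(&bits, &d, sizeof bits);     // the raw IEEE 754 bits
  for (int i = 0; i < 8; ++i)
    out[i] = static_cast<unsigned char>(bits >> (8 * i));   // fixed byte order
}

double getDouble (unsigned char const* in)
{
  std::uint64_t bits = 0;
  for (int i = 0; i < 8; ++i)
    bits |= static_cast<std::uint64_t>(in[i]) << (8 * i);
  double d;
  std::memcpy(&d, &bits, sizeof d);
  return d;
}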

Switching to bit-level marshalling would make
it more work to support other languages so
probably we won't do that.

Brian
Ebenezer Enterprises - In G-d we trust.
http://webEbenezer.net
 

James Kanze

(e-mail address removed) wrote:
Maybe you don't but others certainly do!

I don't know. The way it's implemented often creates more
problems than it's worth.

Reference counted pointers seem irrelevant for a lot of
applications: some won't use any dynamic memory, except within
the standard containers, and many others will only use it for
entity objects, which have deterministic lifetimes for which
shared_ptr is totally irrelevant. And even in the few cases
where intensive use of reference counted pointers is relevant,
you probably want some sort of invasive counting, to avoid
errors. We experimented with `std::shared_ptr` when we moved to
C++11, but found that it didn't work for us, and went back to
our home written one.
 

James Kanze

Ian Collins oratorically vehemently insists
Wow, -j 32!! Say no more. Where did you find that 32, since at most I can
only find 16, as 2 threads per core? Actually thread cores, not real cores.
An i7 is still a 4 core; I'm not sure how that threaded core is embedded
in the hardware.

Maybe he's not using an Intel. Where I used to work (four and
a half years ago), I "inherited" a 32 core Sparc because it
wasn't fast enough for production (where 128 cores was the
minimum). And that was some time ago.
 

Öö Tiib

I don't know. The way it's implemented often creates more
problems than it's worth.

It seems to be implemented rather well. It is a sort of
heavyweight smart pointer, but not overly so.
Reference counted pointers seem irrelevant for a lot of
applications: some won't use any dynamic memory, except within
the standard containers, and many others will only use it for
entity objects, which have deterministic lifetimes for which
shared_ptr is totally irrelevant. And even in the few cases
where intensive use of reference counted pointers is relevant,
you probably want some sort of invasive counting, to avoid
errors. We experimented with `std::shared_ptr` when we moved to
C++11, but found that it didn't work for us, and went back to
our home written one.

If the objects for 'std::shared_ptr' are allocated with
'make_shared' then I haven't observed much performance
difference from intrusive ref-counting. What are the key features
it lacks? The issue may be that some functionality it has (like the
dynamic "deleter" or "weak count") is unneeded for a particular
application, but errors it does not seem to have.
 

Ian Collins

James said:
I don't know. The way it's implemented often creates more
problems than it's worth.

Like any software tool, it depends on how you use it. Smart pointers and
reference counting were a couple of the C++ tricks that first pulled me
over from C, and I've been using them ever since.
Reference counted pointers seem irrelevant for a lot of
applications: some won't use any dynamic memory, except within
the standard containers, and many others will only use it for
entity objects, which have deterministic lifetimes for which
shared_ptr is totally irrelevant. And even in the few cases
where intensive use of reference counted pointers is relevant,
you probably want some sort of invasive counting, to avoid
errors.

Invasive counting should have a slight performance edge, but the
programmer doesn't always have control of the type contained by the
pointer. In cases where objects are manipulated more often than they
are allocated, the advantages are less obvious. Most of my use cases
for std::shared_ptr are to build a tree, then process the tree data,
where the processing phase dominates the run time.
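Something with this shape (a toy sketch, not production code):

#include <memory>
#include <vector>

struct Node {
  double value = 0.0;
  std::vector<std::shared_ptr<Node> > children;   // ownership via shared_ptr
};

// The processing pass walks plain references and never touches the counts.
double sum (Node const& n)
{
  double total = n.value;
  for (auto const& child : n.children) total += sum(*child);
  return total;
}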

Care to explain "to avoid errors"?
We experimented with `std::shared_ptr` when we moved to
C++11, but found that it didn't work for us, and went back to
our home written one.

I went the other way...
 

Öö Tiib

Intrusive smartpointers have a convenient property that they can be created
from plain pointers (like 'this') or references.

Let's say you are changing
some function which currently has only a pointer or reference to the
object, and want to call some other function which expects a smartpointer,
or want to store a smartpointer for later use. With intrusive smartpointers
this is a no-brainer; otherwise it becomes a PITA to pass smartpointers
through all those functions which actually do not need them.
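(To illustrate the quoted point -- boost::intrusive_ptr is used here as a
stand-in for a generic intrusive smart pointer, and the types are made up:)

#include <boost/intrusive_ptr.hpp>
#include <boost/smart_ptr/intrusive_ref_counter.hpp>
#include <memory>
#include <vector>

struct Intrusive : boost::intrusive_ref_counter<Intrusive> {
  void keepAlive (std::vector<boost::intrusive_ptr<Intrusive> >& store) {
    store.push_back(boost::intrusive_ptr<Intrusive>(this));  // count lives in *this
  }
};

struct Shared : std::enable_shared_from_this<Shared> {
  void keepAlive (std::vector<std::shared_ptr<Shared> >& store) {
    store.push_back(shared_from_this());   // only valid if *this is already
  }                                        // owned by some shared_ptr
};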

Is it some sort of mid-way refactoring where we have raw pointers mixed
in with smart pointers?
Of course, this style works best if *all* objects of the given class can
be accessed via smartpointers (all objects are allocated only dynamically,
for example). This requirement may appear as a drawback in some cases.

Yes, typically the constructor is protected and there are factories that
call 'make_shared'. What is the drawback of that approach?
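(For example, a minimal sketch of such a factory; the local derived type
is one common way to let 'make_shared' reach a protected constructor, and
the names are illustrative:)

#include <memory>

class Widget {
public:
  static std::shared_ptr<Widget> create () {
    struct Enabler : Widget {};          // can call the protected constructor
    return std::make_shared<Enabler>();
  }
protected:
  Widget() = default;                    // no stack instances outside the class
};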
 

Öö Tiib

The drawback is that sometimes you want a temporary object which could be
an automatic object on the stack, but you cannot have that (as the
constructor is protected, for good reasons). I understand this is only a
performance issue, but in spirit it is against the zero-overhead
principle of C++.

I can't find a case where I want that. You can leave a particular constructor
public as a performance optimization, but it is becoming rather unclear to me
what type of object we are talking about.
 

Jorgen Grahn

I don't know. The way it's implemented often creates more
problems than it's worth.

Reference counted pointers seem irrelevant for a lot of
applications: some won't use any dynamic memory, except within
the standard containers, and many others will only use it for
entity objects, which have deterministic lifetimes for which
shared_ptr is totally irrelevant.

That's where I am. My objects are either on the stack or owned by
containers. No doubt smart pointers have a place in C++, but I have
not yet come across a problem where I needed them.

And that takes us back to Ian Collins' "Maybe you don't but others
certainly do!" above. "Others" doesn't mean "everyone else" but
"enough people to make it worth standardizing".

/Jorgen
 

Tobias Müller

Paavo Helde said:
Intrusive smartpointers have a convenient property that they can be created
from plain pointers (like 'this') or references. Let's say you are changing
some function which currently has only a pointer or reference to the
object, and want to call some other function which expects a smartpointer,
or want to store a smartpointer for later use. With intrusive smartpointers
this is a no-brainer; otherwise it becomes a PITA to pass smartpointers
through all those functions which actually do not need them.

But that's also a bit dangerous. If a function takes a reference or raw
pointer as a parameter, I usually expect that it is not taking (any kind
of) ownership of the object.

I always wondered if it would be useful to have a "smart object" template like:

template <typename T>
class RC : public T
{
public:
  /* add perfect forwarding constructors here */
  void addRef();
  void release();
private:
  int count;
};

Theoretically it combines the advantages of internal and external
reference counts:
- You can use T on the stack and RC<T> as a smart object, as with external
counts.
- Performance of internal counts.
- Safety/"ownership contract" of external counts: you can only create smart
pointers from RC<T>*, not from T*.
- You can pass by raw pointer/reference and create smart pointers on
demand, but still have the "ownership contract".
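For concreteness, the forwarding constructor and a matching pointer type
could look something like this (just a sketch of the idea above, not a
finished design; it assumes RC<T> objects are heap-allocated):

#include <utility>

template <typename T>
class RC : public T
{
public:
  template <typename... Args>
  explicit RC(Args&&... args) : T(std::forward<Args>(args)...), count(0) {}
  void addRef()  { ++count; }
  void release() { if (--count == 0) delete this; }   // heap allocation assumed
private:
  int count;
};

template <typename T>
class RCPtr                          // constructible only from RC<T>*, not T*
{
public:
  explicit RCPtr(RC<T>* p) : p_(p) { if (p_) p_->addRef(); }
  RCPtr(RCPtr const& o) : p_(o.p_) { if (p_) p_->addRef(); }
  ~RCPtr() { if (p_) p_->release(); }
  RCPtr& operator=(RCPtr const&) = delete;   // kept minimal for the sketch
  T* operator->() const { return p_; }
  T& operator*()  const { return *p_; }
private:
  RC<T>* p_;
};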

Any disadvantages?

Tobi
 

Öö Tiib

I always wondered if it would be useful to have a "smart object" template like:

template <typename T>
class RC : public T
{
public:
  /* add perfect forwarding constructors here */
  void addRef();
  void release();
private:
  int count;
};

Theoretically it combines the advantages of internal and external
reference counts:
- You can use T on the stack and RC<T> as a smart object, as with external
counts.
- Performance of internal counts.
- Safety/"ownership contract" of external counts: you can only create smart
pointers from RC<T>*, not from T*.
- You can pass by raw pointer/reference and create smart pointers on
demand, but still have the "ownership contract".

Any disadvantages?

The 'boost::intrusive_ref_counter' is a bit better since one can regulate
the thread safety of access to that 'count'. Also CRTP makes it simpler;
there is no need for a variadic template constructor in the template
(which makes most readers' heads spin).
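Usage is about like this (a sketch; the thread-unsafe policy is picked
just to show the knob, and the class name is made up):

#include <boost/intrusive_ptr.hpp>
#include <boost/smart_ptr/intrusive_ref_counter.hpp>

class Session
  : public boost::intrusive_ref_counter<Session, boost::thread_unsafe_counter>
{
public:
  int id = 0;
};

int main ()
{
  boost::intrusive_ptr<Session> s(new Session);  // count lives inside the object
  boost::intrusive_ptr<Session> copy = s;        // just bumps the count
  copy->id = 42;
  return 0;
}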
 

Tobias Müller

Öö Tiib said:
The 'boost::intrusive_ref_counter' is a bit better since one can regulate
the thread safety of access to that 'count'. Also CRTP makes it simpler;
there is no need for a variadic template constructor in the template
(which makes most readers' heads spin).

But boost::intrusive_ref_counter is basically just a helper class for
implementing a plain normal intrusive count.
It has none of the advantages of the above solution.

My RC template works with any existing class, and with appropriate
specialization also with primitive types. Just by using RC<SomeClass> or
RC<int>. No need to declare a wrapper. No need to decide at design time
whether a class should be "smart".

Tobi
 

Öö Tiib

But boost::intrusive_ref_counter is basically just a helper class for
implementing a plain normal intrusive count.
It has none of the advantages of the above solution.

None? It may be that I do not understand the advantages. We can use 'T' on
the stack and 'shared_ptr<T>' as a shared object. The performance of internal
counts is there with 'make_shared'. There is 'shared_ptr::use_count'.
Usage of raw pointers directly in code does not make sense to me; I
either use standard iterators/smart pointers/references or some self-made
things. 'boost::intrusive_ref_counter' may be useful for designs
My RC template works with any existing class, and with appropriate
specialization also with primitive types. Just by using RC<SomeClass> or
RC<int>. No need to declare a wrapper. No need to decide at design time
whether a class should be "smart".

Usage of 'RC<int>' is more likely over-engineering than
'std::shared_ptr<int>' (where 'int' may be a "handle" to something needing
a special "deleter"). The question is not about "smartness". There are
objects that are never "cloned" (but may be "copied") and there are objects
that are never "copied" (but may be "cloned"). It needs to be clear at
design time which type of object it is, and smart pointers make sense only
for the second type.
 
