The C++ Object Model: Good? Bad? Ugly?

T

tonytech08

    [...]
An instance of a polymorphic class has to contain
information about its type, so it can't be the same as a
POD class.  Constructors and (non-virtual) methods are
irrelevant.
Just a nit, but it's possible to implement polymorphism with
no additional information in the instance itself; you could,
for example, maintain the information is a separate
hashtable which mapped the address to the type information.
What isn't possible is to establish this mapping without
executing some code. I.e. a constructor.
And how relatively (relative to the current implementation)
slow would that be is the question.

I don't think anyone actually knows, since as far as I know, no
one has actually implemented it this way.  A virtual table
look-up involves two indirections---read the vptr, then the vtbl
itself; it seems pretty obvious that a hash table implementation
can't come close.  It might be useful, however, for
instrumenting the code, in order to generate various statistics,
or even doing some additional error checking on the side
(although I'm not sure what).

As I said above, my comment was just a nit---it's possible to
implement polymorphism with no information about the type in the
class object itself.  Conceptually, at least; it's probably not
practical.

The only other way to avoid adding a vptr to the data portion that
pops into my head at the time is to drastically depart from "struct is
the object" paradigm and have a "2 pointer thing" be "the object": 1
ptr points to the data, the other to the vtable. Of course that breaks
my ideal "data is the object" view of things.
 
T

tonytech08

More support than what.  

More support for OO with "heavyweight" classes than for POD classes.
C++ has support for "full OO type
objects", if that's what you need.  Most of my objects aren't
"full OO type objects", in the sense that they don't support
polymorphism.  C++ supports them just as well.

I think I may be OK without polymorphism in "lightweight" classes, but
overloaded constructors sure would be nice. And conversion operators.
Can a POD class derive from a pure abstract base class? That would be
nice also if not.
But that's simply wrong,

No it's not. It's just an abstract way of looking at it. It's hardly a
stretch either, since the C++ object model or at least most
implementations use that as the foundation upon which to implement
polymorphism: tacking a vptr onto "the thing part" (noun) of "the
object".
at least in the C++ object model.  An
object has a type.  Otherwise, it's just raw memory.  That's a
fundamental principle of any typed language.

I could easily go further and say something like "the memory in which
the data portion of an object, is the object". While that may bother
purists, it is a valid abstract way of thinking about it.
But the implementation *always* adds things to the data portion,
or controls how the data portion is interpreted.  It defines a
sign bit in an int, for example (but not in an unsigned int).
If you want to support signed arithmetic, then you need some way
of representing the sign.  If you want to support polymorphism,
then you need some way of representing the type.  I don't see
your point.  (The point of POD, in the standard, is C
compatibility; anything in a POD will be interpretable by a C
compiler, and will be interpreted in the same way as in C++.)

Well maybe I'm breaking new ground then in suggesting that there
should be a duality in the definition of what a class object is. There
are "heavyweight" classes and "lightweight" ones. I use C++ with that
paradigm today, but it could be more effective if there was more
support for "object-ness" with "lightweight" classes. The limitation
appears to be backward compatibity with C. If so, maybe there should
be structs, lightweight classes, heavyweight classes.

Of course I can only ruminate about such things being implemented in C+
+ in the future. I am beginning to investigate either preprocessing
"my language" to C++ or full new language implementation (though the
latter seems only doable if I could hack an existing C++
implementation and supplant it with an improved object model).
    [...]
It restricts the use of OO concepts to classes designed to
be used with OO concepts.
Not really, since one can have POD classes with methods,
just not CERTAIN methods (you are suggesting that "classes
designed to be used with OO concepts" are those
heavyweight classes that break PODness, right?).
No.  I'm really not suggesting much of anything.  However you
define the concept of OO, the concept only applies to classes
which were designed with it in mind.  C++ doesn't force any
particular OO model, but allows you to chose.  And to have
classes which aren't conform to this model.
"Allows you to choose"? "FORCES you to choose" between
lightweight (POD) class design with more limited OO and and
heavyweight (non-POD) class design with all OO mechanisms
allowed but at the expense of losing POD-ness. It's a
compromise. I'm not saying it's a bad compromise, but I am
wondering if so and what the alternative implementation
possibilities are.

Obviously, you have to choose the appropriate semantics for the
class.  That's part of design, and is inevitable.  So I don't
see your point; C++ gives you the choice, without forcing you
into any one particular model.  And there aren't just two
choices.

The change occurs when you do something to a POD ("lightweight") class
that turns the data portion of the class into something else than just
a data struct, as when a vptr is added. Hence then, you have 2
distinct types of class objects that are dictated by the
implementation of the C++ object model.
Not at all.  You define what you need.  

There are the limitations though: you can't have overloaded
constructors, for example, without losing POD-ness. Or conversion
operators (?). Or derivation from "interfaces" (?).
From a design point of
view, I find that it rarely makes sense to mix models in a
single class: either all of the data will be public, or all of
it will be private.  But the language doesn't require it.


Well, if there is a non-trivial constructor, the class can't be
POD, since you need to call the constructor in order to
initialize it.  

Well maybe then "POD" is the hangup and I should have used
"lightweight" from the beginning. I just want the data portion to
remain intact while having the constructor overloads and such.
Polymorphism I can probably do without, but deriving from interfaces
would be nice if possible.
Anything else would be a contradiction: are you
saying you want to provide a constructor for a class, but that
it won't be called?  

Of course I want it to be called. By "POD-ness" I just meant I want a
struct-like consistency of the object data (with no addition such as a
vptr, for example).
That doesn't make sense.


Because you can't have a constructor in C, basically.  Because
the compiler must generate code when the object is created, if
there is a constructor.  That is what POD-ness is all about; a
POD object doesn't require any code for it to be constructed.

Then apparently I was using "POD" inappropriately. My concern is the
in-memory representation of the object data.
Certainly, since you couldn't instantiate an instance of the
object in C.

That pesky C-compatibility constraint! A good reason for investigation
of "a better C++" (as of course I am pondering already).
The next version of the standard does have an additional
category "layout compatible".  I'm not sure what it buys us,
however.

Where can I read up on that?
Interface compatibility.  And if you're not interfacing with C,
you can have them.  The object won't be a POD, but if you're not
using it to interface with C, who cares?

I don't. I didn't know that that is all I would be "breaking". I was
worried that putting overloaded constructors into "lightweight"
classes may either now or in the future aberrate my "data objects" in
some way. As long as the data portion of the object looks like a
struct of just the data in memory, I'm a happy camper.
But you can derive from a POD class.  A pointer to a POD class
just won't behave polymorphically.

Of course not, because you can't define a virtual function and still
have a POD class. The thing you end up with is a non-POD though
because PODs can't have base classes. Whether I can derive from a POD
and still have one of my "lightweight" classes is a question (?).
 
J

James Kanze

More support for OO with "heavyweight" classes than for POD
classes.

You're not making sense. How does C++ have more support for OO
than for other idioms?
I think I may be OK without polymorphism in "lightweight"
classes, but overloaded constructors sure would be nice. And
conversion operators. Can a POD class derive from a pure
abstract base class? That would be nice also if not.

And C++ supports all of that. I fail.to see what you're
complaining about. In C++, a class is as heavyweight or as
lightweight as its designer wishes. It is the class designer
who makes the choice, not the language. More than anything
else, it is this which sets C++ off from other languages.
No it's not.

Yes it is. Without behavior, all you have is raw memory. C++
is a typed language, which means that objects do have behavior.
(I'm not talking necessarily of behavior in the OO sense here.
In C as well, object have behavior, and the set of operations on
an int is not the same as the set of operations on a float.)
It's just an abstract way of looking at it. It's hardly a
stretch either, since the C++ object model or at least most
implementations use that as the foundation upon which to
implement polymorphism: tacking a vptr onto "the thing part"
(noun) of "the object".

C++ supports dynamic typing, if that's what you mean. In other
words, the type of an object vary at runtime. But I don't see
your point. It is the designer of the class who decides whether
to use dynamic typing or not. The language doesn't impose it.
I could easily go further and say something like "the memory
in which the data portion of an object, is the object". While
that may bother purists, it is a valid abstract way of
thinking about it.

Not in a typed language. If you want raw memory, C++ even
supports that. Any object can be read as an array of unsigned
char. Of course, the representation isn't always defined; are
int's 16, 36, 36, 48 or 64 bits? Are they 2's complement, 1's
complement or signed magnitude?
Well maybe I'm breaking new ground then in suggesting that
there should be a duality in the definition of what a class
object is. There are "heavyweight" classes and "lightweight"
ones.

There's no strict binary division. There are a number of
different classifications possible---at the application level,
the distinction between value objects and entity objects is
important, for example (but there are often objects which don't
fit into either category). In many cases, it certainly makes
sense to divide types into categories (two or more); in this
regard, about the only thing particular with "lightweight" and
"heavyweight" is that the names don't really mean anything.
I use C++ with that paradigm today, but it could be more
effective if there was more support for "object-ness" with
"lightweight" classes.

Again: what support do you want? You've yet to point out
anything that isn't supported in C++.
The limitation appears to be backward compatibity with C. If
so, maybe there should be structs, lightweight classes,
heavyweight classes.

And maybe there should be value types and entity types. Or
maybe some other classification is relevant to your application.
The particularity of C++ is that it lets you choose. The
designer is free to develop the categories he wants. (If I'm
not mistaken, in some circles, these type of categories are
called stereotypes.)
    [...]
It restricts the use of OO concepts to classes designed to
be used with OO concepts.
Not really, since one can have POD classes with methods,
just not CERTAIN methods (you are suggesting that "classes
designed to be used with OO concepts" are those
heavyweight classes that break PODness, right?).
No.  I'm really not suggesting much of anything.  However you
define the concept of OO, the concept only applies to classes
which were designed with it in mind.  C++ doesn't force any
particular OO model, but allows you to chose.  And to have
classes which aren't conform to this model.
"Allows you to choose"? "FORCES you to choose" between
lightweight (POD) class design with more limited OO and and
heavyweight (non-POD) class design with all OO mechanisms
allowed but at the expense of losing POD-ness. It's a
compromise. I'm not saying it's a bad compromise, but I am
wondering if so and what the alternative implementation
possibilities are.
Obviously, you have to choose the appropriate semantics for
the class.  That's part of design, and is inevitable.  So I
don't see your point; C++ gives you the choice, without
forcing you into any one particular model.  And there aren't
just two choices.
The change occurs when you do something to a POD
("lightweight") class that turns the data portion of the class
into something else than just a data struct, as when a vptr is
added. Hence then, you have 2 distinct types of class objects
that are dictated by the implementation of the C++ object
model.

The concept of a POD was introduced mainly for reasons of
interfacing with C. Forget it for the moment. You have as many
types of class objects as the designer wishes. If you want just
a data struct, fine; I use them from time to time (and they
aren't necessarily POD's---it's not rare for my data struct's to
contain an std::string). If you want polymorphism, that's fine
too. If you want something in between, say a value type with
deep copy semantics, no problem.

There is NO restriction in C++ with regards to what you can do.
There are the limitations though: you can't have overloaded
constructors, for example, without losing POD-ness.

Obviously, given the particular role of PODs. So? What's your
point? There are ony two reasons I know for insisting on
something being a POD: you need to be able to use it from C as
well, or you need static initialization. Both mean that
construction must be trivial.
Or conversion operators (?).

A conversion operator doesn't affect POD-ness. In fact, POD
structures with a conversion operator are a common idiom for
certain types of initialization.
Or derivation from "interfaces" (?).

How is a C program going to deal with derivation? For that
matter, an interface supposes virtual functions and dynamic
typing; it's conceptually impossible to create a dynamically
typed object without executing some code.

You do have to know what you want. (And this has nothing to do
with the "design" of the C++ object model; it's more related to
simple possibilties. C++ does try to not impose anything
impossible to implement.)
Well maybe then "POD" is the hangup and I should have used
"lightweight" from the beginning. I just want the data portion
to remain intact while having the constructor overloads and
such.

I'm not sure what you mean by "the data portion to remain
intact". Taken literally, the data portion had better remain
intact for all types of objects. If you mean contiguous, that's
a different issue: not even POD's are guaranteed to have
contiguous data (since C doesn't guarantee it)---on many
machines (e.g. Sparcs, IBM mainframes...) that would introduce
totally unacceptable performance costs.

If anything, C++ specifies the structure of the data too much.
A compiler is not allowed to reorder data if there is no
intervening change of access, for example. If a programmer
writes:

struct S
{
char c1 ;
int i1 ;
char c2 ;
int i2 ;
} ;

for example, the compiler is not allowed to place the i1 and i2
elements in front of c1 and c2, despite the fact that this would
improve memory use and optimization.
Polymorphism I can probably do without, but deriving from
interfaces would be nice if possible.

If you have dynamic typing, some code must be executed when the
object is created; otherwise, there is no way later to know what
the dynamic type is.
Of course I want it to be called. By "POD-ness" I just meant I
want a struct-like consistency of the object data (with no
addition such as a vptr, for example).

I don't understand all this business of vptr. Do you want
polymorphism, or not. If you want polymorphism, the compiler
must memorize the type of the object (each object) somewhere,
when the object is created; C++ doesn't require it to be in the
object itself, but in practice, this is by far the most
effective solution. If you don't want polymorphism, and don't
declare any virtual functions, then the compiler doesn't have to
memorize the type of the object, and none that I know of do.

[...]
Then apparently I was using "POD" inappropriately. My concern
is the in-memory representation of the object data.

Which is implementation defined in C++, just as it was in C.
With some constraints; the compiler can insert padding (and all
do in some cases), but it cannot reorder non-static data members
unless there is an intervening change in access control. That,
and the fact that a class cannot have a size of 0, are about the
only restraints. C (and C++ for PODs) also have a constraint
that the first data element must be at the start of the object;
the compiler may not introduce padding before the first element.
If I'm not mistaken, the next version of the standard extends
this constraint to "standard-layout classes"; i.e. to classes
that have no virtual functions and no virtual bases, no changes
in access control, and a few other minor restrictions (but which
may have non-trivial constructors). This new rule, however,
does nothing but describe current practice.

[...]
Where can I read up on that?

In the current draft. I think
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf
should get it (or something close---the draft is still
evolving).
 
T

tonytech08

You're not making sense.  How does C++ have more support for OO
than for other idioms?

Why are you asking that when I said nothing of the sort? I said that
once you put a vptr into the data portion of an object (for example),
it's a different animal than a class without a vptr (for example!). I
distinguished these fundamentally different animals by calling them
"heavyweight" and "lightweight" classes/object (and apparently wrongly
POD-classes wrongly).

Moreso, I was concerned that other things one can do with a class,
such as defining overloaded constructors, may make code fragile
against some future or other current implementation of the language.
Who's to say (not me) that someone won't make a compiler that tacks on
or into a class object some other "hidden" ptr or something to
implement "overloaded constructors"? I don't care if code is generated
but I do care if the compiler starts aberrating the data portion.
And C++ supports all of that.  

But am I guaranteed that my a class will stay lightweight if I do that
or is it implementation defined?
I fail.to see what you're
complaining about.  In C++, a class is as heavyweight or as
lightweight as its designer wishes.  It is the class designer
who makes the choice, not the language.  

That's not the case with polymorphism for example: the vptr in the
data portion is not free. It changes the size of the object. I want to
know where that line of crossover is or how to keep clear of it
anyway.
More than anything
else, it is this which sets C++ off from other languages.


Yes it is.  Without behavior, all you have is raw memory.  C++
is a typed language, which means that objects do have behavior.
(I'm not talking necessarily of behavior in the OO sense here.
In C as well, object have behavior, and the set of operations on
an int is not the same as the set of operations on a float.)

Context matters. You may like to design thinking of objects FIRST as
the set of methods that work on the data. I, OTOH, prefer to think in
terms of the data FIRST (and more importantly because that's what's
going to end up somewhere external to the program).
C++ supports dynamic typing, if that's what you mean.  In other
words, the type of an object vary at runtime.  But I don't see
your point.  It is the designer of the class who decides whether
to use dynamic typing or not.  The language doesn't impose it.

It imposes "a penalty" the second you introduce the vptr. The class
becomes fundamentally and categorically different in a major way.
(Read: turns a lightweight class into a heavyweight one).
Not in a typed language.  If you want raw memory, C++ even
supports that.  Any object can be read as an array of unsigned
char.  Of course, the representation isn't always defined; are
int's 16, 36, 36, 48 or 64 bits? Are they 2's complement, 1's
complement or signed magnitude?

I agree that there are other hindrances to having an elegant
programming model. Sigh. That's not to say that one can't get around
them to a large degree. (Not the least of which is: define your
platform as narrowly as possible).
There's no strict binary division.  

A class with a vptr is fundamentally different than one without, for
example.
There are a number of
different classifications possible

The only ones I'm considering in this thread's topic though is the
lightweight/heavyweight ones.
---at the application level,
the distinction between value objects and entity objects is
important, for example (but there are often objects which don't
fit into either category).  In many cases, it certainly makes
sense to divide types into categories (two or more); in this
regard, about the only thing particular with "lightweight" and
"heavyweight" is that the names don't really mean anything.

I've described it at least a dozen times now, so if you don't get it,
then I'm out of ways to describe it.
Again: what support do you want?  You've yet to point out
anything that isn't supported in C++.

(Deriving from interface classes and maintaining the size of the
implementation (derived) class would be nice (but maybe impossible?)).

I am just trying to understand where the line of demarcation is
between lightweight and heavyweight classes is and how that can
potentially change in the future and hence break code.

And maybe there should be value types and entity types.  Or
maybe some other classification is relevant to your application.
The particularity of C++ is that it lets you choose.  The
designer is free to develop the categories he wants.  (If I'm
not mistaken, in some circles, these type of categories are
called stereotypes.)

I'm only talking about the two categories based upon the C++
mechanisms that change the data portion of the object. Deriving a
simple struct from a pure abstract base class will get you a beast
that is the size of the struct plus the size of a vptr. IOW: an
aberrated struct or heavyweight object. Call it what you want, it's
still fundamentally different.
    [...]
It restricts the use of OO concepts to classes designed to
be used with OO concepts.
Not really, since one can have POD classes with methods,
just not CERTAIN methods (you are suggesting that "classes
designed to be used with OO concepts" are those
heavyweight classes that break PODness, right?).
No.  I'm really not suggesting much of anything.  However you
define the concept of OO, the concept only applies to classes
which were designed with it in mind.  C++ doesn't force any
particular OO model, but allows you to chose.  And to have
classes which aren't conform to this model.
"Allows you to choose"? "FORCES you to choose" between
lightweight (POD) class design with more limited OO and and
heavyweight (non-POD) class design with all OO mechanisms
allowed but at the expense of losing POD-ness. It's a
compromise. I'm not saying it's a bad compromise, but I am
wondering if so and what the alternative implementation
possibilities are.
Obviously, you have to choose the appropriate semantics for
the class.  That's part of design, and is inevitable.  So I
don't see your point; C++ gives you the choice, without
forcing you into any one particular model.  And there aren't
just two choices.
The change occurs when you do something to a POD
("lightweight") class that turns the data portion of the class
into something else than just a data struct, as when a vptr is
added. Hence then, you have 2 distinct types of class objects
that are dictated by the implementation of the C++ object
model.

The concept of a POD was introduced mainly for reasons of
interfacing with C.  Forget it for the moment.  You have as many
types of class objects as the designer wishes.  If you want just
a data struct, fine; I use them from time to time (and they
aren't necessarily POD's---it's not rare for my data struct's to
contain an std::string).  If you want polymorphism, that's fine
too.  If you want something in between, say a value type with
deep copy semantics, no problem.

There is NO restriction in C++ with regards to what you can do.

Yes there is if you don't want the size of your struct to be it's size
plus the size of a vptr. If maintaining that size is what you want,
then you can't have polymophism. Hence, restriction.
Obviously, given the particular role of PODs.  So?  What's your
point?  

My point is that I'm worried about defining some overloaded
constructors and then finding (now or in the future) that my class
object is not "struct-like" anymore (read, has some bizarre
representation in memory).
There are ony two reasons I know for insisting on
something being a POD: you need to be able to use it from C as
well, or you need static initialization.  Both mean that
construction must be trivial.


A conversion operator doesn't affect POD-ness.  In fact, POD
structures with a conversion operator are a common idiom for
certain types of initialization.

Well that's good to know.
How is a C program going to deal with derivation?  For that
matter, an interface supposes virtual functions and dynamic
typing; it's conceptually impossible to create a dynamically
typed object without executing some code.

Code generation/execution is not what I'm worried about.
You do have to know what you want.  

I do.
(And this has nothing to do
with the "design" of the C++ object model;

It does.
it's more related to
simple possibilties.  C++ does try to not impose anything
impossible to implement.)

Base upon your comments, maybe polymorphism IS the only thing that
changes a class from lightweight to heavyweight. I'm not sure that
that can be relied upon with future implementations or even different
implementations of the language, for I think that the mechanisms are
mostly implementation defined.
I'm not sure what you mean by "the data portion to remain
intact".  

Derive a class and you have compiler baggage attached to the data
portion. If I ever instantiate a class object that has overloaded
constructors and find that the size of the object is different from
the expected size of all the data members (please don't bring up
padding and alignment etc), I'm going to be unhappy.
Taken literally, the data portion had better remain
intact for all types of objects.  If you mean contiguous, that's
a different issue: not even POD's are guaranteed to have
contiguous data (since C doesn't guarantee it)---on many
machines (e.g. Sparcs, IBM mainframes...) that would introduce
totally unacceptable performance costs.

If a platform is so brain-damaged that I can't do things to have a
high degree of confidence that the size of a struct is what I expect
it to be, then I won't be targeting that platform.
Other people can program "the exotics".
If anything, C++ specifies the structure of the data too much.
A compiler is not allowed to reorder data if there is no
intervening change of access, for example.  If a programmer
writes:

    struct S
    {
        char c1 ;
        int  i1 ;
        char c2 ;
        int  i2 ;
    } ;

for example, the compiler is not allowed to place the i1 and i2
elements in front of c1 and c2, despite the fact that this would
improve memory use and optimization.

And I think I have control over most of those things on a given
platform. Which is all fine with me, as long as I HAVE that control
(via compiler pragmas or switches or careful coding or whatever).
If you have dynamic typing, some code must be executed when the
object is created; otherwise, there is no way later to know what
the dynamic type is.

Again, I'm not worried about code generation/execution.
I don't understand all this business of vptr.  Do you want
polymorphism, or not.  

Yes, but without the vptr please (coffee without cream please).
If you want polymorphism, the compiler
must memorize the type of the object (each object) somewhere,
when the object is created; C++ doesn't require it to be in the
object itself, but in practice, this is by far the most
effective solution.

But what if just pure ABC derived classes were handled differently?
Then maybe the situation would be less bad.
 If you don't want polymorphism, and don't
declare any virtual functions, then the compiler doesn't have to
memorize the type of the object, and none that I know of do.

    [...]
Then apparently I was using "POD" inappropriately. My concern
is the in-memory representation of the object data.

Which is implementation defined in C++, just as it was in C.
With some constraints; the compiler can insert padding (and all
do in some cases), but it cannot reorder non-static data members
unless there is an intervening change in access control.

Well there's another example then of heavyweightness: sprinkle in
"public" and "private" in the wrong places and the compiler may
reorder data members. (I had a feeling there was more than the vptr
example).
 That,
and the fact that a class cannot have a size of 0, are about the
only restraints.  C (and C++ for PODs) also have a constraint
that the first data element must be at the start of the object;
the compiler may not introduce padding before the first element.

So you are saying that a non-POD does not have to have the first data
element at the start of the object. Example number 3 of
heavyweightness. (NOW we're getting somewhere!).
So "losing POD-ness" IS still "bad" and my assumed implication of that
and use of "POD-ness" seems to have been correct.
If I'm not mistaken, the next version of the standard extends
this constraint to "standard-layout classes"; i.e. to classes
that have no virtual functions and no virtual bases, no changes
in access control, and a few other minor restrictions (but which
may have non-trivial constructors).  This new rule, however,
does nothing but describe current practice.

So in the future I will be able to have overloaded constructors (I'm
not sure what exactly a "trivial" constructor is, but I assumed that
an overloaded one is not trivial) and still have lightweight classes,
good. That threat of a compiler not putting data at the front of non-
PODs is a real killer.
    [...]
Where can I read up on that?

In the current draft.  I thinkhttp://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf
should get it (or something close---the draft is still
evolving).

Downloaded it. Thx for the link.

Tony
 
J

James Kanze

Why are you asking that when I said nothing of the sort? I
said that once you put a vptr into the data portion of an
object (for example), it's a different animal than a class
without a vptr (for example!). I distinguished these
fundamentally different animals by calling them "heavyweight"
and "lightweight" classes/object (and apparently wrongly
POD-classes wrongly).

Well, OO is often used to signify the presence of polymorphism,
and the only time you'll get a vptr is if the class is
polymorphic.
Moreso, I was concerned that other things one can do with a
class, such as defining overloaded constructors, may make code
fragile against some future or other current implementation of
the language. Who's to say (not me) that someone won't make a
compiler that tacks on or into a class object some other
"hidden" ptr or something to implement "overloaded
constructors"? I don't care if code is generated but I do care
if the compiler starts aberrating the data portion.

Well, that's C++. And C. And Fortran, and just about every
other language I'm aware of. The only language I know which
specifies the exact format of any types is Java, and it only
does so for the built-in types.

So what's your point? Data layout is implementation defined,
period. That was the case in C, and C++ didn't introduce any
additional restrictions.
But am I guaranteed that my a class will stay lightweight if I
do that or is it implementation defined?

Everything is implementation defined. C++ inherits this from C,
and it was pretty much standard practice at the time C was
invented.

[...]
It imposes "a penalty" the second you introduce the vptr. The
class becomes fundamentally and categorically different in a
major way. (Read: turns a lightweight class into a
heavyweight one).

You're throwing around meaningless adjectives again. The
compiler has to implement dynamic typing somehow. You don't pay
for it unless you use it, and using a vptr is about the cheapest
implementation known.

[...]
I agree that there are other hindrances to having an elegant
programming model. Sigh. That's not to say that one can't get
around them to a large degree. (Not the least of which is:
define your platform as narrowly as possible).

That's a route C and C++ intentionally don't take. If there
exists a platform on which the language is not implementable,
it's pretty much considered a defect in the language.
A class with a vptr is fundamentally different than one
without, for example.

And a class with private data members is fundamentally different
from one with public data members. And a class with user
defined constructors is fundamentally different from one
without.
The only ones I'm considering in this thread's topic though is
the lightweight/heavyweight ones.

Without defining it or showing its relevance to anything at all.
(Deriving from interface classes and maintaining the size of
the implementation (derived) class would be nice (but maybe
impossible?)).

Not necessarily impossible, but it would make the cost of
resolving a virtual function call significantly higher. And
what does it buy you? You say it would be "nice", but you don't
explain why; I don't see any real advantage.
I am just trying to understand where the line of demarcation
is between lightweight and heavyweight classes is and how that
can potentially change in the future and hence break code.

There is no line of demarcation because there isn't really such
a distinction. It's whatever you want it to mean, which puts it
where ever you want.
I'm only talking about the two categories based upon the C++
mechanisms that change the data portion of the object.

Which in turn depends on the implementation, just as it did in
C.

Why do you care about the layout of the data anyway? There's
nothing you can really do with it.
Deriving a simple struct from a pure abstract base class will
get you a beast that is the size of the struct plus the size
of a vptr. IOW: an aberrated struct or heavyweight object.
Call it what you want, it's still fundamentally different.

A C style struct is different from a polymorphic class, yes.
Otherwise, there wouldn't be any point in having polymorphic
classes. The difference isn't any more fundamental than making
the data members private would be, however, or providing a user
defined constructor; in fact, I'd say that both of those were
even more fundamental differences.
    [...]
The change occurs when you do something to a POD
("lightweight") class that turns the data portion of the
class into something else than just a data struct, as when
a vptr is added. Hence then, you have 2 distinct types of
class objects that are dictated by the implementation of
the C++ object model.
The concept of a POD was introduced mainly for reasons of
interfacing with C.  Forget it for the moment.  You have as
many types of class objects as the designer wishes.  If you
want just a data struct, fine; I use them from time to time
(and they aren't necessarily POD's---it's not rare for my
data struct's to contain an std::string).  If you want
polymorphism, that's fine too.  If you want something in
between, say a value type with deep copy semantics, no
problem.
There is NO restriction in C++ with regards to what you can
do.
Yes there is if you don't want the size of your struct to be
its size plus the size of a vptr. If maintaining that size is
what you want, then you can't have polymorphism. Hence,
restriction.


What on earth are you talking about. C++ doesn't guarantee the
size of anything. (Nor does any other language.) If you need
added behavior which requires additional memory, then you need
added behavior which requires additional memory. That's not C++
talking; that's just physical reality.
My point is that I'm worried about defining some overloaded
constructors and then finding (now or in the future) that my
class object is not "struct-like" anymore (read, has some
bizarre representation in memory).

I'm not sure what you mean by "struct-like" or "some bizarre
representation in memory". The representation is whatever the
implementation decides it to be. Both in C and in C++. I've
had the representation of a long change when upgrading a C
compiler. On the platforms I generally work on, the
representation of a pointer depends on compiler options. And on
*ALL* of the platforms I'm familiar with, the layout of a struct
depends on compiler options, both in C and in C++.

The whole point of using a high level language, like C++ (or C,
or even Fortran) is that you're isolated from this
representation.
Code generation/execution is not what I'm worried about.

There's not much point in defining something that can't be
implemented.

[...]
Derive a class and you have compiler baggage attached to the
data portion.

Or you don't. Even without derivation, you've got "compiler
baggage" attached to the data portion. Both in C and in C++.
If I ever instantiate a class object that has overloaded
constructors and find that the size of the object is different
from the expected size of all the data members (please don't
bring up padding and alignment etc), I'm going to be unhappy.

Be unhappy. First, there is no "expected" size. The size of an
object varies from implementation to implementation, and depends
on compiler version and options within an implementation. And
second, I've yet to find anything to be gained by changing this.
If a platform is so brain-damaged that I can't do things to
have a high degree of confidence that the size of a struct is
what I expect it to be, then I won't be targeting that
platform. Other people can program "the exotics".

So what do you expect it to be?  You can't reasonably expect
anything.
And I think I have control over most of those things on a
given platform. Which is all fine with me, as long as I HAVE
that control (via compiler pragmas or switches or careful
coding or whatever).

And compilers have always been free (and always will be free) to
provide such controls. I've never found the slightest use for
them, but they're there. The standard intentionally doesn't
specify how to invoke the compiler, or what pragmas are
available, to achieve this, since there's nothing you can really
say which would make sense for all possible platforms.
Yes, but without the vptr please (coffee without cream
please).

You mean café au lait without any milk. If you have
polymorphism, it has to be implemented.
But what if just pure ABC derived classes were handled
differently? Then maybe the situation would be less bad.

Propose a solution. If you want polymorphism, the compiler must
maintain information about the dynamic type somehow. Whether
the base class is abstract or not doesn't change anything; an
object must somehow contain additional information. Additional
information means additional bits, which have to be stored
somewhere. The only alternative to storing them in the object
itself is somehow being able to recover them from the address of
the object. A solution which would seem off hand considerably
more expensive in terms of run-time. For practically no
advantage in return.
    [...]
Well there's another example then of heavyweightness: sprinkle
in "public" and "private" in the wrong places and the compiler
may reorder data members. (I had a feeling there was more than
the vptr example).

Yes, because C compatibility is no longer involved. It is
fairly clear that C actually went too far in this regard, and
imposed an unnecessary constraint which had negative effects for
optimization. Since most people would prefer faster programs
with smaller data, C++ did what it could to loosen this
constraint.

Can you give an example of reasonable code where this makes a
difference?
So you are saying that a non-POD does not have to have the
first data element at the start of the object.

Obviously. Where do you think that most compilers put the vptr?
Example number 3 of heavyweightness. (NOW we're getting
somewhere!). So "losing POD-ness" IS still "bad" and my
assumed implication of that and use of "POD-ness" seems to
have been correct.

OK. I'll give you example number 4: the size of a long
typically depends on compiler options (and the default varies),
so putting a long in a class makes it heavyweight. And is "bad"
according to your definitions.

I think you're being eminently silly.
So in the future I will be able to have overloaded
constructors (I'm not sure what exactly a "trivial"
constructor is, but I assumed that an overloaded one is not
trivial) and still have lightweight classes, good. That threat
of a compiler not putting data at the front of non-PODs is a
real killer.

I have no idea. I've not bothered really studying this in the
standard, because I don't see any real use for it. (I suspect,
in fact, that it is being introduced more as a means of wording
other requirements more clearly than for any direct advantages
that it may have.)
 
J

Jerry Coffin

(e-mail address removed), (e-mail address removed)
says...

[ ... ]
It would also be very difficult to make it work with static and
local objects; you'd probably need a fall-back solution for
those.

Locals would more or less require a fall-back solution. I don't see a
problem with statics. Another difficult one would be objects that are
inside of a struct/class, which requires that they be allocated in the
same order as they're declared. You could still do it, but space wasted
on padding could render it horribly impractical under the wrong
circumstances.
 
E

Erik Wikström

if i know how data is written i know how modify it
the same all other languages

if i have
32bits:8bits:64bits
i can write well on each of that

But since neither C, C++, Fortran, or just about any other language
guarantees a particular layout of a struct (or equivalent) you would
still need to use compiler specific methods to ensure that layout, both
in C, C++, Fortran, and just about any other language.

If you want full control of the placement of the bits (without using
platform dependent means) you should allocate a char array and perform
manipulations on that:

unsigned char data[13];

unsigned int*  i = reinterpret_cast<unsigned int*>(&data[0]);
unsigned char* c = &data[4];
unsigned long* l = reinterpret_cast<unsigned long*>(&data[5]);

Assuming int, char, and long are 32, 8, and 64 bits respectively, and
that your platform does not require aligned accesses (in which case you
need additional work---note too that such casts fall foul of the strict
aliasing rules, so memcpy() into and out of the array is the safer tool).
 
J

James Kanze

"James Kanze" <[email protected]> ha scritto nel messaggio
if i know how data is written i know how modify it the same
all other languages

The language provides the means to modify it. And as far as I
know, no language except Java specifies the actual data format,
and Java only does it for the basic types (and then, only
because there's no way a Java program can verify that the VM
actually follows it internally---the JVM for Windows doesn't,
for example).
if i have
32bits:8bits:64bits
i can write well on each of that

Sorry, I can't parse that sentence. Maybe if you'd give an
example of what you're trying to do?
 
T

tonytech08

    [...]
Thanks for reiterating my thought: C++ has more support
for OO with "full OO type objects".
More support than what.  
More support for OO with "heavyweight" classes than for
POD classes.
You're not making sense.  How does C++ have more support for
OO than for other idioms?
Why are you asking that when I said nothing of the sort? I
said that once you put a vptr into the data portion of an
object (for example), it's a different animal than a class
without a vptr (for example!). I distinguished these
fundamentally different animals by calling them "heavyweight"
and "lightweight" classes/objects (and, apparently wrongly, POD
classes).

Well, OO is often used to signify the presence of polymorphism,
and the only time you'll get a vptr is if the class is
polymorphic.
Moreso, I was concerned that other things one can do with a
class, such as defining overloaded constructors, may make code
fragile against some future or other current implementation of
the language.  Who's to say (not me) that someone won't make a
compiler that tacks on or into a class object some other
"hidden" ptr or something to implement "overloaded
constructors"? I don't care if code is generated but I do care
if the compiler starts aberrating the data portion.

Well, that's C++.  And C.  And Fortran, and just about every
other language I'm aware of.  The only language I know which
specifies the exact format of any types is Java, and it only
does so for the built-in types.

So what's your point?  Data layout is implementation defined,
period.  That was the case in C, and C++ didn't introduce any
additional restrictions.

My point is, apparently, that when a language arrives with "a bit
more" definition at that level, I will use it! But that's a bit
extreme, for one can "get away with" much stuff beyond "it's
implementation defined so it can't be done". If "platform" means
pretty much that one is tied to a single compiler in addition to the
hardware and OS, so be it. So much for such a lame definition of
"portability".
Everything is implementation defined.  C++ inherits this from C,
and it was pretty much standard practice at the time C was
invented.

I'm starting to think more seriously now about putting more effort
into design and evolution of a new language with more definition at
the implementation level to get away from "backward compatibility with
C and perpetuation of C-like paradigm". Much of the headaches with C-
like languages seems so unnecessary.
    [...]
It imposes "a penalty" the second you introduce the vptr. The
class becomes fundamentally and categorically different in a
major way.  (Read: turns a lightweight class into a
heavyweight one).

You're throwing around meaningless adjectives again.  

Nope. It's true.
The
compiler has to implement dynamic typing somehow.  You don't pay
for it unless you use it, and using a vptr is about the cheapest
implementation known.

It's not just vptrs. It's loss of layout guarantee when you introduce
such things as non-trivial constructors.
(I'm repeating myself again).
    [...]
I agree that there are other hindrances to having an elegant
programming model. Sigh. That's not to say that one can't get
around them to a large degree. (Not the least of which is:
define your platform as narrowly as possible).

That's a route C and C++ intentionally don't take.  If there
exists a platform on which the language is not implementable,
it's pretty much considered a defect in the language.

That's probably what C/C++ "portability" means: implementation of the
language. I'd opt for more designed-in portability at the programming
level in exchange for something less at the implementation level. Even
if that means "C++ for handhelds" or something similar.
And a class with private data members is fundamentally different
from one with public data members.  And a class with user
defined constructors is fundamentally different from one
without.

That's a problem, IMO.
Without defining it or showing its relevance to anything at all.

How is it not relevant to want to know what data looks like in memory?
C++ claims to be low level/close to the hardware and a programmer
can't even know what a struct looks like in memory?
Not necessarily impossible, but it would make the cost of
resolving a virtual function call significantly higher.  And
what does it buy you?  You say it would be "nice", but you don't
explain why; I don't see any real advantage.

No advantage to knowing that a struct size will be the same across
compilers?
There is no line of demarcation because there isn't really such
a distinction.  It's whatever you want it to mean, which puts it
where ever you want.

It means that one must avoid pretty much all of the OO features of the
language if one wants to have a clue about what their class objects
look like in memory.
Which in turn depends on the implementation, just as it did in
C.

C may have other things that you are referring to, what you are
implying, I don't know. I was concerned about the C++ object model
features, most of which cause any semblance of data layout
knowledge in memory to get thrown out the window.
Why do you care about the layout of the data anyway?  There's
nothing you can really do with it.

Why do you keep saying that? Of course one can do things with data in
memory. It's all just memory and CPU pretty much, afterall. That's
what "close to the hardware" means to me.
A C style struct is different from a polymorphic class, yes.
Otherwise, there wouldn't be any point in having polymorphic
classes.  The difference isn't any more fundamental than making
the data members private would be, however, or providing a user
defined constructor; in fact, I'd say that both of those were
even more fundamental differences.

That, of course, is the whole point of this thread. The polymorphism
case is just the easiest one to talk about because the common
implementation (vptr) is well known.
    [...]
The change occurs when you do something to a POD
("lightweight") class that turns the data portion of the
class into something else than just a data struct, as when
a vptr is added. Hence then, you have 2 distinct types of
class objects that are dictated by the implementation of
the C++ object model.
The concept of a POD was introduced mainly for reasons of
interfacing with C.  Forget it for the moment.  You have as
many types of class objects as the designer wishes.  If you
want just a data struct, fine; I use them from time to time
(and they aren't necessarily POD's---it's not rare for my
data struct's to contain an std::string).  If you want
polymorphism, that's fine too.  If you want something in
between, say a value type with deep copy semantics, no
problem.
There is NO restriction in C++ with regards to what you can
do.
Yes there is if you don't want the size of your struct to be
its size plus the size of a vptr. If maintaining that size is
what you want, then you can't have polymorphism. Hence,
restriction.

What on earth are you talking about.  C++ doesn't guarantee the
size of anything.

Not if you take the "across the whole universe" approach. I can make
enough simplifying assumptions, though, that will allow me to write
out, byte-wise, a struct and then read it back in with confidence that
I'll get back the same thing I wrote out. That doesn't apply to those
few projects where the goal is to be portable to everything from the
abacus to the Cray 3001.
 (Nor does any other language.)  If you need
added behavior which requires additional memory, then you need
added behavior which requires additional memory.  That's not C++
talking; that's just physical reality.

That's only one paradigmatic view of programming. It's also a very
tedious one. I use C++ "in the mean time" (until something better
comes along). Until then, I'll shoe horn it to make it fit as much as
possible.
I'm not sure what you mean by "struct-like" or "some bizarre
representation in memory".  

Containing only the data members _I_ specified rather than additional
compiler-introduced stuff.
The representation is whatever the
implementation decides it to be.  Both in C and in C++.  I've
had the representation of a long change when upgrading a C
compiler.

But "long" seems obsolete to me for that very reason. Use width-
specified ints and test them at compile time.
 On the platforms I generally work on, the
representation of a pointer depends on compiler options.

That is not a problem though because lightweight classes shouldn't
have pointers in them. I don't require a guarantee on pointer size. My
compiler flags attempts to do non-portable things with pointers (that
is, 32-bit to 64-bit platform portability).
 And on
*ALL* of the platforms I'm familiar with, the layout of a struct
depends on compiler options, both in C and in C++.

Which is fine, as long as you decide on what you can live with. To
program in C/C++ and not force some things to simplify is being way
too anal about portability. My motto: target your platform as narrowly
as possible. Consider things outside of that narrowly defined
platform, a separate product.
The whole point of using a high level language, like C++ (or C,
or even Fortran) is that you're isolated from this
representation.

If I wanted separation from "closeness to the hardware", I'd use Java
or something similar.
There's not much point in defining something that can't be
implemented.

Who suggested that?
    [...]
Derive a class and you have compiler baggage attached to the
data portion.

Or you don't.  Even without derivation, you've got "compiler
baggage" attached to the data portion.  Both in C and in C++.

C++'s baggage is a "deal breaker" though because one doesn't know what
that baggage is. It could be reordering of the data members or crap
inserted in front, behind or in between the data members.
Be unhappy.  First, there is no "expected" size.  The size of an
object varies from implementation to implementation, and depends
on compiler version and options within an implementation.  And
second, I've yet to find anything to be gained by changing this.

It's gotten around pretty easily. Yes, you're right, that there is no
"portability.h" file in the standard, but that's not to say that they
are not commonly built up on top of the loosely-defined standard and
implementations.
So what do you expect it to be?  You can't reasonably expect
anything.

Of course I can and I do. I don't think anyone really expects to use
C++ "bare", that is, without "customizing" it to a useful level with
such things as an in-house "portability.h". Require 64-bit ints? Test for
it and if not there, provide an implementation or bail. Don't like the
endianness of the machine? Handle it or bail.
And compilers have alway been free (and always will be free) to
provide such controls.  I've never found the slightest use for
them, but they're there.

I wouldn't expect YOU to because you seem to have a paradigm or
preference to view a program as behavior only and "structured data be
damned". That is not my view though.
 The standard intentionally doesn't
specify how to invoke the compiler, or what pragmas are
available, to achieve this, since there's nothing you can really
say which would make sense for all possible platforms.

I don't need portability to every possible platform and it is not
practical to program as if you need that all of the time when one
never does. C++ has the goal of being implementable to all platforms
but that is separate from "applied C++" (developing programs for a
target platform(s)).
You mean café au lait without any milk.  If you have
polymorphism, it has to be implemented.

Again, I just used polymorphism as an example. I'm not really
concerned about that OO feature right now, but am about the features
that more subtly change or potentially change things (read, pull the
rug out from under you). There are classes that I may want
polymorphism but avoid it simply because that would change the class
from lightweight to heavyweight.
Propose a solution.  

I'm not a compiler writer. But I'll bet there are a bunch of people here
that could probably come up with something (maybe, I dunno).
If you want polymorphism, the compiler must
maintain information about the dynamic type somehow.  Whether
the base class is abstract or not doesn't change anything; an
object must somehow contain additional information.  Additional
information means additional bits, which have to be stored
somewhere.  The only alternative to storing them in the object
itself is somehow being able to recover them from the address of
the object.  A solution which would seem off hand considerably
more expensive in terms of run-time.  For practically no
advantage in return.
I haven't analyzed it. Maybe though the pure ABC (interfaces) concept
should not be implemented with the same machinery as non-pure ABCs
(interfaces). Maybe there should be an interface keyword. I dunno. All
I know is that at this time, if I derive from a pure ABC that my class
object will have a vptr tacked onto it.
    [...]
Well there's another example then of heavyweightness: sprinkle
in "public" and "private" in the wrong places and the compiler
may reorder data members. (I had a feeling there was more than
the vptr example).

Yes, because C compatibility is no longer involved.  It is
fairly clear that C actually went too far in this regard, and
imposed an unnecessary constraint which had negative effects for
optimization.  Since most people would prefer faster programs
with smaller data, C++ did what it could to loosen this
constraint.

Bad choice IMO. Has me chomping at the bit for "a much better C++".
Can you give an example of reasonable code where this makes a
difference?

Yes, but I'd rather not. It should be clear that I don't want to view
all classes like interfaces (pure ABCs). If data is public, I want to
manipulate it directly sometimes as bits and bytes.
Obviously.  Where do you think that most compilers put the vptr?

That eliminates some programming scenarios then and is why I
consciously consider whether to design a class as a lightweight one or
heavyweight one.
OK.  I'll give you example number 4: the size of a long
typically depends on compiler options (and the default varies),
so putting a long in a class makes it heavyweight.  

No that doesn't make it a heavyweight class because I can control
that: use width-specific ints and test for the assumption at compile
or runtime.
And is "bad"
according to your definitions.

No, it's only bad when I have no control of it. I can't control the
bizarre contortions a compiler may put my pristine data structs
through as when it does when I choose to use OO features such as
constructors, derivation etc.
I think you're being eminently silly.

That doesn't bother me in the least.
I have no idea.  I've not bothered really studying this in the
standard, because I don't see any real use for it.  (I suspect,
in fact, that it is being introduced more as a means of wording
other requirements more clearly than for any direct advantages
that it may have.)

I haven't read it yet, but it would appear that, just going by the
name, there is some recognition that other people want the guarantees
I would like to have and have been bringing up in this thread.
 
T

tonytech08

Maybe though the pure ABC (interfaces) concept
should not be implemented with the same machinery as non-pure ABCs
(interfaces).

I meant to write: Maybe though, the pure ABC (interfaces) concept
should not be implemented with the same machinery as non-interface
classes.
 
J

James Kanze

On Nov 9, 3:45 am, James Kanze <[email protected]> wrote:

[...]
My point is, apparently, that when a language arrives with "a bit
more" definition at that level, I will use it!

Why? What does it buy you (except slower code and increased
memory use)?
But that's a bit extreme, for one can "get away with" much
stuff beyond "it's implementation defined so it can't be
done". If "platform" means pretty much that one is tied to a
single compiler in addition to the hardware and OS, so be it.
So much for such a lame definition of "portability".

C++ is designed so that you can do non-portable things; they're
sometimes necessary at the lowest levels (e.g. like writing OS
kernel code). It makes it quite clear what these are, though,
and if you don't use them, your code should be reasonably
portable.
It's not just vptrs. It's loss of layout guarantee when you
introduce such things as non-trivial constructors. (I'm
repeating myself again).

You're repeating false assertions again, yes. You have no
layout guarantees, period. Even in a POD, this goes back to C.
There are no layout guarantees that could be made portable
without leading to an excessive loss of performance.
That's probably what C/C++ "portability" means: implementation
of the language. I'd opt for more designed-in portability at
the programming level in exchange for something less at the
implementation level. Even if that means "C++ for handhelds"
or something similar.

But you've yet to show how defining things any further would
improve portability.

[...]
That's a problem, IMO.

Why? If you want classes to be different, the language provides
the means.
How is it not relevant to want to know what data looks like in
memory?

How is it relevant? What can you do with this information?
C++ claims to be low level/close to the hardware and a
programmer can't even know what a struct looks like in memory?

A programmer does know for a specific implementation, if he
needs to. But there's no way that it could be the same for all
implementations, so it has to be implementation defined.

[...]
No advantage to knowing that a struct size will be the same
across compilers?

It's not physically possible for the struct size to be the same
across compilers, given that the size of a byte varies from one
platform to the next. Even on the same platform, I fail to see
any real advantage.
It means that one must avoid pretty much all of the OO
features of the language if one wants to have a clue about
what their class objects look like in memory.

If you want something portable, guaranteed by the standard, you
have to avoid even int and double, since what they look like in
memory isn't defined by the standard, and varies in practice
between platforms. If you're willing to accept implementation
defined, then you can have as much of a clue as the implementor
is willing to give you---I know exactly how my class objects are
laid out with G++ on a Sparc 32 bits, for example. Even in
cases where virtual inheritance is involved. I've never found
any practical use for this information, however (but I might if
I were writing a debugger, or something similar).

[...]
C may have other things that you are referring to, what you are
implying, I don't know. I was concerned about the C++ object
model features, most of which cause any semblance of
data layout knowledge in memory to get thrown out the window.

C++ doesn't allow anything that isn't allowed in C. Nothing
gets thrown out the window. (Regretfully, because that means we
pay a price for something that is of no real use.)
Why do you keep saying that? Of course one can do things with
data in memory.

Such as? If there's something you can do with it, tell me about
it.
[...]
Yes there is if you don't want the size of your struct to
be it's size plus the size of a vptr. If maintaining that
size is what you want, then you can't have polymophism.
Hence, restriction.
What on earth are you talking about. C++ doesn't guarantee
the size of anything.
Not if you take the "across the whole universe" approach. I
can make enough simplifying assumptions, though, that will
allow me to write out, byte-wise, a struct and then read it
back in with confidence that I'll get back the same thing I
wrote out.

From within the same executable, maybe. But that's about it.
If you write data in an unknown format, you can't guarantee that
it will be readable when you upgrade the machine. Or the
compiler. (At least one compiler changed the format of a long
between versions, and the size of a long generally depends on
compiler options, so may change anytime you recompile the code.
And unless I'm badly mistaken---I've no actual experience with
the platform---Apple changed the format of everything but the
character types at one time.)
That doesn't apply to those few projects where the goal is to
be portable to everything from the abacus to the Cray 3001.

It doesn't apply to those projects which have to reread the data
after a hardware upgrade. Or a compiler upgrade. Or even if
the code is recompiled under different conditions.
That's only one paradigmatic view of programming. It's also a
very tedious one.

Reality can be tedious.

[...]
Containing only the data members _I_ specified rather than
additional compiler-introduced stuff.

That's not the case in C. Luckily, because if it were, my code
would run almost an order of magnitude slower.
But "long" seems obsolete to me for that very reason. Use
width-specified ints and test them at compile time.

Width-specified isn't sufficient. Width isn't the only aspect
of representation.
Who suggested that?

You said that you wanted to only get what you'd written in the
struct. That would make an implementation with reasonable
performance impossible on some architectures. Like a Sparc, or
an IBM mainframe. Or many, many others.
[...]
Or you don't. Even without derivation, you've got "compiler
baggage" attached to the data portion. Both in C and in
C++.
C++'s baggage is a "deal breaker" though because one doesn't
know what that baggage is.

Nor in C, nor in any other language.
It could be reordering of the data members or crap inserted in
front, behind or in between the data members.

Except for reordering the members (which is very limited
anyway), C pretty much offers the same liberties. Most
languages offer the liberty to completely reorder members,
because of the space and performance improvements which can
result.

[...]
Yes, but I'd rather not. It should be clear that I don't want
to view all classes like interfaces (pure ABCs). If data is
public, I want to manipulate it directly sometimes as bits and
bytes.

Why? That's what I don't see. If I need to manipulate bits and
bytes (say to implement a transmission protocol), then I
manipulate bits and bytes. And not struct's; there's no
reasonable way struct's can be used at this level, without
breaking them for all other uses.
 
T

tonytech08

On Nov 9, 3:45 am, James Kanze <[email protected]> wrote:

    [...]
My point is, apparently, that when a language arrives with "a bit
more" definition at that level, I will use it!

Why?  What does it buy you (except slower code and increased
memory use)?

An elegant programming model.
    [...]
It's not just vptrs. It's loss of layout guarantee when you
introduce such things as non-trivial constructors.  (I'm
repeating myself again).

You're repeating false assertions again, yes.  You have no
layout guarantees, period.  Even in a POD; this goes back to C.
There are no layout guarantees that could be made portable
without leading to an excessive loss of performance.

I have guarantees because I define the target platform. C++ is not
"ready to use right out of the box". It has to be adapted for use in a
given environment. (Unless one is a masochist).
    [...]
That's probably what C/C++ "portability" means: implementation
of the language. I'd opt for more designed-in portability at
the programming level in exchange for something less at the
implementation level. Even if that means "C++ for handhelds"
or something similar.

But you've yet to show how defining things any further would
improve portability.

Currently, C++ isn't portable to anything: it leaves too much
undefined and puts the burden on the developer to make it work. The
developer does the porting.
    [...]
That's a problem, IMO.

Why?  If you want classes to be different, the language provides
the means.

Nah. It's very restrictive unless you want heavyweight classes. I
found a template that generates code to allow interfaces without a
vptr in the data portion of the object, BTW. It's like 10 years old, so
obviously others have found issue with the single C++ implementation
also.
How is it relevant?  What can you do with this information?

Manipulate the data/bytes in a "physical" way.
A programmer does know for a specific implementation, if he
needs to.  But there's no way that it could be the same for all
implementations, so it has to be implementation defined.

And that's why I've said that there are guarantees. And I take
advantage of them. Who cares about "portability to every platform"?
That's impractical.
    [...]
No advantage to knowing that a struct size will be the same
across compilers?

It's not physically possible for the struct size to be the same
across compilers,

Yes it is.
given that the size of a byte varies from one
platform to the next.  

It may, but surely doesn't have to and isn't even the common case.
Trying to have a single codebase that spans very diverse platforms is
ill-fated.
Even on the same platform, I fail to see
any real advantage.
Okies.


If you want something portable, guaranteed by the standard, you
have to avoid even int and double, since what they look like in
memory isn't defined by the standard, and varies in practice
between platforms.  If you're willing to accept implementation
defined, then you can have as much of a clue as the implementor
is willing to give you---I know exactly how my class objects are
laid out with G++ on a 32-bit Sparc, for example.  Even in
cases where virtual inheritance is involved.  I've never found
any practical use for this information, however (but I might if
I were writing a debugger, or something similar).

I wish you would stop with the useless definition of portability
already. Developers program to a platform(s), not to a language
standard that has little to say about the portability issues and does
nothing to address them.
    [...]
C may have other things that you are referring to, or are
implying, I don't know. I was concerned about the C++ object
model features, most of which cause any semblance of
data layout knowledge in memory to get thrown out the window.

C++ doesn't allow anything that isn't allowed in C.  Nothing
gets thrown out the window.

It does, but it will probably take the arrival of "a better C++" to
make that obvious.
 (Regretfully, because that means we
pay a price for something that is of no real use.)


Such as?  If there's something you can do with it, tell me about
it.

I'm not going to answer that, you're being facetious.
    [...]
Yes there is if you don't want the size of your struct to
be its size plus the size of a vptr. If maintaining that
size is what you want, then you can't have polymorphism.
Hence, restriction.
What on earth are you talking about.  C++ doesn't guarantee
the size of anything.
Not if you take the "across the whole universe" approach. I
can make enough simplifying assumptions, though, that will
allow me to write out, byte-wise, a struct and then read it
back in with confidence that I'll get back the same thing I
wrote out.

From within the same executable, maybe.  But that's about it.
If you write data in an unknown format, you can't guarantee that
it will be readable when you upgrade the machine.

Who said it was unknown? Sure if you blindly take one of those
heavyweight class objects and write it out without serializing it,
then who knows what kinds of stuff you may be writing out: vptrs etc.
The point is that those heavyweight classes WOULD leave you with no
knowledge about the data AND that the whole "use a different compiler"
scenario gets more risky (more risky to the point where anything but
serializing is never a choice).
 Or the
compiler.  (At least one compiler changed the format of a long
between versions, and the size of a long generally depends on
compiler options, so may change anytime you recompile the code.
And unless I'm badly mistaken---I've no actual experience with
the platform---Apple changed the format of everything but the
character types at one time.)


It doesn't apply to those projects which have to reread the data
after a hardware upgrade.  Or a compiler upgrade.  Or even if
the code is recompiled under different conditions.

If you've not built anything on top of the base language and are using
C++ right out of the box and expect things to just magically take care
of themselves, of course it's not going to work. Who's being silly
now? Everyone and their mother knows the issues of width, alignment,
padding, endianness, floating point representation... etc. If you use
a sharp knife, you have to be careful with it.
Reality can be tedious.

C++ is not the end-all of programming. Nor is there only one way to
use it.
    [...]
Containing only the data members _I_ specified rather than
additional compiler-introduced stuff.

That's not the case in C.  

Oh, so my struct of two 32-bit wide integers is really not 8 bytes
even when sizeof() tells me so?
Luckily, because if it were, my code
would run almost an order of magnitude slower.

You're being facetious again (or defensive). You seem to be saying
something like: "If I don't manually align data and don't give the
compiler the opportunity to do so, that causes my specific
application's section of code to tank". Well "duh". Don't do that if
it hurts. Realize too that that's not the general case: neither will
the data necessarily be misaligned just because one turned off
padding, nor is every, or even the majority of, application code
performance critical. (It's an analogy only. I'm using another level
of indirection to get "interfaces" without using inheritance. There
would have to be a performance PROBLEM before I would give up clean
code and an elegant programming model).
Width-specified isn't sufficient.  Width isn't the only aspect
of representation.

Stop being facetious (or insinuating).
You said that you wanted to only get what you'd written in the
struct.  That would make an implementation with reasonable
performance impossible on some architectures.

It depends on your application. But why penalize the common case for
the special case?
 Like a Sparc, or
an IBM mainframe.  Or many, many others.

You seem to have a "it's got to be either black or white" mindset.
There are many possibilities for language design and many potential
goals.
    [...]
Or you don't.  Even without derivation, you've got "compiler
baggage" attached to the data portion.  Both in C and in
C++.
C++'s baggage is a "deal breaker" though because one doesn't
know what that baggage is.

Nor in C, nor in any other language.

I don't have an issue with C structs. I have issues with heavyweight
C++ classes (not really a problem though, because I AVOID heavyweight
classes in certain scenarios, but it does leave me longing for
something more elegant).
Except for reordering the members (which is very limited
anyway), C pretty much offers the same liberties.

As long as I know what it is and I have control over it, I'm a happy
camper. C++ makes the situation worse, to the point of having to avoid
major OO features because of it.
 Most
languages offer the liberty to completely reorder members,
because of the space and performance improvements which can
result.

I won't allow it in my language.
    [...]
Yes, but I'd rather not. It should be clear that I don't want
to view all classes like interfaces (pure ABCs). If data is
public, I want to manipulate it directly sometimes as bits and
bytes.

Why?  That's what I don't see.  If I need to manipulate bits and
bytes (say to implement a transmission protocol), then I
manipulate bits and bytes.  And not struct's; there's no
reasonable way struct's can be used at this level, without
breaking them for all other uses.

Then maybe there is something missing. ("layout classes"?). I don't
believe your last sentence is true.
 
