sizeof() calculated at compile time or run time

S

shnsfz

gotta question on sizeof keyword

does the sizeof keyword calculate the size at compile time or at run time??


sizeof() calculates the size at compile time

a C99 variable-length array is the exception: its size isn't known until run time, so sizeof has to measure it at run time

just like vector.size()
 
G

gpderetta

It's far more difficult (if not impossible) in C++ because C++
doesn't allow compacting garbage collection.  You can't move
objects around in C++.

Hum... if the compiler can prove statically that the lifetime of a
heap allocation is restricted to a lexical scope, it certainly could.
It doesn't require a compacting GC, nor a GC at all.

Think for example of a stack-allocated std::string: its internal
buffer could be stack allocated as well. I do not think it requires
any extraordinary work for the compiler to figure it out.
 
J

James Kanze

Hum... if the compiler can prove statically that the lifetime
of a heap allocation is restricted to a lexical scope, it
certainly could. It doesn't require a compacting GC, nor a GC
at all.

That's another way of obtaining the performance. If the
compiler can determine a maximum size, it wouldn't even have to
use dynamic allocation. Unless the compiler has built-in
knowledge of std::vector, however, it's very difficult to do.
Think for example of a stack-allocated std::string: its
internal buffer could be stack allocated as well. I do not
think it requires any extraordinary work for the compiler to
figure it out.

Again, it depends. With knowledge of std::string built into the
compiler, it wouldn't be too difficult if the object were never
passed to an external function. Even without built-in
knowledge, it would be possible. Once a reference to the string
"escapes" from the function, however, it becomes much, much
harder.
 
G

gpderetta

That's another way of obtaining the performance.  If the
compiler can determine a maximum size, it wouldn't even have to
use dynamic allocation.  Unless the compiler has built-in
knowledge of std::vector, however, it's very difficult to do.

It only needs to 'see' inside the std::vector and find the call to
'new' and the companion call to 'delete'.
Again, it depends.  With knowledge of std::string built into the
compiler, it wouldn't be too difficult if the object was never
passed to an external function.  

It only needs built-in knowledge of the memory allocation primitives
(something I'm sure most compilers already have). If by static
analysis it can prove that a memory allocation is always freed at the
end of the scope, and it can prove that the allocation size cannot
exceed a certain value (or it has a fallback mechanism for that), then
it can convert a call to new into the equivalent of alloca + placement new.
Even without built in
knowledge, it would be possible. Once a reference to the string
"escapes" from the function, however, it becomes much, much
harder.

Of course, this is why you need escape analysis.
 
J

James Kanze

[...]
It only needs to 'see' inside the std::vector and find the call
to 'new' and the companion call to 'delete'.

And verify, of course, that no other function called modifies
the pointer. Except, of course, that in most uses, the call to
new isn't in the constructor. The real optimization problem
with std::vector is making a sequence of successive push_back
calls fast.
It only needs built-in knowledge of memory allocation primitives
(something I'm sure most compilers already have).

It needs to ensure that no other functions modify the pointer.
A lot do, and there are generally a lot of indirections involved
in calling them.
If by static analysis it can prove that a memory allocation is
always freed at the end of scope and it can prove that the
allocation size cannot exceed a certain value (or has a
fallback mechanism for that), then it can convert a call to
new in the equivalent of alloca+placement new.

The key here (and with std::vector) is proving, first, that the
allocation size cannot exceed a certain value, and second, that
doing just one allocation of this value, rather than the
allocate/copy/free sequence which occurs when the capacity is
increased, doesn't change the guaranteed semantics. The second
point is what requires internal knowledge: the user may have
replaced the global operator new (making calls to the function
"observable behavior"), and in the case of std::vector, the copy
constructors may have side effects. The transformation is
nevertheless legal, because the standard explicitly allows
std::string and std::vector to have any capacity they want after
construction, but there's nothing in the code that indicates
this.

The problem with std::string is simpler, because the compiler
knows that copying char or wchar_t has no observable side
effects, so one part of the problem is solved. And I'm sure
that an optimization which ignored possible "observable
behavior" in operator new would be acceptable (although it
should be documented as "not strictly conforming").
Of course, this is why you need escape analysis.

Which generally means intermodule analysis; most people pass
strings and vectors by reference, and not by value. (And even
if the reference is const, if the original object is non-const,
the called function may legally cast away const and modify the
object anyway.)
 
A

Andrey Tarasevich

Francis said:
Sounds like premature optimization to me. Humans are notoriously bad at
anticipating where the inefficiencies will be in their code. Write it to
be readable and maintainable, then if it performs poorly, profile it,
and optimize the parts the performance profile says need optimizing.

There's no such thing as abstract "premature optimization", i.e. there's
no such thing as "premature optimization" outside the context of a
concrete piece of code.

To say that C++ doesn't need VLAs or stack allocation mechanism because
of "premature optimization" considerations is no different than to say
that the whole C++ is just one big "premature optimization" compared to,
say, Java.
 
A

Andrey Tarasevich

Vidar said:
My conclusion is that the VLA feature is more of an allocation
mechanism than a type.

Well, yes. VLAs are a typed allocation mechanism that is free of the
problems inherent to 'alloca'.
Rather than adopt VLAs, i.e. another irregular
array type, I think C++ should rather focus on a dynamic stack
allocation mechanism and find a way to make that accessible to any
type.

Actually, VLAs, among other things, are an attempt to work around some
problems inherent to explicit allocation mechanisms. VLAs are objects
that need to be _declared_, thus restricting the points of actual
allocation to the set of locations where declarations are permitted by
the language. "Allocation mechanisms" on the other hand, are not usually
tied to declarations, but most of the time are accessible within
_expressions_ (i.e. virtually anywhere), which can easily lead to
problems with allocating stack space, as when 'alloca' is called in
a function call argument list

foo(<smthng>, alloca(42), <smthng>)

Of course, one can argue that the problems that arise in cases like that
are inherent to the "traditional" implementation of stack frame
management and that some alternative approach can be used to solve them.
Although apparently there's no good alternative approach that would
maintain the [fortunately] still-valued efficiency of the code.
 
V

Vidar Hasfjord

That's not true in C, although there are some restrictions.

No. Declarations of VLAs must be at either block scope or function
prototype scope. A VLA cannot be extern, static or a member. See
below.
It can certainly be aggregated in C, although it must be the
last element in a structure.  

No. That is a flexible array member (FAM); also introduced in C99.
Flexible array members allow incomplete array struct members without
bounds; and must be the last member as you say. See

http://www.comeaucomputing.com/techtalk/c99
No.  It has an array type.


Not at all, although from another posting, I gather that that is
what Alf really wants: not VLAs, but a type-safe alloca.
(Something which bears the same relation to alloca that the new
operator bears to malloc.  Perhaps a special placement new?)


Why?  What does that buy us?

Not much... :)

My point is simply that if such an optimized allocation feature is
found to be wanted for C++, then C++ should look for better (more
regular) ways of achieving it. Automatic compiler optimization would
of course be the ideal.

Regards,
Vidar Hasfjord
 
J

James Kanze

No. Declarations of VLAs must be at either block scope or
function prototype scope. A VLA cannot be extern, static or a
member.

Yes. The C standard seems to distinguish between arrays with a
variable length, used as local arguments, and arrays with a
variable length placed at the end of a struct; in the latter
case, it considers the array to have an incomplete type (which
it calls a flexible array member), rather than being a "variable
length array". I hadn't noticed that before; in my mind, they
were both variable length arrays (since both are used as
variable length arrays).

This has a number of repercussions, and definitely simplifies the
adaptation in C++, since most of the problems concern flexible
array members, and not what C calls variable length arrays.
Off hand, I don't see any problems with adopting this
definition of VLA directly in C++, without any changes from C.
But I imagine there are some; otherwise, I can't see why the
committee didn't do it. Flexible arrays pose all sorts of
problems with regards to inheritance, constructors and
destructors, assignment, etc. (Come to think of it, how does C
handle the problem of assignment?) Interestingly enough, sizeof
is a constant expression when applied to a struct with a
flexible array. Logically, this would suggest that sizeof
should always return 0 when applied to a VLA, but I guess the C
committee wanted to maintain the use of sizeof in array
addressing in the case of an array of VLAs.

It's also a question why the C committee didn't allow something
like:

struct A { int i; int j; double a[j]; } ;

There's no reason this can't be made to work, as long as it's
not an object with static lifetime, and it would seem very
useful.
See below.
No. That is a flexible array member (FAM); also introduced in
C99. Flexible array members allow incomplete array struct
members without bounds; and must be the last member as you
say. See

So I see. I don't know why; I've always thought of them as more
or less the same thing. (Actually, I do know why; I was
influenced by another, very influential member of the C++
committee. Which may explain why the integration of VLAs
wasn't considered.)

Anyway, this point changes my opinion completely with regards to
VLAs. Without the problems which allowing them in a class
introduces, simple C compatibility should be enough of a reason.
(The traditional rule is "as close to C as possible, but no
closer". Unless there's a good reason for not being compatible
with C, we should be.)
Not much... :)

On the other hand, as long as the concepts of VLAs and flexible
arrays are well separated in C, the effort to add VLAs to C++
wouldn't have gone much beyond copy/paste. And they are useful
in the case of multidimensional arrays (which, curiously, no one
in this thread has mentioned as a motivation): using
std::vector for a multidimensional array (except within the
implementation of your own multidimensional array class) is
exceedingly painful, and std::valarray doesn't seem to be widely
used either.

Of course, there is the problem that C style arrays are broken
to begin with. But off hand, even ignoring the performance (and
static vs. dynamic initialization issues), I don't see any good
alternative for multidimensional arrays. (But maybe I just
haven't looked hard enough. In my own work, I don't need them.)
My point is simply that if such an optimized allocation
feature is found to be wanted for C++, then C++ should look
for better (more regular) ways of achieving it. Automatic
compiler optimization would of course be the ideal.

I don't think that the optimization is that much of an issue.
Modern allocators can be very fast, and various other
considerations make variable stack allocation more and more
difficult. The real issue, IMHO, is multidimensional arrays.
Because let's face it:
std::vector< std::vector< double > >
a( 10, std::vector< double >( 20 ) ) ;
instead of:
double a[ 10 ][ 20 ] ;
really isn't very programmer friendly. (Also, of course, the
former cannot easily be passed to other languages, which
generally do have multidimensional arrays, and lay them out
contiguously in memory.)
 
