JKop
Consider a simple POD:
struct Blah
{
    int a;
    const char* p_b;
    unsigned c;
    double d;
};
Now, let's say we want to create a "zero-initialized" object of this class,
yielding:
a == 0
p_b == null pointer value (not necessarily all bits zero)
c == 0U
d == 0.0
The only way to achieve this is via:
Blah poo = Blah();
...which, ridiculously, ludicrously, permits the compiler to create a temporary and copy from it if it wishes.
Okay... so let's say we have a template. This template is a function:
template<class T> void Monkey();
This function defines an automatic local object of type T and makes it zero-initialized.
template<class T> void Monkey()
{
    T poo = T();
}
But... this template function is designed to work with ALL types (intrinsics, PODs, non-PODs, what have you). But then someone goes and does:
Monkey<std::stringstream>();
which brings in the bullshit complication that some types can't be copied (i.e. the above syntax requires the type to be copyable, even though the copy may be elided).
Anyway, short of using dynamic memory allocation, I've found one way of making a zero-initialized automatic object... but it has to be const:
template<class T> void Monkey()
{
    T const &poo = T();
}
Now Monkey<std::stringstream>() will work, and the "temporary" bound to the reference lasts for the length of the function... but it has to be const.
Okay anyway... can anyone think of a way of doing this to achieve a *non-
const* object? So far all I've got is:
template<class T> void Monkey()
{
    T &poo = *new T();
    delete &poo;
}
...and all of this because of the "function declaration vs. object definition" bullshit! If only the "extern" keyword were mandatory... or if the "class" keyword could specify that it's an object definition...
While I'm on the subject... is there actually anything "wrong" with using
dynamic memory allocation? I myself don't know assembly language, so I can't
see what's going on under the hood... but could you tell me, do the
following two programs result in the same assembly code?
Program 1:
int main()
{
    int k = 5;
    k -= 2;
}
Program 2:
int main()
{
    int &k = *new int(5);
    k -= 2;
    delete &k;
}
Is there anything inherently "efficient" or "...bad" about using dynamic memory allocation...?
-JKop