Teaching new tricks to an old dog (C++ --> Ada)

  • Thread starter Turamnvia Suouriviaskimatta

REH

This question can be rephrased to: "Why is memcpy() needed?".



--
No, it is not about whether memcpy() is a necessary function. You purposely
oversimplified what I did. I don't want to appear rude, but your whole
statement about moving a double byte-by-byte to avoid alignment issues is
just nonsense. You didn't take portability into account when you wrote the
snippets of code using placement new, and now you wish to backpedal. You
really are not going to convince me that you intended to write all your
classes to copy all the bytes of variables with alignment needs greater than
one, instead of just fixing the alignment. Do you really expect me to believe
you would write:

#include <cstring>   // std::memcpy
#include <new>       // placement new

struct foo {
    foo(double d) : m_dbl(d) {}
    double get() const
    {
        double tmp;
        std::memcpy(&tmp, &m_dbl, sizeof(tmp));
        return tmp;
    }

private:
    double m_dbl;
};

char buf[sizeof(foo)];

foo* pf = new(buf) foo(1.0);
double d = pf->get();


instead of:

struct foo {
    foo(double d) : m_dbl(d) {}
    double get() const { return m_dbl; }

private:
    double m_dbl;
};

union data_type {
    double align;              // member forces suitable alignment for double
    char buf[sizeof(foo)];
} data;

foo* pf = new(data.buf) foo(1.0);
double d = pf->get();
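
As an aside (a minimal sketch assuming a C++11 compiler, not from the
original post), alignas gives the same correctly aligned buffer without the
union trick:

#include <new>      // placement new

struct foo {
    foo(double d) : m_dbl(d) {}
    double get() const { return m_dbl; }
private:
    double m_dbl;
};

int main()
{
    alignas(foo) unsigned char buf[sizeof(foo)];  // storage aligned for foo

    foo* pf = new (buf) foo(1.0);   // construct in the aligned buffer
    double d = pf->get();
    pf->~foo();                     // placement new needs an explicit destructor call
    return d == 1.0 ? 0 : 1;
}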
 

jayessay

Hyman Rosen said:
Can Lisp macros manipulate the text of the macro arguments?

They don't manipulate text at all, but they _do_ manipulate the
_un_evaluated forms passed to them.

Here is a "top 5" list of why Lisp macros aren't anything like C
preprocessor stuff. At the bottom I give an example.



Top five reasons why Lisp macros are not similar to the C/C++
preprocessor in any meaningful sense.


Number 5:

The/a cpp is a, well, preprocessor. Typically a separate program
even. While you could dilute the meaning here to the point where a
compiler is a preprocessor for a linker, which is a preprocessor for a
loader, which is a preprocessor for a cpu, etc., the typical meaning is
that such things perform minor transformations on a block of input and
hand the result to the primary processor.

Lisp macros are tightly bound up with both the definition of the total
language and any implementation of it. Also, expansion time is
interwoven with compilation, interpretation, or explicit evaluation.
They are available and can be defined during any phase: compile, load,
and execute/runtime.


Number 4:

The/a cpp is not Turing complete.

Lisp macros are Turing complete.


Number 3:

The/a cpp processes text, typically from a text stream such as (most
typically) a text file.

Lisp macros process _data structures_. In particular the syntax trees
output from the parser. Text is long gone before macros are involved
at all.


Number 2:

The/a cpp doesn't know or have any access to the full language, user
defined functions, or its environment.

Lisp macros are just Lisp functions which can make use of the entire
language, and any other user defined functions and macros in libraries
or the running image. They also have access to their lexical
environment information (such as type and profile information) as well
as any public aspects of the global system/application.


And the Number 1 reason why Lisp macros are not similar to the C/C++
preprocessor in any meaningful sense:

Jerry Coffin thinks they are.


;-) and just to make sure :) :)


-----------------------------

The astute reader will have noticed that it is C++ _templates_ which
are slightly similar to Lisp macros. And they would be right.
Templates are significantly less expressive and mind-numbingly painful
to use in comparison, but they are Turing complete and you can do some
interesting things with them.

For example, while it might take several pages of code, I think it is
_physically_ possible (i.e., humanly doable, not just theoretically
doable) to achieve the (somewhat contrived) example below with
templates (a rough C++ sketch is appended after the Lisp version).
Note: I don't see how you can do this _in_ Ada at all.


Allow a user/programmer to define a dictionary like object while
ensuring at compile time that the size of the table implementing it is
a prime number.



(defun small-primep (n)
  (and (< 0 n most-positive-fixnum)
       (the-primes :from n :to n)))


(deftype prime () `(and integer (satisfies small-primep)))


(defmacro define-dictionary
    ((name &key (size 101) (test #'eql) (hash #'sxhash)) &body body)
  (assert (symbolp name) nil
          "Dictionary names must be symbols. ~A is of type ~A"
          name (type-of name))
  (assert (typep size 'prime) nil
          (prime-failure-message size name))
  `(defvar ,name
     ,(if body
          `(let ((it (make-hash-table
                      :test ,test :size ,size :hash-function ,hash)))
             ,@(mapcar #'(lambda (p)
                           `(setf (gethash ,(car p) it) ,(cdr p)))
                       body)
             it)
          `(make-hash-table
            :test ,test :size ,size :hash-function ,hash))))


(defun prime-failure-message (n name)
  (concatenate
   'string
   (format nil "Dictionary ~A: " name)
   (cond
     ((not (integerp n))
      (format nil "~A of type ~A is not a number" n (type-of n)))
     ((< n 0)
      (format nil "~A < 0; Dictionary sizes must be > 0 and prime numbers" n))
     ((> n most-positive-fixnum)
      (format nil "Dictionary sizes must be prime numbers < ~A"
              most-positive-fixnum))
     (t
      (format nil "~A is not prime; Next prime larger than ~A is ~A"
              n n (first (the-primes :from n :to (+ 1000 n))))))))


At _compile_ time

(define-dictionary (foo :size 20))

produces:

Error: Dictionary FOO: 20 is not prime; Next prime larger than 20 is 23


(define-dictionary (foo :size 23)
  (1 . "one")
  (2 . "two")
  (3 . "three"))

Compiles, passes checks and produces the dictionary with the given entries.
This generates the following code:

(DEFVAR FOO
  (LET ((IT (MAKE-HASH-TABLE
             :TEST #<Function EQL>
             :SIZE 23
             :HASH-FUNCTION #<Function SXHASH>)))
    (SETF (GETHASH 1 IT) "one")
    (SETF (GETHASH 2 IT) "two")
    (SETF (GETHASH 3 IT) "three")
    IT))
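
As an aside, a rough C++ sketch of the compile-time prime check described
above (hypothetical code, assuming nothing beyond standard templates; the
names is_prime, prime_size_required, and Dictionary are made up for
illustration):

// Trial division at compile time: does any D in [2, N/2] divide N?
template<unsigned N, unsigned D>
struct is_prime_helper {
    static const bool value = (N % D != 0) && is_prime_helper<N, D - 1>::value;
};

template<unsigned N>
struct is_prime_helper<N, 1> {          // ran out of candidate divisors
    static const bool value = true;
};

template<unsigned N>
struct is_prime {
    static const bool value = is_prime_helper<N, N / 2>::value;
};

template<> struct is_prime<0> { static const bool value = false; };
template<> struct is_prime<1> { static const bool value = false; };

// Instantiating Dictionary<20> fails to compile (prime_size_required<false>
// is never defined); Dictionary<23> compiles.
template<bool SizeIsPrime> struct prime_size_required;   // primary left undefined
template<>                 struct prime_size_required<true> {};

template<unsigned Size>
class Dictionary : prime_size_required<is_prime<Size>::value> {
    // ... table of Size buckets ...
};

int main()
{
    Dictionary<23> ok;          // compiles: 23 is prime
    // Dictionary<20> bad;      // error: incomplete base class
    (void)ok;
}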


/Jon
 

Ioannis Vranos

REH said:
No, it is not about whether memcpy() is a necessary function. You purposely
oversimplified what I did. I don't want to appear rude, but your whole
statement about moving a double byte-by-byte to avoid alignment issues is
just nonsense.


My point was that we can create objects in a low level way like that.


You didn't take portability into account when you wrote the
snippets of code using placement new, and now you wish to backpedal.


I considered them to be portable (apart from the mistake of not calling the destructors explicitly).


You
really are not going to convince me that you intended to write all your
classes to copy all the bytes of variables with alignment needs greater than
one, instead of just fixing the alignment.

Do you really expect me to believe
you would write:

struct foo {
    foo(double d) : m_dbl(d) {}
    double get() const
    {
        double tmp;
        std::memcpy(&tmp, &m_dbl, sizeof(tmp));
        return tmp;
    }

private:
    double m_dbl;
};

char buf[sizeof(foo)];

foo* pf = new(buf) foo(1.0);
double d = pf->get();


instead of:

struct foo {
    foo(double d) : m_dbl(d) {}
    double get() const { return m_dbl; }

private:
    double m_dbl;
};

union data_type {
    double align;
    char buf[sizeof(foo)];
} data;

foo* pf = new(data.buf) foo(1.0);
double d = pf->get();



Yes, the first one is also always guaranteed to work. In summary, the standard guarantees
that you can read *any* POD type as a sequence of chars/unsigned chars and copy them to a
new char/unsigned char array, and you will have an exact, working copy of the original
*POD* type.

For non-POD types, you can read them only as sequences of unsigned chars, and if you copy
them it is *not* guaranteed that you will have either an exact or a working copy of the
original.

Alignment is not mentioned anywhere in this context.
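
As an aside, a minimal sketch of the byte-copy guarantee being described
(hypothetical example, not from the original post): the bytes of a POD object
are copied into an unsigned char array and back into another object of the
same type, which then holds the original value.

#include <cstring>

struct Pod {        // a POD: public data members, no constructors
    double d;
    int    i;
};

int main()
{
    Pod a = { 1.5, 2 };
    unsigned char bytes[sizeof(Pod)];

    std::memcpy(bytes, &a, sizeof a);   // POD object -> byte array
    Pod b;
    std::memcpy(&b, bytes, sizeof b);   // byte array -> another Pod

    return (b.d == 1.5 && b.i == 2) ? 0 : 1;   // b is an exact, working copy
}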


Now this means that, strictly speaking, my code with the class having the defined
constructor and destructor is not portable since it is a non-POD type, and thus I should
not have used it in clc++, since strict theoretical portability is important here.


However the following code *is* portable:


#include <iostream>
#include <cstring>

class SomeClass
{
public:

    double d;
    int i;
    float f;
    long l;
};


int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    memcpy(array, &obj, sizeof(obj));

    SomeClass *p = reinterpret_cast<SomeClass *>(array);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>


So since this is *guaranteed* to be portable, why isn't the following?


#include <iostream>
#include <cstring>
#include <new>       // for placement new

class SomeClass
{
public:

    double d;
    int i;
    float f;
    long l;
};


int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    SomeClass *p = new(array) SomeClass(obj);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>
 

jayessay

You use _reader_ macros for that sort of thing.

Lisp macros cannot produce partial syntax either, whereas C/C++
macros can (e.g. one macro can create a '{' and another separate
invocation can create the closing '}').

Again, that would be the sort of thing you would use reader macros for.
It's worth noting again that Lisp macros (the kind you generally mean
when unqualified) do not work on text; they work on the syntax trees
passed to them.

Jerry is correct to say that they do similar things--they parameterize
code with code.

But that is not really what they do. It is _one_ thing you can _use_
them to do.

The main differences is that in Lisp (etc.) their

I think the main differences are the ones I listed.


/Jon
 

REH

However the following code *is* portable:


#include <iostream>
#include <cstring>

class SomeClass
{
public:

    double d;
    int i;
    float f;
    long l;
};


int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    memcpy(array, &obj, sizeof(obj));

    SomeClass *p = reinterpret_cast<SomeClass *>(array);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>


So since this is *guaranteed* to be portable, why isn't the following?


#include <iostream>
#include <cstring>
#include <new>       // for placement new

class SomeClass
{
public:

    double d;
    int i;
    float f;
    long l;
};


int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    SomeClass *p = new(array) SomeClass(obj);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>
No, NEITHER is portable! They both have alignment issues! Both exhibit
undefined behavior. Show me where in the standard it says you may cast a
pointer to a char array to a pointer of any other type (other than another
signed/unsigned char*) and access it. Do this on a system that cannot
handle misaligned accesses, and your program will most likely crash!
 

Ludovic Brenta

Tapio said:
If I replace I < J; with null; the result is:
gcc -c -gnatg temp.adb
temp.adb:2:04: warning: "I" is not modified, could be declared constant
temp.adb:2:04: warning: variable "I" is not referenced
temp.adb:3:04: warning: "J" is not modified, could be declared constant
temp.adb:3:04: warning: variable "J" is not referenced
gnatmake: "temp.adb" compilation error

This is an excellent example of the power of GNAT. I think this would
deserve more publicity in future presentations of the language and of
GNAT.
 

Randy Brukardt

....
The C++ way of catching all exceptions of a class and its derived
classes can lead to confusion. One can have multiple exception
handlers for the same exception and it may not be immediately obvious
to the reader which one is called. I see this as a maintenance
problem.

If Ada had some sort of exception classes, there certainly would be an
overlapping-handler check. Having two different handlers for any possible
exception would be prohibited (it certainly is now). So I don't find this
much of a reason for not having this capability. The reasons for not having it
are finalization, memory allocation, and compatibility with Ada 83 & Ada 95.
(And mainly that the person working on it ran out of energy for it, and no
one else picked it up.)

Randy.
 

Chad R. Meiners

Jerry said:
And so? What part of "it seems likely" don't you understand?

I was commenting that the slides you quoted were irrelevant to your
comment:
It seems likely to me that if
they were using Ada for the hand-written code, they'd generate Ada
as well.

I would appreciate it if you did not resort to begging the question when
responding.
 

Chad R. Meiners

Jerry said:
If I ever teach a class,

If you teach a class the way you argue in newsgroups, your students
would not approve. You would frustrate them because you would be
putting words in their mouths and insulting them instead of listening to
what they are saying, and you would be ignoring arguments that show you
are in error instead of admitting error and going back to find the truth.
anybody who tries to abuse Turing completeness like this

I haven't abused the definition of Turing completeness. I was making
a comment on how your statement


didn't address what jayessay was talking about.
 

Ioannis Vranos

REH said:
#include <iostream>
#include <cstring>

class SomeClass
{
public:

    double d;
    int i;
    float f;
    long l;
};


int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    memcpy(array, &obj, sizeof(obj));

    SomeClass *p = reinterpret_cast<SomeClass *>(array);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>


So since this is *guaranteed* to be portable, why isn't the following?


#include <iostream>
#include <cstring>
#include <new>       // for placement new

class SomeClass
{
public:

    double d;
    int i;
    float f;
    long l;
};


int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    SomeClass *p = new(array) SomeClass(obj);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>

No, NEITHER is portable! They both have alignment issues! Both exhibit
undefined behavior. Show me where in the standard it says you may cast a
pointer to a char array to a pointer of any other type (other than another
signed/unsigned char*) and access it. Do this on a system that cannot
handle misaligned accesses, and your program will most likely crash!



Perhaps you are right. If you have the standard, have a look at 3.9-2, 3.9-4, and 3.9-5,
which I think agree with you.
 

Martin Krischik

I looked at Ada Array and immediately found errors
(e.g., in 'typename TheIndex const First;' the 'typename'
keyword is not permitted).

I know the code is full of problem zones and I think I fixed that in
"Ada::Range" already. The sad thing is that it did compile with one
compiler and it won't with the next.

The last fix I did was "throw ( ... )" - IBM accepts it. MS even warns you
about missing features if you try anything else.

The GNU compiler immediately moaned about it. I checked the ISO standard and,
hey, "throw ( ... )" isn't part of the ISO standard. You never stop
learning!

So I changed to "throw ( std::exception )", knowing that this will result in
warnings in MS-C++ (and splint and pc-lint for that matter).
I suggest you try out your
code on Comeau, God's Own C++ Compiler (they ought to
trademark that phrase :) It's available on the web at
<http://www.comeaucomputing.com/tryitout/>.

I'll try that after I've got a working GNU version.

Martin
 

Alf P. Steinbach

* Martin Krischik:
The last fix I did was "throw ( ... )" - IBM accepts it. MS even warns you
about missing features if you try anything else.

"throw (...)" is not standard C++ syntax.

The GNU compiler immediately moaned about it. I checked the ISO standard and,
hey, "throw ( ... )" isn't part of the ISO standard. You never stop
learning!

So I changed to "throw ( std::exception )", knowing that this will result in
warnings in MS-C++ (and splint and pc-lint for that matter).

Why not "throw std::exception();", or, if you're rethrowing, "throw;".

Btw., could you Ada folks please stop spouting new disconnected threads or
else at least indicate wherever the hell your postings originated, perhaps
also with a note mentioning the crossposting.
 

Alex R. Mosteo

Pascal said:
For a domain problem where you have to create an area centered on (0,0),
for example. What about a vector representing altitude, the sub-zero values
being under the water? Just some examples; I bet you'll be able to think of a
lot more :)

One classical example is solving the eight queens problem. There's an
elegant solution using arrays whose index ranges start at various lower bounds.
 

Paul Mensonides

jayessay said:
They don't manipulate text at all, but they _do_ manipulate the
_un_evaluated forms passed to them.

Here is a "top 5" list of why Lisp macros aren't anything like C
preprocessor stuff. At the bottom I give an example.

Nobody said that the mechanisms are equivalent. Lisp macros are like C/C++
macros in a pretty major way: they parameterize code with code.
Number 5:

The/a cpp is a, well, preprocessor. Typically a separate program
even. While you could dilute the meaning here to the point where a
compiler is a preprocessor for a linker, which is a preprocessor for a
loader, which is a preprocessor for a cpu, etc., the typical meaning is
that such things perform minor transformations on a block of input and
hand the result to the primary processor.

Lisp macros are tightly bound up with both the definition of the total
language and any implementation of it. Also, expansion time is
interwoven with compilation, interpretation, or explicit evaluation.
They are available and can be defined during any phase: compile, load,
and execute/runtime.

So what? Where and when they can be evaluated doesn't mean that the mechanisms
are totally dissimilar.
Number 4:

The/a cpp is not Turing complete.

Yes it is.
Number 3:

The/a cpp processes text, typically from a text stream such as (most
typically) a text file.

No, a C or C++ preprocessor processes tokens, not text.
Lisp macros process _data structures_. In particular the syntax trees
output from the parser. Text is long gone before macros are involved
at all.

So what? A syntax tree is code, as is source code text.
Number 2:

The/a cpp doesn't know or have any access to the full language, user
defined functions, or its environment.

So what? This is the same thing as your fifth reason. When or where it can be
evaluated doesn't change what it does.
Lisp macros are just Lisp functions which can make use of the entire
language, and any other user defined functions and macros in libraries
or the running image. They also have access to their lexical
environment information (such as type and profile information) as well
as any public aspects of the global system/application.

Again, so what? There is no doubt that Lisp macros are superior to C/C++ macros
in most categories. However, that doesn't change what they fundamentally do:
parameterize code with code.
-----------------------------

The astute reader will have noticed that it is C++ _templates_ which
are slightly similar to Lisp macros. And they would be right.
Templates are significantly less expressive and mind-numbingly painful
to use in comparison, but they are Turing complete and you can do some
interesting things with them.

C++ templates are similar to Lisp macros also, but C/C++ macros are much closer.
The template mechanism parameterizes code with types (primarily) and only
indirectly parameterizes code with code.

To reiterate: nobody has said that C/C++ macros are as powerful as Lisp or
Scheme macros. However, despite everything you say, Lisp and Scheme macros
parameterize code with code--regardless of when or where that can happen. That is
the same capability that C/C++ macros have. Does Lisp do it better? Yes. Does
Scheme do it better still? Yes. Neither of those answers precludes the
similarity in what the mechanisms do, nor do any of your reasons.
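
As an aside, a minimal sketch of what "parameterize code with code" means for
the C/C++ preprocessor (hypothetical example, not from the thread; the
DEFINE_GETTER macro is made up for illustration):

// A function-like macro whose arguments are spliced into new code.
#define DEFINE_GETTER(type, name) \
    type get_##name() const { return m_##name; }

struct point {
    point(int x, int y) : m_x(x), m_y(y) {}
    DEFINE_GETTER(int, x)   // expands to: int get_x() const { return m_x; }
    DEFINE_GETTER(int, y)   // expands to: int get_y() const { return m_y; }
private:
    int m_x, m_y;
};

int main()
{
    point p(1, 2);
    return (p.get_x() + p.get_y() == 3) ? 0 : 1;
}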

Regards,
Paul Mensonides
 

Paul Mensonides

jayessay said:
Again, that would be the sort of thing you would use reader macros.
It's worth noting again that Lisp macros (the kind you generally mean
when unqualified) do not work on text, they work on the syntax trees
passed to them.

Neither do macros in C or C++. They operate on tokens. In any case, the
difference is meaningless. The only thing that matters is what the semantics
are (such as name binding). Lisp macros are easily inferior to Scheme macros in
that sense.
But that is not really what they do. It is _one_ thing you can _use_
them to do.

That is exactly what they do. It is irrelevant whether that code is in the form
of a syntax tree or not. The kind of thing that you seem to be referring to is
what Haskell does instead.
I think the main differences are the ones I listed.

Obviously.

Regards,
Paul Mensonides
 

Martin Krischik

Alf said:
* Martin Krischik:
"throw (...)" is not standard C++ syntax.

I did say that a few lines later. It's actually down there in the quote:
Why not "throw std::exception();", or, if you're rethrowing, "throw;".

You misunderstood. I am speaking about:

void f()
    throw (std::exception)
{
    // ...
}

as stated in ISO/IEC 14882 (15.4). I know it's one of the C++ features which
are seldom used, and using it is an uphill struggle - just like using
const was when it first appeared. But that never stops me.
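
As an aside, a minimal sketch of the distinction at issue here (hypothetical
example, not Martin's code): a throw *expression* raises an exception, while
the exception *specification* of 15.4 constrains what a function may let
escape.

#include <exception>
#include <stdexcept>

// Exception specification (15.4): f promises to throw only std::exception
// or classes derived from it.  (Deprecated in C++11, removed in C++17.)
void f() throw (std::exception)
{
    // Throw expression: raises a std::runtime_error, which derives from
    // std::exception, so the specification is honoured.
    throw std::runtime_error("not yet implemented");
}

int main()
{
    try { f(); } catch (const std::exception&) { /* handled */ }
}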

With Regards

Martin
 

Kurt Stutsman

Ioannis said:
int main()
{
    using namespace std;

    unsigned char array[sizeof(SomeClass)];

    SomeClass obj = {1, 2, 3, 4};

    SomeClass *p = new(array) SomeClass(obj);

    cout << p->d << " " << p->i << " " << p->f << " " << p->l << "\n";
}


C:\c>temp
1 2 3 4

C:\c>

No, NEITHER is portable! They both have alignment issues! Both exhibit
undefined behavior. Show me where in the standard it says you may cast a
pointer to a char array to a pointer of any other type (other than another
signed/unsigned char*) and access it. Do this on a system that cannot
handle misaligned accesses, and your program will most likely crash!




Perhaps you are right. If you have the standard, have a look at 3.9-2,
3.9-4, and 3.9-5, which I think agree with you.

I couldn't find anything that guarantees the alignment of an unsigned char
array like that. I don't think there is such a guarantee. But I do know that,
according to 3.7.3.1.2 of the Standard, a pointer returned from operator new
is properly aligned for any type. Also, I think somewhere it says that copying
to and back from an unsigned char array is guaranteed to work (if the
array is large enough). I couldn't find that reference though.
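
As an aside, a minimal sketch of the guarantee cited here (hypothetical
example, not from the original post): raw storage obtained from operator new
is suitably aligned for any object type, so constructing into it with
placement new avoids the alignment problem entirely.

#include <new>

struct SomeClass {
    double d;
    int    i;
    float  f;
    long   l;
};

int main()
{
    void* raw = ::operator new(sizeof(SomeClass));   // aligned for any type
    SomeClass* p = new (raw) SomeClass();            // construct in the raw storage
    p->d = 1; p->i = 2; p->f = 3; p->l = 4;
    p->~SomeClass();                                 // destroy explicitly
    ::operator delete(raw);                          // release the raw storage
    return 0;
}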
 

jayessay

Paul Mensonides said:
Nobody said that the mechanisms are equivalent. Lisp macros are like C/C++
macros in a pretty major way: they parameterize code with code.

No, they are fundamentally code _transformers_, not parametric
substituters.

So what? Where and when they can be evaluated doesn't mean that the
mechanisms are totally dissimilar.

OK, you would then also claim that the preprocessor is basically
similar to the C++ compiler. Fine. I think that makes the term
basically meaningless.

Yes it is.

No, it is not. The only way to get it to act as such is to run it in
phases over its own output.

No, a C or C++ preprocessor processes tokens, not text.

No, the _input_ is _text_. Of course it turns that into _tokens_, but
that is _irrelevant_.

So what? A syntax tree is code, as is source code text.

You can't be serious. OK, so the source code of a C++ program is the
same as the AST produced by the frontend. You have again rendered the
term "syntax tree" basically meaningless.

So what? This is the same thing as your fifth reason. When or
where it can be evaluated doesn't change what it does.

No, it is not the same, and it is telling that you think it is. And it
makes a huge difference. You can write programs with Lisp macros
pretty much like you write any other program. You write them _in_ the
language, not as some adjunct, separate aspect of the language. That
level of expressivity and use of previously defined libraries of
functions is as important as being able to use class libraries and
language intrinsics in typical programs. I don't see how you can
plausibly disagree with this.


Again, so what?

See above. To claim that this is a "so what" is to claim that there
is nothing necessary or particularly useful about being able to use any
language construct and/or library in programming any program. I
suppose you don't realize that that is what you are actually saying
here, because I can't believe you would actually think this.

There is no doubt that Lisp macros are superior to C/C++ macros in
most categories. However, that doesn't change what they
fundamentally do: parameterize code with code.

I'm afraid that saying it's so doesn't make it so. They are code
transformers. Of course you can use them to "parameterize" code, but
that is a single _use_ case.

C++ templates are similar to Lisp macros also, but C/C++ macros are
much closer.

Not in their more fundamental character. Templates are about
metaprogramming - not a very potent example, but metaprogramming
nonetheless. Lisp macros are still _the_ most capable extant example
of metaprogramming.
The template mechanism parameterizes code with types (primarily) and
only indirectly parameterizes code with code.

That's pretty irrelevant.

To reiterate: nobody has said that C/C++ macros are as powerful as
Lisp or Scheme macros. However, despite everything you say, Lisp
and Scheme macros parameterize code with code

And to reiterate, that is not what they primarily do.

better still? Yes. Neither of those answers precludes the
similarity in what the mechanisms do, nor do any of your reasons.

That basically eliminates any meaning in most of the terms:
"preprocessor", "compiler", "metaprogramming", "code transformer".
Fine. But I think that's a pretty extreme thing to do simply to try to
pretend that C/C++ macros are anything like Lisp macros.


/Jon
 
