Syntax for union parameter


Rick C. Hodgin

All of those could be replaced by
#include <stdint.h>
and, if you insist on the short names:
typedef uint64_t u64;
typedef int64_t s64;
/* ... */

Am I to understand that C itself does not even have the defined types as
first class citizens? That's what the _t means, isn't it? That it's not
a native type, but it is some extension provided for by an include file.
[Virtual pause]. I just went and looked at the file found on this page
as an add-on for the missing include file in Visual Studio ... it is
nothing but a bunch of typedefs, and a lot of other things I doubt I'd
ever use.

http://code.google.com/p/msinttypes/

Well ... I did the same thing without knowing about int32_t and so on
because Visual Studio 2008 and earlier do not have that file.
The fixed-size types you want are already in the language. And if an
implementation doesn't define them (say, because there is no unsigned
32-bit type or because it's pre-C99), then you probably won't be able to
define them yourself anyway.

I have brought forward my definition style from older C compilers (before
1999), and I have always had those typedef forms working. I've had to
tweak them in various versions to get them to work, but the resulting code
I had written using s32 and u32 never had to be changed.
There are no float32_t and float64_t types, probably because
floating-point has more characteristics than size (exponent bits,
exponent representation, significand bits, base, and so forth). But
most implementations these days support IEEE floating-point
representations. You can check whether __STDC_IEC_559__ is defined, and
#error out if it isn't.
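
For example, a minimal compile-time guard along those lines (a sketch; it
relies only on the standard-defined macro):

/* Refuse to compile unless the implementation claims IEC 60559 (IEEE 754)
   floating-point support. */
#ifndef __STDC_IEC_559__
#error "IEEE 754 floating point is required by this code"
#endif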

RDC will always support f32, f64, and I've given long consideration to f80
on the x86. However, I also plan to incorporate the DD and QD (double-double
and quad-double) libraries, as well as the MPFR floating-point library, so for
larger precision formats I will probably just use those, incorporating the
native types f128 and f256.

Double-double, Quad-double, and MPFR:

http://crd-legacy.lbl.gov/~dhbailey/mpdist/
http://www.mpfr.org/

Eventually I will write a replacement for MPFR using my own algorithms.
You can have exactly what you want; it's just spelled "int32_t" rather
than "int", and "uint32_t" rather than "unsigned int".

Or I could use s32 and u32 in RDC, or in C using my own typedef'd names.
If your complaint that you can't do this in C is a ploy to irritate
someone into telling you how to do it, you could just *ask* how to do it
in C.

I can't do it in Visual Studio 2008 or earlier without manually doing it
(because Microsoft did not provide those files until Visual Studio 2010),
which is what I've done. VS2008 and earlier do not have the include file,
nor the types which equate to those clunky _t names.

Basically, I did what the C authors did ... which I think was a horrid
solution as well, by the way, because it requires something that should
not be anything I ever need to do in a language in the age of 16-bit,
32-bit, and 64-bit computers running the Windows operating system, which
is what Visual Studio's C compilers run on. Those types of variables
should all be native types ... but C doesn't provide them, so they were
never included until C99, and even then it was only a typedef (same as
what I did).

I say ... pathetic that C does not natively define these types as first
class citizens (without the _t).

Best regards,
Rick C. Hodgin
 

James Kuyper

Wrong. Developers need to know if on this computer the int size is 16-bits,
or 32-bits, or 64-bits, or a gazillion bits,

Why? I don't need to know any of that information to write my code. When
I need exactly 32 bits, I don't use int, I use int32_t. I only use int
when I have to because of an interface requirement (such as fseek()), or
because I need the equivalent of int_fast16_t. Similarly, short is
roughly equivalent to int_least16_t, and long is roughly equivalent to
int_fast32_t. When the requirements for those types meet my needs, I'll
use the C90 type name, because it requires less typing; in all other
cases, I'll use the <stdint.h> typedefs (if I can).
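
To illustrate the distinction being drawn here (a sketch; the variable names
are invented, and a hosted C99 <stdint.h> is assumed):

#include <stdint.h>

uint32_t      crc;     /* exactly 32 bits - e.g. matching a wire format     */
int_least16_t count;   /* at least 16 bits - smallest type that suffices    */
int_fast32_t  index;   /* at least 32 bits - whatever is fastest here       */
int           status;  /* only where an interface such as fseek() wants it  */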

When my code needs to know the size in bytes, it uses sizeof(int); when
it needs to know the size in bits, I multiply that by CHAR_BIT - but
that's what my program knows - there's no need for me to know it. I
happen to know that int is 32 bits on some of the machines where my code
runs, and 64 bits on some of the others, but I don't make use of that
information in any way while writing my code. It's correctly written to
have the same behavior in either case (it would also have the same
behavior if int were 48 bits, or even 39 bits).
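
In code, that amounts to something like this (a minimal sketch; note that %zu
needs a C99 printf):

#include <limits.h>   /* CHAR_BIT */
#include <stdio.h>

int main(void)
{
    /* The program queries the width; the programmer never hard-codes it. */
    printf("int: %zu bytes, %zu bits\n",
           sizeof(int), sizeof(int) * CHAR_BIT);
    return 0;
}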

... which means they need to know
something other than things related to C alone, and even the compiler alone,
but they need to know about the mechanics of the machine's underlying
architecture ... and that's just wrong to impose upon every developer in
that way.

I know almost nothing about the underlying architecture of the machines
my program runs on. That's not quite true - twenty years ago, I knew a
lot about a remote ancestor of those machines, because I had to write
some assembly code targeting that ancestor. I wouldn't be surprised to
find that much of what I knew at that time has remained valid - but
neither would I be surprised to learn that there have been major changes
- but I don't need to know what those changes are in order to write my
programs, and in fact I don't know what those changes have been. Yet I'm
happily writing programs without that knowledge, and they work quite
well despite that ignorance. That's because the compiler writers know
the details about how the machine works, saving me from having to know
them. That's good, because the coding standards I work under prohibit me
from writing code that depends upon such details.
 

Rick C. Hodgin

That's what I meant by the dangers of over-specification. You did in
fact specify the mechanics, even though you had not intended to do so.

I used that phrase because it conveys what we all use today. It's
the only form I'm aware of, actually. When I make changes in an IDE for
certain options, for example, they still relate exactly to what's issued
on the command line as a real switch.

In my specification I will probably still refer to it as a command line
switch, since there is at some point a command that's invoked (to compile),
and there are switches which are provided to turn things on or off.
The authors of the C standard very emphatically disagreed with you on
that point.

I know.
That, on the other hand, they agree with. However, another key part of
the standard is that it should also clearly specify what you can NOT
rely upon. Ignore that part of the standard at your own risk.

They allow syntax to be compiled which operates differently depending on
the compiler author's whims. I believe there was an example earlier like
this, where the results of the operation were dependent upon their order:

int a=5, b;
b = foo1(&a) + foo2(&a);

Will foo1() or foo2() be called first?
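
Either may be called first - the standard deliberately leaves the order
unspecified. When the order matters, the portable fix is to sequence the
calls yourself, as in this sketch:

int a = 5, b, t1, t2;
t1 = foo1(&a);    /* guaranteed to run first  */
t2 = foo2(&a);    /* guaranteed to run second */
b  = t1 + t2;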
Because the people who wrote the standard would have considered it to be
poorly written if it had specified all of the things that you want it to
specify.

Yeah. We're in different camps. I offer what I offer. People will either
see value in it or not. I'm content either way because I am writing what
I am writing for God, using the skills He gave me, so that I can give back
to those people on this Earth who will use these tools.
The authors of the standard, on the other hand, considered it absolutely
essential that implementations be allowed a certain well-defined amount
of freedom in how they implemented the language. They wanted it to be
possible to efficiently implement C on a wide variety of platforms,
including ones where it would have been difficult bordering on
impossible to efficiently implement a more tightly specified language,
when that specification had been written with a very different kind of
platform in mind. As a result of making that decision, C is one of the
most widely implemented languages in the world. One of the first things
implemented on almost any new platform is a C compiler. Many of the
things implemented later are compilers, written in C, for languages with
less flexible specifications.

The entire world is going to follow the anti-Christ when he comes (all
except those who are being saved). That doesn't make the vast majority
of people who follow evil right ... it just makes them followers.

Following a correct path is a real thing in and of itself. And having
explicit data types as first class citizens, and being able to rely upon
an order of computed operation regardless of platform or compiler version,
is a correct path.

Best regards,
Rick C. Hodgin
 

James Kuyper

On 02/08/2014 12:18 PM, Rick C. Hodgin wrote:
....
I'm writing C code. I don't care about how difficult it is for the hardware
to carry out my instructions. I want them carried out as I've indicated,

Then you're using the wrong language. C doesn't allow you to specify how
your instructions are carried out; it only allows you to specify the
desired result. It's deliberately designed to let implementations
targeting hardware where one way of doing things is too difficult, to
use a different way instead. All that matters is whether the result
meets the requirement of the standard, not how that result was achieved.
 

Rick C. Hodgin

As has been pointed out to you multiple times, C has the answer - the
<stdint.h> types such as int32_t, uint16_t, etc., handle such cases
perfectly. The work I do is often dependent on these types.

Visual Studio 2008 and earlier does not include that file. However, I
was able to find a version online that someone wrote to overcome that
shortcoming in the Microsoft tool chain. I discovered it does exactly
what I do in my manual version. And mine has the added advantage of
not requiring another #include file to parse at compile time, making
turnaround time faster, and the generated program database smaller.
Minor features to be sure, but real ones nonetheless.
Just because C has types whose sizes are not fixed, does not mean it
does not also have fixed size types!

C doesn't have fixed types. Not even C99. It has an add-on
typedef hack which allows a person to use "fixed-size types" within its
variable-sized arsenal. But as someone else pointed out, there are some
platforms which may not be able to implement those types because they are
not supported on the hardware, or in the compiler. In such a case, because
C does not require that 64-bit forms be supported, for example, now I'm
writing manual functions to overcome what C is lacking.
"cpp" is the file extension for C++, not for C. Your compiler will
therefore treat it as C++. For many purposes, C is a subset of C++ -
but there are plenty of exceptions. How can you talk so arrogantly
about C when you don't even know what language you are using?

I use the C++ compiler for my C code because it has syntax relaxations.
I do not use C++ features, apart from those relaxations, and perhaps a
handful of features that are provided in C++ (such as anonymous unions).
It has been a /very/ long time since this was necessary - even before
C99 was in common use, a great many compilers supplied a <stdint.h>
header with fixed size types. But before <stdint.h>, it was very common
to include a platform-specific header so that you had things like fixed
integer types, boolean types, etc., easily available. Since you are
using Windows, you can include <WinDef.h> in your code to get these
platform-specific definitions. Or you can use <stdint.h> - even
though MSVC++ is not intended for C development (it is limited to C90
standards - 24 years out of date), it still has a <stdint.h> header.

I began working on my project in the mid-1990s. I have worked on it in
varying degrees since then, but I have done a major push since 2009.

stdint.h does not come with Visual Studio 2008, the latest version I
personally use. And WinDef.h does not have facilities to access types
by size, only by name.
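
For completeness, a fallback along those lines (a sketch, not the exact file
mentioned above; it assumes the MSVC-specific __intN extension keywords, and
that _MSC_VER 1600 marks Visual Studio 2010):

#if defined(_MSC_VER) && _MSC_VER < 1600   /* older than Visual Studio 2010 */
typedef signed   __int8   int8_t;
typedef signed   __int16  int16_t;
typedef signed   __int32  int32_t;
typedef signed   __int64  int64_t;
typedef unsigned __int8   uint8_t;
typedef unsigned __int16  uint16_t;
typedef unsigned __int32  uint32_t;
typedef unsigned __int64  uint64_t;
#else
#include <stdint.h>
#endif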

-----
As for the rest of your post, you are very insulting to me and I do not
desire to communicate with you further.

Best regards,
Rick C. Hodgin
 

Keith Thompson

Rick C. Hodgin said:
Am I to understand that C itself does not even have the defined types as
first class citizens?
Yes.

That's what the _t means, isn't it?

No, the _t is merely a suffix that the designers of <stdint.h> chose to
use. (C++ has at least one built-in type whose name is a keyword ending
in "_t".)
That it's not
a native type, but it is some extension provided for by an include file.

It's not a native type, but neither is it an "extension"; it's a
standard feature of ISO C that happens to be provided by a header rather
than by the compiler.

Actually, let me rephrase that. int32_t is an *alias* for some native
(compiler-implemented) type.
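
On a typical 32-bit implementation, the relevant part of <stdint.h> amounts
to little more than this sketch (the exact underlying types vary by platform):

typedef signed char  int8_t;
typedef short        int16_t;
typedef int          int32_t;
typedef long long    int64_t;    /* or long, on LP64 platforms */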
[Virtual pause]. I just went and looked at the file found on this page
as an add-on for the missing include file in Visual Studio ... it is
nothing but a bunch of typedefs, and a lot of other things I doubt I'd
ever use.

http://code.google.com/p/msinttypes/

Well ... I did the same thing without knowing about int32_t and so on
because Visual Studio 2008 and earlier do not have that file.

That was certainly a reasonable approach at the time (though since
<stdint.h> was added to the standard in 1999, and had been discussed
before that, a little research might have saved you some work). If
you're still using Visual Studio 2008, then I suppose you'll need to use
something that provides features similar to those provided by
<stdint.h>. If not, you can probably just use <stdint.h> exclusively
and not worry about it. (Or you can continue using your old solution;
revised versions of the C standard place a strong emphasis on backward
compatibility.)
I have brought forward my definition style from older C compilers (before
1999), and I have always had those typedef forms working. I've had to
tweak them in various versions to get it to work, but the resulting code
I had written using s32 and u32 never had to be changed.

And if you use <stdint.h>, you'll never have to do that tweaking again.
Your C implementation will already have done it for you.

[...]
Or I could use s32 and u32 in RDC, or in C using my own typedef'd names.


I can't do it in Visual Studio 2008 or earlier without manually doing it
(because Microsoft did not provide those files until Visual Studio 2010),
which is what I've done. VS2008 and earlier do not have the include file,
nor the types which equate to those clunky _t names.

Basically, I did what the C authors did ... which I think was a horrid
solution as well, by the way, because it requires something that should
not be anything I ever need to do in a language in the age of 16-bit,
32-bit, and 64-bit computers running the Windows operating system, which
is what Visual Studio's C compilers run on. Those types of variables
should all be native types ... but C doesn't provide them, so they were
never included until C99, and even then it was only a typedef (same as
what I did).

I say ... pathetic that C does not natively define these types as first
class citizens (without the _t).

It doesn't matter whether int32_t is implemented directly by the
compiler or indirectly as a typedef in a standard header. I'd ask why
you have this phobia about typedefs, but I'm afraid you'd tell me.
 

Rick C. Hodgin

Then you're using the wrong language. C doesn't allow you to specify how
your instructions are carried out; it only allows you to specify the
desired result. It's deliberately designed to let implementations
targeting hardware where one way of doing things is too difficult, to
use a different way instead. All that matters is whether the result
meets the requirement of the standard, not how that result was achieved.

Hence RDC, James.

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

Why? I don't need to know any of that information to write my code. When
I need exactly 32 bits, I don't use int, I use int32_t.

It doesn't exist in Visual Studio 2008 or earlier. I have to manually
download a commensurate file and install it (and test it, because you
never know with C for sure until you test it, right?), which is basically
what I did with my own typedefs. I just replaced the C99 clunky naming
convention typedefs with my own short, elegant forms.
I only use int
when I have to because of an interface requirement (such as fseek()), or
because I need the equivalent of int_fast16_t. Similarly, short is
roughly equivalent to int_least16_t, and long is roughly equivalent to
int_fast32_t. When the requirements for those types meet my needs, I'll
use the C90 type name, because it requires less typing; in all other
cases, I'll use the <stdint.h> typedefs (if I can).

I prefer to use a certain type, and to know what it is I'm using without
question. If some bit of code fails for some reason, then I can see that
in my code it is correct, but that the function I'm using needs a different
size.
When my code needs to know the size in bytes, it uses sizeof(int); when
it needs to know the size in bits, I multiply that by CHAR_BIT - but
that's what my program knows - there's no need for me to know it.

I think the need for sizeof(int) is representative of the things that
are wrong with C.
I happen to know that int is 32 bits on some of the machines where my
code runs, and 64 bits on some of the others, but I don't make use of
that information in any way while writing my code. It's correctly
written to have the same behavior in either case (it would also have
the same behavior if int were 48 bits, or even 39 bits).

You will know with RDC, though I doubt you'll ever use it. I'm reminded
of a quote from the movie "Switching Channels" about a man who worked on
the newspaper, addressing the program manager of a television news show:

"I only keep a TV around to entertain the cat."

Best regards,
Rick C. Hodgin
 

Rick C. Hodgin

It doesn't matter whether int32_t is implemented directly by the
compiler or indirectly as a typedef in a standard header. I'd ask why
you have this phobia about typedefs, but I'm afraid you'd tell me.

I think typedefs are brilliant. I include them in RDC. They have uses
galore and I use them directly and indirectly in my code.

I think it's beyond lunacy to typedef variable-sized native types to a
form that then provides explicit-sized types through typedefs when they
should be expressly provided for by the language, and not through the
typedef add-on hack.

Best regards,
Rick C. Hodgin
 

David Brown

The standard is violated purposefully by the inclusion of some kind of
command line switch which enables the alternate behavior.


Yes you can. The standard says "int is always 32-bit" and the extension
says "when enabled, int is no longer 32-bit, but is now 16-bit."

The standard never changes. The extensions override the standard. Get it?

Yes, I get it - if you have overrides like this, you do not have a standard.

Sometimes I wonder whether or not English is your mother tongue - you
certainly seem to have different definitions for some words than the
rest of the world.
C's choice is a bad one. It is incorrect. It produces code which remains
in question until tested on a particular implementation because there is
no standard. Code that works perfectly well on 99 compilers might fail
on the 100th because that particular compiler author did something that
the C "standard" allowed through not having a definition, but yet is
inconsistent with other compilers. As such, my code, because I use the
behavior demonstrated through 99 out of 100 compilers, cannot be known
to work on any new compiler.

If the programmer writes code that depends on an assumption not
expressed in the C standards, then he is not writing valid C code. This
is no different from any other language.

For many of the implementation-defined points in the C standards, the
particular choices are fixed by the target platform, which often lets
you rely on some of these assumptions. And often there are ways to
check your assumptions or write code without needing them - compilers
have headers such as <limits.h> so that your code /will/ work with each
compiler.
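
For example, code that assumes an int of at least 32 bits can verify that
assumption at compile time (a sketch using <limits.h>; C11 offers
_Static_assert for the same job):

#include <limits.h>

#if INT_MAX < 2147483647
#error "This code assumes int is at least 32 bits wide"
#endif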

Just because you don't know how to do such programming, does not mean C
does not support it.
It is a variable. And it's a hideous allowance, one that the authors of
C should never have allowed to be introduced ever. The rigid definition
should've been created, and then overrides for specific and unique
platform variances should've been encouraged. I'm even open to a common
set of command line switches which allow them to all enable the same
extensions with the same switch.


Yes. Lunacy. Absolute lunacy.

Are you really trying to say it is "lunacy" to rely on Linux features
when writing Linux-specific code, or Windows features when writing
Windows-specific code?
If all I'm doing is counting up to 1000 then I don't care. How many useful
programs do you know of that only count up to 1000?

The huge majority of numbers in most programs are small.

And if you are writing a program that needs to count 100,000 lines of a
file, then you can be confident it will be running on a system with at
least 32-bit ints, and can therefore happily continue using "int".
The language needs to be affixed so that people can have expectations
across the board, in all cases, of how something should work. And then,
on top of that, every compiler author is able to demonstrate their coding
prowess by introducing a host of extensions which make their C compiler
better than another C compiler.

Marvellous idea - let's make some fixed standards, then tell all the
compiler implementers to break those standards in different ways. We
can even standardise command-line switches used to choose /how/ the
standards are to be broken. Perhaps we can have another command line
switch that lets you break the standards for the command line switches.

Competition would abound. New features
created. Creativity employed. The result would be a better compiler for
all, rather than niche implementations which force someone into a
particular toolset because the man-hours of switching away from compiler
X to compiler Y introduces unknown variables which could exceed reasonable
time frames, or expenses.


I don't care. We are not going backwards. We are going forwards. We
are not in the 1970s. We are in the 2010s. And things are not getting
smaller. They are getting bigger.

Again, you have no concept of how the processor market works, or what
processors are used.

If you want to restrict your own language to a particular type of
processor, that's fine - but you don't get to condemn C for being more
flexible.
32-bit and 64-bit CPUs today exist for exceedingly low cost. ARM CPUs
can be created with full support for video, networking, external storage,
for less than $1 per unit.

Again, you have no concept of what you are talking about. Yes, there
are ARM devices for less than $1 - about $0.50 is the cheapest available
in very large quantities. They don't have video, networking, etc. -
for that, you are up to around $15 minimum, including the required
external chips.

Putting an 8-bit cpu on a chip, on the other hand, can be done for
perhaps $0.10. The same price will give you a 4-bit cpu with ROM in a
bare die, ready to be put into a cheap plastic toy or greeting card.

Yes, 32-bit cpus (but not 64-bit cpus) are available at incredibly low
prices. No, they do not come close to competing with 8-bit devices for
very high volume shipments.

I will certainly agree that most /developers/ are working with 32-bit
devices - and most embedded developers that worked with 8-bit or 16-bit
devices are moving towards 32-bit devices. I use a lot more 32-bit
chips than I did even 5 years ago - but we certainly have not stopped
making new systems based on 8-bit and 16-bit devices.

If I were designing a new language, I would pick 32-bit as the minimum
size - but I am very glad that C continues to support a wide range of
devices.
We are not stuck in the past. We are standing at a switchover point.
Our mechanical processes are down below 20nm now. We can put so many
transistors on a chip today that there is no longer any comparison to
what existed even 15 years ago, let alone 40 years ago.

It's time to move on.


Nonsense.

Exactly - it is nonsense.

Will your specifications say anything about the timing of particular
constructs? I expect not - it is not fully specified. In some types of
work it would be very useful to be able to have guarantees about the
timing of the code (and there are some languages and toolsets that give
you such guarantees). For real-time systems, operating within the given
time constraints is a requirement for the code to work correctly as
desired - it is part of the behaviour of the code. Will your
specifications say anything about the sizes of the generated code? I
expect not - it is not fully specified. Again, there are times when
code that is too big is non-working code. Will your specifications give
exact requirements for the results of all floating point operations? I
expect not - otherwise implementations would require software floating
point on every processor except the one you happened to test on.
You can give more rigid specifications than the C
standards do - but there are no absolutes here. There are /always/
aspects of the language that will be different for different compilers,
different options, different targets. Once you understand this, I think
you will get on a little better.

We'll see. RDC will have rigid standards, and then it will allow for
variances that are needed by X, Y, or Z. But people who write code,
even code like a = a[i++], in RDC, will always know how it is going
to work, platform A or B, optimization setting C or D, no matter the
circumstance.


People who write code like "a = a[i++]" should be fired, regardless
of what a language standard might say.
People are what matter. People make decisions. The compiler is not
authorized to go outside of their dictates to do things for them. If
the developer wanted something done a particular way they should've
written it a particular way, otherwise the compiler will generate
EXACTLY what the user specifies, in the way the user specifies it,
even if it has to dip into clunky emulation to carry out the mandated
workload on some obscure CPU that probably has no business still
being around in the year 2014.


No. It would guarantee that your program would work the same on all
CPUs, regardless of internal mechanical abilities. The purpose of the
language is to write common code. Having it work one way on one CPU,
and potentially another way on another CPU, or maintaining such an
insane concept as "with int, you can always know that it is at least
16-bits," when we are today in the era of 64-bit computers, is to be
found on the page of examples demonstrating wrongness.

You throw around insults like "insane" and "lunacy", without making any
attempt to understand the logic behind the C language design decisions
(despite having them explained to you).
The language should determine how all things are done. Anything done
beyond that is through exceptions to the standard through switches
which must be explicitly enabled (because on a particular platform you
want to take advantage of whatever features you deem appropriate).


Bzzz! Wrong. You still have rigid specs. The specs are exactly what
they indicate. Only the overrides change the spec'd behavior. A person
writing a program on ANY version of RDC will know that it works EXACTLY
the same on ANY version of RDC no matter what the underlying architecture
supports. And that will be true of every form of syntax a person could
invent, use, employ, borrow, and so on...

Rigid specs are what computers need. They expose a range of abilities
through the ISA, and I can always rely upon those "functions" to work
exactly as spec'd. I don't need to know whether the underlying hardware
uses method A or method B to compute a 32-bit sum ... I just want a
32-bit sum.

That's how it will be with RDC.

It's odd that you use the example of computing the sum of two numbers as
justification for your confused ideas about rigid specifications, when
it was previously given to you as an example of why the C specifications
leave things like order of operation up to the implementation - it is
precisely so that when you write "x + y", you don't care whether "x" or
"y" is evaluated first, as long as you get their sum. But with your
"rigid specs" for RDC, you are saying that you /do/ care how it is done,
by insisting that the compiler first figures out "x", /then/ figures out
"y", and /then/ adds them.

Note, of course, that all your attempts to specify things like ordering
will be in vain - regardless of the ordering generated by the compiler,
modern processors will rearrange the order in which the instructions are
carried out.
NO! You only have "implementation dependent" behavior because the authors
of the C language specs did not mandate that behavior should be consistent
across all platforms, with extensions allowed.

C exists as it does today because they chose this model of "I will leave
things open in the standard so that people can implement whatever they
choose, possibly based upon whether or not the breakfast they ate is
settling well in their stomach today, or not," rather than the model of
"I will define everything, and let anyone who wants to step outside of
what I've defined do so, as per an extension."

To me the latter is FAR AND AWAY the better choice.


Yes. I'd rather be a farmer than deal with this variation in behavior
across platforms for the rest of my life. I am a person. I am alive.

I am not convinced. I suspect you are a Turing test that has blown a
fuse :)
I write something in a language, I expect it to do what I say. I do not
want to be subject to the personal inclinations of a compiler author who
may disagree with me on a dozen things, and agree with me on a dozen
others. I want to know what I'm facing, and then I want to explore the
grandness of that author's vision as to which extension s/he has provided
unto me, making use of them on architecture X whenever I'm targeting
architecture X.


It should be that way on all aspects. On an 8-bit computer, ints should
be 32-bit and the compiler should handle it with emulation.

Why? You have absolutely no justification for insisting on 32-bit ints.
If I am writing code that needs a 32-bit value, I will use int32_t.
(Some people would prefer to use "long int", which is equally good, but
I like to be explicit.) There are no advantages in saying that "int"
needs to be 32-bit, but there are lots of disadvantages (in code size
and speed).
However, by using the compiler switch to enable the alternate size, then
my code can be compiled as I desire for the lesser hardware.

This means that the same code has different meanings depending on the
compiler flags, and the target system. This completely spoils the idea
of your "rigid specifications" and that "RDC code will run in exactly
the same way on all systems". It is, to borrow your own phrase, lunacy.
 

Ian Collins

Rick said:
C doesn't have fixed types. Not even C99. It has an add-on
typedef hack which allows a person to use "fixed-size types" within its
variable-sized arsenal.

In much the same way as C doesn't have I/O except through an add-in
library hack? Come on, get real.
But as someone else pointed out, there are some
platforms which may not be able to implement those types because they are
not supported on the hardware, or in the compiler. In such a case, because
C does not require that 64-bit forms be supported, for example, now I'm
writing manual functions to overcome what C is lacking.

That's a lot of work. Implementing a 64 bit int on something like an
8051 isn't too big a job, but adding an 8 bit char on a DSP which can
only address 16 or 32 bit units is more of a challenge (especially if
you include pointer arithmetic) and will run like a two legged dog.

Now if you were using C++, you could create your own types without
having to write your own compiler. It looks to me that you could create
your desired language fairly easily as a C++ library.
 

James Kuyper

On 02/07/2014 05:19 PM, Rick C. Hodgin wrote:
....
It's not a stereotype. It's a guess: Legacy code made the decision on
what they should do, and so as to not lose sales... It's not unlikely.

Since so many other people have made the same decision deliberately by
reason of believing it was a good idea, I do consider it unlikely.
Why do you continue to post to me, James? Wouldn't it be better to just
ignore me and my numerous flaws?

I'm posting to the newsgroup, not to you.

I'm not quite sure why I've continued to do so, but you'll be pleased to
know that I've decided to stop.
I may mention your idiocies from time to time - you make an excellent
example of how not to think - but I've decided to spare myself the pain
of reading your future messages.
 

Ben Bacarisse

Rick C. Hodgin said:
Unless they try to access a disk file with a given structure, or read a
network packet with a given structure, not an uncommon task (to say the
least). In such a case there are fixed sizes that are needed. And in
general programming there are fixed sizes that are needed.

I think you don't know that C has integer types of known widths (if the
machine has them at all).

But you have just lost track of what is being discussed: "The cases
where one needs to know are very rare" refers to what you claimed every
programmer needs to know, which is the bit-size of int. Fixed file
formats are not a counter-example to this.
To address your issues, I have bypassed this limitation in C with my
own mechanism. There is a definition at the header which allows typedefs
for determining what native variable type for the compiler is used for
the fixed size entities I use.

No, you have shown that you don't know the language and have tried, not
very successfully, to re-invent the wheel. I would not normally put it
so harshly, but you are so convinced that you are right about everything
that it seems the appropriate tone to take.
And, being as C is lackadaisical in
this area, you will have to manually redefine those typedefs for your
particular version of C (to maintain what is indicated in their type:
the bit size, such as 8 for u8, 32 for u32, and so on).

From the top of sha1.cpp:

And elsewhere in your code base, it seems.
typedef unsigned long long u64;
typedef unsigned long u32;
typedef unsigned short u16;
typedef unsigned char u8;

typedef long long s64;
typedef long s32;
typedef short s16;
typedef char s8;

typedef float f32;
typedef double f64;

These work for Visual C++. If you use another compiler, re-define them
to work for your platform. Were you using RDC, they would be native
types and there would not be an issue. Ever.

If you were using modern C that would be true as well. (I see Keith
Thompson has written a calmer reply that gives you the details.)

Computers today are blazingly fast on nearly everything they do.

Just an aside: that's a terrible reason to put efficiency to one side.
Computers are always pushed to the limit of what they can do, no matter
how fast they are. My tablet struggles with many websites. It would
not be any faster had all the programmers taken care all the way down,
because the saving would simply mean that people would write more
complex websites, but that in itself is a gain.

FWIW, I would rather have slower code operating in parallel than faster
code operating in serial.

Yes, but that is as yet an un-cracked nut, despite decades of
research. Something tells me, though, that you will have a solution to
it.

Because, at present, it is the best tool for what I'm trying to
accomplish.

Yes, but why is it the best tool? It seems unlikely on the face of it.

<snip>
 
D

David Brown

Visual Studio 2008 and earlier does not include that file. However, I
was able to find a version online that someone wrote to overcome that
shortcoming in the Microsoft tool chain. I discovered it does exactly
what I do in my manual version. And mine has the added advantage of
not requiring another #include file to parse at compile time, making
turnaround time faster, and the generated program database smaller.
Minor features to be sure, but real ones nonetheless.

These are not "minor features" - they are irrelevant "features". It
makes no conceivable difference whether "int32_t" is defined in a
standard header supplied with the compiler, or built into the compiler's
executable file.

Some parts of C are defined as part of the core compiler language, while
other parts are defined as part of the standard library and headers. It
is all C - and int32_t and friends have been part of the C standards
since C99.
C doesn't have fixed types. Not even C99. It has an add-on
typedef hack which allows a person to use "fixed-size types" within its
variable-sized arsenal. But as someone else pointed out, there are some
platforms which may not be able to implement those types because they are
not supported on the hardware, or in the compiler. In such a case, because
C does not require that 64-bit forms be supported, for example, now I'm
writing manual functions to overcome what C is lacking.


I use the C++ compiler for my C code because it has syntax relaxations.
I do not use C++ features, apart from those relaxations, and perhaps a
handful of features that are provided in C++ (such as anonymous unions).

These are not "syntax relaxations" - they are differences you get from
using a different programming language. Some of the language features
in C99 were copied from C++ (at least roughly), such as "inline", mixing
declarations and statements, // comments, etc. Sane C programmers use
modern C toolchains rather than MSVC++, and thus get these features from
C99. You are programming in C++, but skipping half the language.

I began working on my project in the mid-1990s. I have worked on it in
varying degrees since then, but I have done a major push since 2009.

stdint.h does not come with Visual Studio 2008, the latest version I
personally use. And WinDef.h does not have facilities to access types
by size, only by name.

The sizes of the types in WinDef.h are, AFAIK, fixed - even though the
names don't include the sizes.

In my early C programming days, I too used my own set of typedefs for
fixed size types - as did everyone else who needed fixed sizes. I still
have code that uses them. But I moved over to the <stdint.h> types as
C99 became common.

It is fine to keep doing that for long-lived projects, to save changing
the rest of the code (though personally I find "s32" style names ugly).
But I would recommend you make your typedefs in terms of <stdint.h>
types, as those are part of the modern C language - it makes it easier
to maintain consistency if you change compiler or platform (that's why
the <stdint.h> types exist).
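
Concretely, that would look something like this (a sketch; the short names
are the ones from earlier in the thread, and the f32/f64 lines assume IEEE 754
single and double precision):

#include <stdint.h>

typedef int8_t   s8;     typedef uint8_t   u8;
typedef int16_t  s16;    typedef uint16_t  u16;
typedef int32_t  s32;    typedef uint32_t  u32;
typedef int64_t  s64;    typedef uint64_t  u64;

typedef float    f32;    /* assumes IEEE 754 single precision */
typedef double   f64;    /* assumes IEEE 754 double precision */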
 
J

James Kuyper

On 02/08/2014 04:49 PM, Ian Collins wrote:
....
That's a lot of work. Implementing a 64 bit int on something like an
8051 isn't too big a job, but adding an 8 bit char on a DSP which can
only address 16 or 32 bit units is more of a challenge (especially if
you include pointer arithmetic) and will run like a two legged dog.

You mean, like Dominic? <...>. :)
 

Ben Bacarisse

Rick C. Hodgin said:
Except that with C some things are left to compiler author whims that can
tick-tock version after version.

But, as you say, you can rely on the specification.
 

BartC

Rick C. Hodgin said:
I think typedefs are brilliant. I include them in RDC. They have uses
galore and I use them directly and indirectly in my code.

(I suppose we're all different. In my own syntaxes, I don't have anything
like typedef for creating aliases at all.

I have a way of creating actual new types, but aliases for existing types
are specifically excluded (because they would be an actual new type derived
from the original, not an alias, causing all sorts of problems).

When I do sometimes need a simple alias for a type, I just use a macro.
That isn't a solution for C, because some complex types have to be
wrapped around any name they are used to declare; you can't just put the
type name adjacent to the variable name.

I don't know if your language copies C's convoluted type-specs; that is a
genuine shortcoming of the language which I suggest you try and avoid.)
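
For instance, the kind of declaration that causes the trouble (a C sketch;
the names are invented):

/* A typedef wraps the declarator around the new name, so this works:  */
typedef void (*handler_t)(int);
handler_t on_signal;               /* pointer to function taking an int */

/* A macro can only substitute text, so the equivalent does not parse:  */
#define HANDLER void (*)(int)
/* HANDLER on_signal;   expands to:  void (*)(int) on_signal;  - invalid */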
I think it's beyond lunacy to typedef variable-sized native types to a
form that then provides explicit-sized types through typedefs when they
should be expressly provided for by the language, and not through the
typedef add-on hack.

I thought it was odd too. Obviously just bolting on a set of typedefs was
simpler than adding them properly to the implementation.
 

Rick C. Hodgin

In much the same way as C doesn't have I/O except through an add-in
library hack? Come on, get real.

I used to think the way I/O was handled in C was a hack (back in the 1990s),
but I no longer think that way.
Now if you were using C++, you could create your own types without
having to write your own compiler. It looks to me that you could create
your desired language fairly easily as a C++ library.

RDC defines a particular sequence of processing operations, introduces new
features for flow control, multi-threading, exception handling, and it has
some requirements relating to its foundational association with the graphical
IDE, needs relating to edit-and-continue abilities and the debugger, a new
ABI, and I have a target that RDC will later be compiled within itself.

Most features of C++ will be dropped in RDC.

Best regards,
Rick C. Hodgin
 

Ian Collins

Rick said:
I used to think the way I/O was handled in C was a hack (back in the 1990s),
but I no longer think that way.


RDC defines a particular sequence of processing operations, introduces new
features for flow control, multi-threading, exception handling, and it has
some requirements relating to its foundational association with the graphical
IDE, needs relating to edit-and-continue abilities and the debugger, a new
ABI, and I have a target that RDC will later be compiled within itself.

Most features of C++ will be dropped in RDC.

Such as being able to define a particular sequence of processing
operations, multi-threading and exception handling?
 

Rick C. Hodgin

Such as being able to define a particular sequence of processing
operations, multi-threading and exception handling?

RDC defines explicitly the order of operation for data processing, logic
tests, function calls, and passed parameters, allows multiple return
parameters, the insertion of generic code at any point via something I
call a cask, and much more. There will never be any ambiguity on what
is processed and in what order.

As for the rest, no semicolons required, but they can still be used:

-----
Multi-threading:
in (x) {
    // Do some code in thread x
} and in (y) {
    // Do some code in thread y
} and in (z) {
    // Do some code in thread z
}
tjoin x, y, z    // Wait until x, y, and z threads complete
                 // before continuing...

-----
Exception handling can be handled using a try..catch like construction,
or you can insert casks which allow exceptions to be thrown to specific
destinations depending on where you are in source code. The generic
(|mera|) cask allows for insertion at any point, to indicate that if an
exception is thrown at that part of the command, then it will go to the
indicated destination.

The flow { } block allows for many more features, and there are a lot of
other new constructs as well.

flow {
    // By default, any error will trap to error (like try..catch)
    do_something1() (|mera|flowto error1||)    // Will trap to error1 here
    do_something2() (|mera|flowto error2||)    // Will trap to error2 here

    subflow error1 {
    }

    subflow error2 {
    }

    error (SError* err) {
    }
}

Best regards,
Rick C. Hodgin
 
