Best C++ compiler for DOS programs

Walter Bright

Alex said:
I've just been testing your Digital Mars C++ compiler and
OpenWatcom 1.4 under Windows 98, targeting MSDOS 6.22 platforms. Very,
very nice. But I'd really like to be able to write STL code that can
run under DOS under the large memory model. Is that even do-able? One
of my STL projects only takes up 300k when compiled as a 32 bit windows
console application. OpenWatcom 1.4 barfs on most of my STL code, but I
suppose 1.5 will be much better in that respect.

STL isn't very usable for 16 bit code, it's just too big. Another
problem is it has no accommodation for near/far. Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.
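
The error code idea is nothing fancy - return a status from each
function and check it at the call site, instead of relying on stack
unwinding. A minimal sketch (the names are just for illustration):

    #include <stdio.h>

    enum Err { ERR_OK, ERR_RANGE };

    /* Report failure through the return value and hand back the
       result via an out parameter, instead of throwing. */
    static Err compute(int x, int *out)
    {
        if (x < 0)
            return ERR_RANGE;   /* would otherwise be a throw */
        *out = x * 2;
        return ERR_OK;
    }

    int main()
    {
        int result;
        if (compute(21, &result) != ERR_OK) {
            fprintf(stderr, "compute failed\n");
            return 1;
        }
        printf("%d\n", result);
        return 0;
    }

The cost is a test-and-branch at each call site, but no unwind tables
sitting in that precious 640KB.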

Exception handling, STL, etc., are much more practical in a 32 bit system.
Oh, and by the way, I've managed to compile 16 bit Windows programs
using just the *downloaded* Digital Mars compiler toolchain plus the
16 bit DOS development package. All I did was to copy over Win16.h from
OpenWatcom 1.4 into the \h\win16 include directory and rename it
windows.h. Mind you, I can't link at all!

Getting the CD does get you the Windows 16 libraries!

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 
Sjouke Burry

Florian said:
It seems to be a commercial product - is it still under development?

Bye
Flo
Shareware it was last time I looked.
They also have a 16 bit version (svga.zip).
 
Alex Buell

STL isn't very usable for 16 bit code, it's just too big. Another
problem is it has no accommodation for near/far. Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints,
I'd recommend using the older error code technique instead.

Exception handling, STL, etc., are much more practical in a 32 bit
system.

I guess that's the price of progress. BTW, I bought a copy of Zortech C
decades ago and it was worth every penny I spent on it. Shame I don't
have the original disks nor the software any more, or you'd have got
them. Borland have put up a historical library of their Pascal & C
compilers. Hopefully you could do the same for Zortech C? ;)
Getting the CD does get you the Windows 16 libraries!

Yes that's right, thanks.
 
Walter Bright

Alex said:
I guess that's the price of progress. BTW, I bought a copy of Zortech C
decades ago and it was worth every penny I spent on it. Shame I don't
have the original disks nor the software any more, or you'd have got
them. Borland have put up a historical library of their Pascal & C
compilers. Hopefully you could do the same for Zortech C? ;)

I've wanted to, but I needed the agreement of a couple of people who I
have been unable to locate, and one who has sadly passed away.
 
P.J. Plauger

We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.
STL isn't very usable for 16 bit code, it's just too big.

News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.
Another problem
is it has no accommodation for near/far.

Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.
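
Roughly speaking, the shape is this (a sketch, not our actual source;
__far is the usual 16-bit compiler extension, not ISO C++):

    #include <stdlib.h>

    // An allocator whose pointer type carries a far qualifier, in
    // the spirit of the original H-P allocator parameterization.
    // Compiles only with a 16-bit compiler that provides __far.
    template <class T>
    struct far_allocator {
        typedef T __far      *pointer;
        typedef unsigned long size_type;

        pointer allocate(size_type n)
            { return (pointer)malloc((size_t)(n * sizeof(T))); }
        void deallocate(pointer p, size_type)
            { free((void *)p); }
    };

Containers written against the allocator's pointer typedef pick up the
qualifier without further changes.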
Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.

We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.
Exception handling, STL, etc., are much more practical in a 32 bit system.

More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are intrinsically
too big to be of use in some programs for 16-bit processors.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Walter Bright

P.J. Plauger said:
We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.

I thought mingw only supported 32 bit code under DOS. I just checked the
website for it, and it only mentions 32 bit code. "large-model" programs
under DOS are 16 bit programs.

News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.

I'm curious what C++ compiler you're using to generate 16 bit code with STL.
Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.

Having it in there doesn't mean it works very well. Effective large
model programs need careful management of which segments each function
goes into, when things can be near, when things can be referred to by
__ss pointers, etc. Templates aren't conducive to this, and neither is
just throwing in a near allocator.
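
To make that concrete, large model tuning meant sprinkling declarations
like these throughout the code (a sketch using DMC/MSC-style extensions;
the names are invented, and it only compiles as 16 bit code):

    /* Hot function called constantly: wants to live in the same
       segment as its callers so the calls can be near. */
    int __near hash(const char __near *key);

    /* Big, rarely touched table: banished to far data so it does
       not eat the 64K near data segment (DGROUP). */
    extern long __far rates[4096];

    /* Pointer known to address stack data: offset-only, relative
       to SS, cheaper to pass and dereference than a far pointer. */
    void scan(char __ss *buf);

A template instantiated from a header has no good way to make those
distinctions per call site.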


We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.

I'm curious what 16 bit C++ compiler you're using that supports
exception handling.

More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are intrinsically
too big to be of use in some programs for 16-bit processors.

A large part of the effort in developing 16 bit programs was always
spent trying to squeeze the size down. Exception handling adds a big
chunk of size, which will just make it that much harder, and so will
actually reduce the complexity of a program you can build for 16 bits.
STL adds another chunk of size, if only because it doesn't allow tuning
of near/far. I'm not as convinced of the lightweightness of STL as you
are, and iostreams in particular seems to add a huge amount of code even
for simple things. Using C stdio for 16 bit programs is best because
many years were spent optimizing it to get the size down (some vendors
even implemented printf entirely in assembler!), and such effort was
never expended on iostreams.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 
Walter Bright

P.J. Plauger said:
News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.

One of the problems templates have (and STL is thoroughly based on
templates) is that it can go too far with customization, thereby
generating bloat. For example, I use a (non-template) linked list
package that creates a list of 'int' items. I can use it to store lists
of unsigned, shorts, unsigned shorts, char types, near pointers, etc.,
without adding any code. But if it were templatized, a separate
implementation would be generated for each type.

This isn't a problem for 32 bit code generation, where there's lots of
room for the extra code. But it *is* a problem for 16 bit code, where
your code and data have to fit in 640KB.
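
In code, the contrast looks roughly like this (illustrative only):

    #include <list>  // templated: one instantiation per element type

    /* Non-template list: one copy of the code in the executable,
       reused for anything that fits in an int. */
    struct IntNode { IntNode *next; int value; };

    IntNode *push(IntNode *head, int v)
    {
        IntNode *n = new IntNode;
        n->next = head;
        n->value = v;
        return n;
    }

    void demo()
    {
        IntNode *head = 0;
        head = push(head, 42);          // int
        head = push(head, (short)-7);   // short: same object code
        head = push(head, 'x');         // char:  same object code

        // Each distinct element type below can drag its own copy
        // of the list machinery into the executable.
        std::list<int>   li;  li.push_back(42);
        std::list<short> ls;  ls.push_back(-7);
        std::list<char>  lc;  lc.push_back('x');
        // (cleanup of 'head' omitted for brevity)
    }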

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 
P.J. Plauger

I thought mingw only supported 32 bit code under DOS. I just checked the
website for it, and it only mentions 32 bit code. "large-model" programs
under DOS are 16 bit programs.

Sorry, I blurred the distinction here.
I'm curious what C++ compiler you're using to generate 16 bit code with
STL.

We have a number of embedded OEMs who ship our libraries for both 16-bit
and 32-bit targets. I'm sure that C++ is more popular on the larger ones,
but I know it's not completely absent on the smaller ones.
Having it in there doesn't mean it works very well. Effective large model
programs need careful management of which segments each function goes in
to, when things can be near, when things can be referred to by __ss
pointers, etc. Templates aren't conducive to this, and neither is just
throwing in a near allocator.

Right. But sometimes it works *well enough*.
I'm curious what 16 bit C++ compiler you're using that supports exception
handling.

I defer to our OEMs.
A large part of the effort in developing 16 bit programs was always spent
trying to squeeze the size down. Exception handling adds a big chunk of
size, which will just make it that much harder, and so will actually
reduce the complexity of a program you can build for 16 bits.

Not necessarily. You can trade time vs. space for exception handling, and
I've seen both extremes.
STL adds another chunk of size, if only because it doesn't allow tuning of
near/far. I'm not as convinced of the lightweightness of STL as you are,
and iostreams in particular seems to add a huge amount of code even for
simple things.

Ah, I see part of the communication gap here. By STL *you* mean "the
Standard C library", while *I* mean "that set of containers and algorithms
based heavily on the Hewlett-Packard Standard Template Library". We avoid
the iostreams bloat by offering EC++ (as well as the full Standard C++
library), which looks more like the original cfront iostreams than the
full bore templated and internationalized thing that got standardized.
Our Abridged Library consists of EC++ with STL bolted on. That's what
I mean by "weightless" -- the presence of STL costs nothing unless you
use it.
Using C stdio for 16 bit programs is best because many
years were spent optimizing it to get the size down (some vendors even
implemented printf entirely in assembler!), and such effort was never
expended on iostreams.

Well, it was by us. I agree that stdio can be smaller, particularly if
you use a bespoke printf that omits floating-point when you don't need
it. But once again, EC++ has proved repeatedly to be *small enough*.
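
The no-float printf trick is easy to sketch; here is a toy version
handling just %d and %s (illustrative only, nothing like production
code):

    #include <stdarg.h>
    #include <stdio.h>   /* for putchar only */

    /* A bespoke printf that knows only %d and %s, so no floating
       point formatting code ever gets linked into the image. */
    static void mini_printf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        for (; *fmt; ++fmt) {
            if (*fmt != '%') { putchar(*fmt); continue; }
            ++fmt;
            if (!*fmt) break;               /* stray '%' at end */
            if (*fmt == 'd') {
                int v = va_arg(ap, int);
                unsigned u = (unsigned)v;
                char buf[12];
                int i = 0;
                if (v < 0) { putchar('-'); u = 0u - u; }
                do { buf[i++] = (char)('0' + u % 10); u /= 10; } while (u);
                while (i) putchar(buf[--i]);
            } else if (*fmt == 's') {
                const char *s = va_arg(ap, const char *);
                while (*s) putchar(*s++);
            } else {
                putchar(*fmt);              /* %% and the rest */
            }
        }
        va_end(ap);
    }

    int main()
    {
        mini_printf("%s: %d\n", "answer", -42);
        return 0;
    }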

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
P.J. Plauger

One of the problems templates have (and STL is thoroughly based on
templates) is that it can go too far with customization, thereby
generating bloat. For example, I use a (non-template) linked list package
that creates a list of 'int' items. I can use it to store lists of
unsigned, shorts, unsigned shorts, char types, near pointers, etc.,
without adding any code. But if it were templatized, a separate
implementation would be generated for each type.

This isn't a problem for 32 bit code generation, where there's lots of
room for the extra code. But it *is* a problem for 16 bit code, where your
code and data have to fit in 640KB.

Yep, that's the standard bogey man trotted out by people leery of
templates. In real life, most people don't use eleven different map
types, with eleven versions of tree-walking code. In a sub-megabyte
program, it's not likely they'll need even two. And in real life,
very little code in STL benefits from distilling out in parameter
independent form. As always, the rule should be, try it first to see
if it's *good enough*. If so, you're done, and way earlier in the day.
Don't optimize for speed or space until you know you have to.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Walter Bright

P.J. Plauger said:
Right. But sometimes it works *well enough*.

I remember well the 80's. Lots of people ported unix utilities to 16 bit
DOS. Those utilities were designed for 32 bit flat code, and whether
they worked "well enough" on DOS was certainly a matter of opinion. They
usually got stomped by utilities and applications that were custom
crafted for the quirks of 16 bit computing.


I defer to our OEMs.

I ask the question because I don't know of any 16 bit C++ compiler that
supports either modern templates or exception handling, besides Digital
Mars C++.

Not necessarily. You can trade time vs. space for exception handling, and
I've seen both extremes.

The two main schemes for doing exception handling are:

1) Microsoft style, where runtime code is inserted to keep track of
where one is in a table of destructors that would need to be unwound

2) Linux style, where the PC is compared against a static table of
addresses to determine where in the table one is

Both involve the addition of a considerable chunk of code (1) or data (1
and 2). Under (2), that chunk consists of data that isn't actually
needed unless an exception is thrown. This is an efficient
implementation under a system that has demand paged virtual memory,
where executables' pages are only loaded from disk if the address is
actually referenced.

This is not the case for 16 bit DOS, which *always* loads the entire
executable into memory. DOS doesn't have demand paged virtual memory. 32
bit DOS extenders do add demand paged virtual memory, but only for 32
bit code, not 16 bit.

Hence, the exception handling bloat is always taking away space from
that precious 640KB of memory. I suppose it is possible for the
compiler/linker to write the exception handling tables out to a separate
file, but I've never heard of an implementation that did that.

Ah, I see part of the communication gap here. By STL *you* mean "the
Standard C library", while *I* mean "that set of containers and algorithms
based heavily on the Hewlett-Packard Standard Template Library".

I mean STL as in "C++ Standard Template Library."
We avoid
the iostreams bloat by offering EC++ (as well as the full Standard C++
library), which looks more like the original cfront iostreams than the
full bore templated and internationalized thing that got standardized.
Our Abridged Library consists of EC++ with STL bolted on.

Digital Mars C++ for 16 bits does offer both of the two older
implementations of iostreams (iostreams went through a couple major
redesigns before being standardized). These work tolerably well on 16
bit platforms, but they are not Standard C++ iostreams by any stretch of
the imagination.

Well, it was by us. I agree that stdio can be smaller, particularly if
you use a bespoke printf that omits floating-point when you don't need
it. But once again, EC++ has proved repeatedly to be *small enough*.

From http://www.dinkumware.com/embed9710.html:
-----------------------------------
What's Not in Embedded C++
Embedded C++ is a library specification and a minimum language
specification. The minimum language specification is a proper subset of
C++, omitting:

multiple inheritance and virtual base classes
runtime type identification
templates
exceptions
namespaces
new-style casts
------------------------------------

EC++ being practical for 16 bit targets does not imply that templates
and exception handling are. EC++ is kinda what C++ was back in 1991 or
so, when it worked well on 16 bit targets.

Do you know anyone using STL (Standard Template Library) for 16 bit X86
programming? I would be surprised if there were any. I looked around on
the Dinkumware site, but didn't find anything specifically mentioning 16
bit support or any particular 16 bit C++ compilers, but perhaps I missed it.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 
Walter Bright

P.J. Plauger said:
Yep, that's the standard bogey man trotted out by people leery of
templates. In real life, most people don't use eleven different map
types, with eleven versions of tree-walking code. In a sub-megabyte
program, it's not likely they'll need even two.

In today's world, a sub-megabyte program is a trivial program, and I
would agree with you. But in the 16 bit DOS days, this was not true at
all. A 250K program could be extremely complex. My compiler, for
example, had to be split into 3 passes, and there were lots of different
list types and tree-walking code in it, and it benefited substantially
(and critically) from being able to reuse existing object code as much
as possible. Reusing source code (what templates do) was relatively
less important.

And in real life,
very little code in STL benefits from distilling out in parameter
independent form. As always, the rule should be, try it first to see
if it's *good enough*. If so, you're done, and way earlier in the day.
Don't optimize for speed or space until you know you have to.

That is a good rule. But in the 16 bit DOS world, you had to start
optimizing for speed/space right out of the gate, as the limits
were reached very quickly for non-trivial programs. If your program was
going to use more than 64K of data, you had to design that in from the
start, not retrofit it in later. Programs were also far more sensitive
to such optimizations then than today - I don't believe languages like
Ruby or Python would have enjoyed widespread success on those machines.
And remember that early Java-like implementation - the UCSD P-system? That
was a setup years before its time.
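
To revisit the 64K point: a data set past 64K had to be far (or huge)
from day one, and that decision touched every declaration along the way.
A sketch using Borland-style names (farmalloc lived in <alloc.h> there;
other compilers spelled it differently, e.g. _fmalloc):

    #include <alloc.h>   /* Borland-style far heap allocation */

    /* 400,000 bytes cannot live in the 64K near data segment, so
       the pointer is huge from the start; huge arithmetic
       normalizes segment:offset across 64K boundaries. */
    static long __huge *samples;

    int init_samples(void)
    {
        samples = (long __huge *)farmalloc(100000L * sizeof(long));
        return samples != 0;
    }

Retrofitting far-ness into a program written with default near pointers
meant touching every function that ever saw that pointer.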
 
Steve

Rod said:
If you use DEGFX instead of Allegro, under the DEGFX directories there is
a DJGPP directory. There are four files. Three are small. I think, but
am not sure, that these are the only files that need to be ported. It
appears to me that these are mostly DPMI calls or below-1MB memory
accesses (farpeek() etc.). It's fairly straightforward but time-consuming
to port these. I also see some packed structs and use of the DJGPP
transfer buffer.
.................

DJGPP packed structs would need to be rewritten:
.................

The DJGPP transfer buffer can be set up for PM Watcom; get_dos_mem() is
called to set up __tb, and free_dos_mem() when done:
..................


Thanks for the info. I'll see what I can do.

Steve
 
P.J. Plauger

I remember well the 80's. Lots of people ported unix utilities to 16 bit
DOS. Those utilities were designed for 32 bit flat code, and whether they
worked "well enough" on DOS was certainly a matter of opinion. They
usually got stomped by utilities and applications that were custom crafted
for the quirks of 16 bit computing.

As the guy who did the first rewrite of Unix, I can attest that it ran
just fine on 16-bit computers. We also ported our utilities to other
platforms, including DOS. To this day, I still use quite a few of those
utilities in house to build the packages we ship. So IMO they work
"well enough". YMMV.
I ask the question because I don't know of any 16 bit C++ compiler that
supports either modern templates or exception handling, besides Digital
Mars C++.

See http://www.iar.com, by way of example. They use the EDG front end, our
EC++ and Abridged libraries, and a host of their own 8-, 16-, and 32-bit
back ends. The Abridged Library supports templates, which don't require
back-end support (other than huge names in the linker). I don't know which
IAR back ends support exceptions.

IAR is one of about a dozen of our OEM customers who supply C/C++
compilers for the embedded marketplace.
The two main schemes for doing exception handling are:

1) Microsoft style, where runtime code is inserted to keep track of where
one is in a table of destructors that would need to be unwound

2) Linux style, where the PC is compared against a static table of
addresses to determine where in the table one is

Both involve the addition of a considerable chunk of code (1) or data (1
and 2). Under (2), that chunk consists of data that isn't actually needed
unless an exception is thrown. This is an efficient implementation under a
system that has demand paged virtual memory, where executables' pages are
only loaded from disk if the address is actually referenced.

This is not the case for 16 bit DOS, which *always* loads the entire
executable into memory. DOS doesn't have demand paged virtual memory. 32
bit DOS extenders do add demand paged virtual memory, but only for 32 bit
code, not 16 bit.

Hence, the exception handling bloat is always taking away space from that
precious 640KB of memory. I suppose it is possible for the compiler/linker
to write the exception handling tables out to a separate file, but I've
never heard of an implementation that did that.

Right. All I'm challenging is whether your "considerable chunk" of "bloat"
is so excessive as to make C++ completely unusable in the sub-megabyte
domain.
I mean STL as in "C++ Standard Template Library."

Then why do you refer to "iostreams in particular", which is not a part
of STL?
Digital Mars C++ for 16 bits does offer both of the two older
implementations of iostreams (iostreams went through a couple major
redesigns before being standardized). These work tolerably well on 16 bit
platforms, but they are not Standard C++ iostreams by any stretch of the
imagination.

Whereas istream/ostream/fstream etc. in EC++ are often indistinguishable
from the Standard C++ version. It is, in fact, the subset of iostreams
that most people use most of the time.
From http://www.dinkumware.com/embed9710.html:
-----------------------------------
What's Not in Embedded C++
Embedded C++ is a library specification and a minimum language
specification. The minimum language specification is a proper subset of
C++, omitting:

multiple inheritance and virtual base classes
runtime type identification
templates
exceptions
namespaces
new-style casts
------------------------------------

EC++ being practical for 16 bit targets does not imply that templates and
exception handling are. EC++ is kinda what C++ was back in 1991 or so,
when it worked well on 16 bit targets.

You've described EC++, as specified in 1997. It restricted the language
to give existing (pre-standard) C++ compilers a fighting chance. But
the existence of off-the-shelf complete front ends like EDG have made
that aspect of EC++ way less important. Our most popular embedded
product is the Abridged Library, which relaxes *all* of the above
language restrictions. It's the Standard C++ library that eats space
and time, so the simplified EC++ library iostreams, string, etc. offer
the most significant savings.
Do you know anyone using STL (Standard Template Library) for 16 bit X86
programming? I would be surprised if there were any. I looked around on
the Dinkumware site, but didn't find anything specifically mentioning 16
bit support or any particular 16 bit C++ compilers, but perhaps I missed
it.

See above.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
P.J. Plauger

In today's world, a sub-megabyte program is a trivial program, and I would
agree with you. But in the 16 bit DOS days, this was not true at all. A
250K program could be extremely complex.

Huh? Why does a 250KB program suddenly get less complex? I agree that
code now freely sprawls because memory is so extensive and so cheap,
but it doesn't follow that a small program now *has* to be simpler than
20 years ago.
My compiler, for example,
had to be split into 3 passes, and there was lots of various list types
and tree-walking code in it, and it benefited substantially (and
critically) from being able to reuse existing object code as much as
possible. Reusing source code (what templates do) was relatively not so
important.

Huh again? If it's important, you do it. If it's not, and it costs you
productivity, you don't. Even today you can make one unified list type
do the work of two or three *if that is important to your code size*.
You get bloat only if you indulge in bloat (and you can afford it).
That is a good rule. But in the 16 bit DOS world, you had to start
optimizing for speed/space right out of the gate, as the limits were
reached very quickly for non-trivial programs.

But you "optimize" by picking a program design that fits the box, not
by fretting over potential code bloat that may or may not matter.
If your program was
going to use more than 64K of data, you had to design that in from the
start, not retrofit it in later. Programs were also far more sensitive to
such optimizations then than today - I don't believe languages like Ruby
or Python would have enjoyed widespread success on those machines. And
remember that early Java-like implementation - the UCSD P-system? That was a
setup years before its time.

We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.
(Remember SofTech Microsystems?) But that's wandering afield. The point
of this response is, there's nothing intrinsic in exceptions, templates,
or C++ in general that prohibits their use in sub-megabyte systems.
Back in the 1980s people were still fretting over the 5-15 per cent
overhead you get when writing in C instead of assembler. C won, mostly
(IMO) because of the much greater productivity and in part because of
the steady increase in memory size and the steady decrease in memory
cost.

Now some people in the embedded world are fretting because of the
additional 10-20 per cent overhead when writing in C++ instead of C.
Memory is dirt cheap, so it's primarily architectural limitations (like
address size) that cause problems. If that overhead pushes you from a
16-bit to a 32-bit architecture, it's worth worrying about. Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em. Choice of programming language
is rarely black and white.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Michael O'Keeffe

P.J. Plauger said:
We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.
(Remember SofTech Microsystems?) But that's wandering afield. The point

Hasn't hit the ground? That's an interesting viewpoint, in this day
and age of Java and .NET.

[snip]
Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em.

Maybe in niche markets. But in a competitive market, you'll need more
than just good enough. I think what illustrates the difference in
programming 16 bit vs. 32 bit was when Lotus trounced all competitors
by writing their spreadsheet in 100% assembler and writing directly to
the video system, so as to extract every bit of power available from
the machine. In fact, one of their competitors was a P-system based
spreadsheet, Context MBA.
 
P.J. Plauger

Hasn't hit the ground? That's an interesting viewpoint, in this day
and age of Java and .NET.

The p-system was a failure for three big reasons (IMO):

1) It didn't have adequate performance on the processors of its time.

2) The interpreter, on a 16-bit system, left even less space for a
program.

3) It didn't deliver the one big thing you should get in trade for
the above -- adequate portability -- because the p-code didn't hide
the endianness of the target platform.

Obviously, Java and .NET have avoided these problems and have each
established an important niche. The UCSD p-system made a splash that
lasted just a few years, by comparison. I stand by what I said.
Maybe in niche markets. But in a competitive market, you'll need more
than just good enough.

Sorry, but in today's competitive marketplace any given "release" of
an embedded product might well sell for just a year or two. Plenty of
time and opportunity to fix the bugs for the next improvement, provided
you have a market for it. If you're three months late to that market,
however...

How well do you think the first iPod would compete if it were released
today?
I think what illustrates the difference in
programming 16 bit vs. 32 bit was when Lotus trounced all competitors
by writing their spreadsheet in 100% assembler and writing directly to
the video system, so as to extract every bit of power available from
the machine. In fact, one of their competitors was a P-system based
spreadsheet, Context MBA.

Agreed. Lotus had to be written in assembly in those days to be "good
enough". That doesn't alter my basic point.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Benry

Have you heard of Linux? Solaris? Um, MIPS? Do you mean the only
Microsoft product that can give you accurate timing and access to low
level hardware? You can get access to low level hardware (registers,
buses, etc.) with Windows languages like C and C++. I'm actually
confused, perhaps I misunderstood the lessons I learned over the last
ten years...Could you explain?

(yes, off topic, but I'm really confused).

-Benry
 
Walter Bright

P.J. Plauger said:
As the guy who did the first rewrite of Unix, I can attest that it ran
just fine on 16-bit computers. We also ported our utilities to other
platforms, including DOS. To this day, I still use quite a few of those
utilities in house to build the packages we ship. So IMO they work
"well enough". YMMV.

I'm pretty sure that although you may still be using those programs, you
aren't using them on 16 bit DOS.
See http://www.iar.com, by way of example. They use the EDG front end, our
EC++ and Abridged libraries, and a host of their own 8-, 16-, and 32-bit
back ends. The Abridged Library supports templates, which don't require
back-end support (other than huge names in the linker). I don't know which
IAR back ends support exceptions.

IAR is one of about a dozen of our OEM customers who supply C/C++
compilers for the embedded marketplace.

IAR doesn't seem to support 16 bit X86 - at least they don't list it on
their web site. Their page entitled "Extended Embedded C++" makes it
pretty clear they do not support exception handling, multiple
inheritance, or RTTI. They do support templates as well as
being "memory attribute aware", which is not elaborated.

Right. All I'm challenging is whether your "considerable chunk" of "bloat"
is so excessive as to make C++ completely unusable in the sub-megabyte
domain.

I didn't say "completely unusable", though I will say it is impractical.
As evidence, no compiler (other than Digital Mars C++) seems to have
implemented it for 16 bit code. IAR is using the EDG front end, which
supports EH, but have apparently *removed* support for it for their 16
bit targets.

Then why do you refer to "iostreams in particular", which is not a part
of STL?

I've always considered it part of STL; after all, it is part of STLport
(which is the STL that Digital Mars ships). If there is an official
definition of STL which excludes iostreams, so be it.

You've described EC++, as specified in 1997. It restricted the language
to give existing (pre-standard) C++ compilers a fighting chance. But
the existence of off-the-shelf complete front ends like EDG have made
that aspect of EC++ way less important. Our most popular embedded
product is the Abridged Library, which relaxes *all* of the above
language restrictions. It's the Standard C++ library that eats space
and time, so the simplified EC++ library iostreams, string, etc. offer
the most significant savings.

Ok, but IAR doesn't support exception handling, RTTI, or multiple
inheritance for 16 bit targets (they do support templates). Do you know
anyone (besides Digital Mars C++) that does?
See above.

I checked the web site www.iar.com. They do not list X86 as a supported
target for their C/C++ compilers.

But maybe I am all wrong. If there is a demand for 16 bit X86 compilers
that support exception handling, RTTI, multiple inheritance, etc., I'd
certainly be pleased to work with Dinkumware to fill it.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 
Walter Bright

P.J. Plauger said:
Huh? Why does a 250KB program suddenly get less complex? I agree that
code now freely sprawls because memory is so extensive and so cheap,
but it doesn't follow that a small program now *has* to be simpler than
20 years ago.

It usually is because of the standard bloat brought in by the C++
runtime library. Once you start supporting locales, wide characters,
exceptions, etc., or linking to some other library, big chunks of code
get pulled in, and so even fairly simple programs are pretty fat
compared with programs of similar size in the DOS daze.
We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.

It was before its time because its performance was so poor on the old
processors. What made the idea workable in the 90's was 100x processor
speed improvements. What sealed the deal for Java was the emergence of
the JIT (Just In Time) compiler (first invented by Symantec).
(Remember SofTech Microsystems?) But that's wandering afield. The point
of this response is, there's nothing intrinsic in exceptions, templates,
or C++ in general that prohibits their use in sub-megabyte systems.
Back in the 1980s people were still fretting over the 5-15 per cent
overhead you get when writing in C instead of assembler.

Actually, the cost of writing in C vs asm was about 40% for 16 bit code.
At least for an expert asm programmer. And being the (so far) only
implementer of exceptions, multiple inheritance, and RTTI on 16 bit DOS
I *know* it works. You're just not going to be able to write a program
approaching the complexity and capability of one not using such features.

C won, mostly
(IMO) because of the much greater productivity and in part because of
the steady increase in memory size and the steady decrease in memory
cost.

And improving processor speed. The most successful 16 bit DOS apps,
however, still tended to be written in assembler. Remember how PKWARE
buried ARC? PKWARE's archiver was just a hand-optimized assembler version
of ARC. It was common to use a mix of asm and C. 32 bit processors have
pretty much killed off the need for writing in asm.

BTW, another big reason that C won was because the C compilers of the
day were much, much better than the compilers for other languages. That,
for example, buried Pascal. By the time Pascal compilers got better, it
was too late.

Now some people in the embedded world are fretting because of the
additional 10-20 per cent overhead when writing in C++ instead of C.
Memory is dirt cheap, so it's primarily architectural limitations (like
address size) that cause problems. If that overhead pushes you from a
16-bit to a 32-bit architecture, it's worth worrying about. Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em. Choice of programming language
is rarely black and white.

I'm not referring to the cost of storage, I'm referring to the 640K
hardwired limitation. Any overhead that adds code size takes away from
the size of the data set the program can handle. This even applies to
simple utilities, like diff.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
 
