How to remove // comments


Walter Bright

Richard said:
Walter Bright said:
...unless it makes business sense or technical sense to do that, which it
might, one day.

So what if some future encoding doesn't have a '?' ? Then trigraphs
won't work. If C is concerned about such a possibility, why does it
require the '?' character to exist, or any other character? '?' isn't a
valid character in the (once popular) RADIX50 encoding.
(The Microsoft Office guys had much the same opinion of int - "the compiler
guys wouldn't change the size of an int on us; they know it'd break all our
code" - but the compiler guys changed it anyway.)

Programmers knew that ints were going from 16 to 32 bits, and it was
useless to resist such a change. If it really was going to make life
impossible for the Office guys, I'm sure they had the clout to get a
special compiler built just for them. Microsoft wasn't going to endanger
the revenue stream from Office.

(The above is reasonable speculation on my part, I don't have any inside
knowledge of Microsoft.)

Trigraphs would be great if they solved the problem you mentioned. But
they don't. People overwhelmingly write C code using fancy characters {
and [, and that source code fails on EBCDIC systems. You're going to
have to run the source through a translator whether trigraphs are in the
standard or not.

That's mostly true, yes, although I did work on one site which required the
programmers to use trigraphs in their code (which was written and debugged
on PCs before being moved up to the mainframe for testing).

It's a silly requirement, because a trigraph translator program is about
as trivial as it gets. Heck, CR-LF translation is routine. A viable
EBCDIC system these days has already got to be doing a lot of
translation of ASCII <-> EBCDIC in order to deal with an ASCII world.
Those PC C programs you wrote had to be translated *anyway* to move them
to the mainframe.
 

Richard Heathfield

Walter Bright said:
Programmers knew that ints were going from 16 to 32 bits, and it was
useless to resist such a change. If it really was going to make life
impossible for the Office guys, I'm sure they had the clout to get a
special compiler built just for them. Microsoft wasn't going to endanger
the revenue stream from Office.

(The above is reasonable speculation on my part, I don't have any inside
knowledge of Microsoft.)

My source is "Writing Solid Code", written by Steve Maguire and published by
Microsoft Press. If the anecdote were not true, I'm sure Microsoft had the
clout to refuse to publish it.
 

Mark McIntyre

EBCDIC is parochialism, not ASCII. ASCII covers 99.99999% of the systems
out there.

Counted 'em have you? And did you do it by number of boxes, users or
compute power, revenue generated, importance to GDP or what?
No sane person is going to invent a new character encoding
that doesn't include ASCII.

Apparently nobody told IBM.
Trigraphs would be great if they solved the problem you mentioned. But
they don't. People overwhelmingly write C code using fancy characters {
and [, and that source code fails on EBCDIC systems.

Apparently nobody told the thousands of C programmers using IBM
mainframes for the last 40 years.
Nevertheless, they are in the standard and C compilers should implement
them. Digital Mars C does.

Glad we agree about that.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Keith Thompson

Mark McIntyre said:
On Fri, 20 Oct 2006 16:40:37 -0700, in comp.lang.c, Walter Bright wrote:

Apparently nobody told IBM.

It's unlikely *now* that anyone would invent a new encoding that's not
based on ASCII. This wasn't the case when IBM invented EBCDIC.

But we shouldn't assume anything.
 

Walter Bright

Richard said:
Walter Bright said:


My source is "Writing Solid Code", written by Steve Maguire and published by
Microsoft Press. If the anecdote were not true, I'm sure Microsoft had the
clout to refuse to publish it.

Microsoft Windows and Office bring in the bulk of Microsoft's revenues.
There's just no way management would ever allow a few rogue compiler
guys to jeopardize that.

Going to 32 bit ints offers real advantages to programs, and I bet that
the Office guys realized those advantages outweighed going through and
fixing a few bugs here and there.
 

Walter Bright

Mark said:
Counted 'em have you? And did you do it by number of boxes, users or
compute power, revenue generated, importance to GDP or what?

Count 'em any way you want.

When was the last time you saw trigraphs (outside of a test case or
obfuscated C code entry) in:

1) a programming magazine listing
2) a book on programming
3) a project on sourceforge
4) a posting in this n.g.
5) a C programming web page
6) a paper submitted at a programming conference

? I haven't seen any in 25 years of being in the C business.
Apparently nobody told IBM.

IBM isn't going to, either. EBCDIC is a legacy encoding going back to
punchcard machines. It was a legacy encoding back when I started in the
early 70's. Modern EBCDIC machines are loaded up with software.
Trigraphs would be great if they solved the problem you mentioned. But
they don't. People overwhelmingly write C code using fancy characters {
and [, and that source code fails on EBCDIC systems.
Apparently nobody told the thousands of C programmers using IBM
mainframes for the last 40 years.

Thousands of C programmers over 40 years, ok. Just Digital Mars has
shipped over half a million C compilers over the last 6 years. How many
C compilers would you guess Microsoft has shipped? Sun? gcc? Borland?
Watcom? Intel? Apple? Green Hills?

In 25 years I've never seen a tty, printer, or modem that supported
EBCDIC, from an ASR-33 to an HP laserprinter. I've worked on embedded
systems from Mattel Intellivision cartridges to phones. None did EBCDIC.
 

Keith Thompson

Walter Bright said:
Count 'em any way you want.

When was the last time you saw trigraphs (outside of a test case or
obfuscated C code entry) in:

1) a programming magazine listing
2) a book on programming
3) a project on sourceforge
4) a posting in this n.g.
5) a C programming web page
6) a paper submitted at a programming conference

? I haven't seen any in 25 years of being in the C business.

Earlier this week. Search this newsgroup for postings by "Jalapeno".

[...]
In 25 years I've never seen a tty, printer, or modem that supported
EBCDIC, from an ASR-33 to an HP laserprinter. I've worked on embedded
systems from Mattel Intellivision cartridges to phones. None did
EBCDIC.

Neither have I, but reliable sources tell us that EBCDIC is still
being used.

My suspicion is that IBM mainframe programmers mostly keep to
themselves. We don't see them here in comp.lang.c because they rarely
post here, not because they don't exist. They're just a separate
community.

I'd *like* to find a solution to the trigraph problem that (a) lets
anyone who still needs them continue using them (or something better,
if we can come up with it), but (b) doesn't impose "accidental
trigraphs" on the rest of us ("Huh??!"). But that's not going to
happen if we assume that trigraph users don't exist.
 

Walter Bright

Keith said:
It's unlikely *now* that anyone would invent a new encoding that's not
based on ASCII. This wasn't the case when IBM invented EBCDIC.

But we shouldn't assume anything.

You're forced to make assumptions when writing a spec for a language.
The C standard, for example, assumes that the character '?' will always
exist.
 

Keith Thompson

Walter Bright said:
You're forced to make assumptions when writing a spec for a
language. The C standard, for example, assumes that the character '?'
will always exist.

You're right, I overstated it.

We shouldn't make *unnecessary* assumptions. It's necessary to assume
that some core set of characters exists; when the C standard was
introduced, the best assumption was the intersection of EBCDIC (in all
its variants), ASCII, and the various national ASCII-based sets. I
think we can safely assume that future character sets will include at
least that core set.

I *don't* think it's necessary, or safe, to assume much more than
that.
 

Walter Bright

Keith said:
I'd *like* to find a solution to the trigraph problem that (a) lets
anyone who still needs them continue using them (or something better,
if we can come up with it), but (b) doesn't impose "accidental
trigraphs" on the rest of us ("Huh??!"). But that's not going to
happen if we assume that trigraph users don't exist.

I wouldn't have put trigraphs in the standard because it doesn't solve
the problem for EBCDIC users, as pointed out before. But nevertheless,
it is in the standard and should be implemented. As a C tool vendor, I
support the standard. As a programmer, I don't worry about being EBCDIC
compatible.
 

Richard Heathfield

Walter Bright said:
Microsoft Windows and Office bring in the bulk of Microsoft's revenues.
There's just no way management would ever allow a few rogue compiler
guys to jeopardize that.

Where did "rogue compiler guys" come from? The migration of Visual C++ to
32-bit was not a maverick operation.
Going to 32 bit ints offers real advantages to programs, and I bet that
the Office guys realized those advantages outweighed going through and
fixing a few bugs here and there.

If I come across the book in the next day or two, I'll cite the relevant
passage.
 

Walter Bright

Richard said:
Walter Bright said:


Where did "rogue compiler guys" come from? The migration of Visual C++ to
32-bit was not a maverick operation.

Employees doing what they want to regardless of the best interests of
the corporation are known as rogues. A few rogues are good for the
health of a large organization, as they tend to shake things out of
complacency. Too many, and the organization comes unglued.
If I come across the book in the next day or two, I'll cite the relevant
passage.

I'll look forward to it.
 

Mark McIntyre

It's unlikely *now* that anyone would invent a new encoding that's not
based on ASCII.

I'm not even sure that's true. I can see the Chinese deciding on some
totally new encoding scheme more suitable for their needs.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Mark McIntyre

Count 'em any way you want.

Okay, let's count by excluding all non-commercial uses and by counting
CPU cycles.
When was the last time you saw trigraphs

Earlier this week.
Thousands of C programmers over 40 years, ok. Just Digital Mars has
shipped over half a million C compilers over the last 6 years. How many
C compilers would you guess Microsoft has shipped? Sun? gcc? Borland?
Watcom? Intel? Apple? Green Hills?

I've no idea, nor is it germane. Someone shipping for a mainframe,
or even a mini, is probably servicing hundreds, if not thousands, of
users with a single instance.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Walter Bright

Mark said:
I'm not even sure that's true. I can see the Chinese deciding on some
totally new encoding scheme more suitable for their needs.

If their needs don't include communicating with the rest of the world or
the internet or using the C, C++, Perl, Java, Ruby, Python, or D
programming languages, then they should go for it.
 

CBFalconer

Walter said:
If their needs don't include communicating with the rest of the
world or the internet or using the C, C++, Perl, Java, Ruby,
Python, or D programming languages, then they should go for it.

But that is precisely the point. With the existing C99 standard
they could go ahead and implement such a character set and program
with it in C. No sweat.
 

Richard Heathfield

Walter Bright said:
[whoa! :) ]
Employees doing what they want to regardless of the best interests of
the corporation are known as rogues. A few rogues are good for the
health of a large organization, as they tend to shake things out of
complacency. Too many, and the organization comes unglued.


I'll look forward to it.

The following extract is taken from "Writing Solid Code", Steve Maguire,
Microsoft Press, 1993. (Steve Maguire was hired by Microsoft in 1986 to
work on Macintosh Excel.)

'Owning the Compiler Is Not Enough

Some applications groups at Microsoft are now finding that they have to
review and clean up their code because so much of it is littered with
things like +2 instead of +sizeof(int), the comparison of unsigned values
to 0xFFFF instead of to something like UINT_MAX, and the use of int in data
structures when they really meant to use a 16-bit data type.
It may seem to you that the original programmers were being sloppy, but
they thought they had good reason for thinking they could safely use +2
instead of +sizeof(int). Microsoft writes its own compilers, and that gave
programmers a false sense of security. As one programmer put it a couple of
years ago, "The compiler group would never change something that would
break all of our code."
That programmer was wrong.
The compiler group changed the size of ints (and a number of other things)
to generate faster and smaller code for Intel's 80386 and newer processors.
The compiler group didn't want to break internal code, but it was far more
important for them to remain competitive in the marketplace. After all, it
wasn't their fault that some Microsoft programmers made erroneous
assumptions.'
 

Walter Bright

CBFalconer said:
But that is precisely the point. With the existing C99 standard
they could go ahead and implement such a character set and program
with it in C. No sweat.

Ok, what if that new character set doesn't include a '?' character?
 

Guest

Walter said:
Ok, what if that new character set doesn't include a '?' character?

Then either no conforming C implementation will be available using that
character set, or another character takes the function of '?' (similar
to how '¥' takes the function of '\' in certain character sets).
 

Walter Bright

Harald said:
Then either no conforming C implementation will be available using that
character set, or another character takes the function of '?' (similar
to how 'Â¥' takes the function of '\' in certain character sets).

1) So trigraphs don't future proof C against future arbitrary encodings.

2) I've programmed Japanese computers with the '¥' for '\'. It's not so
bad for one or two characters. But for more, it rapidly becomes unusable
(otherwise why didn't EBCDIC users go this route?). Would anyone want to
program in C if every character was represented by some arbitrary
squiggle that happens to have the same bit pattern? That wouldn't even
make sense for the implementors of that encoding.

3) Please explain how C99 makes it possible to make a conforming C
implementation for RADIX50 encoding, http://en.wikipedia.org/wiki/RADIX-50.
 
