Writing an int to a file, not quite sure how buffers work.


Richard Heathfield

Tom St Denis said:
What the hell is a mainframe? :)

A great big box of bits.
I used to play with HPCs while working at AMD and frankly you're an old
timer. They pretty much all run GNU/Linux (or some derivative) and the
default code page is ASCII (whatever the ANSI equivalent is).

So what you're saying is that you lack experience of non-ASCII systems. This
comes as no great surprise, but I don't see how it advances the argument in
either direction.
And nowadays we call them "clusters" not mainframes.

Since by your own admission you don't know what a mainframe is, I don't
quite see how you can make definitive statements about them.
:)

Tom

P.S. mostly just messing with you, so put down the pitchfork and step
aside.

I have no pitchfork. If you want one, I suggest you try eBay rather than
comp.lang.c.
 

Tom St Denis

Richard said:
I have no pitchfork. If you want one, I suggest you try eBay rather than
comp.lang.c.

My post was a joke. Hence the P.S.

If your reply is supposed to be funny I don't get it.

Just makes me glad I'm not an old-timer fart who thinks everything was
better "in the old days." While I liked my 6809 and 8051 systems too
(and I'm no old timer) I think my Core 2 Duo is way more efficient
(watts/mips) and more productive. I think my modern GNU/Linux setup is
better than a 1983 copy of AT&T Unix, etc, etc, etc...

And btw, using a non-ASCII system isn't that special. I've used a few
EBCDIC systems at IBM and frankly it's just a different encoding
(and converting to/from it is easy). I'd rather use ASCII out of
simplicity than other coding standards out of elitism.

So go out, grab a coffee, look up to the sky and get a big heaping dose
of PERSPECTIVE AND REALITY.

Tom
 

Random832

2006-11-27 said:
Not on all possible implementations. Mainframes and embedded systems
often have bigger char types.

But that's not guaranteed. Re-read for comprehension.
Yes, but even the first 128 characters aren't guaranteed to be ASCII :)

That wasn't the question, though.
 

Random832

2006-11-27 said:
And I suppose, vertical tab, backspace, carriage-return and form
feed...

Those are ASCII, aren't they? I was referring to the fact that there is
a difference between "line feed" and "newline" that is ignored on most
modern systems, and ASCII contains only the former.
They'll be represented in that manner all right, only encoded
differently, as naturally, the standard doesn't specify any encoding
scheme :)

The "that particular manner" I was referring to was ASCII encoding.
You're twisting words around.
 

Richard Heathfield

Tom St Denis said:
My post was a joke. Hence the P.S.

If your reply is supposed to be funny I don't get it.

Does that mean that, if it were *not* supposed to be funny, you *do* get it?
Just makes me glad I'm not an old-timer fart who thinks everything was
better "in the old days."

What is "in the old days" about non-ASCII systems? Do you think EBCDIC
systems have ceased to exist?
And btw, using a non-ASCII system isn't that special.

Nobody ever said it was. What *is* special is writing C code that won't
break just because you're running it on a non-ASCII system.
I've used a few
EBCDIC systems at IBM and frankly it's just a different encoding
(and converting to/from it is easy). I'd rather use ASCII out of
simplicity than other coding standards out of elitism.

I'd rather write code once than twice.
So go out, grab a coffee, look up to the sky and get a big heaping dose
of PERSPECTIVE AND REALITY.

My perspective is evidently different to yours, but that doesn't mean I
don't have one. As for reality, ignoring great big lumps of it won't make
those lumps cease to exist.
 

Tom St Denis

Richard said:
What is "in the old days" about non-ASCII systems? Do you think EBCDIC
systems have ceased to exist?

If I ignore them hard enough, yes.
Nobody ever said it was. What *is* special is writing C code that won't
break just because you're running it on a non-ASCII system.

Well given that I write code that is routinely used on platforms I
don't even have access to... I'd think I'm not striving for
non-portable code here.
I'd rather write code once than twice.

Um same here. I'm just really tired of hearing the "well on the
mainframes..." Newsflash, big deal. Embedded systems outweigh
mainframes a million to one in usage. It just irks me to read "the
big players ..." in context with mainframes.

Yes, write portable code. No, don't worship mainframe coders.

That said, the OP should just ASN.1-encode their integer and avoid all
ambiguity :)

Tom
 

Richard Heathfield

Tom St Denis said:
If I ignore them hard enough, yes.

So presumably you are not concerned about portability to such systems. Well,
that's fine, but some people are, and it is perfectly legitimate and proper
to discuss such matters in comp.lang.c, which deals with C, not with "C on
ASCII machines".
Well given that I write code that is routinely used on platforms I
don't even have access to... I'd think I'm not striving for
non-portable code here.

I'm not claiming that you are. But if your code relies on an ASCII
representation, then it /is/ non-portable to some extent, whether you are
striving for that outcome or not. That doesn't mean your code is no use. It
just means it isn't as portable as it could be.
Um same here. I'm just really tired of hearing the "well on the
mainframes..."

They happen to be the best example of non-ASCII systems, that's all.
Newsflash, big deal. Embedded systems outweigh
mainframes a million to one in usage.

That's irrelevant to mainframe users, though.
It just irks me to read "the
big players ..." in context with mainframes.

Yes, write portable code.
Agreed.

No, don't worship mainframe coders.

Nobody is asking you to, or claiming that it's a good idea to do so. But
mainframe programmers have to be aware of such issues, and so do those who
claim to be writing portable code.
 

John Bode

I'm wanting to write an int to a file, and so far I have written:

char buff[20];
int num = 256; // for the sake of the example
sprintf(buff, "%d", num);

Now my question is what to do next. I could use fwrite, but I don't
understand how the size works, I'm not sure if it writes out the whole
buffer or not, and I definitely only want to write out "256."

fwrite() allows you to specify the number of bytes to be written out,
so you could accomplish what you want through:

fwrite(buff, 1, strlen(buff), outstream);

The second argument to fwrite is the size of a single element of buff
in bytes (to make it more generic, you could replace the hardcoded 1
with sizeof *buff). By definition, a single char has size 1.
strlen(buff) returns the number of characters in buff before the nul
terminator, which in this case is 3.

The result is that the characters '2', '5', and '6' are written to
outstream.

Of course, you could accomplish the exact same thing with far less pain
simply by writing

fprintf(outstream, "%d", num);

and bypass the buffer entirely, unless there's a specific reason you
need to use the buffer.
I could use putc to go through the buffer and write out one character
at a time, but same issue, I don't know if that writes out the whole 20
character buffer or stops after the '256'.

Strings in C are terminated by a 0, so the string "256" is the sequence
{'2', '5', '6', 0}. So you could write

int i = 0;
while (buff[i] != 0) // or just while (buff[i])
{
fputc(buff[i++], outstream);
}

Or you could just write

fputs(buff, outstream);

which will stop writing characters once it encounters the nul
terminator.
I haven't had to use this stuff for a long time, and I completely
forget how this buffer stuff works and I'm finding it fairly annoying
to have to allot the space beforehand.

Why do you think you have to allocate space beforehand? Why not just
use the conversion operations offered by the *printf() family?
 

Tom St Denis

Richard said:
So presumably you are not concerned about portability to such systems. Well,
that's fine, but some people are, and it is perfectly legitimate and proper
to discuss such matters in comp.lang.c, which deals with C, not with "C on
ASCII machines".

Wholeheartedly agree.

Before I get into my reply further, I want to apologize for upsetting
you. My original reply was just meant to be a joke. I wasn't trying
to suggest that "C is ASCII" or whatever. :)
I'm not claiming that you are. But if your code relies on an ASCII
representation, then it /is/ non-portable to some extent, whether you are
striving for that outcome or not. That doesn't mean your code is no use. It
just means it isn't as portable as it could be.

Exactly.

[for the benefit of others...] Internally I use tables to convert chars
to ASCII for my ASN.1 code. The idea is simple: 'c' is 'c' on any
platform, the decimal representation of 'c' may change. But if you
have a table that has

{'c', 99},

You can easily convert 'c' to the ASCII value of 99 without violating
portability rules.
They happen to be the best example of non-ASCII systems, that's all.

And many IBM systems (zLinux for instance) :)
Nobody is asking you to, or claiming that it's a good idea to do so. But
mainframe programmers have to be aware of such issues, and so do those who
claim to be writing portable code.

Perhaps. My experience with portable coding comes from the random
video game console and other embedded developers who picked up my code
and ran with it on random mixes of big/little endian, 32-bit/64-bit,
unaligned memory/aligned memory users.

Having your code which you only tested on a 32-bit little endian
unaligned memory x86 box run smoothly out of the box on a 64-bit
aligned big endian playstation [or HP-UX box] is fairly cool :).

I guess in the context of code pages you're right. Mainframes are
likely the best source of violating the "all ASCII" world.

Tom
 

santosh

Random832 said:
Those are ASCII, aren't they? I was referring to the fact that there is
a difference between "line feed" and "newline" that is ignored on most
modern systems, and ASCII contains only the former.

Yes, they're ASCII but just like the linefeed vs. newline ambiguity,
some of these other control characters may not have the same meaning,
and effect, from system to system. In that sense they're similar to the
linefeed character. The printable characters of ASCII don't suffer from
such system specific interpretations.
 
