C language on 8-bit microcontrollers

Mark

It's clear that for 8-bit MCUs the native word length is 8 bits. But we
can't represent 'int' data in 8 bits, as that contradicts the standard (the
int type must contain at least 16 bits to hold the required range of
values). Does that mean that compilers used on 8-bit processors are not
standard-compliant, or do they provide some trick to overcome this and
create the illusion of 16-bit ints?

Thanks in advance.
 
bartc

Mark said:
It's clear that for 8-bit MCUs the native word length is 8 bits. But we
can't represent 'int' data in 8 bits, as that contradicts the standard (the
int type must contain at least 16 bits to hold the required range of
values). Does that mean that compilers used on 8-bit processors are not
standard-compliant, or do they provide some trick to overcome this and
create the illusion of 16-bit ints?

The 8-bit MCU might have some 16-bit capability, e.g. with some registers
and operations, even though the data bus and ALU might be only 8 bits wide.
The compiler won't have a problem with those.

With the rest, 16-bit ints just have to be emulated using 8-bit registers
and operations. It just means it's a bit slower, and in your C code it might
be an idea to keep to 8-bit types (i.e. signed/unsigned char) where
possible.

So yes, it's an illusion, as is any data type longer than 8 bits.
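
For example, a loop that never counts past 20 can use an 8-bit index. A
minimal sketch (the function name and the length 20 are just made up, and
<stdint.h> assumes a C99-style compiler):

#include <stdint.h>

/* Hypothetical example: the index fits in 8 bits, so the compiler can keep
   it in one register instead of emulating a 16-bit int. */
void clear_buffer(uint8_t *buf)
{
    uint8_t i;                  /* 8-bit counter: single-register increment/compare */
    for (i = 0; i < 20; i++)
        buf[i] = 0;
}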
 
Paul N

Mark said:
It's clear that for 8-bit MCUs the native word length is 8 bits. But we
can't represent 'int' data in 8 bits, as that contradicts the standard (the
int type must contain at least 16 bits to hold the required range of
values). Does that mean that compilers used on 8-bit processors are not
standard-compliant, or do they provide some trick to overcome this and
create the illusion of 16-bit ints?

The standard says "A ‘‘plain’’ int object has the natural size
suggested by the architecture of the execution environment (large
enough to contain any value in the range INT_MIN to INT_MAX as defined
in the header <limits.h>)."

So while 8 bits might appear to be the natural size, the standard
seems to allow you to pick a slightly less natural size if the natural
size is not big enough.
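
Those minimums are spelled out in <limits.h>, so the requirement can even be
checked at compile time (a trivial sketch; on a conforming compiler the
#error can never fire):

#include <limits.h>

/* The standard requires INT_MAX >= 32767 and INT_MIN <= -32767, however
   narrow the machine's natural word is. */
#if INT_MAX < 32767 || INT_MIN > -32767
#error "int is too small to be a conforming int"
#endif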
 
Rich Webb

bartc said:
The 8-bit MCU might have some 16-bit capability, e.g. with some registers
and operations, even though the data bus and ALU might be only 8 bits wide.
The compiler won't have a problem with those.

With the rest, 16-bit ints just have to be emulated using 8-bit registers
and operations. It just means it's a bit slower, and in your C code it might
be an idea to keep to 8-bit types (i.e. signed/unsigned char) where
possible.

There's also the "escape clause" in 5.1.2.3 that permits a compiler to
omit the promotion of char to int and then back again if it can
determine that the result would be the same.

So, while the abstract machine may require that char operands be
converted to ints and then the int resulting from the operation be
truncated back to char, a compiler could do it all in 8 bits.
 
Nobody

Mark said:
It's clear that for 8-bit MCUs the native word length is 8 bits. But we
can't represent 'int' data in 8 bits, as that contradicts the standard (the
int type must contain at least 16 bits to hold the required range of
values). Does that mean that compilers used on 8-bit processors are not
standard-compliant, or do they provide some trick to overcome this and
create the illusion of 16-bit ints?

Either choice is possible, but it's more common for "int" to be 16 bits,
with the rule about implicitly promoting char/short to int being ignored.

Standards-compliance is generally considered unimportant for low-end MCUs.
The standards were designed for larger systems, and are a poor choice for
something like a PIC16. For a PIC10/12, even a highly non-standard C
dialect is a poor choice. For PIC18, you don't need to go too far
from the standard, but you can't really avoid using extensions (e.g.
ROM/RAM pointers).
 
robertwessel2

Paul N said:
The standard says "A ‘‘plain’’ int object has the natural size
suggested by the architecture of the execution environment (large
enough to contain any value in the range INT_MIN to INT_MAX as defined
in the header <limits.h>)."

So while 8 bits might appear to be the natural size, the standard
seems to allow you to pick a slightly less natural size if the natural
size is not big enough.


Actually the standard *requires* it.
 
Thad Smith

Nobody said:
Either choice is possible, but it's more common for "int" to be 16 bits,
with the rule about implicitly promoting char/short to int being ignored.

Most C compilers for 8-bit MCUs implement ints as 16 bits, but typical
applications have many variables of type unsigned char where you might
use an int on a larger processor: if you are indexing through an array
of length 20, a byte suffices.

Also, most of the compilers I am familiar with DO promote chars and
shorts to ints, but they also optimize. Consider
unsigned char a,b,c;
...
a = b + c;

A compiler would add b and c with 8-bit arithmetic and store in a
without producing an explicit extension. This produces exactly the same
result as converting to 16-bit values, adding, and then truncating the
result, so is conforming. Of course, a = (b+c)/2; requires a means to
handle the full sum prior to dividing.
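
For instance, with made-up values b = 200 and c = 100 the full sum matters
(a small standard C test, not MCU-specific):

#include <stdio.h>

int main(void)
{
    unsigned char b = 200, c = 100;

    /* With promotion to int, the full sum 300 survives until the divide. */
    unsigned char good = (b + c) / 2;                /* 300 / 2 = 150 */

    /* If the sum were truncated to 8 bits first, the result would differ. */
    unsigned char bad  = (unsigned char)(b + c) / 2; /* 44 / 2 = 22 */

    printf("%u %u\n", (unsigned)good, (unsigned)bad);  /* prints: 150 22 */
    return 0;
}
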
Nobody said:
Standards-compliance is generally considered unimportant for low-end MCUs.
The standards were designed for larger systems, and are a poor choice for
something like a PIC16. For a PIC10/12, even a highly non-standard C
dialect is a poor choice. For PIC18, you don't need to go too far
from the standard, but you can't really avoid using extensions (e.g.
ROM/RAM pointers).

I have programmed PIC16s with C, using Standard C for most of the code.
The compiler did not produce bloated code as a result of standard C
minimum sizes. I did, however, adapt my coding style to the target
processor.
 
jellybean stonerfish

An earlier poster wrote:
I don't program those anymore, but when I did I thought that Microchip
PIC microcontroller assembly language was just the easiest thing in the
world to learn and work with, to the extent that I couldn't fathom why
there was any demand for a C compiler for those chips.

I don't know that device, but maybe the reason for a compiler is that a
person might already know C, and might already have some C code written,
and wants to program the device without re-writing every function again
in asm.
 
Nobody

Thad Smith said:
Also, most of the compilers I am familiar with DO promote chars and
shorts to ints, but they also optimize. Consider
unsigned char a,b,c;
...
a = b + c;

A compiler would add b and c with 8-bit arithmetic and store in a
without producing an explicit extension. This produces exactly the same
result as converting to 16-bit values, adding, and then truncating the
result, so is conforming. Of course, a = (b+c)/2; requires a means to
handle the full sum prior to dividing.

Microchip's C18 compiler doesn't do integer promotion by default
(arithmetic is done at the size of the largest operand), but this can be
enabled with the -Oi switch.
Thad Smith said:
I have programmed PIC16s with C, using Standard C for most of the code.
The compiler did not produce bloated code as a result of standard C
minimum sizes. I did, however, adapt my coding style to the target
processor.

It would produce bloated code if you wanted other aspects of real C. In
particular, automatic variables are horribly inefficient (they're better
on the PIC18, but still worse than static/overlay variables), so they
aren't normally used, which means functions aren't normally re-entrant.
 
Nobody

The earlier poster wrote:
I don't program those anymore, but when I did
I thought that Microchip PIC microcontroller assembly language
was just the easiest thing in the world to learn and work with,

Hmm. PIC10/12/16 assembler is quite messy due to the bank/page switching
required. But for the same reasons, C is a poor choice, and "pure"
(standards-conformant) C would be an even worse choice (mostly due to the
significant overhead of implementing automatic variables).
The earlier poster also wrote:
to the extent that I couldn't fathom why there was any demand for
a C compiler for those chips.

It's useful if you want to write code for more than one architecture. Or
for libraries which you want to be able to customise easily.

I normally use assembler on 8-bit PICs, but if you want to use e.g. USB,
you can either write your own USB stack in assembler, or use Microchip's
stack. Microchip's stack is written in C, as it supports the PIC18, 24,
and 32 families, and there is a lot of conditional compilation and use of
macros to facilitate efficient customisation.

Even if you were writing your own stack, assembler may not be such a good
choice if you think you might want to use it for more than one project, or
even if you're not entirely sure how to structure it. Something as simple
as changing a variable from 1 byte to 2 bytes is one line in C but could
mean a substantial re-write in assembler.
 
Thomas Matthews

Mark said:
It's clear that for 8-bit MCUs the native word length is 8 bits. But we
can't represent 'int' data in 8 bits, as that contradicts the standard
(the int type must contain at least 16 bits to hold the required range of
values). Does that mean that compilers used on 8-bit processors are not
standard-compliant, or do they provide some trick to overcome this and
create the illusion of 16-bit ints?

Thanks in advance.

One can support any bit-width integer. It just takes a bit of
extra coding. For example, your 8-bit controller can use
multi-byte arithmetic for 16-bit integers and still be compliant.

However, knowledgeable programmers will prefer to use the 8-bit
char because it is more efficient.

--
Thomas Matthews

 
Gene

Mark said:
It's clear that for 8-bit MCUs the native word length is 8 bits. But we
can't represent 'int' data in 8 bits, as that contradicts the standard (the
int type must contain at least 16 bits to hold the required range of
values). Does that mean that compilers used on 8-bit processors are not
standard-compliant, or do they provide some trick to overcome this and
create the illusion of 16-bit ints?

Thanks in advance.

It's no trick. The compiler must generate code that does the
arithmetic in accordance with the C standard. The case of 16-bit (or
wider) arithmetic just requires more than one instruction per
operation. For example, most processors have an "add with carry"
instruction, which accounts for the carry out of the previous operation.
So adding 16-bit quantities A and B will be something like

MOV lo8(A) to R1
ADD lo8(B) to R1
MOV hi8(A) to R2
ADC hi8(B) to R2

Now the register pair R2|R1 contains the 16-bit result. There are
other solutions that don't even need the ADC instruction.

Of course all the other arithmetic ops can be implemented with
appropriate sequences like this. When the sequence is long (as in the
case of floating point ops), the code is often written as a subroutine
in the runtime system, and the compiler generates a call to the
routine, which is comparatively short.
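
The same idea can be written out in C, purely as a sketch of what such a
sequence computes (this is not any particular compiler's runtime code):

#include <stdint.h>

uint16_t add16(uint16_t a, uint16_t b)
{
    uint8_t lo    = (uint8_t)a + (uint8_t)b;       /* ADD: low bytes */
    uint8_t carry = lo < (uint8_t)a;               /* carry out of the low add */
    uint8_t hi    = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry;  /* ADC: high bytes */

    return (uint16_t)((uint16_t)hi << 8 | lo);
}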
 
