Portability: Harmony between PC and microcontroller

  • Thread starter Tomás Ó hÉilidhe

Tomás Ó hÉilidhe

I'll try to summarise this as best I can, as my last thread wasn't
very to-the-point:

The C Standard says the following two things:

* int is the natural integer type for the system.
* int must be at least 16-Bit.

Now the problem here is that these two criteria conflict if the
natural type for the system is in fact 8-Bit, which is the case with
many microcontrollers today.

As an example, let's take the following code:

char unsigned x, y;
...
x = y;

On my microcontroller compiler, this produces different assembler
depending on whether 'x' is an "unsigned int" or an "unsigned char".
If it's an "unsigned char", then the assembler is:

MOVF y, W /* Copy y to the accumulator */
MOVWF x /* Copy the accumulator to x */

However, if 'x' is an "unsigned int", then the assembler is:

MOVF y, W /* Copy y to the accumulator */
MOVWF x /* Copy the accumulator to x */
MOVF y+1, W /* Copy the next byte of y to the acc */
MOVWF x+1 /* Copy the acc to the next byte of x */

As is plain to see, the "int" version takes twice as many
instructions in this case, and so will take roughly twice as long to
execute. In other situations the difference is far worse; take, for
example, the following code:

if (condition) x = y;

Depending on the type of x and y, this produces either:

MOVF y, W /* Copy y to the accumulator */
BTFSC condition /* If condition is false, skip the next instruction */
MOVWF x /* Copy the accumulator to x */

or:

BTFSS condition /* Skip the next instruction if condition is true */
GOTO There
MOVF y, W /* Copy y to the accumulator */
MOVWF x /* Copy the accumulator to x */
MOVF y+1, W /* Copy the next byte of y to the acc */
MOVWF x+1 /* Copy the acc to the next byte of x */
There:

Not only does the int version consist of more instructions, but it
also involves a goto which will take up even more time. So basically
if your microcontroller is running at 8 MHz, then you may as well
pretend it's running at 4 MHz or 2 MHz if you're going to be using int
for doing everyday arithmetic.

Now we could go down the road of discussing how C is inadequate in
terms of its accommodation of microcontrollers, but I'd rather discuss
ways of "making it right". The reason I'm so eager to bridge the gap
is that, other than the "int" situation, C is actually great for
programming an embedded system. I used it in my college project this
year to program a portable electronic Connect4 game, and it worked
great!

One way of making things right is to stop using int for arbitrarily
storing numbers, and instead use something like ufast8 (the fastest
integer type that's at least 8-Bit). In this way, neither the
microcontrollers nor the PCs suffer.

An example of a piece of code that could be shared between PCs and
microcontrollers is a look-up table such as the following:

ufast8 days_in_month[12] =
{ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

To those people out there who are enthusiastic about writing portable
code, how do you feel about using types such as ufast8 instead of int?

stdint.h is great and all, but I find the type names too
long-winded. For instance, I'd rather write "ufast8" than
"uint_fast8_t".
 

Bartc

Tomás Ó hÉilidhe said:
I'll try to summarise this as best I can, as my last thread wasn't
very to-the-point:

The C Standard says the following two things:

* int is the natural integer type for the system.
* int must be at least 16-Bit.

Now the problem here is that these two criteria conflict if the
natural type for the system is in fact 8-Bit, which is the case with
many microcontrollers today.

As an example, let's take the following code:

char unsigned x, y;
...
x = y;

It doesn't seem an insurmountable problem.

If you want a default int size that is best for your cpu, try something
like:

typedef unsigned char uint; /* Or uint_fast8_t etc. */
typedef signed char sint;

Then use uint and sint everywhere in place of unsigned/signed int.

When moving to a bigger processor, you need to change those two lines or use
some conditional compilation tricks.
 

Tomás Ó hÉilidhe

Thanks for the reply.


Try doing another embedded project, this time with an ARM.  ST just
announced some ARM parts with up to 2MB of flash and 96KB of RAM.


For my next hobby project, I want to make a very simple two-port
router. When the router receives a packet, it will look up the IP
address in its routing table, and then decide what port to forward it
out on and what destination MAC address to use. That's pretty much all
it will do. Of course I'll have to make it do a few other things, like
send and receive ARP requests, but nothing too complicated.

I started throwing some code together in notepad, just to see how I'd
make it work. Now the thing is, I see no reason why I shouldn't be
able to move this code over to a PC. Here's the beginnings of it:

#include <stdint.h>

typedef uint_fast32_t IPv4addr;
typedef uint_fast64_t MACaddr;

typedef struct RoutingTableEntry {
    IPv4addr addr;
    IPv4addr mask;
    IPv4addr router_addr;
    uint_fast8_t port; /* Here's a great example of where I'd
                          normally use "unsigned int" */
} RoutingTableEntry;

typedef struct InfoForForwarding {
    uint_fast8_t port;
    MACaddr dst_mac;
} InfoForForwarding;

#define LEN_ROUTING_TABLE 16u
RoutingTableEntry routing_table[LEN_ROUTING_TABLE]; /* Hold 16 routes max in the table */
#define pend_routing_table (routing_table + LEN_ROUTING_TABLE)

MACaddr GetMACaddr(uint_fast8_t port, IPv4addr router_addr); /* Consults the ARP table; defined elsewhere */

InfoForForwarding GetInfoForForwarding(IPv4addr const dst_ip)
{
    InfoForForwarding iff = { 0 };

    RoutingTableEntry const *p = routing_table;

    do
    {
        if ((dst_ip & p->mask) == p->addr)
        {
            iff.port = p->port;

            /* Now consult ARP table to get MAC address of router */
            iff.dst_mac = GetMACaddr(iff.port, p->router_addr);

            return iff;
        }

    } while (pend_routing_table != ++p);

    return iff;
}

As I hope you'll agree from looking at this code, there's nothing
microcontroller-specific or PC-specific about it. There's no reason
why the code couldn't be used to make a PC program that would
implement a "virtual router" between two network cards.

It appears that quite a few people think of PC programming and
embedded programming as quite separate from each other, but I hope my
code example above shows that code can migrate and be portable
between the two. Many C programmers are already enthusiastic about
their code being portable; I just hope they'd consider
microcontrollers too.

Slightly off-topically, I don't know if you've been following my
thread entitled "Ethernet in its most basic form". I've been asking
around to see what microcontroller I should use for making my little
two port router. I've been given many suggestions of microcontrollers
that will work with one sole ethernet port, but obviously I'll need a
microcontroller that will work with two. (Or then again I might need
two microcontrollers that will communicate with each other... ?). I
don't suppose you'd have any idea what I should use for that? I want
to work at 100 Mbps full-duplex.
 

Keith Thompson

Bartc said:
Tomás Ó hÉilidhe said:
I'll try to summarise this as best I can, as my last thread wasn't
very to-the-point:

The C Standard says the following two things:

* int is the natural integer type for the system.
* int must be at least 16-Bit.

Now the problem here is that these two criteria conflict if the
natural type for the system is in fact 8-Bit, which is the case with
many microcontrollers today.
[...]

It doesn't seem an insurmountable problem.

If you want a default int size that is best for your cpu, try something
like:

typedef unsigned char uint; /* Or uint_fast8_t etc. */
typedef signed char sint;

Then use uint and sint everywhere in place of unsigned/signed int.

When moving to a bigger processor, you need to change those two lines or use
some conditional compilation tricks.

*Please* don't call them "uint" and "sint".

What the name "uint" says to me is "unsigned int, but I care more
about saving a few keystrokes than writing clear code"; likewise for
"sint". The only thing worse than typedef'ing "unsigned int" to
"uint" is typedef'ing something else to "uint". I understand that
"uint" is intended to convey "unsigned integer" rather than "unsigned
int", but that's not how it comes across.

If you want to call them, say, "small_signed" and "small_unsigned",
that's fine.
 

Chris Dollin

Tomás Ó hÉilidhe said:
I'll try to summarise this as best I can, as my last thread wasn't
very to-the-point:

The C Standard says the following two things:

* int is the natural integer type for the system.
* int must be at least 16-Bit.

Now the problem here is that these two criteria conflict if the
natural type for the system is in fact 8-Bit, which is the case with
many microcontrollers today.

That just means that those microcontrollers aren't a natural fit
to C, so programmers writing looks-like-C for them need to be
aware that natural-C idioms might not work as nicely.

I don't see a problem here.

--
"It was the first really clever thing the King had said that day."
/Alice in Wonderland/

Hewlett-Packard Limited
registered office: Cain Road, Bracknell, Berks RG12 1HN
registered no: 690597 England
 

Tomás Ó hÉilidhe

That just means that those microcontrollers aren't a natural fit
to C, so programmers writing looks-like-C for them need to be
aware that natural-C idioms might not work as nicely.


This is what I'm against. When I first started programming in C for
embedded systems, I was weary of the compiler's compliance to the
Standard. I was hesitant to rely on rules from the Standard when it
came to things like:
* Minimum size of integer types
* Behaviour of overflow
* Existence and usage of a stack

Having written a fully working program in C for an embedded system,
though, and having looked at the assembler produced to check what
it's actually doing, I've seen that my embedded compiler is
extremely compliant. I defined an object as "long unsigned", and lo
and behold the assembler produced used four bytes to store it (even
though the machine can only do arithmetic on 8-Bit numbers).

I don't see a problem here.


The problem comes with writing portable code. For instance, I'm
currently writing code to implement an internet protocol router. The
code should be able to run on both a microcontroller and on a PC.
However, if the code uses "int" then it will be less efficient on a
microcontroller, and if it uses "char" then it will be less
efficient on a PC. Using uint_fast8_t would produce optimal
assembler for both systems.

There's no reason why there has to be an "embedded version of C"
distinct from "Standard C".
 

Chris Dollin

Tomás Ó hÉilidhe said:
This is what I'm against. When I first started programming in C for
embedded systems, I was weary of the compiler's compliance to the
Standard. I was hesitant to rely on rules from the Standard when it
came to things like:
* Minimum size of integer types
* Behaviour of overflow
* Existence and usage of a stack

Having written a fully working program in C for an embedded system,
though, and having looked at the assembler produced to check what
it's actually doing, I've seen that my embedded compiler is
extremely compliant. I defined an object as "long unsigned", and lo
and behold the assembler produced used four bytes to store it (even
though the machine can only do arithmetic on 8-Bit numbers).




The problem comes with writing portable code.

No, it doesn't. Not if the compiler conforms to the standard.
For instance, I'm
currently writing code to implement an internet protocol router. The
code should be able to run on both a microcontroller and on a PC.
However, if the code uses "int" then the code will be less efficient
on a microcontroller.

Your problem is not with portability; it's with performance.
Different platforms can have differing performance profiles
at whim, and portable code may need tweaking for best performance
on /any/ of them. Singling out performance issues on 8-bit
micros and thinking everyone should write code so that it
(by /hypothesis/) performs equally (well, badly) on those
implementations is, I think, obsession over microefficiency.
 

Tomás Ó hÉilidhe

Your problem is not with portability; it's with performance.


They can be one and the same thing if the performance affects the
usability of the product. If I port a web browser to a Nintendo gaming
console, is it really a successful port if it takes seven minutes to
load a webpage? I don't think it is.

Different platforms can have differing performance profiles
at whim, and portable code may need tweaking for best performance
on /any/ of them.


Yes but there are more fundamental concepts here, that is, the choice
of integer types, whether to use 1, 2, or 4 bytes to store a number.

You say that different platforms have differing performance profiles,
and you're right. What gets better performance on one system might
result in poorer performance on another. But let me draw a crude
analogy:

Let's say you have a farm, and you want to get the best performance
out of all your animals. For the sheep, you might give them a field of
nice thick grass. For the young chicks, you might keep them in a
heated enclosure. For the horses, you might give them a vast open
space to run around. But there's a more fundamental way of getting
better performance out of all your animals -- give them water.

Just as water is a common thing to all animals, integer types are
common to all computers. Before you bother doing specialised things
for each animal such as giving them more grass or more space, do the
universal thing first: water.

And for computers, this universal thing is the choice of integer
types. You can have the best optimiser in the world, but it can only
do so much good if you're using sub-optimal integer types.

on 8-bit
micros and thinking everyone should write code so that it
(by /hypothesis/) performs equally (well, badly) on those
implementations is, I think, obsession over microefficiency.


Firstly, zero effort would go into making it perform equally well on
both systems. It's just a matter of getting into the habit of using
uint_fast8_t instead of unsigned int where possible.

For my current embedded project, if I were to change "uint_fast8_t"
from "unsigned char" to "unsigned int", then I bet I'd see flicker in
my display (because the microcontroller can't refresh the display
fast enough for the flashing to be invisible to the human eye). I've already
submitted my project board to my college to be graded but I should be
getting it back tomorrow. I'll try it out changing the integer types
and see if the display flickers. If it does, then I'll have to reduce
the duration of each flash, which will result in a dimmer display.
 

soscpd

They can be one and the same thing if the performance affects the
usability of the product. If I port a web browser to a Nintendo gaming
console, is it really a successful port if it takes seven minutes to
load a webpage? I don't think it is.

Did it load? Yes? Then, it is. Don't think Nintendo hardware (hey...
some Nintendo can work very fine here... :) was designed to work
as an HTTP daemon (maybe that is not the product!). Performance isn't
100% hardware, and isn't 100% code either. Run a vanilla 2.4 kernel on
a dual Xeon with 1 TB of memory, and run the same on a 286 with 1 MB
of RAM. Do you really expect code tuning, best practices or hacks to
make both run at least close to each other?

Did you pick the wrong language or the wrong hardware (haven't you
picked C? Haven't you picked the 8-bit platform? Why mix them if you
think that will not work?)? That is the question I think you must
answer before posting C standards or limits as the source of your
problems. The right tools, Tomás. The right tools.


Regards
Rafael
 

Tomás Ó hÉilidhe

Did it load? Yes? Then, it is.


I'd love to actually put that to the test. I'd love to have you
sit down in a room with a desk and a chair and my machine running the
ported web browser. I'd love to see you type in "google.ie" and wait 7
minutes for it to load.

I'd lose the plot if I had to wait one minute for a page to load, let
alone seven.

Don't think Nintendo hardware (hey... some Nintendo can work very
fine here... :) was designed to work as an HTTP daemon (maybe that
is not the product!).


Nintendo was an arbitrary choice on my part by the way.
(Also, the "daemon" is the program that runs on the server, not the
client. The daemon listens on a port number, e.g. port 80 for HTTP,
and processes requests to that port number).

Performance isn't 100% hardware, and isn't 100% code either. Run a
vanilla 2.4 kernel on a dual Xeon with 1 TB of memory, and run the
same on a 286 with 1 MB of RAM. Do you really expect code tuning,
best practices or hacks to make both run at least close to each
other?


I'm talking about getting optimal performance out of every system,
whether it runs at 31 kHz or 3.6 GHz.

Did you pick the wrong language or the wrong hardware (haven't you
picked C? Haven't you picked the 8-bit platform? Why mix them if you
think that will not work?)?


They work great if used properly.

That is the question I think you must
answer before posting C standards or limits as the source of your
problems. The right tools, Tomás. The right tools.


I'm not talking about changing tools, or even about critiquing the C
language. I'm talking about adopting a habit of using the likes of
uint_fast8_t instead of int, because it will lead to faster code on
every conceivable platform.
 

Chris Dollin

Tomás Ó hÉilidhe said:
They can be one and the same thing if the performance affects the
usability of the product. If I port a web browser to a Nintendo gaming
console, is it really a successful port if it takes seven minutes to
load a webpage? I don't think it is.

It's a nice hypothetical example, but that's all it is -- hypothetical.
If it actually happened that way, then one would profile the browser
and find out where the time went, and address the problem. I very
much suspect that sizes of integers would be a non-issue in this
case.
Firstly, zero effort would go into making it perform equally well on
both systems. It's just a matter of getting into the habit of using
uint_fast8_t instead of unsigned int where possible.

In the kind of C code I've written, I think that would be
almost nowhere.
 

soscpd

I'd love to actually put that to the test. I'd love to have you
sit down in a room with a desk and a chair and my machine running the
ported web browser. I'd love to see you type in "google.ie" and wait 7
minutes for it to load.
Pointless.

Nintendo was an arbitrary choice on my part by the way.
(Also, the "daemon" is the program that runs on the server, not the
client. The daemon listens on a port number, e.g. port 80 for HTTP,
and processes requests to that port number).

Sorry. Didn't get the browser in your text, and didn't imagine someone
typing a URL with a joystick. I am not a game player.
I'm talking about getting optimal performance out of every system,
whether it runs at 31 kHz or 3.6 GHz.

Nonsense. Aren't you the guy who "...lose the plot if had to wait one
minute for a page to load, let alone seven...."? Can you build the
same browser, with the same code, for these 2 machines? Do you think
that is the best product? Shouldn't the 31 kHz guy use (say) lynx and
the 3 GHz one use whatever he likes (including lynx)?

If performance was the only issue, why should one have to use
something portable or standard? Or even keep the 8-bit platform? Hack
around, my friend. You cannot answer all the world's questions with a
single answer. Each system demands its own way of dealing with the
same problems. If you try to build a web browser to run on a (say)
Nintendo and on a PC, you will need to "port" something. Maybe
everything.
They work great if used properly.
Where? For what? By who? When? How? Answer that to have a percent of
the meaning of "properly".

Regards
Rafael
 

Tomás Ó hÉilidhe

Pointless.


The point is to show you that a person is dissatisfied with it. This
means that the port was a failure.

Sorry. Didn't get the browser in your text, and didn't imagine someone
typing a URL with a joystick. I am not a game player.


Again, my choice of Nintendo and also of a web browser were arbitrary.

A better example would be an instant messenger client running on a
small handheld device using an LCD display. If the algorithmic code is
too slow, then the display will suffer. Using int instead of
uint_fast8_t will lead to code which is about 2 to 4 times slower.

Nonsense. Aren't you the guy who "...lose the plot if had to wait one
minute for a page to load, let alone seven...."?


Yes, I am that guy -- I'd probably smash the keyboard off the ground
if it took seven minutes to load a webpage. I'd then probably launch
the monitor out a window (a closed one, preferably).

If performance was the only issue, why should one have to use
something portable or standard?


I'm talking about portable performance, not just performance.

Where? For what? By who? When? How? Answer that to have a percent of
the meaning of "properly".


Countless programs have been written in C for embedded systems. I've
written one full one myself.

All I'm talking about here is using types such as uint_fast8_t instead
of int. I think int should be abandoned altogether.
 

Herbert Rosenau

I'll try to summarise this as best I can, as my last thread wasn't
very to-the-point:

You does simply not understund what C ist.

C comes with a set of different data types usable for fixed point
arithmetic:

unsigned char - fits all arithmetic needs for any unsigned value
in the range of 0 to UCHAR_MIN
signed char - fits all arithmetic needs for any signed value
in the range of MIN_CHAR to MAX_CHAR

unsigned int - same as unsigned char, but at least 16 bit wide
signed int - same as signed char, but at least 16 bit wide

long - like above but at least 32 bit

long long - like above but may be wider.

There is no need to invent other data types. A programmer with brain
for only 1 penny will decide to use the right data type that fits the
needs for the value it has to fit in. This will be char if the range
of the value fits in, it will be int, long or long long otherwise. It
doesn't matter that the mashine has only one single register or 4096
of them to wor with, it does'nt matter if the natural width of a
register is 8, 9, 16, 18, 24, 32, 36, 48, 64, 128 or 256 whatever
bit. Either one owns a compiler that works trusty for the job it has
to do or assembly is the solution.

It is relevant to write standard complilant code whereever it is
possible, use standard compilant data types, use commononly used code
constructs because things are changing constantly.

- tomorrow the program has to run under another CPU - the standard has
foreseen that, so only a new translation cycle has to be done
and anything works again without unforseen restrictions
- tomorrow you gets fired because you has written constantly
keywords in an unusual order and the new boss dosn't like that
because it makes fixing your shitty code more cost intensive.

There is further really no need to invent other order of calles, type
as the whole C community uses since BCPL, the predecessor of was
foundet.

Except one will blame himself as ignorant and ties to obfuscate his
shitty code

Be sure you'll gets never hired here based on your misbehavior in
follllowing common practise, unability to write readable code and
kidding.
--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2R Deutsch ist da!
 

Keith Thompson

Tomás Ó hÉilidhe said:
This is what I'm against. When I first started programming in C for
embedded systems, I was weary of the compiler's compliance to the

(I presume you mean "wary", not "weary".)
Standard. I was hesitant to rely on rules from the Standard when it
came to things like:
* Minimum size of integer types

Nitpick: The standard defines minimum ranges for integer types, but
the effect is the same.

I've heard of C-like compilers that provide a type "int" with a range
narrower than -32767 .. +32767. That's ok, as long as there's no
claim that it conforms to the C standard. But I would expect the
compiler's documentation to *clearly* state the ways in which it fails
to conform.
* Behaviour of overflow

For floating-point and signed integers, the standard doesn't define
the behavior of overflow. For unsigned integers, the required
behavior is well defined; it wraps around modulo 2**N. I'd be
surprised if even a non-conforming C-like compiler didn't do this (it
should be straightforward to implement in hardware). But again, if
the behavior differs from what the standard specifies, I'd expect it
to be clearly stated in the documentation.
* Existence and usage of a stack

The C standard doesn't even use the word "stack" (and *please* let's
not re-open the argument about what "stack" means). It does require
that functions can be called recursively. If the implementation can't
support that, then again, I'd expect that to be clearly documented.

[snip]
 

Tomás Ó hÉilidhe

unsigned char      - fits all arithmetic needs for any unsigned value
                     in the range of 0 to UCHAR_MIN


You mean UCHAR_MAX of course.

Despite that though, you haven't provided any information. Of course
an unsigned char will store anything up to UCHAR_MAX, but what is
UCHAR_MAX? Well, the Standard gives it a lower limit of 255u and gives
it no upper limit. This means it's at least 8-Bit.

signed char        - fits all arithmetic needs for any signed value
                     in the range of MIN_CHAR to MAX_CHAR


You mean SCHAR_MIN to SCHAR_MAX (which are distinct from CHAR_MIN and
CHAR_MAX).

unsigned int       - same as unsigned char, but at least 16 bit wide
signed int         - same as signed char, but at least 16 bit wide

long               - like above but at least 32 bit

long long          - like above but may be wider.


"long long" is at least 64-Bit.

There is no need to invent other data types.


No need for new intrinsic types, I agree. There is, though, great use
for typedefs such as uint_fast8_t.

A programmer with brain
for only 1 penny will decide to use the right data type that fits the
needs for the value it has to fit in. This will be char if the range
of the value fits in, it will be int, long or long long otherwise.


A common novice mistake. If you use char to store numbers on a PC,
then you'll end up with code that is both slower and more
memory-hungry: slower because more instructions are needed to deal
with integer types that are narrower than the registers, and more
memory-hungry because you have to store those extra instructions
somewhere.
It
doesn't matter that the mashine has only one single register or 4096
of them to wor with, it does'nt matter if the natural width of a
register is  8, 9, 16, 18, 24, 32, 36, 48, 64, 128 or 256 whatever
bit.


Prove it to me. Show me some C code and then show me the assembler
that your compiler produced from it. I've already shown the assembler
produced by the 8-Bit PIC C compiler, and it clearly shows that int is
AT LEAST two times slower than char; other times it's even 4 or 5
times slower. On a PC you'll see the opposite: int is faster than
char.

Either one owns a compiler that works trusty for the job it has
to do or assembly is the solution.


Nope buddy, you're wrong there. If you write a function that works
with an "int", then you've no way of telling the compiler that you'll
never need to store numbers greater than 255, so the compiler can't
take the liberty of using a char. Similarly, if you have a function
that works with "char", then you've no way of telling the compiler
that you don't rely on its wraparound at 255, so it can't take the
liberty of optimising it to an "int" where that would be faster.

It is relevant to write standard complilant code whereever it is
possible, use standard compilant data types, use commononly used code
constructs because things are changing constantly.


Yes you're right, and it's great to write portable code. Using
uint_fast8_t instead of int helps to advance the cause.

- tomorrow the program has to run under another CPU - the standard has
  foreseen that, so only a new translation cycle has to be done
  and anything works again without unforseen restrictions


Haven't a bull's notion what you're trying to say there. Please
elaborate.

- tomorrow you gets fired because you has written constantly
  keywords in an unusual order and the new boss dosn't like that
  because it makes fixing your shitty code more cost intensive.


I would have been fired within two weeks for calling him a thick c*nt
to his face if he couldn't understand "unsigned char". In fact, I
never would have taken the job if I'd had the slightest inkling that
the boss was borderline retarded.

There is further really no need to invent other order of calles, type
as the whole C community uses since BCPL, the predecessor of was
foundet.


Please re-word that paragraph, I can't make sense of it.

Except one will blame himself as ignorant and ties to obfuscate his
shitty code


Some sort of dig at me, I assume, but it's hard to make out through
the language barrier.

Be sure you'll gets never hired here based on your misbehavior in
follllowing common practise, unability to write readable code and
kidding.


comp.lang.c are hiring now?
 

soscpd

The point is to show you that a person is dissatisfied with it. This
means that the port was a failure.

The port is a hack. If someone is "dissatisfied with it", buy or build
something better.
Again my choice of Nintendo and also of a web browser were arbitrary.

That was your arbitrary choice to explain your subjects.
I can just assume you know what you are talking about.
A better example would be an instant messenger client running on a
small handheld device using an LCD display. If the algorithmic code is
too slow, then the display will suffer. Using int instead of
uint_fast8_t will lead to code which is about 2 to 4 times slower.

Have you EVER written an instant messenger for ANY embedded device?
Does the embedded device use a PIC? Or a Nintendo processor (if yes,
again, it's a hack)?

The more you tell me about yourself, the less interested I am in you
as a personal or professional contact.
Keep that to yourself, or cut the coffee before answering back. I am
kind of taking your thread seriously, so far.
If performance was the only issue, why should one have to use something
portable or standard?
I'm talking about portable performance, not just performance.

So, try to plan where you will use your code. The nintendo choice
fails here.
Where? For what? By who? When? How? Answer that to have a percent of
the meaning of "properly".

How many embedded systems exist?
Are they all the same?
Do you plan to reuse all the code of embedded systems again on, say,
a PC?
Do you know what a project is? A solution, maybe?
Have you ever designed some hardware for a specific job
(power contactors/airplane CDUs/black box firmware/radar/wireless
AP... something?)?
Do you know how reliable these need (demand) to be?
Do you think I should work on this kind of project using portable code?
Countless programs have been written in C for embedded systems. I've
written one full one myself.

Congrats! If you write some 362 more (just embedded... let's keep the
other domains out of here, but ASM is included), we will have the same
experience!!
All I'm talking about here is using types such as uint_fast8_t instead
of int. I think int should be abandoned altogether.

Ok. Do it.

Look Tomás... I am really trying to get the "where are we going"
here.
Is the whole point to convince people to code just as you think they
should?


Regards
Rafael
 

Bart van Ingen Schenau

A better example would be an instant messenger client running on a
small handheld device using an LCD display. If the algorithmic code is
too slow, then the display will suffer. Using int instead of
uint_fast8_t will lead to code which is about 2 to 4 times slower.

This is where your reasoning starts to break down.
A small, handheld device is likely to be using an ARM-based processor.
These processors are nowadays almost universally 32-bitters. (A really
old ARM processor might be 16 bit).
Therefore, the difference between using 'int' and 'uint_fast8_t' is
expected to be zero.

The problem with your reasoning is that you try to extrapolate from a
single experience with one particular embedded system. Your
extrapolation just does not work.
I would strongly advise you to gather data-points on about a dozen
different processors (with as much difference between them as you can)
before making sweeping recommendations like you do in this thread.

I'm talking about portable performance, not just performance.

Then tell me, what is the performance of that web browser on your PIC?

Bart v Ingen Schenau
 

robertwessel2

A small, handheld device is likely to be using an ARM-based processor.
These processors are nowadays almost universally 32-bitters. (A really
old ARM processor might be 16 bit).


There were never any 16 bit ARMs, assuming you mean "16 bit" to be
descriptive of the ISA. ARM1 and ARM2 supported only a 26 bit address
space, but are pretty obsolete at this point.
 
