API Design

R

robertwessel2

probably though, the amount of code produced is much smaller than the number
of units produced...

for example, for an embedded system maybe the code is written and tweaked
some, and then maybe reused for many thousands of units sold?...


Embedded systems run the gamut from very small (a few dozen lines of
assembler) to very large (a couple of million lines of Ada on the
F-22), and from few shipped units (starting at *1*) to tens of
millions. Numerically, small and limited quantity designs vastly
outnumber large and/or high volume designs, but you tend not to see
them.
 
S

Squeamizh

You decide.  Most compilers now targeting small embedded processors
implement most of the C90 base language, plus the standard library minus
operating system and I/O calls.  Localization features are usually
ignored.  In that sense, they are between the C standalone and hosted
implementations.

The compiler I am using for an 8-bit processor advertises, like many
embedded-target cross compilers, ANSI C compliance.  Take that with a
grain of salt.  What is meant by that is C90, not C99.

That advice would be a lot more useful if you just stated the name of
the compiler you're using.
 
B

BGB / cr88192

probably though, the amount of code produced is much smaller than the
number
of units produced...

for example, for an embedded system maybe the code is written and tweaked
some, and then maybe reused for many thousands of units sold?...

robertwessel2 said:
Embedded systems run the gamut from very small (a few dozen lines of
assembler) to very large (a couple of million lines of Ada on the
F-22), and from few shipped units (starting at *1*) to tens of
millions. Numerically, small and limited quantity designs vastly
outnumber large and/or high volume designs, but you tend not to see
them.

yep...


(and I am off here having "fun" with an app written in large part in
Python... as a service to anyone using said apps, people should probably
refrain from using Python...).

C has a lot of theoretical badness, and Python a lot of practical badness.
with C, one can worry about memory leaks and bugginess.
with Python, one can see the terrible performance and tendency to die
resulting from random typecheck failures, uncaught exceptions due to trying
to fetch a non-existent object field, ...
 
F

Flash Gordon

BGB said:
yep, including in apps...

A lot less there than there used to be.

I have noticed this as well...

so, probably a lot more of the commercial SW field is non-C, but C still
holds dominance for open-source?...

There is probably still a fair bit of open source done in C for a
variety of reasons. There are a lot of library projects where they want
to be usable from multiple languages, a lot of languages being written
and extended etc. Also a lot of people historically would not have had
easy access to compilers other than C compilers.

probably though, the amount of code produced is much smaller than the number
of units produced...

for example, for an embedded system maybe the code is written and tweaked
some, and then maybe reused for many thousands of units sold?...

A lot of designs sell rather more than thousands. However, new models
are always being developed which will often require changes to the software.
I suspect it is more common than Cobol or Fortran though, on account of not
to my knowledge having used apps which had been written in Fortran or
Cobol...

how do you know what language a closed source program is written in? I
only found out about some of the Cobol because I tripped over the way
the licensing for the Cobol runtime was done in one case (I got an error
that it could not find the license for the Cobol runtime).
possibly, although I don't personally own a car...

The buses and trains you probably use will also have a number of
embedded processors...

I meant what are actually "legacy" devices, IOW, the pieces of hardware that
typically only OS developers know about and anymore have little use beyond
being vestigial (or operate on notably different underlying hardware, such as a
PC speaker which makes noise on the sound card,

That would not have been run by a processor in all probability.
keyboard controller which
operates via USB,

You still *have* a keyboard controller, and it will *still* be in the
keyboard (where it has been probably ever since the keyboard was
connected to a computer by a wire).
USB-connected drives faking being connected via ATA when
the BIOS boots them, ...).

If anything those will have *more* software in the external drive box.
Also, they generally run SCSI over USB not ATA. Oh, and the software in
the hard disk is also still probably being developed (the disks can
normally report the software version, and you do find disks with
different versions). The same goes for the software in the DVD drive.
I suspect "something" is going on here...

There is a lot going on here.
NIC or USB are far too new, and almost invariably have actual electronics.

Network cards are new? What planet have you been on? I was using
networked computers in the 1980s! They were not new either!
however, I am not sure which exact devices would be in question, since what
I read did not list them.

<snip wild speculation>

If you don't know what it is talking about, then it really does not
support any point at all, so why bother raising it?
 
J

jacob navia

Markus Schaub a écrit :
It's not 128 bits, it's 128 bytes.

Markus

128 BYTES!!!!!

WOW!

That changes everything.

Of course you will need a GC for that amount of RAM.

:)
 
B

BGB / cr88192

Flash Gordon said:
A lot less there than there used to be.

yep.



There is probably still a fair bit of open source done in C for a variety
of reasons. There are a lot of library projects where they want to be
usable from multiple languages, a lot of languages being written and
extended etc. Also a lot of people historically would not have had easy
access to compilers other than C compilers.

yep, that is a reason in my case as well...

similarly, most of the coding I do is also open-source...

A lot of designs sell rather more than thousands. However, new models are
always being developed which will often require changes to the software.

ok.



how do you know what language a closed source program is written in? I
only found out about some of the Cobol because I tripped over the way the
licensing for the Cobol runtime was done in one case (I got an error that
it could not find the license for the Cobol runtime).

I have seen the source for many of the apps I use (or have general
background knowledge), since, apart from Windows itself, I mostly use
open-source software...

I am somewhat frustrated, though, by the apps I know of which are written in
large part in Python, since nearly every app I have seen which has
components in Python has been slow and buggy...

although, I know of another app which had some issues like this, but I had
observed in this case that some parts of the app were written in Tcl...


there is something to be said for static typechecking, since at least it
will weed out most of the errors before they show up at runtime (in the form
of ugly death messages, such as a typecheck error, missing object field,
incorrect number of method arguments, ...).

The buses and trains you probably use will also have a number of embedded
processors...

I don't use these either (not like they would be available where I live
anyways, given "here" is approx 20 miles from the city limits, and this here
is the desert, and my house is on a dirt road, ...).

(around here, the UPS people don't knock, they sneak up, leave the packages,
and try to get away quickly... it is amusing some...). one typically knows
a package has arrived when they hear the UPS guy stomp the gas to get
away...

granted, my parents drive...

That would not have been run by a processor in all probability.

at least, in their original forms, now in all possibility some of these devices
are faked in SW in the bus controller, from what I had read some...
You still *have* a keyboard controller, and it will *still* be in the
keyboard (where it has been probably ever since the keyboard was connected
to a computer by a wire).

not the one in the keyboard...

the one that is controlled via ports, and by setting the right values causes
the computer to reboot...

older keyboards communicated with the computer via a special serial bus, and a
chip on the motherboard served as a controller. other times, the keyboard
can be connected via USB, but the same IO ports still look and behave the
same (even though, in all sense, USB is wired up to the computer, and sends
its scan codes differently, than would have been sent via a keyboard using a
serial connection and a DIN-5 plug...).

If anything those will have *more* software in the external drive box.
Also, they generally run SCSI over USB not ATA. Oh, and the software in
the hard disk is also still probably being developed (the disks can
normally report the software version, and you do find disks with different
versions). The same goes for the software in the DVD drive.

but, they may end up looking to the OS and DOS software as if they were
connected ATA (although, observably, Windows and Linux see through this
trick, where Linux sees both USB and SATA drives as SCSI devices, rather
than as legacy ATA devices, and 64-bit Windows doesn't boot correctly from
USB...).

There is a lot going on here.

yeah.



Network cards are new? What planet have you been on? I was using networked
computers in the 1980s! They were not new either!

but, they were not typically onboard or standardized until the 2000s...

the NIC was usually a separate card, which required special drivers and
such.

this is in contrast to HW which was there since the beginning, and more or
less has to be present so that backwards compatibility is maintained (say,
with old DOS software which directly messes with IO ports, ...).

it is most notable that many of these IO ports still work on newer systems,
even if presumably the underlying HW and mechanisms have changed.

I expect there IS probably a processor somewhere in this case, maybe
doing little more than watching IO ports and redirecting things or
something...


<snip wild speculation>

If you don't know what it is talking about, then it really does not
support any point at all, so why bother raising it?

it is speculation based on my own prior OS dev work (from years ago),
although I am a little fuzzy on my memory of which all devices exist or
which IO ports they use (but, I have done enough OS dev, and seen enough how
DOS apps work, that for them to continue working, likely these devices are
still in place, presuming modern computers can still boot DOS and run DOS
SW, which last I had seen, they still do...).


the part I most remember though is the VGA registers and how the thing got
set up for things like ModeX and 640x480x16 color, ... but, even this is
faded...


all it really said was that the bus controller in question:
emulated legacy hardware;
had the code for such legacy HW written in C.

although, it was a bus controller for an unorthodox x86 chipset (I think
Intel Atom or VIA Nano or similar), and it is possible that for more
traditional computers, the HW is not so much emulated?...


then again, even x86 is emulated, even on its own mainstay chips...

 
B

BGB / cr88192

Richard said:
How long have you been a programmer? I hope not too long.

I have actually been doing C coding for around 15 years now, or, IOW, a
decent portion of my life thus far (given I was born back in the 80s...).

What the hell kind of inference can you make from the high level
scripting language to the underlying compiled language?

usually, there are correlations, especially as can be noted if one has much
familiarity with the Quake-family engines.

often, in these engines, the script language is a fairly direct
manifestation of the underlying engine architecture. Quake1 was scripted in
QC, which was C-like, but used an unusual object system (though, still
fairly solidly rooted in its underlying implementation).

IOW, these script languages tend to be mostly wrappers for the underlying
engine machinery, and also for providing for all of the game-related
behaviors (such as items, ...).


Quake2 and Quake3 used C, although in Quake3 it was compiled to bytecode and
JIT'ed at runtime (or interpreted on non-x86 archs).

Quake 1, 2, and 3 also were written in C, and I have a decently good
understanding of the engine source in each case (although I am most
familiar with Quake 1...). this is because id tended to release their engine
source some time after their next major games went to market.


Doom 3 seems to use a scripting language which uses class-instance OO, which
is unusual for them...

if the script is doing this, it may be a possible indicator for their
underlying engine architecture (the possibility that this, too, uses
class/instance?...), although this is a weak assertion, and id has not
released the Doom3 engine source as of yet.

similarly, it can be noted that Valve had started their work based on the
Quake1 engine, but had fairly quickly migrated to C++ for Half-Life, ...


or, it could just be that id is doing like in my case, where I use
class-instance some in scripting, even though most of my codebase remains in
plain C.
 
B

BGB / cr88192

jacob navia said:
Markus Schaub a écrit :

128 BYTES!!!!!

WOW!

That changes everything.

Of course you will need a GC for that amount of RAM.

:)


with 16x more you could have the NES...

although, I guess they had 64kB for ROM, and used banking or similar from
what I remember...
 
F

Flash Gordon

BGB said:
I have seen the source for many of the apps I use (or have general
background knowledge), since, apart from Windows itself, I mostly use
open-source software...

Ah well, if you only use OSS it is different. A lot of people don't have
that as an option.
I am somewhat frustrated, though, by the apps I know of which are written in
large part in Python, since nearly every app I have seen which has
components in Python has been slow and buggy...

although, I know of another app which had some issues like this, but I had
observed in this case that some parts of the app were written in Tcl...

I've come across slow and buggy SW written in C. It happens in all
languages. I don't know if any language is worse than another for it.
there is something to be said for static typechecking, since at least it
will weed out most of the errors before they show up at runtime (in the form
of ugly death messages, such as a typecheck error, missing object field,
incorrect number of method arguments, ...).

Static type checking can help, but if you really want that then C is the
wrong language since the type system is pretty weak.
I don't use these either (not like they would be available where I live
anyways, given "here" is approx 20 miles from the city limits, and this here
is the desert, and my house is on a dirt road, ...).

Then your air-con... OK, so some really old air-cons might not have
processors.
(around here, the UPS people don't knock, they sneak up, leave the packages,
and try to get away quickly... it is amusing some...). one typically knows
a package has arrived when they hear the UPS guy stomp the gas to get
away...

Well, the UPS driver will have loads of embedded devices in his van.
granted, my parents drive...

And probably in their car.
at least, in their original forms, now in all possibility some of these devices
are faked in SW in the bus controller, from what I had read some...

Oh, I don't doubt that some of the are faked in other embedded devices.
not the one in the keyboard...

the one that is controlled via ports, and by setting the right values causes
the computer to reboot...

older keyboards communicated with the computer via a special serial bus, and a
chip on the motherboard served as a controller. other times, the keyboard
can be connected via USB, but the same IO ports still look and behave the
same (even though, in all sense, USB is wired up to the computer, and sends
its scan codes differently, than would have been sent via a keyboard using a
serial connection and a DIN-5 plug...).

I know a little about how the old keyboards worked, since one of the
things I came across way back (in the 80s) was instructions for
uploading a very small program in to the processor in the keyboard. A
long time ago, so I don't have the details now ;-) I can't guarantee it
was for a PC, but with a serial interface to the keyboard you probably
had a processor at each end of the link!
but, they may end up looking to the OS and DOS software as if they were
connected ATA (although, observably, Windows and Linux see through this
trick, where Linux sees both USB and SATA drives as SCSI devices, rather
than as legacy ATA devices, and 64-bit Windows doesn't boot correctly from
USB...).

I can't say I've looked too deeply in to USB drives, but I would not be
surprised if they were basically running SCSI over the USB interface.
SATA drives are another matter.

but, they were not typically onboard or standardized until the 2000s...

I'm sure they were standardised a lot before then (although there are
newer standards now). Not onboard though.
the NIC was usually a separate card, which required special drivers and
such.

They *still* require special drivers, it's just that the drivers
generally come with the OS now.
this is in contrast to HW which was there since the beginning, and more or
less has to be present so that backwards compatibility is maintained (say,
with old DOS software which directly messes with IO ports, ...).

it is most notable that many of these IO ports still work on newer systems,
even if presumably the underlying HW and mechanisms have changed.

Actually, I believe the old way of doing it is faked. Just as USB
keyboards get faked to appear as PS/2 keyboards.
I expect there IS probably a processor somewhere in this case, maybe
doing little more than watching IO ports and redirecting things or
something...

The NIC will have a processor, and a decent one will be doing rather
more than just watching the IO ports. Even a cheap one does more than
watch IO ports!

For other stuff it probably varies. There may be a processor encoded in
to a PLD, or maybe a processor embedded in an IO device...
it is speculation based on my own prior OS dev work (from years ago),
although I am a little fuzzy on my memory of which all devices exist or
which IO ports they use (but, I have done enough OS dev, and seen enough how
DOS apps work, that for them to continue working, likely these devices are
still in place, presuming modern computers can still boot DOS and run DOS
SW, which last I had seen, they still do...).

Ah well, I'm talking based on having looked ages ago at some HW
documentation, and knowing some of the technologies in use in the
embedded world.
the part I most remember though is the VGA registers and how the thing got
set up for things like ModeX and 640x480x16 color, ... but, even this is
faded...

That will all be faked.
all it really said was that the bus controller in question:
emulated legacy hardware;
had the code for such legacy HW written in C.

although, it was a bus controller for an unorthodox x86 chipset (I think
Intel Atom or VIA Nano or similar), and it is possible that for more
traditional computers, the HW is not so much emulated?...

A lot of the time chipsets like that actually do it by having multiple
devices on the one piece of silicon which previously would have been on
several pieces of silicon, but not necessarily always.
then again, even x86 is emulated, even on its own mainstay chips...

Microcoded processors have been around for rather a long time.
 
R

robertwessel2

I know a little about how the old keyboards worked, since one of the
things I came across way back (in the 80s) was instructions for
uploading a very small program in to the processor in the keyboard. A
long time ago, so I don't have the details now ;-) I can't guarantee it
was for a PC, but with a serial interface to the keyboard you probably
had a processor at each end of the link!


I can verify that. There was an 8042 (a microcontroller) on the
motherboard of the original PC (and XT). It had a serial port that
accepted input from the keyboard. Those keyboards had another
microcontroller (8031 or 8042 depending on version) that handled
scanning, debounce, repeat, etc., and would send the resulting scan
codes over the serial link.

On the AT the serial link became bidirectional and you could send
commands to the keyboards, to set the repeat rate, set the indicator
lights (there were none on the PC/XT keyboard), and some other stuff.

The PS/2 style keyboard basically extended the AT approach a bit.

While the newer keyboards still have a processor inside, it hasn't
been a discrete 8031 or 8042 in many, many years, and these days
keyboards (and mice) have full (peripheral) USB stacks in them.

I can't say I've looked too deeply in to USB drives, but I would not be
surprised if they were basically running SCSI over the USB interface.
SATA drives are another matter.


Yep. Other than SATA hard drives, pretty much everything uses SCSI
commands - including non-hard drive PATA devices (often called ATAPI
devices or packet ATA - for example, most CD-ROM drives). Most
storage type devices on USB, Fibre Channel, Firewire, PATA and SATA
(excluding parallel and serial ATA hard drives) use SCSI commands (not
to mention many non-storage-type devices).

Actually, I believe the old way of doing it is faked. Just as USB
keyboards get faked to appear as PS/2 keyboards.


It's the BIOS which fakes the old style (PS/2) keyboard and mouse port
using SMI mode on the processor. Many OS's that understand USB take
over the keyboard and drive it in USB mode directly (Windows XP, and
later, for example).
 
S

stan

jacob said:
Marco a écrit :
After saying this, you go on saying:


Well, the GC is exactly that. Absolutely NO changes in the language are needed
at all! Instead of callinc malloc(56) you call gc_malloc(56).

And the good side is that instead of calling free() you do not call anything.

This is in NO WAY a language change!

How do you arrive at that idea?
It is just an API.

So code it up. If you're correct then the malloc family will fade
away. Debate seems pointless, doesn't it?

Personally I doubt there is a one size fits all solution. I can
imagine cases where I would never consider GC due to overkill. But I
can also imagine cases where it would be handy if nothing else but for
rapid prototyping stuff.

There seem to be two distinct and intractable camps involved. The
no-GC crowd is never going to code up a GC system (understandably) and
the GC crowd hasn't done so yet. It sounds a little too blunt to me
but maybe it's time to put up or shut up.

If GC really is a near silver bullet, why isn't it done yet? If it's
as simple as a library then it doesn't have to wait for anything or
anyone. Seems like a great candidate for either a commercial solution
or open source. Improved productivity with fewer errors and no
performance loss would certainly be of value to someone.
 
K

Keith Thompson

stan said:
There seem to be two distinct and intractable camps involved. The
no-GC crowd is never going to code up a GC system (understandably) and
the GC crowd hasn't done so yet. It sounds a little too blunt to me
but maybe it's time to put up or shut up.

If GC really is a near silver bullet, why isn't it done yet? If it's
as simple as a library then it doesn't have to wait for anything or
anyone. Seems like a great candidate for either a commercial solution
or open source. Improved productivity with fewer errors and no
performance loss would certainly be of value to someone.

GC has been implemented. See, for example,
<http://www.hpl.hp.com/personal/Hans_Boehm/gc/>.
 
N

Nobody

FWIW, Quake 1 / 2 / 3 were in C.

Doom 3, maybe C, but I haven't seen the source (the scripts though have more
of an OO style, so possibly C++ may have been used in Doom 3...).

id Tech 4 (the engine used for Doom 3 and Quake 4) uses C++.

OOP is a natural fit for games (and other types of simulation).
Java is one of those languages:
one hears about it enough, but where is the code?...

Java's strongest constituency is bespoke enterprise software, which
typically never leaves the organisation for which it was developed.

You may as well ask where all the COBOL or RPG-III code is.
 
T

Thad Smith

Squeamizh said:
That advice would be a lot more useful if you just stated the name of
the compiler you're using.

It was a description of the typical C environment for some low-end
processors with poor indexed addressing, not advice (except for the
"grain of salt"). This pertains to the 8051 family and Microchip 8-bit
PIC family, although I think that Microchip has a compiler that supports
recursion for PIC18x and I believe Keil offers recursion as an option
for the 8051 family (you pay the price in less efficient access of the
parameters and auto variables). There are probably several other
low-end architectures for which the limitations also apply.

In general, you tend to think in smaller chunks for smaller processors.
 
B

BGB / cr88192

Nobody said:
id Tech 4 (the engine used for Doom 3 and Quake 4) uses C++.

OOP is a natural fit for games (and other types of simulation).

yes, ok.

I would have figured as much from what I had seen of Doom 3, although
granted I have not seen any of their engine source.


recently, I have been messing with Quake 2 engine (various reasons, mostly
non-serious), and hacked on a number of features, most recently:
expanding the map size (now ~98304 inches, or 8192 ft, or 1.55 mi);
adding "drivable vehicles" (technically, they are neither drivable nor
vehicles, but nevermind this...).

prior additions:
support for shadering;
real-time stencil lighting/shadows;
rigid-body physics;
....

nevermind that it still mostly looks and behaves like Q2 at present, and
in fact a lot of content was re-added from Q1 as well, ...

Java's strongest constituency is bespoke enterprise software, which
typically never leaves the organisation for which it was developed.

You may as well ask where all the COBOL or RPG-III code is.

yes, ok.
 
B

BGB / cr88192

Keith Thompson said:
GC has been implemented. See, for example,
<http://www.hpl.hp.com/personal/Hans_Boehm/gc/>.

yep.

Boehm is popular for implementing many VMs, such as Mono and GCJ.


in my case, I have my own GC, which differs in some of the details from Boehm
(the most notable being that mine does type tagging, whereas Boehm leaves
it up to the frontend to keep track of types).

similarly, AFAIK, Boehm has a higher per-object overhead for small objects
(in my case it is 8 or 16 bytes, depending on needed alignment, and size is
padded up to a multiple of 16 bytes).

however, I suspect Boehm does have better reliability and performance.


so, I guess, viewed another way:
Boehm works really well as a GC'ed malloc replacement;
it is not necessarily as good as a more specialized GC.

where it is used, such as in Mono and GCJ, I don't exactly think they have
much reason to worry about per-object overhead (errm... because there are no
"small" objects...).


well, for those few who have gazed into Mono's internals, well, they are not
exactly pretty...
I guess, some could debate this, but this is mostly my opinion (actually,
IMO, Mono's internals make the Quake engine look pretty, and this much is
scary... as IMO the Quake engine is a tangled mess...).

but, alas, both projects seem to work well enough, so that much is good...


mine was designed, of course, to assume that, say, 8-24 byte objects would
be common.
mine also supports special "cons cell" objects, which take 8 or 16 bytes
(depending on word size).

this doesn't matter so much if, say, the first thing the VM does is
create an object with a 32 or 48 byte header and maybe 128+ bytes of
payload.

granted, one can think at a higher level of abstraction, and realize that
this larger object may be in fact more memory dense than, say, a structure
made out of cons cells and symbols located all over the place (and, just the
same, sometimes nastier code is both more efficient and easier to work
with...).


in this case, Boehm is probably a better choice...
 
S

stan

Keith said:
GC has been implemented. See, for example,
<http://www.hpl.hp.com/personal/Hans_Boehm/gc/>.

I'm aware of Boehm; it doesn't really seem to be taking the world by
storm. I don't intend it as a put down but it's not really the easiest
thing to set up. Maybe if it starts to catch on people will improve
the ease of use. If GC was really solving global warming and world
hunger then there would be little need for the endless debate. Boehm
hasn't proved to be the library that meets those needs so far so
either GC claims aren't completely and precisely correct or "THE GC
LIBRARY" (TM) hasn't been written yet.

For the record I've used and have no problems with Boehm's GC lib.

Programming is a never ending series of compromises and contexts
IMHO. Most people in the two camps seem to suffer tunnel vision. GC is
useful in some cases and a problem in others. The truth is that for
most of the cases where GC would be most useful, C may not be the best
language choice. Like the committee trying to design a horse and
produced a camel, I think the committee trying to add higher level
stuff to C already had one crack at it and they produced C++.

Personally I think the best and so far least explored option would be
a hardware memory management system with GC. I've talked to others
about it but right now the money seems to be pushing parallel
stuff.

Just so I can sleep tonight and not have bad dreams about the lack of
thread topicality, I haven't really thought about a possible ISA API
for a hardware solution. I've thought about both in the existing
memory/cache management HW or possibly in the northbridge or bus
management levels but not about the API. Since I'm all but snowed in,
I might just get some eggnog and stoke up the fire!
 
B

BGB / cr88192

stan said:
I'm aware of Boehm; it doesn't really seem to be taking the world by
storm. I don't intend it as a put down but it's not really the easiest
thing to set up. Maybe if it starts to catch on people will improve
the ease of use. If GC was really solving global warming and world
hunger then there would be little need for the endless debate. Boehm
hasn't proved to be the library that meets those needs so far so
either GC claims aren't completely and precisely correct or "THE GC
LIBRARY" (TM) hasn't been written yet.

For the record I've used and have no problems with Boehm's GC lib.

Programming is a never ending series of compromises and contexts
IMHO. Most people in the two camps seem to suffer tunnel vision. GC is
useful in some cases and a problem in others. The truth is that for
most of the cases where GC would be most useful, C may not be the best
language choice. Like the committee trying to design a horse and
produced a camel, I think the committee trying to add higher level
stuff to C already had one crack at it and they produced C++.

Personally I think the best and so far least explored option would be
a hardware memory management system with GC. I've talked to others
about it but right now the money seems to be pushing parallel
stuff.

Just so I can sleep tonight and not have bad dreams about the lack of
thread topicality, I haven't really thought about a possible ISA API
for a hardware solution. I've thought about both in the existing
memory/cache management HW or possibly in the northbridge or bus
management levels but not about the API. Since I'm all but snowed in,
I might just get some eggnog and stoke up the fire!

you don't need GC in the HW...
you don't even really need (high-level) memory-management in the HW.


it would be enough if it were implemented at the OS kernel + C runtime
level, and the implementation could fudge around and make a GC-friendly
malloc.

probably what this would mean would be "decent" OS-level write barriers, ...
as well as a GC which is present in the main C runtime (right alongside
malloc/free).

having to catch a signal and change the page status before returning
(Linux), or perform a query of which ones have been written and reset their
status (Windows), are just kind of lame...

(it is also lame that one doesn't generally know either the exact address
written, or what was written there).

also useful could be to be able to cause any threads which write to these
pages to essentially stall until the page status is changed (similar to
blocking file IO). in this way, a simple mark/sweep, or even a
copy-collector, could be done with C, with no particular added effort (the
copy-collector could work by essentially locking all non-GC access to the
memory until after the GC completes, thus preventing the mutator threads
from being able to see the heap in an intermediate state).

as-is, one either has to use crappy+awkward write barriers, or not bother
(using simply a SW write barrier, but I have seen how well this turns
out...).

the "stall until status change" option would not be too difficult to
implement I would think, since something fairly similar is likely already
needed to implement things like swapfiles, ...


the big problem then would be convincing anyone to do so. their likely
response would be something like "well, you can do it easily enough on your
own", or maybe "the standard doesn't say so, so we are not going to
bother...".
 
J

jacob navia

stan a écrit :
I'm aware of Boehm; it doesn't really seem to be taking the world by
storm. I don't intend it as a put down but it's not really the easiest
thing to set up.

With lcc-win there is nothing to set up. And anyway, if you
do not want to recompile it and use a binary library
it is as easy as using any other library. Use the
#include and link with the library.
Maybe if it starts to catch on people will improve
the ease of use. If GC was really solving global warming and world
hunger then there would be little need for the endless debate. Boehm
hasn't proved to be the library that meets those needs so far so
either GC claims aren't completely and precisely correct or "THE GC
LIBRARY" (TM) hasn't been written yet.

This is just your opinion. It is widely used, has been ported to
a variety of systems, and there are thousands and thousands
of users. Gcc has adopted it too and distributes it with the
source code of gcc, as far as I remember.
For the record I've used and have no problems with Boehm's GC lib.

Well, you see?
Programming is a never ending series of compromises and contexts
IMHO. Most people in the two camps seem to suffer tunnel vision. GC is
useful in some cases and a problem in others.

Yes. As with everything. Nobody here is claiming that it is the solution for
all software problems.
The truth is that for
most of the cases where GC would be most useful, C may not be the best
language choice.

Always the same story. C is a bad language when you want to program
an application. Look, if you want, so be it. Please go to another
discussion group then.
 
