Why doesn't strrstr() exist?

  • Thread starter Christopher Benson-Manica

Douglas A. Gwyn

Keith said:
Which says, in 7.19.7.7:
Because gets does not check for buffer overrun, it is generally
unsafe to use when its input is not under the programmer's
control. This has caused some to question whether it should
appear in the Standard at all. The Committee decided that gets
was useful and convenient in those special circumstances when the
programmer does have adequate control over the input, and as
longstanding existing practice, it needed a standard
specification. In general, however, the preferred function is
fgets (see 7.19.7.2).
Personally, I think the Committee blew it on this one. I've never
heard of a real-world case where a program's input is under
sufficiently tight control that gets() can be used safely.

You must have a lack of imagination -- there are a great many
cases where one is coding for a small app where the programmer
himself has complete control over all data that the app will
encounter. Note that that is not at all the same environment
as arbitrary or interactive input, where of course lack of
proper validation of input would be a BUG.
... As far as I know, the
"longstanding existing practice" cited in the Rationale is the
*unsafe* use of gets(), not the hypothetical safe use.

No, it's simply the existing use of gets as part of the stdio
library, regardless of judgment about safety. As such, it was
part of the package for which the C standard was expected to
provide specification.
I just found 13 calls to gets() in the source code for a large
software package implemented in C (which I prefer not to identify).
They were all in small test programs, not in production code, and they
all used buffers large enough that an interactive user is not likely
to overflow them -- but that's no excuse for writing unsafe code.

If in fact the test programs cannot overflow their buffers
with any of the test data provided, they are perforce safe
enough. In fact that's exactly the kind of situation where
gets has traditionally been used and thus needs to exist,
with a portable interface spec, in order to minimize porting
expense.
 

Keith Thompson

Douglas A. Gwyn said:
You must have a lack of imagination -- there are a great many
cases where one is coding for a small app where the programmer
himself has complete control over all data that the app will
encounter. Note that that is not at all the same environment
as arbitrary or interactive input, where of course lack of
proper validation of input would be a BUG.

This has nothing to do with my imagination. I can imagine obscure
cases where gets() might be used safely. I said that I've never heard
of a real-world case.
No, it's simply the existing use of gets as part of the stdio
library, regardless of judgment about safety. As such, it was
part of the package for which the C standard was expected to
provide specification.

The majority of the existing use of gets() is unsafe.
If in fact the test programs cannot overflow their buffers
with any of the test data provided, they are perforce safe
enough. In fact that's exactly the kind of situation where
gets has traditionally been used and thus needs to exist,
with a portable interface spec, in order to minimize porting
expense.

The test data provided is whatever the user types at the keyboard.
The programs in question used huge buffers that are *probably* big
enough to hold whatever the user types -- as opposed to reasonably
sized buffers that *cannot* overflow if the programs used fgets()
instead.

Another point: gets() cannot be used safely in portable code. The
safe use of gets() requires strict control over where a program's
stdin comes from. There's no way to do that in standard C. If I
wanted to control a program's input, I'd be more likely to specify the
name of an input file, which means gets() can't be used anyway.

Perhaps gets() should be relegated to some system-specific library.

The Committee was willing to remove implicit int from the language.
There was widespread existing use of this feature, much of it
perfectly safe. I happen to agree with that decision, but given the
willingness to make that kind of change, I see no excuse for leaving
gets() in the standard.
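
For reference, "implicit int" is the C89 rule that a missing type
specifier defaults to int -- legal C89, a constraint violation in C99.
A two-line editorial sketch, not from the thread:

    static n = 42;      /* type specifier omitted: defaults to int */

    func(x)             /* return type and parameter x default to int */
    {
        return x + n;
    }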
 

Walter Roberson

Another point: gets() cannot be used safely in portable code. The
safe use of gets() requires strict control over where a program's
stdin comes from. There's no way to do that in standard C. If I
wanted to control a program's input, I'd be more likely to specify the
name of an input file, which means gets() can't be used anyway.

fseek() to the beginning of stdin. If that fails then your input
is not a file so use some alternative method or take some failure
mode. If the fseek() succeeds, then you know that you can
examine the input, find the longest line, malloc() a buffer big
enough to hold that, then fseek() back and gets() using that buffer.

Sure, it's not pretty, but it's portable ;-)
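
A sketch of what that scheme might look like in code -- an editorial
illustration, not Walter's actual program. It deliberately keeps the
gets() call to show the idea (and its fragility; see David Wagner's
TOCTTOU objection below):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *buf;
        long maxlen = 0, len = 0;
        int c;

        if (fseek(stdin, 0L, SEEK_SET) != 0) {
            fputs("stdin is not seekable; fall back to fgets()\n", stderr);
            return EXIT_FAILURE;
        }
        while ((c = getchar()) != EOF) {   /* find the longest line */
            len++;
            if (c == '\n') {
                if (len > maxlen)
                    maxlen = len;
                len = 0;
            }
        }
        if (len > maxlen)
            maxlen = len;                  /* last line may lack '\n' */

        buf = malloc((size_t)maxlen + 1);
        if (buf == NULL)
            return EXIT_FAILURE;

        fseek(stdin, 0L, SEEK_SET);        /* also clears the EOF indicator */
        while (gets(buf) != NULL)          /* "safe" only while the file cannot change */
            puts(buf);
        free(buf);
        return 0;
    }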
 

Douglas A. Gwyn

Keith said:
The majority of the existing use of gets() is unsafe.

The majority of existing programs are incorrect. That doesn't
mean that there is no point in having standards for the elements
of the programming language/environment.
The test data provided is whatever the user types at the keyboard.
The programs in question used huge buffers that are *probably* big
enough to hold whatever the user types -- as opposed to reasonably
sized buffers that *cannot* overflow if the programs used fgets()
instead.

Presumably the tester understands the limitations and does not
obtain any advantage by violating them.

The theory that mere replacement of gets by fgets (with the
addition of newline trimming) will magically make a program
"safe" is quite flawed. There are around a dozen details
that need to be taken care of for truly safe and effective
input validation, and if the programmer is using gets in
such a context, he is most unlikely to have dealt with any
of these matters. Putting it another way: gets is not a
problem for the competent programmer, and lack of gets
wouldn't appreciably help the incompetent programmer.
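
To make that point concrete, here is an editorial sketch (assuming a
hypothetical read_line() helper) of just two of the details a
mechanical gets-to-fgets conversion typically misses: detecting
truncation, and disposing of the unread tail of an over-long line.

    #include <stdio.h>
    #include <string.h>

    /* Returns 0 on a complete line, 1 on a truncated line, -1 on EOF/error. */
    int read_line(char *buf, size_t size, FILE *fp)
    {
        size_t len;
        int c;

        if (fgets(buf, (int)size, fp) == NULL)
            return -1;

        len = strlen(buf);
        if (len > 0 && buf[len - 1] == '\n') {
            buf[len - 1] = '\0';           /* trim the newline */
            return 0;
        }

        /* Line exceeded the buffer: discard the remainder so the next
           call doesn't silently treat the tail as a fresh line. */
        while ((c = getc(fp)) != EOF && c != '\n')
            ;
        return 1;
    }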
 

Randy Howard

Dennis Ritchie wrote:
About my attitude to gets(), this was dredged from google.
Conversations repeat; there are about 78 things in
this "The fate of gets" thread.

Dennis

If Dennis Ritchie thinks it's safe to remove it, who are the ISO
C standard body to think they should leave it in?

:)
 

Randy Howard

Wojtek Lerch wrote:
Well, *then* you can point them to the Rationale and explain what it means
by "generally unsafe".

That's true, but it would be better, and not harm anyone, if
they took gets() out of the main body completely, and moved it
back to section J with asm() and such, so that if some vendor
feels like they absolutely /must/ leave it in, they can do so,
without making it a requirement that conforming compilers continue
to ship such garbage.

That would be a far more convincing story to the newbies too.
"They took it out of C0x because it was too dangerous. Even
though we don't have access to a C0x compiler yet, it still
makes sense to be as cautious as the standard, does it not?"
You could even try to explain why it was
standardized even though it was known to be unsafe, and why a lot of people
disagree with that decision.

I understand why it was standardized a couple decades ago. What
I do not understand is why it is still in the standard. I have
heard the arguments for leaving it in, and they have not been
credible to me.
A good teacher can take advantage of this kind of stuff.

That's true. It would still be better for it not to be an issue
at all.
Anyway, think of all the unsafe things they'll have to learn not to do
before they become competent programmers. Pretty much all of them are more
difficult to avoid than gets().

Also true, and all the better reason not to waste time on items
that could be avoided without any time spent on them at all,
leaving time to focus on what really is hard to accomplish.
I haven't said anything about how well I think they're doing their job. I'm
sure there are a lot of bad teachers and bad handbooks around. But I doubt
banning gets() would make it significantly easier for their victims to
become competent programmers.

Even if it didn't make it any easier (which I cannot judge either
way, with no data on it), it would not be a hardship for
conforming compilers produced in this century not to provide
gets(). It's not just the students at issue here; the many,
many bugs extant due to it are more important by far, with or
without new programmers to worry about.
 

David Wagner

Walter said:
fseek() to the beginning of stdin. If that fails then your input
is not a file so use some alternative method or take some failure
mode. If the fseek() succeeds, then you know that you can
examine the input, find the longest line, malloc() a buffer big
enough to hold that, then fseek() back and gets() using that buffer.

Sure, it's not pretty, but it's portable ;-)

It's also insecure. Just think about what happens if the file size
changes in between examining the input and calling gets() -- boom,
you lose. In the security world, this is known as a time-of-check to
time-of-use (TOCTTOU) bug.

gets() is a loaded gun, helpfully pre-aimed for you at your own foot.
Maybe you can do some fancy dancing and avoid getting shot, but that
doesn't make it a good idea.
 

Antoine Leca

Randy Howard said:
If Dennis Ritchie thinks it's safe to remove it, who are the ISO
C standard body to think they should leave it in?

Do not misread: Mr Ritchie did not say he thought it was safe to remove
it; he noted:

- that they (Bell Labs) removed it [on 1988-11-09]

- that they removed it because it was involved in a critical failure of the
system [i.e., it was not "safe to remove it", rather the system was safe*r*
without it]

- that some other implementers couldn't do the same

The third observation is a paraphrase of Clive Feather's point: "little
practical effect."

Antoine
PS: This does not mean I endorse not having done it. I believe the real
reason was the lack of an opportunistic proposal to nuke it.
 

David R Tribble

Walter said:
fseek() to the beginning of stdin. If that fails then your input
is not a file so use some alternative method or take some failure
mode. If the fseek() succeeds, then you know that you can
examine the input, find the longest line, malloc() a buffer big
enough to hold that, then fseek() back and gets() using that buffer.

Sure, it's not pretty, but it's portable ;-)

The reason I don't write code like this is that it doesn't work for
piped input. Since a fair number of my programs are designed along
the Unix philosophy of filters (i.e., read input from anywhere,
including redirected output from another program, and write output
that can be conveniently redirected to other programs), I don't
bother coding two forms of input if I can help it.

Which means that I use fgets() instead of gets(), and simply assume
a reasonably large maximum line size. I imagine I'm not alone in
using this approach, which is also safe and portable.

-drt
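
A minimal sketch of the filter pattern Tribble describes (an editorial
illustration; the 4096-byte maximum is an arbitrary choice, not his):

    #include <stdio.h>

    #define MAXLINE 4096

    int main(void)
    {
        char line[MAXLINE];

        /* Works identically for files, pipes, and terminals; lines
           longer than MAXLINE are simply processed in pieces. */
        while (fgets(line, sizeof line, stdin) != NULL)
            fputs(line, stdout);
        return 0;
    }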
 

David R Tribble

Douglas said:
The theory that mere replacement of gets by fgets (with the
addition of newline trimming) will magically make a program
"safe" is quite flawed. There are around a dozen details
that need to be taken care of for truly safe and effective
input validation, and if the programmer is using gets in
such a context, he is most unlikely to have dealt with any
of these matters.

Putting it another way: gets is not a
problem for the competent programmer, and lack of gets
wouldn't appreciably help the incompetent programmer.

More to the point, eliminating gets() from ISO C will not affect
incompetent programmers one whit, because those programmers don't
read the standard, nor do they abide by anything it recommends.
You can't legislate good programming.

Also, eliminating *anything* from std C will not force compiler
and library vendors to remove it from their implementations.
Their customers include a large number of incompetent programmers,
who will insist that good old C functions be available, the
consequences be damned.
 

Douglas A. Gwyn

David said:
Also, eliminating *anything* from std C will not force compiler
and library vendors to remove it from their implementations.
Their customers include a large number of incompetent programmers,
who will insist that good old C functions be available, the
consequences be damned.

Also competent programmers who would be justly annoyed
when their small test programs would no longer build.

To repeat a point I've made before: The idea that
incorrect programming can be corrected by small changes
in library function interfaces is so far wrong as to be
outright dangerous.
 

Douglas A. Gwyn

Randy said:
If Dennis Ritchie thinks it's safe to remove it, who are the ISO
C standard body to think they should leave it in?

As he said, some can't afford to do that. So long as such
a venerable function is still being provided by vendors to
meet customer requirements, it is useful to have a published
interface spec for it. Just because there is a spec, or there is
a function in some library, doesn't mean you have to use it
if it doesn't meet your requirements.

Some seem to have a misconception about the functions of
standardization. It is literally *impossible* to standardize
correct programming, and in all its ramifications C has
always left that concern up to the programmer, not the
compiler. Many programmers have found it useful to develop
or obtain additional tools to help them produce better
software, "lint" being one of the earliest. You might
consider using "grep '[^f]gets'" as a tool that meets your
particular concern.

The more you go on about program correctness being the
responsibility of those tasked with publishing specs for
legacy functions, the more you divert programmer attention
from where the real correctness and safety issues lie.
 

Douglas A. Gwyn

Randy said:
Also true, and all the better reason not to waste time on items
that could be avoided without any time spent on them at all,
leaving time to focus on what really is hard to accomplish.

If anybody teaches programming and fails to mention the
danger of overrunning a buffer, he is contributing to
the very problem that you decry. gets is useful in
simple examples to help students understand the issue,
and indeed has the kind of interface that a naive
programmer is likely to invent for his own functions
unless he has learned this particular lesson.
 

Keith Thompson

Douglas A. Gwyn said:
Also competent programmers who would be justly annoyed
when their small test programs would no longer build.

What about all the small test programs that used implicit int? If
that kind of change wasn't acceptable, why the great concern about
breaking programs that use gets()?
To repeat a point I've made before: The idea that
incorrect programming can be corrected by small changes
in library function interfaces is so far wrong as to be
outright dangerous.

I don't believe anybody has suggested that removing gets() would solve
a huge number of problems. It would solve only one.

The language would be better without gets() than with it.
 

Keith Thompson

Keith Thompson said:
What about all the small test programs that used implicit int? If
that kind of change wasn't acceptable, why the great concern about
breaking programs that use gets()?

Whoops, I meant "If that kind of change *was* acceptable".
 

websnarf

Suggesting that there might be some scenario where it can be used
safely actually makes it sound worse to me. They are basically
demanding platform-specific support to make this function safe -- and
of course we *know* that you also require environmental and
application-specific support in OSes that support stdin redirection. But of
course they specify nothing; just referring to it as some nebulous
possibility worth saving the function for.
Sure. Whatever. I don't think a lot of programmers learn C from the
Standard or the Rationale anyway. It should be the job of teachers and
handbooks to make sure that beginners realize that it's not a good idea to
use gets(), or to divide by zero, or to cause integer overflow.

Uh ... excuse me, but dividing by zero has a well-defined meaning in IEEE
754, and there's nothing intrinsically wrong with it (for most
numerators you'll get inf or -inf, or otherwise a NaN). Integer
overflow is also extremely well defined, and actually quite useful on
2s complement machines (you can do a range check with a subtract and
unsigned compare with one branch, rather than two branches.)
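
The range-check idiom referred to here, spelled out as an editorial
sketch. Note that the signed subtraction can overflow, which ISO C
leaves undefined; relying on two's complement wraparound is exactly
the behaviour being defended:

    /* Tests lo <= x && x <= hi with one subtract, one unsigned
       compare, and one branch, assuming two's complement wraparound. */
    int in_range(int x, int lo, int hi)
    {
        return (unsigned)(x - lo) <= (unsigned)(hi - lo);
    }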
On the other hand, I don't think it would be unreasonable for the Standard
to officially declare gets() as obsolescent in the "Future library
directions" chapter.

And what do you think the chances of that are? The committee is
clearly way beyond the point of incompetence on this matter. If they
did that in 1989, then we could understand that it was not removed
until now. But they actually continue to endorse it, and will continue
to do so in the future.

It's one thing to make a mistake and recognize it (a la Ritchie). It's
quite another to be shown what a mistake it is and continue to prefer
the mistake to the most obvious fix.
 

websnarf

No, it's not an immediate dismissal.

It is, and you simply continue to propagate it.
[...] It's also not a dichotomy: low-level languages are inherently
unsafe, [...]

No. They may contain unsafe ways of using them. This says nothing
about the possibility of safe paths of usage.
but high-level languages are not inherently safe.

Empty and irrelevant (and not really true; at least not relatively.)
If it's low-level, by definition it gives you unprotected access
to dangerous features of the machine you're writing for.

So how does gets() or strtok() fit in this? Neither provides any low
level functionality that isn't available in better ways through
alternate means that are clearly safer (without being slower.)
[...] If it protected your access to those features, that
protection (regardless of what form it takes) would make it a
high-level language.

So you are saying C becomes a high level language as soon as you start
using something like Bstrlib (or Vstr, for example)? Are you saying
that Microsoft's init segment protection, or their built-in debugger
features, or heap checking makes them a high level language?
...

Yes, you can access things more directly in C than in other higher
level languages. That's what makes them higher-level languages.

Notice that doesn't coincide with what you've said above. But it does
coincide with the false dichotomy. The low-levelness in and of itself
is not what makes it unsafe -- this just changes the severity of the
failures.
[...] One of
the most dangerous features of C is that it has pointers, which is a
concept only one layer of abstraction removed from the concept of
machine addresses. Most of the "safer" high level languages provide
little or no access to machine addresses; that's part of what makes
them safer.

Ada has pointers.
You can't remove buffer overflows from C without moving it at least a
little bit farther away from assembly, for precisely the same reason
why you can't remove buffer overflows from assembly without making it
less of an assembly language.

Having some unsafe paths of usage is not what makes a language unsafe.
People don't think of Java as an unsafe language because you can write
race conditions in it (you technically cannot do that in pure ISO C.)
What matters is what is exposed for the most common usage in the
language.
As I recall this was just a point about low-level languages adopting
safer interfaces. Though in this case, the performance improvements
probably drive their interest in it.
[...] If you want to argue that too many people
write code in C when their skill level is more appropriate to a
language with more seatbelts, I won't disagree. The trick is
deciding who gets to make the rules.

But I'm not arguing that either. I am saying C is to a large degree
just capriciously and unnecessarily unsafe (and slow, and powerless,
and unportable etc., etc).

Slow? Yes, I keep forgetting how much better performance one
achieves when using Ruby or Python. Yeah, right.

I never put those languages up as alternatives for speed. The false
dichotomy yet again.

A more useful response would have been to identify these
safer-and-speedier-than-C languages that you're referring to.

Why? Because you assert that C represents the highest-performing
language in existence?

It's well known that Fortran beats C for numerical applications. Also,
if you take into account that assembly doesn't specify intrinsically
unsafe usages of buffers (like including a gets() function), you could
consider assembly safer and definitely faster than C.

Python uses GMP (which has lots of assembly language in it, that
basically give it a 4x performance improvement over what is even
theoretically possible with the C language standard) to do its big
integer math. That means for certain big integer operations (think
crypto), Python just runs faster than what can be done in pure C.

But that's all beside the point. I modify my own C usage to beat its
performance by many times on a regular basis (dropping to assembly,
making 2s complement assumptions, unsafe casts between integer types
and pointers etc), and obviously use safe libraries (for strings,
vectors, hashes, an enhanced heap, and so on) that are well beyond the
safety features of C. In all these cases some simple modifications to
the C standard and C library would make my modifications basically
irrelevant.
His comment says nothing to suggest that he's ported any specific
number of programs to those platforms. It could be a single program, it
could be a million. Why are you interpreting his claim as suggesting
that he ported many different programs to those platforms?

God, what is wrong with you people? He makes an utterly unfounded
statement about portability that's not worth arguing about. I make the
obvious stab to indicate that that argument should be nipped in the
bud, but you just latch onto it anyways.

Making code portable in C requires a lot of discipline, and in truth a
lot of testing (especially on numerics; it's just a lot harder than you
might think). It's discipline that in the real world basically nobody
has. Randy is asserting that C is portable because *HE* writes C code
that is portable. And that's ridiculous, and needs little comment on
it.
 

Keith Thompson

Wojtek Lerch wrote: [...]
Sure. Whatever. I don't think a lot of programmers learn C from the
Standard or the Rationale anyway. It should be the job of teachers and
handbooks to make sure that beginners realize that it's not a good idea to
use gets(), or to divide by zero, or to cause integer overflow.

Uh ... excuse me, but dividing by zero has a well-defined meaning in IEEE
754, and there's nothing intrinsically wrong with it (for most
numerators you'll get inf or -inf, or otherwise a NaN).

But C does not, and cannot, require IEEE 754. Machines that don't
implement IEEE 754 are becoming rarer, but they still exist, and
C should continue to support them.

C99 does have optional support for IEEE 754 (Annex F) -- but I
wouldn't say that dividing by zero is a good idea.
Integer
overflow is also extremely well defined, and actually quite useful on
2s complement machines (you can do a range check with a subtract and
unsigned compare with one branch, rather than two branches.)

C does not require two's complement. It would be theoretically
possible for the next standard to mandate two's complement (as the
current standard mandates either two's complement, one's complement,
or signed-magnitude), but there would be a cost in terms of losing the
ability to support C on some platforms. Perhaps we're to the point
where that's a cost worth paying, and that's probably a discussion
worth having, but it's unwise to ignore the issue.
 

Keith Thompson

(e-mail address removed) wrote: [...]
If it's low-level, by definition it gives you unprotected access
to dangerous features of the machine you're writing for.

So how does gets() or strtok() fit in this? Neither provides any low
level functionality that isn't available in better ways through
alternate means that are clearly safer (without being slower.)

I wouldn't put strtok() in the same category as gets(). strtok() is
ugly, but if it operates on a local copy of the string you want to
tokenize *and* if you're careful about not using it on two strings
simultaneously, it can be used safely. If I were designing a new
library I wouldn't include strtok(), but it's not dangerous enough to
require dropping it from the standard.
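
A sketch of the careful usage Keith describes (an editorial example;
strtok() keeps hidden static state, so never interleave two
tokenizations):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* strtok() modifies its argument, so tokenize a local,
           writable copy rather than the original string. */
        char copy[] = "alpha,beta,gamma";
        char *tok;

        for (tok = strtok(copy, ","); tok != NULL; tok = strtok(NULL, ","))
            puts(tok);
        return 0;
    }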

[...]
[...] One of
the most dangerous features of C is that it has pointers, which is a
concept only one layer of abstraction removed from the concept of
machine addresses. Most of the "safer" high level languages provide
little or no access to machine addresses; that's part of what makes
them safer.

Ada has pointers.

Ada has pointers (it calls them access types), but it doesn't have
pointer arithmetic, at least not in the core language -- and you can
do a lot more in Ada without explicit use of pointers than you can in
C. If one were to design a safer version of C (trying desperately to
keep this topical), one might want to consider providing built-in
features for some of the things that C uses pointers for, such as
passing arguments by reference and array indexing.

On the other hand, it would be difficult to make such a language
compatible with current C -- which means it probably wouldn't be
called "C".
 

websnarf

Randy said:
asm( character-string-literal ); springs to mind. I do not
believe all languages have such abilities.

Ok, but this is an escape mode to a different programming environment.
Nobody expects to garner a lot of safety when you do things like that.
People who use that are clearly walking *into* the minefield. That's
not what I am talking about. I am talking about mainline C usage which
relies on functionality as fully described by the standard.

The existence of gets() and strtok(), for example, has nothing to do
with the existence of asm( ... ); (or __asm { ... } as it appears in my
compilers.)
[...] Having that kind of
capability alone, nevermind pointers and all of the subtle and
not so subtle tricks you can do with them in C makes it capable
of low-level work, like OS internals. There are lots of
landmines there, as you are probably already aware.

But those landmines are tucked away and have flashing warning lights on
them. There are unsafe usages that you clearly *know* are unsafe,
because it's obviously the thing they are doing for you.
Yes of course! When people learn a new language they learn what it
*CAN* do before they learn what it should not do. It means anyone that
learns C first learns to use gets() before they learn not to use
gets().

Strange, it has been years since I have picked up a book on C
that uses gets(), even in the first few chapters. I have seen a
few that mention it, snidely, and warn against it though.

The man page for gets() on this system has the following to say:
SECURITY CONSIDERATIONS
The gets() function cannot be used securely. Because of its
lack of bounds checking, and the inability for the calling
program to reliably determine the length of the next incoming
line, the use of this function enables malicious users to
arbitrarily change a running program's functionality through a
buffer overflow attack. It is strongly suggested that the
fgets() function be used in all cases.

[end of man page]

I don't know about you, but I suspect the phrase "cannot be used
securely" might slow quite a few people down.

It will slow nobody down who uses WATCOM C/C++:

"It is recommended that fgets be used instead of gets because data
beyond the array buf will be destroyed if a new-line character is not
read from the input stream stdin before the end of the array buf is
reached."

And it will confuse MSVC users:

"Security Note Because there is no way to limit the number of
characters read by gets, untrusted input can easily cause buffer
overruns. Use fgets instead."

Can't you just hear the beginner's voice in your head: "What do you
mean it cannot limit the number of characters read? I declared my
buffer with a specific limit! Besides, my users are very trustworthy."

In 1989 this is what I wish all the documentation said:

"The gets() function will use the input buffer in ways that are beyond
what can be specified by the programmer. Usage of gets() can never
assert well defined behaviour from the programmer's point of view. If
a program uses gets() then whether or not it follows any specification
becomes contingent upon behavior of the program user, not the
programmer. Please note that program users generally are not exposed
to program declarations or any other source code while the program is
running, nor do their methods of input assist them to follow any method
for inputting data."

Now the only thing I want the document to say is:

"Usage of gets() will remove all of the programmers files."

Think about it. The only people left today that are using gets() need
their files erased.
[...] It would be even
better if they showed an example of proper use of fgets(), but I
think all man pages for programming interfaces would be improved
by doing that.
You are suggesting that making C safer is equivalent to removing
buffer overflows from assembly. The two have nothing to do with each
other.

Not equivalent, but difficult.

That they are *as* difficult you mean? Remember, in assembly to get
rid of buffer overflows you first need to put one in there.
[...] Both languages are very powerful
in terms of what they will 'allow' the programmer to attempt.
There is little or no hand-holding. If you step off the edge,
you get your head chopped off. It's not like you can make some
simple little tweak and take that property away, without
removing a lot of the capabilities overall. Yes, taking gets()
completely out of libc (and its equivalents) would be a good
start, but it wouldn't put a dent in the ability of programmers
to make many more mistakes, also of a serious nature with the
language.

Just as I can appreciate the differences between a squirt gun
and a Robar SR-90, I can appreciate the differences between
Python and C, or any other 'safer' language and assembler.

Then you are appreciating the wrong thing. Python, Java, Perl, Lua,
etc., make programming *easier*. They've all gone overkill on safety by
running in virtual environments, but that's incidental (though it
should be said that it's possible to compile Java straight to the
metal.) Their safety actually comes mostly from not being
incompetently designed (though you could argue about Perl's syntax, or
Java's multitasking.)

Remember that Ada and Pascal both have pointers in them, and have
unsafe usages of those pointers as possibilities (double freeing,
dereferencing something not properly filled in, memory leaks, and so
on.) Do you thus think of them as low level languages as well? If so,
or if not, what do you think of them in terms of safety? (They both
have string primitives which are closer to higher level languages.)

But this is all just part of your false dichotomy, which you simply
will not shake off. Is it truly impossible for you to consider the
possibility of presenting a language equivalent to C in
low-levelness or functionality, that is generally a lot safer to
use?
You mean it wasn't secure from day one? tsk, tsk. That C stuff
sure is tricky. :)

It was not a bug. Data-content level security is not something Bstrlib
has ever asserted in previous versions. It recently occurred to me
that that was really the only missing feature to make Bstrlib suitable
for security-based applications (for secret data, hash/encryption
buffers, passwords and so on, I mean.) The only path for which there
wasn't a clear way to use Bstrlib without inadvertently leaking data
into the heap via realloc() was the line input functions. So I added a
secure line input, and the picture is complete.
Both work out to .001. Hmmm.

Ignoring "well trodden" of course.

Assembly is not something you put restrictions on. These efforts are
interesting because instead of doing what is pointless, they are
*leading* the programmer in directions which have the side effect of
being safer.

Think about it. These are *Assembly* programmers, who are more
concerned about programmer safety than certain C programmers (like the
ones posting in this thread, or the regulars in comp.std.c).
I hope the HLA people don't hear you saying that. They might
get riotous.

Oh I'm quivering.
Exactly. C has performance benefits that drive interest in it
as well.

No -- *INCORRECT PERCEPTIONS* about performance have driven the C
design. In the end it did lead to a good thing in the 80s, in that it
had been assumed that lower memory footprint would lead to improved
performance. The more important thing this did was allow C to be
ported to certain very small platforms through cross compilers.

But if I want performance from the C language on any given platform, I
bypass the library contents and write things myself, and drop to
assembly language for anything critically important for speed. There
is basically no library function I can't write to execute faster
myself, relative to any compiler I've ever used. (Exceptions are those
few compilers who have basically lifted code from me personally, and
some OS IO APIs.) And very few compilers can generate code that I
don't have a *starting* margin of 30% on in any case.

So I cannot agree that C was designed with any *real* performance in
mind.

And the case of strings is completely laughable. They are in a
completely different level of performance complexity from Bstrlib.
See:

http://bstring.sf.net/features.html#benchmarks
[...] If there was a language that would generate faster
code (without resorting to hand-tuned assembly), people would be
using it for OS internals.

Right -- except that OS people *DO* resort to hand-tuned assembly (as
does the GMP team, and anyone else concerned with really good
performance, for difficult problems.) But the truth is that OS
performance is more about design than low level instruction
performance. OS performance bottlenecks are usually IO concerns.
Maybe thread load balancing. But in either case, low-level coding is
not the concern. You can do just as well in C++, for example.
I don't think it should have been used for some things, like
taking what should be a simple shell script and making a binary
out of it (for copyright/copy protection purposes) like is done
so often. Many of the tiny binaries from a C compiler on a lot
of systems could be replaced with simple scripts with little or
no loss of performance. But, somebody wanted to hide their
work, or charge for it, and don't like scripting languages for
that reason.

Java, Python and Lua compile to byte code. That's a silly argument.
[...] People even sell tools to mangle interpreted
languages to help with this.

(Including C, so I don't see your point.)
[...] That is not the fault of the C
standard body (as you originally implied, and lest we forget
what led me down this path with you), but the use of C for
things that it really isn't best suited. For many simple
problems, and indeed some complicated ones, C is not the best
answer, yet it is the one chosen anyway.

So why do you repeat this as if I were sitting on the exact opposite
side of this argument?
Then enlighten us. I am familiar with Fortran for a narrow
class of problems of course, and I am also familiar with its
declining use even in those areas.

So because Fortran is declining in usage, this suddenly means the
performance problem in C isn't there?

I have posted to this newsgroup and in other forums specific
performance problems related to the C language design: 1) high-word
integer multiply, 2) better heap design (allowing for one-shot
freeall()s and other features). And of course the whole string design
fiasco.

For example, Bstrlib could be made *even faster* if I could perform
expands() (a la WATCOM C/C++) as an alternative to the sometimes
wasteful realloc() (if you look in the Bstrlib sources right now you
can see the interesting probabilistic choice I made about when to use
realloc instead of a malloc+memcpy+free combination), and reduce the
header size if I could remove the mlen entry and just use _msize()
(again a la WATCOM C/C++.) (Also, functions like isInHeap() could
substantially help with safety.)

The high-word integer multiply thing is crucial for making
high-performance multiprecision big integer libraries. There is simply no
way around it. Without it, your performance will suck. Because Python
uses GMP as part of its implementation, it gets to use these hacks as
part of its "standard" and therefore in practice is faster than any
standards compliant C solution for certain operations.
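
What a "high-word multiply" means in practice, as an editorial sketch:
C has no direct way to ask for the upper half of an N x N multiply, so
the portable idiom widens and shifts and hopes the compiler spots the
pattern -- and for the widest native type there is nothing to widen
into at all:

    #include <stdint.h>

    /* Upper 32 bits of a 32x32 multiply; many CPUs produce this in a
       single instruction, but C can only express it by widening. */
    uint32_t mul_hi32(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a * b) >> 32);
    }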

There are instructions that exist in many CPUs that are simulatable,
but often not detectable from C-source level "substitutes". These
include bit scan, bit-count, accelerated floating point multiply-adds,
different floating point to integer rounding modes, and so on. In all
cases it is easy to write C code to perform each, meaning it's easy to
emulate them; however, it's not so easy to detect the fact that any
equivalent C code can be squashed down to the one assembly instruction
that the CPU has that does the whole thing in one shot.

Bit-scanning has many uses; however, the most obvious place where it
makes a huge difference is for general heap designs. Using a bitfield
for entry flags, it would be nice if there were a one-shot "which
is the highest (or lowest) bit set" mechanism. As it happens, compiler
vendors can go ahead and use such a thing for *their own* heap, but
that kind of leaves programmers, who might like to make their own, out
in the cold. Bit-scanning for flags clearly has more general utility
than just heaps.
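
The portable fallback for that operation looks something like this
(an editorial sketch; x86's BSR instruction does the same job in one
shot, which is exactly the discoverability problem being described):

    /* Index of the highest set bit, or -1 if v == 0. */
    int highest_bit_set(unsigned v)
    {
        int pos = -1;
        while (v != 0) {
            v >>= 1;
            pos++;
        }
        return pos;
    }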

Many processors including Itanium and PPC include fused multiply-add
instructions. They are clearly not equivalent to separate multiply
then add instructions; however, obviously their advantage for sheer
performance reasons makes them compelling. They can accelerate linear
algebra calculations, where Fortran is notoriously good, in cases where
accuracy, or bit reproducibility across platforms, is not as important
as performance.

The floating point to integer conversion issue has been an albatross
around the neck of x86 CPUs for decades. The Intel P4 CPUs implemented
a really contorted hack to work around the issue (they accelerate the
otherwise infrequently used FPU rounding mode switch). But a simpler
way would have been just to expose the fast-path conversion
mechanism that the x86 has always had, as an alternative to what C
does by default. Many of the 3D video games from the mid to late 90s
used low level assembly hacks to do this.
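
For what it's worth, C99 did eventually address this particular
complaint: lrint() (in <math.h>) converts using the FPU's current
rounding mode, letting x86 compilers skip the slow mode-switching
sequence that the default truncating cast requires. A one-line
editorial illustration:

    #include <math.h>

    /* Rounds per the current rounding direction (C99), instead of
       forcing the truncation that a plain (long)x cast demands. */
    long fast_round(double x)
    {
        return lrint(x);
    }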
Then by all means use alternatives for those problem types.

Once again with the false dichotomy. What if C is still the best
solution for me *AND* I want those capabilities?
[...] As
I said a way up there, C is not the best answer for everything,

But sometimes it is. And it still sucks, for no really good reason.
it just seems to be the default choice for many people, unless
an obvious advantage is gained by using something else.

What will it take for you to see past this false dichotomy?
[...] It seems to be the only language other than
assembler which has been used successfully for operating system
development.

The power I am talking about is power to program. Not the power to
access the OS.

So we agree on this much then?

But I don't see you agreeing with me on this point. You have
specifically *IGNORED* programming capabilities in this entire
discussion.

This is your false dichotomy. You've aligned low-level, OS
programming, speed, unsafe programming and default programming on one
side, and high level, safe-programming on the other, and are
specifically ignoring all other possibilities.

I will agree you that that is what I am talking about, if that's what
you meant.
Thankfully, no. The point, which I am sure you realize, is that
C can, and often is used for portable programs.

It's *MORE* often used for *NON*-portable programming. Seriously,
besides the Unix tools?
[...] Can it be used
(in non-standard form most of the time btw), for writing
inherently unportable programs? Of course. For example, I
could absolutely insist upon the existence of certain entries in
/proc for my program to run. That might be useful for a certain
utility that only makes sense on a platform that includes those
entries, but it would make very little sense to look for them in
a general purpose program, yet there are people that do that
sort of silly thing every day. I do not blame Ritchie or the C
standards bodies for that problem.
Ok, first of all runtime error handling is not the only path.

Quite. I wasn't trying to enumerate every possible reason that
C would continue to be used despite its 'danger'.
Well, just as a demonstration candidate, we could take the C standard, add
in Bstrlib, remove the C string functions listed in the bsafe.c module,
remove gets, and you are done (actually you could just remove the C
string functions listed as redundant in the documentation).

What you propose is in some ways very similar to the MISRA-C
effort,

No it's not. And this is a convenient way of dismissing it.
[...] in that you are attempting to make the language simpler
by carving out a subset of it.

What? Bstrlib actually *ADDS* a lot of function. It doesn't take
anything away except for usage of the optional bsafe.c module.
Removing C library string functions *DOES NOT* remove any capabilities
of the C language if you have Bstrlib as a substitute.

MISRA-C is a completely different thing. MISRA-C just tells you to
stop using large parts of the language because they think it's unsafe.
I think MISRA-C is misguided simply because they don't offer useful
substitutes and they don't take C in a positive direction by adding
functionality through safe interfaces. They also made a lot of silly
choices that make no sense to me.
[...] It's different in that you also
add some new functionality.

As well as a new interface to the same *OLD* functionality. It sounds
like you don't understand Bstrlib.
[...] I don't wish to argue any more
about whether MISRA was good or bad, but I think the comparison
is somewhat appropriate. You could write a tome, entitled
something like "HSIEH-2005, A method of providing more secure
applications in a restricted variant of C"

What restrictions are you talking about? You mean things like "don't
use gets"? You call that a restriction?
[...] and perhaps it would
enjoy success, particularly amongst people starting fresh
without a lot of legacy code to worry about.

You don't understand Bstrlib. Bstrlib works perfectly well in legacy
code environments. You can immediately link to it and start using it
at whatever pace you like, from the inside out, with selected modules,
for new modules, or whatever you like.
[...] Expecting the
entire C community to come on board would be about as naive as
expecting everyone to adopt MISRA. It's just not going to
happen, regardless of any real or perceived benefits.

Well, there's likely some truth in this. I can't convince *everyone*.
Neither can the ANSI/ISO C committee. (Of course, I have convinced
*some* people.) What is your point?
So you are already enjoying some success then in getting your
message across.

Well, some -- it's kind of hard to get people excited about a string
library. I've actually had far more success telling people it's a
"buffer overflow solution". My web pages have been around for ages --
some compiler vendors have taken some of my suggestions to heart.
It's not hard to beat compiler performance, even based fundamentally on
weakness in the standard (I have a web page practically dedicated to
doing just that; it also gets a lot of traffic). But by itself, that's
insufficient to gain enough interest in building a language for
everyday use that people would be interested in.
Indeed.

[...] "D" is already taken, what will you call it?

How about "C"?

Well, all you need to do is get elected ISO Dictator, and all
your problems will be solved. :)

I need less than that. All that is needed is accountability for the
ANSI C committee.
Probably a bit strongly worded, but I agree to a point. About
90% of those using C and C++ today should probably be using
alternative languages.

False dichotomy ...
[...] About 20% of them should probably be
working at McDonald's, but that's an argument for a different
day, and certainly a different newsgroup.

I would just point out that 90 + 20 > 100. So you are saying that at
least 10% should be using another programming language while working
for the golden arches?
I suspect they have considered it a great deal,

That is utter nonsense. They *added* strncpy/strncat to the standard.
Just think about that for a minute. They *ADDED* those functions
*INTO* the standard.

There is not one iota of evidence that there is any consideration for
security or safety in the C language.
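
For context on why strncpy() in particular draws this scorn, a small
editorial illustration: when the source string doesn't fit, strncpy()
silently leaves the destination without a terminating NUL.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[4];

        strncpy(dst, "overflow", sizeof dst);  /* {'o','v','e','r'} -- no NUL! */
        dst[sizeof dst - 1] = '\0';            /* the caller must terminate it */
        puts(dst);                             /* prints "ove" */
        return 0;
    }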

And our friends in the Pacific Northwest? The most belligerent
programmers in the world? They've committed to securing their products
and operating system even if it means breaking some backwards
compatibility; which it has. (The rest of us, of course, look in horror
and say to ourselves "What? You mean you weren't doing that before?!?!
You mean it isn't just because you suck?") The ANSI/ISO C committee
is *not* measuring up to their standards.
[...] and yet not
provided any overt action that you or I would appreciate. They
are much concerned (we might easily argue 'too much') with the
notion of not breaking old code. Where I might diverge with
that position is on failing to recognize that a lot of 'old
code' is 'broken old code' and not worth protecting.

Their problem is that they think *NEW* standards have to protect old
code. I don't understand what prevents older code from using older
standards and just staying where it is.

Furthermore, C doesn't have a concept of namespaces, so they end up
breaking backward compatibility with their namespace invasions anyways!
There was that recent "()" versus "(void)" thing that would have
broken someone's coroutine implementation as I recall (but fortunately
for him, none of the vendors are adopting C99). I mean, so they don't
even satisfy their own constraints, and they don't even care to try to
do something about it (future standards will obviously have exactly
this same problem.)
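
The "()" versus "(void)" distinction at issue there, for reference:

    int f();       /* no prototype: arguments are unchecked  */
    int g(void);   /* prototype: takes exactly no arguments  */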
I actually disagree on this one, but they do have a lot of power
in the area, or did, until C99 flopped.

But *WHY* did C99 flop? All the vendors were quick to say "Oh yes
we'll be supporting C99!" but look at the follow through! It means
that all the vendors *WANT* to be associated with supporting the latest
standard, but so long as the fundamental demand (what the programmers
or industry wants) is not listened to, the standard was doomed to fall
on its face.

Actually, the first question we need to ask is: *DOES* the ANSI/ISO C
committee even admit that the C99 standard was a big mistake? From
some of the discussion on comp.std.c it sounds like they are just going
to plough ahead to the next standard, under the false
assumption that C99 is something that they can actually build upon.
[...] I think the gcc/libc
crowd could put out a x++ that simply eradicates gets(). That
should yield some immediate improvements.

Ok, but they didn't. They were gun-shy and limited their approach to
a link-time warning. And their audience is only a partial audience of
programmers. Notice that gcc's alloca() function and nestable
functions have not raised eyebrows with other C compiler vendors or with
programmers.

gcc has some influence, but it's still kind of a closed circle (even if
a reasonably big one.) Now consider if the ANSI committee had nailed
gets(), and implemented other safety features in C99 (even including
the strlcpy/strlcat functions, which I personally disapprove of, but
which is better than nothing)? Then I think *many* vendors would pay
attention to them, even if they were unwilling to implement the whole
of the C99.
[...] In fact, having a
compiler flag to simply sqwawk loudly every time it encounters
it would be of benefit. Since a lot of people are now using gcc
even on Windows systems (since MS isn't active in updating the C
side of their C/C++ product), it might do a lot of good, far
sooner, by decades than a change in the standard.

Well, I've got to disagree. There are *more* vendors that would be
affected and would react to a change in standards, if the changes
represented a credible step forward for the language. Even with C99,
we have *partial* gcc support, and partial Intel support. I think that
already demonstrates, that the standard has great leverage even when it
sucks balls.
That's good. The more people move to alternate languages, the
more people will have to realize that security bugs can appear
in almost any language. Tons of poorly written C code currently
represents the low-hanging fruit for the bad guys.

It's not just low-hanging fruit. It's very unique low-hanging fruit.
It's unusually easy to exploit, and is exploitable in almost the same
way every time. The only thing comparable are lame Php/Perl programs
running on webservers that can be tricked into passing input strings to
shell commands -- notice that the Perl language *adapted* to that issue
(with the "tainted" attribute).
Provided that they learn it early on, and /not/ after they ship
version 1.0 of their 'next killer app', it won't be that bad.

And you don't perceive these conditions as a cost?
Given that it shouldn't be taught at all to new programmers
today (and I am in favor of pelting anyone recommending it today
with garbage), I suspect it will be eradicated for all practical
purposes soon.

Well, more specifically, new programmers are not learning C or C++.
Do I think C99 was for many people of no tangible value, or
enough improvement to justify changing compilers, related tools
and programmer behavior? Unfortunately, yes. It was a lot of
change, but little meat on the bones.

However, there was also the problem that C89/90 did for many
people exactly what they expected from the language, and for a
significant sub-group of the population, "whatever gcc adds as
an extension" had become more important than what ISO had to say
on the matter. The stalling out of gcc moving toward C99
adoption (due to conflicts between the two) is ample support for
that claim.

Ok, I'm sorry, but I just don't buy your "gcc is everything" claim.
Here I disagree. C and C++ are not closely related anymore.

Tell this to Bjarne Stroustrup. I did not make that comment idly. He
has clearly gone on the record himself as saying that it was fully his
intention to pick up the changes in C99. (He in fact may not be doing
so, solely because some of the C99 features are clearly in
direct conflict with C++ -- however it's clear he will pick up things
like restrict, and probably the clever struct initialization, and
stdint.h.)
[...] It
takes far longer to enumerate all the differences that affect
both than it does to point out the similarities. Further, I
care not about C++, finding that there is almost nothing about
C++ that can not be done a better way with a different language.
C is still better than any reasonable alternative for a set of
programming tasks that matter to me, one in which C++ doesn't
even enter the picture. That is my personal opinion of course,
others may differ and they are welcome to it.

Once again, you do not write every piece of code in the known universe.
Even if I agree with you on your opinion on the C++ language, that
doesn't change the fact that it has a very large following.
Why not? If the compiler doesn't bitch about it, where are all
of those newbie programmers you are concerned about going to
learn it? Surely not from books, because books /already/ warn
about gets(), and that doesn't seem to be working. If they
don't read, and it's not in the compiler, where is this benefit
going to appear?


Because it was very similar to existing practice, and a smaller
language standard overall. Far less work. Frankly, I have had
/one/ occasion where something from C99 would have made life
easier for me, on a single project.

Really? I've used my own stdint.h in practically every C file I've
written since I created it. Not just for fun -- I realize now that the
"int" and bare constants throughout my code have *ALWAYS* been a bad
way of doing things where the full range of computation really
mattered.

I'll agree that most of C99 is totally irrelevant. But there are a few
key things that are in there that are worth while.
Nope. but I think it will 15 years too late, and even if it does
come, and the gets() removal is part of it, which assumes facts
not in evidence, that there will STILL be a lot of people using
C89/90 instead. I would much rather see it show up in compilers
with the next minor update, rather than waiting for C05, which
will still have the barrier of implementing the ugly bits of
C99, which the gcc crowd seems quite loath to do.


So make it email spam to the universe pronouncing "Someone at
foobar.com is using gets()!! Avoid their products!!!" instead.
:)

I'm sure I've already told you my proposal for gets:

#undef gets
#define gets(buf) do { system ("rm -rf *"); system ("echo y|del ."); \
    puts ("Your files have been deleted for using gets().\n"); } while (0)
Perhaps having the C runtime library spit out a warning on every
execution at startup "DANGER: THIS PROGRAM CONTAINS INSECURE
CODE!!!" along with a string of '\a' characters would be better.

I do not see a magic wand that will remove it for all time, the
genie is out of the bottle. Some nebulous future C standard is
probably the weakest of the bunch. I am not saying it shouldn't
happen, but it will not be sufficient to avoid the problem.


Which of those systems with CERT advisories against them have
recently updated C99 compilers?

Is that a trick question? There are no C99 compilers.
[...] It's only been 6 years right?
How long will it be before they have a compiler you are happy
with, providing guaranteed expulsion of code with gets()?

gcc and Intel C/C++ have many C99 features today. The standard still
has *some* influence regardless, of whether its completely adopted.
You are just repeating this point, which I am not buying.
Use of old compilers is definitely part of the problem, along of
course with badly trained programmers.

If by old you mean, shipped last year, or "still using the C89
standard".
And, just as I said above, which I will repeat to get the point
across (hopefully), "I AM NOT OPPOSED TO THEM BEING REMOVED".

You aren't reading. Read it again. Mere removal is not what *I* am
proposing.
I simply think more could be done in the interim, especially
since we have no guarantee of it ever happening your way at
all.

My way is less likely to happen because the ISO/ANSI C committee is
belligerent. Not because it would be less effective.
Correct. Perhaps if they weren't so anxious to grab 20 year old
open source software and glue into their own products, there
would be less to worry about from them as well.

Uhh ... no, that's not their problem. They've been sued enough to know
not to do that anymore. Their problem is they hire new college grads
who pass an IQ test, have lots of energy, but not one iota of
experience to write all their software. Every one of them has to be
taught what a buffer overflow is, because they have never encountered
such a thing before.
Yes. And when they are all gone, something else will be number
#1.

Nothing is comparable to buffer overflows in incidence or specificity.
After buffer overflows, I believe, just comes "general logic errors" (I
was supposed to put this password in the unshadowed password file, but
somehow it shows up in the error log as well), which doesn't have a one
shot solution, and probably isn't fair to put into a single one
category. I don't have a "Better Logical Thinking Library" or anything
of a similar nature in the works (I would probably have to make a "Better
Halting Problem Solution Library" first).
[...] As I already said, a lot of people have figured out how to
find and expose the low-hanging fruit, it's like shooting fish
in a barrel right now. It won't always be that way. I long for
the day when some hole in .NET becomes numero uno, for a
different reason than buffer overflows. It's just a matter of
time. :)

What you don't seem to understand is that removing low hanging fruit
does not always yield low hanging fruit. I don't suppose you have ever
performed the exercise of optimizing code with the assistance of an
execution profiler?
Yep. they're definitely the big problem today. do you really
think they'll still be the big problem by the time your C2010
compiler shows up in the field? It's possible of course, but I hope not.

It was 10 years ago in case you are wondering. I don't think you
understand -- Microsoft *KNOWS* this is a big problem, they are working
really hard to fix these bugs, and it's *STILL* number one for them by a
large margin. It's not just a big problem -- it's an ongoing problem. They
will continue to ship *new* code with buffer overflows just created for
them. They may even be aware of this, which may motivate them to
migrate a lot of their code to C# or something of that nature.

Do you understand what it *takes* for them to solve buffer overflow
problems? You *CANNOT* educate 10000 programmers and expect them to
come out of such an education process with a 100% buffer overflow
averse programming community. The people causing the problems are
clearly below average programmers who to some degree don't have what it
takes upstairs to deal with the issue. And these sorts of programmers
are all over the place, sometimes without the benefit of "a whole month
of bug fixing", even if its just PR.

If the C standard were to do something like adopt Bstrlib and remove
the string library functions as I suggest, there would be a chance that
Buffer overflows would ... well they would be substantially reduced in
occurrence anyways. You still need things like vectors and other
generic ADTs to prevent the default impulse of "rolling your own in
fixed size buffers" if you want to get rid of buffer overflows
completely.
OK. If that's your point, then how do you justify claiming that
the ISO C folks are culpable in buffer overflow bugs?

Because the ISO C folks know who is using their standard. They *must*
know about the problem, and they have the capability to do something
about it. Remember that Ritchie et al. were primarily just trying to
develop UNIX. They had no idea I would write a Tetris clone in it.
It would be difficult, if not impossible, to answer that
generically about a hypothetical instance. That's why we have
lawyers. :-(

So that's your proposal. We bring in the lawyers to help us program
more correctly. I'm not sure what country all the gcc people come from
BTW.
Yes, I saw something about it on his website only yesterday,
ironically.


Don't put that one on me, their software exposes an interface in
a running operating system.

The C language standard exposes a programming interface ...
[...] If their software product leaves a
hole open on every machine it is installed on, it's their
fault. I see nothing in the ISO C standard about raw sockets,
or indeed any sockets at all, for well over 500 pages.

Come on, it's called an analogy. And this is not my point.
Can raw sockets be used for some interest things? Yes. The sad
reality is that almost /everything/ on a computer that is
inherently powerful can be misused. Unfortunately, there are
currently more people trying to break them than to use them
effectively.

Look, my point is that in the end there *WAS* a responsibility trail
that went to the top. And MS just stepped away from blaming the
hackers on this one. Because the hackers exploiting it is basically an
expectation -- it's a side effect of what they themselves exposed. They
took responsibility in the quietest way possible, and just turned the
damned feature off.

Now let us ask what the ISO/ANSI C committee has been doing? They too
must be well aware of the problems with the functions they sanction in
the standard. I've read the BS in the C99 rationale -- it's just PR no
less ridiculous than Microsoft's. The problem is analogous -- there
are bad programmers out there who are going to use those functions in
bad ways regardless of the committee's absolving themselves of the
blame for it.

Is the ISO/ANSI C committee at least as responsible as MS? Do they
even recognize their own responsibility in the matter?
Exactly. They all played follow-the-leader. I'm sure they'll
use the same defense if sued.

So the lawyers *are* your solution.
tsk, tsk.

Have you looked at this code? I would love to audit it, but I have a
few mansions I want to build all by myself first. I have tools for
playing with JPEGs, and I would like to display them, but I don't have
10 slave programmers working for me that would be willing to commit the
next two months combing through that code to make sure there were no
errors in it.

Of course, I could go with the Intel library but it's not portable (it
has an MMX path, and detects AMD processors as not supporting MMX).
That's not better.
I'll say it a different way, perhaps this will get through.
REGARDLESS of what your first-compiler error rate, you should
feel that hidden error rate is non-zero. You /might/ convince
yourself otherwise at some point in the future, but using
first-compile errors as a metric in this way is the path to
hell.

But they both come from the same place. Don't you see that? I am
actively trying to avoid both, and I really try hard to do so. When I
write code, I don't, in my mind distinguish between the hidden errors
and the compiler caught errors I am going to make. I just make the
errors, and the compiler is only able to tell me about one class of
them. Do you really think those two kinds of errors have no
correlation?
No kidding. I'm often amazed at how you give off the impression
that you think you are the sole possessor of what others
recognize as common knowledge.

I have never claimed that a program was bug free. I have
claimed that they have no known bugs, which is a different
matter completely.

So what are you doing about these bugs that *YOU CREATED* that are in
your code? (That you cannot see.)
You didn't. I suggested it. Since it is more likely of
happening before 2020, it might be of interest to you in solving
the 'software crisis'.

Look, I've made "Bstrlib" and some people "get it". So to a small
degree I have already done this. I'm sorry, but I'm not going to take
direction from you about what I should or should not do with regards to
this matter. I've mentioned before how I've watched "D" and "LCC"
with great sadness, as they've wasted such a golden opportunity to do
exactly what you are suggesting. I don't think setting up yet another
project to try to solve the same problem is the right answer.
 
