Bounds checking functions


Aaron Hsu

Hey all,

After seeing the Secure version I/O functions thread, it occurred to me
that maybe not everyone agrees with the almost universal adage that I
have heard. I have always been told that using things like strlcpy and
other explicitly bounded functions is better than using the
non-bounded versions like strcpy.

Is this a matter of good programming style, or is this just needless overhead?
 

Paul Hsieh

Aaron said:
> After seeing the Secure version I/O functions thread, it occurred to
> me that maybe not everyone agrees with the almost universal adage
> that I have heard. I have always been told that using things like
> strlcpy and other explicitly bounded functions is better than using
> the non-bounded versions like strcpy.

Fundamentally, if you don't make errors in logic, you can use strcpy()
any time you could use strlcpy(). (Some people consider this the only
proper approach -- these people live in a fantasy world where nobody
ever makes programmer errors.)

The idea behind strlcpy() or the strcpy_s() kind of things is that by
exposing the length explicitly, they will reduce the number of errors
(you can still get the length wrong through copy-paste errors, or
other errors in reasoning). This is fine, but reduction is not the
same thing as elimination. If, for example, we say that there will be
an exponentially increasing number of programs or programmers over
time, and we only reduce the number of errors by a finite percentage,
then we have done little more than buy ourselves some time before
some catastrophic bugs hit us due to a bounds overrun error.

This is the fundamental advantage of using solutions that *solve* the
problem of buffer overruns in some way or another. Bstrlib does this,
of course.

But, of course, if you believe C is a dead language, and that we have
reached or will soon reach an asymptote in the number of programmers
who use it, then perhaps this is not an issue. In practice, carving
percentages out of the error scenarios, then going after the remainder
one by one, could work too, provided there is no longer any
significant growth in the number of C programs. I kind of doubt that
the statistics on sourceforge.net bear this assumption out just yet.
Certainly C's growth has slowed, but I think the asymptote has not
quite been reached.
> Is this a matter of good programming style, or is this just needless
> overhead?

It's a hack. It's not technically needed, and when it can be used for
its primary purpose its effects are partial. I don't think the
question of style or needless overhead enters into it. It's more
philosophical.
 

santosh

Aaron said:
> Hey all,
>
> After seeing the Secure version I/O functions thread, it occurred to
> me that maybe not everyone agrees with the almost universal adage
> that I have heard. I have always been told that using things like
> strlcpy and other explicitly bounded functions is better than using
> the non-bounded versions like strcpy.
>
> Is this a matter of good programming style, or is this just needless
> overhead?

I personally don't consider strlcpy or strcpy_s much of an
improvement over strcpy, and similarly for the other functions. You
still need to get the length right, and if you do get it right, then
strcpy, strcat etc. are perfectly safe.

I would consider a complete strings package like, say, Bstrlib a
worthwhile improvement over <string.h>, with runtime bounds checking
(optional of course) as icing on the cake; the latter should be very
useful during development.

But the fact remains that C was always designed to be used by people
who are already proficient in programming. It was never designed as
a "first language" or a "teaching language", though its practical
popularity has meant that it has been widely used in those roles too.

I would recommend that new programmers start out with something like
Java or Python and *then* pick up C or C++, even though the reverse was
true in my own case.
 

ymuntyan

Aaron said:
> Hey all,
>
> After seeing the Secure version I/O functions thread, it occurred to
> me that maybe not everyone agrees with the almost universal adage
> that I have heard. I have always been told that using things like
> strlcpy and other explicitly bounded functions is better than using
> the non-bounded versions like strcpy.
>
> Is this a matter of good programming style, or is this just needless
> overhead?

If you write some file-handling stuff, what's better: corrupting the
file system because your buffer is too small and you got wrong
paths, or screwing up your whole system or worse because you wrote
code like

char buf[MAX_PATH];
strcpy(buf, dir);
strcat(buf, file);

? The latter seems worse, but the real better alternative is to avoid
both, and for that you shouldn't use that infamous buf[SMART_SIZE]
in the first place. And that's the only case where "safe" functions
can really help, when you can do sizeof(buf). Otherwise, you're back
to buffer overruns. It's not as "safe" as the proposal authors make
you think it is, by using terms like "safe" and "bounds checking".
And that's one of the problems (the main one, I think): it's
deceiving.

Yes, some people here do say that it's not hard to write correct code
in standard C, but that's a lie, as their real code shows. Or science
fiction, if you wish. A safe string-handling API is good, absolutely.
But don't confuse "safe" meaning what the dictionary says with "safe"
in the title of some paperwork ;)

Yevgen
 

jacob navia

santosh said:
> I personally don't consider strlcpy or strcpy_s much of an
> improvement over strcpy, and similarly for the other functions. You
> still need to get the length right, and if you do get it right, then
> strcpy, strcat etc. are perfectly safe.
>
> I would consider a complete strings package like, say, Bstrlib a
> worthwhile improvement over <string.h>, with runtime bounds checking
> (optional of course) as icing on the cake; the latter should be very
> useful during development.
>
> But the fact remains that C was always designed to be used by people
> who are already proficient in programming. It was never designed as
> a "first language" or a "teaching language", though its practical
> popularity has meant that it has been widely used in those roles too.
>
> I would recommend that new programmers start out with something like
> Java or Python and *then* pick up C or C++, even though the reverse
> was true in my own case.

That should teach you something. You started with C...
The problem, santosh, as I have been saying in ALL these threads for
ages, is that you can be a "good programmer" only 80-90% of the time.

Since you are human, you will always be limited by the borders of your
circuit, the human circuit. This circuit can do things that computers
can't do, but it CAN'T do the things computers do, since it is a
DIFFERENT kind of circuit.

Specifically, the human circuit is NOT able to NEVER make a mistake,
which is what computers ALWAYS do. They NEVER make "mistakes"; they
always do EXACTLY what they are told to do.

This basic fact of software engineering is IGNORED by the "regulars"
here, who always boast of their infallible powers.
 

Richard Heathfield

jacob navia said:

> Specifically, the human circuit is NOT able to NEVER make a mistake,
> which is what computers ALWAYS do. They NEVER make "mistakes"; they
> always do EXACTLY what they are told to do.

Not quite true. Hardware failures, cosmic rays, etc. But nevertheless
*almost* true.
> This basic fact of software engineering is IGNORED by the "regulars"
> here, who always boast of their infallible powers.

Actually, I don't know of anyone here who claims to be infallible (except
perhaps in jest), let alone boasts about it. But you say the "regulars"
*always* boast of their infallible powers; if you are right, you will have
no trouble providing a reference to an article in which such a boast is
made.
 

santosh

jacob said:
> That should teach you something. You started with C...

But not for any particular reason. It just happened.
> The problem, santosh, as I have been saying in ALL these threads for
> ages, is that you can be a "good programmer" only 80-90% of the
> time.
>
> Since you are human, you will always be limited by the borders of
> your circuit, the human circuit. This circuit can do things that
> computers can't do, but it CAN'T do the things computers do, since
> it is a DIFFERENT kind of circuit.
>
> Specifically, the human circuit is NOT able to NEVER make a mistake,
> which is what computers ALWAYS do. They NEVER make "mistakes"; they
> always do EXACTLY what they are told to do.
>
> This basic fact of software engineering is IGNORED by the "regulars"
> here, who always boast of their infallible powers.

What you say is true. That's why we have a spectrum of languages all
the way from assembler to stuff like BASIC, JavaScript and even more
abstracted ones. It helps if the programmer knows a few selected
languages, at equidistant points along this spectrum, as in my case:
assembler (x86), C, C++ (not really well), Java, Perl. Then he will
be able to pick the best one for the job at hand and mix and match
as desired.

Perhaps one reason why your posts are sometimes met with resistance is
your seeming *insistence* that C (with your embellishments) is the
*only* *viable* language for development. The fact is, no one language
has yet managed to satisfactorily address all kinds of software
development, and it's likely that that will not happen for a long time.

BTW, you often point out the large number of downloads that your
compiler receives as indicative of the foresight of your extensions to
C, but do you actually have hard data on what fraction of lcc-win's
users actually make regular use of its extensions to C, and don't
merely use it as a compiler for ANSI C?

Do you have a document anywhere that summarises all your extensions to
ISO C and your rationale for them?
 

Randy Howard

Richard Heathfield said:
> jacob navia said:
> .... snip ...
>
> Not quite true. Hardware failures, cosmic rays, etc. But nevertheless
> *almost* true.

And quite a few other failures as well. Ignoring the real-life aspects
of signal integrity problems in all their myriad forms, other hardware
design issues, and the complexity involved in platform vendors testing
all the likely hardware configurations used by their customers, with
various peripherals, I/O slot population choices, etc., is putting
far too much faith in the "infallible computer". It's probably less
likely in general to make mistakes, but data corruption and other
hardware-induced errors /do/ happen. There is no such thing as
"computers NEVER make mistakes". That's hopelessly naive.
> Actually, I don't know of anyone here who claims to be infallible
> (except perhaps in jest), let alone boasts about it. But you say the
> "regulars" *always* boast of their infallible powers; if you are
> right, you will have no trouble providing a reference to an article
> in which such a boast is made.

There is no language that magically corrects for the inherently
fallible nature of programming. C certainly does much less than most
other languages in this regard, but this is not a secret. No special
club membership is required to know this.

When a programmer makes the choice to use C instead of some other
language, that individual is responsible for producing the software,
finding the bugs, and eliminating them as much as possible.

If the boss tells the programmer(s) to use C, it's a slightly different
issue, but the responsibilities are much the same.

People can whine all they like about the failures of various languages,
but at the end of the day, the outcome is still that whatever language
is chosen, there will still be bugs, logic errors, misuse of APIs,
poorly tested code blocks, etc. that are an issue both during
development and post-deployment.

If someone knows of the magical programming language that makes all of
these issues go away, I would like to be told of it. If there isn't
one, then we'll have to continue to pick languages based upon their
appropriateness for a given task, and continue to fix bugs.
 

jacob navia

santosh said:
> What you say is true. That's why we have a spectrum of languages all
> the way from assembler to stuff like BASIC, JavaScript and even more
> abstracted ones. It helps if the programmer knows a few selected
> languages, at equidistant points along this spectrum, as in my case:
> assembler (x86), C, C++ (not really well), Java, Perl. Then he will
> be able to pick the best one for the job at hand and mix and match
> as desired.

But why can't we make interfaces better?

Why KEEP these OLD interfaces that have proven wrong over decades?
strncpy, gets, asctime, trigraphs, all that CRUFT?

> Perhaps one reason why your posts are sometimes met with resistance
> is your seeming *insistence* that C (with your embellishments) is the
> *only* *viable* language for development. The fact is, no one
> language has yet managed to satisfactorily address all kinds of
> software development, and it's likely that that will not happen for
> a long time.

A simple language like C is much better than other "OO" ones. A simple
piece of software like my IDE/Debugger that installs in 30 seconds is
much EASIER TO USE than C# with Visual Studio 2008 and 4GB of
software!

> BTW, you often point out the large number of downloads that your
> compiler receives as indicative of the foresight of your extensions
> to C, but do you actually have hard data on what fraction of
> lcc-win's users actually make regular use of its extensions to C,
> and don't merely use it as a compiler for ANSI C?

Who knows? I do not know.

> Do you have a document anywhere that summarises all your extensions
> to ISO C and your rationale for them?

http://www.q-software-solutions.de/~jacob/proposal.pdf
 

santosh

jacob said:
> But why can't we make interfaces better?

Who said we can't? But the Committee is (understandably) slow in
proposing new stuff. Meanwhile we can still use popular de facto
standard libraries.

> Why KEEP these OLD interfaces that have proven wrong over decades?
> strncpy, gets, asctime, trigraphs, all that CRUFT?

You know very well why. For backward compatibility. To be quite honest,
more maintenance than new code development is the current situation
with C. There is a very large amount of existing software that depends
on the Standard library functions and other quirks of the language.

It's probably one reason why Stroustrup decided to create an
independent language rather than extending C and trying to force it
through the Standards process and onto all its users, as you seem to
want to do.

The problem is your proposals for interfaces depend on your extensions,
so your extensions have to first be accepted by the Committee before
your generic collections and other packages can be considered.
> A simple language like C is much better than other "OO" ones. A
> simple piece of software like my IDE/Debugger that installs in 30
> seconds is much EASIER TO USE than C# with Visual Studio 2008 and
> 4GB of software!

That's not purely the fault of C++ (assuming that C++ is indeed the
chief implementation language of the Visual Studio software). It's just
the Microsoft way of s/w development.

There is nothing inherent in C++ or Ada that says that s/w produced
with them must be bloated to several gigabytes. You are confusing the
language with specific software produced with it.

Can you blame English after reading one of Werty's or Wade Ward's posts?
Similarly.
> Who knows? I do not know.

You should provide a feedback system on your website that lets lcc-win
users anonymously post their comments and any statistics that they
want to. You might want to put up a few questionnaires. You should
probably convert to HTML and place online your tutorials on lcc-win's
C and its benefits over ISO C.

Instead of trying to convert the Committee and this group, you probably
should improve your marketing of your compiler and try to increase the
interest of casual visitors to your site.

Oh, and also try and make your compiler portable to at least a few major
systems, other than Windows.

Thanks.
 

Randy Howard

jacob navia said:
> But why can't we make interfaces better?

We can. We do. You can create new interfaces independent of the std
library functions in many cases, or you can create "better" versions of
std library functions via wrappers to add safety features, or to
provide additional functionality. This has been done for ages. Open
source has to a certain degree wiped out the old commercial library
development market, but in either form, alternatives exist by the
bushel.
> Why KEEP these OLD interfaces that have proven wrong over decades?

Because the legacy issue can't be gotten rid of just by snapping your
fingers. Billions (trillions?) of lines of C code are out there being
used. A lot of people love to reinvent wheels, but a lot more people
are still using the same wheels that were in use 20 years ago.
> strncpy, gets, asctime, trigraphs, all that CRUFT?

You are free to not use them. Just because something exists, doesn't
mean you have to use it.
> A simple language like C is much better than other "OO" ones.

Perhaps. The problem is, simple languages don't hold your hand. You
seem to want to take a simple language, then add features from other
languages, and pretend that it is still the simple language.

You have a compiler for this language of yours, which is based upon C,
but isn't C any longer. Why not simply come up with a new name for it,
publish a spec for it, and stop /pretending/ that it is C?

This would also allow you to eliminate all the cruft that you are
forced to carry around now, making your language "leaner and meaner",
and probably please you and perhaps others a great deal more than
whining constantly because the millions of existing C programmers don't
see it your way.

That would seem to make a lot of sense. Anyone that agrees with you
would adopt it immediately. Anyone that disagrees with you would
simply not use it. For some reason, that doesn't seem to make you
happy or you would have done so already.
> A simple piece of software like my IDE/Debugger that installs in 30
> seconds is much EASIER TO USE than C# with Visual Studio 2008 and
> 4GB of software!

I guess that theory that says "if you build a better mousetrap, the
world will beat a path to your door" isn't working out then? If this
was as much of a slam dunk as you claim, Microsoft would be out of the
compiler market. Clearly not everyone agrees with you.
 

CBFalconer

Paul Hsieh said:
.... snip erroneous evaluation of strlcpy ...
> It's a hack. It's not technically needed, and when it can be used
> for its primary purpose its effects are partial. I don't think
> the question of style or needless overhead enters into it. It's
> more philosophical.

No, strlcpy and strlcat are much easier and more accurately used
than any combination of strcpy, strcat, strncpy, etc. They are a
part of the BSD system, and should be propagated into the standard
C library. You can see what they are, how they differ, and perform
your own accurate evaluation from strlcpy.zip available at:

<http://cbfalconer.home.att.net/download/>
 

CBFalconer

santosh said:
> I personally don't consider strlcpy or strcpy_s much of an
> improvement over strcpy, and similarly for the other functions.
> You still need to get the length right, and if you do get it
> right, then strcpy, strcat etc. are perfectly safe.

But if you get it wrong, strlcpy/cat etc. will tell you, and not
blow up your system. They will even often tell you by how much.
Their only problem is not being in the C std library.
 

CBFalconer

Randy said:
.... snip ...
> If someone knows of the magical programming language that makes
> all of these issues go away, I would like to be told of it. If
> there isn't one, then we'll have to continue to pick languages
> based upon their appropriateness for a given task, and continue
> to fix bugs.

s/all/most/

Pascal and Ada. :)
 

Randy Howard

CBFalconer said:
> But if you get it wrong, strlcpy/cat etc. will tell you, and not
> blow up your system. They will even often tell you by how much.
> Their only problem is not being in the C std library.

Not to mention having a name (starting with str) that is reserved and
not to be used outside the standard. Apparently arguing about this
only counts for functions that folks don't think should be part of
standard C, because they get flagged over it, but for other functions,
like strlcpy(), nobody seems to object.
 

jacob navia

Randy said:
> We can. We do. You can create new interfaces independent of the std
> library functions in many cases, or you can create "better" versions
> of std library functions via wrappers to add safety features, or to
> provide additional functionality. This has been done for ages. Open
> source has to a certain degree wiped out the old commercial library
> development market, but in either form, alternatives exist by the
> bushel.

Then, if all that is OK, why do you and the other people here rant
when Microsoft proposes a standard about those "wrappers"?

All the functions in the Microsoft proposal just add error checking
to the basic library functions.
> Because the legacy issue can't be gotten rid of just by snapping
> your fingers. Billions (trillions?) of lines of C code are out there
> being used. A lot of people love to reinvent wheels, but a lot more
> people are still using the same wheels that were in use 20 years
> ago.

Yes. And they can go on doing that. Who cares? Nobody is
proposing to make all those functions (except the obviously
buggy ones like gets) obsolete instantaneously.

Why can't we use better interfaces for NEW code?

This is the central point of my argument. And it has been repeated a
thousand times, and you ignored it AGAIN.
> You are free to not use them. Just because something exists, doesn't
> mean you have to use it.

> Perhaps. The problem is, simple languages don't hold your hand. You
> seem to want to take a simple language, then add features from other
> languages, and pretend that it is still the simple language.

Adding the changes that I proposed makes the language

***SMALLER***.

Why is that?

Because instead of having complex numbers (as standardized in C99) or
decimal numbers or fixed point numbers ALL in the language, we can
use a SINGLE extension (operator overloading) to accommodate them ALL.

That means that the language gets smaller by including a general
feature that will allow it to have ANY kind of numbers.

With the SAME feature (operator overloading) it is possible to
transparently make a bounded arrays package and use it when debugging,
and without changing a line of code you can revert to the old arrays
in production.

For instance.

Other advantages of that single change are described in my proposal
available at:

http://www.q-software-solutions.de/~jacob/proposal.pdf
> You have a compiler for this language of yours, which is based upon
> C, but isn't C any longer. Why not simply come up with a new name
> for it, publish a spec for it, and stop /pretending/ that it is C?

It is one of the few C99 implementations under Windows. Done with
years of effort on my part. But here I have to hear from people who
have never done anything to promote standard C that "it is not C".

At least CB Falconer proposes his strlcpy or ggets packages.

What do YOU propose, Mr Howard?

Just empty talk.

Easy, isn't it?


[crap elided]
 

Tomás Ó hÉilidhe

Aaron said:
> Hey all,
>
> After seeing the Secure version I/O functions thread, it occurred to
> me that maybe not everyone agrees with the almost universal adage
> that I have heard. I have always been told that using things like
> strlcpy and other explicitly bounded functions is better than using
> the non-bounded versions like strcpy.
>
> Is this a matter of good programming style, or is this just needless
> overhead?


In my opinion, this depends entirely upon two things:

1) The competency of the programmer
2) The programmer's view of their own competency

For instance, I myself have high confidence in my own programming and
so I feel comfortable playing around with pointers. I don't have a
need for bounds checking, so any overhead introduced by bounds
checking would seem unacceptable to me.

In debug mode, though, I usually have all the warnings and safeguards
cranked through the roof.
 

Randy Howard

jacob navia said:
> Then, if all that is OK, why do you and the other people here rant
> when Microsoft proposes a standard about those "wrappers"?

When did I rant about a Microsoft proposal? A simple link will do.
> All the functions in the Microsoft proposal just add error checking
> to the basic library functions.

Then why not just introduce them as an open source library that
provides these wrappers? If they wanted them to be widely adopted and
quickly, this would be out there. Where can this library be
downloaded?
> Yes. And they can go on doing that. Who cares? Nobody is
> proposing to make all those functions (except the obviously
> buggy ones like gets) obsolete instantaneously.
>
> Why can't we use better interfaces for NEW code?

You are free to do so. The problem seems to lie in you wanting to make
everyone do it your way, and you get quite pissy about it when that
doesn't happen.
> This is the central point of my argument. And it has been repeated a
> thousand times, and you ignored it AGAIN.

I didn't ignore it. If anything, I said it myself. Read my first
paragraph above.
> Adding the changes that I proposed makes the language
>
> ***SMALLER***.
>
> Why is that?

Painful grammar notwithstanding, you don't get to make the language
smaller unless you want to break legacy code, which you have said you
do not intend to do. You need to pick one answer and stick with it.
> Because instead of having complex numbers (as standardized in C99)
> or decimal numbers or fixed point numbers ALL in the language, we
> can use a SINGLE extension (operator overloading) to accommodate
> them ALL.

There are more than zero programmers that strongly dislike operator
overloading for a variety of reasons. There are languages that include
it already though, so if you want such, use one of them, or create a
new one.
> That means that the language gets smaller by including a general
> feature that will allow it to have ANY kind of numbers.
>
> With the SAME feature (operator overloading) it is possible to
> transparently make a bounded arrays package and use it when
> debugging, and without changing a line of code you can revert to
> the old arrays in production.

Only one problem here. C does not have operator overloading.
> Other advantages of that single change are described in my proposal
> available at:
>
> http://www.q-software-solutions.de/~jacob/proposal.pdf

Yes, I've read it.
> It is one of the few C99 implementations under Windows. Done with
> years of effort on my part.

It's quite a bit different from a pure C99 implementation though, isn't
it? You are on record as being far less than happy with standard C in
any of its versions, but correct me if I misinterpreted you on this
point.
> But here I have to hear from people who have never done anything to
> promote standard C that "it is not C".

Why does standard C need promoting? If anything, more than enough
people use it already.
> At least CB Falconer proposes his strlcpy or ggets packages.
>
> What do YOU propose, Mr Howard?

I am quite happy with the language available as C89, and have not
perceived a need for anything more than that in order to use C
effectively.
> [crap elided]

So you call the notion of you creating a new language that embodies all
of the ideas that you find worthwhile under a new name to be crap?
Well, I might agree with you, but probably not for the reasons you
might guess.
 

William Ahern

ymuntyan said:
> char buf[MAX_PATH];
> strcpy(buf, dir);
> strcat(buf, file);
>
> The latter seems worse, but the real better alternative is to avoid
> both, and for that you shouldn't use that infamous buf[SMART_SIZE]
> in the first place. And that's the only case where "safe" functions
> can really help, when you can do sizeof(buf).

This scenario accounts for the vast majority of cases where I handle
"strings". 8, 64, 255, 1024; these are magic numbers that litter
innumerable RFCs and other standards. In many cases it's possible to
be given input which doesn't fit the constraint, and often it's OK to
reject the input. But, point being, I size char buffers in structures
using these values. Having a statically sized buffer of 64 or 255
bytes, or even 1024, is usually easier and faster and, probably,
safer than dynamically managing that particular piece of memory. All
things being equal, less code is safer code.

Now, much of the time I already know the size of the input before copying.
But there are often cases where the design has an [intentional] gap, and
you're passed a string without a size, often at the junction where a library
or component exposes itself to code usage which expects a more canonical
string interface--just a pointer. In such instances, strlcpy is priceless.

The utility of strlcpy is tacitly recognized and reflected in the signature
of C99's snprintf.

> It's not as "safe" as the proposal authors make you think it is, by
> using terms like "safe" and "bounds checking". And that's one of the
> problems (the main one, I think): it's deceiving.

I agree. Particularly the notion that "bounds checking" is some sort of
exceptional or uncharacteristic quality of general programming hygiene.

These new interfaces certainly don't do bounds checking for you. They merely
alleviate a small part of the burden in some circumstances.
> Yes, some people here do say that it's not hard to write correct
> code in standard C, but that's a lie, as their real code shows.

Well... lie or not, it's something to aspire to. That sort of
defeatist (as opposed to pragmatic) attitude can't be helpful.
Certainly it sounds a bit presumptuous. Knuth and Bernstein haven't
written many checks. It goes without saying that nobody's perfect,
though.
> Or science fiction, if you wish. A safe string-handling API is good,
> absolutely. But don't confuse "safe" meaning what the dictionary
> says with "safe" in the title of some paperwork ;)

The same goes for "security", or the myriad other amorphous
qualitative terms. There'll never be a substitute for critical reading
and analysis. Yet discerning writers and readers continue to use the
term, because it's not their job to cater to the lowest common
denominator, but rather to efficiently get their message across to
their intended audience. Notwithstanding hyperbole and rhetoric, when
a writer uses the term, it _signals_ to the reader how he should
approach the writing or claim.
 

CBFalconer

jacob said:
> Randy Howard wrote:
> .... snip ...
>
> It is one of the few C99 implementations under Windows. Done with
> years of effort on my part. But here I have to hear from people who
> have never done anything to promote standard C that "it is not C".
>
> At least CB Falconer proposes his strlcpy or ggets packages.

I want to popularize those routines. If they become sufficiently
popular, they may get included in a future version of the C
standard. Meanwhile everybody has access to them. That is why
they are in the public domain. And it is not 'my' strlcpy, just my
implementation.

Note that hashlib is not in the public domain. It is released
under GPL. I consider it a first class solution to the problem;
others may not. For other things you will have to examine the
releases, but they will be either public or GPL.
 
