Hi All!
I have been reading recently about the 'NX' or "No Execute" technology chip
makers are using to prevent buffer overflow errors. The central idea is that
it allows the OS to automatically kill any process that begins writing
outside its own memory.
I don't know anything about "No Execute" technology, but this
description doesn't sound quite right.
Keeping a process from writing "outside its own memory" (e.g.,
writing to memory that belongs to another process) is a good thing
(usually called "memory protection" AFAIK), but it does not prevent
all buffer overflows, since it's possible to overflow a buffer without
writing outside the process's memory.
I think a distinction can usefully be made between two cases.
Let's talk about these in terms of trying to access element N of
an array of length 100. Most programming languages I know of store
arrays as contiguous chunks of memory (100 * size of element, in
this example) and compute the address of element N by multiplying
N by the size of an array element and adding that to the address
of the start of the array.
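To make that concrete, here's a minimal C sketch (the array, the
index, and the names are just for illustration) showing that the
address of element n really is the start of the array plus n times
the element size:

    #include <stdio.h>

    int main(void)
    {
        int a[100];
        size_t n = 5;

        /* What the compiler does under the hood for a[n]:
           start-of-array + n * size-of-one-element. */
        char *base = (char *)a;
        int *computed = (int *)(base + n * sizeof a[0]);

        printf("&a[%zu]  = %p\n", n, (void *)&a[n]);
        printf("computed = %p\n", (void *)computed);
        return 0;
    }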
One case: N is some really big number. Then the computed address will
likely be outside the bounds of the program's memory, and the system's
normal memory protection (combination of hardware and operating system
features) should result in the program being killed.
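A minimal sketch of this first case (the write is undefined
behavior in C, so strictly speaking anything could happen, but on
a typical protected-memory system it faults):

    #include <stdio.h>

    int main(void)
    {
        int a[100];

        /* Undefined behavior: an index this large computes an
           address far outside the process's mapped memory, so a
           typical OS kills the program (e.g., SIGSEGV on Unix). */
        a[100000000] = 42;

        printf("on most systems this line is never reached\n");
        return 0;
    }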
Another case: N=105. The computed address will be a little past
the end of the array. It's possible it will be outside the bounds
of the program's memory, but more likely it won't be; instead it
will be associated with some other program variable. If the
program is doing array bounds checking, it will recognize that
this is an out-of-bounds access and do something (throw an
exception, e.g.). If the program isn't doing array bounds
checking, it will just silently write to the address, thereby
changing the value of the other variable. Results vary but can
include tricking the program into executing virus-type code
(classically, by overwriting a function's return address on the
stack so it jumps into attacker-supplied data).
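Here's a minimal C sketch of that silent overwrite. The struct and
the names are made up, and I've used index 100 rather than 105 so
the stray write lands exactly on the neighboring field; the access
is undefined behavior, but with the usual layout (no padding
between two int members) the effect shows up:

    #include <stdio.h>

    struct frame {
        int arr[100];
        int other;    /* happens to sit right after the array */
    };

    int main(void)
    {
        struct frame f;
        f.other = 7;

        /* No bounds check: this writes one element past the end
           of arr.  With the usual layout that address belongs to
           'other', so its value changes silently. */
        f.arr[100] = 999;

        printf("other = %d\n", f.other);  /* likely 999, not 7 */
        return 0;
    }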
The first case is already addressed by memory-protection features.
Probably this new technology attempts to address a subset of the
second case -- from the name, I'd guess it marks data areas (stack,
heap) as non-executable, so that even if an overflow smuggles
attacker-supplied code into memory, the processor refuses to run
it. (Notice too that it may not be all that new -- I vaguely
remember hearing that some systems of the past tried to offer
similar protections.)
The article I read said there is a danger with this at the moment:
a lot of 'legitimate' code (i.e., apps that are not viruses) has
buffer overruns for reasons such as sloppy programming, deadlines
that are too tight, etc.
My question: does Java suffer from buffer overrun errors?
In theory, no. Java is supposed to always perform array bounds
checking (an out-of-bounds access throws an
ArrayIndexOutOfBoundsException), and you can't get around it with
creative pointer arithmetic, because Java doesn't do the kind of
general-purpose pointer arithmetic that would make it possible.
What about other OO-capable languages: C++/C#/.NET, etc.?
No idea about C# or .NET, but C++ certainly makes buffer overruns
possible -- no automatic bounds checking on plain arrays or on
vector's operator[] (though vector::at() does check), and many
ways to use pointers that could result in buffer overruns or
similar problems.
Is it only C code that is worrying?
No, though C has a reputation for encouraging programming habits
that make buffer overruns possible. You *can* do array bounds
checking in C, but you have to put it in yourself, and people
often don't.
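For what it's worth, here is one way a hand-rolled check can look
in C (set_element is a hypothetical helper, not anything standard):

    #include <stdio.h>

    /* Refuses out-of-range writes instead of corrupting memory.
       Returns 0 on success, -1 if n is out of bounds. */
    static int set_element(int a[], size_t len, size_t n, int value)
    {
        if (n >= len)
            return -1;
        a[n] = value;
        return 0;
    }

    int main(void)
    {
        int a[100];

        if (set_element(a, 100, 105, 42) != 0)
            fprintf(stderr, "index 105 is out of bounds\n");
        return 0;
    }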
I used to write in Fortran, which has similar issues, and after a
few years of tracking down "interesting" bugs caused by other people
not putting array bounds checking in their code, I learned ....