what is NULL pointer dereferencing


prashant.khade1623

I am not getting the exact idea.

Can you please explain me with an example.

Thanks
 

Joachim Schmitz

Morris said:
Please state your question in the body of your article - thanks.

A NULL pointer is not /for/ dereferencing. Setting a pointer to
NULL indicates that the pointer does not point to a usable value
and should not be dereferenced.
But it is better to dereference a NULL pointer than an uninitialized pointer or a
pointer that points to something that is no longer valid.
Better, because it is easier to spot (the program will crash right there) and to
debug (hunting dangling pointers is a pain in the proverbial, as the program
quite likely crashes at a totally unrelated place).
So there's a good reason to (re-)initialize a pointer to NULL.
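A minimal sketch of that idiom (the macro name is my own invention, not anything standard):

```c
#include <stdlib.h>

/* Free the object and reset the pointer to NULL in one step, so any
 * later (buggy) use hits a null pointer instead of a dangling one.
 * On most hosted systems that crashes right at the bad dereference,
 * which is much easier to debug than a corrupted-heap crash elsewhere. */
#define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)
```

After the macro runs, a plain `if (p)` reliably tells you whether the pointer is still usable.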

Bye, Jojo
 

Joachim Schmitz

Morris said:
Umm - are you prepared to _guarantee_ that dereferencing a null
pointer will result in a crash on _every_ system, or are you
engaging in a bit of wishful thinking?
No, I'm not prepared to guarantee this, but it happens on every machine I've
come across so far.
And dereferencing dangling pointers is far less 'reliable' when it comes
to crashing a machine at a predictable spot.

Bye, Jojo
 

Joachim Schmitz

Morris said:
Then I think we're in basic agreement. In many of the
environments I've worked in it's possible to access every address
even though there may not be anything (memory or device) at that
address - without consequence other than retrieval of (usually)
an all-ones value.
even NULL?

Bye, Jojo
 

Nick Keighley

On 10 Apr, 11:48, "(e-mail address removed)"

Re: what is NULL pointer dereferencing
I am not getting the exact idea.

Can you please explain me with an example.

which bit don't you understand?

What you call a NULL pointer is really a null pointer
value, whilst the NULL macro expands to a null pointer
constant.

A null pointer value is a pointer value that never compares
equal to a pointer to any object or function.

To dereference a pointer you get the value it is pointing to.
Since a null pointer value doesn't point to anything valid
it is meaningless to dereference it. In terms of the C standard
the behaviour is "undefined". The implementor is free to do
anything he likes (eg. return random crap, crash etc).

A common implementation is to make a pointer an address,
make the null pointer value equal to zero and ensure that
address zero never contains anything valid. Dereferencing
might then return whatever is at address zero or if
address zero is not a valid address it might generate some
sort of operating system signal that terminates the program.
This signal is often referred to as a "segmentation fault"
as the memory segment is not valid.

Note the above ***is an example***. It doesn't have to be done
this way. A null pointer value does not have to be zero.
Though a zero value that appears in a pointer context
in the source text must be treated as a null pointer value.
And if that last bit made no sense then read the FAQ!
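That last bit can be shown in a few lines (the helper function is just for illustration):

```c
#include <stddef.h>

/* A literal 0 in a pointer context is a null pointer constant, so
 * comparing a pointer against 0 is the same test as comparing it
 * against NULL, whatever the internal representation of null is. */
static int is_null(const int *p)
{
    return p == 0;
}
```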

I've often thought C should have had a keyword for the
null pointer value. Eg. nil or null. Some people say it already
has and it's spelt 0 (digit-zero)...



--
Nick Keighley

"I have always wished that my computer would be as easy to use as my
telephone. My wish has come true. I no longer know how to use my
telephone."
- Bjarne Stroustrup
 

Stephen Sprunk

Joachim Schmitz said:
No, I'm not prepared to guarantee this, but it happens on every machine
I've come across so far.
And dereferencing dangling pointers is by far less 'reliable' when it
comes to crash a machine at a predictable spot.

Then you haven't worked on a very wide variety of systems. There are some
still in use today that will happily dereference a NULL pointer, since the
OS puts valid (and sometimes critical) data at memory location zero instead
of a trap page -- indeed, such systems generally don't have paging at all.

I will grant that a system is _more likely_ to trap dereferencing a NULL
pointer than a random value, but it's still not guaranteed. Besides, you
should be checking pointers for NULL after memory allocations, after
receiving them as arguments to functions, etc. so that there is no need to
count on a dereference of them trapping. Unlike a random value, NULL is
easy to detect since it's fixed and easy to test against (e.g. !p), so
there's no excuse for not adding in the sanity checks. The same cannot be
said of accidentally using a pointer returned from *alloc() after some
other, unrelated piece of code has free()d it.
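A sketch of the sanity checks Stephen describes (the function name is hypothetical):

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate a string, checking both the incoming argument and the
 * allocation result for NULL instead of counting on a trap when a
 * null pointer is dereferenced. */
static char *dup_or_null(const char *s)
{
    char *copy;

    if (!s)                       /* cheap, portable sanity check */
        return NULL;

    copy = malloc(strlen(s) + 1);
    if (!copy)                    /* allocation failure is also NULL */
        return NULL;

    strcpy(copy, s);
    return copy;
}
```

Because NULL is a fixed, testable value, both failure paths collapse into the same `!p` check the caller already needs.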

S
 

Kenneth Brody

Joachim said:
No, I'm not prepared to guarantee this, but it happens on every machine I've
come across so far.
And dereferencing dangling pointers is by far less 'reliable' when it comes
to crash a machine at a predictable spot.

Well, aside from systems without "real" memory protection, such as
real-mode MS-DOS, I have seen systems which _on_purpose_ allow you
to dereference (for reading) a NULL pointer, by supplying one "page"
of all-zero read-only memory at address zero. (SCO Unix, for
example, does this, allowing '\0' to be read from addresses
0x00000000 through 0x00000fff.)

Now, that doesn't mean that (re-)initializing pointers to NULL is a
bad idea. It certainly is "better" than leaving uninitialized or
dangling pointers. It's just not guaranteed to do anything "bad" if
you dereference them.

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:[email protected]>
 

Walter Roberson

Morris Dovey wrote:
No, I'm not prepared to guarantee this, but it happens on every machine I've
come across so far.

See my historical example earlier this week (in this newsgroup)
of Silicon Graphics (SGI) workstations with R8000/R10000 CPUs.
 

Richard Tobin

No, I'm not prepared to guarantee this, but it happens on every machine
I've come across so far.
Well, you haven't come across many machines, then - not even x86s, it
seems.

It's more a matter of operating systems than hardware. All modern
general-purpose computers have the necessary hardware support; it's
just a question of whether the OS chooses to use it. I would consider
it a poor choice now for an OS not to trap it.

-- Richard
 

jxh

But it is better to dereference a NULL pointer than an uninitialized
pointer or a pointer that points to something that is no longer
valid. Better, because it is easier to spot (the program will
crash right there) and to debug (hunting dangling pointers is
a pain in the proverbial, as the program quite likely crashes
at a totally unrelated place). So there's a good reason to
(re-)initialize a pointer to NULL.

Many CPUs provide an MMU, which most modern OSs will
utilize to get this behavior. But there are many CPUs
that do not have an MMU, and the runtime environment
on these CPUs will not give address 0 special treatment.

It is just something to keep in mind when you move from
application programming to embedded programming.

-- James
 

Stephen Sprunk

Richard Heathfield said:
Well, you haven't come across many machines, then - not even x86s, it
seems. I have an x86 currently sat about three yards away from my desk,
admittedly a rather old one but there's nothing particularly special about
it. If you run a program on this computer that derefs NULL, well, I can't
tell you what will happen, obviously, because the behaviour is UB, but
*very often* what happens is that the program runs to completion and then
a little message is displayed at the end. I can't quite remember the
precise text, but basically it tells you about the null pointer deref. I
don't guarantee that a null pointer deref *won't* crash the program on
such a system, of course. But very often it does not.

<OT>
x86 Real Mode puts the interrupt vector table at the start of memory -- the
same place if you dereference a null pointer in C. Reading that area just
gets you odd values; writing it almost always* results in bad behavior, but
not necessarily immediately -- it may not be evident until one of those
interrupts fires seconds, minutes, or even hours later. (Or when, say,
whatever code runs after your program ends checks to see that the IVT is
still intact.)
</OT>

S

(* unless you're altering that data deliberately and correctly, e.g. to hook
into hardware devices, and restore it when you're done)
 

Richard

Jack Klein said:
Really? Chapter and verse, please. Where does the C standard make
one of these two operations more undefined than the other?

Common sense and years of practice say it is. On most systems I have
used, dereferencing a NULL pointer always raises issues at the
time. Using a "random" pointer which might point to valid memory is
often not signalled for some reason ...

It's amazing that even you would fight against common sense because it's
not mentioned in the "standard".
 

Joachim Schmitz

Kenneth said:
Joachim Schmitz wrote:
Now, that doesn't mean that (re-)initializing pointers to NULL is a
bad idea. It certainly is "better" than leaving uninitialized or
dangling pointers.
And that's my main point. You may win something, but won't lose anything,
so there's no good reason for not doing it.

Bye, Jojo
 

Joachim Schmitz

Jack said:
Really? Chapter and verse, please. Where does the C standard make
one of these two operations more undefined than the other?
Re-read it and tell me where I claimed this is mandated by the standard. I
merely said that it is better. You won't lose anything if you do, but may
gain something (a useful error message, an abort at the spot, a decent
value if you take the program into a debugger).
At least 90% (probably more) of the computing devices in the world
executing C code at any given time have no hardware memory protection,
and most likely won't trap at all.

Another desktop chauvinist.
? Most (UNIX and other) _Servers_ will kill a program that dereferences a NULL
pointer, nothing about desktop.
There's a much better reason to keep track of when pointers are and
are not valid.
True. But marking the invalid ones invalid by setting them to NULL is part
of that story.

Bye, Jojo
 

Kenneth Brody

Jack said:
Really? Chapter and verse, please. Where does the C standard make
one of these two operations more undefined than the other?

Well, as far as C is concerned, they are both UB, and both just as
"bad". However, many implementation can, and do, trap a dereference
of NULL, whereas uninitialized/dangling pointers cannot, as they may
just happen to be "valid". (Okay, I could see an implementation,
probably for debugging purposes, which would set up malloc/free such
that a pointer becomes invalid after being freed.)

I think in this case, "better" is a QOI issue, rather than a language
issue.
At least 90% (probably more) of the computing devices in the world
executing C code at any given time have no hardware memory protection,
and most likely won't trap at all.

Another desktop chauvinist.


There's a much better reason to keep track of when pointers are and
are not valid.

One could say that "setting the pointer to NULL once it goes invalid"
_is_ "keeping track".

 
