Gary Labowitz said:
And it leads me to ask the question: If there is a type for which you
cannot declare a variable, is there a kind of variable for it? It
seems to hinge on the fact that reference variables keep the type of
the object to which they refer. A reference which is null (refers to
no object) has as its value a null type, i.e. the type of the object
to which it refers is null since there is no object. It's sort of like
the void that returns from a method, meaning of course that no value
is returned. It would appear to be a syntactic trick so that the
coding has parallel coding to set a reference variable to an object
or to "no object" in the same way, e.g.
Object refer;
refer = someObject; // refer to some existing object
refer = null;       // refer to no object
I feel your pain in this regard, and you are conducting nearly exactly the
same argument I've made about this to others in the past.
null should have been called the "null value" or the "null literal", which
is assignable to any reference type and is without a type of its own.
The problem they faced, however, is this, I think: For all literals, there
needs to be a type that they belong to even BEFORE they are assigned. That
is, there are no literals per se, there are only /literals of a certain
type/.
Thus declaration assignments of
<type t1> <var name> = <literal>;
are really...
<var name> of <type t1> = <literal> of <type t2>;
.....and there need to be conversion rules from t2 to t1.
That is:
float f_num = 1.0; // error: requires an explicit narrowing cast
f_num of type float = 1.0 of type double; // error
The 1.0 literal is of type double regardless of what you assign it to.
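To make that concrete, here's a small sketch (class and variable names are mine, not from the thread) showing that the literal's own type, not the declared type, decides whether the assignment compiles:

```java
// Sketch: a floating-point literal is a double unless suffixed with 'f'.
public class LiteralTypes {
    public static void main(String[] args) {
        double d = 1.0;         // double literal to double: exact match
        float f = 1.0f;         // 'f' suffix makes the literal itself a float
        long l = 42;            // 42 is an int literal, widened implicitly
        // float bad = 1.0;     // would NOT compile: double literal, narrowing
        float ok = (float) 1.0; // explicit cast converts the double literal
        System.out.println(d + " " + f + " " + l + " " + ok);
    }
}
```

The commented-out line is exactly the `float f_num = 1.0;` error above; the cast is the fix the compiler demands.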
And thus...
Component thing = null;
thing of type Component reference = null of type null;
The null literal is of type null regardless of what you assign it to. Why
not simply be this:
thing of type Component reference = null of type Component
reference.
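A quick sketch of what the language actually does (names are mine): the one null literal converts to every reference type, even though its own type cannot be written in a declaration:

```java
// Sketch: null is assignable to any reference type, but you cannot
// declare a variable of the "null type" itself.
public class NullDemo {
    public static void main(String[] args) {
        String s = null;   // null converts to String...
        Object o = null;   // ...and to Object...
        int[] a = null;    // ...and to any array type
        System.out.println(s + " " + o + " " + (a == null));
    }
}
```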
I think it's because they want the RHS of the expression to establish its
type independent of the LHS. But that doesn't make any sense, since they
/already/ play games to that extent:
short s_num = 32767;
Was 32767 a short or int literal? And in this error case:
short s_num = 32768; // error. Can't fit in signed 16 bits
is the 32768 an int literal simply because it cannot fit in the short?
Or are both numbers in these last two examples of type int with "special"
conversion rules?
In any case, it seems like there is precedent for having the RHS mutate to
the needs of the LHS.
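That precedent can be sketched directly (variable names are mine): a constant int expression narrows implicitly to short or byte exactly when its value fits, which is the LHS shaping what the RHS is allowed to be:

```java
// Sketch: implicit narrowing of constant int expressions.
public class NarrowingDemo {
    public static void main(String[] args) {
        short ok = 32767;     // int literal, fits in short: allowed
        // short bad = 32768; // would NOT compile: constant out of range
        byte b = 100;         // same rule applies to byte
        System.out.println(ok + " " + b);
    }
}
```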