Can someone definitively answer this? Given code such as this:

    char c = 'a';
    // (int) c == 97,
    // int i = 'b' - c == 1,
    // and, c >= 'a' && c <= 'h'

(1) Are the commented assumptions guaranteed to be true programmatically? And (2) is arithmetic with primitive char values valid programming practice?

The best I can do, based on the language and Unicode specs, is this: such arithmetic simply depends on the Unicode code point for 'a' never changing (no matter what character encoding is used for the .java file).

My "beef" is that the integer value of (char) c would then seem to depend on the Unicode spec. Even though the consortium guarantees that the code point for the character 'a' will never change for the life of the spec, relying on that spec would make this kind of programming technically unstable. That seems scary, since Java's relational operators can be (and are) used with char values, as in the expressions pointed out above.

Comments?

Peace, Steev.
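P.S. To make the question concrete, here is a minimal, self-contained sketch of the three assumptions as runnable checks (the class name is mine, purely for illustration):

    public class CharArithmeticDemo {
        public static void main(String[] args) {
            char c = 'a';

            // char is a 16-bit unsigned integral type in Java; widening it
            // to int yields its UTF-16 code unit value, which for 'a' is
            // the Unicode code point U+0061 (decimal 97).
            System.out.println((int) c == 97);          // true

            // Subtracting two chars applies binary numeric promotion to int,
            // so the result is the difference of the two code point values.
            int i = 'b' - c;
            System.out.println(i == 1);                 // true

            // Relational operators likewise compare the promoted int values.
            System.out.println(c >= 'a' && c <= 'h');   // true
        }
    }

As far as I can tell, all three lines print true on any conforming JVM, since the language spec fixes char as a UTF-16 code unit and these code points were inherited unchanged from ASCII; but that is exactly the reliance I am asking about.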