On 19 June, 07:41, (e-mail address removed) wrote:
What does this mean? That you can't use '@' in strings without relying
on the particular implementation?
The relevant clause in the standard is 5.2.1(3). The extract that
follows is from n1256, which is not the official standard but a
working draft.
-----------
Both the basic source and basic execution character sets shall have the
following members: the 26 uppercase letters of the Latin alphabet
A B C D E F G H I J K L M
N O P Q R S T U V W X Y Z
the 26 lowercase letters of the Latin alphabet
a b c d e f g h i j k l m
n o p q r s t u v w x y z
the 10 decimal digits
0 1 2 3 4 5 6 7 8 9
the following 29 graphic characters
! " # % & ' ( ) * + , - . / :
; < = > ? [ \ ] ^ _ { | } ~
the space character, and control characters representing horizontal tab,
vertical tab, and form feed. The representation of each member of the
source and execution basic character sets shall fit in a byte. In both
the source and execution basic character sets, the value of each
character after 0 in the above list of decimal digits shall be one
greater than the value of the previous. In source files, there shall be
some way of indicating the end of each line of text; this International
Standard treats such an end-of-line indicator as if it were a single
new-line character. In the basic execution character set, there shall
be control characters representing alert, backspace, carriage return,
and new line. If any other characters are encountered in a source file
(except in an identifier, a character constant, a string literal, a
header name, a comment, or a preprocessing token that is never
converted to a token), the behavior is undefined.
-----------
But in practice most implementations do support the @ character; at
least, all of those I'm aware of do. That's a tiny fraction of the
implementations out there, though, so take my "most" with a grain of
salt.