Hi.
q1: Is std::bitset<N> part of the standard, or is it a compiler extension?
q2: Is std::bitset::to_string() part of the standard?
q3: My documentation says this about std::bitset::to_string():
"...each character is 1 if the corresponding bit is set, and 0 if it is
not. In general, character position i corresponds to bit position N - 1 -
i..."
On my machine, the most significant bits end up at the lowest positions
in the resulting string:
#include <bitset>
#include <string>

unsigned int i = 12536;
std::bitset<16> bs = i;
std::string str = bs.to_string();
which gives str == "0011000011111000"
=> 0011 0000 1111 1000 == 0x30F8 == 12536
If I understand my docs correctly, the same code on a machine with
different endianness would produce a different string. What does the
standard say: will the string always have the most significant bits at
the lowest string positions, or is it as my docs describe?
TIA