Unicode text file

Koulbak

I have some Unicode (UTF-8) text files. I _tried_ to write a simple
program that reads one of them and writes it to the standard output, but...
of course it doesn't work. Is there an easy way to do it? Thanks, K.

This is my program.

#include <fstream>
#include <iostream>
#include <string>

using namespace std;

int main() {
    ifstream infile("in.txt");
    string s;
    while (infile >> s) {
        cout << s;
    }
}
 
Mike Wahler

Koulbak said:
I have some Unicode (UTF-8) text files. I _tried_ to write a simple program
that reads one of them and writes it to the standard output, but... of course
it doesn't work. Is there an easy way to do it? Thanks, K.

This is my program.

#include <fstream>
#include <iostream>
#include <string>

using namespace std;

int main() {
    ifstream infile("in.txt");

You should check here that the file was opened successfully
before attempting to read from it.

    string s;
    while (infile >> s) {
        cout << s;
    }
}

Try using 'wifstream' and 'wcout'.

-Mike
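
A minimal sketch of that wifstream/wcout suggestion, for illustration only; note that a plain wifstream reads wide characters according to the stream's locale and does not decode UTF-8 by itself, so on its own this may still print garbage for non-ASCII input:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::wifstream infile("in.txt");  // wide-character input stream
    if (!infile) return 1;            // the open check suggested above

    std::wstring s;                   // wide string to match the wide stream
    while (infile >> s) {
        std::wcout << s;              // wide-character standard output
    }
}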
 
Koulbak

Mike Wahler wrote:
[read unicode text file]
You should check here that the file was opened successfully
before attempting to read from it.

In the real program I do, of course, but in my post I put only the
essential part of the question.
Try using 'wifstream' and 'wcout'.

1. I tried it; it doesn't compile.

error C2679: binary '>>' : no operator found which takes a right-hand
operand of type 'std::string' (or there is no acceptable conversion)

I also added wstring and it compiles, but it doesn't work correctly: it
prints a lot of garbage.

2. I thought that with C++ it was possible to do it in exactly the
standard way (avoiding special constructs such as wcout), maybe by setting
some library option. Is that not true at all?

Thanks a lot, K.
 
Ioannis Vranos

Koulbak said:
1. I tried it; it doesn't compile.

error C2679: binary '>>' : no operator found which takes a right-hand
operand of type 'std::string' (or there is no acceptable conversion)


You should use wstring. A wchar_t string literal is prefixed with L. For example:


wstring s = L"Some string";


I also added wstring and it compiles, but it doesn't work correctly: it
prints a lot of garbage.

2. I thought that with C++ it was possible to do it in exactly the
standard way (avoiding special constructs such as wcout), maybe by setting
some library option. Is that not true at all?

These *are* standard facilities. All string facilities come with their wchar_t equivalents
(including the facilities of the C-subset).
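
A short illustration of that naming parallel, assuming nothing beyond the standard headers; each narrow facility has a wide counterpart with a 'w' in its name:

#include <cwchar>    // wide C-subset functions: std::wcslen, std::wcscmp, ...
#include <iostream>
#include <string>

int main() {
    // std::string -> std::wstring, std::cout -> std::wcout, strlen -> wcslen
    std::wstring ws = L"wide";                       // note the L prefix on the literal
    std::wcout << ws << L'\n';                       // wide standard output
    std::wcout << std::wcslen(ws.c_str()) << L'\n';  // wide strlen, prints 4
}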
 
Koulbak

Ioannis said:
You should use wstring. [...]

I added wstring; it doesn't work.
These *are* standard facilities. All string facilities come with their
wchar_t equivalents (including the facilities of the C-subset).

Sorry, I was not clear at all. I would like to avoid the implementation
details as much as possible. I don't want to explicitly use Unicode
functions, but simply tell the compiler or the library that my character
encoding is Unicode and then read a file exactly in the usual way.

I would like to avoid learning a new set of functions to read and
manipulate Unicode characters, Unicode strings and so on. Of course, if it
is possible.

Thanks, K.
 
Rapscallion

Koulbak said:
1. I tried it; it doesn't compile.

error C2679: binary '>>' : no operator found which takes a right-hand
operand of type 'std::string' (or there is no acceptable conversion)

You have not included all the necessary header files (or you have
the wrong files in your include path).
I also added wstring and it compiles, but it doesn't work correctly: it
prints a lot of garbage.

wstring is not appropriate for UTF-8.

R.C.
 
Old Wolf

Koulbak said:
I have some Unicode (UTF-8) text files. I _tried_ to write a
simple program that reads one of them and writes it to the
standard output, but... of course it doesn't work. Is there
an easy way to do it? Thanks, K.

This is my program.

#include <fstream>
#include <iostream>
#include <string>

using namespace std;

int main() {
    ifstream infile("in.txt");
    string s;
    while (infile >> s) {
        cout << s;
    }
}

istream >> string reads a word (up to whitespace), and then
ignores any adjacent whitespace and newlines.
To do line-by-line reading, you would go:

while (getline(infile, s))
    cout << s;

But this is not good for UTF-8 files because newline characters
might be part of a UTF-8 code.

To output the whole file at once:

cout << infile.rdbuf();

I'm assuming you want to output UTF-8 on stdout (Standard
C++ offers no facilities for converting UTF-8 to a stream
of wide characters). Can you clarify your intention?

The best thing to do (IMHO) would be to open the file in
binary mode, and also force std::cout into binary mode (this
would require some system-specific code). Then no translation
will occur and it will work correctly.

If you can't force cout to binary, then it *might* work to
open the input in text mode too, and hope that the input
conversions match the output conversions!
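
A minimal sketch of that rdbuf approach, assuming the UTF-8 bytes should simply pass through untouched. Forcing std::cout itself into binary mode is system-specific (e.g. _setmode from <io.h> on Windows) and is omitted here:

#include <fstream>
#include <iostream>

int main() {
    // Open in binary mode so no newline translation touches the UTF-8 bytes.
    std::ifstream infile("in.txt", std::ios::binary);
    if (!infile) return 1;

    // Dump the whole file to standard output in one shot.
    std::cout << infile.rdbuf();
}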
 
Ioannis Vranos

Koulbak said:
Sorry, I was not clear at all. I would like to avoid the implementation
details as much as possible. I don't want to explicitly use Unicode
functions, but simply tell the compiler or the library that my character
encoding is Unicode and then read a file exactly in the usual way.

I would like to avoid learning a new set of functions to read and
manipulate Unicode characters, Unicode strings and so on. Of course, if it
is possible.

wchar_t represents the largest character set of a system, while char mainly
represents a byte and one-byte character sets. If you have to deal with
various character sets, you had better stick to wchar_t and the corresponding
facilities for it (which are the same as the plain char facilities, with an
additional w in their names).
 
Ioannis Vranos

Old said:
The best thing to do (IMHO) would be to open the file in
binary mode, and also force std::cout into binary mode (this
would require some system-specific code). Then no translation
will occur and it will work correctly.

If you can't force cout to binary, then it *might* work to
open the input in text mode too, and hope that the input
conversions match the output conversions!


What is wrong with the use of wcout?
 
Serge Skorokhodov (216716244)

Ioannis Vranos said:
What is wrong with the use of wcout?

UTF-8 is a stream of one-byte chars, with characters beyond ASCII
coded as multi-byte sequences. I guess that you need to read such
a stream as a char or binary stream and then decode each line
with an appropriate routine to UTF-16 Unicode, say
MultiByteToWideChar and WideCharToMultiByte on the Win32
platform. Other APIs exist on *nix platforms, e.g. iconv etc.
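
A Win32-only sketch of that MultiByteToWideChar route (not standard C++), assuming the input arrives as UTF-8 bytes in a std::string; error handling is omitted:

#include <string>
#include <windows.h>

std::wstring utf8_to_wide(const std::string& utf8) {
    if (utf8.empty()) return std::wstring();

    // First call asks how many UTF-16 code units are needed.
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                  static_cast<int>(utf8.size()), NULL, 0);
    std::wstring wide(len, L'\0');

    // Second call performs the actual conversion into the wstring's buffer.
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &wide[0], len);
    return wide;
}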
 
phil_gg04

I have some Unicode (UTF-8) text files. I _tried_ to write a simple
program that reads one of them and writes it to the standard output, but...
of course it doesn't work.

What character set do you want to use when writing to standard output?

If you want it to write using a character set other than the UTF-8 that
it read in, you need to do some conversion. You have to do this
explicitly. It will not happen automatically.

Assuming that your program is going to actually do something with the
text, rather than just reading it in and then writing it out again, you
need to decide what character set you want to use internally. I mostly
use UTF-8 internally and for input/output, so there is rarely any
conversion. I store this in chars. This is on Unix, and I'm in the
"western hemisphere". I understand that Windows programmers tend to
use UTF-16 quite often and that would also be sensible for non-European
languages. For that you should use wchar_t. You should not be using
ASCII for any new applications.

To actually perform the conversion you need something like the iconv
library. This is supported just about everywhere, but you'll want a
C++ wrapper for it to make it more palatable.

Regards, Phil.
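
A rough sketch of the iconv route Phil mentions (POSIX, not standard C++). The encoding names "UTF-8" and "WCHAR_T" are the ones GNU iconv understands; other implementations may spell them differently, and error handling is omitted:

#include <iconv.h>
#include <string>
#include <vector>

std::wstring utf8_to_wide(const std::string& utf8) {
    iconv_t cd = iconv_open("WCHAR_T", "UTF-8");   // to-encoding, from-encoding
    if (cd == (iconv_t)-1) return std::wstring();

    // One wchar_t per input byte is a safe upper bound for the output.
    std::vector<char> out(utf8.size() * sizeof(wchar_t) + sizeof(wchar_t));
    char* inbuf = const_cast<char*>(utf8.data());  // iconv wants non-const char**
    size_t inleft = utf8.size();
    char* outbuf = &out[0];
    size_t outleft = out.size();

    iconv(cd, &inbuf, &inleft, &outbuf, &outleft);
    iconv_close(cd);

    size_t written = out.size() - outleft;         // bytes actually produced
    return std::wstring(reinterpret_cast<wchar_t*>(&out[0]),
                        written / sizeof(wchar_t));
}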
 
Koulbak

I have some Unicode (UTF-8) text files. I _tried_ to write a simple
program that reads one of them and writes it to the standard output,
but... of course it doesn't work.

What character set do you want to use when writing to standard output? [..]
If you want it to write using a character set other than the UTF-8 that
it read in, you need to do some conversion. You have to do this
explicitly. It will not happen automatically.


Thanks! I think I now understand the problem perfectly.

My program was only an exercise, but the goal was to learn how to "set" the
library (?) to read Unicode (or possibly another encoding), manipulate it
using the string functionality of the standard library, and then write it
back in a particular encoding to a file or to the standard output.
Assuming that your program is going to actually do
something with the text, rather than just reading
it in and then writing it out again, you need to
decide what character set you want to use internally.

Is it really necessary that I specify the internal encoding? At my level
(school level) I have no performance problems, so if there exists a
default encoding, that is OK for me.

So I would like to specify the input file encoding and the output file
encoding, and then use my program, for example:

string s;
while (infile >> s) {
    if (s == "hello")
        {;}              // delete "hello" from input
    else
        {cout << s;}
}

I don't want, if possible, to specify wstring, wcout and so on, because
1. I don't want to change the program the day I need a different encoding.
2. A program written without wstring, wcout, etc. is more natural and
general and doesn't touch the implementation level.

[Old Wolf wrote:]
I'm assuming you want to output UTF-8 on stdout (Standard
C++ offers no facilities for converting UTF-8 to a stream
of wide characters). Can you clarify your intention?

I hope it's clearer now.

Thanks to all for the help. K.
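
A minimal runnable version of that exercise, following Phil's suggestion of keeping UTF-8 in plain chars end to end. Because UTF-8 never uses bytes below 0x80 inside a multi-byte sequence, whitespace splitting with >> and the comparison with "hello" work on the raw bytes, with no wide types at all:

#include <fstream>
#include <iostream>
#include <string>

using namespace std;

int main() {
    ifstream infile("in.txt");
    if (!infile) return 1;

    string s;
    while (infile >> s) {        // note: >> discards the whitespace between words
        if (s == "hello")
            ;                    // drop "hello" from the input
        else
            cout << s;
    }
}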
 
