How to get the size of a unicode string / string in bytes?


pattreeya

Hello,

how can I get the number of bytes of a string in Python?
"len(string)" doesn't give me the size in bytes when I have a unicode
string, only its length in characters (it only matches the byte size
for ascii/latin1). In my data structure I have to store unicode strings
for many languages and must know exactly how big each stored string is,
so that I can read it back later.

Many thanks for any suggestion.

cheers!
pattreeya.
 

pattreeya

e.g. I use utf-8 for encoding/decoding:
s = "ทดสอบ"
u = s.decode("utf-8")
How can I get the size of u?
 

Stefan Behnel

pattreeya wrote:
how can I get the number of bytes of a string in Python?
"len(string)" doesn't give me the size in bytes when I have a unicode
string, only its length in characters (it only matches the byte size
for ascii/latin1). In my data structure I have to store unicode strings
for many languages and must know exactly how big each stored string is,
so that I can read it back later.

I do not quite know what you could possibly need that for, but AFAICT Python
only uses two different unicode encodings depending on the platform.

If 'sys.maxunicode' is bigger than 65536, you're on a 32-bit unicode build
(UCS-4); otherwise you're on a UCS-2 build. For UCS-4, you can multiply the
length of the unicode string by 4 to get the length of the internal memory
buffer; otherwise multiply it by 2.
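
A minimal sketch of that check (Python 2, where this narrow/wide build distinction exists):

import sys

# 4 bytes per code point on a wide (UCS-4) build, 2 on a narrow (UCS-2) build
char_width = 4 if sys.maxunicode > 0xFFFF else 2

u = u"test string"
internal_size = len(u) * char_width   # size of the internal buffer, not of any serialisation
print(internal_size)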

Normally, however, you should not need to deal with this kind of detail. Since
you say "read back later", maybe what you actually want is a serialisation of
the unicode string in, say, UTF-8 or something, that you can actually write to
a file and read back.
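
A minimal sketch of that serialisation approach (Python 2 syntax to match the thread; the file name is just an example):

# -*- coding: utf-8 -*-
u = u"ทดสอบ"
data = u.encode("utf-8")       # byte string; this is what actually gets stored
print(len(u))                  # 5 characters
print(len(data))               # 15 bytes in UTF-8

f = open("test.txt", "wb")
f.write(data)
f.close()

u2 = open("test.txt", "rb").read().decode("utf-8")
assert u2 == u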

Stefan
 

Diez B. Roggisch

Stefan said:
I do not quite know what you could possibly need that for, but AFAICT
Python only uses two different unicode encodings depending on the
platform.

It is very important for relational databases, as these usually constrain
the number of bytes per column - so you need the size in bytes, not the
number of unicode characters.
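
A hedged sketch of the kind of check that implies (the helper name and the 255-byte limit are hypothetical, not from the thread):

def fits_in_column(value, max_bytes=255):
    # the column stores the UTF-8 serialisation, so measure that, not len(value)
    return len(value.encode("utf-8")) <= max_bytes

print(fits_in_column(u"test"))   # True for short ASCII-only text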

Diez
 

pattreeya

I got the answer. What I needed was so simple, but I was blinded at that
moment.
Thanks for any suggestion!


--------

f = open("test.csv", rb)
t1 = f.readline()Dur-kalk trafigi, tikaniklik tehlikesi
Dur-kalk trafigi, tikaniklik tehlikesi
Dur-kalk trafigi, tikaniklik tehlikesi



Thnx!
 

Stefan Behnel

Diez B. Roggisch wrote:
It is very important for relational databases, as these usually constrain
the number of bytes per column - so you need the size in bytes, not the
number of unicode characters.

So then the easiest thing to do is: take the maximum length of a unicode
string you could possibly want to store, multiply it by 4 and make that the
length of the DB field.
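
As a quick worked example of that rule (the 300-character limit is purely illustrative):

MAX_CHARS = 300                 # hypothetical longest value the application accepts
column_bytes = MAX_CHARS * 4    # worst case: 4 bytes per character
# a 1200-byte column is therefore always large enough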

However, I'm pretty convinced it is a bad idea to store Python unicode strings
directly in a DB, especially as they are not portable. I assume that some DB
connectors honour the local platform encoding already, but I'd still say that
UTF-8 is your best friend here.

Stefan
 

Diez B. Roggisch

Stefan said:
So then the easiest thing to do is: take the maximum length of a unicode
string you could possibly want to store, multiply it by 4 and make that
the length of the DB field.
However, I'm pretty convinced it is a bad idea to store Python unicode
strings directly in a DB, especially as they are not portable. I assume
that some DB connectors honour the local platform encoding already, but
I'd still say that UTF-8 is your best friend here.

It was your assumption that the OP wanted to store the "real"
unicode-strings. A moot point anyway, as it is AFAIK not possible to get
their contents in byte form (except from a C extension).

And assuming 4 bytes per character is a bit wasteful, I'd say - especially
when your text is > 80% in the ASCII subset, as European and American
languages typically are.

The solution was given before: choose an encoding (utf-8 is certainly the
most favorable one), and compute the byte-string length.
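
A minimal sketch of that computation (the sample strings are only illustrations):

# -*- coding: utf-8 -*-
thai = u"ทดสอบ"
english = u"stop and go traffic"

print(len(thai))                       # 5 characters
print(len(thai.encode("utf-8")))       # 15 bytes in UTF-8
print(len(english))                    # 19 characters
print(len(english.encode("utf-8")))    # 19 bytes - mostly ASCII, so nowhere near 4 bytes per character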

Diez
 

Walter Dörwald

Diez said:
It was your assumption that the OP wanted to store the "real"
unicode-strings. A moot point anyway, as it is AFAIK not possible to get
their contents in byte form (except from a C extension).

It is possible:
'a\x00\xff\x00\xff\xff\xff\xdb\xff\xdf'
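
That byte string looks like the output of Python 2's "unicode_internal" codec; the input line is missing from the original post, but a sketch of what presumably produced it on a narrow little-endian build would be:

u = u"a\xff\uffff\U0010ffff"            # assumed input, not shown in the original
raw = u.encode("unicode_internal")      # the bytes of the interpreter's internal buffer
print(repr(raw))                        # 'a\x00\xff\x00\xff\xff\xff\xdb\xff\xdf' on such a build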

This encoding is useless though, as you can't use it for re-encoding on
another platform. (And it's probably not what the OP intended.)

Diez said:
And assuming 4 bytes per character is a bit wasteful, I'd say - especially
when your text is > 80% in the ASCII subset, as European and American
languages typically are.

That would require UTF-32 as an encoding, which Python currently doesn't
have.

Diez said:
The solution was given before: choose an encoding (utf-8 is certainly the
most favorable one), and compute the byte-string length.

Exactly!

Servus,
Walter
 
