Unicode memory usage

Gary Robinson

We have an application which involves storing a lot of strings in RAM. It
would be most convenient to use Unicode strings, but I am wary of doubling
memory usage. My fear is based on the idea that Unicode strings may take two
bytes per character in order to accommodate non-ASCII characters.

But I don't know whether that's actually how Python strings work internally.

So, my question: Do Unicode strings in Python take substantially more memory
than classic Python strings or not, assuming the strings are generally 99%
ASCII characters (but not 100%)?


--Gary

--
Putting http://wecanstopspam.org in your email helps it pass through
overzealous spam filters.

Gary Robinson
CEO
Transpose, LLC
(e-mail address removed)
207-942-3463
http://www.transpose.com
http://radio.weblogs.com/0101454
 

Martin v. Löwis

Gary said:
But I don't know whether that's actually how Python strings work internally.

Python Unicode objects normally use 2 bytes per character, unless Python
is built in UCS-4 mode, in which case they use 4 bytes per character.

So, my question: Do Unicode strings in Python take substantially more memory
than classic Python strings or not, assuming the strings are generally 99%
ASCII characters (but not 100%)?

Yes; for 99% of the characters, you can expect the extra byte (or bytes,
in UCS-4 mode) to be null. Whether this is substantial depends on the
total amount of storage that you need for string objects, compared to the
storage needed for other things, or the storage available.

Regards,
Martin
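
[Editor's note: the exchange above describes Python 2, where `str` and
`unicode` were separate types. On modern Python 3, PEP 393 made `str`
storage adaptive, so a string containing only Latin-1-range characters
uses one byte per character regardless. A minimal sketch, assuming only
the standard library's `sys.getsizeof`; exact byte counts vary by
interpreter version:]

```python
import sys

# In CPython 3 (PEP 393), each str uses the narrowest storage that fits:
# 1 byte/char for Latin-1-range text, 2 or 4 bytes/char otherwise.
ascii_str = "a" * 1000
wide_str = "a" * 999 + "\u20ac"  # one non-Latin-1 char widens the whole string

ascii_size = sys.getsizeof(ascii_str)  # header + ~1 byte per character
wide_size = sys.getsizeof(wide_str)   # header + ~2 bytes per character

print(ascii_size, wide_size)
```

[So with 99%-ASCII data, today's Python only pays the wider rate on the
strings that actually contain a non-ASCII character; running
`sys.getsizeof` over real data settles the question empirically.]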
 
