best way to replace first word in string?

hagai26

I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them,
but I can also use regular expressions,
and I'm sure there are a lot of ways, but I really need an efficient one.
 
Larry Bates

There is a "gotcha" on this:

How do you define "word"? (E.g. can the
first word be followed by a comma, period,
or other punctuation, or is it always followed by a space?)

If it is always a space then this will be pretty
"efficient".

s = "aa to become"
firstword, restwords = s.split(' ', 1)
newstring = "/%s/ %s" % (firstword, restwords)

I'm sure the regular expression gurus here can come
up with something if it can be followed by other than
a space.
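
As a rough sketch of what such a pattern might look like, assuming a
"word" is a leading run of \w characters (so any following punctuation
is preserved):

import re

s = "aa, to become"
print(re.sub(r"^(\w+)", r"/\1/", s))   # -> "/aa/, to become"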

-Larry Bates
 
Mike Meyer

hagai26 said:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them,
but I can also use regular expressions,
and I'm sure there are a lot of ways, but I really need an efficient one.

Assuming you know the whitespace will be spaces, I like find:

new = "/aa/" + old[old.find(' '):]

As for efficiency - I suggest you investigate the timeit module, and
do some tests on data representative of what you're actually going to
be using.
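
A sketch of that kind of measurement with the timeit module; the
statement and sample string here are only illustrations, not
representative data:

import timeit

t = timeit.Timer("'/aa/' + old[old.find(' '):]",
                 setup="old = 'aa to become'")
print(t.timeit(number=100000))   # total seconds for 100000 runs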

<mike
 
Micah Elliott

hagai26 said:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them,
but I can also use regular expressions,
and I'm sure there are a lot of ways, but I really need an efficient one.

Of course there are many ways to skin this cat; here are some trials.
The timeit module is useful for comparison (and I think I'm using it
correctly :). I thought that string concatenation was rather
expensive, so its being faster than %-formatting surprised me a bit:

$ python -mtimeit '
res = "/%s/ %s"% tuple("a b c".split(" ", 1))'
100000 loops, best of 3: 3.87 usec per loop

$ python -mtimeit '
b,e = "a b c".split(" ", 1); res = "/"+b+"/ "+e'
100000 loops, best of 3: 2.78 usec per loop

$ python -mtimeit '
"/"+"a b c".replace(" ", "/ ", 1)'
100000 loops, best of 3: 2.32 usec per loop

$ python -mtimeit '
"/%s" % ("a b c".replace(" ", "/ ", 1))'
100000 loops, best of 3: 2.83 usec per loop

$ python -mtimeit '
"a b c".replace("", "/", 1).replace(" ", "/ ", 1)'
100000 loops, best of 3: 3.51 usec per loop

There are possibly better ways to do this with strings.

And the regex is comparatively slow, though I'm not confident this one
is optimally written:

$ python -mtimeit -s'import re' '
re.sub(r"^(\w*)", r"/\1/", "a b c")'
10000 loops, best of 3: 44.1 usec per loop

You'll probably want to experiment with longer strings if a test like
"a b c" is not representative of your typical input.
 
Fredrik Lundh

Micah said:
And the regex is comparatively slow, though I'm not confident this one
is optimally written:

$ python -mtimeit -s'import re' '
re.sub(r"^(\w*)", r"/\1/", "a b c")'
10000 loops, best of 3: 44.1 usec per loop

the above has to look the pattern up in the compilation cache for each loop,
and it also has to parse the template string. precompiling the pattern and
using a callback instead of a template string can speed things up somewhat:

$ python -mtimeit -s"import re; sub = re.compile(r'^(\w*)').sub" \
  "sub(lambda x: '/%s/' % x.groups(), 'a b c')"

(but the replace solutions should be faster anyway; it's not free to prepare
for a RE match, and sub uses the same split/join implementation as replace...)
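
for reference, a minimal sketch of the precompiled-pattern-plus-callback
approach described above:

import re

sub = re.compile(r"^(\w*)").sub
print(sub(lambda m: "/%s/" % m.group(1), "aa to become"))
# -> "/aa/ to become"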

</F>
 
Steven D'Aprano

hagai26 said:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them,
but I can also use regular expressions,
and I'm sure there are a lot of ways, but I really need an efficient one.

Efficient for what?

Efficient in disk-space used ("least source code")?

Efficient in RAM used ("smallest objects and compiled code")?

Efficient in execution time ("fastest")?

Efficient in programmer time ("quickest to write and debug")?

If you use regular expressions, does the time taken in loading the module
count, or can we assume you have already loaded it?

It will also help if you specify your problem a little better. Are you
replacing one word, and then you are done? Or are you repeating it hundreds
of millions of times? What is the context of the problem?

Most importantly, have you actually tested your code to see if it is
efficient enough, or are you just wasting your -- and our -- time with
premature optimization?

def replace_word(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + "".join(source.split(None, 1)[1:])

import time

def test():
    t = time.time()
    for i in range(10000):
        s = replace_word("aa to become", "/aa/")
    print ((time.time() - t)/10000), "s"

py> test()
3.6199092865e-06 s


Is that fast enough for you?
 
William Park

hagai26 said:
I am looking for the best and most efficient way to replace the first word
in a str, like this:
"aa to become" -> "/aa/ to become"
I know I can use split and then join them,
but I can also use regular expressions,
and I'm sure there are a lot of ways, but I really need an efficient one.

I doubt you'll find faster than Sed.

man sed
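
For concreteness, the sed version might look something like this (a
sketch, assuming words are delimited by spaces):

$ echo "aa to become" | sed 's|^[^ ]*|/&/|'
/aa/ to become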

--
William Park <[email protected]>, Toronto, Canada
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/
 
Steven D'Aprano

Micah said:
I thought that string concatenation was rather
expensive, so its being faster than %-formatting surprised me a bit:

Think about what string concatenation actually does:

s = "hello " + "world"

In pseudo-code, it does something like this:

- Count chars in "hello " (six chars).
- Count chars in "world" (five chars).
- Allocate eleven bytes.
- Copy six chars from "hello " and five from "world" into the newly
allocated bit of memory.

(This should not be thought of as the exact process that Python uses, but
simply illustrating the general procedure.)

Now think of what str-formatting would do:

s = "hello %s" % "world"

In pseudo-code, it might do something like this:

- Allocate a chunk of bytes, hopefully not too big or too small.
- Repeat until done:
    - Copy chars from the original string into the new string,
      until it hits a %s placeholder.
    - Grab the next string from the args, and copy chars from
      that into the new string. If the new string is too small,
      reallocate memory to make it bigger, potentially moving
      chunks of bytes around.

The string formatting pseudo-code is a lot more complicated and has to do
more work than just blindly copying bytes. It has to analyse the bytes it
is copying, looking for placeholders.

So string concatenation is more efficient, right? No. The thing is, a
*single* string concatenation is almost certainly more efficient than a
single string concatenation. But now look what happens when you repeat it:

s = "h" + "e" + "l" + "l" + "o" + " " + "w" + "o" + "r" + "l" + "d"

This ends up doing something like this:

- Allocate two bytes, copying "h" and "e" into them.
- Allocate three bytes, copying "he" and "l" into them.
- Allocate four bytes, copying "hel" and "l" into them.
....
- Allocate eleven bytes, copying "hello worl" and "d" into them.

The problem is that string concatenation doesn't scale efficiently. String
formatting, on the other hand, does more work to get started, but scales
better.

See, for example, this test code:

py> import time
py> def tester(n):
...     s1 = ""
...     s2 = "%s" * n
...     bytes = tuple([chr(i % 256) for i in range(n)])
...     t1 = time.time()
...     for i in range(n):
...         s1 = s1 + chr(i % 256)
...     t1 = time.time() - t1
...     t2 = time.time()
...     s2 = s2 % bytes
...     t2 = time.time() - t2
...     assert s1 == s2
...     print t1, t2
...
py> x = 100000
py> tester(x)
3.24212408066 0.01252317428
py> tester(x)
2.58376598358 0.01238489151
py> tester(x)
2.76262307167 0.01474809646

The string formatting is two orders of magnitude faster than the
concatenation. The speed difference becomes even more obvious when you
increase the number of strings being concatenated:

py> tester(x*10)
2888.56399703 0.13130998611

Almost fifty minutes, versus less than a quarter of a second.
 
Steven D'Aprano

The thing is, a
*single* string concatenation is almost certainly more efficient than a
single string concatenation.

Dagnabit, I meant a single string concatenation is more efficient than a
single string replacement using %.
 
Chris F.A. Johnson

William Park said:
I doubt you'll find faster than Sed.

On the contrary; to change a string, almost anything will be faster
than sed (except another external program).

If you are in a POSIX shell, parameter expansion will be a lot
faster.

In a python program, one of the solutions already posted will be
much faster.
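
A sketch of the parameter expansion approach, assuming the string
contains at least one space:

s="aa to become"
new="/${s%% *}/ ${s#* }"
echo "$new"    # -> /aa/ to become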
 
Mike Meyer

Steven D'Aprano said:
py> def tester(n):
...     s1 = ""
...     s2 = "%s" * n
...     bytes = tuple([chr(i % 256) for i in range(n)])
...     t1 = time.time()
...     for i in range(n):
...         s1 = s1 + chr(i % 256)
...     t1 = time.time() - t1
...     t2 = time.time()
...     s2 = s2 % bytes
...     t2 = time.time() - t2
...     assert s1 == s2
...     print t1, t2
...
py> x = 100000
py> tester(x)
3.24212408066 0.01252317428
py> tester(x)
2.58376598358 0.01238489151
py> tester(x)
2.76262307167 0.01474809646

The string formatting is two orders of magnitude faster than the
concatenation. The speed difference becomes even more obvious when you
increase the number of strings being concatenated:

The test isn't right - the addition test case includes the time to
convert the number into a char, including taking a modulo.

I couldn't resist adding the .join idiom to this test:

...     l1 = [chr(i % 256) for i in range(n)]
...     s1 = ""
...     t1 = time.time()
...     for c in l1:
...         s1 += c
...     t1 = time.time() - t1
...     s2 = '%s' * n
...     l2 = tuple(chr(i % 256) for i in range(n))
...     t2 = time.time()
...     s2 = s2 % l2
...     t2 = time.time() - t2
...     t3 = time.time()
...     s3 = ''.join(l2)
...     t3 = time.time() - t3
...     assert s1 == s2
...     assert s1 == s3
...     print t1, t2, t3
...
0.0544500350952 0.0271301269531 0.0232360363007

The "order of magnitude" now falls to a factor of two. The original
version of the test on my box also showed an order of magnitude
difference, so this isn't an implementation difference.

This version still includes the overhead of the for loop in the test.

The join idiom isn't enough faster to make a difference.

Steven D'Aprano said:
py> tester(x*10)
2888.56399703 0.13130998611

Here we get the addition idiom being closer to a factor of four
instead of two slower. The .join idiom is still nearly identical.

<mike
 
Ron Adam

Steven said:
def replace_word(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + "".join(source.split(None, 1)[1:])

import time

def test():
    t = time.time()
    for i in range(10000):
        s = replace_word("aa to become", "/aa/")
    print ((time.time() - t)/10000), "s"

py> test()
3.6199092865e-06 s


Is that fast enough for you?


I agree that in most cases it's premature optimization. But little tests
like this do help in learning to write well-performing code in general.

Don't forget a string can be sliced. In this case testing before you
leap is a win. ;-)


import time

def test(func, n):
    t = time.time()
    s = ''
    for i in range(n):
        s = func("aa to become", "/aa/")
    tfunc = time.time() - t
    print func.__name__, ':', (tfunc/n), "s"
    print s

def replace_word1(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + "".join(source.split(None, 1)[1:])

def replace_word2(source, newword):
    """Replace the first word of source with newword."""
    if ' ' in source:
        return newword + source[source.index(' '):]
    return newword

test(replace_word1, 10000)
test(replace_word2, 10000)


=======
replace_word1 : 3.09998989105e-006 s
/aa/ to become
replace_word2 : 1.60000324249e-006 s
/aa/ to become
 
William Park

Chris F.A. Johnson said:
On the contrary; to change a string, almost anything will be faster
than sed (except another external program).

If you are in a POSIX shell, parameter expansion will be a lot
faster.

In a python program, one of the solutions already posted will be
much faster.

Care to put a wager on your claim?

--
William Park <[email protected]>, Toronto, Canada
ThinFlash: Linux thin-client on USB key (flash) drive
http://home.eol.ca/~parkw/thinflash.html
BashDiff: Super Bash shell
http://freshmeat.net/projects/bashdiff/
 
Steven D'Aprano

Ron Adam said:
Don't forget a string can be sliced. In this case testing before you
leap is a win. ;-)

Not much of a win: only a factor of two, and unlikely to hold in all
cases. Imagine trying it on *really long* strings with the first space
close to the far end: the split-and-join algorithm has to walk the string
once, while your test-then-index algorithm has to walk it twice.

So for a mere factor of two benefit on short strings, I'd vote for the
less complex split-and-join version, although it is just a matter of
personal preference.
 
Chris F.A. Johnson

William Park said:
Care to put a wager on your claim?

In a shell, certainly.

If one of the python solutions is not faster than sed (e.g.,
os.system("sed .....")) I'll forget all about using python.
 
Steven D'Aprano

Mike Meyer said:
The test isn't right - the addition test case includes the time to
convert the number into a char, including taking a modulo.

I wondered if anyone would pick up on that :)

You are correct; however, that only adds a constant amount of time to
each concatenation. That's why I talked about order of magnitude
differences. If you look at the vast increase in time taken for
concatenation when going from 10**5 to 10**6 iterations, that cannot
be blamed on the char conversion.

At least, that's what it looks like to me -- I'm perplexed by the *vast*
increase in speed in your version, far more than I would have predicted
from pulling out the char conversion. I can think of three
possibilities:

(1) Your PC is *hugely* faster than mine;

(2) Your value of x is a lot smaller than I was using (you don't actually
say what x you use); or

(3) You are using a version and/or implementation of Python that has a
different underlying implementation of string concatenation.


I couldn't resist adding the .join idiom to this test:

[snip code]
0.0544500350952 0.0271301269531 0.0232360363007

The "order of magnitude" now falls to a factor of two. The original
version of the test on my box also showed an order of magnitude
difference, so this isn't an implementation difference.
[snip]
1.25092792511 0.311630964279 0.241738080978

Looking just at the improved test of string concatenation, I get times
about 0.02 second for n=10**4. For n=10**5, the time blows out to 2
seconds. For 10**6, it explodes through the roof to about 2800 seconds, or
about 45 minutes, and for 10**7 I'm predicting it would take something of
the order of 500 HOURS.

In other words, yes the char conversion adds some time to the process, but
for large numbers of iterations, it gets swamped by the time taken
repeatedly copying chars over and over again.
 
Ron Adam

Steven said:
Not much of a win: only a factor of two, and unlikely to hold in all
cases. Imagine trying it on *really long* strings with the first space
close to the far end: the split-and-join algorithm has to walk the string
once, while your test-then-index algorithm has to walk it twice.

So for a mere factor of two benefit on short strings, I'd vote for the
less complex split-and-join version, although it is just a matter of
personal preference.

Guess again... Are the results below what you were expecting?

Notice the join adds a space to the end if the source string is a single
word. But I allowed for that by adding one in the same case for the
index method.

The big win I was talking about was when no spaces are in the string.
The index can then just return the replacement.

These are relative percentages of time to each other. Smaller is better.

Type 1 = no spaces
Type 2 = space at 10% of length
Type 3 = space at 90% of length

Type: Length

Type 1: 10 split/join: 317.38% index: 31.51%
Type 2: 10 split/join: 212.02% index: 47.17%
Type 3: 10 split/join: 186.33% index: 53.67%
Type 1: 100 split/join: 581.75% index: 17.19%
Type 2: 100 split/join: 306.25% index: 32.65%
Type 3: 100 split/join: 238.81% index: 41.87%
Type 1: 1000 split/join: 1909.40% index: 5.24%
Type 2: 1000 split/join: 892.02% index: 11.21%
Type 3: 1000 split/join: 515.44% index: 19.40%
Type 1: 10000 split/join: 3390.22% index: 2.95%
Type 2: 10000 split/join: 2263.21% index: 4.42%
Type 3: 10000 split/join: 650.30% index: 15.38%
Type 1: 100000 split/join: 3342.08% index: 2.99%
Type 2: 100000 split/join: 1175.51% index: 8.51%
Type 3: 100000 split/join: 677.77% index: 14.75%
Type 1: 1000000 split/join: 3159.27% index: 3.17%
Type 2: 1000000 split/join: 867.39% index: 11.53%
Type 3: 1000000 split/join: 679.47% index: 14.72%




import time

def test(func, source):
    t = time.clock()
    n = 6000000/len(source)
    s = ''
    for i in xrange(n):
        s = func(source, "replace")
    tt = time.clock() - t
    return s, tt

def replace_word1(source, newword):
    """Replace the first word of source with newword."""
    return newword + " " + " ".join(source.split(None, 1)[1:])

def replace_word2(source, newword):
    """Replace the first word of source with newword."""
    if ' ' in source:
        return newword + source[source.index(' '):]
    return newword + ' '    # space needed to match join results

def makestrings(n):
    s1 = 'abcdefghij' * (n//10)
    i, j = n//10, n-n//10
    s2 = s1[:i] + ' ' + s1[i:] + 'd.'    # space near front
    s3 = s1[:j] + ' ' + s1[j:] + 'd.'    # space near end
    return [s1, s2, s3]

for n in [10, 100, 1000, 10000, 100000, 1000000]:
    for sn, s in enumerate(makestrings(n)):
        r1, t1 = test(replace_word1, s)
        r2, t2 = test(replace_word2, s)
        assert r1 == r2
        print "Type %i: %-8i split/join: %.2f%% index: %.2f%%" \
            % (sn+1, n, t1/t2*100.0, t2/t1*100.0)
 
bonono

interesting. seems that "if ' ' in source:" is highly optimized code,
as it is even faster than "if str.find(' ') != -1:", when I assume they
end up in the same C loops?
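
The claim is easy to check with timeit; a sketch of the comparison
(numbers omitted, since they vary by machine):

$ python -mtimeit -s"s = 'aa to become'" "' ' in s"
$ python -mtimeit -s"s = 'aa to become'" "s.find(' ') != -1"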
 
Ron Adam

bonono said:
interesting. seems that "if ' ' in source:" is highly optimized code,
as it is even faster than "if str.find(' ') != -1:", when I assume they
end up in the same C loops?


The 'in' version doesn't call a function and has a simpler compare. I
would think both of those result in it being somewhat faster, if it
indeed calls the same C loop.

...     if ' ' in a:
...         pass
...
  2           0 LOAD_CONST               1 (' ')
              3 LOAD_FAST                0 (a)
              6 COMPARE_OP               6 (in)
              9 JUMP_IF_FALSE            4 (to 16)
             12 POP_TOP

  3          13 JUMP_FORWARD             1 (to 17)

...     if str.find(' ') != -1:
...         pass
...
  2           0 LOAD_GLOBAL              0 (str)
              3 LOAD_ATTR                1 (find)
              6 LOAD_CONST               1 (' ')
              9 CALL_FUNCTION            1
             12 LOAD_CONST               2 (-1)
             15 COMPARE_OP               3 (!=)
             18 JUMP_IF_FALSE            4 (to 25)
             21 POP_TOP

  3          22 JUMP_FORWARD             1 (to 26)
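
Listings like the ones above can be reproduced with the dis module; a
minimal sketch:

import dis

def f(a):
    if ' ' in a:
        pass

dis.dis(f)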
 
Mike Meyer

Steven D'Aprano said:
I wondered if anyone would pick up on that :)
You are correct, however that only adds a constant amount of time to
the time it takes for each concatenation. That's why I talked about order
of magnitude differences. If you look at the vast increase in time taken
for concatenation when going from 10**5 to 10**6 iterations, that cannot
be blamed on the char conversion.

True. String addition is O(n^2); the conversion time is O(n). But
fair's fair.
At least, that's what it looks like to me -- I'm perplexed by the *vast*
increase in speed in your version, far more than I would have predicted
from pulling out the char conversion. I can think of three
possibilities:

Everything got faster, so it wasn't just pulling the chr conversion.
(1) Your PC is *hugely* faster than mine;

It's a 3Ghz P4.
(2) Your value of x is a lot smaller than I was using (you don't actually
say what x you use); or

It's still in the buffer, and I copied it from your timings:
(3) You are using a version and/or implementation of Python that has a
different underlying implementation of string concatenation.

I'm running Python 2.4.1 built with GCC 3.4.2.

<mike
 
