Fuzzy string comparison


Steve Bergman

I'm looking for a module to do fuzzy comparison of strings. I have 2
item master files which are supposed to be identical, but they have
thousands of records where the item numbers don't match in various
ways. One might include a '-' or have leading zeros, or have a single
character missing, or a zero that is typed as a letter 'O'. That kind
of thing. These tables currently reside in a MySQL database. I was
wondering if there is a good package to let me compare strings and
return a value that is a measure of their similarity. Kind of like
soundex but for strings that aren't words.

Thanks,
Steve Bergman
 

Wojciech Muła

Steve said:
I'm looking for a module to do fuzzy comparison of strings. [...]

Check the difflib module; it returns the difference between two sequences.
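For example, a minimal sketch using only the standard library (the function name is illustrative):

```python
# Minimal sketch: a similarity score via difflib (standard library).
import difflib

def similarity(a, b):
    # ratio() returns a float in [0.0, 1.0]; 1.0 means identical.
    return difflib.SequenceMatcher(None, a, b).ratio()

print(similarity("2410", "241O"))   # high: three of four characters match
print(similarity("2410", "8765"))   # low: no characters in common
```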
 

John Machin

Wojciech said:
Steve said:
I'm looking for a module to do fuzzy comparison of strings. [...]

Check the difflib module; it returns the difference between two sequences.

and it's intended for comparing text files, and is relatively slow.

Google "python levenshtein". You'll probably find this a better fit for
typoed keys in a database.

What I would do for a quick start on this exercise (as described) is:

To compare two strings, take copies, and:
1. strip out all spaces (including \xA0, i.e. the HTML &nbsp;, if the data
has been anywhere near the web) from each string; also all "-" (in general,
strip frequently occurring meaningless punctuation)
2. remove leading zeroes from each string
3. d = levenshtein_distance(string_a, string_b) # string_a etc is the
reduced string, not the original
4. error_metric = float(d) / max(len(string_a), len(string_b))

The error_metric will be 0.0 if the strings are the same (after
removing spaces, leading zeroes, etc) and 1.0 if they are completely
different (no characters in common).

... and you don't want anything "kind of like soundex". That's a bit
like saying you'd like to travel in an aeroplane "kind of like the
Wright brothers' " ;-)

Cheers,
John
 

Gabriel Genellina

Wojciech said:
Steve said:
I'm looking for a module to do fuzzy comparison of strings. [...]

Check the difflib module; it returns the difference between two sequences.

and it's intended for comparing text files, and is relatively slow.

Google "python levenshtein". You'll probably find this a better fit for
typoed keys in a database.

Other alternatives: trigram/n-gram similarity, the Jaro distance. There are
some Python implementations available.
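As a rough illustration of the trigram idea (the padding scheme mimics pg_trgm's; the function names are made up):

```python
def trigrams(s):
    # pad with two leading and one trailing blank so string edges contribute,
    # similar to what pg_trgm does
    padded = "  " + s + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def trigram_similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    if not (ta or tb):
        return 1.0
    # Jaccard similarity of the two trigram sets
    return len(ta & tb) / len(ta | tb)
```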


--
Gabriel Genellina
Softlab SRL






 

Carsten Haese

Wojciech said:
Steve said:
I'm looking for a module to do fuzzy comparison of strings. [...]

Check the difflib module; it returns the difference between two sequences.

and it's intended for comparing text files, and is relatively slow.

Google "python levenshtein". You'll probably find this a better fit for
typoed keys in a database.
[...]

Using the Levenshtein distance in combination with stripping "noise"
characters is a good start, but the OP might want to take it a step
further. One of the OP's requirements is to recognize visually similar
strings, but 241O (Letter O at the end) and 241X have the same
Levenshtein distance from 2410 (digit zero at the end) while the former
is visually much closer to 2410 than the latter.

It seems to me that this could be achieved by starting with a standard
Levenshtein implementation such as http://hetland.org/python/distance.py
and altering the line "change = change + 1" to something like "change =
change + visual_distance(a[j-1], b[i-1])". visual_distance() would be a
function that embodies the OP's idea of which character replacements are
tolerable by returning a number between 0 (the two characters are
visually identical) and 1 (the two characters are completely different).
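A sketch of that modification (the look-alike table and the 0.2 penalty are invented for illustration; the OP would tune both):

```python
# pairs the OP might consider look-alikes; extend to taste
LOOK_ALIKES = {frozenset("0O"), frozenset("1l"), frozenset("5S"), frozenset("2Z")}

def visual_distance(c1, c2):
    if c1 == c2:
        return 0.0
    if frozenset((c1, c2)) in LOOK_ALIKES:
        return 0.2   # small penalty for visually similar characters
    return 1.0

def weighted_levenshtein(a, b):
    # standard DP recurrence, with the substitution cost softened
    prev = [float(j) for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [float(i)]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1.0,                          # deletion
                           cur[j - 1] + 1.0,                       # insertion
                           prev[j - 1] + visual_distance(ca, cb))) # change
        prev = cur
    return prev[-1]
```

With this, 241O scores much closer to 2410 than 241X does, which is exactly the distinction plain Levenshtein misses.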

Hope this helps,

-Carsten
 

John Machin

Carsten said:
Wojciech said:
Steve Bergman wrote:
I'm looking for a module to do fuzzy comparison of strings. [...]

Check the difflib module; it returns the difference between two sequences.

and it's intended for comparing text files, and is relatively slow.

Google "python levenshtein". You'll probably find this a better fit for
typoed keys in a database.
[...]

Using the Levenshtein distance in combination with stripping "noise"
characters is a good start, but the OP might want to take it a step
further. One of the OP's requirements is to recognize visually similar
strings, but 241O (Letter O at the end) and 241X have the same
Levenshtein distance from 2410 (digit zero at the end) while the former
is visually much closer to 2410 than the latter.

It seems to me that this could be achieved by starting with a standard
Levenshtein implementation such as http://hetland.org/python/distance.py
and altering the line "change = change + 1" to something like "change =
change + visual_distance(a[j-1], b[i-1])". visual_distance() would be a
function that embodies the OP's idea of which character replacements are
tolerable by returning a number between 0 (the two characters are
visually identical) and 1 (the two characters are completely different).

Ya ya ya, I could have told him a whole lot more -- please consider
that what I did tell him was IMHO over-generous in response to an OT
question asking for assistance with performing a Google search.

... and given his keys are described as "numbers", a better example
might be 241O or 241o false-matching with 2416.

... and it might be a good idea if he ran the simplistic approach first
and saw what near-misses he actually came up with before complicating
it and slowing down what is already an O(N**2 * L**2) exercise in the
traditional/novice implementation, where N is the number of keys and L
is their average length.

The OP needs to think about 123456789 compared with 123426789; are they
the same account or not? What other information does he have?

HTH,
John
 

Jorge Godoy

Steve Bergman said:
I'm looking for a module to do fuzzy comparison of strings. I have 2
item master files which are supposed to be identical, but they have
thousands of records where the item numbers don't match in various
ways. One might include a '-' or have leading zeros, or have a single
character missing, or a zero that is typed as a letter 'O'. That kind
of thing. These tables currently reside in a MySQL database. I was
wondering if there is a good package to let me compare strings and
return a value that is a measure of their similarity. Kind of like
soundex but for strings that aren't words.

If you were using PostgreSQL, there's a contrib package (pg_trgm) that could
help a lot with that. It can show you the distance between two strings based
on a trigram comparison.

You can see how it works in the README
(http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm) and maybe
port it for your needs.

But it probably won't be a single-operation search; you'll have to
post-process the results to decide what to do about multiple matches.
 

John Machin

Duncan said:
Taking a copy of a string seems kind of superfluous in Python.

You are right, I really meant don't do:
original = original.strip().replace(....).replace(....)
(a strange way of doing things which I see occasionally in other folks'
code)

Cheers,
John
 

Steven D'Aprano

You are right, I really meant don't do:
original = original.strip().replace(....).replace(....)
(a strange way of doing things which I see occasionally in other folks'
code)

Well, sure it is strange if you call the variable "original". But if you
don't care about the original, what's so strange about writing this?

astring = astring.strip().replace(....).replace(....)

Why keep the original value of astring around if you no longer need it?
 

Steve Bergman

Thanks, all. Yes, Levenshtein seems to be the magic word I was looking
for. (It's blazingly fast, too.)

I suspect that if I strip out all the punctuation, etc. from both the
itemnumber and description columns, as suggested, and concatenate them,
pairing the record with its closest match in the other file, it ought
to be pretty accurate. Obviously, the final decision will be up to a
human being, but this should help them quite a bit.

BTW, excluding all the items that match exactly, I only have 8000 items
in one file to compare to 2600 in the other. As fast as
python-levenshtein seems to be, this should finish in well under a
minute.
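The pairing step might look something like this brute-force sketch (it uses difflib from the standard library as a stand-in for python-levenshtein, and the function names are illustrative):

```python
import difflib

def error_metric(a, b):
    # 0.0 = identical, 1.0 = nothing in common
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def closest_matches(keys_a, keys_b):
    # brute force: roughly 8000 x 2600 comparisons is tolerable here
    pairs = []
    for a in keys_a:
        best = min(keys_b, key=lambda b: error_metric(a, b))
        pairs.append((a, best, error_metric(a, best)))
    return pairs
```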

Thanks again.

-Steve
 

John Machin

Steve said:
Thanks, all. Yes, Levenshtein seems to be the magic word I was looking
for. (It's blazingly fast, too.)

I suspect that if I strip out all the punctuation, etc. from both the
itemnumber and description columns, as suggested, and concatenate them,
pairing the record with its closest match in the other file, it ought
to be pretty accurate. Obviously, the final decision will be up to a
human being, but this should help them quite a bit.

BTW, excluding all the items that match exactly, I only have 8000 items
in one file to compare to 2600 in the other. As fast as
python-levenshtein seems to be, this should finish in well under a
minute.

The above suggests that you plan to do a preliminary pass using exact
comparison, and remove exact-matching pairs from further consideration.
If that is the case, here are a few questions for you to ponder:

What about 789o123 in file A and 789o123 in file B? Are you concerned
about standardising your item-numbers?

What about cases like 7890123 and 789o123 in file A? Are you concerned
about duplicated records within a file?

What about cases like 7890123 and 789o123 in file A and 7890123 and
789o123 and 78-901-23 in file B? Are you concerned about grouping all
instances of the same item?
If you are, the magic phrase you are looking for is "union find".
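In case it helps, a minimal union-find sketch for that grouping step (all names here are invented for illustration):

```python
parent = {}

def find(x):
    # follow parent links to the root, halving the path as we go
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    # merge the groups containing x and y
    parent[find(x)] = find(y)

# union() every pair whose error metric falls below some threshold;
# afterwards, keys sharing a find() root form one group
union("7890123", "789o123")
union("789o123", "78-901-23")
```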

HTH,
John
 

Gabriel Genellina

In case you need something more, this article is a good starting point:
Record Linkage: A Machine Learning Approach, A Toolbox, and A Digital
Government Web Service (2003)
Mohamed G. Elfeky, Vassilios S. Verykios, Ahmed K. Elmagarmid, Thanaa
M. Ghanem, Ahmed R. Huwait.
http://citeseer.ist.psu.edu/elfeky03record.html


--
Gabriel Genellina
Softlab SRL






 

jmw

Gabriel said:
At Tuesday 26/12/2006 18:08, John Machin wrote:
I'm looking for a module to do fuzzy comparison of strings. [...]
Other alternatives: trigram, n-gram, Jaro's distance. There are some
Python implem. available.

Quick question, you mentioned the data you need to run comparisons on
is stored in a database. Is this string comparison a one-time
processing kind of thing to clean up the data, or are you going to have
to continually do fuzzy string comparison on the data in the database?
There are some papers out there on implementing n-gram string
comparisons completely in SQL so that you don't have to pull back all
the data in your tables in order to do fuzzy comparisons. I can drum
up some code I did a while ago and post it (in Java).
 
 
