Theoretical Problem

Materialised

I have been given the task of developing a program to sit next to a
CGI-based C program (I know this is off-topic, but my question refers
only to the standard C part of the code).

Basically, what the program needs to do is open a text file containing
hundreds, maybe even thousands, of items and remove the duplicates.
Also, if a line's length exceeds a certain number of characters, I wish
to remove it.

I was thinking of using malloc to create a dynamically allocated array
to load the elements into, then looping through them all to remove the
duplicates.

This seems a little resource heavy considering the size of the file I am
dealing with. Does anyone have any suggestions on how to better handle
the task at hand?

Thanks in advance
Mick
 
Ben Pfaff

Materialised said:
Basically, what the program needs to do is open a text file
containing hundreds, maybe even thousands, of items and remove the
duplicates.
Also, if a line's length exceeds a certain number of characters, I
wish to remove it.

I think you'd be best off combining existing tools to do this.
For example, on most OSes you could use `sort < input | uniq >
output', or something similar, to remove the duplicates.
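
For instance, assuming the length limit is 80 characters (adjust to
taste), something along the lines of

    awk 'length <= 80' < input | sort -u > output

would handle both the over-long-line filter and the duplicate removal
in one go, at the cost of reordering the lines.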
 
Eric

Materialised said:
Does anyone have any suggestions on how to better handle
the task at hand?

Assuming you cannot use existing tools, as Ben suggested, I cannot think
of a better method to solve your problem.

I would suggest using a dynamic data structure such that as you read in
each item, you insert the item and keep everything sorted.

As you do the insert then, you can also check to see if it is a
duplicate and abort the insert.

This way, you only have to do one pass over the data to remove the
duplicates and once complete you can just write everything back out.

As for the data structure, I might suggest using a red-black tree.
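
For what it's worth, here is a minimal sketch of that approach in
standard C. It uses a plain (unbalanced) binary search tree to keep
the example short; a production version would use a balanced tree such
as red-black. The 256-character limit is just an assumed figure, and
unique lines are written out as they are first seen rather than all at
the end:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_LINE 256   /* assumed limit; longer lines are dropped */

    struct node {
        char *line;
        struct node *left, *right;
    };

    /* Insert line into the tree; return 1 if it was new, 0 if a duplicate. */
    static int insert(struct node **root, const char *line)
    {
        int cmp;

        while (*root != NULL) {
            cmp = strcmp(line, (*root)->line);
            if (cmp == 0)
                return 0;                    /* duplicate: abort the insert */
            root = (cmp < 0) ? &(*root)->left : &(*root)->right;
        }
        *root = malloc(sizeof **root);
        if (*root == NULL || ((*root)->line = malloc(strlen(line) + 1)) == NULL)
            exit(EXIT_FAILURE);
        strcpy((*root)->line, line);
        (*root)->left = (*root)->right = NULL;
        return 1;
    }

    int main(void)
    {
        char buf[MAX_LINE + 2];              /* room for '\n' and '\0' */
        struct node *root = NULL;

        while (fgets(buf, sizeof buf, stdin) != NULL) {
            size_t len = strlen(buf);
            if (len > 0 && buf[len - 1] == '\n')
                buf[--len] = '\0';
            if (len > MAX_LINE) {            /* too long: discard the rest of the line */
                int c;
                while ((c = getchar()) != EOF && c != '\n')
                    ;
                continue;
            }
            if (insert(&root, buf))          /* print each line the first time it is seen */
                puts(buf);
        }
        return 0;
    }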
 
Ben Pfaff

I would suggest using a dynamic data structure such that as you read in
each item, you insert the item and keep everything sorted.

As you do the insert then, you can also check to see if it is a
duplicate and abort the insert.

This way, you only have to do one pass over the data to remove the
duplicates and once complete you can just write everything back out.

As for the data structure, I might suggest using a red-black tree.

I can't think why a red-black tree would be superior to a hash
table here. Hash tables work fine for finding duplicates.
 
Tim Woodall

Assuming you cannot use existing tools, as Ben suggested, I cannot think
of a better method to solve your problem.

I would suggest using a dynamic data structure such that as you read in
each item, you insert the item and keep everything sorted.

As you do the insert then, you can also check to see if it is a
duplicate and abort the insert.

This way, you only have to do one pass over the data to remove the
duplicates and once complete you can just write everything back out.

As for the data structure, I might suggest using a red-black tree.


If there are millions of lines then you will probably do better
storing a hash[1] than storing the actual data. You can write out each
line as you read it in unless its hash has been seen before.
Of course this doesn't sort the data, which may be an additional
requirement.

[1] e.g. MD5 or SHA-1. You don't want to get any collisions.
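
A rough sketch of that scheme: each line is written out as soon as it
is read, unless its digest has been seen before. A 64-bit FNV-1a hash
is used here purely to keep the example self-contained; for the
collision margins mentioned above you would substitute MD5 or SHA-1
from a real library, and the 4096-byte buffer is just an assumed
line-length bound:

    #include <stdio.h>
    #include <stdlib.h>

    /* 64-bit FNV-1a, used only to keep the sketch self-contained.  For
       better collision guarantees, substitute MD5 or SHA-1. */
    static unsigned long long hash_line(const char *s)
    {
        unsigned long long h = 14695981039346656037ULL;

        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main(void)
    {
        char buf[4096];                      /* assumed upper bound on line length */
        unsigned long long *seen = NULL;
        size_t nseen = 0, cap = 0, i;

        while (fgets(buf, sizeof buf, stdin) != NULL) {
            unsigned long long h = hash_line(buf);
            int dup = 0;

            for (i = 0; i < nseen; i++)      /* linear scan; fine for a few thousand lines */
                if (seen[i] == h) {
                    dup = 1;
                    break;
                }
            if (dup)
                continue;
            if (nseen == cap) {              /* grow the digest array as needed */
                cap = cap ? cap * 2 : 1024;
                seen = realloc(seen, cap * sizeof *seen);
                if (seen == NULL)
                    return EXIT_FAILURE;
            }
            seen[nseen++] = h;
            fputs(buf, stdout);              /* write the line out the first time it is seen */
        }
        free(seen);
        return 0;
    }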
 
glen herrmannsfeldt

Materialised wrote:

(snip)
Basically, what the program needs to do is open a text file containing
hundreds, maybe even thousands, of items and remove the duplicates.
Also, if a line's length exceeds a certain number of characters, I wish
to remove it.

(snip)

Does it need to keep 100% of the non-duplicated lines, or is
99.99999999% good enough? I would keep a hash table of the hashed
values of the lines. Maybe a 64-bit or even 96- or 128-bit CRC. There
is an extremely small chance that two different lines would generate the
same hash value, but it is not zero. Storing only the hash but not the
lines saves a lot of memory. Note that N lines have about N**2/2
possible pairs (the birthday problem), so a 32-bit CRC may not be good
enough.
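
As a rough back-of-the-envelope check: with N on the order of a million
lines there are about 5 * 10**11 pairs, so a 32-bit hash would be
expected to produce on the order of a hundred accidental collisions
(5 * 10**11 / 2**32), while a 64-bit hash brings the expected number
down to roughly 3 * 10**-8.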

Otherwise, doing it as an external sort, such as the Unix sort command,
is pretty fast and almost unlimited in the length of file you can
process. (You need enough disk space to hold the file and the
intermediate sort work files.)

-- glen
 
Eric

Ben Pfaff said:
I can't think why a red-black tree would be superior to a hash
table here. Hash tables work fine for finding duplicates.

That works too, as long as one is careful to avoid an inordinate number
of collisions.
 
Eric

Eric said:
That works too, as long as one is careful to avoid an inordinate number
of collisions.

Of course, you need some kind of data structure to hold the hash table,
and since we do not apparently know how big that table needs to be, this
could lead back to a red-black tree. The two ideas are not necessarily
mutually exclusive.
 
Bigdakine

Materialised said:
I have been given the task of developing a program to sit next to a
CGI-based C program (I know this is off-topic, but my question refers
only to the standard C part of the code).

Basically, what the program needs to do is open a text file containing
hundreds, maybe even thousands, of items and remove the duplicates.
Also, if a line's length exceeds a certain number of characters, I wish
to remove it.

I was thinking of using malloc to create a dynamically allocated array
to load the elements into, then looping through them all to remove the
duplicates.

This seems a little resource heavy considering the size of the file I am
dealing with. Does anyone have any suggestions on how to better handle
the task at hand?

A hash table will make searching quicker.

Stuart
Dr. Stuart A. Weinstein
Ewa Beach Institute of Tectonics
"To err is human, but to really foul things up
requires a creationist"
 
Anuj Heer

What you can do is probably sort the items and then remove duplicate
entries from the list, which, as my friends have said, is slower because
it requires multiple passes through the data. What you could do instead
is an insertion sort to another file, and that would help you solve the
problem faster.
 
Richard Heathfield

Anuj said:
What you can do is probably sort the items and then remove duplicate
entries from the list, which, as my friends have said, is slower because
it requires multiple passes through the data. What you could do instead
is an insertion sort to another file, and that would help you solve the
problem faster.

No need for multiple passes. If you like trees, pour the file into a tree,
and duplicates will be eliminated automagically. If you prefer hash, hash
each line and filter out the dups that way. Or, better still, use existing
tools, like Ben suggested.
 
Randy Howard

Of course, you need some kind of data structure to hold the hash table,
and since we do not apparently know how big that table needs to be,

Simple hash chaining seems like a perfectly suitable method for handling
the dynamic size issue. Any decent data structures book should have
example code to get the OP started.
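
For instance, the skeleton of such a chained table might look something
like this; the bucket count and the djb2 hash are arbitrary choices,
and the chains simply grow as needed, so the table itself never has to
be resized:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 4096                    /* arbitrary fixed bucket count */

    struct entry {
        char *line;
        struct entry *next;                  /* entries sharing a bucket are chained */
    };

    static struct entry *table[NBUCKETS];

    static unsigned hash(const char *s)      /* Bernstein's djb2, one common choice */
    {
        unsigned h = 5381;

        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    /* Return 1 if line was added (first sighting), 0 if it was already present. */
    static int add_if_new(const char *line)
    {
        unsigned b = hash(line);
        struct entry *e;

        for (e = table[b]; e != NULL; e = e->next)
            if (strcmp(e->line, line) == 0)
                return 0;
        e = malloc(sizeof *e);
        if (e == NULL || (e->line = malloc(strlen(line) + 1)) == NULL)
            exit(EXIT_FAILURE);
        strcpy(e->line, line);
        e->next = table[b];                  /* push onto the front of the chain */
        table[b] = e;
        return 1;
    }

    int main(void)
    {
        char buf[4096];                      /* assumed upper bound on line length */

        while (fgets(buf, sizeof buf, stdin) != NULL)
            if (add_if_new(buf))
                fputs(buf, stdout);
        return 0;
    }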
 
Eric

Randy Howard said:
Simple hash chaining seems like a perfectly suitable method for handling
the dynamic size issue.

Yes, and I would probably suggest a red-black tree be used for the
chaining. As I mentioned...hash tables and red-black trees are not
mutually exclusive concepts.
 
Randy Howard

Yes, and I would probably suggest a red-black tree be used for the
chaining. As I mentioned...hash tables and red-black trees are not
mutually exclusive concepts.

OK, you've established (repeatedly) that you like RB trees. Move on.
 
Randy Howard

Do you see a flaw in something I said?

*sigh* Are you a relative of R. Bayer (symmetric B trees)?

There is nothing wrong with RB trees per se, same goes for a lot of other
similar data structures. However, since you won't let it go, perhaps a
review of section 14.6 of Sedgewick's Algorithms in C, Third Edition,
Parts 1-4 will be enlightening. Pay particular attention to Table 14.1,
in which RB trees used with hash chaining are demonstrated to NOT give
particularly good performance, either for construction or search times
as compared to other methods. In fact, none of the other 4 methods
considered have worse performance than RB hashing. Go figure.

As the empirical study in the text points out, simple linear probing
with table expansion by doubling is considerably faster, and far simpler
to code. (RB hashing, the method you recommend ad nauseam, took 2.5X
longer for table construction and 5.6X longer to handle missed searches
than linear probing.) The LP method also uses considerably less memory
overhead while providing performance benefits.
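
For reference, the core of a linear-probing string set with doubling
really is only a few dozen lines. A rough sketch (this is not
Sedgewick's code; the hash function and initial size are arbitrary
choices):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Open-addressed string set: linear probing, doubling when half full. */
    static char **slots;
    static size_t nslots, nused;

    static unsigned long hash_str(const char *s)
    {
        unsigned long h = 5381;              /* Bernstein's djb2 */

        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    static void grow(void);

    /* Return 1 if s was inserted (first sighting), 0 if already present. */
    static int probe_insert(const char *s)
    {
        size_t i;

        if (nused * 2 >= nslots)             /* keep the table at most half full */
            grow();
        i = hash_str(s) % nslots;
        while (slots[i] != NULL) {           /* linear probing */
            if (strcmp(slots[i], s) == 0)
                return 0;
            i = (i + 1) % nslots;
        }
        slots[i] = malloc(strlen(s) + 1);
        if (slots[i] == NULL)
            exit(EXIT_FAILURE);
        strcpy(slots[i], s);
        nused++;
        return 1;
    }

    static void grow(void)
    {
        char **old = slots;
        size_t oldn = nslots, i, j;

        nslots = oldn ? oldn * 2 : 1024;     /* double; 1024 is an arbitrary start */
        slots = malloc(nslots * sizeof *slots);
        if (slots == NULL)
            exit(EXIT_FAILURE);
        for (i = 0; i < nslots; i++)
            slots[i] = NULL;
        for (i = 0; i < oldn; i++)           /* rehash everything into the new table */
            if (old[i] != NULL) {
                j = hash_str(old[i]) % nslots;
                while (slots[j] != NULL)
                    j = (j + 1) % nslots;
                slots[j] = old[i];
            }
        free(old);
    }

    int main(void)
    {
        char buf[4096];                      /* assumed upper bound on line length */

        while (fgets(buf, sizeof buf, stdin) != NULL)
            if (probe_insert(buf))
                fputs(buf, stdout);
        return 0;
    }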
 
Eric

Randy Howard said:
Pay particular attention to Table 14.1,
in which RB trees used with hash chaining are demonstrated to NOT give
particularly good performance, either for construction or search times
as compared to other methods.

As is specifically mentioned, that table only demonstrates this for
32-bit ints that are easily hashed. The case being dealt with in this
thread is quite different. As such, Table 14.1 is simply irrelevant.
RB hashing, the method you recommend ad nauseam, took 2.5X longer for
table construction and 5.6X longer to handle missed searches than linear
probing.

The kind of hashing being talked about by Sedgewick does not consider
the need to avoid inserting duplicates. The hashing being considered
here assumes that each new node to be inserted is unique. Note the
automatic 'N++' in the sample code, which maintains a total node count.

In other words, after one has computed the hash value and goes to
insert, when not using an RB tree, one must check each and every node
(let's say there are M nodes) at that hash value. With RB, one only
needs to check log(M) nodes. Now considering that we are inserting
strings and must do string comparisons, the use of RB can be a
significant savings.

Furthermore, hashing performs well only when the hash function can both:

1. Be computed quickly and easily
2. Distribute the keys evenly over the table

Such a function depends upon knowing the characteristics of the data
being fed into it, and there does not exist a universal hash function.

Now, in 14.6, Sedgewick does talk about 'Ordered Hashing', which may be
useful for this case, but he does not perform any analysis with respect
to Ordered Hashing and instead informs the reader to look elsewhere.

I am not familiar with the various aspects of Ordered Hashing, but my
intuition and a bit of thought tell me that the use of RB will likely
give very good performance in this case and be able to deal with
worst-case situations just as easily.

So, my recommendation remains to just use an RB tree.

However, if a good hash function can be found (and there are no
guarantees here), it can speed things up quite nicely: use the hash to
find a bucket to put each item into, and then, for the nodes within that
bucket, use an RB tree.
 
