fastest navigation in a list


danny van elsen

hello all,

I have an application in which I build a list<node>, with potentially
thousands of nodes.

each node has an "index", and all nodes are ordered by this index. this
index reflects a value that has been computed before, and will range from
0 to N.

node 0: index 0
node 1: index 6
node 2: index 9
node 3: index 15
and so on.

my question is: what would theoretically be the fastest way of finding a
node by its index?

I now have a second list<node> in which I keep bookmarks, i.e. I run
through the first list and every 100 or so nodes I copy that node into
the second list. So that when I'm looking for the node with index 5000,
I don't have to run through the entire (first) list of nodes, but only a
dozen or so (in the second list). I also keep track of the last node
referenced, since most lookups will be in each other's neighbourhood,
and begin my next search from that node.
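
roughly, the lookup goes like this (a simplified sketch; here the
bookmarks are kept as iterators into the first list rather than copies,
since that is what gives a position to jump to, and the names are made up):

#include <list>

struct node { int index; /* ... other data ... */ };

typedef std::list<node>       node_list;
typedef node_list::iterator   node_iter;
typedef std::list<node_iter>  bookmark_list;

// find the first node whose index is >= wanted, starting from the
// nearest bookmark below it
node_iter find_node(node_list& nodes, bookmark_list& bookmarks, int wanted)
{
    node_iter start = nodes.begin();
    for (bookmark_list::iterator b = bookmarks.begin();
         b != bookmarks.end() && (*b)->index <= wanted; ++b)
    {
        start = *b;                  // last bookmark not past the target
    }
    while (start != nodes.end() && start->index < wanted)
        ++start;                     // short linear scan, ~100 steps at most
    return start;                    // nodes.end() if we ran off the end
}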

This works very well; it is very fast. But I was wondering if using an
"advance" method would be equally fast and efficient? If yes, I would
replace my list of bookmarks by a navigation based on "advance"...

greetings, D.
 

red floyd

danny said:
my question is: what would theoretically be the fastest way of finding a
node by its index?
[...]

Perhaps a better data structure? std::set is already ordered, giving
O(log n) search times.
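
For example (just a sketch, assuming the node type carries the index and
is ordered by it):

#include <set>

struct node
{
    int index;
    // ... other data ...
    bool operator<(const node& rhs) const { return index < rhs.index; }
};

// lookup is O(log n); insertions keep the set ordered automatically
const node* find_by_index(const std::set<node>& nodes, int wanted)
{
    node probe;
    probe.index = wanted;            // only the key part matters for the search
    std::set<node>::const_iterator it = nodes.find(probe);
    return it == nodes.end() ? 0 : &*it;
}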
 

Wiseguy

danny van elsen said:
each node has an "index", and all nodes are ordered by this index.
[...]
my question is: what would theoretically be the fastest way of finding a
node by its index?

If you have an index and that index is suitable to use as a "key" then
create a map or multimap instead of a list.
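
Something along these lines, perhaps (a sketch; "node" stands for
whatever payload you keep per index):

#include <map>

struct node { /* ... payload; the index need not be stored twice ... */ };

typedef std::map<int, node> node_map;   // key = index, kept sorted, O(log n) lookup

node* find_by_index(node_map& nodes, int wanted)
{
    node_map::iterator it = nodes.find(wanted);
    return it == nodes.end() ? 0 : &it->second;
}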
 

Howard

danny van elsen said:
my question is: what would theoretically be the fastest way of finding a
node by its index?

If you want to keep that structure, instead of using one of the alternatives
mentioned by others, then a binary search is probably your best bet. That's
what it's designed for.
I now have a second list<node> in which I keep bookmarks, i.e. I run
through the first list and every 100 or so nodes I copy that node into
the second list. So that when I'm looking for the node with index 5000,
I don't have to run through the entire (first) list of nodes, but only a
dozen or so (in the second list). I also keep track of the last node
referenced, since most lookups will be in each other's neighbourhood,
and begin my next search from that node.

This works very well; it is very fast. But I was wondering if using an
"advance" method would be equally fast and efficient? If yes, I would
replace my list of bookmarks by a navigation based on "advance"...

(Sorry, I don't know what an "advance" method is.)
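
For the binary search, something like std::lower_bound should do it
(just a sketch, assuming a node type with an int index member):

#include <algorithm>
#include <list>

struct node { int index; /* ... */ };

bool index_less(const node& n, int wanted) { return n.index < wanted; }

std::list<node>::iterator find_by_index(std::list<node>& nodes, int wanted)
{
    // binary search over the sorted sequence: logarithmic number of comparisons
    std::list<node>::iterator it =
        std::lower_bound(nodes.begin(), nodes.end(), wanted, index_less);
    return (it != nodes.end() && it->index == wanted) ? it : nodes.end();
}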

-Howard
 

Richard Herring

Other posters have suggested using map, but that's not necessarily any
better. There are too many unanswered questions:

Is there a particular reason for choosing a list as the basic storage
structure?
Are the index values actually in sorted order?
Are elements inserted all at once or sporadically?
Are elements ever removed?

Howard said:
If you want to keep that structure, instead of using one of the alternatives
mentioned by others, then a binary search is probably your best bet. That's
what it's designed for.

Except that std::list only has bidirectional iterators. binary_search
will work, but it may not be as fast as you expect.

Danny's bookmark scheme is sometimes known as "indexed sequential" in
database circles.

Howard said:
(Sorry, I don't know what an "advance" method is.)

It's what you use to move non-random-access iterators by more than one
step ;-)
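
For example:

#include <iterator>
#include <list>

void example(std::list<int>& l)
{
    std::list<int>::iterator it = l.begin();
    // for a list iterator this takes 100 individual steps; for a
    // random-access iterator (vector, deque) it is one constant-time jump.
    // (assumes the list holds at least 100 elements)
    std::advance(it, 100);
}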
 

Howard

Richard Herring said:
Except that std::list only has bidirectional iterators. binary_search will
work, but it may not be as fast as you expect.

Ah. That would tend to make the search less efficient, eh? :)
Danny's bookmark scheme is sometimes known as "indexed sequential" in
database circles.

It's what you use to move non-random-access iterators by more than one
step ;-)

Ok, thanks for the info.

-Howard
 

danny van elsen

Richard Herring said:
Other posters have suggested using map, but that's not necessarily any
better. There are too many unanswered questions:

Is there a particular reason for choosing a list as the basic storage
structure?
Are the index values actually in sorted order? Are elements inserted all
at once or sporadically? Are elements ever removed?

The list will be the result of an analysis phase in which I start out with
perhaps tens of thousands of nodes; each node will be looked at and most
probably split up into several new nodes. So insertion is certainly not all
at once, and 'a little bit random', because not all nodes will be
treated sequentially. I will certainly end up with many times the
original number of nodes.

The indexes will always remain ordered, however.

list is certainly better than vector, because of the inserts and removals;
but my artificial way of looking up a node made me think that perhaps
there are better predefined methods available?

Richard Herring said:
Except that std::list only has bidirectional iterators. binary_search
will work, but it may not be as fast as you expect.

Do I understand correctly that what makes you say this is that there
are no random-access iterators for a list? So every next step in the binary
search is necessarily a sequence of increments or decrements? Perhaps then
my way of searching is not that bad at all?
 

Richard Herring

danny van elsen said:
The list will be the result of an analysis phase in which I start out with
perhaps tens of thousands of nodes; each node will be looked at and most
probably split up into several new nodes. So insertion is certainly not all
at once, and 'a little bit random', because not all nodes will be
treated sequentially. I will certainly end up with many times the
original number of nodes.

The indexes will always remain ordered, however.

list is certainly better than vector, because of the inserts and removals;

That makes sense. But if there were no removals, and the sequence didn't
need to be sorted until the end of the analysis phase, you might
consider just appending to a vector, sorting it once only, and then
using binary_search.

And for the removals you might consider using the erase(remove_if(begin,
end, predicate), end) approach of copying wanted elements down over the
unwanted ones rather than removing them individually.
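
Something like this, roughly (a sketch; the "obsolete" flag is made up
and stands for whatever marks a node as unwanted):

#include <algorithm>
#include <vector>

struct node { int index; bool obsolete; /* ... */ };

bool by_index(const node& a, const node& b)  { return a.index < b.index; }
bool is_obsolete(const node& n)              { return n.obsolete; }

void finish_analysis(std::vector<node>& nodes)
{
    // append freely during the analysis phase, then sort exactly once;
    // after this, lower_bound / binary_search give O(log n) lookups
    // with cheap random-access steps
    std::sort(nodes.begin(), nodes.end(), by_index);

    // bulk removal: remove_if copies the survivors down over the unwanted
    // elements, erase then chops off the leftover tail in one go
    nodes.erase(std::remove_if(nodes.begin(), nodes.end(), is_obsolete),
                nodes.end());
}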

danny van elsen said:
but my artificial way of looking up a node made me think that perhaps
there are better predefined methods available?

Do I understand correctly that what makes you say this is that there
are no random-access iterators for a list? So every next step in the binary
search is necessarily a sequence of increments or decrements?

Correct. Under the hood std::list is probably (though the standard
doesn't require it) a straightforward doubly-linked list whose nodes are
randomly scattered around in memory, so the only way to traverse it is
to follow the links one by one. So a binary search on a list will do only
O(log N) comparisons, but O(N) iterator increments. If the cost of
the comparisons dominates, that's not too bad. If the increments are
expensive (probably not for std::list, where an increment is just an
indirection) there might be a problem.
Perhaps then
my way of searching is not that bad at all?

Maybe not. It has a long history (think of those dictionaries with a
thumb index!). Knuth mentions something similar (The Art of Computer
Programming, vol. III, p. 473). If you index every M'th element, it
takes O(N/M + M) comparisons and increments, which is minimised around
M = sqrt(N).

Ultimately the only way to find the best strategy is to try them all and
time them.
 

red floyd

Richard said:
Ultimately the only way to find the best strategy is to try them all and
time them.

Thought: I'm making the assumption that once your list is built up,
it's unchanging. If this is incorrect, the following suggestion no
longer works.

Make a std::set of iterators into your list. The sort function for the
set is "*iter1 < *iter2". You have the overhead of sorting the
iterators exactly once. There was an article in either CUJ or DDJ on a
"view library" which used a similar mechanism to create alternate views
into an STL Container.

The point being that once you've built up your set of iterators, all
searches are O(log n). Hell, you don't even need to use a set, you
could use a vector of iterators, sort the vector once, and then use a
binary search on the vector.

Kind of like what people did in C when they had data structures that
were too big to really move around efficiently -- keep a list/set/vector
of pointers to the data, and throw them around instead.
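
A sketch of the vector-of-iterators variant (the iterators stay valid as
long as the nodes they point at are not erased from the list):

#include <algorithm>
#include <list>
#include <vector>

struct node { int index; /* ... */ };

typedef std::list<node>::iterator node_iter;

bool deref_less(node_iter a, node_iter b)      { return a->index < b->index; }
bool deref_index_less(node_iter a, int wanted) { return a->index < wanted; }

// built once, after the list has stopped changing
void build_view(std::list<node>& nodes, std::vector<node_iter>& view)
{
    view.clear();
    for (node_iter i = nodes.begin(); i != nodes.end(); ++i)
        view.push_back(i);
    std::sort(view.begin(), view.end(), deref_less);   // the one-off cost
}

// O(log n) from then on, with genuinely random-access steps
node_iter find_by_index(std::list<node>& nodes,
                        const std::vector<node_iter>& view, int wanted)
{
    std::vector<node_iter>::const_iterator it =
        std::lower_bound(view.begin(), view.end(), wanted, deref_index_less);
    return (it != view.end() && (*it)->index == wanted) ? *it : nodes.end();
}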
 

danny van elsen

red floyd said:
Make a std::set of iterators into your list. The sort function for the
set is "*iter1 < *iter2". [...]

The point being that once you've built up your set of iterators, all
searches are O(log n).

ok, this is an interesting idea, I'll try that...

thanks for all answers!
 
