map: iterator increment/decrement code

John

I was wondering why the STL RB-tree implementation is used when the
iterators of map are incremented/decremented to yield successors and
predecessors. This takes O(log n) time (and a lot of memory thrashing,
depending on the size of the tree), whereas a simple pointer chase would
make it O(1). Is there an easy way to code this in C++ (a threaded
red-black tree?), or to modify map so that ++it/--it on a map<>::iterator
becomes O(1)?

Thanks in advance for your comments,
--j
 
Victor Bazarov

John said:
> I was wondering why the STL RB Tree implementation is
> used when the iterators of map are incremented/decremented
> to yield successors and predecessors?

I am wondering why you are sure that that is what's happening.
> This takes O(log n)
> time (and a lot of memory thrashing depending on the size
> of the tree) whereas just a simple pointer chase would make it O(1).

And why are you sure that it's not so?
> Is there an easy way to code this in C++ (Threaded red
> black tree? ) OR Modify map so that map<>::iterator, ++it/--it
> becomes O(1)?

What makes you think all this is necessary? Do you have any definite
results, from actual tests, showing that the performance of ++/-- is
unacceptable? And even if you do, what makes you think it isn't just your
particular implementation of the Standard library? AFAIK, there is no
specific requirement in the Standard that 'std::map' or its iterators
have any particular implementation.

V
 
P.J. Plauger

John said:
> I was wondering why the STL RB Tree implementation is
> used when the iterators of map are incremented/decremented
> to yield successors and predecessors?

Because the RB tree is also used to do inserts, erases, and
finds?
> This takes O(log n)
> time (and a lot of memory thrashing depending on the size
> of the tree) whereas just a simple pointer chase would make it O(1).

And in practice, *most* increments and decrements (in a large
tree) are indeed O(1). It's just that every once in a while
you come to the end of a subtree and have to do O(log N)
operations to find the successor. So amortized time complexity
is O(log N).
> Is there an easy way to code this in C++ (Threaded red
> black tree? ) OR Modify map so that map<>::iterator, ++it/--it
> becomes O(1)?

You probably could, at the expense of making inserts and erases
even more expensive. They'd have to update two more links
(predecessor and successor) in addition to the existing three
(parent, left subtree, and right subtree). I suspect the
time complexity would still be O(log N), but with a notably
bigger multiplier.

And if you care about memory thrashing, you're bound to get
more of it with five pointers per node instead of three.
The member functions insert and erase also get bigger, so
you increase the chance of code thrashing.

And if you care about implementation bugs (as I do), you're
bound to get more of them when the code has to maintain two
only loosely related data structures instead of one to preserve
tree integrity.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

 
Kai-Uwe Bux

John said:
> I was wondering why the STL RB Tree implementation is
> used when the iterators of map are incremented/decremented
> to yield successors and predecessors? This takes O(log n)
> time (and a lot of memory thrashing depending on the size
> of the tree) whereas just a simple pointer chase would make it O(1).

Actually it is implementation dependent, but in any case, the
implementation you are hinting at is amortized constant time. That is not
that bad a design for an iterator: in most cases you will use them to
iterate through a rather longish piece of the map.
> Is there an easy way to code this in C++ (Threaded red
> black tree? )

That depends: what do you consider easy? You would have to roll your own
RB-tree implementation.
> OR Modify map so that map<>::iterator, ++it/--it becomes O(1)?

That would be possible with containers that are templated on the underlying
tree structure. I do not know if there is an implementation available that
supports this.


Best

Kai-Uwe Bux
 
Vladimir Marko

P.J. Plauger said:
> And in practice, *most* increments and decrements (in a large
> tree) are indeed O(1). It's just that every once in a while
> you come to the end of a subtree and have to do O(log N)
> operations to find the successor. So amortized time complexity
> is O(log N).

If you iterate from begin() to end() you go along each edge
exactly twice (once down, once up). And since there are exactly
size()-1 edges in a tree (plus 1 additional from the non-node
"root") the increment time complexity is amortized constant
unless you mean amortized _with respect to_ something other than
incrementing every incrementable iterator once.

Vladimir Marko
 
