Question about times of standard container operations


Ioannis Vranos

In TC++PL3 on page 464 (Standard Container Operations), times of the kind
O(log(n)) are mentioned.


As far as I know, "asymptotic times" of general algorithms are at least
O(n) (that is, at least relative to the number of elements), so how can
it be possible to be O(log(n)) in the case of standard library container
operations, in non-parallel systems?




--
Ioannis Vranos

C95 / C++03 Software Developer

http://www.cpp-software.net
 

Ioannis Vranos

Corrected:


Ioannis said:
In TC++PL3 on page 464 (Standard Container Operations), times of the kind
O(log(n)) are mentioned.


As far as I know, "asymptotic times" of general algorithms are at least
relative to n (where n is the number of elements), so how can it be
possible to be O(log(n)) in the case of standard library container
operations, in non-parallel systems?




--
Ioannis Vranos

C95 / C++03 Software Developer

http://www.cpp-software.net
 

Alf P. Steinbach

* Ioannis Vranos:
In TC++PL3 on page 464 (Standard Container Operations), times of the kind
O(log(n)) are mentioned.

As far as I know, "asymptotic times" of general algorithms are at least
O(n) (that is, at least relative to the number of elements), so how can
it be possible to be O(log(n)) in the case of standard library container
operations, in non-parallel systems?

It's not clear exactly what you're referring to, nor is it clear exactly what
you don't understand.

But anyway, let's start with the big-oh notation.

Formally big-oh refers to a lower bound. But the Holy Standard uses this
notation in the sense of big-Omega, which implies both a lower and an upper
bound. That is, in the standard's notation, if some operation has complexity
(number of fundamental operations) f(n), and the standard says it's O(g(n)),
then that means that when n is sufficiently large f(n) has K1*g(n) as a lower
bound, and K2*g(n) as an upper bound, for some constants K1 and K2.

For example, f(n) = 3*n^2 + 5 is O(n^2) because for sufficiently large n there
exist constants K1 and K2 such that K1*n^2 <= f(n) and f(n) <= K2*n^2.

Logarithmic times mainly occur when a set of items can be recursively
partitioned into subsets, where those subsets can be treated independently. For
example, a std::map may store the items in a sorted and fairly balanced tree.
Assuming such a tree, and that it's binary (branching factor 2), then starting
at the root of the tree the search for one of the n items reduces to either
finding it at the root, or searching among approx. n/2 items in the left
sub-tree, or searching among approx. n/2 items in the right sub-tree.
Descending into the tree, each branch taken halves the number of items left
to search among. You can only halve n approximately lg(n) times before
reaching a single item, at which point the sought-for item is either found
or known to be absent.
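A quick way to see where the lg(n) figure comes from (a minimal sketch of my
own, not from TC++PL): count how many times n can be halved before reaching 1.

#include <iostream>

int main()
{
    // For a few sizes n, count the halvings needed to get from n down
    // to 1.  That count is the depth of a balanced binary search tree
    // holding n items, i.e. the worst-case number of comparisons for a
    // lookup in such a tree.
    unsigned long const sizes[] = { 16, 1024, 1000000 };
    for (int i = 0; i != 3; ++i)
    {
        int steps = 0;
        for (unsigned long m = sizes[i]; m > 1; m /= 2)
            ++steps;
        std::cout << "n = " << sizes[i]
                  << ": about " << steps << " halvings\n";
    }
}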


Cheers & hth.,

- Alf
 

Ioannis Vranos

Alf said:
It's not clear exactly what you're referring to, nor is it clear exactly
what you don't understand.

But anyway, let's start with the big-oh notation.

Formally big-oh refers to a lower bound.


Big-Oh refers to the upper bound.



--
Ioannis Vranos

C95 / C++03 Software Developer

http://www.cpp-software.net
 

Daniel Pitts

Ioannis said:
relative to n (where n is the number of elements),

It depends on the algorithm used; there is no "general algorithms"
asymptotic time. I believe map and set use a red-black tree
implementation, which has O(log n) search/insertion/deletion times.
This doesn't require a parallel system.
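For example, a minimal sketch with std::set (the standard guarantees the
logarithmic complexity; the red-black tree is just the usual implementation
choice):

#include <iostream>
#include <set>

int main()
{
    std::set<int> s;                       // typically a balanced tree
    for (int i = 0; i != 1000; ++i)
        s.insert(i);                       // logarithmic per insertion

    // find() descends the tree, discarding half of the remaining
    // candidates at each node: O(log n) comparisons, all sequential.
    std::set<int>::const_iterator it = s.find(42);
    if (it != s.end())
        std::cout << "found " << *it << '\n';
}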

HTH,
Daniel.
 

James Kanze

* Alf P. Steinbach:
It's not clear exactly what you're referring to, nor is it
clear exactly what you don't understand.
But anyway, let's start with the big-oh notation.
Formally big-oh refers to a lower bound. But the Holy Standard
uses this notation in the sense of big-Omega, which implies
both a lower and an upper bound. That is, in the standard's
notation, if some operation has complexity (number of
fundamental operations) f(n), and the standard says it's
O(g(n)), then that means that when n is sufficiently large
f(n) has K1*g(n) as a lower bound, and K2*g(n) as an upper
bound, for some constants K1 and K2.

Does the standard even talk of big-O? The standard often gives
an upper bound: at most n comparisons, or something of the sort.
But in such cases, it doesn't mention O(n). The "complexity"
clauses in the algorithms section are generally more
constraining than big-O, since they specify maximum (or even
exact) values everywhere, even for very small n.
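For instance, if I remember the wording correctly, C++03 specifies
lower_bound as "Complexity: at most log(last - first) + 1 comparisons" --
an exact bound that holds for every n, which is strictly stronger than
merely saying O(log n).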
 

Alf P. Steinbach

* James Kanze:
Does the standard even talk of big-O. The standard often gives
an upper bound: at most n comparisons, or something of the sort.
But in such cases, it doesn't mention O(n). The "complexity"
clauses in the algorithms section are generally more
constraining than big-O, since they specify maximum (or even
exact) values everywhere, even for very small n.

Thanks, I assumed Ioannis' reference to O in the standard was correct.

I didn't check.

But assuming it does use big O, then, as you describe, the complexity measures
in the standard are upper bounds; anything else would be meaningless.

And big O is indeed an upper bound, contrary to what I wrote (a lower bound is
big Omega).

I shouldn't have posted in the middle of the night; that posting was just
idiocy (although it was a neat explanation of omega, except that some absolute
value invocations would have been even nicer). :-(


Cheers,

- Alf
 

James Kanze

Ioannis Vranos wrote:
Big-Oh refers to the upper bound.

FWIW, the Wikipedia says that it's asymptotic; there's no real
upper or lower bound, but as n grows, the value becomes "nearer"
the ideal value. The Wikipedia's not really a reference work,
but that's always how I've seen it used informally as well.


Formally, big-O is an asymptotic upper bound, and big-Omega an
asymptotic lower bound. Alf just confused them. But in
practice, all we're concerned about is the "asymptotic", and the
standard doesn't use big-O, or "asymptotic" anything; it
normally defines a strict upper bound (with no asymptotic
qualification).
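For reference, the usual textbook definitions (my wording, not the
standard's):

f(n) is O(g(n))     iff there exist c > 0 and n0 such that
                    f(n) <= c*g(n) for all n >= n0;
f(n) is Omega(g(n)) iff there exist c > 0 and n0 such that
                    c*g(n) <= f(n) for all n >= n0;
f(n) is Theta(g(n)) iff it is both O(g(n)) and Omega(g(n)).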
 

Alf P. Steinbach

* James Kanze:
FWIW, the Wikipedia says that it's asymptotic; there's no real
upper or lower bound, but as n grows, the value becomes "nearer"
the ideal value. The Wikipedia's not really a reference work,
but that's always how I've seen it used informally as well.


Formally, big-O is an asymptotic upper bound, and big-Omega an
asymptotic lower bound. Alf just confused them.

Yeah, you're right about the confusion; I was tired. Still am, but I've now
got some coffee, so it's no longer all Greek to me.

And thanks again for correcting that, James! :)

HOWEVER... Hark... Hm... Well, it was a bit more than just a confusion of
upper and lower bound. Big Theta is actually what I described as "omega"
(modulo the lacking absolute value invocations), and it was Theta that I
meant, because that's the relevant one. Theta is a combination of lower and
upper bound, not just a lower bound: f is Theta(g) implies f is Omega(g) and
f is O(g). And so, lacking Greek keyboards, in Usenet postings we often just
write big Oh when we mean big Theta. If an algorithm is said, in a Usenet
posting, to be O(n^2), that doesn't mean that n^2 is just an upper bound; it
means that the algorithm will asymptotically use quadratic time, i.e. that
it's Theta(n^2). But the use of O for Theta in Usenet postings doesn't mean
that there's any such notation in the standard. So that was a *third*
confusion, which is why I noted in my follow-up to you that my posting was
pure idiocy. It really was. I don't understand why the machine doesn't stop
me from making such postings.

But in
practice, all we're concerned about is the "asymptotic", and the
standard doesn't use big-O, or "asymptotic" anything; it
normally defines a strict upper bound (with no asymptotic
qualification).

Yah.


Cheers,

- Alf
 

Kaz Kylheku

* Alf P. Steinbach:
Yeah, you're right about the confusion; I was tired. Still am, but I've now
got some coffee, so it's no longer all Greek to me.

Wow, we can print this thread and frame it!

What's it like to be corrected by a well-known complete idiot? :)
 

James Kanze

James Kanze wrote:
There are 41 places in the current working draft where
complexity is specified with big-O. Most of them are in Clause
23 (Containers).

And most of those concern unordered associative containers?

A quick search in C++03 turned up only a single instance of "O("
(but perhaps others are hidden by some intermediate formatting
information?), in the general description of heap operations.
Most of the instances in the draft seem to concern forward_list
and the unordered associative containers. Was there some reason
for choosing a different way of describing them, or is this just
an artifact of who wrote the proposals?
 
