Kai-Uwe Bux
Frank said: Kai-Uwe Bux, I downloaded a recent GNU g++ STL release and here is
the vector source for your STL chaining implementation of the 2D matrix:
/**
 *  @brief  Subscript access to the data contained in the vector.
 *  @param  n  The element for which data should be accessed.
 *  @return  Read/write reference to data.
 *
 *  This operator allows for easy, array-style, data access.
 *  Note that data access with this operator is unchecked and
 *  out_of_range lookups are not defined. (For checked lookups
 *  see at().)
 */
reference operator[](size_type __n) { return *(begin() + __n); }

/**
 *  Returns a read/write iterator that points to the first element in the
 *  vector. Iteration is done in ordinary element order.
 */
iterator begin() { return iterator(_M_start); }

/**
 *  @brief  Provides access to the data contained in the vector.
 *  @param  n  The element for which data should be accessed.
 *  @return  Read/write reference to data.
 *
 *  This function provides for safer data access. The parameter is first
 *  checked that it is in the range of the vector. The function throws
 *  out_of_range if the check fails.
 */
reference at(size_type __n)
{ _M_range_check(__n); return (*this)[__n]; }

void _M_range_check(size_type __n) const {
  if (__n >= this->size())
    __throw_out_of_range("vector");
}
As you can see, this is identical to the MSVC7.1 STL code I posted the
other day. So, your argument about STL compiler differences doesn't
apply to this case.
I do not recall referring to "STL compiler" differences. All I said was that
one should not look at the source code and jump to conclusions about the
machine code a compiler will generate from it.
Yes, I agree each vendor uses compiler
optimizations. The MSVC7.1 STL header files list all the compiler
optimizations. As for your tests, I haven't run them yet. At first
glance, the matrices in your test are not large
Feel free to change the sizes. I did so. Results don't change.
and it appears you are
not using the standard linear algebra / matrix benchmarks, which would
be a better test for random access of 2D matrices.
May I remind you that the point in question is *not* the performance of
these types in linear algebra applications, but simply whether a
double indirect memory access in std::vector< std::vector< T > > is less
efficient than in T**. If the point were not this narrow, the code snippets
you provided would not be significant anyway. For this narrow question, I
think the test code fits. One could also test different orders of accessing
the entries to get a more detailed picture, but that is all I would add.
Just run some variations of the code.
However, if you feel inclined to provide the necessary arithmetic operations
based on the two data structures to run linear algebra benchmarks, feel
free to do so. I do not think it would be worthwhile. As I pointed out
elsewhere, there are high-performance linear algebra libraries out there.
Best
Kai-Uwe Bux