Own shared_array implementation

red floyd

Hi!

Because there is no shared_array in TR1, I decided to implement my own
using shared_ptr with a custom deleter helper function.

See http://code.google.com/p/peepeeplayer/source/browse/trunk/src/stuff/s....

My question is: Is this implementation OK? Have I done something that
could be possibly dangerous?

What you are looking for is called std::vector<>
 
Syron

On 19.02.2010 22:32, red floyd wrote:
What you are looking for is called std::vector<>

Well, I should have mentioned that I want to store sample data to
which I need very quick access, and I don't need the overhead of the
methods that std::vector provides. I should have looked at
stl_vector.h earlier; now I see that it uses pointers internally.
So my next question: are there big differences between
boost::shared_array and std::vector? I assume Boost would not include
shared_array if the differences were only small.

Again, thanks.
 
Syron

OK, thanks for all replies.
But I think I keep going with my implementation, because of:
1) There are not many re-assignments, so the size is effectively constant.
2) RAII.
3) Less overhead (I only need automatic resource control and fast
member access).

Thanks, again.
 
Syron

On 20.02.2010 00:17, Leigh Johnston wrote:
Sounds dodgy.

1) std::vector has a reserve function allowing you to allocate once up
front if the size is constant.
2) std::vector uses RAII.
3) You are prematurely optimizing; I suspect std::vector is just as
"fast" for member access as your version.

/Leigh

Hmmm... OK, I'll look closer at this, but not now; it's late here. I
think next time I will look more closely at the STL's internal workings.
Thanks.

-- Syron
 
Daniel Pitts

On 20.02.2010 00:17, Leigh Johnston wrote:

Hmmm... OK, I'll look closer at this, but not now; it's late here. I
think next time I will look more closely at the STL's internal workings.
Thanks.

I don't think you need to look at the internal workings, unless you want
to count CPU cycles (good luck with that task anyway). Do a benchmark
with realistic use cases, or at least profile your running application
if performance is an issue.
 
Robert Fendt

3) you are prematurely optimizing, I suspect std::vector is just as "fast"
for member access as your version.

In fact, vector<>::operator[] is usually implemented as a direct
pointer or array operation. You cannot really get much faster
than that. After some comparisons in that regard I now use
vectors even inside image transformation loops. The overhead is
elsewhere: a vector needs a bit more memory than a naked array
(but so does the OP's shared_array), and resizing can be costly.
However, the former is mostly insignificant, and the latter
entirely so, when the size is constant.

Regards,
Robert
 
Robert Fendt

By C-style I meant that newing an array is not much better than using
malloc. std::vector already provides exception safety and other nice
features such as knowing its size and the ability to do reallocations.
shared_ptr<std::vector<T> > also works if you want sharing semantics.

Arrays in C++ are extremely important: as the basis for std::vector. ;-)

Under normal circumstances there is seldom a real reason to use
naked arrays. One possibility would be small tuples of constant
size, where space efficiency and/or allocation time is relevant. But
other than that...

Regards,
Robert
 
Alf P. Steinbach

* Leigh Johnston:
I have
yet to see a convincing argument in favour of using a dynamic array
instead of std::vector. std::vector exists so use it.

A reference-counted shared array is one case where a std::vector doesn't
necessarily provide the required or wished-for guaranteed performance.

It's best wrapped in a suitable class, but as I understand it that's what the OP
is doing.

I haven't looked at the code though.


Cheers & hth.,

- Alf
 
Alf P. Steinbach

* Leigh Johnston:
Use std::tr1::shared_ptr<std::vector<T> > then.

The shared_ptr solution has a couple of issues.

Most immediately obvious: it introduces an extra level of indirection, and an
extra dynamic allocation on creation. That reduces operational efficiency, when
much of the point of a reference-counted array is to improve efficiency.

Perhaps not as obvious, but still in the operational-efficiency domain: when the
ref-counted array is used to implement strings, say, then by managing the buffer
directly, dynamic allocations can be avoided completely for representing string
literals, which is much of what strings do, namely carry what were originally
literals.

Functionality-wise it lacks abstraction, in that you know type-wise that there's
a vector<T> in there, which in turn translates into inefficiency.

For example, consider the Windows API CommandLineToArgvW function. The result is
a pointer to (the first element of) an array of pointers to wchar_t arrays
containing zero-terminated strings, and this result pointer should be
deallocated using the Windows API function GlobalFree. The data can be copied,
but that's inefficient when, with a proper ref-counted array class, you can just
wrap it and then freely pass it around, with client code blissfully unaware of
its origin, its destruction policy, and the kind of internal buffer; all that
the client code needs to know is (mainly) how to index the thing.

And there's more, but in short, there can be many reasonably good reasons to do
manual buffer management, suitably wrapped.

It would have been nice if TR1 and C++0x had supported that by including
boost::intrusive_ptr, but alas, it's not there.


Cheers & hth.,

- Alf
 
Syron

On 23.02.2010 16:16, Leigh Johnston wrote:
The only *extra* overhead compared to the OP's implementation, I should
say. I could live with this overhead, as the pointer dereference could
be cached before calling lots of std::vector methods in a loop, for example.

/Leigh

Well, it seems my last reply wasn't committed, so I'll repeat it: the
pointer is now cached, once in the shared_ptr<T> and then directly
in my class again as a T*. As neither is directly accessible/changeable
(only through the 'reset' methods), it is safe.
And about the 50-byte overhead: mostly I pass the instances by
reference, especially in time-critical sections. On the other hand, I
create "only" about 1000 instances at maximum, so that overhead is
negligible.

-- Syron
 
