Singleton Pattern and Thread Safety


Chris M. Thomasson

James Kanze said:
You must know something I don't: the documentation of the Sparc
architecture definitely says that it isn't guaranteed;

Here is my initial question when I was joining up with the Sun CoolThreads
Contest:

https://coolthreads.dev.java.net/servlets/ProjectForumMessageView?forumID=1797&messageID=11068

Here is the response I got from the Sun Engineering department:

https://coolthreads.dev.java.net/servlets/ProjectForumMessageView?messageID=11460&forumID=1797


BTW, SPARC can run in three memory coherency modes (TSO, PSO, RMO):

http://en.wikipedia.org/wiki/Memory_ordering

http://www.rdrop.com/users/paulmck/scalability/paper/ordering.2007.09.19a.pdf
(read all).

Heck, SPARC in RMO mode still handles data-dependent loads!

:^)



I've also heard that it fails on Itaniums,

AFAICT, `read_barrier_depends()' is a NOP on an Itanium. You have to consult
Linux source code.



and that it is uncertain on
80x86. (My own reading of the Intel documentation fails to turn
up a guarantee, but I've not seen everything.)

x86 is basically TSO. `read_barrier_depends()' is a NOP on all Intel
architectures I have come across.
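
For readers following the barrier discussion: the property at issue is
whether a load that is data-dependent on a previously loaded pointer needs
an explicit read barrier before it can safely see the pointee. A minimal
sketch of the publish/consume pattern in C++11 atomics terms (names are
illustrative, not code from any poster; memory_order_consume corresponds
roughly to what `read_barrier_depends()' covers):

#include <atomic>

struct Node { int payload; };

std::atomic<Node*> g_node(0);

// Producer: construct the object completely, then publish the pointer
// with release semantics so the payload store cannot become visible
// after the pointer store.
void publish()
{
    Node* n = new Node;
    n->payload = 42;
    g_node.store(n, std::memory_order_release);
}

// Consumer: the load of n->payload is data-dependent on the load of
// the pointer.  memory_order_consume models the dependency ordering
// being discussed; on x86 and on SPARC in TSO/PSO it costs nothing,
// and in practice many compilers simply strengthen it to acquire.
int consume()
{
    Node* n = g_node.load(std::memory_order_consume);
    return n ? n->payload : -1;
}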
 

James Kanze

On 13/12/2010 11:45, James Kanze wrote:
[...]
As you are doing it wrong, it is neither defensive programming
nor sound engineering.
Again, I'd suggest you read my code very, very carefully. (I'll
admit that it's not immediately obvious as to why it works. But
it's been reviewed several times by leading experts, and never
found wanting.)
You are doing it wrong. I say again: if you have more than one of your
leaking singletons defined in more than one TU, the construction order of
your leaking singletons is unspecified.
This is your code:
namespace {
    Singleton* ourInstance = &Singleton::instance();

    Singleton&
    Singleton::instance()
    {
        if (ourInstance == NULL)
            ourInstance = new Singleton;
        return *ourInstance;
    }
}
The ourInstance *pointer* is a global object (albeit with internal
linkage) which you are initializing with a dynamic allocation wrapped in
a function.

The ourInstance pointer is *not* a global object. No code
outside the above can access it. (There is a simple error in
the posted code: the function Singleton::instance() shouldn't be
in unnamed namespace, since the class Singleton obviously isn't
in unnamed namespace.)
If you have more than one such initialization in more than one
TU, the order of the initializations is unspecified.

Yes, but it doesn't matter:

-- ourInstance is initialized to null before any C++ code is
executed (zero initialization).

-- client code cannot access ourInstance; all accesses go
through Singleton::instance().

-- Singleton::instance() checks for null, and initializes the
pointer if necessary, regardless of when it is called. This
is the classic implementation of the singleton pattern, and
ensures that there is no order of initialization problem
when Singleton is used.

-- The "formal" initialization of ourInstance (after the = sign
in its definition ensures that Singleton::instance() is
called at least once during static initialization, and thus
that the pointer is initialized before entering main (in
practice, at least), and thus normally before threading
starts. This is only necessary for thread safety---once the
pointer has been correctly initialized, the code is thread
safe without any synchronization.
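
A minimal compilable rendering of the scheme just described, with
instance() defined outside the unnamed namespace as noted above (the
class body is illustrative, not taken from the thread):

#include <cstddef>

class Singleton
{
public:
    static Singleton& instance();
private:
    Singleton() {}                      // construction only via instance()
    Singleton(const Singleton&);        // not copyable (pre-C++11 style)
    Singleton& operator=(const Singleton&);
};

namespace {
    // Zero-initialized before any constructor runs; the dynamic
    // initializer then forces instance() to be called before main(),
    // and so (normally) before any threads exist.
    Singleton* ourInstance = &Singleton::instance();
}

Singleton&
Singleton::instance()
{
    if (ourInstance == NULL)
        ourInstance = new Singleton;
    return *ourInstance;
}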
 

James Kanze

On 13/12/2010 18:47, Leigh Johnston wrote:

[...]
Just as confirmation that what I am saying is correct I created two
singletons using your method in the files a.cpp and b.cpp and here is
the result:
leigh@leigh-VirtualBox:~/dev/singleton$ g++ a.cpp b.cpp
leigh@leigh-VirtualBox:~/dev/singleton$ ./a.out
singleton B constructed
singleton A constructed
leigh@leigh-VirtualBox:~/dev/singleton$ g++ b.cpp a.cpp
leigh@leigh-VirtualBox:~/dev/singleton$ ./a.out
singleton A constructed
singleton B constructed
leigh@leigh-VirtualBox:~/dev/singleton$ g++ a.cpp b.cpp
leigh@leigh-VirtualBox:~/dev/singleton$ ./a.out
singleton B constructed
singleton A constructed
leigh@leigh-VirtualBox:~/dev/singleton$

That's not the point. The point is that it is impossible to use
the singleton before it has been constructed.
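
(The contents of a.cpp and b.cpp are not shown in the thread; a
hypothetical pair of translation units that would reproduce this kind of
output, using the scheme under discussion with a constructor that prints,
could be:)

// a.cpp (hypothetical)
#include <iostream>

class SingletonA
{
public:
    static SingletonA& instance();
private:
    SingletonA() { std::cout << "singleton A constructed\n"; }
};

namespace {
    SingletonA* instanceA = &SingletonA::instance();
}

SingletonA& SingletonA::instance()
{
    if (instanceA == 0)
        instanceA = new SingletonA;
    return *instanceA;
}

int main() {}

// b.cpp (hypothetical) is identical with "A" replaced by "B" and
// without main().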
 

James Kanze

Here is my initial question when I was joining up with the Sun
CoolThreads Contest:

Here is the response I got from the Sun Engineering department:

Which concerns one particular implementation of the Sparc
architecture, IIUC.
BTW, SPARC can run in three memory coherency modes. TSO, PSO, RMO:

I know. And the extra synchronization is only necessary in RMO.
Most current Sparc processors don't use RMO, so on most current
Sparc processors, you don't need the #LoadLoad. The key words
there, however, are "most" (which may actually be all, but I've
seen nothing which guarantees it), and "current" (which very
definitely means that you may run into problems on future
Sparcs---if such things ever exist). IIUC, PSO is usual,
however.
Heck, SPARC in RMO mode still handles data-dependent loads!
AFAICT, `read_barrier_depends()' is a NOP on an Itanium. You
have to consult Linux source code.

Could it be another case where current processors don't actually
take advantage of all of the freedom allowed by the processor
architecture documents?
x86 is basically TSO. `read_barrier_depends()' is a NOP on all Intel
architectures I have come across.

Yes. On rereading the document you posted, I think you're
right.

BTW: I wouldn't quote Linux sources as a reference. OS's have
been known to have bugs. In the end, the only reference is the
official architecture document. (Chips have also been known to
have bugs, but generally less so than the OS. And of course,
you have to base yourself on something.)
 

Keith H Duggar

Yes.  On rereading the document you posted, I think you're
right.

BTW: I wouldn't quote Linux sources as a reference.  OS's have
been known to have bugs.  In the end, the only reference is the
official architecture document.

Interesting that you say that. So then when I posted data from
an official architecture document proving that integer division
is slower than + - * (and that you were wrong even specifically
about SPARC):

http://groups.google.com/group/comp.lang.c++/msg/5467503d1fb3e411

it was definitively correct? Ok, why did you ignore the post? Or
do you only believe engineers when they write about concurrency
ops? Do you now accept that integer division is slow (compared
to + - * of course)?

KHD
 

gwowen

Shrinking violets can use message filters if they find profanities
emotionally crippling.

Actually, I find frequent and gratuitous profanity to be indicative of
poor reasoning, poor social skills and, usually, the first sign of
someone who knows they have lost the technical argument. Once you
lapse into it, it's very hard to take you seriously.

As the old lawyers' truism goes: "If the facts are on your side, bang
on the facts. If the law is on your side, bang on the law. If neither
the facts nor the law is on your side, bang on the table."
 

James Kanze

Interesting that you say that. So then when I posted data from
an official architecture document proving that integer division
is slower than + - * (and that you were wrong even specifically
about SPARC):

it was definitively correct? Ok, why did you ignore the post? Or
do you only believe engineers when they write about concurrency
ops? Do you now accept that integer division is slow (compared
to + - * of course)?

If you'll recall, at the end of the thread I did. (But the
issues aren't totally identical. In the threading question, it
is a binary question; when speaking about the speed of an
instruction, it's less black and white, because the processor
can overlap so much.)
 

Keith H Duggar

Stop trying to give this leaky singleton a proper name; it is a poor
method.  I googled "gamma singleton" and the only match I found was for
a version which did not instantiate the singleton before main()
(different to what Kanze was doing).

LOL ... you have to google to figure out that gamma == Erich Gamma,
the first author of a book that you know very well? I.e. the OO (which
you worship) "Design Patterns" (of which you know one or two) book
commonly referred to as GoF? Damn dude, learn to connect the dots.
You can also instantiate Meyers Singletons before creating any
additional threads so your argument is invalid.

And one can impose dynamic initialization order in spite of Kanze's
namespace variable. His variable just ensures it does at least get
created during the dynamic initialization phase. It neither imposes
an order nor prevents one from being imposed.
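
A hypothetical illustration of that last point (not code from the
thread): if A needs B, A's constructor simply asks for it, and B is then
constructed first, whichever of the two dynamic initializers happens to
run first (e.g. when A and B live in different TUs).

#include <iostream>

class B
{
public:
    static B& instance();
private:
    B() { std::cout << "B constructed\n"; }
};

class A
{
public:
    static A& instance();
private:
    A() { B::instance(); std::cout << "A constructed\n"; }  // A depends on B
};

namespace {
    A* theA = &A::instance();
    B* theB = &B::instance();
}

A& A::instance() { if (theA == 0) theA = new A; return *theA; }
B& B::instance() { if (theB == 0) theB = new B; return *theB; }

int main() {}   // always prints "B constructed" before "A constructed"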

KHD
 

Ian Collins

Kanze's method to create the singletons before main() was to avoid
threading issues (which is what the OP wants). Kanze's method is leaky
and has unspecified construction order, end of.

It is not leaky in any of the situations I have used it.
 

Ian Collins

If the singleton is not deleted on program exit then it is a leak.

In some cases, the "program" didn't exit!

In others, the OS cleans them up on program exit, so there is no leak.
If the instance pointers were reassigned there would be, but they weren't.
 

Ian Collins

Why are you being so stubborn?

Because I'm correct.
Don't like the idea that the code you
have written is wrong? Or that your fundamental understanding of what a
"leak" is is wrong?

I didn't write the code. It was a team effort. I get the impression
you haven't worked as part of a large project team, developing code to
external requirements.
The OS cleans the *leaks* up on program exit. The *leaks* still existed.

It does, but memory allocated for the singletons isn't leaked, it's in
use. This was certainly the case for the embedded applications, which
run until they are powered off.
 

Keith H Duggar

I meant to say:

"Meyers Singletons do not require any explicit dependency code at the
site of their definition. The dependency *can be made* explicit at the
call site instead (for example from user defined constructors)."

I am also thinking along the lines of:

singleton_a::singleton_a() { third_party(this); }
singleton_b::singleton_b() { third_party(this); }

and you want the following two programs to behave differently:

int main()
{
    singleton_a::instance();
    singleton_b::instance();
}

int main()
{
    singleton_b::instance();
    singleton_a::instance();
}

This is true. Calling ::instance during the dynamic initialization
phase (Kanze's assignment to a file/namespace scope variable) does
prevent such customization of construction order after said phase,
such as in main. That and the forced instantiation are both strong
criticisms in my opinion, especially for libraries.
The construction order of Kanze's method (assuming we introduce
dependencies between all the singletons) does not provide this
flexibility. I have to admit that this is probably a rare
use-case though.

Except that once dependencies between the singletons (assuming you
mean in constructors as we've discussed) are in place, one cannot
circumvent the partial ordering (dependency ordering) whether they
are Meyers singletons or not.
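
To make the quoted singleton_a / singleton_b example self-contained,
here is a sketch with Meyers singletons and a stand-in third_party()
(the names come from the quote; the bodies are illustrative):

#include <iostream>

void third_party(void* p)            // stand-in for the real registration call
{
    std::cout << "registered " << p << '\n';
}

class singleton_a
{
public:
    static singleton_a& instance()
    {
        static singleton_a theInstance;   // Meyers singleton
        return theInstance;
    }
private:
    singleton_a() { third_party(this); }
};

class singleton_b
{
public:
    static singleton_b& instance()
    {
        static singleton_b theInstance;   // Meyers singleton
        return theInstance;
    }
private:
    singleton_b() { third_party(this); }
};

int main()
{
    // With Meyers singletons the call-site order in main() decides
    // which object registers first; with the assignment-in-namespace
    // scheme that order is already fixed during dynamic initialization.
    singleton_a::instance();
    singleton_b::instance();
}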

KHD
 

Michael Doubez

     [...]
This is your code:
namespace {
Singleton* ourInstance =&Singleton::instance();
Singleton&
Singleton::instance()
{
if (ourInstance == NULL)
ourInstance = new Singleton;
return *ourInstance;
}
}
The ourInstance *pointer* is a global object (albeit with internal
linkage) which you are initializing with a dynamic allocation wrapped in
a function. If you have more than such initialization in more than one
TU the order of the initializations is unspecified.
Just as confirmation that what I am saying is correct I created two
singletons using your method in the files a.cpp and b.cpp and here is
the result:
leigh@leigh-VirtualBox:~/dev/singleton$ g++ a.cpp b.cpp
leigh@leigh-VirtualBox:~/dev/singleton$ ./a.out
singleton B constructed
singleton A constructed
leigh@leigh-VirtualBox:~/dev/singleton$ g++ b.cpp a.cpp
leigh@leigh-VirtualBox:~/dev/singleton$ ./a.out
singleton A constructed
singleton B constructed
leigh@leigh-VirtualBox:~/dev/singleton$ g++ a.cpp b.cpp
leigh@leigh-VirtualBox:~/dev/singleton$ ./a.out
singleton B constructed
singleton A constructed
leigh@leigh-VirtualBox:~/dev/singleton$
That's not the point.  The point is that it is impossible to use
the singleton before it has been constructed.

It is the point.  Singletons can do stuff during *construction*.  In
this case the two singletons, whilst not referencing each other, reference
a third object during construction, namely std::cout.

cout gets special treatment: a TU that includes <iostream> is guaranteed
to have the standard streams constructed before the dynamic initialization
of objects defined later in that TU (see the sketch below).
You seem to think that a program can behave differently depending on the
order its source files are built; this is an interesting approach to
software engineering.

This is what happens when something is unspecified by the standard -
such as the order of initialisation of global variables in a program.
Some people actually do solve the initialisation order fiasco by changing
the order the files appear on the linker command line.
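
For reference, the special treatment std::cout gets is essentially the
Schwarz ("nifty") counter idiom: every TU that includes <iostream>
defines a static initializer object whose first construction sets the
streams up, so they are usable even from other dynamic initializers.
A stripped-down sketch of the idiom (illustrative names, not the actual
library code; alignment details glossed over):

#include <new>

struct Stream { /* ... */ };

// --- in the header every client includes ---
extern Stream& global_stream;

struct StreamInitializer
{
    StreamInitializer();     // constructs the stream the first time
    ~StreamInitializer();    // destroys it when the last TU is done
};

static StreamInitializer streamInitializer;   // one per including TU

// --- in one implementation file ---
static int nifty_counter = 0;
static char storage[sizeof(Stream)];          // raw, zero-initialized storage
Stream& global_stream = *reinterpret_cast<Stream*>(storage);

StreamInitializer::StreamInitializer()
{
    if (nifty_counter++ == 0)
        new (storage) Stream;                 // placement-new on first use
}

StreamInitializer::~StreamInitializer()
{
    if (--nifty_counter == 0)
        global_stream.~Stream();
}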
 

Joshua Maurice

If the singleton is not deleted on program exit then it is a leak.

You will be wrong as long as you use that definition and assert it's
bad a priori. The rest of us program to business requirements and use
cases, and "avoid a Leigh Johnston memory leak at all costs" has never
been a real business requirement nor use case.
 

Ian Collins

You will be wrong as long as you use that definition and assert it's
bad a priori. The rest of us program to business requirements and use
cases, and "avoid a Leigh Johnston memory leak at all costs" has never
been a real business requirement nor use case.

Especially when the memory isn't leaked! It's in use and accessible
until the process terminates.
 

Ian Collins

Ah good, an insult. I win.

You have not won.

Microsoft agrees with me on the definition of a memory leak. Given the
following program:

#include <crtdbg.h>   // for _CrtSetDbgFlag and the _CRTDBG_* flags

char* p = new char[4242];

int main()
{
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
}

The following is output on program termination:

Detected memory leaks!
Dumping objects ->
{68} normal block at 0x007C4A20, 4242 bytes long.
Data: < > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD
Object dump complete.
Well valgrind and dbx agree with me:

dbx:
Checking for memory leaks...

Actual leaks report   (actual leaks:    0  total size:    0 bytes)

Possible leaks report (possible leaks:  0  total size:    0 bytes)

Checking for memory use...

Blocks in use report  (blocks in use:   1  total size: 4242 bytes)

     Total  % of  Num of    Avg  Allocation call stack
      Size   All  Blocks   Size
========== ===== ======= ====== =======================================
      4242  100%       1   4242  operator new < operator new[] <
                                 __SLIP.INIT_A < __STATIC_CONSTRUCTOR <
                                 __cplus_fini_at_exit

valgrind:

==2276== HEAP SUMMARY:
==2276== in use at exit: 4,242 bytes in 1 blocks
==2276== total heap usage: 1 allocs, 0 frees, 4,242 bytes allocated
==2276==
==2276== LEAK SUMMARY:
==2276== definitely lost: 0 bytes in 0 blocks
==2276== indirectly lost: 0 bytes in 0 blocks
==2276== possibly lost: 0 bytes in 0 blocks
==2276== still reachable: 4,242 bytes in 1 blocks
==2276== suppressed: 0 bytes in 0 blocks
 

Keith H Duggar

Don't forget that singleton is an anti-pattern as well as a pattern!  I
have very few singletons in my code.  Making singletons easy to write is
possibly a bad thing. :)

Yes, misuse of singletons can be a disaster, a maintenance nightmare.
Have you ever been forced to weed through all the uses of such a "kewl
global variable" just to figure out wtf will happen if there are two
of them, because of course now you need two of them?

The idea that a programmer can know that there /must never ever/ be
more than one instance of a class T is, to be perfectly frank, nearly
laughable.

Now, of course, one may know that at this particular time we need
a /common/ shared instance but this is easily achieved with a non-
intrusive smart "pointer" template such as the singleton template
Leigh gave or the very similar (though having more features)
"common" template of mine. For example my

common<Foo> foo ;

is a kind of smart pointer to a common instance of Foo. These days
I can think of absolutely no, nada, zero reasons why I would want to
pollute a Foo type with intrusive "singleton" scaffolding. If anyone
knows of good reasons for intrusive singletons please enlighten me.
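
The common<> template mentioned above isn't shown in the thread; a rough
sketch of what such a non-intrusive shared-instance wrapper could look
like (purely illustrative, and certainly missing the features of the
real one):

#include <iostream>

// Foo needs no singleton scaffolding of its own; common<Foo> just
// hands out a reference to one shared, lazily created instance.
template <typename T>
class common
{
public:
    T& operator*() const  { return get(); }
    T* operator->() const { return &get(); }
private:
    static T& get()
    {
        static T theInstance;        // shared instance, created on first use
        return theInstance;
    }
};

struct Foo
{
    void hello() { std::cout << "hello from the common Foo\n"; }
};

int main()
{
    common<Foo> foo;                 // "a kind of smart pointer"
    foo->hello();

    Foo separate;                    // and Foo can still be instantiated
    separate.hello();                // on its own, e.g. for testing
}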

KHD

PS. I think singleton is so heavily abused because 1) the name
sounds kewl 2) it constrains usage of the class and even if that
constraint is useless programmers love to naively force their will
on other programmers 3) it's famous so you can throw it around as
a buzzword and seem smart 4) it also happens to provide a global
variable replacement and one that has been /approved/ by all the
kewl kids 5) you are mocked as a "C" programmer if you are honest
and just use a plain global variable (or free function).
 

gwowen

It is a leak.  If it wasn't a leak the OS wouldn't have anything to
deallocate during process cleanup.

If one calls exit() from a function other than main(), would you
consider the stack space used by main() to be leaked? What about
abort()? If not, what's the qualitative difference?

Having said that, while memory is (almost always) cleaned up by the
OS, it may be more common for non-memory resources to be leaked via
this method.
 
