Singleton_pattern and Thread Safety


Chris M. Thomasson

[...]
They don't in my workplace.

I guess the best you can do is abstract a non-portable atomic
operations/memory barrier API behind a C++0x interface. Luckily, somebody
already did that for us:

http://www.stdthread.co.uk/

But I think even this still might be able to break with very aggressive
link-time optimizations...
 

Ebenezer

They don't in my workplace.

With an online service (code generator) it's OK to
write non-portably as long as you're happy with the
platform you're on and are convinced you've found the
platform of your dreams. I'm on Linux/Intel and am not
sure it is the platform of my dreams. I still haven't
found what I'm looking for, but think it's possible to
find such a platform eventually.


Brian Wood
Ebenezer Enterprises
http://webEbenezer.net
 

James Kanze

On 10/12/2010 09:52, James Kanze wrote:
Pallav singh wrote:
[...]
Note that the above still risks order of destruction issues;
it's more common to not destruct the singleton ever, with
something like:
namespace {
    Singleton* ourInstance = &Singleton::instance();

    Singleton&
    Singleton::instance()
    {
        if (ourInstance == NULL)
            ourInstance = new Singleton;
        return *ourInstance;
    }
}
(This solves both problems at once: initializing the variable
with a call to Singleton::instance and ensuring that the
singleton is never destructed.)
James "Cowboy" Kanze's OO designs includes objects that are never
destructed but leak instead?
And where do you see a leak?
Is that a serious question?

Yes. There's no memory leak in the code I posted. I've used it
in applications that run for years, without running out of
memory.
The only real difference between the two programs below is the amount of
memory leaked:
int main()
{
    int* p = new int;
}

int main()
{
    int* p = new int;
    p = new int;
}

Arguably, there's no memory leak in either. A memory leak
results in memory use that increases over time. That's the usual
definition of "leak" in English, applied to memory, and it's the
only "useful" definition; if the second line in the above were
in a loop, you would have a leak.

Other definitions of memory leak are possible---I've seen people
claim that you can't have a memory leak in Java because it has
garbage collection, for example. But such definitions are of no
practical use, and don't really correspond to the usual meaning.
A singular memory leak (one that is not repeated so doesn't
consume more and more memory as a program runs) is still
a memory leak.
I will ignore the predictable, trollish part of your reply.

In other words, you know that your position is indefensible, so
prefer to resort to name calling.
 

James Kanze

[...]
That's really the only acceptable solution. (And to answer
Leigh's other point: you don't use singletons in plugins.)
Sorry I was worrying over nothing; of course the C++ compiler
will not reorder a pointer assignment to before the creation
of the object it points to...

Really? What makes you think that? (And of course, even if the
compiler doesn't reorder, the hardware might.)

The code posted above is broken on so many counts, it's hard to
know where to start. It is basically just the double checked
locking anti-pattern, known to not work. (See, for example,
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf.)
With, in addition, the fact that there doesn't seem to be
anything which guarantees that sLock is constructed before it is
used.
no volatile needed! :)

In C++, the volatile wouldn't buy you anything; some versions of
Visual Studios do extend the meaning so that it could be used,
but this was (I believe) a temporary solution; presumably,
future versions will adopt the definition of volatile adopted in
C++0x.
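In C++0x itself, the portable spelling of these semantics is std::atomic rather than volatile. A minimal sketch of the acquire/release pairing (the names here are illustrative, not from any post above):

```cpp
#include <atomic>

// std::atomic gives the well-defined cross-thread visibility that
// volatile does not: a release store pairs with an acquire load.
std::atomic<int> g_ready(0);

void set_ready() {
    // release: writes made before this store become visible to any
    // thread whose acquire load observes the new value
    g_ready.store(1, std::memory_order_release);
}

bool is_ready() {
    // acquire: pairs with the release store above
    return g_ready.load(std::memory_order_acquire) == 1;
}
```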
 

James Kanze

On 11/12/2010 03:12, Joshua Maurice wrote:
Normally I instantiate all my singletons up front
(before threading) but I decided to quickly roll a new
singleton template class just for the fun of it
(thread-safe Meyers Singleton):
namespace lib
{
    template <typename T>
    class singleton
    {
    public:
        static T& instance()
        {
            if (sInstancePtr != 0)
                return static_cast<T&>(*sInstancePtr);
            { // locked scope
                lib::lock lock1(sLock);
                static T sInstance;
                { // locked scope
                    lib::lock lock2(sLock); // second lock should emit memory barrier here
                    sInstancePtr = &sInstance;
                }
            }
            return static_cast<T&>(*sInstancePtr);
        }
    private:
        static lib::lockable sLock;
        static singleton* sInstancePtr;
    };

    template <typename T>
    lib::lockable singleton<T>::sLock;
    template <typename T>
    singleton<T>* singleton<T>::sInstancePtr;
}
Even though a memory barrier is emitted for a specific
implementation of my lockable class it obviously still
relies on the C++ compiler not re-ordering stores across
a library I/O call (acquiring the lock) but it works fine
for me at least (VC++). I could mention volatile but
I better not as that would start a long argument. Roll
on C++0x.
If I'm reading your code right, on the fast path, you
don't have a barrier, a lock, or any other kind of
synchronization, right? If yes, you realize you've coded
the naive implementation of double checked? You realize
that it's broken, right? Have you even read
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
? To be clear, this has undefined behavior according to
the C++0x standard as well.
I am aware of double checked locking pattern yes and this
is not the double checked locking pattern (there is only
one check of the pointer if you look). If a pointer
read/write is atomic it should be fine (on the
implementation I use it is at least).
You've hidden the second check with the static keyword.
Example: Consider:
SomeType& foo()
{
    static SomeType foo;
    return foo;
}
For a C++03 implementation, it's likely implemented with something
like:
SomeType& foo()
{
    static bool b = false; /* done before any runtime execution,
                              stored in the executable image */
    static char alignedStorage[sizeof(SomeType)]; /* with some magic
                                                     for alignment */
    if ( ! b)
    {
        new (alignedStorage) SomeType();
        b = true;
    }
    return * reinterpret_cast<SomeType*>(alignedStorage);
}
That's your double check.
For C++0x, it will not be implemented like that. Instead, it
will be implemented in a thread-safe way that makes your
example entirely redundant.
The problem with the traditional double checked locking
pattern is twofold:
1) The "checks" are straight pointer comparisons and for the second
check the pointer may not be re-read after the first check due to
compiler optimization.

That's not correct. Since there is a lock between the two
reads, the pointer must be reread.

One major problem with both the traditional double checked
locking and your example is that a branch which finds the
pointer not null will never execute any synchronization
primitives. Which means that there is no guarantee that it will
see a constructed object---in the absence of synchronization
primitives, the order of writes in another thread is not
preserved (and in practice will vary on most high end modern
processors).

You've added to the problem by not reading the pointer a second
time. This means that two threads may actually try to construct
the static object. Which doesn't work with most compilers today
(but will be guaranteed in C++0x, I think).

Finally, of course, if the instance function is called from the
constructor of a static object, there's a very good chance that
sLock won't have been constructed. (Unix supports static
construction of mutexes, but as far as I know, Windows doesn't.)
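For what it's worth, C++0x closes this particular gap portably: std::mutex has a constexpr constructor, so a namespace-scope mutex is constant-initialized and usable even while other static objects are still being dynamically initialized. A sketch, with illustrative names:

```cpp
#include <mutex>

// Constant-initialized before any dynamic initialization runs, so it
// is safe to lock from constructors of other static objects.
static std::mutex sLock;

int g_count = 0;

void increment() {
    std::lock_guard<std::mutex> guard(sLock); // RAII lock/unlock
    ++g_count;
}
```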
2) The initialization of the pointer may be re-ordered by the CPU to
happen before the initialization of the singleton object is complete.
I think you are confusing the checking issue. I am acquiring
a lock before this hidden check of which you speak is made and
this check is not the same as the initial fast pointer check
so issue 1 is not a problem.

I think you're missing the point that order is only preserved if
*both* threads synchronize correctly. Your lock guarantees
the order the writes are emitted in the writing thread, but does
nothing to ensure the order in which the writes become visible
in other threads.
As far as issue 2 is concerned my version (on VC++ at least) is solved
via my lock primitive which should emit a barrier on RAII construction
and destruction and cause VC++ *compiler* to not re-order stores across
a library I/O call (if I am wrong about this a liberal sprinkling of
volatile would solve it).
I should have stated in the original post that my solution is not
portable as-is but it is a solution for a particular implementation
(which doesn't preclude porting to other implementations). :)

There are definitely implementations on which it will work: any
single core machine, for example. And it's definitely not
portable: among the implementations where it will not work, today,
are Windows, Linux and Solaris, at least when running on high
end platforms.
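For reference, double checked locking can be written correctly against the C++0x memory model: an acquire load on the fast path pairs with a release store made under the lock, so a thread that sees a non-null pointer also sees the fully constructed object. This is only a sketch (class and member names are invented, not Leigh's code):

```cpp
#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton& instance() {
        Singleton* p = sInstance.load(std::memory_order_acquire); // fast path
        if (p == 0) {
            std::lock_guard<std::mutex> guard(sMutex);
            p = sInstance.load(std::memory_order_relaxed); // second check
            if (p == 0) {
                p = new Singleton;
                sInstance.store(p, std::memory_order_release); // publish
            }
        }
        return *p;
    }
    int value() const { return 42; }
private:
    Singleton() {}
    static std::atomic<Singleton*> sInstance;
    static std::mutex sMutex;
};

std::atomic<Singleton*> Singleton::sInstance(0);
std::mutex Singleton::sMutex;
```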
 

James Kanze

Hmm, I think I see why I might need the first barrier: is it
due to loads being made from the singleton object before the
pointer check causing problems for *clients* of the
function? any threading experts care to explain?

Basically, the only architecture out there which requires
a data-dependant acquire barrier after the initial atomic load
of the shared instance pointer is a DEC Alpha...

You must know something I don't: the documentation of the Sparc
architecture definitely says that it isn't guaranteed; I've also
heard that it fails on Itaniums, and that it is uncertain on
80x86. (My own reading of the Intel documentation fails to turn
up a guarantee, but I've not seen everything.)
 

James Kanze

On 11/12/2010 18:47, Chris M. Thomasson wrote: [...]
Thanks for the info. At the moment I am only concerned with
IA-32/VC++ implementation which should be safe.
FWIW, an atomic store on IA-32 has implied release memory
barrier semantics.

Could you cite a statement from Intel in support of that?
Also, an atomic load has implied acquire semantics. All
LOCK'ed atomic RMW operations basically have implied full
memory barrier semantics:

In particular, §8.2.3.4, which specifically states that "Loads
may be reordered with earlier stores to different locations".
Which seems to say just the opposite of what you are claiming.
Also, latest VC++ provides acquire/release for volatile load/store
respectively:
So, even if you port over to VC++ for X-BOX (e.g., PowerPC), you will get
correct behavior as well.

Provided he uses volatile on the pointer (and uses the classical
double checked locking pattern, rather than his modified
version).
Therefore, I don't think you even need the second lock at all.
If you are using VC++ you can get away with marking the global
instance pointer variable as being volatile. This will give
release semantics when you store to it, and acquire when you
load from it on Windows or X-BOX, Itanium...

IIUC, these guarantees were first implemented in VS 2010.
(They're certainly not present in the generated code of the
versions of VC++ I use, mainly 2005.)

I'm also wondering about their permanence. I know that Herb
Sutter presented them to the C++ committee with the suggestion
that the committee adopt them. After some discussion, he more
or less accepted the view of the other members of the committee,
that it wasn't a good idea. (I hope I'm not misrepresenting his
position---I was present during some of the discussions, but
I wasn't taking notes.) Given the dates, I rather imagine that
the feature set of 2010 was already fixed, and 2010 definitely
implements the ideas that Herb presented. To what degree
Microsoft will feel bound to these, once the standard is
officially adopted with a different solution, I don't know (and
I suspect that no one really knows, even at Microsoft).
 

Michael Doubez

    [...]

That's really the only acceptable solution.  (And to answer
Leigh's other point: you don't use singletons in plugins.)

Nitpicking.
A DLL plugin might use/define a singleton but AFAIK nobody said you
have to use the same kind of singleton everywhere.
 

James Kanze

On 11/12/2010 18:47, Chris M. Thomasson wrote: [...]
Thanks for the info. At the moment I am only concerned
with IA-32/VC++ implementation which should be safe.
FWIW, an atomic store on IA-32 has implied release memory
barrier semantics. Also, an atomic load has implied acquire
semantics. All LOCK'ed atomic RMW operations basically have
implied full memory barrier semantics:
http://www.intel.com/Assets/PDF/manual/253668.pdf
(read chapter 8)
Also, latest VC++ provides acquire/release for volatile load/store
respectively:
http://msdn.microsoft.com/en-us/library/12a04hfd(v=VS.100).aspx
(read all)
So, even if you port over to VC++ for X-BOX (e.g., PowerPC),
you will get correct behavior as well.
Therefore, I don't think you even need the second lock at
all. If you are using VC++ you can get away with marking the
global instance pointer variable as being volatile. This
will give release semantics when you store to it, and
acquire when you load from it on Windows or X-BOX,
Itanium...
Or, you know, you could just do it "the right way" the first time and
put in all of the correct memory barriers to avoid undefined behavior
according to the C++ standard in order to, well, avoid undefined
behavior.

Until C++0x becomes reality, there is no "right way" in C++.
One reasonably portable way of getting memory barriers is to use
explicit locks; this will have a run-time impact. (Chris and
I have discussed this in the past.) Whether that run-time
impact is significant is another question---roughly speaking
(IIRC), Chris has developed a solution that will use one less
barrier than a classical mutex lock (which requires a barrier
when acquiring the lock, and another when freeing it). In the
absolute, a barrier is "expensive" (the equivalent of 10 or more
normal instructions?), but a lot depends on what else you're
doing; I think that in most cases, the difference will be lost
in the noise.
It's not like it will actually put in a useless no-op when
using the appropriate C++0x atomics as a normal load on that
architecture apparently has all of the desired semantics. Why write
unportable code whose correctness you have to prove from arch
manuals when you can write portable code whose correctness you can
prove from the much simpler C++0x standard?
Moreover, are you ready to say that you can foresee all possible
compiler, linker, hardware, etc., optimizations in the future which
might not exist yet, and you know that they won't break the code?
Sure, the resultant assembly output is correct at the moment according
to the x86 assembly docs, but that is no guarantee that the C++
compiler will produce that correct assembly in the future. It could
implement cool optimizations that would break the /already broken/ C++
code. This is why you write to the appropriate standard. When in C++
land, write to the C++ standard.
I strongly disagree with your implications Chris that Leigh is using
good practice with his threading nonsense non-portable hacks,
especially if/when C++0x comes out and is well supported.

If/when C++0x comes out, obviously, you'd want to use it. Until
then, if you really do have a performance problem, you may have
to live with non-portable constructs. (At present, anything
involving threading is non-portable.) Leigh, of course,
disingenuously didn't mention non-portability in his initial
presentation of the algorithm (which contained other problems as
well); Chris is generally very explicit about such issues.
PS: If you are implementing a portable threading library, then
eventually someone has to use the non-portable hardware specifics.
However, only that person / library should have to, not the writer of
what should be portable general purpose code.
PPS: volatile has no place in portable code as a threading primitive
in C or C++. None. It never has. Please stop perpetuating this myth.

Microsoft has extended the meaning of volatile (starting with
VS 2010?) so that it can be used. This is a Microsoft specific
extension (and on the web page Chris cites, they explicitly
present it as such---this isn't the old Microsoft, trying to
lock you in without your realizing it). C++0x will provide
alternatives which should be portable, but until then...
 

James Kanze

On 12/12/2010 01:23, Joshua Maurice wrote:
On 11/12/2010 18:47, Chris M. Thomasson wrote:
[...]
One can instantiate multiple "Meyers Singletons" before creating any
threads to avoid any singleton related threading code. No object
destruction problems.
You continue to assert that a memory leak is bad a priori. The rest of
us in this thread disagree with that claim. Instead, we program to
tangible requirements, such as cost, time to market, meets (business)
use case. We also keep in mind less tangible but still important
considerations, like maintainability and reusability.
You are not listening. If you have multiple singletons using
Mr Kanze's method that "you all" agree with it is unspecified
as to the order of construction of these singletons across
multiple TUs; i.e. the method suffers the same problem as
ordinary global variables; it is no better than using ordinary
global variables modulo the lack of object destruction (which
is shite).

I'd suggest you read the code I posted first. It addresses the
order of initialization issues fully. (But of course, you're
not one to let actual facts bother you.)
Unspecified construction order is anathema to maintainability
as the order could change as TUs are added or removed from
a project.

Unspecified constructor order of variables at namespace scope is
a fact of life in C++. That's why we use the singleton pattern
(which doesn't put the actual variable at namespace scope, but
allocates it dynamically).

Unspecified destructor order of variables with static lifetime
(namespace scope or not) is also a fact of life in C++. That's
why we don't destruct the variable we dynamically allocated.

It's called defensive programming. Or simply sound software
engineering. It's called avoiding undefined behavior, if you
prefer.

[...]
As Chris pointed out the only problem with my version compared
to the version given in document by Meyers and Alexandrescu
that you seem so fond of is the lack of a memory barrier after
the initial fast check but this is only a problem for
a minimal number of CPUs as the load is dependent. If I had
to port my code to run on such CPUs I simply have to add this
extra barrier.

I'm not sure which problems in your code Chris tried to address,
or even how much of your code he actually studied; your code
definitely doesn't work on the machines I use (which includes
some with Intel processors, both under Linux and under Windows)
and the compilers I use.
In the real world people write non-portable code all the time
as doing so is not "incorrect".

Certainly not: until we get C++0x, all multithreaded code is
"non-portable". It's a bit disingenuous, however, to not
mention the limitations before others pointed out what didn't
work.
 

Joshua Maurice

On 11/12/2010 18:47, Chris M. Thomasson wrote:
[...]
Thanks for the info.  At the moment I am only concerned
with IA-32/VC++ implementation which should be safe.
FWIW, an atomic store on IA-32 has implied release memory
barrier semantics.  Also, an atomic load has implied acquire
semantics. All LOCK'ed atomic RMW operations basically have
implied full memory barrier semantics:
http://www.intel.com/Assets/PDF/manual/253668.pdf
(read chapter 8)
Also, latest VC++ provides acquire/release for volatile load/store
respectively:
http://msdn.microsoft.com/en-us/library/12a04hfd(v=VS.100).aspx
(read all)
So, even if you port over to VC++ for X-BOX (e.g., PowerPC),
you will get correct behavior as well.
Therefore, I don't think you even need the second lock at
all. If you are using VC++ you can get away with marking the
global instance pointer variable as being volatile. This
will give release semantics when you store to it, and
acquire when you load from it on Windows or X-BOX,
Itanium...
Or, you know, you could just do it "the right way" the first time and
put in all of the correct memory barriers to avoid undefined behavior
according to the C++ standard in order to, well, avoid undefined
behavior.

Until C++0x becomes reality, there is no "right way" in C++.
One reasonably portable way of getting memory barriers is to use
explicit locks; this will have a run-time impact.  (Chris and
I have discussed this in the past.)  Whether that run-time
impact is significant is another question---roughly speaking
(IIRC), Chris has developed a solution that will use one less
barrier than a classical mutex lock (which requires a barrier
when acquiring the lock, and another when freeing it).  In the
absolute, a barrier is "expensive" (the equivalent of 10 or more
normal instructions?), but a lot depends on what else you're
doing; I think that in most cases, the difference will be lost
in the noise.


It's not like it will actually put in a useless no-op when
using the appropriate C++0x atomics as a normal load on that
architecture apparently has all of the desired semantics. Why write
unportable code whose correctness you have to prove from arch
manuals when you can write portable code whose correctness you can
prove from the much simpler C++0x standard?
Moreover, are you ready to say that you can foresee all possible
compiler, linker, hardware, etc., optimizations in the future which
might not exist yet, and you know that they won't break the code?
Sure, the resultant assembly output is correct at the moment according
to the x86 assembly docs, but that is no guarantee that the C++
compiler will produce that correct assembly in the future. It could
implement cool optimizations that would break the /already broken/ C++
code. This is why you write to the appropriate standard. When in C++
land, write to the C++ standard.
I strongly disagree with your implications Chris that Leigh is using
good practice with his threading nonsense non-portable hacks,
especially if/when C++0x comes out and is well supported.

If/when C++0x comes out, obviously, you'd want to use it.  Until
then, if you really do have a performance problem, you may have
to live with non-portable constructs.  (At present, anything
involving threading is non-portable.)  Leigh, of course,
disingenuously didn't mention non-portability in his initial
presentation of the algorithm (which contained other problems as
well); Chris is generally very explicit about such issues.
PS: If you are implementing a portable threading library, then
eventually someone has to use the non-portable hardware specifics.
However, only that person / library should have to, not the writer of
what should be portable general purpose code.
PPS: volatile has no place in portable code as a threading primitive
in C or C++. None. It never has. Please stop perpetuating this myth.

Microsoft has extended the meaning of volatile (starting with
VS 2010?) so that it can be used.  This is a Microsoft specific
extension (and on the web page Chris cites, they explicitly
present it as such---this isn't the old Microsoft, trying to
lock you in without your realizing it).  C++0x will provide
alternatives which should be portable, but until then...

I'm sorry if I wasn't clear enough. Let me put forward my
proposition:

There seems to be an emerging memory model, whose basics are shared
between C++0x, POSIX, win32, and even Java. It seems like a wise idea
to program to this memory model as this gives the best guarantee of
correctness, maintainability, portability, and so on.

As we don't have C++0x at the moment, and POSIX and win32 are not
fully portable, I would suggest that you use a library which
implements "atomics" following the basic idea of these memory models.
The library can be one which you've downloaded, like Boost, or it can
be one you wrote yourself.

The "non-portable" stuff should be kept in that library, and the rest
of the code should use the portable API, that API which is consistent
with POSIX, win32, and C++0x.

In short, you shouldn't sprinkle your code with volatile. Have
functions implement some of the C++0x / POSIX / win32 semantics, and
implement those functions in terms of volatile. You minimize the code
which needs to be changed when porting. This is preferable in almost
every way to using volatile throughout your code.
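A minimal sketch of that layering; the portable namespace and function names are invented for illustration, and the implementation shown assumes a GCC-style toolchain (the __atomic builtins), which is exactly the part you would swap out per port:

```cpp
// Application code calls this small portable API; only this file
// changes when porting to another compiler or platform.
namespace portable {

inline void store_release(int* addr, int value) {
    // assumed GCC/Clang builtin; the per-platform detail lives here
    __atomic_store_n(addr, value, __ATOMIC_RELEASE);
}

inline int load_acquire(const int* addr) {
    return __atomic_load_n(addr, __ATOMIC_ACQUIRE);
}

} // namespace portable
```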
 

Keith H Duggar

On 10/12/2010 13:59, Fred Zwarts wrote:
[...]
James "Cowboy" Kanze's OO designs includes objects that are never
destructed but leak instead?  Interesting.  What utter laziness
typical of somebody who probably overuses (abuses) the singleton
pattern. Singleton can be considered harmful (use rarely not
routinely).
As far as I can see it does not leak.
Up to the very end of the program the ourInstance pointer keeps
pointing to the object and can be used to access the object.
This is a well known technique to overcome the order of destruction
issue.
Of course it is a memory leak the only upside being it is a singular
leak that would be cleaned up by the OS as part of program termination
rather than being an ongoing leak that continues to consume memory.
It is lazy.  As far as it being a "well known technique" I have
encountered it before when working on a large project with many team
members but that does not justify its use; it was a consequence of
parallel development of many sub-modules with insufficient time set
aside for proper interop design and too much risk associated with
"fixing" it.
Exactly. Programming is an engineering discipline, meaning that one has
to estimate the  risks, costs and benefits. If the "leaked singleton"
approach is 10 times easier to get working correctly and has vanishing
risk of ill side effects, I would go with it regardless if somebody calls
it a leak or not.

Possible approaches:
1- Static initialisation singleton (also known as Meyer singleton)
    + Simple
    + no leak
    - Can create problems due to unspecified destruction order

You are incorrect. The destruction order of Meyers Singletons is specified.

By itself Meyer's Singleton is not sufficient to eliminate
unspecified order of destruction "problems". It must be used
in combination with some convention such as "all objects that
use Singleton must call the Singleton::instance() method in
their constructor" to ensure the Singleton outlives objects
that might use it.
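A sketch of that convention, with invented names: the dependent object touches the singleton in its constructor, so the function-local static is constructed before the dependent object and, because statics are destroyed in reverse order of construction, destroyed after it:

```cpp
class Log {
public:
    static Log& instance() {
        static Log theLog;      // constructed on first call
        return theLog;
    }
    void write(const char*) {}  // stub for illustration
private:
    Log() {}
};

class Worker {
public:
    Worker() {
        Log::instance();        // forces Log to be constructed first...
    }
    ~Worker() {
        Log::instance().write("bye"); // ...so it is destroyed after us
    }
};
```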

This can be difficult to achieve (especially in shops that
abuse Singleton in the first place) hence their temptation to
just throw in the towel and create these "never destroyed"
singletons. (FYI I'm not saying James works in such a shop;
I doubt that he does.)

Regardless, such objects are still a cop out. I have yet to
see any good justification or examples of objects that /must
not be/ properly destroyed at program termination. Though I
cannot claim there are no such objects. Maybe someone can
provide a legitimate example of an object that cannot have
any appropriate destructor called at exit?

KHD

PS. Leigh, save us your automatic "I am well aware ..." bs.
The Royal We don't care whether or not you /appear/ intelligent
in an internet forum. Furthermore what you did or did not know
is virtually irrelevant (except to your insecurity).

PPS. I'm not asking for examples of objects that /need not/ be
destroyed, I'm asking for examples that /must not/ be destroyed.

PPPS. The word is "destroyed" not "destructed" you lemmings.
 

gwowen

WTF?  Troll (****) off.

/Leigh

Seriously, Leigh, if profanity is your response to everyone who
disagrees with you, you only succeed in coming across like a petulant,
hormonal, teenager. Talk like a mature adult, or no-one will take
anything you say seriously.
 

Ebenezer

Certainly not: until we get C++0x, all multithreaded code is
"non-portable".  It's a bit disingenuous, however, to not
mention the limitations before others pointed out what didn't
work.

I'm not entirely sure of your point, but think you are advocating
for portability here. With library code I think portability is
important and work to increase the portability of my code. But
with executable/service code I'm not convinced portability is
very important. What matters more I think is being happily
married to a decent platform. If one finds an excellent platform,
he can work on writing the best code on that platform possible
and rest assured that those working on the platform are ethical
people who will work hard to produce quality products. I'm on
the Linux/Intel platform at this time, but believe I'll move
to a better platform in the days ahead. This article about
airport/airplane security tells of some Israeli companies with
interesting products coming out.

http://current.com/technology/92862...rport-which-could-rumble-a-suicide-bomber.htm

(I believe that's a helpful question to ask people when
flying outside of Israel.)
Perhaps the new and improved platforms being developed
will have Israeli ties. If others have suggestions on where
to find these platforms, I'm interested.


Brian Wood
Ebenezer Enterprises
http://webEbenezer.net
 

Ebenezer

Shrinking violets can use message filters if they find profanities
emotionally crippling.

I hope you will clean up your language here also. Profanity does
nothing to help your cause.
 

James Kanze

I'm not entirely sure of your point, but think you are advocating
for portability here.

I'm not advocating anything, really. All I'm saying is that,
realistically, multithreaded code today will not be 100%
portable.
With library code I think portability is
important and work to increase the portability of my code.

Whether library code or application code, it's best to avoid
non-portable constructs when they aren't necessary, and to
isolate them in a few well defined areas when they are. For
some definition of "portable": I've worked on a lot of projects
where we supposed that floating point was IEEE, and we didn't
isolate the use of double to a few well defined areas:).

In the end, there is no one right answer. The important thing
is to make an educated choice, and document it.
 

James Kanze

On 13/12/2010 11:45, James Kanze wrote:

[...]
The code you posted results in unspecified construction order
of your leaking singletons even though they are dynamically
allocated if they are defined in multiple TUs.

I'd suggest you read it carefully, because it guarantees that
the singleton is constructed before first use.
A fact of life you seem to be ignoring; the order of construction of the
following is specified within a single TU but unspecified in relation to
globals defined in other TUs:
namespace
{
    foo global1;
    foo* global2 = new foo();
    foo global3; // global2 has been fully constructed (dynamically
                 // allocated) before reaching here
}

Certainly. Who ever said the contrary? And what relationship
does this have to any of the singleton implementations we've
been discussing.
Dynamic allocation is irrelevant here; construction order is
unspecified as you are initializing a global pointer with the
result of a dynamic allocation

So you wrap your initialization in a function, and make sure
that the only way to access the pointer is through that
function. The classical (pre-Meyers) singleton pattern, in
fact.
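A sketch of that classical arrangement (the Config name is invented): the pointer is reachable only through the accessor, so the first use from any translation unit triggers construction, and the instance is deliberately never deleted:

```cpp
class Config {
public:
    static Config& instance() {
        if (ourInstance == 0)
            ourInstance = new Config;   // never deleted, by design
        return *ourInstance;
    }
    int version() const { return 1; }
private:
    Config() {}
    static Config* ourInstance;
};

Config* Config::ourInstance = 0;
```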
As you are doing it wrong it is neither defensive programming
nor sound engineering.

Again, I'd suggest you read my code very, very carefully. (I'll
admit that it's not immediately obvious as to why it works. But
it's been reviewed several times by leading experts, and never
found wanting.)
 

Ian Collins

No excuse. It is poor design pure and simple.

Have you ever had to work to a set of requirements?

On one project I worked on we had a very large number of singletons
similar to the one James posted. They didn't have destructors because
there were never destroyed. They were independent, so construction
order wasn't an issue. They had to be constructed before the
application started and still be there when it ended, so file scope was
the place to construct them.

A good design in one context may be a bad one in another; context and
requirements matter.
 
