What's the connection between objects and threads?


gpderetta

I'm not too sure what you mean by transactional memory.  

As far as I understand it, transactional memory runs under
optimistic concurrency assumptions: concurrent operations are
executed in explicitly marked atomic blocks and run
speculatively. At the end of the atomic block, if the system
detects that no race condition occurred, the transaction
is allowed to commit (i.e. its effects are made visible to
the rest of the system); otherwise it is rolled back and optionally retried.

For example, a thread-safe queue under a TM model would have
its enqueue and dequeue methods each wrapped in an atomic block.

What makes transactional memory very different from other forms
of concurrency control is composability: you can nest atomic
blocks and logically enlarge the transaction scope:

tm_queue a;
tm_queue b;

atomic {
    x = a.deq();
    b.enq(x);
}

This block guarantees that other threads will see x either in queue a
or in queue b (never in both, and never in neither). This sort of
atomic composability is quite a powerful abstraction which is hard
to do generically with lock-based designs.
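
For contrast, here is a minimal sketch (not from the original post) of what the same two-queue move looks like with ordinary lock-based queues in C++11: both mutexes have to be exposed and locked together by the caller, which is exactly the manual composition that an atomic block hides. The locked_queue type and its members are hypothetical.

#include <deque>
#include <mutex>

// Hypothetical lock-based queue that exposes its mutex so that callers
// can compose multi-queue operations into one critical section.
struct locked_queue {
    std::mutex m;
    std::deque<int> q;

    void enq_unlocked(int x) { q.push_back(x); }   // caller must hold m
    int  deq_unlocked()      { int x = q.front(); q.pop_front(); return x; }
};

void move_one(locked_queue& a, locked_queue& b) {
    // std::lock acquires both mutexes without deadlocking on lock order;
    // the scope of the two guards is the hand-rolled "transaction".
    std::lock(a.m, b.m);
    std::lock_guard<std::mutex> la(a.m, std::adopt_lock);
    std::lock_guard<std::mutex> lb(b.m, std::adopt_lock);
    b.enq_unlocked(a.deq_unlocked());
}

Note how the locking discipline leaks into the queue's interface; the atomic block above composes without exposing any of this.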

The problem, of course, is that implementing transactional memory
without hardware support not only requires language support, it is
also quite expensive. I think that current software implementations
of TM are at most half as fast as lock-based designs in the
uncontended case.

In case of many aborts, performance can degrade quickly, so STM might
not scale very well.

So far the only systems which support a form of TM in hardware are
the latest SPARC systems from Sun (by sort-of hijacking the cache
coherency protocol).

Another problem of TM is that in most practical implementations it is
possible to livelock (even if unlikely).

Real experts might be able to correct any errors in my explanation
or explain more.
In general, however, that's the only useful sense of
thread safety for containers which "leak" references to internal
objects.

More importantly: the implementor here has *documented* the
contract which the implementation provides.  And *that* is the key
meaning behind thread safety.  Thread safety is first and
foremost a question of documentation.

agree 100%
 

werasm

Hi

I have to write a multi-threaded program. I decided to take an OO
approach to it. I had the idea to wrap up all of the thread functions
in a mix-in class called Threadable. Then when an object should run
in its own thread, it should implement this mix-in class. Does this
sound like a plausible design decision?

I'm surprised that C++ doesn't have such functionality, say in its
STL. This absence of a thread/object relationship in C++ leads me to
believe that my idea isn't a very good one.

I would appreciate your insights. thanks

We've created a framework that uses commands (GoF command pattern)
to perform inter-thread communication. If the applicable
command (a member function/instance combination) is associated
with a context, it executes in that context, else it executes
in its own.

All context objects merely have one static function that
processes the next command on the queue. Application
programmers in general don't think about threading, as
it is opaque whether commands are associated with threads
or not.

In general we prevent the race conditions mentioned by
halting on a binary semaphore in the task function and
explicitly activating tasks after construction. This is
typically done by an object that is responsible for
creation of all domain objects (objects associated with
context).

Typically the context function looks like this:

int CmdMngrTaskPosix::taskEntryPoint( CmdMngrTaskPosix* inst )
{
    // Wait for activation.
    inst->activateSem_->acquire();
    inst->execState_ = eExecutionState_Activated;

    while( inst->execState_ == eExecutionState_Activated )
    {
        inst->serviceCmdQueue();
    }
    // It is highly unlikely that more commands would exist
    // on the queue after terminate, but flush anyway.
    inst->flush();

    // Release the destructSem_ semaphore that is waiting for
    // the task to be completed. destructSem_ will be acquired
    // during either destruction or a call to terminateExecution().
    inst->destructSem_->release();

    return EXIT_SUCCESS;
}


Typically we make use of the template method pattern to create all
objects, after which we start the tasks. Obviously tasks can be
created on the fly, but creation and activation are separated. Each
context (or task) has an interface that allows commands to be mailed
onto its queue. Commands can consist of either member functions or
non-member functions. They are cloned prior to being mailed over
queues. Arguments to the commands are stored as pointers and
ownership transferral usually occurs, depending on the size of the
argument type.
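
As an illustration only (this is not Werner's framework; the Context name and its members are invented), a minimal sketch of the command-queue idea in C++11, with creation and activation kept separate as described above:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Hypothetical "context": a worker thread that drains a queue of commands.
class Context {
public:
    // Activation is separate from construction: the worker thread is only
    // started once the object (and its owner) is fully set up.
    void activate() { worker_ = std::thread([this] { run(); }); }

    // "Mail" a command onto this context's queue.
    void post(std::function<void()> cmd) {
        { std::lock_guard<std::mutex> lk(m_); queue_.push_back(std::move(cmd)); }
        cv_.notify_one();
    }

    // Destruction blocks until the worker has flushed the queue and exited.
    ~Context() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        if (worker_.joinable()) worker_.join();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                std::function<void()> cmd = std::move(queue_.front());
                queue_.pop_front();
                lk.unlock();              // run commands outside the lock
                cmd();
                lk.lock();
            }
            if (done_) return;            // queue already flushed above
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    bool done_ = false;
    std::thread worker_;
};

Usage would be along the lines of: Context ctx; ctx.activate(); ctx.post([]{ /* domain work */ });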

Destruction of a Context Object gets done from an orthogonal
context (typically the creator) and the destroyer is forced
to block until the context exits successfully.

All said, it's quite a large excursion to undertake, but it works
quite nicely and allows the application programmer to not
think "threads".

Regards,

Werner
 

Szabolcs Ferenczi

Is it?
[...]
How about this:
<quote>
23. Pete Becker, Apr 20, 7:04 pm
On 2008-04-20 12:36:50 -0400, James Kanze <[email protected]>
said:

Hmmmm...

"threading API at the language level" != "threading at the language
level"

"language level" != "API"
C++ will have threading at the language level, but you will not be
able to access the threading primitives via keywords;

Then it is not at the language level. Do you know at all what a
language is?

Some of you could already admit in another discussion thread that
C++0x does not provide threading at the language level. Check out
these questions:

1) How do you create a thread of computation in C++0x? With library
calls or at language level?

2) How do you define critical region in C++0x? With library calls or
at language level?

3) How do you let the threads of computation synchronise with each
other in C++0x? With library calls or at language level?

If you try to answer these simple questions, you might find the answer
to whether C++0x provides threading at the language level or at the
library level.
instead, you will have to access them through standard library based
wrappers.

That is the library itself. However, your claim just means that for
the C++0x programmer threading is at the library level and not at the
language level.

As I said in the other discussion thread, there would be no problem if
you guys could admit the truth, namely, that what C++0x will provide
is nothing else but yet another threading library.

You know, calling a library API a language is dissonant.

Maybe the committee wanted to design a horse but it is going to be a
camel. A camel is a beautiful creature as a camel but an ugly one as
a horse, and vice versa. The problem comes when you want to call a
camel a horse.

Best Regards,
Szabolcs
 

Szabolcs Ferenczi

You ask:
Do you understand English?

I guess so.
 The SGI statement says exactly what
I said, that the implementation is thread safe.

The SGI statement simply contradicts you: "If multiple threads access a
single container, and at least one thread may potentially write, then
THE USER IS RESPONSIBLE FOR ENSURING MUTUAL EXCLUSION BETWEEN THE
THREADS during the container accesses." It is just safe for reading,
so what I said was correct: it is thread safe to a certain extent.
SGI also confirms this: "The SGI implementation of STL is thread-safe
ONLY IN THE SENSE that ..." That is, it is not "completely thread
safe" as you claimed. You like to talk big, don't you?

Now let us see what I said and what you and other mental guys with
conditioned reflex attacked like mad:

"Be aware that although STL is thread-safe to a certain extent, you
must wrap around the STL data structure to make a kind of a bounded
buffer out of it."
http://groups.google.com/group/comp.lang.c++/msg/4450c4f92d6e0211

Besides, you still must show us how you can get elements from the
plain "completely thread safe" STL containers with multiple consumers.
(You cannot show this because you just talk big as usual.)
It's a more or less conditioned reflex on my part to correct
errors which people post, so that beginners don't get misled.

I had a strong feeling that you have a mental problem.

Best Regards,
Szabolcs
 

gpderetta

On May 19, 6:45 pm, Szabolcs Ferenczi <[email protected]>
wrote:
On May 18, 11:00 pm, Szabolcs Ferenczi <[email protected]>
wrote:
since you were trolling in the other discussion
thread in "Threading in new C++ standard" too,
...
that threads are not
going to be included at the language level in C++0x. You were
corrected there by people taking part in C++0x standardisation.
And that is simply a lie.
Is it?
[...]
How about this:
<quote>
23. Pete Becker, Apr 20, 7:04 pm
On 2008-04-20 12:36:50 -0400, James Kanze <[email protected]>
said:
There was never a proposal for introducing the
threading API at the language level, however; that just seems
contrary to the basic principles of C++, and in practice, is
very limiting and adds nothing.
There actually was a proposal for threading at the language level: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1875.html
...
</quote> http://groups.google.com/group/comp.lang.c++/msg/1ea5fdf80fd7f461
Q.E.D.

Hmmmm...

"threading API at the language level" != "threading at the language
level"

"language level" != "API"
C++ will have threading at the language level, but you will not be
able to access the threading primitives via keywords;

Then it is not at the language level. Do you know at all what a
language is?

*sigh*. We have been trying to explain it painfully to you for a long
time.

In C++ land, the "C++ language" is the standard defined in the
ISO/IEC 14882:2003 document, no more, no less.

This document includes both a keyword-based interface (also called
the core) and a library-based interface (also called the standard
library); both interfaces together make up the C++ language (a
freestanding implementation may lack part, but not all, of the
standard library).

Notice that the distinction between core and library is purely
a matter of convenience, and is due more to historical reasons
than anything else (if C++ were rewritten today, the core would be
even smaller).

Just because some parts are classified as "library" doesn't
mean they must actually be implemented as a library:
std::vector for all purposes might as well be a special keyword;
same thing for std::string.

Sometimes the distinction between the core and the library
is very fuzzy. Consider placement new: it is for all intents
and purposes a very primitive builtin, but in theory it is
defined in the <new> header. Or the builtin typeid, which
needs the header <typeinfo> for support. The new range-based
for loop, a variant of the classic 'for', also needs a support
header.

But I'm sure this won't convince you. Let's take another route:

Think about a language like Java, which has explicit
synchronized {} blocks. Certainly, by all definitions, this
is language support for threading (even if I'm sure it doesn't fit
your vision of good language threading support).

Let's consider a little Java/C++ hybrid with a
builtin mutex type, where the synchronized block takes
the mutex explicitly as a parameter (instead of implicitly,
as in Java), with the obvious semantics:

mutex m;
...
synchronized(m) {
    // do something in mutual exclusion
}

So far so good.
A user is free to do this:

class my_mutex {
    friend my_synchronized;
private:
    mutex m;
};

template<typename Body>
void my_synchronized(Body b, my_mutex& m) {
    synchronized(m.m) {
        b();
    }
}

Use them like this:

my_mutex m;

my_synchronized(
    lambda() { /* body in mutual exclusion here */ },
    m
);


So far, so good; certainly threading is still at the language level.

Let's say that, for whatever reason, the language designers liked
both my_mutex and my_synchronized so much that they decided these
are the primary means of accessing the synchronization primitives,
and eventually relegated mutex and synchronized to implementation
details no longer accessible to user code.

Would this suddenly remove threading support from the language?
I'm sure that you can come up with some definition which will
support your view, but it will be a very narrow-minded one
(one which values syntactic sugar more than semantics).

This is exactly the situation in C++: my_mutex is spelled
std::mutex (among other beasts), and, instead of my_synchronized,
we use scoped locks, as higher-order functions are a bit
cumbersome in C++.
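
For the record, a minimal sketch of the scoped-lock idiom being referred to, using the std::mutex and std::lock_guard names that the C++0x drafts settled on (the example is illustrative, not from the original post):

#include <mutex>

std::mutex m;
int shared_counter = 0;

void bump() {
    // The guard's scope plays the role of the synchronized block: the
    // mutex is released when the guard is destroyed, even on exceptions.
    std::lock_guard<std::mutex> lock(m);
    ++shared_counter;
}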

What matters is the semantics, not the syntactic sugar. C++0x
certainly has well-defined multi-threaded semantics, even if
the syntax might not have all the bells and whistles of other
languages.

In C++03, there were no pure library-based thread packages either:
Posix, OpenMP, even Win32 all define a dialect
of C++ (actually of C) which adds thread semantics.
Unfortunately, so far, all these bolt-on additions have been
less than perfect, bug-prone, and lacking in corner areas.

C++0x tries to fix all these problems, in part by standardizing
existing practice, in part by explicitly specifying which threaded
programs are legal, and finally by expanding the state of the art
(of C++) a bit by adding atomic primitives.
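
As an aside, a minimal sketch of the kind of atomic primitive meant here, using the std::atomic interface that ended up in the standard (again an illustration, not the poster's code):

#include <atomic>
#include <thread>

std::atomic<int> hits(0);

void worker() {
    for (int i = 0; i < 1000; ++i)
        hits.fetch_add(1, std::memory_order_relaxed);   // lock-free increment
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    // No data race and no mutex: the final value is exactly 2000.
    return hits.load() == 2000 ? 0 : 1;
}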
 

Szabolcs Ferenczi

think about a language like java, which has explicit
synchronized {} blocks. Certainly, by all definitions, this
is language support for threading (even if I'm sure it doesn't fit
your vision of good language threading support).

The `synchronized {} blocks' are language support for specifying
Critical Regions but, you are right, it is not correctly defined in
Java.
Let's consider a little variant of java/c++ hybrid with a
builtin mutex type, where the synchronized block take
the mutex explicitly as a parameter (instead of implicitly
like in java), and obvious semantics.

  mutex m;
  ...
  synchronized(m) {
     // do something in mutual exclusion
  }

So far so good.

So far not so good at all.

1) First of all, the mutex is a library-level tool. It is not a
language tool. It shall not appear at the language level. The mutex is
part of the library-based solution for achieving mutual exclusion
between processes. There can be various language means for specifying
a Critical Region, and one of them is the `synchronized' keyword.
However, it is not about the keyword, since you can call it
`synchronized', `region', `cr', or whatever. The main issue is that if
it is a language-level means, it has consequences; see the semantics.
One important semantic issue is that the block marked by this keyword
may contain accesses to shared variables, and accesses to shared
variables can only appear in these kinds of blocks. Note that the
low-level mutex can be inserted by the compiler to implement the
Critical Region defined at the language level. On the other hand, the
high-level Critical Region can be implemented by any other means in
the compiler as long as it ensures mutual exclusion. That is one of
the benefits of a high-level language.

2) In procedural languages, if you introduce a keyword for a Critical
Region, you must also give some means to associate the shared
variables that are involved. So, fixing your example, if there is a
shared resource `m', you can define a critical region using a keyword
like `synchronized' such as this:

shared int m; // the `shared' keyword marks the shared variable
...
synchronized(m) {
    // here you can access `m'
}

Now the compiler can make sure that `m' is accessed within Critical
Regions and only within Critical Regions.

Can you see the difference between the language level means and the
library level means?

3) In object-based or higher-level object-oriented languages the unit
of a shared resource is obviously an object. So the association of the
Critical Region and the critical resource is naturally merged in an
object. That was one of the failures of Java: they were not brave
enough to mark the class as the shared resource. In a decent
concurrent object-oriented language you could specify a keyword on
the class to mark a shared object (and not on the methods). E.g.:

synchronized class M {
    int m;
public:
    void foo() {
        // do something with `m' in mutual exclusion
    }
};

You must mark the class as a whole and not its individual methods.
Then the compiler can again help you to write correct concurrent
programs. This is again not possible with a library-based solution.
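
For comparison, a hedged sketch of how this monitor-style discipline is typically hand-rolled in plain C++ with a private mutex; unlike a language-level `synchronized class', nothing stops a careless method from touching m without taking the lock (the class is invented for illustration):

#include <mutex>

// Hand-rolled "monitor": every public method takes the private mutex.
class M {
public:
    void foo() {
        std::lock_guard<std::mutex> lock(mtx_);
        ++m;                       // access to `m' in mutual exclusion
    }

    int get() {
        std::lock_guard<std::mutex> lock(mtx_);
        return m;
    }

private:
    std::mutex mtx_;
    int m = 0;
};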

Note 1: We were talking about language level means for Critical Region
only but a concurrent language must provide means for Conditional
Critical Region as well. See also:
http://groups.google.com/group/comp.lang.c++/msg/041b2d8d13869371

Note 2: There is an important issue of defining threads of computation
at the language level, and the object-oriented solution for it is
indicated in the title of this discussion thread: "What's the
connection between objects and threads?" See also:
http://groups.google.com/group/comp.lang.c++/msg/4499fb9fb8c3a42a

Best Regards,
Szabolcs
 

I V

2) In procedural languages if you introduce a keyword for a Critical
Region, you must also give some means to associate which shared
variables are involved. So, fixing your example, if there is a shared
resource `m' you can define a critical region using a keyword like
`synchronized' such as this:

shared int m; // the `shared' keyword marks the shared variable ...
synchronized(m) {
// here you can access `m'
}

Now the compiler can make sure that `m' is accessed within Critical
Regions and only within Critical Regions.

Can you see the difference between the language level means and the
library level means?

Let's imagine, instead of your syntax, we use this syntax:

shared<int> m;

{
    synchronized<int> s(m);
    // Here you can access the shared int 'm' through the object 's'
}

Again, the compiler can ensure that the shared state is only accessed
within a critical region (which, in this case, is equivalent to a block
containing the construction of the relevant synchronized<T> object). But
my proposed syntax can be implemented just as well with a library
solution as with a keyword based one, if the language provides certain
guarantees. So what's the difference?
 

Szabolcs Ferenczi

Lets imagine, instead of your syntax, we use this syntax:

shared<int> m;

{
        synchronized<int> s(m);
        // Here you can access the shared int 'm' through the object 's'

}

Again, the compiler can ensure that the shared state is only accessed
within a critical region (which, in this case, is equivalent to a block
containing the construction of the relevant synchronized<T> object). But
my proposed syntax can be implemented just as well with a library
solution as with a keyword based one, if the language provides certain
guarantees. So what's the difference?

It is not about syntax but first of all it is about semantics.
However, without syntax you cannot talk about semantics.

In your version `shared' and `synchronized' are keywords, aren't
they?

If yes, you have already introduced keywords into the language which
the compiler can recognise. How would you describe the semantics of
these keywords? What is the role of the object `s'?

If not, it can indeed be implemented with a library solution, but the
compiler will not be able to identify the block that you mean as the
Critical Region, and hence it cannot ensure for you that the intended
shared variable is only accessed within a Critical Region.

Is it clear to you?

(Note that the role of "object 's'" is not clear in your proposal, see
"the shared int 'm' through the object 's'".)

Best Regards,
Szabolcs
 

coal

The `synchronized {} blocks' are language support for specifying
Critical Regions but, you are right, it is not correctly defined in
Java.




So far not so good at all.

1) First of all the mutex is a library level tool. It is not a
language tool. It shall not appear at the language level. The mutex is
part of the library-based solution for achieving mutual exclusion of
the processes. There can be various language means for specifying
Critical Region, and one of them is the `synchronized' keyword.
However, it is not about the keyword, since you can call it
`synchronized', `region', `cr', or whatever. The main issue is that if
it is a language level means, it has consequences, see the semantics.
One important semantical issue is that the block marked by this
keyword may contain access to shared variables and access to shared
variables can only appear in these kind of blocks. Note that the low
level mutex can be inserted by the compiler to implement the Critical
Region defined at the language level. On the other hand, the high
level Critical Region can be implemented by any other means in the
compiler as far as it ensures mutual exclusion. That is one of the
benefit of a high level language.

2) In procedural languages if you introduce a keyword for a Critical
Region, you must also give some means to associate which shared
variables are involved. So, fixing your example, if there is a shared
resource `m' you can define a critical region using a keyword like
`synchronized' such as this:

shared int m;     // the `shared' keyword marks the shared variable
...
synchronized(m) {
  // here you can access `m'

}

Now the compiler can make sure that `m' is accessed within Critical
Regions and only within Critical Regions.

Can you see the difference between the language level means and the
library level means?

3) In object-based or higher level object-oriented languages the unit
of shared resource is obviously an object. So the association of the
Critical Region and the critical resource is naturally merged in an
object. That was one of the failure of Java that they were not brave
enough to mark the class as the shared resource. In a decent
concurrent object-oriented language you could specify a keyword for
the class to mark a shared object (and not for the methods). E.g.:

synchronized class M {
  int m;
public:
  void foo() {
    // do something with `m' in mutual exclusion
  }

};

You must mark the class as a whole and not the individual methods of
it. Then the compiler can again help you to make correct concurrent
programs. This is again not possible with library-based solution.


I'm not sure marking the class like that is a good idea. Would you
mark vector as synchronized? If yes, do you expect compilers to
figure out that only one thread is able to access some of your
vector objects at any point in time and disable the synchronization?
I don't want to pay for extra synchronization when it isn't needed.

Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
 

I V

It is not about syntax but first of all it is about semantics. However,
without syntax you cannot talk about semantics.

Sure; I was trying to show you that you could get identical semantics to
the ones in your example using the existing C++ syntax (clearly, you need
more semantics than the C++ standard currently specifies; but the point
is that you don't need any more syntax).
In your version `shared' and `synchronized' are keywords, aren't they?

They don't have to be, that's the point. They could be class templates.
If yes, you already introduced keywords for the language which the
compiler can recognise. How would you describe the semantics of these
keywords? What is the role of object `s'?

With your suggestion, you declare the shared int 'm', synchronize on that
variable 'm', and access the shared data through the object 'm'. In my
suggestion, you declare the shared int 'm' and synchronize on the
variable 'm', but you access the shared data through the variable 's'.

The object of type synchronized<T> provides access to shared data of type
T that was declared with shared<T>. When an object of type
synchronized<T> is created, it ensures (through whatever method - the
obvious one is a mutex) that any other attempt to create a
synchronized<T> based on the particular shared variable will block.
If not, it can really be implemented with a library solution but the
compiler will not be able to identify the block what you mean as the
Critical Region and hence it cannot ensure for you that the intended
shared variable is only accessed within Critical Region.

Sure it can. The point is that the object of type synchronized<T> is the
only way in which one can gain access to the shared data. Therefore,
anywhere in which an object of type synchronized<T> is in scope is a
critical region; any attempt to access the shared state without creating
a synchronized<T> object (and therefore a critical region) will cause a
compile error. For instance:


shared<int> i, j, k;
i = j + k;

wouldn't compile, whereas:

shared<int> i, j, k;
{
    synchronized<int> s_i(i), s_j(j), s_k(k);
    s_i = s_j + s_k;
}

would.
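
A minimal sketch of how such shared<T>/synchronized<T> class templates might be written in ordinary C++ with std::mutex; this is an illustration of the idea, not code from the thread, and it does nothing about the deadlock that locking several shared objects in different orders can cause:

#include <mutex>

// Holds the data; the only way to reach it is through synchronized<T>.
template <typename T>
class shared {
    template <typename U> friend class synchronized;
public:
    explicit shared(T v = T()) : value_(v) {}
private:
    std::mutex m_;
    T value_;
};

// RAII critical region: locks on construction, unlocks on destruction,
// and is the only handle through which the shared value is visible.
template <typename T>
class synchronized {
public:
    explicit synchronized(shared<T>& s) : s_(s) { s_.m_.lock(); }
    ~synchronized() { s_.m_.unlock(); }

    synchronized(const synchronized&) = delete;
    synchronized& operator=(const synchronized&) = delete;

    T& get() { return s_.value_; }                        // access while locked
    synchronized& operator=(const T& v) { s_.value_ = v; return *this; }
    operator T() const { return s_.value_; }

private:
    shared<T>& s_;
};

// Mirroring the example above:
//   shared<int> i, j, k;
//   {
//       synchronized<int> s_i(i), s_j(j), s_k(k);
//       s_i = s_j + s_k;        // i = j + k, inside a critical region
//   }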
 

Ian Collins

Chris said:
Well, at this point, IMVHO, Szabolcs has rendered himself into a
complete troll with no intention on learning anything.

Yet we still keep feeding him...
He even called
David Butenhof a fool for patiently trying to explain to him that
condvars can be used to signal about events that result from state
mutations.

Now that was a classic. The put down was even better:

http://tinyurl.com/5wk4l6

I should have learned my lesson there.
 

Ian Collins

Chris said:
Ian already tried to patiently explain why running threads from ctors
can be a bad idea.

I should add that I didn't "escape", I just realised my head would lose
out to the brick wall.
 

Szabolcs Ferenczi

Your claim:
Ian already tried to patiently explain why running threads from ctors can be
a bad idea.

On the contrary, I patiently corrected all his mistakes. The last
time was in this post:
http://groups.google.com/group/comp.lang.c++/msg/a02557fe9a4b7909

Then he failed as I predicted.
I even gave you some code which shows how undefined behavior
results when a pure virtual function is invoked from the ctor of a base
class!

I made the remark about the relevant piece of hack that it is a
clear example of the fact that hackers can misuse any formal notation.
http://groups.google.com/group/comp.lang.c++/msg/d0d8850e3ec7ab3a
You misused C++ in a sequential mode in your demonstration.

Best Regards,
Szabolcs
 

Szabolcs Ferenczi

On May 19, 6:45 pm, Szabolcs Ferenczi <[email protected]>
wrote:
On May 18, 11:00 pm, Szabolcs Ferenczi <[email protected]>
wrote:
since you were trolling in the other discussion
thread in "Threading in new C++ standard" too,
...
that threads are not
going to be included at the language level in C++0x. You were
corrected there by people taking part in C++0x standardisation.
And that is simply a lie.
Is it?
[...]
How about this:
<quote>
23.  Pete Becker    View profile   More options Apr 20, 7:04 pm
On 2008-04-20 12:36:50 -0400, James Kanze <[email protected]>
said:
There was never a proposal for introducing the
threading API at the language level, however; that just seems
contrary to the basic principles of C++, and in practice, is
very limiting and adds nothing.
There actually was a proposal for threading at the language level: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1875.html
...
</quote> http://groups.google.com/group/comp.lang.c++/msg/1ea5fdf80fd7f461
Q.E.D.
Nope:
"threading API at the language level" != "threading at the language
level"
"language level" != "API"
Then it is not at the language level. Do you know at all what language
is?

You react:
*sigh*. We have been trying to explain it painfully to you for a long
time.

So you keep claiming that C++0x has threading at the language level
and you are wondering why some of us who already know some concurrent
programming languages cannot accept this.

A fable might be due here:

<fable>
Once the animals gathered at a meeting to decide who is the strongest
animal next to the lion. The bear claimed he is the strongest one next
to the lion. The wolf claimed he is the strongest one next to the
lion. The lion had enough of it and he declared that the rabbit is the
strongest animal next to him and closed the meeting.

The rabbit hearing this got very scared and started shivering. His
wife seeing this told him: Are you mad? Why are you shivering? Go and
bite anybody and you will see that they are scared of you.

The rabbit went to the wolf and bit him on the bottom. The wolf
thought it was a small bee but when he looked back he could see that
it was the rabbit. The rabbit who is the strongest animal next to the
lion. So the wolf got scared, apologised, and slinked away.

The rabbit went to the bear and bit him on the bottom. The bear
thought it was a small mosquito but when he looked back he could see
that it was the rabbit. The rabbit who is the strongest animal next to
the lion. So the bear got scared, apologised, and slinked away.

The rabbit started to enjoy the situation and became very cheeky. He
went to the tiger and bit him on the bottom. The tiger looked back and
promptly swatted the rabbit.

Well, it happened that the tiger was not present at the meeting and
hence he did not know that the rabbit was the strongest animal
next to the lion.
</fable>

Well, it might just have happened that some of us experienced in
concurrent programming were not at the meeting where the committee
decided that C++0x is a concurrent programming language and that it
has threading at the language level.

Best Regards,
Szabolcs
 

Szabolcs Ferenczi

Name a platform?

I will not name any platform since I did not claim that "fork() is
very fast". Please ask the one who claimed it. Besides, you can guess
the platform.

I have concluded already that some of you have a well-developed
conditioned reflex which is so acute that you tend to address me even
if I did not claim anything.

Best Regards,
Szabolcs
 

Szabolcs Ferenczi

You say:
I'm not sure marking the class like that is a good idea.

That is why I try to explain it to you.
Would you mark vector as synchronized?

It depends on what you want to express in the programming language
notation. If you want to express your intention that the whole vector
is a critical resource, then yes, put it as a member in a class marked
with the keyword `synchronized' or `shared' or whatever. If your
intention is not that the whole vector is a shared resource, you can
partition it.
If yes, do you expect compilers to figure out that only one thread is
able to access some of your vector objects at any point in time and
disable the synchronization?

No, if I follow you correctly. The compiler does not have to figure
out anything because you have expressed exactly what you wanted. You
wanted the whole vector to be a shared resource. That means the
resource must be protected against simultaneous access.

If you want it otherwise, you must express it so in your programming
language notation.
I don't want to pay for extra synchronization when it isn't needed.

Then, you could build your algorithm accordingly. Everything depends
on the notation as a tool of thought. That is why a programming
language must be designed carefully.

Best Regards,
Szabolcs
 

James Kanze

I guess so.

You don't seem to.
SGI statement simply contradicts you: "If multiple threads
access a single container, and at least one thread may
potentially write, then THE USER IS RESPONSIBLE FOR ENSURING
MUTUAL EXCLUSION BETWEEN THE THREADS during the container
accesses." It is just safe for reading, so what I said was
correct that it is thread safe to a certain extent.

That's not the only guarantee it gives. It specifies the
contract that you have to respect. In other words, it is
completely thread safe.
The SGI also confirms me: "The SGI implementation of STL is
thread-safe ONLY IN THE SENSE that ..."

Only in the sense of thread safety normally used by the experts
in the domain. It's true that they felt they had to add this
statement because a lot of beginners have misconceptions about
the meaning of the word. (Maybe you're one of them, and that's
the problem.)
That is it is not "completely thread safe" as you claimed.

Sorry, but that is the accepted definition of "completely thread
safe".
You like to talk big, don't you?

And what is that supposed to mean? Pointing out your
misstatements is "talking big"?

[...]
Besides, you still must show us how can you get elements from
the plain "completely thread safe" STL containers with
multiple consumers. (You cannot show this because you just
talk big as usual.)

Are you trying to say that you cannot use STL containers for
communications between threads? I use std::deque, for example,
in my message queue, and it works perfectly.

This is so basic, there has to be some misunderstanding on your
part.
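
A minimal sketch of the kind of wrapper being discussed: a message queue built on std::deque, with the mutual exclusion supplied by the user exactly as the SGI documentation requires. The class name and interface are illustrative, not James Kanze's actual code, and it works with multiple consumers:

#include <condition_variable>
#include <deque>
#include <mutex>

// The std::deque is only touched while the mutex is held, which is the
// contract the SGI/STL documentation asks the user to uphold.
template <typename T>
class MessageQueue {
public:
    void send(T msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(msg)); }
        cv_.notify_one();
    }

    T receive() {                        // blocks until a message is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop_front();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<T> q_;
};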
 

James Kanze

[...]
Notice that the distinction between core and library is purely
a matter of convenience, and is more due to historical reasons
than anything (if C++ were rewritten today, core would be even
smaller).

That's far from certain. We're still adding features to the
core, in order to support concepts, or lambda, for example, where
purely library solutions don't work as well.

However...
Just because some parts are classified as "library" it doesn't
means that it must actually be implemented as a library:
std::vector for all purposes might as well be a special keyword:
same thing for std::string.
Sometimes the distinction between the core and library part
is very fuzzy: consider placement new: it is for all intent
and purposes a very primitive builtin, but in theory it should
be defined in the <new> header. Or the builting typeid, which
needs header <type_info> for support. The new range based
for loop, a variant of the classic 'for' also need a support
header.

The best comparison here is doubtlessly typeid. The contents of
<typeinfo> are certainly library, but they don't make any sense
without the basic language support. Similarly, the contents of
<pthread.h> don't make sense without the language support Posix
adds to C.

What's interesting in this regard is that the thread support in
C++ is being developed with constant consultation with the C
committee, so that it will be adopted by C as well. Obviously,
not the API, since no C library will have classes which count on
constructors and destructors. But the language part is designed
so that it can be identical, with constant consultation with the
C committee to ensure that it will be identical; that the C
committee is in agreement with what we are doing, and will
accept it in the next version of C. (In a certain sense, it's
even lower than the language, since it doesn't depend on syntax
or keywords even.)
 

James Kanze

On May 20, 11:45 pm, gpderetta <[email protected]> wrote:
2) In procedural languages if you introduce a keyword for a
Critical Region, you must also give some means to associate
which shared variables are involved. So, fixing your example,
if there is a shared resource `m' you can define a critical
region using a keyword like `synchronized' such as this:
shared int m; // the `shared' keyword marks the shared variable
...
synchronized(m) {
// here you can access `m'
}
Now the compiler can make sure that `m' is accessed within
Critical Regions and only within Critical Regions.

And what does that buy us? It prevents some very useful idioms,
while adding no real additional safety.
 

James Kanze

On May 20, 5:34 pm, Szabolcs Ferenczi <[email protected]>
wrote:

[...]
I'm not sure marking the class like that is a good idea.
Would you mark vector as synchronized? If yes, do you expect
compilers to figure out that only one thread is able to access
some of your vector objects at any point in time and disable
the synchronization? I don't want to pay for extra
synchronization when it isn't needed.

That's not really the point (although it certainly would be in
some applications). The point is that this idea was put forward
many, many years ago; it works well when you're dealing with
simple objects, like int's, but it doesn't work when you start
dealing with sets of objects grouped into transactions.
Ensuring transactional integrity in a multi-threaded
environment, without deadlocks, still requires manually managing
locking and unlocking---even scoped locking doesn't really work
here. (You can implement transactions with scoped locking, but
only if you only handle one transaction at a time. In which
case, there's really no point in being multithreaded.)
 
