Are C++ features a subset of Java features?


IR

andrewmcdonagh wrote:
[snip]
Hmmm... this is what I thought....

Java Finalizers are not destructors. They are not even remotely
equivalent and so we are comparing apples and pears.

For the record, I was replying to:

Cesar said:
Ian Collins wrote:
To answer that, you need to ask yourself whether these features
allow easier, safer, less error-prone programming.

I showed that both approaches (RAII vs GC) are as easy/safe in the
general case, that is, just fire and forget.

This becomes false as soon as you need determinism, in which case GC
becomes a nuisance: the responsibility of freeing a resource in time
is then transferred to the user of the resource.

IMO this is *not* easier or less error prone, and it can even affect
safety (e.g. does automatically rolling back a DB transaction on
exception ring a bell?).

One could argue that to handle this latter example, the user could use a
try/finally block. Again, this is a shift of responsibility onto the
user of the transaction object.


So a language that *requires* you to use GC renders itself less safe
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.

And I have yet to find any advantage in favor of GC, as long as
one doesn't play around with naked pointers...


Cheers,
 

Jon Harrop

IR said:
I showed that both approaches (RAII vs GC) are as easy/safe in the
general case, that is, just fire and forget.

Almost any program using graphs or trees with reuse is a counterexample.
For example, an abstract syntax tree:

http://www.codecodex.com/wiki/index.php?title=Derivative

Look at the final OCaml implementation. The first two functions simplify
symbolic expressions as they are built:

let ( +: ) f g = match f, g with
| `Int n, `Int m -> `Int (n + m)
| `Int 0, f | f, `Int 0 -> f
| f, g -> `Add(f, g)

let ( *: ) f g = match f, g with
| `Int n, `Int m -> `Int (n * m)
| `Int 0, _ | _, `Int 0 -> `Int 0
| `Int 1, f | f, `Int 1 -> f
| f, g -> `Mul(f, g)

The next function computes the symbolic derivative:

let rec d e x = match e with
| `Int n -> `Int 0
| `Add(f, g) -> d f x +: d g x
| `Mul(f, g) -> f *: d g x +: g *: d f x
| `Var v -> `Int (if v=x then 1 else 0)

Skip to the end (the rest is a pretty printer) and it constructs and
differentiates a symbolic expression:

let x = `Var "x" and a = `Var "a" and b = `Var "b" and c = `Var "c"

let e = a *: x *: x +: b *: x +: c

d e "x"

Note that the subexpression x = `Var "x" is a tree that is referred to
three times by reference.

My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as this
GC-based program does?

The simplest solution that I know of in C++ (short of battling with Boehm)
is to disallow sharing and explicitly copy all subtrees so that the
root "owns" its entire tree. This is obviously very wasteful and slow.
This becomes false as soon as you need determinism, in which case GC
becomes a nuisance:

You simply resort to RAII when you must. GC is then irrelevant, not
a "nuisance".
the responsibility of freeing a resource in time is then transferred to the
user of the resource.

GCs are designed specifically for memory management.
IMO this is *not* easier or less error prone, and it can even affect
safety (e.g. does automatically rolling back a DB transaction on
exception ring a bell?).

Can you elaborate on this?
One could argue that to handle this latter example, the user could use a
try/finally block. Again, this is a shift of responsibility onto the
user of the transaction object.

The try..finally block would be in the library code, just as the destructor
is in the library code in C++.
So a language that *requires* you to use GC renders itself less safe
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.

Why can the library not handle them?
And I have yet to find any advantage in favor of GC, as long as
one doesn't play around with naked pointers...

Easier to write graph algorithms.

Can have closures.
 

Alf P. Steinbach

* Jon Harrop:
My question is simply: how can RAII and smart pointers be used to handle
deallocation of a simple graph like this as simply and efficiently as this
GC-based program does?

E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.

There are many other smart pointers designed for sharing.
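For concreteness, here is a minimal sketch of a shared_ptr-based expression
node that allows the kind of subtree sharing shown in the OCaml example
above (the Expr type and its helper functions are assumptions made for
illustration, not a reference implementation):

// Sketch only: Expr and its helpers are illustrative assumptions.
// A shared subtree is kept alive by its reference count and freed
// when its last owner goes away.
#include <memory>
#include <string>

struct Expr
{
    enum Kind { Int, Var, Add, Mul } kind ;
    int n ;                               // used when kind == Int
    std::string name ;                    // used when kind == Var
    std::shared_ptr<Expr> lhs, rhs ;      // used when kind == Add or Mul
} ;

typedef std::shared_ptr<Expr> ExprPtr ;

ExprPtr make_var(const std::string & v)
{
    ExprPtr e(new Expr) ;
    e->kind = Expr::Var ;
    e->name = v ;
    return e ;
}

ExprPtr make_add(ExprPtr f, ExprPtr g)
{
    ExprPtr e(new Expr) ;
    e->kind = Expr::Add ;
    e->lhs = f ;
    e->rhs = g ;
    return e ;
}

int main()
{
    ExprPtr x = make_var("x") ;
    // x is shared by two parents; each parent bumps the count,
    // and the node is released when the last owner is destroyed.
    ExprPtr e = make_add(x, make_add(x, make_var("c"))) ;
}

(With boost::shared_ptr the code is the same apart from the namespace.)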
 

Alan Johnson

Jon said:
Can you elaborate on this?


The try..finally block would be in the library code, just as the destructor
is in the library code in C++.


Why can the library not handle them?

Consider a class in C++ that uses RAII to roll back a database
transaction in the destructor. You would use it something like this:

void f()
{
    Transaction t ;

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

class Whatever
{
    public static void f()
    {
        Transaction t = null ;
        try
        {
            t = new Transaction() ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;

            t.commit() ;
        }
        finally
        {
            if (t != null)
                t.rollback() ;
        }
    }
}

Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.
 

Kai-Uwe Bux

Alf said:
* Jon Harrop:

E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.

I may be wrong, but I thought that shared_ptr<> uses reference counting and
will therefore not reclaim resources properly for graphs that contain
directed cycles.

There are many other smart pointers designed for sharing.

Do you know of one that copes with cycles?


Best

Kai-Uwe Bux
 

Jon Harrop

This is exactly what I thought they were getting at.

Alan said:
Note that the responsibility of rolling back was transferred from the
Transaction class to the client code.

In this case, yes.
One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC.

Absolutely. That is an abuse of the GC (using it for a non-memory resource).
It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

Certainly in ML you do exactly what I said before. There will be a Java
equivalent, of course, but it will be much more verbose.

Essentially, instead of writing a transaction class with a destructor in a
C++ library, the library writer must write a higher-order function that
unwinds the transaction before it returns in the event that an exception
was raised. This gives exactly the same semantics as RAII with the same
amount of code.

I'm sure someone in a Java forum could explain how to do this in Java.
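In C++ terms, the shape of the wrapper being described might look something
like the sketch below (with_transaction and the bare-bones Transaction type
here are assumptions for illustration only):

// Sketch of the higher-order "execute-around" idea; with_transaction
// and this Transaction stand-in are assumptions, not library code.
#include <iostream>

struct Transaction
{
    void add_query(const char *) { /* ... */ }
    void commit()   { std::cout << "commit\n" ; }
    void rollback() { std::cout << "rollback\n" ; }
} ;

// The library owns the unwinding: callers only supply the body.
template <class Action>
void with_transaction(Action action)
{
    Transaction t ;
    try
    {
        action(t) ;
        t.commit() ;
    }
    catch (...)
    {
        t.rollback() ;   // unwind, then let the exception propagate
        throw ;
    }
}

The client passes a function object (or an anonymous function, where the
language has them) and never writes the unwinding code itself.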
 

Jon Harrop

Alf said:
* Jon Harrop:

E.g. boost::shared_ptr.

Will be included in the next standard as std::shared_ptr.

Using such a smart pointer is not as simple (you must explicitly use it) and
not as fast (reference counting is one of the slowest forms of GC).
There are many other smart pointers designed for sharing.

Most importantly, none of Boost's smart pointers can collect cyclic graphs.
So they are more verbose, slower and less powerful than GC.

The AST I gave before was not a cyclic graph, but z=3+2z is:

let rec z = `Add(`Int 3, `Mul(`Int 2, z))

A GC has no problem collecting such data structures. C++ with Boost's smart
pointers will leak memory until it dies.
 

Alf P. Steinbach

* Kai-Uwe Bux:
I may be wrong, but I thought that shared_ptr<> uses reference counting and
will therefore not reclaim resources properly for graphs that contain
directed cycles.

Right, a simplistic application of shared_ptr<> has that problem. Jon
Harrop didn't say the graph had cycles but instead kept going on about
shared expression trees. If he had mentioned cyclic graphs I'd add that
for the cyclic portions weak_ptr may in many cases be employed to break
the cycles (a companion class to shared_ptr).

Do you know of one that copes with cycles?

Not directly and automagically, it requires design. The general case of
completely arbitrary graphs with portions reused in other graphs, and so
on, appears to be difficult. The question is whether there's any
problem that requires that degree of unrestricted linkup of objects.
 

Alf P. Steinbach

* Jon Harrop:
Essentially, instead of writing a transaction class with a destructor in a
C++ library, the library writer must write a higher-order function that
unwinds the transaction before it returns in the event that an exception
was raised. This gives exactly the same semantics as RAII with the same
amount of code.

Applying the template pattern to exception safety / transactions is a
good idea, but (sad to say) it was unfamiliar to me. Hey, learned
something new! :)
 

Alf P. Steinbach

* Jon Harrop:
Using such a smart pointer is not as simple (you must explicitly use it) and
not as fast (reference counting is one of the slowest forms of GC).


Most importantly, none of Boost's smart pointers can collect cyclic graphs.
So they are more verbose, slower and less powerful than GC.

The AST I gave before was not a cyclic graph, but z=3+2z is:

let rec z = `Add(`Int 3, `Mul(`Int 2, z))

A GC has no problem collecting such data structures. C++ with Boost's smart
pointers will leak memory until it dies.

As mentioned else-subthread, weak_ptr, or simply a raw pointer, can in
many cases be employed to "break" the graph. The recursive rhs
reference could here just be a weak_ptr. But there are examples that
are not so easily dealt with, and also as stated else-thread, the
question is whether such arbitrary linkups occur in practice, with no
reasonable alternative.
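As a minimal sketch of that idea (the Node type is an assumption; only the
back edge is weak, so the reference counts can still reach zero):

// Sketch: breaking a reference cycle with weak_ptr.  Node is an
// illustrative assumption; the back edge does not own its target.
#include <memory>

struct Node
{
    std::shared_ptr<Node> next ;   // owning (strong) edge
    std::weak_ptr<Node>   back ;   // non-owning edge that closes the cycle
} ;

int main()
{
    std::shared_ptr<Node> a(new Node) ;
    std::shared_ptr<Node> b(new Node) ;
    a->next = b ;   // a owns b
    b->back = a ;   // b points back at a without owning it
}   // both nodes are destroyed here; a strong back edge would leak them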
 

I V

Jon said:
Why can the library not handle them?

Alan said:
Consider a class in C++ that uses RAII to roll back a database
transaction in the destructor. You would use it something like this:

void f()
{
Transaction t ;

// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;

t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

[snip]


Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

I think you could do something like this:

class SaferWhatever
{
    public static void f()
    {
        DbConnection.doTransaction(new DatabaseAction() {
            public void doAction(Transaction t) {
                t.add_query(sql);
                // Do some stuff that could throw exceptions.
                t.add_query(sql);
                // Do some stuff that could throw exceptions.
                t.add_query(sql);
                t.commit();
            }
        });
    }
}

Where the library provides something like:

abstract class DatabaseAction
{
    public abstract void doAction(Transaction t);
}

class DbConnection
{
    public static void doTransaction(DatabaseAction a)
    {
        Transaction t = null;
        try
        {
            t = new Transaction();
            a.doAction(t);
        }
        finally
        {
            if( t != null )
                t.rollback();
        }
    }
}

And the syntax might be a bit nicer in a language with first-class
anonymous functions.
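For comparison, here is roughly what the same execute-around call looks like
with a first-class anonymous function, using a C++11 lambda as the stand-in
(Transaction, doTransaction and sql are minimal assumptions made just for
this sketch):

// Sketch only: Transaction, doTransaction and sql are stand-ins.
#include <string>

struct Transaction
{
    void add_query(const std::string &) {}
    void commit() {}
    void rollback() {}
} ;

static const std::string sql = "SELECT 1" ;

// Library side: run the supplied body, rolling back if it throws.
template <class Action>
void doTransaction(Action action)
{
    Transaction t ;
    try { action(t) ; }
    catch (...) { t.rollback() ; throw ; }
}

// Client side: the anonymous function keeps the call site compact.
void f()
{
    doTransaction([](Transaction & t)
    {
        t.add_query(sql) ;
        // ... work that may throw ...
        t.commit() ;
    }) ;
}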
 

Alan Johnson

I V said:
Jon said:
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.
Why can the library not handle them?
Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:

void f()
{
Transaction t ;

// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;

t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

[snip]

Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

I think you could do something like this:

class SaferWhatever
{
public static void f()
{
DbConnection.doTransaction(new DatabaseAction() {
public void doAction(Transaction t) {
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
t.commit();
}
});
}
}

Where the library provides something like:

abstract class DatabaseAction
{
public abstract void doAction(Transaction t);
}

class DbConnection
{
public static void doTransaction(DatabaseAction a)
{
Transaction t = null;
try
{
t = new Transaction();
a.doAction(t);
}
finally
{
if( t != null )
t.rollback();
}
}
}

And the syntax might be a bit nicer in a language with first-class
anonymous functions.

This is clever, but is there any non-masochistic way of expanding it to
an arbitrary number of resources? Consider when you have two resources
from different libraries. First, the RAII way in C++:

#include <iostream>

class Resource1
{
public:
    void doSomething()
    {
        std::cout << "Doing something with Resource1." << std::endl ;
    }

    ~Resource1()
    {
        std::cout << "Resource1 released." << std::endl ;
    }
} ;

class Resource2
{
public:
    void doSomething()
    {
        std::cout << "Doing something with Resource2." << std::endl ;
    }

    ~Resource2()
    {
        std::cout << "Resource2 released." << std::endl ;
    }
} ;

int main()
{
    Resource1 r1 ;
    Resource2 r2 ;
    r1.doSomething() ;
    r2.doSomething() ;
}


Now let's do the same thing in Java, transferring responsibility to the
client code (aka the "bad" way):

public class Resource1
{
    public void doSomething()
    {
        System.out.println("Doing something with Resource1.") ;
    }

    public void release()
    {
        System.out.println("Resource1 released.") ;
    }
}

public class Resource2
{
    public void doSomething()
    {
        System.out.println("Doing something with Resource2.") ;
    }

    public void release()
    {
        System.out.println("Resource2 released.") ;
    }
}

public class Whatever
{
    public static void main(String args[])
    {
        Resource1 r1 = null ;
        Resource2 r2 = null ;
        try
        {
            r1 = new Resource1() ;
            r2 = new Resource2() ;
            r1.doSomething() ;
            r2.doSomething() ;
        }
        finally
        {
            if (r1 != null)
                r1.release() ;
            if (r2 != null)
                r2.release() ;
        }
    }
}

And finally, let's apply the idiom you propose to make this safer:

public class Resource1Manager
{
    public static void use(Resource1Action a)
    {
        Resource1 r1 = null ;
        try
        {
            r1 = new Resource1() ;
            a.doAction(r1) ;
        }
        finally
        {
            if (r1 != null)
                r1.release() ;
        }
    }
}

public abstract class Resource1Action
{
    public abstract void doAction(Resource1 r1) ;
}

public class Resource2Manager
{
    public static void use(Resource2Action a)
    {
        Resource2 r2 = null ;
        try
        {
            r2 = new Resource2() ;
            a.doAction(r2) ;
        }
        finally
        {
            if (r2 != null)
                r2.release() ;
        }
    }
}

public abstract class Resource2Action
{
    public abstract void doAction(Resource2 r2) ;
}

public class SaferWhatever
{
    public static void main(String args[])
    {
        Resource1Manager.use(new Resource1Action()
        {
            public void doAction(final Resource1 r1)
            {
                Resource2Manager.use(new Resource2Action()
                {
                    public void doAction(Resource2 r2)
                    {
                        r1.doSomething() ;
                        r2.doSomething() ;
                    }
                }) ;
            }
        }) ;
    }
}


Even if you had first-class anonymous functions, this is clearly going
to depart quickly from the simplicity and straightforwardness of RAII.
(And as a side issue, how do you negotiate the return type in this idiom?)
 

Sascha Bohnenkamp

Safety = no dangling pointers, no buffer overruns, no segmentation faults...

only "internal null pointer expections" ;)
 

peter koch

Jon said:
C. Not C++. I'm not saying that C isn't cross platform - it ports well. I'm
saying that C++ (using all the bells and whistles) is very
platform/compiler specific in my experience - it doesn't port well.

What platforms does C++ not port well to? I'm curious as our company
has a huge codebase in C++ (16000 cpp and hpp files), and we have had
more problems porting our Java base (the size of which I don't know)
than our C++ base.

/Peter
 

IR

I don't have much to add to the answers given to your post, as the other
people have already explained my point quite clearly.

But...

Jon said:
IR wrote: [...]
And I have yet to find any advantage in favor of GC, as long
as one doesn't play around with naked pointers...

Easier to write graph algorithms.

Can have closures.

I don't see how closures and GC are related?

FWIW, closures exist in C++. Granted, not as part of the language, but
as libraries. So even without GC you can have closures...


Cheers,
 

andrewmcdonagh

Alan said:
Consider a class in C++ that uses RAII to roll back a database
transaction in the destructor. You would use it something like this:

void f()
{
    Transaction t ;

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:

class Whatever
{
    public static void f()
    {
        Transaction t = null ;
        try
        {
            t = new Transaction() ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;
            // Do some stuff that could throw exceptions.
            t.add_query(sql) ;

            t.commit() ;
        }
        finally
        {
            if (t != null)
                t.rollback() ;
        }
    }
}

Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.

If we are writing the Transaction class, then there is nothing stopping
us from putting the rollback inside a try/finally block within the
add_query() method.

So we'd end up with:

void f()
{
    Transaction t = new Transaction();

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

If at any point an exception is thrown, the transaction is rolled back.

If we can't (because it's a third-party class, or widely used and we'd be
breaking its known behaviour), then yes, this is where a Java
programmer's mindset (aka idiom) comes into play (just like the C++ RAII
idiom).

We'd decorate the Transaction class to provide the same safety.

void f()
{
    Transaction t = new AutoRollbackTransaction(new Transaction());

    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;
    // Do some stuff that could throw exceptions.
    t.add_query(sql) ;

    t.commit() ;
}

// If at any point an exception is thrown, the transaction is rolled back.

Where AutoRollbackTransaction derives from Transaction but wraps a
try/finally block around each call to its delegate that was passed at
construction time.

This is actually a cleaner design (for Java), as it separates the
auto-rollback responsibility from the Transaction itself.

Andrew
 

andrewmcdonagh

I V said:
Jon Harrop wrote:
and more error prone, by imposing certain vital constructs to be
handled by the user rather than by the library.
Why can the library not handle them?
Consider a class in C++ that uses RAII to rollback a database
transaction in the destructor. You would use it something like this:
void f()
{
Transaction t ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
// Do some stuff that could throw exceptions.
t.add_query(sql) ;
t.commit() ;
}
If at any point an exception is thrown, the transaction is rolled back
when the destructor is called. Now consider equivalent code in Java:
Note that the responsibility of rolling back was transferred from the
Transaction class to the client code. One could argue that eventually
the GC will reclaim the object, at which point the rollback will be
executed, but a database transaction isn't the sort of thing you want
to just leave sitting around waiting for the GC. It could be that I
just lack the proper skill as a Java programmer, but I cannot see a way
that the library (in this case, where the Transaction class lives) can
handle that try/finally for me.
I think you could do something like this:
class SaferWhatever
{
public static void f()
{
DbConnection.doTransaction(new DatabaseAction() {
public void doAction(Transaction t) {
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
// Do some stuff that could throw exceptions.
t.add_query(sql);
t.commit();
}
});
}
}
Where the library provides something like:
abstract class DatabaseAction
{
public abstract void doAction(Transaction t);
}
class DbConnection
{
public static void doTransaction(DatabaseAction a)
{
Transaction t = null;
try
{
t = new Transaction();
a.doAction(t);
}
finally
{
if( t != null )
t.rollback();
}
}
}
And the syntax might be a bit nicer in a language with first-class
anonymous functions.

This is clever, but is there any non-masochistic way of expanding it to
an arbitrary number of resources? Consider when you have two resources
from different libraries. First, the RAII way in C++:

#include <iostream>

class Resource1
{
public:
void doSomething()
{
std::cout << "Doing something with Resource1." << std::endl ;
}

~Resource1()
{
std::cout << "Resource1 released." << std::endl ;
}

} ;

class Resource2
{
public:
void doSomething()
{
std::cout << "Doing something with Resource2." << std::endl ;
}

~Resource2()
{
std::cout << "Resource2 released." << std::endl ;
}

} ;

int main()
{
Resource1 r1 ;
Resource2 r2 ;
r1.doSomething() ;
r2.doSomething() ;

}

Now let's do the same thing in Java, transferring responsibility to the
client code (aka the "bad" way):

snipped....

Alan Johnson


So here's how I'd do it in Java...



public class Whatever
{
    public static void main(String args[])
    {
        Resource1 r1 = new AutoReleasingResource1(new Resource1());
        Resource2 r2 = new AutoReleasingResource2(new Resource2());
        r1.doSomething() ;
        r2.doSomething() ;
    }
}


class AutoReleasingResource1 extends Resource1 {
    private Resource1 delegate;

    public AutoReleasingResource1(Resource1 delegate) {
        this.delegate = delegate;
    }

    public void doSomething()
    {
        try {
            delegate.doSomething();
        } finally {
            delegate.release();
        }
    }
}

class AutoReleasingResource2 extends Resource2 {

    private Resource2 delegate;

    public AutoReleasingResource2(Resource2 delegate) {
        this.delegate = delegate;
    }

    public void doSomething()
    {
        try {
            delegate.doSomething();
        } finally {
            delegate.release();
        }
    }
}


Do note, I've explicitly chosen to pass the delegate object into the
decorator to show it's using a delegate (not to mention that it makes
unit testing so much easier), but there is nothing stopping us from
creating this object within each decorator itself instead of passing it
in.

Andrew
 

Jon Harrop

Alf said:
* Jon Harrop:

As mentioned else-subthread, weak_ptr, or simply a raw pointer, can in
many cases be employed to "break" the graph.

Then you are reimplementing the garbage collector.
 

Jon Harrop

Alf said:
* Kai-Uwe Bux:

Not directly and automagically, it requires design. The general case of
completely arbitrary graphs with portions reused in other graphs, and so
on, appears to be difficult. The question is whether there's any
problem that requires that degree of unrestricted linkup of objects.

Almost any problem domain that uses graph theory. For me, this includes
compilers, interpreters, graphics, numerical, symbolic, parallel and many
parts of scientific computing.

I used C++ for many years, and one of the most important improvements
made by modern high-level languages is the ease with which they handle
these kinds of problems.
 
