Is C# really "better" than C++ or C++0x? How about Objective-C?

A

Microsoft thinks C++ is obsolete and C# is "the future". Apple thinks the
same for Objective-C.

IMO this is all nonsense.

What do you think about this? How much "better" are they really? And what
about C++0x?
 

Juha Nieminen

A said:
Microsoft thinks C++ is obsolete and C# is "the future". Apple thinks the
same for Objective-C.

Do you have any source references for this claim?

As for Objective-C being "better" than C++, I cannot but fully disagree.
Basically the only thing that Objective-C offers that C++ doesn't is full
runtime introspection (which can be handy sometimes). Otherwise Objective-C
is very crippled compared to C++. For example, it lacks such essential
features as RAII (no scope-based automatic lifetime of objects, no
automatic constructors, destructors and assignment, and so on) and
templates. It also has other more minor, but still pretty annoying,
limitations, such as no inner classes nor any other inner types (inner to
a class, that is), no protected or private methods, no multiple inheritance
(it follows the fad of having "interfaces", which in this case are called
"protocols", but they can have neither member variables nor method
implementations), and no way to hide "constructors" of the base class in a
derived class (because Objective-C has no constructors), which sometimes
causes irritating bugs (inadvertently "constructing" an object by calling
the wrong constructor in the base class, one which does not properly
construct the derived object), and so on.
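
Since RAII is the feature singled out above, a minimal C++ sketch may help show what scope-based lifetime buys you. The `ScopedResource` type and its event log are hypothetical, purely for illustration: acquisition happens in the constructor and release in the destructor, with no explicit cleanup call anywhere.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical resource type: acquires in the constructor and is
// guaranteed to release in the destructor when the scope ends.
class ScopedResource {
public:
    explicit ScopedResource(std::vector<std::string>& log) : log_(log) {
        log_.push_back("acquire");
    }
    ~ScopedResource() {
        log_.push_back("release");  // runs automatically at end of scope
    }
private:
    std::vector<std::string>& log_;
};

// Returns the event log produced by letting a ScopedResource
// go out of scope normally.
std::vector<std::string> demo() {
    std::vector<std::string> log;
    {
        ScopedResource r(log);  // logs "acquire"
        log.push_back("use");
    }                           // destructor fires here, logs "release"
    return log;
}
```

The release step runs even if the scope is left via an exception or early return, which is exactly what a language without destructors cannot guarantee without explicit cleanup code at every exit point.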

Naturally, objects in Objective-C are not value-based like in C++, meaning
that every object has to be allocated dynamically (thus increasing time and
memory overhead), as is the fad in almost every other object-oriented
language (although I hear C# is better at this than most).
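
To make the value-semantics point concrete, here is a small C++ sketch (the `Point` type is made up for illustration): objects live on the stack with automatic storage and are copied by value, with no `new`, no heap allocation and no reference counting involved.

```cpp
#include <cassert>

// A small value type: automatic storage, copied by value.
struct Point {
    int x, y;
};

// p is an independent copy; mutating it cannot affect the caller's object.
Point translate(Point p, int dx, int dy) {
    p.x += dx;
    p.y += dy;
    return p;
}

int demo_sum() {
    Point a{1, 2};                   // no dynamic allocation anywhere
    Point b = translate(a, 10, 10);  // a is passed and returned by value
    return a.x + a.y + b.x + b.y;    // a is unchanged: 1 + 2 + 11 + 12
}
```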

I know nothing about C#, but from a feature point of view it doesn't
sound too bad. One of its major problems from an implementation point of
view, however, is that it relies heavily on an efficient optimizing (i.e.
JIT-compiling) runtime environment if you want any efficiency. Not an
insurmountable problem by any means, though.
 
MikeP

A said:
Microsoft thinks C++ is obsolete and C# is "the future". Apple thinks
the same for Objective-C.

IMO this is all nonsense.

What do you think about this? How much "better" are they really? And what
about C++0x?

Use C# when you can, use C++ when you still have to.
 
Paul

A said:
Microsoft thinks C++ is obsolete and C# is "the future". Apple thinks the
same for Objective-C.

Where did you read this? I don't think this is the general case.
IMO this is all nonsense.

What do you think about this? How much "better" are they really? And what
about C++0x?

I don't think they are directly comparable. When discussing this I tend to
think of application programming. In a large-scale application such as a
video game or video editing software, C# would be used at a higher level
than C++, perhaps implementing a GUI, popups, and handling mouse clicks,
keystrokes etc., with C++ being used for any video or image processing or
any under-the-hood large-scale or speed-critical processing.

C++ is not obsolete; almost all non-trivial modern applications are
programmed in C++. More recently it's much easier to implement the
non-speed-critical parts with interpreted languages, because these languages
seem to be more productive in those areas.
C++ is a language for more specialised programming, such as device driver
or video card programming. I think to program in these speciality niches
you need a lot of knowledge of the specialised subject; knowledge of the
programming language comes as a secondary skill.
For the average Joe I think C# is very good, but I would prefer to learn
Java because it's more portable and it's more of a distinctly different
language. If you already know C++ it's hard to also know a very similar
(syntactically) language such as managed C++ or C#, unless you use it every
day. I think with Java there is enough of a difference, even if it's only in
the name of the language, to keep a clear distinction between the two.
 
Alf P. Steinbach /Usenet

* MikeP, on 17.04.2011 18:57:
Use C# when you can, use C++ when you still have to.

Certainly good advice from a programmer effectiveness perspective.

But I think I'd hate to see more of the "not-small" garbage collected programs
infesting the PC environment. This is not provable (it is perhaps not even
measurable in any consistent way!), but right now on my old machine, the newer
versions of Thunderbird, Firefox and Opera, not to mention Windows Update (which
is almost exotic in how many ways it just acts like malware, and in its sheer
extreme inefficiency), bog things down to a crawl, a snail's pace. I think what's
going on is a subtlety that I call *the stretch effect* (he he, I guess some
folks will say that it is also a stretch as a hypothesis):

* One garbage collector based process is fine. As it allocates more
virtual memory than the system is comfortable with, things slow down
and the garbage collector kicks in, fixing it. Anyway, you don't get
into that situation unless it's necessary for the program.

* Two or more garbage collector based processes are Bad. As they
happily allocate more and more virtual memory the OS just obliges,
providing virtual memory at higher and higher cost (the stretching),
and goes into *thrashing* behavior. In general the processes never get
to the point where their garbage collectors could have fixed things.

One problem is that each process does not see the *cost* of its allocation(s).

And another problem is that even if it did, and took that into account, one
would still have the problem of the Tragedy Of The Commons. In this scenario a
process that behaves nicely for the common good will apparently perform worse
than a greedy egoist process that just allocates for its own good.

I think a solution would have to involve cost measures, user choice and OS
policing -- punishing the greedy ego processes, unless told otherwise by the user.

But we don't have that yet. And perhaps never will. And if we did then with
costs and policing and such it would sort of introduce politics. Do you want a
communist, social-democratic or capitalist (or other) society of processes? They
compete for global resources; they can do harm, or good; they need to be
organized in some way... Me, I want to be the benevolent dictator of my PC. I do
not want my OS or other software to cut me out of the loop and make choices.

Anyway, I think it would not be nice to the users to heap more garbage collector
based programs onto the PC scene (point: Microsoft has made it legally very
difficult to publish any performance measurements of .NET and .NET
applications). And yes, I'm aware that that's apparently in conflict with the
goal of making programs scriptable. I don't know any good solution to that.


Cheers,

- Alf (subjective)
 
robertwessel2

Alf P. Steinbach /Usenet said:
* Two or more garbage collector based processes are Bad. As they
happily allocate more and more virtual memory the OS just obliges,
providing virtual memory at higher and higher cost (the stretching),
and goes into *thrashing* behavior. In general the processes never get
to the point where their garbage collectors could have fixed things.
...


At least with .NET, the runtime is aware of both the application's
memory usage and system-wide usage, and it will GC when physical
memory starts getting low, even if the individual processes are still
below their thresholds. Nor does it (usually) set the process
thresholds to values that are vastly higher than their actual memory
usage (as determined by the prior GC). And it varies the process
thresholds by system load as well.

It doesn't avoid all the issues, but the scenario you described, two
processes each allocating a whole system's worth of memory because of
delayed GC, pretty much doesn't happen.
 
Martin B.

Alf P. Steinbach /Usenet said:
* One garbage collector based process is fine. ...

* Two or more garbage collector based processes are Bad. ...

One problem is that each process does not see the *cost* of its
allocation(s).
...

Well, I guess it doesn't really require GC to see this effect.

Programs written in VM/scripted languages do in fact take up more
resources than programs written in native/compiled languages.

(Not that *I* measured it, and there are certainly examples to the
opposite, and theoretical scenarios where there's no difference, but I
guess we can agree that for the vast majority of programs my statement
holds, yes?)

Now, as you say, this doesn't matter at all if you have one (or a few) such
programs, but when "all" programs on a machine are VM/script/GC based
programs, I expect this to have a huge impact on resource consumption,
performance and perceived performance.

I have 4 (four) "applications" open as I write this on my Windows XP
machine. There are 78 processes running, of which only three (according
to Process Explorer) are .NET/managed apps.

I certainly -- don't :) -- want to know what kind of crawl I'd see on
this machine if a significant number of these processes were VM/GC
processes.

For a single application it makes sense to use a managed/scripted
approach, but "the industry" as a whole could ask itself whether we are
using the resources ("we" and) the users have to good effect if too much
stuff moves to VM/GC.

002,
Martin
 
Juha Nieminen

Alf P. Steinbach /Usenet said:
But I think I'd hate for more of the "not-small" garbage collected programs
infesting the PC environment.

It's not that bad with apps that you simply run, use and then close.
It becomes a lot worse with those apps which launch at startup and are
kept in the background to perform whatever tasks (such as handling the
mouse or other controllers in special ways, or interfacing with the
display driver, or whatever).

Suppose that you are, for example, playing a game which is putting all
of the available RAM to good use. This means that most of the memory
of those background apps will be swapped to disk at some point, especially
if the game is taking up most of the physical RAM.

Now the GC engine of one of those background apps suddenly just decides
to run a sweep. Not that it *needs* to free anything. It just *checks* if
it might have to free something. Thus it sweeps through the allocated
memory of the app, sees that nothing needs freeing, and thus does nothing.

The problem? Remember: the memory allocated by the app was swapped to
disk. In order for the GC to sweep through it, it has to be swapped back
into physical RAM. Heavy disk thrashing results, and for *no good reason*
(because the GC ended up doing nothing useful).
 
MikeP

Juha said:
It's not that bad with apps that you simply run, use and then close.
It becomes a lot worse with those apps which launch at startup and are
kept in the background to perform whatever tasks (such as handling the
mouse or other controllers in special ways, or interfacing with the
display driver, or whatever).

Those are the kinds of things on a desktop PC that fit the "when you have
to use C++, do so" "rule". I wonder how much of the Windows OS MS is
considering moving from C++ to C#. I also wonder how much of .net is C#,
or how much will be. Eventually, I'll bet that C++ at MS will go away and
C# will evolve as it has to for that to happen (no sense in using 2
languages if you can use 1). The two languages are closer to each other
than they are dissimilar IMO, but C# has more compelling features than
C++, not the least of which is that it isn't burdened by its past,
allowing it to evolve while C++ remains stuck in the mud.
 
Juha Nieminen

MikeP said:
Those are the kinds of things on a desktop PC that fit the "when you have
to use C++, do so" "rule". I wonder how much of the Windows OS MS is
considering moving from C++ to C#. I also wonder how much of .net is C#,
or how much will be. Eventually, I'll bet that C++ at MS will go away and
C# will evolve as it has to for that to happen (no sense in using 2
languages if you can use 1). The two languages are closer to each other
than they are dissimilar IMO, but C# has more compelling features than
C++, not the least of which is that it isn't burdened by its past,
allowing it to evolve while C++ remains stuck in the mud.

The two major problems I see in C# are that it relies on being GC'd and
also, AFAIK, on being interpreted/JIT-compiled. (I don't know too much
about the technology behind it, but AFAIK many of the optimizations
related to e.g. generics rely heavily on runtime JIT optimizations which
cannot be performed at compile time.)
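
For contrast, C++ generic code needs no runtime specialization at all: each use of a template is instantiated as a separate, fully typed function at compile time. A minimal sketch (the `max_of` function is made up for illustration):

```cpp
#include <cassert>

// Templates are instantiated at compile time: each distinct T below
// produces its own fully compiled function, so no JIT is needed to
// specialize generic code for int versus double.
template <typename T>
T max_of(T a, T b) {
    return a < b ? b : a;
}

int demo_int()    { return max_of(3, 7); }      // instantiates max_of<int>
double demo_dbl() { return max_of(2.5, 1.5); }  // instantiates max_of<double>
```

This is the flip side of the trade-off discussed here: C++ pays with longer compiles and larger binaries, while C#'s generics (reportedly) lean on the JIT to get comparable specialization at run time.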

That just sounds to me like C# isn't a very good language for
e.g. low-level device drivers or the core kernel. (If nothing else, the
runtime environment / bytecode interpreter / JIT-compiler would have to
be a compiled native binary, which obviously cannot be in C# if C# cannot
be compiled into a native binary with no dependency on a runtime
environment.)

Also, as you say, it doesn't sound like a good choice for apps and
drivers that need to be constantly run in the background.
 
Richard Kettlewell

Juha Nieminen said:
The two major problems I see in C# are that it relies on being GC'd and
also, AFAIK, on being interpreted/JIT-compiled. (I don't know too much
about the technology behind it, but AFAIK many of the optimizations
related to eg. generics rely heavily on runtime JIT optimizations which
cannot be performed at compile time).

That just sounds to me like it doesn't make C# a very good language for
eg. low-level device drivers or the core kernel. (If nothing else, the
runtime environment / bytecode interpreter / JIT-compiler would have to
be a compiled native binary, which obviously cannot be in C# if C# cannot
be compiled into a native binary with no dependency on a runtime
environment.)

It can be compiled to native code:
http://msdn.microsoft.com/en-us/library/6t9t5wcf(v=vs.80).aspx
 
MikeP

Juha said:
That just sounds to me like it doesn't make C# a very good language
for e.g. low-level device drivers or the core kernel. ...

Also, as you say, it doesn't sound like a good choice for apps and
drivers that need to be constantly run in the background.

My thought process above was that C# would evolve to meet those
requirements. The work looks basically done and just needing a
"migration" from C++/CLI. Need GC? Fine, use gcnew and the handle (^)
syntax. Need fine-grained control of memory? Fine, use plain new.
Although in C# 'new' would have to stay the GC thing and something else
would have to be created for the non-GC stuff.

Anyway, it seems that the work done with C++/CLI would lend itself
directly toward making C# "the bomb". On forgoing the VM, well... that
will take a bit more thought (I mean for me it would... for someone way
up on the technology, they know the answers pronto). Then there's that
cross-language thing that relies on the VM. Well, I didn't say it would
be easy!
 
