Why C++ Is Not “Back”

Discussion in 'C++' started by Lynn McGuire, Dec 4, 2012.

  1. Lynn McGuire

    Lynn McGuire Guest

    Why C++ Is Not “Back” by John Sonmez
    http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/

    "I love C++."

    "C++ taught me how to really write code."

    "Back in the day I would study the intricacies of
    the language, Standard Template Library, and all
    the nuances of memory management and pointer
    arithmetic."

    "Those were some seriously good times. I remember
    reading Scott Meyers Effective C++ book series
    over and over again. Each time I would learn
    something new or grasp more of how to use C++."

    "I’m saying all this just to let you know that I
    don’t hate C++. I love C++."

    "There are plenty of excellent developers I know
    today that still use C++ and teach others how to
    use it and there is nothing at all wrong with that."

    "So what is the problem then?"

    His list of 36 C++ hiring questions is awesome.
    He might nail me on a third of them.

    Lynn
    Lynn McGuire, Dec 4, 2012
    #1

  2. Lynn McGuire

    Rui Maciel Guest

    Re: Why C++ Is Not “Back”

    Lynn McGuire wrote:

    > Why C++ Is Not “Back” by John Sonmez
    > http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/
    >
    > "I love C++."
    >
    > "C++ taught me how to really write code."
    >
    > "Back in the day I would study the intricacies of
    > the language, Standard Template Library, and all
    > the nuances of memory management and pointer
    > arithmetic."
    >
    > "Those were some seriously good times. I remember
    > reading Scott Meyers Effective C++ book series
    > over and over again. Each time I would learn
    > something new or grasp more of how to use C++."
    >
    > "I’m saying all this just to let you know that I
    > don’t hate C++. I love C++."
    >
    > "There are plenty of excellent developers I know
    > today that still use C++ and teach others how to
    > use it and there is nothing at all wrong with that."
    >
    > "So what is the problem then?"
    >
    > His list of 36 C++ hiring questions is awesome.
    > He might nail me on a third of them.


    I suspect the author wrote that post to troll C++ users and increase his
    blog's traffic, as it starts with an inflammatory assertion and then proceeds
    to say nothing at all. Does the author even have any relevant experience
    with C++?

    Regarding his interviewing questions, I've seen more knowledgeable questions
    about C++ in an intro to programming course given to 1st-year civil
    engineering students, and that course covered both C++ and Matlab.

    The blog post was also quoted by Herb Sutter in his blog, and from the
    article's lack of content I suspect that it has more to do with its
    enthusiastic praise of C# at the expense of C++ than with anything else.


    Rui Maciel
    Rui Maciel, Dec 4, 2012
    #2

  3. Lynn McGuire

    Jorgen Grahn Guest

    Re: Why C++ Is Not “Back”

    On Tue, 2012-12-04, Rui Maciel wrote:
    > Lynn McGuire wrote:
    >
    >> Why C++ Is Not “Back” by John Sonmez
    >> http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/
    >>
    >> "I love C++."
    >>
    >> "C++ taught me how to really write code."

    ....

    > I suspect the author wrote that post to troll C++ users to increase his
    > blog's traffic, as it starts with an inflamatory assertion and then proceeds
    > to say nothing at all.


    Not nothing at /all/, but it's hard to get a grip on it -- especially
    if you don't know or care about that Microsoft language which seems to
    be his frame of reference (C#).

    His rhetorical style is also annoying:

    "Don't get me wrong -- I love C++! But it's useless crap! But don't
    get me wrong -- C++11 is a great thing! But it's useless! But don't
    get me wrong -- you should learn C++! Although it's completely
    pointless unless you're herding goats in Somalia! But don't get me
    wrong! ... etc."

    Perhaps that's just part of the trolling.

    > enthusiastic praise of C# at the expense of C++


    Speaking of that, his main theme is that C++ is *so* huge and complex
    compared to C# and Java. I don't know either of those two, but
    perhaps someone who does can comment: is that really true today?

    I guess Java was small in ~1997 when I looked at it, but that was when
    they still had no standard containers or other things we now take for
    granted.

    /Jorgen

    --
    // Jorgen Grahn <grahn@ Oo o. . .
    \X/ snipabacken.se> O o .
    Jorgen Grahn, Dec 5, 2012
    #3
  4. Lynn McGuire

    BGB Guest

    Re: Why C++ Is Not “Back”

    On 12/4/2012 6:49 PM, Jorgen Grahn wrote:
    > On Tue, 2012-12-04, Rui Maciel wrote:
    >> Lynn McGuire wrote:
    >>
    >>> Why C++ Is Not “Back” by John Sonmez
    >>> http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/
    >>>
    >>> "I love C++."
    >>>
    >>> "C++ taught me how to really write code."

    > ...
    >
    >> I suspect the author wrote that post to troll C++ users to increase his
    >> blog's traffic, as it starts with an inflamatory assertion and then proceeds
    >> to say nothing at all.

    >
    > Not nothing at /all/, but it's hard to get a grip on it -- especially
    > if you don't know or care about that Microsoft language which seems to
    > be his frame of reference (C#).
    >
    > His rhetorical style is also annoying:
    >
    > "Don't get me wrong -- I love C++! But it's useless crap! But don't
    > get me wrong -- C++11 is a great thing! But it's useless! But don't
    > get me wrong -- you should learn C++! Although it's completely
    > pointless unless you're herding goats in Somalia! But don't get me
    > wrong! ... etc."
    >
    > Perhaps that's just part of the trolling.
    >
    >> enthusiastic praise of C# at the expense of C++

    >
    > Speaking of that, his main theme is that C++ is *so* huge and complex
    > compared to C# and Java. I don't know either of those two, but
    > perhaps someone who does can comment: is that really true today?
    >
    > I guess Java was small in ~1997 when I looked at it, but that was when
    > they still had no standard containers or other things we now take for
    > granted.
    >


    Java is smaller / "simpler" than C++, in the sense that it would be a
    little easier to write a compiler for it (basically, they took a C++
    like language, stripped it down pretty much as far as they could, and
    started gradually adding things back in).

    how much difference this makes to programmers is debatable (mostly I
    think that many language designers assume "compiler complexity" ==
    "ability of human programmers to understand it" or something).


    a big drawback of the language is that it is actually fairly
    awkward/cumbersome to write code in it (IOW: write code much beyond
    invoking library facilities to do something).

    it has a number of arbitrary restrictions which are IMO very annoying,
    so I generally prefer not to use the language if it can be avoided.

    similarly, for writing "actual code", I have typically had to write
    considerably more code than would be needed to address the same problem
    in C. (and, most of the rest, is people invoking library functionality
    and then thinking that they actually wrote something...).



    C# is like Java, but with a slightly more C++ like syntax, and some of
    the stuff Java ripped out having been re-added (like "structs" and
    operator overloading, but otherwise staying largely Java-like).

    for the parts that matter, WRT the core language, there isn't really a
    huge difference. C# doesn't really save effort over C++, and many tasks
    actually require more typing and effort (for example, declaring a local
    array). as well, there are some things in C and C++ that can't really be
    done readily in C#, requiring them to be faked.


    arguably GC makes a difference productivity-wise, but then again a
    person can just as easily use the Boehm GC or similar with C or C++ anyways.

    personally, I find C# a bit more annoying than either C or C++, in that
    they put in lots of little nit-picky error conditions, which seem like
    they serve little real purpose besides trying to promote a particular
    development style (and/or hunt down certain types of bugs). many of them
    would IMO make more sense as warnings.


    things which are in C#'s favor:
    IntelliSense works in Visual Studio (nevermind if IntelliSense is
    sometimes very annoying, mostly because it gets in the way sometimes of
    "just being able to quickly type out some code");
    produced CIL code can be used by either a 32 or 64 bit process (vs C++
    and C++/CLI, where the produced binaries/DLLs are limited to a single
    target);
    ....

    the IntelliSense mostly gives an advantage for "people just sort of
    bumping along with little real idea what they are doing". (it sort of
    serves a role as a kind of half-assed substitute for documentation or
    reading header files in an auto-complete like form).

    it does sort of work better when writing code against an API which
    otherwise lacks any form of documentation (personally, I prefer
    documentation more, or failing this, at least some header-files or
    similar to look at). otherwise, a person can buffer relevant information
    in memory or something. (FWIW: I don't think this requires anything like
    super-human memory skills or something...).


    major annoyances (with IntelliSense):
    it partly hijacks the arrow keys;
    it "aggressively" expands things (TAB, ENTER, and SPACE, all serve to
    auto-complete things), making it really annoying if the thing you are
    trying to type is a prefix of the thing you want;
    if trying to just quickly type-out some code, it will often start
    expanding ones local variable-names to random unrelated stuff (granted,
    debatably this would be less of an issue for a person not using
    single-letter variable names, but I am lazy and don't want to have to
    deal with longer variable names if I don't have to);
    ....


    for what stuff I do, I generally prefer to stay in native land, though,
    admittedly, I am more of a plain C developer than a C++ developer (my
    main project is mixed-language, including C, C++, and my own custom
    language). there is some C#, but this is generally
    in the form of custom Paint.NET plugins and similar (couldn't get
    C++/CLI plugins to work with PDN, so ended up writing C# ones...).


    a fair chunk of the C code in the project is not exactly "idiomatic" /
    "orthodox" though (much of it is somewhat tooled up, does an OO-in-C
    thing, uses garbage-collection, ...). (lots of stupid arguments have
    been had over all this, but yes, this stuff can be done, and yes, I
    already do stuff this way...). (and, it is not my fault if things like
    structs and function pointers are apparently beyond the abilities of
    mere humans to understand, grr...). (I make fairly heavy use of both
    structs and function pointers).


    my personal opinion is mostly that claims of significant
    language-specific productivity differences are likely exaggerated.

    a lot more has to do with libraries and development style than it does
    with language.

    libraries aside, it isn't really too far outside the realm of
    possibility to copy/paste/rewrite from one language to another
    (including to/from C as well).


    or such...
    BGB, Dec 5, 2012
    #4
  5. Lynn McGuire

    Guest

    Re: Why C++ Is Not “Back”

    On Wednesday, December 5, 2012 4:11:10 AM UTC+1, Luca Risolia wrote:
    > On 05/12/2012 01:49, Jorgen Grahn wrote:
    >
    > > Speaking of that, his main theme is that C++ is *so* huge and complex
    > > compared to C# and Java. I don't know either of those two, but
    > > perhaps someone who does can comment: is that really true today?
    >
    > With regard to the language itself, Java has always been behind compared
    > to C++. As a simple example, have a look at what you had (or still have)
    > to write in Java to safely use a stream:
    >
    > try {
    >     r = new Reader(...);
    >     // do something
    > } catch (Exception e) {
    >     //..
    > } finally {
    >     try {
    >         if (r != null) // cannot omit != null, no conversion op.
    >             r.close();
    >     } catch (Exception e) {
    >         //..
    >     }
    > }

    You should put construction of the reader outside the try, and then you have no "if" in the finally below. I see people do this all the time; why?

    > That is *complex*. Only with Java 7 did they finally realize that the
    > above code is too cumbersome and patched the language by inventing (yet)
    > another construct called "try with resource" (a sort of RAII), which in
    > turn introduces new concepts such as "AutoCloseable"...


    That, however, is absolutely true. RAII in C++ beats try/finally any day of the week. The .NET world uses "using" blocks and IDisposable. But properly implementing IDisposable is __so much__ dumb code that many people just don't do it; instead they mandate that if a type is IDisposable, it's the user's responsibility to "correctly" call Dispose() on it (meaning, wrap it in "using" and write their own IDisposable).

    Actually... What Java and .NET code puts up with is not complex. It's just __bloat__. ;-)

    On the other hand, the cognitive load (not code!) and the correctness of design required to properly handle dynamic memory (the heap) in C++ are orders of magnitude bigger than in the GC world ;-).
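
    The RAII point can be shown with a minimal C++ sketch (my own illustration, not from the blog post or the thread; `Handle` and `open_handles` are invented names). The destructor releases the resource on every exit path, normal return or exception, so no finally block is needed:

    ```cpp
    // Minimal RAII sketch: acquisition in the constructor, release in the
    // destructor, so cleanup happens on every exit path with no finally.
    #include <cassert>
    #include <stdexcept>

    int open_handles = 0;   // stands in for a real resource (file, socket, ...)

    class Handle {
    public:
        Handle()  { ++open_handles; }            // acquire in the constructor
        ~Handle() { --open_handles; }            // release in the destructor
        Handle(const Handle&) = delete;          // non-copyable, like a real handle
        Handle& operator=(const Handle&) = delete;
    };

    void use(bool fail) {
        Handle h;                                // acquired here
        if (fail) throw std::runtime_error("boom");
        // no close(), no finally: ~Handle() runs whether we return or throw
    }
    ```

    Whether `use(false)` returns normally or `use(true)` throws, `open_handles` ends up back at 0; getting the same guarantee in pre-Java-7 Java required the nested try/finally shown above.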

    Goran.
    , Dec 5, 2012
    #5
  6. Lynn McGuire

    Nobody Guest

    Re: Why C++ Is Not “Back”

    On Wed, 05 Dec 2012 00:49:53 +0000, Jorgen Grahn wrote:

    > Speaking of that, his main theme is that C++ is *so* huge and complex
    > compared to C# and Java. I don't know either of those two, but perhaps
    > someone who does can comment: is that really true today?


    Java has a huge standard library, but that's because the number of
    third-party libraries available for Java is a tiny fraction of what's
    available for C or C++.

    The language itself is still basically C++ with all the sharp edges
    rounded off. No templates (Java now has generics, but they're much more
    limited than templates), no multiple inheritance, no operator overloading,
    no bare pointers, etc.

    Much of C++'s complexity stems from the way that the various features
    interact, so comparing the number of feature "bullet points" doesn't
    really give an accurate picture of the relative complexity.
    Nobody, Dec 5, 2012
    #6
  7. Lynn McGuire

    BGB Guest

    Re: Why C++ Is Not “Back”

    On 12/5/2012 3:23 AM, Juha Nieminen wrote:
    > Lynn McGuire <> wrote:
    >> Why C++ Is Not “Back” by John Sonmez
    >> http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/

    >
    > Seems to be quite Windows-centric.
    >
    > "I held out for a long time trying to believe that all the investment I had
    > made in C++ was not lost, but it turned out that C# simplified things to
    > such a great degree that the extra power C++ gave me was not worth the
    > extra responsibility."
    >
    > That might be so... on Windows. C# isn't feasible on most other platforms
    > (either because support is poor, or outright missing.)
    >
    > I develop for the iPhone, Mac OS X and Windows as my payjob, and for Linux
    > as a hobby, and C# wouldn't be an option even if I wanted. C++ actually is.
    > (I can use the exact same C++ libraries on all of those platforms, and that
    > fact has been quite useful.)
    >
    > (Also, while I don't really know C#, I have hard time believing that it's
    > more portable than C++, even though that was one of his points. Sure, if
    > you avoid using any of Microsoft's system libraries you *might* get a
    > portable program... which probably amounts to little more than a command
    > line program, exactly like with C++.)
    >


    theoretically, it "can" be portable, but this generally means:
    using Mono on Linux, which is a bit of a step down from .NET on
    Windows in many areas, as well as there being API differences,
    differences regarding interfacing with native code, ... (a person has to
    be mindful of the differences if they want to write code which works on
    both).

    other targets may involve one of the more portable interpreters
    (typically written in C), which typically provide a minimal subset of
    the functionality, at interpreter-like performance (IOW: slow). (a few
    claim to "JIT" the code, but actually just convert it to
    indirect-threaded-code which is run in C, which is still basically an
    interpreter in my definition, vs a JIT which actually requires, you
    know, producing native code...). (*1)

    the level of library functionality mandated by the ECMA standard is
    actually smaller than that of the C library (or, for that matter, J2ME),
    consisting mostly of support-classes needed for the language itself to
    function (IOW: no console or file IO or similar).

    IIRC elsewhere, there is a cross-compiler to compile CIL into C code,
    which can then be compiled using a C compiler.

    generally, C and C++ are much more portable in terms of a more practical
    definition of portability.



    *1: I was digging around in one of these interpreters, and was not
    terribly impressed by its "JIT". basically, my existing script-VM
    (BGBScript VM) does basically the same thing, namely converting the IL
    into lists of structs holding function pointers, and I still call the
    thing an interpreter.

    I also have an "experimental" interpreter (still based on threaded code)
    which should be able to go faster, but isn't really complete enough to
    be used (and is a low priority at present). in my tests thus far
    (generally involving tight loops and arithmetic), it performs about 8x
    slower than native code.

    this interpreter, unlike my other one, uses an architecture more
    directly inspired by the Dalvik VM (IOW: a statically-typed
    register-based interpreter, rather than a stack machine), with the idea
    being that the other VM would translate the stack-machine bytecode into
    the IR this interpreter uses, in place of producing its own
    threaded-code directly.

    the main reason for using a threaded-code interpreter (despite the worse
    performance) is that it is easier to target to various CPU architectures
    and maintain than a JIT (a lot of this is what killed my last JIT).

    eventually, I may have another JIT, but mostly work on the premise that
    the hop from a statically-typed register IR to native code is much
    shorter than going directly from a dynamically-typed stack-machine to
    native code (my last JIT did this).


    but, yeah, as-is, my scripting language is portable as well:
    its code can run wherever I can build my scripting VM.
    BGB, Dec 5, 2012
    #7
  8. Lynn McGuire

    W Karas Guest

    On Tuesday, December 4, 2012 12:39:28 PM UTC-5, Lynn McGuire wrote:
    > Why C++ Is Not “Back” by John Sonmez
    > http://simpleprogrammer.com/2012/12/01/why-c-is-not-back/
    >
    > "I love C++."
    >
    > "C++ taught me how to really write code."
    >
    > "Back in the day I would study the intricacies of
    > the language, Standard Template Library, and all
    > the nuances of memory management and pointer
    > arithmetic."
    >
    > "Those were some seriously good times. I remember
    > reading Scott Meyers Effective C++ book series
    > over and over again. Each time I would learn
    > something new or grasp more of how to use C++."
    >
    > "I’m saying all this just to let you know that I
    > don’t hate C++. I love C++."
    >
    > "There are plenty of excellent developers I know
    > today that still use C++ and teach others how to
    > use it and there is nothing at all wrong with that."
    >
    > "So what is the problem then?"
    >
    > His list of 36 C++ hiring questions is awesome.
    > He might nail me on a third of them.
    >
    > Lynn

    Other reasons to use (hence to learn) C++ include:

    o Multiple inheritance with inheritance of implementation as well as interface.

    o You don't think templates are something to never use, or only use as a last resort.

    o You prefer to look at class definitions with the implementations of large member functions out-of-line.

    o You think that a language that is guided by someone living on a professor's salary at a state university is less likely to be twisted by greed than one tightly controlled by a large for-profit corporation.

    o You like having a standardized macro preprocessor available.
    W Karas, Dec 5, 2012
    #8
  9. This is what the Google coding guidelines for C++ say about multiple inheritance:

    Multiple Inheritance

    Only very rarely is multiple implementation inheritance actually useful. We allow multiple inheritance only when at most one of the base classes has an implementation; all other base classes must be pure interface classes tagged with the Interface suffix.
    kurt krueckeberg, Dec 6, 2012
    #9
  10. Lynn McGuire

    BGB Guest

    On 12/5/2012 10:00 PM, kurt krueckeberg wrote:
    > This is what the Google coding guidelines for C++ say about multiple inheritance:
    >
    > Multiple Inheritance
    >
    > Only very rarely is multiple implementation inheritance actually useful. We allow multiple inheritance only when at most one of the base classes has an implementation; all other base classes must be pure interface classes tagged with the Interface suffix.
    >


    yeah, 'tis an issue...

    how often does using MI actually make sense?...

    I suspect though that this is a reason for MI being generally absent
    from many/most newer languages. like, it seemed like a good idea early
    on, but tended not to be worth the added funkiness and
    implementation complexity, and the single-inheritance + interfaces model
    is mostly-good-enough but is much simpler to implement.

    actually, it is sort of like inheritance hierarchies:
    many people seem to imagine big/complex hierarchies more similar to that
    of a taxonomy (with many levels and a "common ancestor" for pretty much
    everything);
    in practice, I have rarely seen cases where more than 2 or 3
    levels are needed (and very often don't have *any* parent class).


    or such...
    BGB, Dec 6, 2012
    #10
  11. Lynn McGuire

    Rui Maciel Guest

    Re: Why C++ Is Not “Back”

    BGB wrote:

    > yeah, 'tis an issue...
    >
    > how often does using MI actually make sense?...


    Whenever someone wishes to implement mixins, or any other form of separating
    features into dedicated classes through inheritance.

    Chaining inheritances just to get a class that inherits from multiple base
    classes is silly and needlessly verbose if the language supports multiple
    inheritance.
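
    A minimal sketch of the mixin idea (the class names here are invented for illustration, not from the thread): each feature lives in its own small base class, and a concrete type picks up several of them at once in one flat inheritance list, with no chaining needed.

    ```cpp
    // Hypothetical mixin sketch: independent features as small base classes,
    // composed via multiple inheritance.
    #include <cassert>
    #include <string>

    struct Named {                        // mixin: contributes a name field
        std::string name;
    };

    struct Serializable {                 // mixin: contributes a serialization API
        virtual std::string serialize() const = 0;
        virtual ~Serializable() = default;
    };

    // One class, two independent features, one flat inheritance list.
    struct Document : Named, Serializable {
        std::string body;
        std::string serialize() const override { return name + ":" + body; }
    };
    ```

    A Document can be passed wherever a Serializable is expected while still carrying its Named data; in a single-inheritance language the same effect needs an inheritance chain or delegation boilerplate, which is the verbosity Rui is pointing at.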


    Rui Maciel
    Rui Maciel, Dec 6, 2012
    #11
  12. Lynn McGuire

    Guest

    Re: Why C++ Is Not “Back”

    On Wednesday, December 5, 2012 7:36:26 PM UTC+1, Luca Risolia wrote:
    > On 05/12/2012 10:05, wrote:
    >
    > > On Wednesday, December 5, 2012 4:11:10 AM UTC+1, Luca Risolia wrote:
    > >> try {
    > >>     r = new Reader(...);
    > >>     // do something
    > >> } catch (Exception e) {
    > >>     //..
    > >> } finally {
    > >>     try {
    > >>         if (r != null) // cannot omit != null, no conversion op.
    > >>             r.close();
    > >>     } catch (Exception e) {
    > >>         //..
    > >>     }
    > >> }
    > >
    > > You should put construction of the reader outside the try, and you
    > > then have no "if" in the finally below. I see people do this all the
    > > time, why?
    >
    > Because sometimes you want the finally block to get executed even if the
    > constructor throws.


    If you want that, then you can do:

    try
    {
        TYPE instance = new TYPE(...);
        try
        {
            use(instance);
        }
        finally
        {
            instance.close(); // etc.
        }
    }
    finally
    {
        // Your "additional" code here.
    }

    True, this is longer in pure lines of code, but IMO
    the intention is clearer than with an if.

    > Furthermore, if you put the statement outside any
    > try, then the method must specify all the checked exceptions that the
    > constructor can throw.


    Well, for that, same as above, really: you wrap the whole of the function into a try-catch and rethrow your exception wrapping the original. I would further argue that you have to do that anyhow. Say that the code is (considering the "standard" Java "wrap/rethrow" approach to checked exceptions):

    void f() throws myException
    {
        TYPE instance = null; // declared outside so the catch can see it
        try
        {
            instance = new TYPE(...);
            use(instance);
            instance.close();
        }
        catch (Exception e)
        {
            try
            {
                if (instance != null) instance.close();
            }
            catch (Exception e2)
            {
                throw new myException(msg, e2);
            }
            throw new myException(msg, e);
        }
    }

    with what I propose, you get

    void f() throws myException
    {
        try
        {
            TYPE instance = new TYPE(...);
            try
            {
                use(instance);
            }
            finally
            {
                instance.close();
            }
        }
        catch (Exception e)
        {
            throw new myException(msg, e);
        }
    }

    which is not much different. In fact, there's less duplicated code in the latter (only one call to instance.close, only one "rethrow").

    > I am not sure why the Java exception
    > specification is supposed to help the programmer write safer programs...


    Me neither. I think, because it's cumbersome, people dodge the question in various ways (e.g. catch(Exception) {/*swallow*/ }) and get poor end results anyhow ;-).

    Goran.
    , Dec 6, 2012
    #12
  13. Lynn McGuire

    Rui Maciel Guest

    Re: Why C++ Is Not “Back”

    kurt krueckeberg wrote:

    > This is what the Google coding guidelines for C++ say about multiple
    > inheritance:
    >
    > Multiple Inheritance
    >
    > Only very rarely is multiple implementation inheritance actually useful.
    > We allow multiple inheritance only when at most one of the base classes
    > has an implementation; all other base classes must be pure interface
    > classes tagged with the Interface suffix.


    Just for posterity, here's the link.

    http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Multiple_Inheritance


    Rui Maciel
    Rui Maciel, Dec 6, 2012
    #13
  14. Lynn McGuire

    Rui Maciel Guest

    Re: Why C++ Is Not “Back”

    Juha Nieminen wrote:

    > One could justly ask: Why?
    >
    > What exactly would the problem be in having, for example, optional
    > functions in an "interface", or "interfaces" where some of the functions
    > have a default implementation (as to avoid code repetition and help in
    > code reuse)?
    >
    > It just sounds like a stupid principle that causes more problems than
    > it avoids. It sounds like nanny-ruling, ie. let's build a set of rules
    > to try to make incompetent programmers make slightly less mistakes
    > (while making the lives of competent programmers more difficult for no
    > benefit.)


    Indeed.

    It's weird that the only remotely tangible justification for that policy
    was that someone within their organization found that feature useful
    "only very rarely".

    I suspect that the reason behind that nonsense might be that the people
    behind it only had a Java background and knew nothing about C++ besides that
    it supports OO. As they only spoke javanese, instead of actually learning
    C++, they might have opted to learn nothing and simply extend their Java
    practices to another language. Hence, as Java doesn't support multiple
    inheritance, they simply opted to stay away from that kind of witchcraft,
    and as Java supports inheriting multiple interfaces, they come up with the
    idiotic "all other base classes must be pure interface classes tagged with
    the Interface suffix". Awesome.

    It might end up being nothing of that sort, but they do look an awful lot
    like they are really putting in some effort to try to write Java code with
    C++, to the point they are even trying to shoehorn Java's limitations and
    idiosyncrasies onto their code. Maybe there's a reason they called that
    list "style guide" instead of "best practices".


    Rui Maciel
    Rui Maciel, Dec 6, 2012
    #14
  15. Lynn McGuire

    Rui Maciel Guest

    Re: Why C++ Is Not “Back”

    Juha Nieminen wrote:

    > Multiple inheritance is actually much, much more prevalent than people
    > seem to think.
    >
    > For some reason a good majority of programmers and programming languages
    > seem to think that so-called "interfaces" are something distinct from
    > multiple inheritance. As if a language had interfaces *instead* of MI.


    I suspect that that idea only prevails in the minds of those who were
    introduced to OO through Java, which happens to be a significant number of
    programmers who graduated from a CS or CS-related degree in the past decade
    or so. That would explain the multiple-inheritance phobia, and related
    misconceptions, and also the "interfaces are not really classes" mantra.


    Rui Maciel
    Rui Maciel, Dec 6, 2012
    #15
  16. Lynn McGuire

    Ian Collins Guest

    kurt krueckeberg wrote:

    ** Please wrap your lines **

    > This is what the Google coding guidelines for C++ say about multiple
    > inheritance:
    >
    > Multiple Inheritance
    >
    > Only very rarely is multiple implementation inheritance actually
    > useful. We allow multiple inheritance only when at most one of the
    > base classes has an implementation; all other base classes must be
    > pure interface classes tagged with the Interface suffix.


    I didn't realise iostreams were rarely used...

    --
    Ian Collins
    Ian Collins, Dec 6, 2012
    #16
  17. Lynn McGuire

    Öö Tiib Guest

    Re: Why C++ Is Not “Back”

    On Thursday, 6 December 2012 12:03:53 UTC+2, Juha Nieminen wrote:
    > BGB <> wrote:
    > > how often does using MI actually make sense?...

    >
    > Multiple inheritance is actually much, much more prevalent than people
    > seem to think.


    That is true. Something can have a lot of inborn capabilities, abilities,
    roles and responsibilities. People just are not capable of creating,
    maintaining and handling such beasts effectively, so they recognize that
    and avoid it.

    > For some reason a good majority of programmers and programming languages
    > seem to think that so-called "interfaces" are something distinct from
    > multiple inheritance. As if a language had interfaces *instead* of MI.


    There is an unfortunate kind of boxed-in thinking here. C# and Java have
    exactly the same thing, so pretending that their interfaces are
    oh-so-different is silly.

    Let me give an example: what we do (in C++, Java or C#) is technically
    that "class CodeSnippet *inherits* from the pure abstract class
    PrettyPrinting" and "overrides" the functions in it.

    That is quite complex way to tell that "objects of class CodeSnippet
    *have* ability to print their contents prettily". Yes, *has*
    PrettyPrinting functionality, what you mean "is-a" pretty printing?

    Maybe it is bad naming? Let's try a name like PrettyPrintable. That sounds
    like an interface that some external PrettyPrinter needs. If that is not
    the case then the name is misleading. PrettyPrinter itself also sounds
    like a silly name ... pretty printer indeed.

    > That makes no sense. Multiple inheritance is the exact same thing as
    > regular inheritance, with the only difference being that you are
    > inheriting from more than one base class. In a proper inheritance
    > hierarchy there's always a "is-a" relationship between derived class
    > and base class. The base class being an "interface" does not change this
    > fact at all.
    >
    > For example, let's say that you have a base class named "Widget" and an
    > interface named "EventListener". You now inherit a class named "Button"
    > from Widget and EventListener. Now your class Button "is-a" Widget, and
    > "is-a" EventListener. It follows perfectly the principles of object
    > oriented design and inheritance. Wherever an object of type Widget is
    > expected you can give an object of type Button. Wherever an object of
    > type EventListener is expected you can give an object of type Button.
    > That's what inheritance is.


    EventListener is a lucky example. Congrats. We can imagine that the Button
    not only has the ability to hear events but also uses it actively by
    constantly listening for events. The only catch is that a natural question
    like "can object x hear events?" looks quite cryptic in C++:

    if ( dynamic_cast<EventListener*>(&x) != nullptr )

    The majority of common "SomethingDoer" interface names, however, feel
    stretched and unnatural and do not fit the situation. The common
    "SomethingDoer" or "SomethingDoable" is named that way only because of
    the "is-a" relation.
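    (Juha's Widget/EventListener/Button example and the dynamic_cast query
    above fit together like this -- a minimal sketch using the names from
    the posts:)

    ```cpp
    #include <cassert>

    struct Widget { virtual ~Widget() = default; };

    struct EventListener {                  // pure "interface" class
        virtual void onEvent() = 0;
        virtual ~EventListener() = default;
    };

    struct Button : Widget, EventListener { // MI: is-a Widget AND is-a EventListener
        bool heard = false;
        void onEvent() override { heard = true; }
    };

    // The "can object x hear events?" question, spelled the cryptic C++ way.
    bool canHearEvents(Widget& x) {
        return dynamic_cast<EventListener*>(&x) != nullptr;
    }

    int main() {
        Button b;
        Widget w;
        assert(canHearEvents(b));   // Button is-a EventListener
        assert(!canHearEvents(w));  // plain Widget is not
        return 0;
    }
    ```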


    > Therefore the class Button above uses multiple inheritance: It has an
    > "is-a" relationship with more than one other class. That's the very
    > definition of multiple inheritance.
    >
    > The more relevant question is, however, whether the full MI offered by
    > C++ have any advantages over the crippled MI offered by the majority of
    > other OO languages? And the answer is yes.
    >
    > The most prominent advantage is that you can provide default implementations
    > for some or all of the functions in your "interface" class. This lessens
    > the need for code repetition and helps making reusable code. It also
    > burdens the derived class less because it doesn't force it to implement
    > things that could very well be implemented in the interface.


    That is indeed an advantage.

    > A very practical use of this is to have *optional* functions in your
    > "interface" class. In other words, functions that the derived class can
    > override but doesn't have to. The default implementation could be empty
    > or return a default value.


    Yes, that is also great.
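    (A sketch of such an *optional* function, reusing the PrettyPrinting and
    CodeSnippet names from the earlier post: the interface supplies a default
    implementation that the derived class may, but need not, override.)

    ```cpp
    #include <cassert>
    #include <string>

    // An "interface" class with one required and one optional function.
    struct PrettyPrinting {
        virtual std::string print() const = 0;                         // required
        virtual std::string prettyName() const { return "<unnamed>"; } // optional,
                                                                       // with a default
        virtual ~PrettyPrinting() = default;
    };

    struct CodeSnippet : PrettyPrinting {
        std::string print() const override { return "int x = 0;"; }
        // prettyName() is NOT overridden: the default implementation is used.
    };

    int main() {
        CodeSnippet s;
        assert(s.print() == "int x = 0;");
        assert(s.prettyName() == "<unnamed>"); // default kicked in
        return 0;
    }
    ```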

    > (Some other OO languages, such as Objective-C, have the concept of
    > optional functions in an interface (or protocol, as Obj-C calls it),
    > but you can't have them return any default values, and such optional
    > functions burden the implementation of whatever uses objects that
    > implement that interface because it has to explicitly check if the
    > object is implementing that precise function. Even though the function
    > may be optional, the interface cannot have a default implementation for it.)


    We can do the same in C++, for example by messing with pointers to member
    functions. We are lucky in that we can always default-initialize those.
    Unfortunately, pointers to member functions have the ugliest syntax.

    > And if a function in an interface can have a default implementation,
    > then it's also useful if the interface could also provide some member
    > variables that those default implementations could use for their work.


    Forwarding the actual work to a (possibly polymorphic and dynamically
    switchable) component is a great way to get rid of an overly big burden
    of responsibilities, but it involves some code bloat too.

    Again the reason lies in the way of thinking about interfaces. An
    interface as we know it is a set of abilities inherited by "is-a"
    birthright. Making it functionality that an object "has" or even "may
    have" (that it can gain, improve and lose during its lifetime) is
    thinking outside the box, so while it is often needed it is not so
    convenient to implement.
    Öö Tiib, Dec 6, 2012
    #17
  18. Lynn McGuire

    Jorgen Grahn Guest

    Re: Why C++ Is Not “Back”

    On Thu, 2012-12-06, kurt krueckeberg wrote:
    > This is what the Google coding guidelines for C++ says about multiple inheritance:
    >
    > Multiple Inheritance
    >
    > Only very rarely is multiple implementation inheritance actually
    > useful. We allow multiple inheritance only when at most one of the
    > base classes has an implementation; all other base classes must be
    > pure interface classes tagged with the Interface suffix.


    Google has some very questionable guidelines regarding C++ ...

    .... although IIRC Stroustrup (Design and Evolution) makes it clear
    that multiple inheritance is in the language because people could
    come up with real-life examples of problems where it made good sense
    -- and explicitly /not/ because it's something everyone would need
    often, because it isn't.

    I've never used it myself.

    /Jorgen

    --
    // Jorgen Grahn <grahn@ Oo o. . .
    \X/ snipabacken.se> O o .
    Jorgen Grahn, Dec 6, 2012
    #18
  19. Lynn McGuire

    BGB Guest

    Re: Why C++ Is Not “Back”

    On 12/6/2012 12:58 AM, Rui Maciel wrote:
    > BGB wrote:
    >
    >> yeah, 'tis an issue...
    >>
    >> how often does using MI actually make sense?...

    >
    > Whenever someone wishes to implement mixins, or any other form of separating
    > features in dedicated classes, through inheritance.
    >
    > Chaining inheritances just to get a class that inherits from multiple base
    > classes is silly and needlessly verbose if the language supports multiple
    > inheritance.
    >


    Yes, but there can be other language features which accomplish a similar
    goal in a language without MI.

    Consider, for example, "delegation variables", which transparently allow
    accessing linked-to objects, making the chaining strategy considerably
    less verbose.

    Basically, a compiler handling a delegation asks:
    do I see the field/method in this object? No.
    do I see it in an object linked to by a delegation variable? Yes.

    It then internally generates an indirect access through the variable.

    So, while the programmer writes in their method:
    x
    the compiler could instead compile it as:
    this->someMixin->super_b->x;

    ....
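    (C++ has no delegation variables, but the expansion described above can
    be written out by hand as plain member forwarding -- someMixin, super_b
    and x are the names from the post, invented for illustration:)

    ```cpp
    #include <cassert>

    struct SuperB   { int x = 7; };          // the object that really owns x
    struct SomeMixin { SuperB* super_b; };   // one link in the delegation chain

    struct Host {
        SomeMixin* someMixin;
        // What a delegating compiler would generate for a bare "x":
        int& x() { return someMixin->super_b->x; }
    };

    int main() {
        SuperB base;
        SomeMixin mixin{&base};
        Host h{&mixin};
        assert(h.x() == 7);   // read through the delegation chain
        h.x() = 42;           // write through it too
        assert(base.x == 42);
        return 0;
    }
    ```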

    Granted, there is probably little hope of this feature showing up in
    mainstream languages (but I did it this way in my language, with the
    basic idea mostly coming from Self, and I also ended up implementing
    namespaces/packages via this mechanism as well, ...).


    Granted, the issue isn't so much whether MI can help in these cases,
    but whether it often makes enough of a difference, or whether these
    sorts of cases are common enough, to make supporting it worthwhile.

    An infrequently used feature can be omitted if it can easily enough be
    accomplished by other means.


    or such...
    BGB, Dec 6, 2012
    #19
  20. Lynn McGuire

    Stuart Guest

    Re: Why C++ Is Not “Back”

    On 2012-12-06 kurt krueckeberg wrote:
    >> This is what the Google coding guidelines for C++ says about multiple inheritance:
    >>
    >> Multiple Inheritance
    >>
    >> Only very rarely is multiple implementation inheritance actually
    >> useful. We allow multiple inheritance only when at most one of the
    >> base classes has an implementation; all other base classes must be
    >> pure interface classes tagged with the Interface suffix.


    On 12/6/12 Jorgen Grahn wrote:
    > Google has some very questionable guidelines regarding C++ ...
    >
    > ... although IIRC Stroustrup (Design and Evolution) makes it clear
    > that multiple inheritance is in the langugage because people could
    > come up with real-life examples of problems where it made good sense
    > -- and explicily /not/ because it's something everyone would need
    > often, because it isn't.
    >
    > I've never used it myself.



    MI is probably only needed in some niche problem domains like MS's ATL
    library. For almost six years I have been coding stuff like this:

    class ATL_NO_VTABLE CHAMAMATSUCamera :
        public CComObjectRootEx<CComMultiThreadModel>,
        public CComCoClass<CHAMAMATSUCamera, &CLSID_HAMAMATSUCamera>,
        public IDispatchImpl<
            Common::HDeviceNameImplHelper<
                IHAMAMATSU_C4880_CameraSM2<CHAMAMATSUCamera>
            >,
            &__uuidof(IBasicAcquisitionModule)
        >,
        public IPAMSnapshotSupportSM2<CHAMAMATSUCamera>,
        public CameraCommon::HMultipleRegionImplHelper<
            IPAMRegionOfInterestSupportSM2<CHAMAMATSUCamera>
        >,
        public IPAMAcquisitionSpeedReadSupportSM2<CHAMAMATSUCamera>,
        public IPAMAcquisitionSpeedSetSupportSM2<CHAMAMATSUCamera>,
        public IPAMExposureTimeReadSupportSM2<CHAMAMATSUCamera>,
        public IPAMExposureTimeSetSupportSM2<CHAMAMATSUCamera>,
        public IPAMLiveImageSupportSM2<CHAMAMATSUCamera>,
        public IPAMTemperatureReadSupportSM2<CHAMAMATSUCamera>,
        public IPAMTemperatureSetSupportSM2<CHAMAMATSUCamera>,
        public IProvideClassInfoImpl<&CLSID_HAMAMATSUCamera,
                                     &LIBID_HAMAMATSUServer>,
        public ComponentHelpers::HBasicParamInfoImplementationHelper<
            IGenericallyConfigurableDeviceSM2<CHAMAMATSUCamera>
        >,
        public Common::HVersionInfoFromResource<
            IVersionInfoSM2<CHAMAMATSUCamera>
        >,
        public Common::HSupportErrorInfoHelper<
            ISupportErrorInfoSM2<CHAMAMATSUCamera>
        >
    {


    Using MI is definitely a must when you use ATL.

    Coding COM components (or, generally, any interface-based system) is a
    bit like semiconductor physics. You have some holes (the methods of the
    interface your component supports) that need to be filled with electrons
    (your implementation). Only when all holes are filled is your class
    finished. Some holes can be filled by generic electrons; some you have
    to fill on your own. Filling a hole with a generic electron is IMHO most
    elegantly done via mixin classes, like
    ComponentHelpers::HBasicParamInfoImplementationHelper. This mixin fills
    the three holes of the interface IGenericallyConfigurableDevice with
    electrons and creates a new (higher-level) hole that must be provided by
    the CHAMAMATSUCamera class.

    Regards,
    Stuart
    Stuart, Dec 7, 2012
    #20