inline request and compiler rejection

Discussion in 'C++' started by Pallav singh, Mar 19, 2009.

  1. Pallav singh Guest

    Q: When is an inline request rejected by the compiler?
    1. The function body is large
    2. The function is recursive

    Is there any other situation in which a C++ compiler rejects an
    inline request?

    Thanks in Advance
    Pallav Singh
    Pallav singh, Mar 19, 2009

  2. James Kanze Guest

    Neither of the above. The compiler will inline a function
    whenever it feels like it, and only then. Most compilers don't
    inline anything unless optimization is requested, and a lot will
    inline functions not declared inline if optimization is requested.
    James Kanze, Mar 19, 2009

  3. James Kanze Guest

    The standard does express an "intent". From a quality of
    implementation point of view, the compiler should only ignore
    this "intent" if it can do better. (In many ways, it's like
    "register", except that almost all modern compilers can do
    better than the programmer when it comes to what belongs in a
    register---only a few can do so when it comes to inlining.)
    Inlining increases coupling. As such, it is "bad", and should
    be avoided. On the other hand, when the profiler says you need
    to improve performance, it is one of the cheapest means
    available (in terms of programmer time). So the logical rule is
    not to use it until you have profiler data which says you have
    to. Unless, perhaps, you're writing a library, and won't
    necessarily be able to profile the actual applications.
    James Kanze, Mar 19, 2009
  4. Pallav singh Guest


    class A {
    public:
        virtual inline f()      // note: no return type
        { cout << " Inside class A " << endl; }
    };

    class B : public A {
    public:
        virtual inline f()      // note: no return type
        { cout << " Inside class B " << endl; }
    };

    int main() {
        A* a = new B();
        a->f();
    }

    error: ISO C++ forbids declaration of `f' with no type
    error: ISO C++ forbids declaration of `f' with no type
    Pallav singh, Mar 20, 2009
  5. Pallav singh Guest


    Q: Is it possible to declare a function both virtual and inline in
    C++? Does it cause any problem for compilation and/or performance
    when such a function is called through a base-class pointer?

    #include <iostream>
    using namespace std;

    class A {
    public:
        virtual inline void f()
        { cout << " Inside class A " << endl; }
    };

    class B : public A {
    public:
        virtual inline void f()
        { cout << " Inside class B " << endl; }
    };

    int main() {
        A* a = new B();
        a->f();    // virtual dispatch: prints " Inside class B "
    }
    Pallav singh, Mar 20, 2009
  6. Pranav Guest

    Is there any way to know whether an 'inline' request has been
    accepted by the compiler or not?
    Pranav, Mar 20, 2009
  7. James Kanze Guest

    I'm not sure what you mean by "shallow". It increases compile
    time coupling significantly, which is extremely important in
    anything except trivially small applications.
    But why would you change the signature if you don't change the
    contract? And if you change the contract, you have to at least
    look at every place the function is used. A lot more than just
    one or two places.

    That's why the decision to change the signature usually rests in
    other hands than the decision to change the implementation, and
    why it is important that the implementation and the signature
    reside in different files.
    It could be simpler, but I've never found it to be a particular
    problem. And I don't know how you can maintain the public
    interface and the implementation in two separate files
    otherwise. (Of course, a good tool set helps here; you don't
    actually write the boilerplate yourself, the machine does it for
    you.)
    James Kanze, Mar 20, 2009
  8. James Kanze Guest

    Of course.
    It doesn't cause any more problems than inlining in general.
    The problem is declaring a function inline, virtual or not.
    (Most of the coding guidelines I've seen simply forbid inline
    functions, period.)
    James Kanze, Mar 20, 2009
  9. James Kanze Guest

    If the compiler doesn't generate an error, the inline has been
    accepted. There's no way of telling what effect it has on the
    generated code, by definition---the semantics of an inline
    function are exactly the same as those of a non-inline function.
    James Kanze, Mar 20, 2009
  10. James Kanze Guest

    It's unusual to declare any function inline. Curiously enough,
    the one exception I rather regularly make is a virtual function:
    if I'm defining an interface (an abstract class with only pure
    virtual functions as members), I'll define the virtual
    destructor inline---rather than have to create a source file for
    this one empty function.

    (There's also a metaprogramming idiom which depended on virtual
    inline functions, but it's been largely replaced with templates,
    using duck typing.)
    James Kanze, Mar 20, 2009
  11. James Kanze Guest

    It's coupling in the most classical sense.
    If you're working on libraries, you won't see them. Your users
    will. (If you're using dynamically loaded objects, it may mean
    the difference between having to relink and recompile, and not.)
    But neither are particularly frequent. Significantly less
    frequent, at any rate, than changing details of the
    implementation.
    Yes, but it does mean that one or two places more or less
    doesn't make a difference.
    No, it's good software engineering.
    There are matters of taste, and there are matters of basic
    software engineering. I tend to jump on anyone who recommends
    bad engineering.
    James Kanze, Mar 20, 2009
  12. Ian Collins Guest

    We've been here before: if build times get too long, add a faster
    build server. As applications grow, odds are processing power will
    also grow.
    So a single server can be upgraded, or a new server added to a build
    farm to keep pace with code growth.

    If your tools don't support parallel/distributed building, get better ones.
    Ian Collins, Mar 21, 2009
  13. coal Guest

    I think your answer assumes that buying faster hardware is no big
    deal. In the recent past it's been an affordable way to make up for
    sloppiness in a project, but from what I can tell things are changing
    and budgets are tighter today than they were just a few years ago.
    So if you hope a project will be long lasting, I think the better
    option is to work on the software so it builds more quickly on
    modest hardware.

    My current project puts everything in header files. Probably what we
    should do is make that an option, so users can choose between a
    header-only approach or having separate interfaces and
    implementations. I mention it partly because it's hard to believe
    how slowly compilers improve. Ten years ago I thought compilers
    would be able to handle the header-only approach as efficiently as
    they handle the separated approach in about 5 years, i.e. by 2004.
    I think the header-only approach will eventually prevail, but we
    aren't there yet.

    Brian Wood
    Ebenezer Enterprises
    coal, Mar 21, 2009
  14. Ian Collins Guest

    So far I've always managed to justify the cost against the saving in
    developer time, either to my employer or to myself (I've just added a
    new core i7 box to my farm).
    Ian Collins, Mar 21, 2009
  15. James Kanze Guest

    Coupling means that changes in one place affect (many) other
    places. This is the meaning the word has always had, and the
    meaning with which it is almost universally used. If you want
    to restrict its meaning to something else, fine. Just don't be
    surprised if no one understands you.
    I'm not sure I understand that sentence. If what you're trying
    to say is that compilation dependencies are not as serious a
    form of coupling as source code dependencies, fine. We agree on
    that. That doesn't mean that you can neglect compilation
    dependencies. They tend to chain: in one real project, we found
    that before using the compilation firewall idiom, source files
    of around a thousand lines resulted in the compiler seeing
    between 500 thousand and 2 million lines after preprocessing.
    Use of the compilation firewall brought that figure down to
    around 20 thousand. That had two major impacts: compile times
    dropped drastically (that was on an older machine, but even
    today, reading 2 million lines over the LAN takes time), and
    significantly fewer files needed to be recompiled after a small
    change in the implementation details of the class.
    It's the client code which ends up paying for your extra
    includes, not the library itself.

    It occurred to me after posting that you seem to be confusing two
    issues. The fact that a function is inline doesn't mean that
    you don't need both a declaration and a definition. In order
    for the definition to serve as the declaration, it has to be in
    the class. Do that, and you quickly end up with the unreadable
    mess we know from Java. It's a nice feature for demo programs,
    but once the class has more than three or four functions, and
    the functions have more than two or three lines, the results are
    completely unreadable.
    To have a different procedure for modifying the interface than
    for modifying the implementation is essential. In large
    projects, it will almost always be a case of different people;
    in small projects, it may be the same person wearing a different
    hat. But the impact of changing an interface is significantly
    different than that of changing an implementation.
    It's not a question of more or less. Inline functions increase
    coupling, and cause maintenance problems. Good software
    engineering avoids them except in special cases.
    James Kanze, Mar 21, 2009
  16. James Kanze Guest

    Build times are generally limited by the network speed, not the
    server speed. There's a very definite upper limit to the
    speed of an Ethernet. And reading between 500 thousand and 2
    million lines for each source (including includes) will always
    take more time than reading between 20 thousand and 50 thousand.
    None of which addresses the problem of network speed, which is
    usually the limiting factor.
    James Kanze, Mar 21, 2009
  17. Alf P. Steinbach Guest (replying to James Kanze)
    I think you missed a couple of "can" and other qualifiers.

    Especially for libraries of basic functionality, or in a known "environment"
    where header pollution isn't an issue, inlining is very useful for header-only
    modules without introducing any extra coupling (coupling in addition to what one
    already has anyway).

    As an example, one reason that much of Boost is so useful is that a great many
    of the modules can be employed without compiling them separately; just include
    the relevant header and that's that.

    So in a way inlining is the C++ counter-weight to the language's
    lack of real modules.

    With real modules, header-only modules would presumably have had no
    advantage.

    Cheers & hth.,

    - Alf
    Alf P. Steinbach, Mar 21, 2009
  18. Alf P. Steinbach Guest (replying to James Kanze)
    How about building on a dedicated server?

    I'm unfortunately unfamiliar with the tool support here (because
    I've been out of the loop of large scale development for a number of
    years), but it's not long since I saw a huge wall screen displaying
    build statistics for the whole firm.

    And I think that would not have been possible unless the actual
    builds were centralized, i.e. not just each developer's machine
    fetching relevant files from the server, but actual builds being
    done on the server. In that case network speed could be irrelevant:
    just make sure all the files are available on the server.

    Perhaps that's what Jeff and Ian were referring to.

    However, on the negative side, I think that's the wrong direction:
    apparently an attempt to solve a number of problems related to
    working on the same files (there's the coupling aspect :) ) by brute
    force and centralization. I "know" that /most/ C++ programmers
    prefer the model of SVN, where strong coupling is assumed and one
    simply tries to "merge" the results of several people changing the
    same files; in one project most of each Friday was wasted on
    merging, to get to a coherent state where work could proceed the
    next week... This is abhorrent to me, but then, most people I've
    talked to find it completely abhorrent to consider locking of files;
    it seems they reason "we're not able to avoid strong coupling, so we
    do need to work on the same files at the same time, and locking just
    gets in the way". And I think that's a kind of ostrich POV. Possibly
    a concession to the reality of weak management, but anyway, IMHO not
    what one should /want/ and work for, even if it can make pragmatic
    sense given the constraints of management.

    Cheers & hth.,

    - Alf
    Alf P. Steinbach, Mar 21, 2009
  19. Ian Collins Guest

    With a decent OS and enough RAM, most (include) files will be fetched
    and cached only once.
    Caching does.

    The network limit is usually writing back the compiler's output, which
    is much less than its input.
    Ian Collins, Mar 21, 2009
  20. Ian Collins Guest

    The key isn't so much building on a dedicated server but *linking* on a
    dedicated server. Linking is the most I/O bound phase of a build.
    Continuous integration is the way. My last team integrated their work
    to the main trunk many times a day. The teams I'm managing now will be
    integrating their work to the main trunk many times a day!
    Ian Collins, Mar 21, 2009