Inline request and compiler rejection



Pallav singh

Q: When is an inline request to the compiler rejected?
1. The function is large.
2. The function calls itself recursively.

Is there any other situation in which an inline request is rejected by
the compiler in C++?

Thanks in Advance
Pallav Singh
 

James Kanze

Q: When is an inline request to the compiler rejected?
1. The function is large.
2. The function calls itself recursively.

Neither of the above. The compiler will inline a function
whenever it feels like it, and only then.
Is there any other situation in which an inline request is rejected by
the compiler in C++?

Whenever it feels like it. Most compilers don't inline anything
unless optimization is requested, and a lot will inline
functions not declared inline if optimization is requested.
 

James Kanze

Any time the compiler feels like it. For example, some
compilers support command-line flags to disable inlining
entirely.
Unless you have specific knowledge of a particular compiler, it may
be best to think of "inline" as permitting the definition of a
function to appear in a header file, rather than as a promise about
the generated object code.

The standard does express an "intent". From a quality of
implementation point of view, the compiler should only ignore
this "intent" if it can do better. (In many ways, it's like
"register", except that almost all modern compilers can do
better than the programmer when it comes to what belongs in a
register---only a few can do so when it comes to inlining.)
I inline almost all non-template function definitions. I want the
compiler to have access to as much information as possible as each
translation unit is compiled. The popular zeitgeist is that inlining
functions slows build times, but I have not found this to be a
problem.

Inlining increases coupling. As such, it is "bad", and should
be avoided. On the other hand, when the profiler says you need
to improve performance, it is one of the cheapest means
available (in terms of programmer time). So the logical rule is
not to use it until you have profiler data which says you have
to. Unless, perhaps, you're writing a library, and won't
necessarily be able to profile the actual applications.
 

Pallav singh

No, it doesn't, except in an extremely shallow sense. Furthermore,
changing the signature of an inline function, in any syntactically and
semantically backward-compatible way, requires only one source change
for an inline function, but two changes for a non-inline function.
Moreover, separating the definition from the prototype has a staggering
syntactic overhead in C++, particularly for member functions and templates.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

class A
{
public:
    virtual inline f()              // note: no return type
    { cout << " Inside class A " << endl; }
};

class B : public A
{
public:
    virtual inline f()              // note: no return type
    { cout << " Inside class B " << endl; }
};

main()
{
    A* a;
    a = new B();
    a->f();
}

error: ISO C++ forbids declaration of `f' with no type
error: ISO C++ forbids declaration of `f' with no type
 

Pallav singh

No, it doesn't, except in an extremely shallow sense. Furthermore,
changing the signature of an inline function, in any syntactically and
semantically backward-compatible way, requires only one source change
for an inline function, but two changes for a non-inline function.
Moreover, separating the definition from the prototype has a staggering
syntactic overhead in C++, particularly for member functions and templates.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Q: Is it possible to declare a function both virtual and inline in C++?
Does this pose a problem for compilation and/or performance when a
virtual inline function is called through a base-class pointer, as in
the example below?


#include <iostream>
using namespace std;

class A
{
public:
    virtual inline void f()
    { cout << " Inside class A " << endl; }
    virtual ~A() {}
};

class B : public A
{
public:
    virtual inline void f()
    { cout << " Inside class B " << endl; }
};

int main()
{
    A* a = new B();
    a->f();    // prints " Inside class B "
    delete a;
}
 

Pranav

Is there any way to know whether the 'inline' request has been
accepted by the compiler or not?
 

James Kanze

No, it doesn't, except in an extremely shallow sense.

I'm not sure what you mean by "shallow". It increases compile
time coupling significantly, which is extremely important in
anything except trivially small applications.
Furthermore, changing the signature of an inline function, in
any syntactically and semantically backward-compatible way,
requires only one source change for an inline function, but
two changes for a non-inline function.

But why would you change the signature if you don't change the
contract? And if you change the contract, you have to at least look at
every place the function is used: a lot more than just one or two
places.

That's why the decision to change the signature usually rests in
other hands than the decision to change the implementation, and why it
is important that the implementation and the signature reside in
different files.
Moreover, separating the definition from the prototype has a
staggering syntactic overhead in C++, particularly for member
functions and templates.

It could be simpler, but I've never found it to be a particular
problem. And I don't know how you can maintain the public
interface and the implementation in two separate files
otherwise. (Of course, a good tool set helps here; you don't
actually write the boilerplate yourself, the machine does it for
you.)
 

James Kanze

Q Is it possible to declare a virtual inline in C++?

Of course.
Does this pose a problem for compilation and/or performance when a
virtual inline function is called through a base-class pointer?

It doesn't cause any more problems than inlining in general.
The problem is declaring a function inline, virtual or not.
(Most of the coding guidelines I've seen simply forbid inline
functions, period.)
 

James Kanze

Is there any way to know whether the 'inline' request has been
accepted by the compiler or not?

If the compiler doesn't generate an error, the inline has been
accepted. There's no way of telling what effect it has on the
generated code, by definition---the semantics of an inline
function are exactly the same as those of a non-inline function.
 

James Kanze

Yes, but it is unusual.

It's unusual to declare any function inline. Curiously enough,
the one exception I rather regularly make is a virtual function:
if I'm defining an interface (an abstract class with only pure
virtual functions as members), I'll define the virtual
destructor inline---rather than have to create a source file for
this one empty function.

(There's also a metaprogramming idiom which depended on virtual
inline functions, but it's been largely replaced with templates,
using duck typing.)
 

James Kanze

In the first place, that's not "coupling" in any meaningful sense.

It's coupling in the most classical sense.
In the second place, as I already posted, I have not seen a
prohibitive effect on build times. The projects I work on are
not "trivially small."

If you're working on libraries, you won't see them. Your users
will. (If you're using dynamically loaded objects, it may mean
the difference between having to relink and recompile, and not.)
Usually to improve error-checking. Occasionally, to add an
argument with a default value, although I do find that
distasteful at best.

But neither are particularly frequent. Significantly less
frequent, at any rate, than changing details of the
implementation.
The places it is used are additional to the places in which
you declare it, not instead of.

Yes, but it does mean that one or two places more or less
doesn't make a difference.
That is complete nonsense.

No, it's good software engineering.
As always, you are entitled to your opinion. I'm not sure why
you have taken it as your personal responsibility to chastise
me every time I express an opinion you do not care for.

There are matters of taste, and there are matters of basic
software engineering. I tend to jump on anyone who recommends
bad engineering.
 

Ian Collins

James said:
I'm not sure what you mean by "shallow". It increases compile
time coupling significantly, which is extremely important in
anything except trivially small applications.
We've been here before: if build times get too long, add a faster
build server. As applications grow, odds are processing power will
also grow, so a single server can be upgraded, or a new server added
to a build farm, to keep pace with code growth.

If your tools don't support parallel/distributed building, get better ones.
 

coal

We've been here before: if build times get too long, add a faster
build server. As applications grow, odds are processing power will
also grow, so a single server can be upgraded, or a new server added
to a build farm, to keep pace with code growth.

I think your answer assumes that buying faster hardware is no big deal.
In the recent past it's been an affordable way to make up for
sloppiness in a project, but from what I can tell things are changing
and budgets are tighter today than they were just a few years ago.
So if you hope a project will be long lasting, I think the better
option is to work on the software so it builds more quickly on modest
hardware.

My current project puts everything in header files. Probably what we
should do is make that an option, so users can choose between a
header-only approach or having separate interfaces and
implementations. I did it partly because it's hard to believe how
slowly compilers improve. Ten years ago I thought compilers would be
able to handle the header-only approach as efficiently as they handle
the separated approach within about 5 years -- by 2004. I think the
header-only approach will eventually prevail, but we aren't there yet.


Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
 

Ian Collins

I think your answer assumes that buying faster hardware is no big deal.

So far I've always managed to justify the cost against the saving in
developer time, either to my employer or to myself (I've just added a
new core i7 box to my farm).
 

James Kanze

"Coupling," in the "classical" sense, means that changes to
one portion of a code base require changes elsewhere.

Coupling means that changes in one place affect (many) other
places. This is the meaning the word has always had, and the
meaning with which it is almost universally used. If you want
to restrict its meaning to something else, fine. Just don't be
surprised if no one understands you.
Requiring recompilation, within an application, is only a trivial
form of coupling, and in exchange it enables the compiler to have
significantly more information as it compiles each translation unit.

I'm not sure I understand that sentence. If what you're trying to say
is that compilation dependencies are not as serious a form of coupling
as source code dependencies, fine. We agree on that. That doesn't mean
that you can neglect compilation dependencies. They tend to chain: in
one real project, we found that before using the compilation firewall
idiom, source files of around a thousand lines resulted in the
compiler seeing between 500 thousand and 2 million lines after
preprocessing. Use of the compilation firewall brought that figure
down to around 20 thousand. That had two major impacts: compile times
dropped drastically (that was on an older machine, but even today,
reading 2 million lines over the LAN takes time), and significantly
fewer files needed to be recompiled after a small change in the
implementation details of the class.
I work mainly on commercial applications. With all due
respect, you don't know what you're talking about.

It's the client code which ends up paying for your extra
includes, not the library itself.

[...]
Yes, it does. Most application-internal functions are called
in only a few places.

It occurred to me after posting that you seem to be confusing two
issues. The fact that a function is inline doesn't mean that
you don't need both a declaration and a definition. In order
for the definition to serve as the declaration, it has to be in
the class. Do that, and you quickly end up with the unreadable
mess we know from Java. It's a nice feature for demo programs,
but once the class has more than three or four functions, and
the functions have more than two or three lines, the results are
completely unreadable.
To make sure that every function's interface and
implementation are maintained by separate people? Rubbish.

To have a different procedure for modifying the interface than
for modifying the implementation is essential. In large
projects, it will almost always be a case of different people;
in small projects, it may be the same person wearing a different
hat. But the impact of changing an interface is significantly
different than that of changing an implementation.
The fact that I prefer to inline more functions than you do
does not justify your claim of "bad engineering." This
brow-beating and insulting of people with whom you disagree is
petty and unbecoming.

It's not a question of more or less. Inline functions increase
coupling, and cause maintenance problems. Good software
engineering avoids them except in special cases.
 

James Kanze

We've been here before, if build times get too long, add a
faster build server.

Build times are generally limited by the network speed, not the
server speed. There's a very definite upper limit to the
speed of an Ethernet. And reading between 500 thousand and 2
million lines for each source (including includes) will always
take more time than reading between 20 thousand and 50 thousand.
As applications grow, odds are processing power will also grow.
So a single server can be upgraded, or a new server added to a
build farm to keep pace with code growth.
If your tools don't support parallel/distributed building, get
better ones.

None of which addresses the problem of network speed, which is
usually the limiting factor.
 

Alf P. Steinbach

* James Kanze:
It's not a question of more or less. Inline functions increase
coupling, and cause maintenance problems. Good software
engineering avoids them except in special cases.

I think you missed a couple of "can" and other qualifiers.

Especially for libraries of basic functionality, or in a known "environment"
where header pollution isn't an issue, inlining is very useful for header-only
modules without introducing any extra coupling (coupling in addition to what one
already has anyway).

As an example, one reason that much of Boost is so useful is that a great many
of the modules can be employed without compiling them separately; just include
the relevant header and that's that.

So in a way, inlining is the C++ counterweight to the language's lack
of real modules: with real modules, header-only modules would
presumably have had no advantage.


Cheers & hth.,

- Alf
 

Alf P. Steinbach

* James Kanze:
Build times are generally limited by the network speed, not the
server speed. There's a very definite upper limit to the
speed of an Ethernet. And reading between 500 thousand and 2
million lines for each source (including includes) will always
take more time than reading between 20 thousand and 50 thousand.

How about building on a dedicated server?

I'm unfortunately unfamiliar with the tool support here (because I've
been out of the loop of large-scale development for a number of
years), but it's not long since I saw a huge wall screen displaying
build statistics for the whole firm.

And I think that would not have been possible unless the actual
builds were centralized, i.e. not just each developer's machine
fetching relevant files from the server, but actual builds being done
on the server -- in which case network speed could be irrelevant: just
make sure all files are available on the server.

Perhaps that's what Jeff and Ian were referring to.

However, on the negative side, I think that's the wrong direction:
apparently an attempt to solve, by brute force and centralization, a
number of problems related to working on the same files (there's the
coupling aspect :)). I "know" that /most/ C++ programmers prefer the
model of SVN, where strong coupling is assumed and one simply tries to
"merge" the results of several people changing the same files; in one
project most of each Friday was wasted on merging, to get to a
coherent state where work could proceed next week... This is abhorrent
to me, but then, most people I've talked to find it completely
abhorrent to consider locking of files; it seems they reason "we're
not able to avoid strong coupling, so we do need to work on the same
files at the same time, and locking just gets in the way". And I think
that's a kind of ostrich POV. Possibly a concession to the reality of
weak management, but anyway, IMHO not what one should /want/ and work
for, even if it can make pragmatic sense given the constraints of
management.


Cheers & hth.,

- Alf
 

Ian Collins

James said:
Build times are generally limited by the network speed, not the
server speed. There's a very definite upper limit to the
speed of an Ethernet. And reading between 500 thousand and 2
million lines for each source (including includes) will always
take more time than reading between 20 thousand and 50 thousand.

With a decent OS and enough RAM, most (include) files will be fetched
and cached only once.
None of which addresses the problem of network speed, which is
usually the limiting factor.

Caching does.

The network limit is usually writing back the compiler's output, which
is much less than its input.
 

Ian Collins

Alf said:
* James Kanze:

How about building on a dedicated server?

The key isn't so much building on a dedicated server but *linking* on a
dedicated server. Linking is the most I/O bound phase of a build.
However, on the negative side, I think that's the wrong direction,
apparently an attempt to solve a number of problems related to working
on same files (there's the coupling aspect :) ) by brute force and
centralization. I "know" that /most/ C++ programmers prefer the model of
SVN where strong coupling is assumed and one simply try to "merge" the
results of several people changing the same files; in one project most
of each Friday was wasted on merging, to get to a coherent state where
work could proceed next week... This is abhorrent to me, but then, most
people I've talked to find it completely abhorrent to consider locking
of files; it seems they reason "we're not able to avoid strong coupling,
so we do need to work on the same files at the same time, and locking
just gets in the way". And I think that's a kind of ostrich POV.

Continuous integration is the way. My last team integrated their work
to the main trunk many times a day. The teams I'm managing now will be
integrating their work to the main trunk many times a day!
 
