Overload ambiguity between f(int,int) and f(double,double)?



Shriramana Sharma

Hello. Please consider the following code:

#include <iostream>
void foo(int a, int b) { std::cout << "foo(int,int)\n"; }
void foo(double a, double b) { std::cout << "foo(double,double)\n"; }
int main()
{
    int a = 2.2;
    foo(1, 2);
    foo(1, 2.2);
    foo(2.2, 1);
    foo(2.2, 2.2);
}

and the console session compiling it with GCC 4.7.4:

$ g++ -c overload-ambiguity.cpp
overload-ambiguity.cpp: In function ‘int main()’:
overload-ambiguity.cpp:8:17: error: call of overloaded ‘foo(int, double)’ is ambiguous
overload-ambiguity.cpp:8:17: note: candidates are:
overload-ambiguity.cpp:2:6: note: void foo(int, int)
overload-ambiguity.cpp:3:6: note: void foo(double, double)
overload-ambiguity.cpp:9:17: error: call of overloaded ‘foo(double, int)’ is ambiguous
overload-ambiguity.cpp:9:17: note: candidates are:
overload-ambiguity.cpp:2:6: note: void foo(int, int)
overload-ambiguity.cpp:3:6: note: void foo(double, double)

Clang 3.2 issues the same complaints.

Initially I was surprised that the compiler flags the calls as ambiguous: it seemed obvious that calling with arguments of types (int,double) or (double,int) should select only the (double,double) overload, since it is the only lossless one, rather than the lossy (int,int).

The explanation is perhaps apparent from the fact that the compiler does not complain about int a = 2.2: the language permits an implicit conversion from double to int (presumably for C compatibility?). But at least when selecting overloads, one would expect the compiler to prefer a lossless overload to a lossy one. This is what D does:

import std.stdio;
void foo(int a, int b) { writeln("foo(int,int)"); }
void foo(double a, double b) { writeln("foo(double,double)"); }
void main()
{
    // int a = 2.2;
    foo(1, 2);
    foo(1, 2.2);
    foo(2.2, 1);
    foo(2.2, 2.2);
}

compiles cleanly and outputs:

foo(int,int)
foo(double,double)
foo(double,double)
foo(double,double)

Of course, D doesn't allow the int a = 2.2 without a cast either, but the overload mechanism is saner in D. And I'm not trying to extol the virtues of D, but rather asking why C++ doesn't ensure that a lossless overload is selected over a lossy one, irrespective of whether the language permits an implicit lossy conversion or not.
 


Marcel Müller

> Hello. Please consider the following code:
>
> #include <iostream>
> void foo(int a, int b) { std::cout << "foo(int,int)\n"; }
> void foo(double a, double b) { std::cout << "foo(double,double)\n"; }
> int main()
> {
>     int a = 2.2;
>     foo(1, 2);
>     foo(1, 2.2);
>     foo(2.2, 1);
>     foo(2.2, 2.2);
> }

You have a static double-dispatch problem: which function should be invoked depends on the types of two arguments, in such a way that only some combinations of types should invoke overload #1 while the others should invoke #2.

If you want this to be unambiguous, you need to provide /all/ the overloads, i.e. also foo(int,double) and foo(double,int).
> Initially I was surprised that the compiler flags the calls as ambiguous, since it seemed obvious that calling with arguments as (int,double) or (double,int) should only select the overload that takes (double,double) since it is the only lossless one and not the lossy (int,int).

There is no notion of a "lossless" conversion beating a "lossy" one. C++ ranks conversion sequences only coarsely (exact match, then promotion, then conversion), and int to double and double to int are both ordinary conversions of the same rank, so for the mixed calls neither overload is better. (Non-template functions are preferred over templates, but that doesn't help here.)
> import std.stdio;
> void foo(int a, int b) { writeln("foo(int,int)"); }
> void foo(double a, double b) { writeln("foo(double,double)"); }
> void main()
> {
>     // int a = 2.2;
>     foo(1, 2);
>     foo(1, 2.2);
>     foo(2.2, 1);
>     foo(2.2, 2.2);
> }

Different language, different conversion rules.


Marcel
 

Leandro

About your last comment, you can find an interesting text in the book "The Design and Evolution of C++" or in [1].

2. C with Classes

(...)

2.6.1. Narrowing Conversions

Another early attempt to tighten C with Classes' type rules was to disallow "information destroying" implicit conversions. Like others, I had been badly bitten by examples equivalent to (but naturally not as easy to spot in a real program as) these:

void f()
{
    long int lng = 65000;
    int i1 = lng;    /* i1 becomes negative (-536)   */
                     /* on machines with 16-bit ints */
    int i2 = 257;
    char c = i2;     /* truncates; c becomes 1        */
                     /* on machines with 8-bit chars  */
}

I decided to try to ban all conversions that were not value preserving, that is, to require an explicit conversion operator wherever a larger object was stored into a smaller:

void g(long lng, int i)    /* experiment */
{
    int i1 = lng;      /* error: narrowing conversion */
    i1 = (int) lng;    /* truncates for 16-bit ints   */

    char c = i;        /* error: narrowing conversion */
    c = (char) i;      /* truncates                   */
}

The experiment failed miserably. Every C program I looked at contained large numbers of assignments of ints to char variables. Naturally, since these were working programs, most of these assignments were perfectly safe. That is, either the value was small enough not to become truncated, or the truncation was expected or at least harmless in that particular context. There was no willingness in the C with Classes community to make such a break from C. I'm still looking for ways to compensate for these problems (§14.3.5.2)

[1] - http://www.stroustrup.com/hopl2.pdf


On Saturday, June 15, 2013 at 01:11:42 UTC-3, Shriramana Sharma wrote:
 

Shriramana Sharma

Hello, and thank you all for your replies. It is good to know that Stroustrup was aware of the desirability of disallowing implicit range-narrowing conversions, but too bad that he wasn't successful in implementing it.

I see D as a good opportunity for such decisions to be made with the benefit of hindsight, but I hope it matures some time soon; otherwise it will forever be an experiment, and despite C++'s imperfections it will always be the only systems programming language that supports OOP (although I'm aware that some say C++ doesn't really support OOP... whatever).
 

Gerhard Fiedler

Shriramana said:
> Initially I was surprised that the compiler flags the calls as
> ambiguous, since it seemed obvious that calling with arguments as
> (int,double) or (double,int) should only select the overload that
> takes (double,double) since it is the only lossless one and not the
> lossy (int,int).

I don't think it is guaranteed (by the standard) that the conversion
from int to double is lossless. AFAIK it is in most "normal" compilers
and platforms, but that's not what's relevant for language rules.

I think it's quite possible to have a 64-bit int and a 64-bit (or even
32-bit) double, for example. In both cases the conversion from int to
double would lose significant bits.

Gerhard
 

Gerhard Fiedler

Robert said:
> FWIW, the minimum possible size for a double is about 41 bits.

Can you please point me to the relevant section in the C++ standard that
determines this?

> But the minimum is 10 decimal digits of precision, so conversion from
> 32-bit ints should be lossless.

Yes, but even if the minimum for a double is "about 41 bits" (does that
include the exponent?), what about 64-bit ints? Remember, this is not
about a specific implementation, this is about the language (in all its
implementations).

Thanks,
Gerhard
 


Gerhard Fiedler

Robert said:
> From the "Environmental limits" section of the standard (5.2.4.2.2 in
> C99). The minimums for a double are 10 digits (DBL_DIG minimum of 10,
> that's ~33.2 bits), and DBL_MIN and DBL_MAX require an exponent range
> of 10**-37 to 10**37, equivalent to ~2**-122.9 to 2**122.9 (about 7.9
> bits), plus a sign. And that's actually about 42 bits, which I was
> misremembering.

Interesting. However, I didn't find anything similar in the C++
standard. Does (all of) the C standard automatically apply to the C++
standard? What about the elements of C99 that are not considered part of
C++11?
> I was mainly pointing out that you cannot have a 32-bit double.

Noted, thanks.

Gerhard
 

James Kanze

Gerhard said:
> Interesting. However, I didn't find anything similar in the C++
> standard.

§18.3.3. It's very short, since all it says is that the
contents of <climits> are exactly the same as those of
<limits.h> in the C standard.

> Does (all of) the C standard automatically apply to the C++
> standard? What about the elements of C99 that are not considered part of
> C++11?

Nothing is automatic. With regards to the library, the C++
standard always says which elements of the C standard are
included (sometimes with slight changes). With regards to the
language, it's hard to say. I think that the intent is that
nothing is included, at least not implicitly.

With regards to the exact minimum number of bits: Robert Wessel
seems to forget that the standard doesn't require a binary
representation (and quite a few floating-point formats aren't binary).
If the representation is base 16, then you need far fewer bits for the
exponent. (On the other hand, you need three more for the mantissa.)
 


Gerhard Fiedler

James said:
> §18.3.3. It's very short, since all it says is that the
> contents of <climits> are exactly the same as those of
> <limits.h> in the C standard.

Ah, thanks... "See also ISO C ... 5.2.4.2.2". That clears this up :)

Gerhard
 
