Don
Hello, folks. I have two boxes, both running Debian GNU/Linux 5,
kernel 2.6.26, and gcc version 4.3.2. One box is an AMD64 (obviously,
the x86_64 architecture), and one is an Intel i686.
I have a very simple program designed to convert a string, which is a
numeral in base 12, supplied at the command line or from standard
input, into decimal, then send it back to the calling function as a
double. Very simple, under 200 lines.
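Just for context, the conversion itself works like this. This is a
hand-written sketch for this post, not my actual code; I'm assuming ';'
as the radix point and 'X'/'x' as the digit ten, and the digit for
eleven doesn't come up in these examples:

```c
/* Hypothetical sketch of the base-12-to-decimal conversion described
 * above, not the program's actual code.  ';' marks the radix point
 * and 'X'/'x' stands for the digit ten. */
double doz_to_double(const char *s)
{
    double whole = 0.0, frac = 0.0, scale = 1.0;
    int past_point = 0;

    for (; *s != '\0'; ++s) {
        int d;
        if (*s == ';') {           /* radix point: switch to fraction */
            past_point = 1;
            continue;
        }
        d = (*s == 'X' || *s == 'x') ? 10 : *s - '0';
        if (!past_point)
            whole = whole * 12.0 + d;  /* accumulate integral digits */
        else {
            scale /= 12.0;
            frac += d * scale;         /* accumulate fractional digits */
        }
    }
    return whole + frac;
}
```

So "X44;4" comes out as 1492.333... and "X;3" as 10.25, matching the
examples below.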
Before converting the string to a decimal double, I test whether the
string is written in exponential notation, and if so, send it to a
function to convert it out of exponential notation. This consists of
juggling the integral marker (hard-coded as a semicolon, at this
point) back or forward in the string depending on the value of the
exponent, cutting off the exponent in the process. It was a hairier
problem than I expected, and it's not pretty, but it works *almost*
every time. I know that the problem is in this function because no
errors occur when the string is *not* in exponential notation, and
this function is therefore never called.
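To illustrate the intended transformation (again a toy sketch for this
post, not the expkill() shown later): once the exponent is stripped off,
moving the marker right by the exponent's value should turn the mantissa
of "X;444e2" into "X44;4". This version assumes a non-negative exponent
whose new position stays inside the digit string, and writes into a
caller-supplied buffer rather than in place:

```c
/* Hypothetical illustration only, not the post's expkill(): shift the
 * ';' radix marker right by `exp` places.  `in` is the mantissa with
 * the exponent already cut off; assumes exp >= 0, that the marker's
 * new position stays within the digits, and that `out` is big enough. */
void shift_radix(const char *in, int exp, char *out)
{
    char digits[64];
    int n = 0, semi = -1, j, k = 0;

    for (; *in != '\0'; ++in) {
        if (*in == ';')
            semi = n;            /* remember where the marker was */
        else
            digits[n++] = *in;   /* collect the bare digits */
    }
    if (semi < 0)
        semi = n;                /* no marker: treat it as trailing */
    semi += exp;                 /* the marker's new position */

    for (j = 0; j < n; ++j) {
        if (j == semi)
            out[k++] = ';';      /* drop the marker back in */
        out[k++] = digits[j];
    }
    out[k] = '\0';               /* a marker at/past the end is omitted */
}
```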
It works properly on all tested strings sent from the command line.
When I supply the strings from standard input, however, some strange
things happen. They only happen on the second string sent through the
function. The errors do *not* occur if I send a string that isn't in
exponential notation, which makes me think the problem is in the
exponential-handling function.
If the first two strings sent through stdin are "X44;4" (decimal
1492.3 repeating) and "X;444e2" (the same number), the first is
converted properly and the second is converted to 17908.0000, which is
about the proper number multiplied by twelve (not quite, but very
close). All subsequent passes work fine, and the same input in any
position but the second works fine. "X44;4" does *not* fail when it is
the second one through.
If the first two strings sent through stdin are "X;3" (10.25 decimal)
and "0;x3e1" (the same number), the first is converted properly and
the second throws a segfault. Entering "0;x3e1" in any other position
(that is, not second from standard input) works perfectly fine,
predictably producing "10.25" as output. "X;3" does *not* fail when it
is the second one through.
I stared at this for some time, then in desperation sshed over to my
i686 box, recompiled the same code, and tried it there. Both of these
situations worked properly, without any issues.
I'm compiling with the following command line:
gcc -lm -o dec dec.c
I've googled the issue; in some places I'm told to try compiling with
the -arch x86_64 flag, and in other places I'm told that gcc should
automatically compile for x86_64 on an x86_64 system. I can't try it
for eight or nine more hours (I'm at work), so I thought I'd ask here
and see if perhaps the code itself is bad.
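Side note: I've also read that libraries like -lm are safest placed
after the source file, since some linkers resolve symbols strictly left
to right, and that it's worth turning warnings on; something like:

```shell
gcc -Wall -Wextra -o dec dec.c -lm
```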
Here's the function at issue: it takes a pointer to the string to be
converted, plus an integer representing the point of the string where
the "e" is, computed at call time (I loop through the string looking
for an "e" or "E" to determine whether it needs to be called at all;
since I already had the loop index set for that when I called the
function, I thought it made sense to send it to the function rather
than let the function do the same loop again itself). Error checking
for the input has already occurred at this point, but it's proper
input that's causing this problem anyway. It calls "doztodec(char
*s)" to convert the part of the string that represents the exponent to
decimal; this function works fine on all input, and typically returns
a double. I'm storing it in a char because the exponent has to be
below 127 (my precision doesn't get anywhere near that high here), but
I understood that the conversion would be done automatically in this case.
Should I make it explicit? I've sprinkled a lot of extra comments for
this posting to make it clearer what's going on.
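The caller-side scan reads roughly like this (a paraphrase of what I
just described, not the literal code):

```c
/* Paraphrase of the caller-side scan described above, not the literal
 * code: return the index of the exponent marker, or -1 if the string
 * has none (in which case expkill() is never called). */
int exp_index(const char *s)
{
    int i;
    for (i = 0; s[i] != '\0'; ++i)
        if (s[i] == 'e' || s[i] == 'E')
            return i;
    return -1;
}
```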
*******
int expkill (char *s, int expspot)
{
    int i, j;
    char zensuf[3];    /* the base-12 exponent suffix */
    char exp;          /* the value of the suffix */
    int semi;          /* where semicolon is in original string */

    for (i = expspot+1, j = 0; *(s+i) != '\0'; ++i, ++j)
        *(zensuf+j) = *(s+i);         /* put the exponent in zensuf */
    exp = doztodec(zensuf);           /* convert the exponent to decimal char */
    for (semi = 0; *(s+semi) != ';' && *(s+semi) != '\0'; ++semi)
        ;                             /* set semi = semicolon's location */
    *(s+expspot) = '\0';              /* cut off the "e" and the exponent */
    if (semi >= strlen(s))
        return 0;       /* semicolon already at the end of the string: done */
    if (exp > 0) {      /* the semicolon needs to be moved to the right */
        for (i = semi; *(s+i+1) != '\0'; ++i)
            *(s+i) = *(s+i+1);  /* shift left to drop the original semicolon */
        *(s+i) = '\0';          /* move terminator to new end of string */
        for (i = strlen(s); i > semi+exp; --i)
            *(s+i) = *(s+i-1);  /* shift right to make room for new semicolon */
        *(s+semi+exp) = ';';    /* insert the new semicolon */
    } else if (exp < 0) {       /* the semicolon needs to be moved to the left */
        exp = -exp, ++exp;
        for (i = strlen(s)+exp; i >= 0; --i)
            *(s+i) = *(s+i-exp);  /* shift right to make room for new digits */
        for (i = 0; i < exp; ++i)
            *(s+i) = '0';         /* pad left side of string with zeroes */
        for (semi = 0; *(s+semi) != ';'; ++semi)
            ;                     /* find location of semicolon */
        for (i = semi; *(s+i) != '\0'; ++i)
            *(s+i) = *(s+i+1);    /* shift over to drop the original semicolon */
        *(s+i) = '\0';            /* new end of string */
        *(s+1) = ';';             /* put in the new semicolon */
    }
    if (*(s+strlen(s)-1) == ';')
        *(s+strlen(s)-1) = '\0';  /* no fractional part left: drop semicolon */
    return 0;
}
*******
The contents of the file that produces the erroneous output are:
X44;4
X;444e2
The contents of the file that triggers the segfault are:
X;3
0;X3e1
Thanks to anyone who can help me figure this out.