problem with output of the program on different OS

B

Ben Bacarisse

pereges said:
Yes, I fixed the bug that you had pointed out (uninitialized members
of E_incident and E_scattered) and other bugs that many people have
pointed out.

Did I miss the message where you explained that?
 
B

Ben Bacarisse

pereges said:
Are you really getting this output on PellesC after fixing the bug?
Yes.

What is the version?
4.50.113

I'm using PellesC 5.00.4 and it reports the same value 1.11e+01
 
B

Ben Bacarisse

Bart said:
I was also getting these weird results. I independently made the
initialisation change

Ah. Vital info. Without that, I thought you and the OP were
discussing the behaviour of an undefined program!
and got the results of 7.39... except for Pelles
which gave 7.08...;


My Pelles and the OP's gave 7.08....

In fact my Pelles was an old V2.9; I just downloaded the new version
and it gave the same results, namely 7.08.


There's lots of other uninitialised data which may or may not be
affecting anything else. However the 7.08/7.39 discrepancy *was*
traced to these zero/near zero values.

Then the problem (or the algorithm) may be unstable, but I am not
persuaded that the compiler was wrong to produce 0 for the arithmetic
test case you posted a while back.
 
B

Bart

Then the problem (or the algorithm) may be unstable, but I am not
persuaded that the compiler was wrong to produce 0 for the arithmetic
test case you posted a while back.

There was a /possibility/ of a compiler bug, but until I found one
reason for the discrepancy, we didn't know that for sure. Now it seems
Pelles was correct and the others wrong! (Because by chance Pelles
avoided the near-zero values that caused the indeterminacy later on.)

A proper fix is now up to the OP; I've already suggested taking care
when testing values against /exactly/ 0.0 or 1.0.
 
B

Ben Bacarisse

pereges said:
I had missed your post on the 2nd page, but I think BartC also pointed
out that bug. I replied to him on page 2.

Usenet has no pages. I am glad you have the program fixed now.
 
B

Ben Bacarisse

Bart said:
There was a /possibility/ of a compiler bug, but until I found one
reason for the discrepancy, we didn't know that for sure. Now it seems
Pelles was correct and the others wrong!

I think that is a stretch. I know you mean that the others did not
reveal something that you feel was important, but it is going some to
get from that to "the other compilers are wrong"!
(Because by chance Pelles
avoided the near-zero values that caused the indeterminacy later on.)

A proper fix is now up to the OP; I've already suggested taking care
when testing values against /exactly/ 0.0 or 1.0.

I agree that only the OP can really sort this out, but in general I
think your advice is too strict. Testing for exactly *equal* to zero
is obviously problematic, but deciding whether a ray does or does not
intersect a triangle using code like if (v < 0 || v > 1) should be
fine. If including or excluding a few edge rays causes a significant
change to the results, it suggests that something else is wrong.
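For example, a minimal sketch (not the OP's intersect_triangle; u and
v stand for the usual barycentric parameters):

int inside_triangle(double u, double v)
{
    /* Reject when the barycentric parameters fall outside [0,1].
       Strict comparisons against 0 and 1 are fine here: a ray that
       lands exactly on an edge may be counted either way, but if that
       choice visibly changes the totals, the algorithm is suspect. */
    if (u < 0.0 || u > 1.0)
        return 0;
    if (v < 0.0 || u + v > 1.0)
        return 0;
    return 1;
}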
 
C

Chris Dollin

Richard said:
Chris Dollin said:


Good Lord, that was months ago, wasn't it? Well, days ago, anyway. Is he
the tardy one, or are you? :)

Me. Much business porting RDF/XML IO to Jena3.
<shrug> You expected something different?

Time passes, I saw Panic Room and Mermaid Kiss; I am an optimist
again.
Remember that the above idiocy
hails from the same stable that can't distinguish between "I rarely do
such-and-such" and "I never do such-and-such", or between "I have on a few
rare occasions been able to do so-and-so" and "I can always do so-and-so".
Time spent arguing with such people is generally time wasted (which
doesn't necessarily mean it can't be fun).

If it's fun, it's not wasted. But I wasn't arguing for fun; I happened
upon the froth and wanted to call attention to its frothiness.
 
P

pereges

But I've now changed a couple of things: < 0.0 to < (-EPSILON) and >
1.0 to > (1+EPSILON).

*Now*, all my 4 compilers (5 including new Pelles) give the 7.08...
result.

(Whether that is right or not, I've still no idea.)

But, why should we change < 0.0 to < -EPSILON just to get consistent
results?

Numerical accuracy is very important in my program. A value < 0.0 is
negative. For example, I test t < 0.0: if the distance that the ray
has travelled is negative, then the intersection is behind the ray's
origin. But testing < -EPSILON will allow a lot of negative values of
t to pass through as well.
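
To make the difference concrete (a rough sketch, not the real code;
the EPSILON value is only illustrative):

#define EPSILON 1e-6

/* Strict test: anything behind the ray's origin is rejected.
   Tolerant test: slightly negative distances, e.g. t = -1e-7, are
   still accepted as hits, which is exactly the worry above. */
int hit_in_front_strict(double t) { return t >= 0.0; }
int hit_in_front_loose(double t)  { return t >= -EPSILON; }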

I'm wondering if I'm doing the wrong thing by having a tolerance
value in my program for floating point checks. If accuracy is really
important, perhaps I should do away with such checks and compare with
exact values, or maybe have an extremely small tolerance like 10e-20
or something.
 
P

pereges

btw I changed a few things in my code to make it more efficient. In
radar.c, I got rid of gen_grid_points, which was creating an array of
points, i.e. pointinarray. You are creating a huge array in
initialize_radar as well (the raylist array), and then you have two
huge arrays in memory at the same time, putting too much burden on the
system. Now, I create the points on the fly and assign them directly
to radar->raylist.origin, which consumes less memory as I have only
one array, i.e. raylist. In test.c, there is a change in the formula
for the RCS:

RCS = 4 * PI * R^2 * Es / Ei

where R is the average distance travelled by the scattered rays
(Rsum / Rcount)

Also, the amplitude of the electrical field is now of the complex form
a + ib, so there have to be two entries, real and imaginary, every
time the user enters the amplitude.

There is also a little change in the calculation of the electrical
field of reflected rays, as you can see in the cases for r.depth == 1
and r.depth == 2 in the raytrace module. Actually the formula for the
electrical field is:

E(s) = E(0) * exp(-jks)

This is for a direct ray, when the ray is travelling from the source
to the object. E(0) is the electrical field at the source, i.e. the
amplitude that you enter initially, k is the radar->wavenumber field,
and s is the distance travelled by the ray. To calculate the
electrical field for a reflected ray:

E(s) = E(P) * RM * ( 1 / (s + 1)) * exp(-jks)

where E(P) is the electrical field incident at the point (the same
point from which the reflected ray starts), s is the distance
travelled, and RM is a reflection matrix:

[ -1  0 ]
[  0  1 ]
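
(A minimal C99 sketch of those two formulas with <complex.h>, for
illustration only; the values of k, s and E0 are made up, and RM is
applied to the two field components:)

#include <complex.h>
#include <stdio.h>

int main(void)
{
    double k = 8.38;              /* wavenumber (made-up value) */
    double s = 1000.0;            /* distance travelled (made-up value) */
    double complex E0 = 10e-4;    /* amplitude entered by the user */

    /* Direct ray: E(s) = E(0) * exp(-jks) */
    double complex Ed = E0 * cexp(-I * k * s);

    /* Reflected ray: E(s) = E(P) * RM * (1/(s+1)) * exp(-jks),
       with RM = [-1 0; 0 1] flipping the sign of the first component. */
    double complex Ep[2] = { Ed, Ed };
    double complex Er[2];
    Er[0] = -Ep[0] * (1.0 / (s + 1.0)) * cexp(-I * k * s);
    Er[1] =  Ep[1] * (1.0 / (s + 1.0)) * cexp(-I * k * s);

    printf("|Ed| = %e  |Er[0]| = %e\n", cabs(Ed), cabs(Er[0]));
    return 0;
}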



Here's the code:

http://upload2.net/page/download/HaJJNGPTto8ac2z/raytrace.tar.gz.html

The code compiles and there are no errors.

For following values:

frequency: 40e+7
amplitude: 10e-4 10e-4
numberofrays: 1000

I'm getting output as:

Rsum: 25985.552124 Rcount: 25 R: 1039.422085
Es: 9.988147e-11 Ei: 3.192876e-03 RCS: 4.247141e-01
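
For what it's worth, plugging those printed values back into the RCS
formula above reproduces the reported figure (a standalone check, not
part of the program):

#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;

    /* values copied from the output above */
    double Rsum = 25985.552124, Rcount = 25.0;
    double Es = 9.988147e-11, Ei = 3.192876e-03;

    double R   = Rsum / Rcount;                /* 1039.422085 */
    double RCS = 4.0 * PI * R * R * Es / Ei;   /* 4*PI*R^2*Es/Ei */

    printf("R = %f  RCS = %e\n", R, RCS);      /* about 4.247e-01 */
    return 0;
}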
 
B

Ben Bacarisse

pereges said:

For your information, you still calculate with uninitialised data. In
radar.c after calculating the number of rays, you need to run the
loops with a <= test rather than a < test. Either that or you should
not set

radar->numberofrays = (numpointsx + 1) * (numpointsy + 1);

So, remove the +1s or change the following loop to initialise the
whole array. When I do that, I get this output:

Rsum: 25983.243363 Rcount: 25 R: 1039.329735
Es: 9.497733e-11 Ei: 3.291908e-03 RCS: 3.916416e-01
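
(For illustration only, not the actual radar.c; the sizes and the
raylist layout are made up, but this is the kind of mismatch I mean:)

#include <stdlib.h>

int main(void)
{
    int numpointsx = 10, numpointsy = 10;
    int numberofrays = (numpointsx + 1) * (numpointsy + 1);  /* 121 slots */
    double *origin = malloc(numberofrays * sizeof *origin);
    int i, j, n = 0;

    for (i = 0; i < numpointsx; i++)      /* fills only 10 * 10 = 100 slots */
        for (j = 0; j < numpointsy; j++)
            origin[n++] = 0.0;            /* the other 21 stay uninitialised */

    /* Fix: use <= in both loops, or drop the +1s above. */
    free(origin);
    return 0;
}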

I hope this helps.
 
P

pereges

For your information, you still calculate with uninitialised data. In
radar.c after calculating the number of rays, you need to run the
loops with a <= test rather than a < test. Either that or you should
not set

radar->numberofrays = (numpointsx + 1) * (numpointsy + 1);

So, remove the +1s or change the following loop to initialise the
whole array. When I do that, I get this output:

Rsum: 25983.243363 Rcount: 25 R: 1039.329735
Es: 9.497733e-11 Ei: 3.291908e-03 RCS: 3.916416e-01

True. I changed it to a <= test, but the output that I'm getting is
4.24e-01. I am compiling the code with gcc on Linux. But anyway, I'm
still in the process of changing a few things. In raytrace.c I noticed
there is a lot of repetitive code, which probably is not a good thing.
Since you have read my code, can you also please point out some poor
practices that I've used?

thanks
 
B

Bart

But, why should we change < 0.0 to < -EPSILON just to get consistent
results?

My interest in looking at this was solely to investigate discrepancies
in the output. This was traced to ambiguous behaviour when some values
were 'noisy' (e.g. near zero but not quite zero).

Changing that zero to -epsilon was tried out of interest to see if the
program became more stable (it did).

But as Ben B pointed out, you probably had more serious issues with
your code which you've now apparently fixed.

Numerical accuracy is very important in my program. A value < 0.0 is
negative. For example, I test t < 0.0: if the distance that the ray
has travelled is negative, then the intersection is behind the ray's
origin. But testing < -EPSILON will allow a lot of negative values of
t to pass through as well.

I'm wondering if I'm doing the wrong thing by having a tolerance
value in my program for floating point checks. If accuracy is really
important, perhaps I should do away with such checks and compare with
exact values, or maybe have an extremely small tolerance like 10e-20
or something.

This depends on many things. I personally think it's better to have
tolerances than not.

I've found these few lines in some ancient code of mine (this is not
C!):

if xt1>=(-minreal) and xt1<=(minreal+1.0) then ixstat1:=1 fi
if xt2>=(-minreal) and xt2<=(minreal+1.0) then ixstat1 ior:=2 fi

This is to do with finding if one edge crosses another. xt1 and xt2
are in 0..1 if the intersection is /on/ the edge.
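
(A rough C equivalent of those two lines, assuming minreal is the
tolerance and ixstat1 collects bit flags:)

int edge_status(double xt1, double xt2, double minreal)
{
    int ixstat1 = 0;
    if (xt1 >= -minreal && xt1 <= minreal + 1.0)
        ixstat1 = 1;
    if (xt2 >= -minreal && xt2 <= minreal + 1.0)
        ixstat1 |= 2;
    return ixstat1;
}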

But because of small numerical errors, sometimes xt1/2 could be
-0.000033203 or sometimes 1.00000474, when the edges join at their
endpoints for example.

If you are summing thousands of these results, maybe it doesn't matter
if you miss a few. But if you are working with just 2 edges, the user
will get upset if the program reports 2 edges do not join when clearly
they do, within tolerance!

In your code, you are reading data specified to 6 decimal places. To
me that signifies limited accuracy. /Maybe/ you are just assuming that
these figures represent the /exact/ data, or maybe not.
 
B

Ben Bacarisse

pereges said:
But, why should we change < 0.0 to < -EPSILON just to get consistent
results?

I don't think you should. You should look on the different results as
a sign that something else is wrong. Testing for *==* zero is another
matter, but you are not doing that in the cases cited.

If your algorithm is sound, and the physical properties of the problem
are reasonable, then you should get the same result (roughly)
regardless of how you decide all the boundary cases. The program
should be coded in such a way that deciding this or that edge case one
way or the other will not have a significant effect.

Take a look at all the places where you make floating point compares.
Convince yourself that small arithmetic differences will not make any
significant contribution to the result. For example, if a few rays
fall close to a triangle edge, will deciding to include or exclude that
ray make a real difference? If so, you need to rethink the algorithm.
 
B

Bart

I don't think you should.  You should look on the different results as
a sign that something else is wrong.  Testing for *==* zero is another
matter, but you are not doing that in the cases cited.

Using (v<0.0-EPSILON) (forgetting the comparison with 1.0) made
intersect_triangle give nearly 800 hits instead of just over 700,
suggesting an unhealthy clustering of dot product values around 0.0
(indicating -- I think -- a number of vectors at 90 degrees).

In this case, allowing for a small clustering near 0.0 probably won't
hurt.


-- Bartc
 
