Re: Anyone not switching to CUDA ?

Discussion in 'C Programming' started by fft1976, Jul 23, 2009.

  1. fft1976

    fft1976 Guest

    On Jul 22, 1:39 pm, Nicolas Bonneel
    <> wrote:
    > fft1976 wrote:
    >
    > > I'm curious: is anyone not switching all their new projects to CUDA
    > > and why? It seems to me that CPUs are quickly becoming irrelevant in
    > > scientific computing.

    >
    > multi-core programming is not really recent. Also, coding on the GPU has
    > already been around for several years, and CUDA is not the only way to
    > code on the GPU. Finally, Larrabee is being released soon, and I see no
    > reason to change all your code in a heavy way to make it run on the GPU
    > when Larrabee might require much fewer changes.



    Frankly, I doubt Larrabee will be anywhere near the 1.7 TFLOPS for $500
    that we can get with CUDA _today_ (for a certain fairly wide class of
    scientific problems).
     
    fft1976, Jul 23, 2009
    #1

  2. "fft1976" <> wrote in message
    news:...
    On Jul 22, 1:39 pm, Nicolas Bonneel
    <> wrote:
    > fft1976 wrote:
    >
    > > I'm curious: is anyone not switching all their new projects to CUDA
    > > and why? It seems to me that CPUs are quickly becoming irrelevant in
    > > scientific computing.

    >
    > multi-core programming is not really recent. Also, coding on the GPU has
    > already been around for several years, and CUDA is not the only way to
    > code on the GPU. Finally, Larrabee is being released soon, and I see no
    > reason to change all your code in a heavy way to make it run on the GPU
    > when Larrabee might require much fewer changes.


    <--
    Frankly, I doubt Larrabee will be anywhere near the 1.7 TFLOPS for $500
    that we can get with CUDA _today_ (for a certain fairly wide class of
    scientific problems).
    -->

    I don't use CUDA, but I also don't do scientific computing...

    I do some limited GPU-based processing, but mostly this is through the
    use of OpenGL and fragment shaders.


    I might be willing to code for the GPU if it were provided in a manner
    similar to GLSL, for example as a kind of OpenGL add-on or companion
    library. however, making my code dependent on an IMO questionable
    preprocessor and compiler toolset? probably not, even for the
    performance benefit...

    now, it probably seems like it would be less convenient to compile
    through a library-based interface, but in my experience it is actually
    far more convenient, even for things like ASM (which is traditionally
    assembled and linked statically against an app).

    though maybe counter-intuitive, the inconvenience of using a library
    interface to compile and link the code, and of accessing the
    dynamically-compiled code through function pointers, turns out IME to
    be smaller than that of having to deal with an external
    compiler/assembler and getting its output linked in...

    of course, maybe my sense of "convenience" is skewed, who knows...
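
    for what it's worth, the GLSL route really is just a handful of library
    calls at run time; a rough, untested sketch (the function name here is
    made up, and on most platforms the GL 2.0 entry points come via a loader
    such as GLEW):

    /* sketch: compiling a fragment shader at run time through the GL
       library interface; error handling mostly omitted */
    #include <GL/glew.h>
    #include <stdio.h>

    GLuint compile_fragment_program(const char *src)
    {
        const GLchar *sources[1] = { src };
        GLint ok = GL_FALSE;

        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, sources, NULL);  /* hand the source text to the driver */
        glCompileShader(sh);                   /* the driver compiles it for the GPU */
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok) { fprintf(stderr, "shader compile failed\n"); return 0; }

        GLuint prog = glCreateProgram();
        glAttachShader(prog, sh);
        glLinkProgram(prog);                   /* "link", still all in-process */
        return prog;                           /* later: glUseProgram(prog);   */
    }

    no separate toolchain, no extra build step; the source string can even
    be generated on the fly.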
     
    BGB / cr88192, Jul 23, 2009
    #2

  3. fft1976 wrote:
    > On Jul 22, 1:39 pm, Nicolas Bonneel
    > <> wrote:
    >> fft1976 wrote:
    >>
    >>> I'm curious: is anyone not switching all their new projects to CUDA
    >>> and why? It seems to me that CPUs are quickly becoming irrelevant in
    >>> scientific computing.

    >> multi-core programming is not really recent. Also, coding on the GPU has
    >> already been around for several years, and CUDA is not the only way to
    >> code on the GPU. Finally, Larrabee is being released soon, and I see no
    >> reason to change all your code in a heavy way to make it run on the GPU
    >> when Larrabee might require much fewer changes.

    >
    >
    > Frankly, I doubt Larrabee will be anywhere near the 1.7 TFLOPS for $500
    > that we can get with CUDA _today_ (for a certain fairly wide class of
    > scientific problems).


    This is not really the point. You cannot change your whole project with
    heavy modifications on the basis of a technology which is currently
    being seriously endangered. I guess Larrabee will also provide much
    better, and possibly larger, caches.
    GPUs also suffer from rounding of denormalized numbers to zero (which is
    fine for graphics, but maybe not for scientific computing); a small test
    for this is sketched below.
    Consider that to get good performance out of GPUs, you really have to
    re-think your whole project (even if it was already parallelized on the
    CPU), just because of the GPU cache.
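
    A minimal sketch for checking the flush-to-zero behaviour on one's own
    card (kernel name, launch shape and the test value are arbitrary;
    hardware or compiler settings that keep denormals print about 1e-40,
    flush-to-zero hardware prints 0):

    /* sketch: multiply a small normal float so that the exact result is
       denormal; flush-to-zero hardware returns 0 */
    #include <stdio.h>
    #include <cuda_runtime.h>

    __global__ void denormal_probe(const float *in, float *out)
    {
        out[0] = in[0] * 0.005f;     /* 2e-38 * 0.005 = 1e-40, a denormal */
    }

    int main(void)
    {
        float h_in = 2.0e-38f;       /* normal: just above FLT_MIN (~1.18e-38) */
        float h_out = -1.0f, *d_in, *d_out;
        cudaMalloc((void **)&d_in,  sizeof(float));
        cudaMalloc((void **)&d_out, sizeof(float));
        cudaMemcpy(d_in, &h_in, sizeof(float), cudaMemcpyHostToDevice);
        denormal_probe<<<1, 1>>>(d_in, d_out);
        cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("result = %g (0 means denormals are flushed)\n", h_out);
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }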

    With 32 cores featuring vector instructions that operate on 16
    single-precision numbers at a time, I guess you can indeed get
    performance similar to current GPUs, but with much less involved
    re-coding.

    If you target broad distribution of your application, you're stuck anyway
    with graphics cards which do not support CUDA (while they still support
    shaders).
    So, unless by "project" you mean a 6-month research project, I don't
    see any argument for porting tens or hundreds of thousands of lines of
    code to CUDA.

    --
    Nicolas Bonneel
    http://www-sop.inria.fr/reves/Nicolas.Bonneel/
     
    Nicolas Bonneel, Jul 23, 2009
    #3
  4. fft1976

    Phil Carmody Guest

    Nicolas Bonneel <> writes:
    > GPUs also suffer from rounding of denormalized numbers to zero (which
    > is fine for graphics, but maybe not for scientific computing).


    They also use floats rather than doubles, which are useless for
    scientific computing. I notice that the ones that do support
    doubles do so at a tenth of the performance of floats, which is
    roughly the hit you'd get if you wrote your own multi-precision
    floats.

    Then again, I tend to be a bit of Kahan-ite when it comes
    to FPUs.
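
    (For reference, the compensated summation he's famous for is only a few
    lines; a rough sketch in single precision:)

    /* sketch: Kahan (compensated) summation -- recovers most of the accuracy
       lost when naively accumulating many single-precision values */
    float kahan_sum(const float *x, int n)
    {
        float sum = 0.0f;
        float c   = 0.0f;                /* running compensation (lost low bits) */
        for (int i = 0; i < n; i++) {
            float y = x[i] - c;          /* apply the correction to the new term */
            float t = sum + y;           /* low-order bits of y are lost here... */
            c = (t - sum) - y;           /* ...and recovered here                */
            sum = t;
        }
        return sum;
    }
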
    Phil
    --
    If GML was an infant, SGML is the bright youngster far exceeds
    expectations and made its parents too proud, but XML is the
    drug-addicted gang member who had committed his first murder
    before he had sex, which was rape. -- Erik Naggum (1965-2009)
     
    Phil Carmody, Jul 23, 2009
    #4
  5. fft1976

    Guest

    In article <h48b4n$q1o$>,
    BGB / cr88192 <> wrote:
    >
    >
    >I might be willing to code for the GPU if it were provided in a means
    >similar to GLSL, for example, as a kind of OpenGL add-on or companion
    >library, however, to make my code dependent on an IMO questionable
    >preprocessor and compiler toolset, probably not, even for the performance
    >benefit...


    That was one of my two points. They are working on it. Watch this
    space.


    Regards,
    Nick Maclaren.
     
    , Jul 23, 2009
    #5
  6. "Phil Carmody" <> wrote in message
    news:...
    > Nicolas Bonneel <> writes:
    >> GPUs also suffer from rounding of denormalized numbers to zero (which
    >> is fine for graphics, but maybe not for scientific computing).

    >
    > They also use floats rather than doubles, which are useless for
    > scientific computing. I notice that the ones that do support
    > doubles do so at a tenth of the performance of floats, which is
    > roughly the hit you'd get if you wrote your own multi-precision
    > floats.
    >
    > Then again, I tend to be a bit of Kahan-ite when it comes
    > to FPUs.


    probably this would depend a lot on the type of "scientific computing", as
    there are likely some types of tasks where speed matters a whole lot more
    than accuracy...
     
    BGB / cr88192, Jul 23, 2009
    #6
  7. fft1976

    Paul Hsieh Guest

    On Jul 22, 11:54 pm, Phil Carmody <>
    wrote:
    > Nicolas Bonneel <> writes:
    > > GPUs also suffer from rounding of denormalized numbers to zero (which
    > > is fine for graphics, but maybe not for scientific computing).

    >
    > They also use floats rather than doubles, which are useless for
    > scientific computing. I notice that the ones that do support
    > doubles do so at a tenth of the performance of floats, which is
    > roughly the hit you'd get if you wrote your own multi-precision
    > floats.
    >
    > Then again, I tend to be a bit of Kahan-ite when it comes
    > to FPUs.


    Sure, sure ... but don't mistake the forest from the trees. If nVidia
    is really serious about CUDA they will be forced to make real IEEE-754
    doubles just like Intel or anyone else. I am sure that this is already
    in their pipeline of features slated for a future product.

    nVidia *wants* this to be used for supercomputing applications, and
    the TOP500 list guy refuses to accept any results from something that
    isn't 64-bit. The economic motivation should be sufficient to
    guarantee that they support it in the future. And if they don't, write
    a benchmark showing that it's slower than advertised, publicize your
    benchmark widely, and watch nVidia respond.
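
    A rough sketch of what such a microbenchmark might look like, for what
    it's worth (sizes, iteration count and constants are arbitrary, and it
    needs to be built for a double-capable part, e.g. nvcc -arch=sm_13):

    /* sketch: time a kernel of back-to-back double-precision multiply-adds
       and compare the measured GFLOPS against the advertised peak */
    #include <stdio.h>
    #include <cuda_runtime.h>

    #define N     (1 << 20)
    #define ITERS 1000

    __global__ void dp_madd(double *data)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        double x = data[i];
        for (int k = 0; k < ITERS; k++)
            x = x * 1.0000001 + 0.0000001;   /* one multiply + one add per trip */
        data[i] = x;
    }

    int main(void)
    {
        double *d;
        float ms = 0.0f;
        cudaEvent_t start, stop;

        cudaMalloc((void **)&d, N * sizeof(double));
        cudaMemset(d, 0, N * sizeof(double));
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        dp_madd<<<N / 256, 256>>>(d);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);

        printf("%.1f ms, %.2f GFLOPS double precision\n",
               ms, 2.0 * N * ITERS / (ms * 1.0e6));   /* 2 flops per iteration */
        cudaFree(d);
        return 0;
    }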

    --
    Paul Hsieh
    http://www.pobox.com/~qed/
    http://bstring.sf.net/
     
    Paul Hsieh, Jul 23, 2009
    #7
  8. fft1976

    Guest

    In article <>,
    Paul Hsieh <> wrote:
    >
    >Sure, sure ... but don't mistake the forest from the trees. If nVidia
    >is really serious about CUDA they will be forced to make real IEEE-754
    >doubles just like Intel or anyone else. I am sure that this is already
    >in their pipeline of features slated for a future product.


    I asked them that question, and their non-answer made it pretty clear
    that it is.


    Regards,
    Nick Maclaren.
     
    , Jul 23, 2009
    #8
  9. fft1976

    Phil Carmody Guest

    writes:
    > In article <>,
    > Paul Hsieh <> wrote:
    >>
    >>Sure, sure ... but don't mistake the forest from the trees. If nVidia
    >>is really serious about CUDA they will be forced to make real IEEE-754
    >>doubles just like Intel or anyone else. I am sure that this is already
    >>in their pipeline of features slated for a future product.

    >
    > I asked them that question, and their non-answer made it pretty clear
    > that it is.


    They already have doubles in their Tesla line. Their spokespeople have
    also already admitted that they'll make professional (i.e. Tesla)
    customers pay for the advanced silicon until there's demand from the
    commodity market, and that's the direction they're expecting, and
    planning, to move in.

    Phil
    --
    If GML was an infant, SGML is the bright youngster far exceeds
    expectations and made its parents too proud, but XML is the
    drug-addicted gang member who had committed his first murder
    before he had sex, which was rape. -- Erik Naggum (1965-2009)
     
    Phil Carmody, Jul 23, 2009
    #9
  10. fft1976

    Guest

    In article <>,
    Phil Carmody <> wrote:
    >
    >They already have doubles in their Tesla line. Their spokespeople have
    >also already admitted that they'll make professional (i.e. Tesla)
    >customers pay for the advanced silicon until there's demand from the
    >commodity market, and that's the direction they're expecting, and
    >planning, to move in.


    Er, have you any idea how slow they are? Real programmers seem to
    agree that a Tesla is 3-4 times faster than a quad-core Intel CPU
    in double precision, which doesn't really repay the huge investment
    in rewriting in CUDA.

    I believe that a future version of the Tesla will improve the speed
    of double precision considerably.


    Regards,
    Nick Maclaren.
     
    , Jul 23, 2009
    #10
  11. fft1976

    Rui Maciel Guest

    Nicolas Bonneel wrote:

    > This is not really the point. You cannot change your whole project with
    > heavy modifications on the basis of a technology which is currently
    > being seriously endangered.


    On what exactly do you base your "seriously endangered" allegation? And won't Larrabee also force those who
    wish to adopt it to "heavily modify" their projects?


    > I guess Larrabee will also provide much
    > better, and possibly larger cache.


    According to what? And compared to what?


    > GPUs also suffer from rounding of denormalized numbers to zero (which is
    > fine for graphics, but maybe not for scientific computing).


    It is reported that CUDA's double precision is fully IEEE-754 compliant and I believe that all deviations
    from that standard are explicitly listed in the CUDA programming guide.


    > Think that to get good performance with GPUs, you really have to
    > re-think your whole project (even if it was already parallelized on CPU)
    > just because of the GPU cache.


    Why do you believe that Larrabee's case will be any different?


    > With 32 cores featuring vector instructions of 16 single precision
    > numbers at the same time, I guess you can indeed get performances
    > similar to current GPUs. But with much less involved re-coding.


    As there are no Larrabee benchmarks available, and until a while ago Larrabee was nothing more than a press
    release and a PowerPoint presentation, on what exactly do you base your performance claims? And have you
    looked into the C++ Larrabee prototype library? The examples listed there are more convoluted and
    unreadable than anything in CUDA's programming guide.


    > If you target broad diffusion of your application, you're stuck anyway
    > with graphic cards which do not support CUDA (while they still support
    > shaders).


    Why exactly do you believe that you cannot "target broad diffusion" if you choose to program for a piece
    of hardware that has nearly 75% of the market share? And why exactly do you believe that, instead of
    targeting that 75% market share, you are better off targeting a piece of hardware that doesn't even exist?


    > So, except if by "project" you mean a 6 month research project, I don't
    > find any argument to port tens/hundreds of thousands lines of code to
    > CUDA.


    And instead you believe it is better to base your "projects" on non-existent, unproven hardware? That
    advice doesn't seem sound at all.


    Rui Maciel
     
    Rui Maciel, Jul 24, 2009
    #11
  12. fft1976

    Phil Carmody Guest

    writes:
    > In article <>,
    > Phil Carmody <> wrote:
    >>
    >>They already have doubles in their Tesla line. Their spokespeople have
    >>also already admitted that they'll make professional (i.e. Tesla)
    >>customers pay for the advanced silicon until there's demand from the
    >>commodity market, and that's the direction they're expecting, and
    >>planning, to move in.

    >
    > Er, have you any idea how slow they are?


    Yes. Message-ID: <>

    > Real programmers seem to
    > agree that a Tesla is 3-4 times faster than a quad-core Intel CPU
    > in double precision, which doesn't really repay the huge investment
    > in rewriting in CUDA.


    I'm glad to know that this newsgroup is blessed by the presence of
    someone familiar with the priorities of every single company, university,
    and programmer in the world. We are truly honoured.

    > I believe that a future version of the Tesla will improve the speed
    > of double precision considerably.


    Technology will improve in the future. Wow, I'm so jealous of
    your remarkable insights.

    Phil
    --
    If GML was an infant, SGML is the bright youngster far exceeds
    expectations and made its parents too proud, but XML is the
    drug-addicted gang member who had committed his first murder
    before he had sex, which was rape. -- Erik Naggum (1965-2009)
     
    Phil Carmody, Jul 24, 2009
    #12
  13. fft1976

    user923005 Guest

    On Jul 23, 12:45 pm, wrote:
    > In article <>,
    > Phil Carmody  <> wrote:
    >
    >
    >
    > >They already have doubles in their Tesla line. Their spokespeople have
    > >also already admitted that they'll make professional (i.e. Tesla)
    > >customers pay for the advanced silicon until there's demand from the
    > >commodity market, and that's the direction they're expecting, and
    > >planning, to move in.

    >
    > Er, have you any idea how slow they are?  Real programmers seem to
    > agree that a Tesla is 3-4 times faster than a quad-core Intel CPU
    > in double precision, which doesn't really repay the huge investment
    > in rewriting in CUDA.
    >
    > I believe that a future version of the Tesla will improve the speed
    > of double precision considerably.


    I see it like this:
    CUDA is nice for some set of problems.
    To use CUDA you need special CUDA hardware and the special CUDA
    compiler. The computer I am writing this post on has a CUDA graphics
    card, but it won't perform better than a normal CPU for most math
    operations. If you want the blazing speed, you need one of those
    fancy-pants CUDA cards that has a giant pile of CPUs[1]. At some point
    the fancy-pants CUDA card will no longer be faster than other kinds of
    hardware, and you will then have to choose between upgrading the CUDA
    card and moving to some other hardware. You can't use recursion with
    CUDA. And moving the data on and off the card is a problem unless there
    is a huge amount of crunching going on in the GPU pile.
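
    To make that last point concrete, here is a rough sketch of the
    copy-in / crunch / copy-out pattern (names and sizes made up); unless
    the kernel does a lot of arithmetic per byte, the two cudaMemcpy calls
    over PCIe dominate the total time:

    /* sketch: trivial kernel bracketed by PCIe transfers; about 2 flops for
       every 8 bytes moved, so the copies dominate */
    #include <cuda_runtime.h>

    #define N (1 << 24)                       /* 16M floats = 64 MB */

    __global__ void crunch(float *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] = x[i] * 2.0f + 1.0f;
    }

    void process_on_gpu(float *host_buf)     /* host_buf holds N floats */
    {
        float *dev_buf;
        cudaMalloc((void **)&dev_buf, N * sizeof(float));
        cudaMemcpy(dev_buf, host_buf, N * sizeof(float), cudaMemcpyHostToDevice);
        crunch<<<(N + 255) / 256, 256>>>(dev_buf, N);
        cudaMemcpy(host_buf, dev_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev_buf);
    }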

    It's kind of like the Deep Blue chess computer. When it was created,
    nothing came remotely close to its chess compute power. But hardware
    advances exponentially, so soon a $2000 desktop system will be as fast
    or faster.

    I think CUDA is neat. We have the fast CUDA cards in house here and
    the CUDA compiler and we have done some fooling around with it to try
    to figure out what we can do with it.
    But if you do make some CUDA solutions, your customers will also need
    the CUDA cards unless you only plan to run it in house.

    In short, CUDA is cool, but it's not a panacea. No hardware solution
    will ever be the ideal solution because hardware ages. Prophecy is
    tricky for us humans. We tend to get it wrong in the long run, most
    of the time.

    [1] Not every CUDA card is a GeForce GTX 280M.
     
    user923005, Jul 24, 2009
    #13
  14. fft1976

    bartc Guest

    "Les Neilson" <> wrote in message
    news:...
    >
    > "BGB / cr88192" <> wrote in message
    > news:h4a844$d4g$...
    >>
    >> "Phil Carmody" <> wrote in message
    >> news:...
    >>> Nicolas Bonneel <> writes:
    >>>> GPUs also suffer from rounding of denormalized numbers to zero (which
    >>>> is fine for graphics, but maybe not for scientific computing).
    >>>
    >>> They also use floats rather than doubles, which are useless for
    >>> scientific computing. I notice that the ones that do support
    >>> doubles do so at a tenth of the performance of floats, which is
    >>> roughly the hit you'd get if you wrote your own multi-precision
    >>> floats.
    >>>
    >>> Then again, I tend to be a bit of Kahan-ite when it comes
    >>> to FPUs.

    >>
    >> probably this would depend a lot on the type of "scientific computing",
    >> as there are likely some types of tasks where speed matters a whole lot
    >> more than accuracy...
    >>

    >
    > Surely you mean precision rather than accuracy ?
    > Isn't it better to get accurate (correct) results, albeit slowly, than to
    > get inaccurate (wrong) results really fast ?
    >
    > But yes it does depend on the type of scientific computing - it depends on
    > the range of values covered and the precision to which they are sensitive
    > before errors become significant.
    > When constructing the Channel Tunnel (digging from both ends
    > simultaneously) overall distances were large but they still had to ensure
    > that they met without too much of a "step difference" in elevation or
    > horizontal plane. I understand that when they did meet, the centre points
    > (as measured with lasers) were "off" by only a few millimeters.


    I doubt speed was too important in that case. The tunnel took several years
    to construct.

    And probably a precision nearer 32-bit than 64-bit would have sufficed.

    --
    Bart
     
    bartc, Jul 24, 2009
    #14
  15. fft1976

    James Kuyper Guest

    Les Neilson wrote:
    >
    > "BGB / cr88192" <> wrote in message
    > news:h4a844$d4g$...
    >>
    >> "Phil Carmody" <> wrote in message
    >> news:...
    >>> Nicolas Bonneel <> writes:
    >>>> GPUs also suffer from rounding of denormalized numbers to zero (which
    >>>> is fine for graphics, but maybe not for scientific computing).
    >>>
    >>> They also use floats rather than doubles, which are useless for
    >>> scientific computing. I notice that the ones that do support
    >>> doubles do so at a tenth of the performance of floats, which is
    >>> roughly the hit you'd get if you wrote your own multi-precision
    >>> floats.
    >>>
    >>> Then again, I tend to be a bit of Kahan-ite when it comes
    >>> to FPUs.

    >>
    >> probably this would depend a lot on the type of "scientific
    >> computing", as there are likely some types of tasks where speed
    >> matters a whole lot more than accuracy...
    >>

    >
    > Surely you mean precision rather than accuracy ?
    > Isn't it better to get accurate (correct) results, albeit slowly, than
    > to get inaccurate (wrong) results really fast ?


    I think you're interpreting "accuracy" in a different way than BGB
    did. It's not a binary choice between correct and incorrect; it's a
    quantity describing the difference between the calculated result and
    the actual value. You can't expect high accuracy from a calculated
    value unless it also has high precision, but it's the accuracy which
    actually matters.

    A certain degree of inaccuracy is unavoidable in most cases of interest,
    and it's a legitimate question to ask what level of inaccuracy is
    acceptable. Sometimes, it is indeed the case that a less accurate
    result, obtained more quickly, is much more useful than a more accurate
    result, obtained more slowly.

    There are some things that can be calculated, in principle, with
    infinite accuracy; but only at the expense of infinite amounts of
    computation time - if, as you suggest, inaccuracy should always be
    avoided, then there would be no basis for deciding when to stop
    improving the accuracy of such calculations.

    Some real scientific calculations require results accurate to 20 or so
    decimal places. Other calculations would remain useful even if they were
    off by several orders of magnitude. Most calculations fall somewhere in
    between those two extremes. Whether a computation can be done with
    acceptable accuracy using single precision, or whether it requires
    double precision (or even more!), depends upon the purpose of the
    calculation. No one rule covers all cases.
     
    James Kuyper, Jul 24, 2009
    #15
  16. fft1976

    fj Guest

    On 24 Jul, 11:21, "bartc" <> wrote:
    > "Les Neilson" <> wrote in message
    >
    > news:...
    >
    >
    >
    >
    >
    > > "BGB / cr88192" <> wrote in message
    > >news:h4a844$d4g$...

    >
    > >> "Phil Carmody" <> wrote in message
    > >>news:...
    > >>> Nicolas Bonneel <> writes:
    > >>>> GPUs also suffer from rounding of denormalized numbers to zero (which
    > >>>> is fine for graphics, but maybe not for scientific computing).

    >
    > >>> They also use floats rather than doubles, which are useless for
    > >>> scientific computing. I notice that the ones that do support
    > >>> doubles do so at a tenth of the performance of floats, which is
    > >>> roughly the hit you'd get if you wrote your own multi-precision
    > >>> floats.

    >
    > >>> Then again, I tend to be a bit of Kahan-ite when it comes
    > >>> to FPUs.

    >
    > >> probably this would depend a lot on the type of "scientific computing",
    > >> as there are likely some types of tasks where speed matters a whole lot
    > >> more than accuracy...

    >
    > > Surely you mean precision rather than accuracy ?
    > > Isn't it better to get accurate (correct) results, albeit slowly, than to
    > > get inaccurate (wrong) results really fast ?

    >
    > > But yes it does depend on the type of scientific computing - it depends on
    > > the range of values covered and the precision to which they are sensitive
    > > before errors become significant.
    > > When constructing the Channel Tunnel (digging from both ends
    > > simultaneously) overall distances were large but they still had to ensure
    > > that they met without too much of a "step difference" in elevation or
    > > horizontal plane. I understand that when they did meet, the centre points
    > > (as measured with lasers) were "off" by only a few millimeters.

    >
    > I doubt speed was too important in that case. The tunnel took several years
    > to construct.
    >
    > And probably a precision nearer 32-bit than 64-bit would have sufficed.


    Not sure:
    - tunnel length about 50 km (37 under the sea)
    - precision to reach in the computations: 5 mm (probably less)

    => at least 7 orders of magnitude to cover

    As one easily loses 2 orders of magnitude in the computations, single
    precision (8 significant digits) seems not enough.
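
    A quick way to put a number on this (a small sketch): the gap between
    adjacent single-precision values near 50 km is already about 4 mm, so a
    5 mm budget leaves almost no room for accumulated round-off.

    /* sketch: spacing between adjacent floats near 50 km, in metres */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float len = 50000.0f;                         /* 50 km in metres */
        float ulp = nextafterf(len, 1.0e30f) - len;   /* gap to the next float */
        printf("float resolution at 50 km: %g m\n", ulp);   /* about 0.0039 */
        return 0;
    }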

    >
    > --
    > Bart
     
    fj, Jul 24, 2009
    #16
  17. fft1976

    Guest

    In article <>,
    fj <> wrote:
    >On 24 Jul, 11:21, "bartc" <> wrote:
    >>
    >> And probably a precision nearer 32-bit than 64-bit would have sufficed.

    >
    >Not sure:
    >- tunnel length about 50 km (37 under the sea)
    >- precision to reach in the computations: 5 mm (probably less)
    >
    >=> at least 7 orders of magnitude to cover
    >
    >As one easily loses 2 orders of magnitude in the computations, single
    >precision (8 significant digits) seems not enough.


    Even if IEEE 754 delivered 8 significant digits, which it doesn't :)
    It's just below 7.



    Regards,
    Nick Maclaren.
     
    , Jul 24, 2009
    #17
  18. fft1976

    fj Guest

    On 24 Jul, 12:21, wrote:
    > In article <..com>,
    >
    > fj  <> wrote:
    > >On 24 Jul, 11:21, "bartc" <> wrote:

    >
    > >> And probably a precision nearer 32-bit than 64-bit would have sufficed..

    >
    > >Not sure :
    > >- tunnel length about 50km (37 under the sea)
    > >- precision to reach in computations : 5mm (probably less)

    >
    > >=> at least 7 orders of magnitude to cover

    >
    > >As one easily loses 2 orders of magnitude in the computations, single
    > >precision (8 significant digits) seems not enough.

    >
    > Even if IEEE 754 delivered 8 significant digits, which it doesn't :)
    > It's just below 7.


    Oops, you are right! But as I rarely use single-precision values
    (except for graphics), I had forgotten that... Another explanation,
    which I prefer to forget immediately: Alzheimer's :-(

    >
    > Regards,
    > Nick Maclaren.
     
    fj, Jul 24, 2009
    #18
  19. James Kuyper wrote:
    > Les Neilson wrote:
    >> Surely you mean precision rather than accuracy ?
    >> Isn't it better to get accurate (correct) results, albeit slowly, than
    >> to get inaccurate (wrong) results really fast ?

    >
    > I think you're interpreting "accuracy" in a different way than BGB
    > did. It's not a binary choice between correct and incorrect, it's a
    > quantity describing the difference between the calculated result and
    > actual value. You can't expect high accuracy from a calculated value

    [...]
    >
    > Some real scientific calculations require results accurate to 20 or so
    > decimal places. Other calculations would remain useful even if they were
    > off by several orders of magnitude. Most calculations fall somewhere in
    > between those two extremes. Whether a computation can be done with
    > acceptable accuracy using single precision, or whether it requires
    > double precision (or even more!), depends upon the purpose of the
    > calculation. No one rule covers all cases.


    No, precision and accuracy are different. When you say "scientific
    calculations require results accurate to 20 or so decimal places", you
    mean *precise* to 20 decimal places.
    If the true expected value should be 0 and your algorithm is converging
    toward 0, it is accurate, even if you end up with 0.5 because you didn't
    run enough iterations (in which case it is not precise enough).
    But if your algorithm is converging toward 1, then the algorithm is not
    accurate, even if at the end you also get 0.5 because you didn't run
    enough iterations (in which case you're both imprecise and inaccurate).


    --
    Nicolas Bonneel
    http://www-sop.inria.fr/reves/Nicolas.Bonneel/
     
    Nicolas Bonneel, Jul 24, 2009
    #19
  20. fft1976

    pdpi Guest

    On Jul 24, 9:09 am, "Les Neilson" <> wrote:
    > Surely you mean precision rather than accuracy ?
    > Isn't it better to get accurate (correct) results, albeit slowly, than to
    > get inaccurate (wrong) results really fast ?


    I trust that this isn't what he meant, but if your inaccurate results
    are unbiased, then the optimal method to get the best accuracy in the
    least time might be to simply make loads of measurements, average
    them, and let the law of large numbers work its magic.
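
    A toy sketch of that effect (the "measurements" here are just a true
    value plus made-up uniform noise); the error of the average shrinks
    roughly like 1/sqrt(N):

    /* sketch: averaging N unbiased noisy measurements of the same quantity */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const double truth = 3.14159265358979;
        srand(12345);
        for (int n = 1; n <= 1000000; n *= 10) {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                sum += truth + (rand() / (double)RAND_MAX - 0.5);  /* unbiased noise */
            printf("N = %7d  average = %.6f  error = %+.6f\n",
                   n, sum / n, sum / n - truth);
        }
        return 0;
    }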
     
    pdpi, Jul 24, 2009
    #20