Understanding precision/recall graph

Discussion in 'XML' started by Maciej Gawinecki, Oct 3, 2008.

  1. Two questions related to the topic

    1. If the set of relevant results is empty, then the best response
    would be for the system to return no answers at all. But neither
    precision nor recall penalizes returning false positives in this
    case (0/1 = 0/2 = ... = 0/100). How do people handle this? Is there
    another measure for such cases?
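    A minimal sketch of the problem in Python. The precision/recall
    definitions are the standard set-based ones; treating an empty
    retrieval as precision 1.0 and the `fallout` helper (false-positive
    rate) are my own illustrative choices, not something established in
    this thread:

    ```python
    def precision(retrieved, relevant):
        # Fraction of retrieved items that are relevant. Here 0/0 is
        # treated as 1.0: returning nothing when nothing is relevant
        # is the best possible behaviour (an assumed convention).
        if not retrieved:
            return 1.0
        return len(set(retrieved) & set(relevant)) / len(retrieved)

    def fallout(retrieved, relevant, collection_size):
        # Fraction of the NON-relevant collection that was retrieved.
        # Unlike precision/recall with an empty relevant set, this
        # grows with every extra false positive.
        non_relevant = collection_size - len(set(relevant))
        if non_relevant == 0:
            return 0.0
        false_positives = len(set(retrieved) - set(relevant))
        return false_positives / non_relevant

    relevant = []  # empty relevant set, as in the question
    print(precision([], relevant))              # 1.0: returning nothing
    print(precision(["A", "B"], relevant))      # 0.0: same as 100 FPs
    print(fallout(["A", "B"], relevant, 1000))  # 0.002: penalizes FPs
    ```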

    2. Let's say I have the ranking:

    1. A
    2. B *
    3. C
    4. D *
    5. F

    where the relevant answers are B, D, E, and the relevant answers
    found by the system are marked with a star (*).

    Then for recall level 1/3 I have precision 1/2,
    for recall level 2/3 I have precision 2/4=1/2.

    The last position of the ranking, a false positive, is not counted
    in the precision/recall measure, since this measure considers "only
    positions where an increase in recall is produced". I have a system
    which returns some false positives at the end of the ranking, but
    how can I measure it and compare it with other systems in terms of
    effectiveness, if precision/recall does not take this into account?
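    The recall points from the example can be reproduced in a few lines
    of Python, which also shows the problem concretely: average
    precision (AP) is unchanged by trailing false positives, while
    plain precision over the whole returned list does penalize them.
    The function names here are illustrative, not from any library:

    ```python
    def precision_at_recall_points(ranking, relevant):
        # (recall, precision) at each rank where a relevant item occurs.
        relevant = set(relevant)
        hits, points = 0, []
        for k, item in enumerate(ranking, start=1):
            if item in relevant:
                hits += 1
                points.append((hits / len(relevant), hits / k))
        return points

    def average_precision(ranking, relevant):
        # Mean of the precision values at relevant ranks, divided by
        # the total number of relevant items (missed ones count as 0).
        points = precision_at_recall_points(ranking, relevant)
        return sum(p for _, p in points) / len(set(relevant))

    ranking = ["A", "B", "C", "D", "F"]
    relevant = ["B", "D", "E"]

    print(precision_at_recall_points(ranking, relevant))
    # matches the example: recall 1/3 -> precision 1/2,
    #                      recall 2/3 -> precision 2/4 = 1/2

    # AP ignores trailing false positives:
    print(average_precision(ranking, relevant))               # (0.5+0.5)/3
    print(average_precision(ranking + ["G", "H"], relevant))  # unchanged

    # Set-based precision over everything returned does penalize them:
    print(len(set(ranking) & set(relevant)) / len(ranking))   # 2/5
    ```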

    TIA,
    Maciej
     
    Maciej Gawinecki, Oct 3, 2008
    #1

  2. Peter Flynn

    Peter Flynn Guest

    Maciej Gawinecki wrote:
    > Two questions related to the topic


    WTF has this got to do with SGML or XML?

    ///Peter

    [Followups reset]

     
    Peter Flynn, Oct 3, 2008
    #2