Understanding precision/recall graph

Discussion in 'XML' started by Maciej Gawinecki, Oct 3, 2008.

  1. Two questions related to the topic

    1. If I have an empty set of relevant results, then it would be
    better for the system to return no answers at all. But neither
    precision nor recall penalizes returning false positives in this
    case (0/1 = 0/2 = ... = 0/100). How do people handle this? Is
    there another measure for such cases?
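
    The degenerate case can be sketched as follows (a minimal
    illustration, not from the original post; `precision` is a
    hypothetical helper written for this sketch):

```python
# Sketch of the degenerate case above: the relevant set is empty, so
# every returned item is a false positive, yet precision is 0 no
# matter how many items the system returns (and recall is 0/0,
# i.e. undefined).

def precision(retrieved, relevant):
    """Fraction of retrieved items that are relevant."""
    if not retrieved:
        return None  # undefined for an empty answer set
    hits = sum(1 for item in retrieved if item in relevant)
    return hits / len(retrieved)

relevant = set()  # no relevant documents exist
for k in (1, 2, 100):
    retrieved = [f"doc{i}" for i in range(k)]
    print(k, precision(retrieved, relevant))  # 0.0 for every k
```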

    2. Let's say I have the ranking:

    1. A
    2. B *
    3. C
    4. D *
    5. F

    where the relevant answers are B, D, and E, and the relevant
    answers found by the system are marked with a star (*).

    Then for recall level 1/3 I have precision 1/2, and for recall
    level 2/3 I have precision 2/4 = 1/2.

    The last position of the ranking, a false positive, is not counted
    in the precision/recall measure, since in this measure "only
    positions where an increase in recall is produced" are considered.
    I have a system that returns some false positives at the end of
    the ranking; how can I measure and compare its effectiveness
    against other systems if precision/recall does not take them into
    account?
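
    The computation above can be sketched as follows (an illustrative
    helper written for this sketch, not a standard library call):

```python
# Sketch: precision at each rank where recall increases, for the
# ranking A, B*, C, D*, F with relevant set {B, D, E}. Trailing
# false positives never produce a point, which is exactly the
# problem described above.

def precision_at_recall_points(ranking, relevant):
    """Return (recall, precision) pairs at the ranks where a
    relevant item is retrieved."""
    points = []
    hits = 0
    for rank, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))
    return points

ranking = ["A", "B", "C", "D", "F"]
relevant = {"B", "D", "E"}
print(precision_at_recall_points(ranking, relevant))
# [(1/3, 1/2), (2/3, 1/2)]; the false positive at rank 5 never
# contributes a point
```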

    TIA,
    Maciej
    Maciej Gawinecki, Oct 3, 2008
    #1

  2. Peter Flynn

    Peter Flynn Guest

    Maciej Gawinecki wrote:
    > Two questions related to the topic


    WTF has this got to do with SGML or XML?

    ///Peter

    [Followups reset]

    Peter Flynn, Oct 3, 2008
    #2
