Understanding precision/recall graph

  • Thread starter Maciej Gawinecki
  • Start date
Maciej Gawinecki

Two questions related to the topic:

1. If I have an empty set of relevant results, then it would be better
to have no answers from the system at all. But neither precision nor
recall penalizes returning false positives in this case
(0/1 = 0/2 = ... = 0/100). How do people handle this? Is there another
measure for these cases?
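To make the problem concrete, here is a minimal sketch (using the ordinary set-based definition of precision; the `precision` helper and document names are mine, for illustration) showing that with an empty relevant set, precision comes out as 0 no matter how many false positives are returned:

```python
def precision(retrieved, relevant):
    """Set-based precision: |retrieved ∩ relevant| / |retrieved|."""
    if not retrieved:
        return None  # undefined: the system returned nothing
    return len(set(retrieved) & set(relevant)) / len(retrieved)

relevant = set()  # empty relevant set

# 1, 2, or 100 returned documents: precision is 0.0 every time,
# so the measure cannot distinguish between them.
for k in (1, 2, 100):
    retrieved = [f"doc{i}" for i in range(k)]
    print(k, precision(retrieved, relevant))
```

Note also that recall is 0/0 (undefined) here, which is part of why this case is awkward for the standard measures.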

2. Let's say I have the ranking:

1. A
2. B *
3. C
4. D *
5. F

where the relevant answers are B, D, E, and the relevant answers found
by the system are marked with a star (*).

Then at recall level 1/3 I have precision 1/2,
and at recall level 2/3 I have precision 2/4 = 1/2.

The last position of the ranking, a false positive, is not counted in
the precision/recall measure, since this measure considers "only
positions where an increase in recall is produced". I have a system
which returns some false positives at the end of the ranking, but how
can I measure or compare its effectiveness against other systems if
precision/recall does not take this into account?
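The calculation above can be sketched as follows (the `precision_at_recall_levels` helper is my own name for it; the ranking and relevant set are the ones from the example). It records a (recall, precision) point only at positions where recall increases, which makes the issue visible: appending more false positives to the end of the ranking leaves the points unchanged.

```python
def precision_at_recall_levels(ranking, relevant):
    """(recall, precision) at each rank where a relevant document appears."""
    hits = 0
    points = []
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / i))
    return points

relevant = {"B", "D", "E"}

# The ranking from the example: B at rank 2, D at rank 4.
print(precision_at_recall_levels(["A", "B", "C", "D", "F"], relevant))
# → [(1/3, 1/2), (2/3, 1/2)], matching the values computed above.

# Two extra trailing false positives produce exactly the same points.
print(precision_at_recall_levels(["A", "B", "C", "D", "F", "G", "H"], relevant))
```

So the two systems are indistinguishable under this measure, which is precisely the question being asked.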

TIA,
Maciej