RSS aggregator with curses and feedparser


Roberto Bechtlufft

Hi, I'm new around here. I'm a Python hobbyist, far from being a
professional programmer, so please be patient with me.

I'm working on my first Python program: a curses-based RSS aggregator.
It's basically a clone of snownews, one of my very favorite programs,
but I want to add some functionality. Whenever you update a feed in
snownews, it discards all previous topics, even ones you may not have
read, and keeps only the current ones. I want my program to actually
aggregate feeds. I also want it to read Atom feeds, which is easy since
I'm using feedparser. I see that liferea keeps a cache file for each
feed it downloads, storing all of its topics, so I'm doing the same
here.

A question: how do I tell my program whether a certain entry has
already been downloaded? Should I use the date or the link tag of the
entry?
 

Roberto Bechtlufft

And another thing: feedparser returns the resulting entries as
dictionaries. What's the best approach for creating my cache file? I
see that liferea's cache file is an XML file. Should I try to create
my own XML file based on the results from feedparser?

Thanks for your help.
 

James Graham

Roberto said:
And another thing: feedparser returns the result entries as
dictionaries. What's the best approach to create my cache file? I see
that the cache file in liferea is an xml file. Should I try to create
my own xml file based on the results from feedparser?

Well, you could, using elementtree or whatever, but there's no
particular reason to use XML over anything else. It's semi-human-readable,
which is nice, but if you're just serializing dicts, a JSON library
(e.g. [1]) might do all you need out of the box. Alternatively, if you
don't care about the local format being human-readable, you could
simply use the built-in pickle module to save your state.

(Note that people tend to dislike top posting because, as you can see,
it tends to screw up the order of replies.)

Assuming the feed is Atom, you want to look at the entry's GUID to
determine whether you have already downloaded it. That may also work
for RSS feeds, although I'm not sure how well RSS feeds in the wild
stick to the "Globally Unique" part of GUID... but this is more of a
feed-handling question than a Python one.
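A sketch of that check (feedparser maps both the Atom id and the RSS guid onto the entry's "id" key; the link fallback and the function names are my own assumptions for feeds that omit a GUID):

```python
def entry_key(entry):
    # Prefer the feed's stable identifier (Atom <id> / RSS <guid>,
    # exposed by feedparser as entry["id"]); fall back to the link
    # when the feed doesn't provide one.
    return entry.get("id") or entry.get("link")

def new_entries(fetched, seen_keys):
    # Return only the entries whose key is not yet in seen_keys,
    # adding each new key to the set as we go.
    fresh = []
    for entry in fetched:
        key = entry_key(entry)
        if key and key not in seen_keys:
            seen_keys.add(key)
            fresh.append(entry)
    return fresh
```

The seen_keys set would be rebuilt from the cache file on startup, so only genuinely new topics get appended on each update.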

[1] http://cheeseshop.python.org/pypi/python-json/
 
