Server/Clients system


deadpickle

First of all, I haven't written any code for this yet; I'm still in the
brainstorming stage. What I want to do is have a server that resides
on a networked computer somewhere. This server will receive files from
2 clients. Then these 2 clients will ask the server for another file
and the server will send it. So in summary, I have 2 clients that can
send and receive files and a server that can receive and send files.
Hope that makes sense. I am not sure how to do this, or how to get
started; anyone got any ideas?
 

Ted Zlatanov

d> First of all, I haven't written any code for this yet; I'm still in the
d> brainstorming stage. What I want to do is have a server that resides
d> on a networked computer somewhere. This server will receive files from
d> 2 clients. Then these 2 clients will ask the server for another file
d> and the server will send it. So in summary, I have 2 clients that can
d> send and receive files and a server that can receive and send files.
d> Hope that makes sense. I am not sure how to do this, or how to get
d> started; anyone got any ideas?

Well, you could implement this with FTP or HTTP (HTTP has a "PUT"
command), reimplementing as much of the protocol as you desire. Are
you trying to implement something new as a fun project, or is this
real work? In the real world I'd avoid writing new protocols when so
many good ones exist already (implemented in C, bug-free, etc.).
Depending on your server and client, and your particular needs, you
could go with

FTP
HTTP
rsync
WebDAV
SCP/SFTP
BitTorrent
NFS/AFS/Samba/etc. networked filesystems

plus many others I can't think of right now. Note these are sorted by
length, not by how much I like them :)
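For instance, a rough sketch of the HTTP route from the client side
(untested, and the http://server.example.com/files URL is just a
stand-in for wherever your server lives):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(PUT);

my $ua   = LWP::UserAgent->new;
my $base = 'http://server.example.com/files';   # stand-in for your server

# send a file to the server with PUT
open my $fh, '<', 'report.txt' or die "open: $!";
my $data = do { local $/; <$fh> };
close $fh;
my $res = $ua->request(PUT "$base/report.txt", Content => $data);
die "PUT failed: ", $res->status_line unless $res->is_success;

# ask the server for another file with GET
$res = $ua->get("$base/other.txt");
die "GET failed: ", $res->status_line unless $res->is_success;
print $res->content;

The server side can be anything that speaks HTTP - see the other
replies in this thread.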

Ted
 

deadpickle

Well, you could implement this with FTP or HTTP (HTTP has a "PUT"
command), reimplementing as much of the protocol as you desire. Are
you trying to implement something new as a fun project, or is this
real work? [...]

This is for an undergraduate project at my university. I'm thinking
about using BitTorrent. I want this whole system to be autonomous and
to make many transfers every minute; can BitTorrent do this?
 

Brian Wakem

deadpickle said:
So in summary, I have 2 clients that can send and receive files and a
server that can receive and send files. [...] I am not sure how to do
this, or how to get started; anyone got any ideas?


I would just use Apache as the server and write a CGI script (or mod_perl)
to deal with the files. No point re-inventing the wheel.
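Something like this on the server side, say (a bare-bones sketch,
untested; the /var/spool/files directory and the form field names are
made up, and real code would want much better error handling):

#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use File::Copy qw(copy);

my $q     = CGI->new;
my $spool = '/var/spool/files';    # made-up storage directory

if ($q->request_method eq 'POST') {
    # a client sent a file in the form field "file" - store it
    my $fh = $q->upload('file') or die "no upload\n";
    my ($name) = scalar($q->param('file')) =~ m{([\w.-]+)$}
        or die "bad filename\n";               # untaint the basename
    copy($fh, "$spool/$name") or die "copy: $!\n";
    print $q->header('text/plain'), "stored $name\n";
}
else {
    # a client asked for a file by name - send it back
    my ($name) = ($q->param('name') || '') =~ m{([\w.-]+)$}
        or die "bad filename\n";
    open my $in, '<', "$spool/$name" or die "open: $!\n";
    print $q->header('application/octet-stream');
    print while <$in>;
}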
 

Peter J. Holzer

Well, you could implement this with FTP or HTTP (HTTP has a "PUT"
command), reimplementing as much of the protocol as you desire. [...]
This is for an undergraduate project at my university. I'm thinking
about using BitTorrent. I want this whole system to be autonomous and
to make many transfers every minute; can BitTorrent do this?

BitTorrent doesn't sound like a good choice: Firstly, it doesn't have an
upload capability (AFAIK). Secondly, it isn't designed to transfer many
small files between a server and a small number of clients - it is
designed to distribute large files to a large number of clients.

Writing a BitTorrent implementation might be fun and instructive, but
for your (stated) needs it sounds like overkill. HTTP is probably the
simplest protocol for that purpose (as long as you don't implement all
of RFC 2616).
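To give you an idea of the size of the job: a toy server handling just
GET and PUT can be done with HTTP::Daemon from libwww-perl in a page of
code (a sketch, untested; the /tmp/spool directory is made up):

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Status qw(RC_CREATED RC_NOT_FOUND RC_METHOD_NOT_ALLOWED);

my $spool = '/tmp/spool';                        # made-up storage dir
my $d = HTTP::Daemon->new(LocalPort => 8080) or die $!;
print "listening at ", $d->url, "\n";

while (my $c = $d->accept) {
    while (my $req = $c->get_request) {
        # map the request path to a file in the spool, no subdirectories
        my ($name) = $req->uri->path =~ m{^/([\w.-]+)$};
        if (!$name) {
            $c->send_error(RC_NOT_FOUND);
        }
        elsif ($req->method eq 'PUT') {          # client uploads a file
            open my $out, '>', "$spool/$name" or die $!;
            print $out $req->content;
            close $out;
            $c->send_status_line(RC_CREATED);
        }
        elsif ($req->method eq 'GET') {          # client downloads a file
            $c->send_file_response("$spool/$name");
        }
        else {
            $c->send_error(RC_METHOD_NOT_ALLOWED);
        }
    }
    $c->close;
}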

hp
 

deadpickle

BitTorrent doesn't sound like a good choice [...] HTTP is probably the
simplest protocol for that purpose (as long as you don't implement all
of RFC 2616).
How about SFTP? It seems pretty easy to write a script in Perl.
 

deadpickle


Also, how can I use SFTP on a Windows machine?
 

deadpickle


I have written a small script using Net::SFTP::Foreign. I keep getting
the error "invalid option(s) 'dontsave' at sftptest1.pl line 24" - not
sure what this means. Any help?

#!/usr/bin/perl
####################################
# This program uses SFTP to log in to a remote server. This
# remote server is used to store all the files: UAV GPS,
# Waypoint Verify, and ASOS and Satellite placefiles. This
# specific test program is used to get the ASOS placefile and
# the Satellite image off of the Mistral server so that they
# can be used.
####################################

use strict;
use warnings;

use Net::SFTP::Foreign::Compat;

# log in and set up the connection
my $host = 'mistral.unl.edu';
my $sftp = Net::SFTP::Foreign::Compat->new($host, user => 'jlahowet');

# local and remote directories
my $local  = '/home/deadpickle/Desktop/UAV';
my $remote = '/home/jlahowet/UAV';

# list the remote directory
$sftp->ls($remote);

# a loop could be used here to repeat the transfer process

# get the ASOS placefile and Satellite png; giving get() an explicit
# local target should avoid the "return the contents" (dontsave) code
# path that the Compat wrapper uses when no local file is given
my $sat = 'vis_sat_co.png';
$sftp->get("$remote/$sat", "$local/$sat");
 

Jamie

deadpickle said:
This is for an undergraduate project at my university. I'm thinking
about using BitTorrent. I want this whole system to be autonomous and
to make many transfers every minute; can BitTorrent do this?

Any reason for the central server?

One thing I would really like to do (alas, have no business model for it)
is to marry NNTP with podcasts and RSS.

So called "flooding protocol" is something I find quite amusing, and aside from
NNTP (AFAIK) isn't widely used anymore. Seems like it's almost always a central
master -> slave model, (mirrors, etc.. CPAN for example)

Having "NPTP" (Net Podcast Transfer Protocol? NRTP Net RSS Transfer Protocol?)
would be interesing because you could be introduced to podcasts you might not
otherwise have come into contact with. Unlike torrent, the local servers would
have expiration times, old podcasts would "scroll off" much as this article will.

Instead of newsgroups, you'd have "categories". Unlike master/slave mirror sites,
anyone could post a podcast. just as it is now for articles.

Could even arrange to have "discussion boards" wired in to regular NNTP. Wow, a
new technology.. NNTP! LOL

RSS is pretty much a "web only" thing though; the specs pretty much
stipulate all the URLs are for HTTP.

Like NNTP, the Path: header could be used to prevent redundant transfers.

Unlike NNTP, the enclosures and other RSS fields would have to be planned for
(possibly with delayed transfer/referral of enclosures)

No money, and I have a hard time seeing how someone would want to pay
for it (much less pay for the hardware and bandwidth). If I could find
a practical business model for what I just described, I'd be doing it. :)

Jamie
 

deadpickle

BitTorrent doesn't sound like a good choice [...] HTTP is probably the
simplest protocol for that purpose (as long as you don't implement all
of RFC 2616).

How can I transfer large files using HTTP? Where do I get started, then?
 

Peter J. Holzer

One thing I would really like to do (alas, have no business model for it)
is to marry NNTP with podcasts and RSS. [...] Unlike NNTP, the
enclosures and other RSS fields would have to be planned for (possibly
with delayed transfer/referral of enclosures).

I think regular NNTP would work just fine for that. Just use appropriate
content-types (application/rss+xml, audio/*, video/* or whatever). You
may want a subscription mechanism in the protocol (NNTP for normal
newsgroups could use one, too, but subscribing to whole hierarchies is
"traditional" and so far nobody has had enough incentive to push for a
more fine-grained approach).

No money, and I have a hard time seeing how someone would want to pay
for it (much less pay for the hardware and bandwidth). If I could find
a practical business model for what I just described, I'd be doing it. :)

The business model is simple: Make huge losses for a couple of years,
don't give any hint how you ever plan on making money, then sell to one
of the big players for several hundred million bucks :).

hp
 

Peter J. Holzer

How can I transfer large files using HTTP? Where do I get started, then?

Same as small ones[0]. There's no size limit in HTTP. There's also the
Range: header for partial transfers, should you need it.
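For illustration, a sketch (untested) that pulls a file down in 1 MB
pieces, assuming the server honors Range (i.e., answers 206 Partial
Content); the URL is made up:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua    = LWP::UserAgent->new;
my $url   = 'http://server.example.com/files/big.dat';  # made up
my $chunk = 1024 * 1024;

open my $out, '>', 'big.dat' or die "open: $!";
binmode $out;
for (my $offset = 0; ; $offset += $chunk) {
    my $end = $offset + $chunk - 1;
    my $res = $ua->get($url, Range => "bytes=$offset-$end");
    last unless $res->is_success;   # e.g. 416 once we read past the end
    print {$out} $res->content;
    last if length($res->content) < $chunk;   # short piece: we're done
}
close $out;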

hp

[0] Well, that depends on what you call "large".
 

Jamie

Peter J. Holzer said:
I think regular NNTP would work just fine for that. Just use appropriate
content-types (application/rss+xml, audio/*, video/* or whatever). You
may want a subscription mechanism in the protocol (NNTP for normal
newsgroups could use one, too, but subscribing to whole hierarchies is
"traditional" and so far nobody has had enough incentive to push for a
more fine-grained approach).

The bit I can't work out is how you'd associate other types of
information: the title, description and other elements of the RSS entry,
keeping the enclosures distinct, supporting multiple enclosures per item
and ensuring that the items are stuck to the correct feed. Unless each
feed were its own newsgroup, this doesn't seem viable. (Threading,
maybe.. but followups might be better used as discussion areas and/or
corrections.)

One may not want to spool all the enclosures, but rather defer them
until someone requests them. Just flood the metadata, rather like the
way leafnode/suck type news servers work; the .overview should contain
as much of the RSS data as possible.

Would be "nice" if you could go RSS -> NRTP -> RSS, loosing as little data as
possible AND provide "discussion boards" on NNTP instead of those crummy
web-based things.

You're right about the fine-grained subscription model, categories and
THEN individual podcasts, but as the whole idea is to expose people to
material they might not otherwise have seen, it could be tricky. (This
is why I much prefer NNTP over mailing lists; you can easily sign on to
a news server, browse the groups awhile, and subscribe/unsubscribe at a
whim.)

NNTP isn't 8-bit clean and doesn't really provide a way to xfer
enclosures.

The business model is simple: Make huge losses for a couple of years,
don't give any hint how you ever plan on making money, then sell to one
of the big players for several hundred million bucks :).

LOL! Only problem with that is, once you sell it.. you won't be able to
work on it anymore. heh

Come to think of it, there really isn't a "business model" as far as I
can see for NNTP either, except the commercial news servers - and they
would never have been in business had it not been for the fact that
NNTP was already established. Kind of sad; commercialization really has
ruined the internet.

Maybe if net neutrality fails, or if for whatever reason ISPs start
limiting access to certain websites, flooding protocols will come back.
Also, I suppose people could charge for value-added access or something?

This would be an incredibly fun project to work on, especially if you did it
in such a way as to not require graphical terminals or anything like that.

Jamie
 

Peter J. Holzer

Jamie said:

The bit I can't work out is how you'd associate other types of
information: the title, description and other elements of the RSS entry,
keeping the enclosures distinct, supporting multiple enclosures per item
and ensuring that the items are stuck to the correct feed. Unless each
feed were its own newsgroup, this doesn't seem viable. (Threading,
maybe.. but followups might be better used as discussion areas and/or
corrections.)

I'm probably missing something because I know little about podcasts, but
AFAIK they consist basically of:

* individual media files
* RSS (or Atom, or whatever) files which refer to the media files and
  add a bit of information about them.

The RSS file normally is at a fixed URL, and clients poll the URL to see
if the file has changed, and either notify the user or automatically
download the new media files so that the user can listen to them or view
them, as appropriate.

So as a first step you can set up a set of newsgroups with categories.
Then instead of (or in addition to) posting your RSS file on a web site,
you post it in the appropriate newsgroup(s). Clients subscribed to at
least one of the newsgroups will retrieve the RSS and can continue to
act just the same as if they had retrieved it via HTTP. (The user might
want to specify additional filters, for example automatically download
only files by author X or with subject Y.)

The harder part is to distribute the media files themselves over NNTP:
You can just post them to the same newsgroups, but that means that every
file is flooded to every server subscribed to a newsgroup. Well, let's
just say there's a reason why most newsserver operators don't carry
binary groups :).

That's where the "more finegrained" subscription model I was talking
about comes in. Traditionally, feeds in NNTP are configured
"out-of-band": If I want a feed for some newsgroup, I send mail to my
neighbour news admins, and tell them I want to get a feed for that
newsgroup. Of course that's quite a bit of work, so it's almost never
done for individual groups, but only for hierarchies (so I'll say "I
want a feed for comp.*, at.* and de.* but without de.alt.dateien.*" or
something like that). If NNTP offered a way to configure feeds, that
could be done automatically, so turning on and off feeds for individual
groups would become feasible.

So you could have one newsgroup per channel, and each server only
requests a feed for that channel when there are users actually reading
items from it. (Incidentally, that gets rid of the problem that neither
news: nor nntp: URIs allow specifying both a message id and a newsgroup.)

So, if I wanted to make podcasts, I'd start by creating a new newsgroup
for my channel on my news server (there may need to be some global
naming scheme to avoid collisions, for example by using the reversed
domain name as prefix: "podcasts.channels.at.hjp") and post my first
opus there:

From: <[email protected]>
Newsgroups: podcasts.channels.at.hjp
Subject: Writing a newsserver in perl
Content-Type: video/mpeg
Message-Id: <[email protected]>

Then I'll post an RSS file to the appropriate categories newsgroups:

From: <[email protected]>
Newsgroups: podcasts.categories.programming.perl,podcasts.categories.comm.usenet,podcasts.channels.at.hjp
Subject: Writing an nntp server in perl
Content-Type: application/rss+xml
Message-Id: <[email protected]>

<?xml version="1.0"?>
<rss version="2.0">
<channel>
<title> HJP's rants </title>
<link> news://news.hjp.at/podcasts.channels.at.hjp </link>
...
<item>
<title> Writing a newsserver in perl </title>
<link> </link>
<description>
A very boring presentation about the nntp server I
wrote in perl.
</description>
<guid> (e-mail address removed) </guid>
<enclosure href="type="video/mpeg" length="12345678" />
</item>
</channel>
</rss>
<!-- or something like that - please ignore any errors in the RSS -->

Done.
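(The posting step itself is a few lines of Perl with Net::NNTP from
libnet - a sketch, untested, with the article body slurped from stdin:)

#!/usr/bin/perl
use strict;
use warnings;
use Net::NNTP;

my $rss = do { local $/; <STDIN> };   # the RSS document shown above

my $nntp = Net::NNTP->new('news.hjp.at') or die "connect failed\n";
$nntp->post(
    "From: <[email protected]>\n",
    "Newsgroups: podcasts.categories.programming.perl," .
        "podcasts.categories.comm.usenet,podcasts.channels.at.hjp\n",
    "Subject: Writing an nntp server in perl\n",
    "Content-Type: application/rss+xml\n",
    "\n",
    $rss,
) or die "post failed\n";
$nntp->quit;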

The RSS will be flooded out to all servers which carry either
podcasts.categories.programming.perl or podcasts.categories.comm.usenet.
The video will currently stay on my own NNTP server, because no other
carries podcasts.channels.at.hjp yet.

A client will get the RSS file, and if the user requests my
presentation, it will first try to enter the group
podcasts.channels.at.hjp and request the article <[email protected]> at
the local server(s), and if that fails, from news.hjp.at.

The server will notice a request for a non-existing newsgroup
podcasts.channels.at.hjp and request a feed from its peers. If none has
it, it may parse the RSS files to find that this newsgroup is available
from news.hjp.at and request it there (it is a matter of policy if a
server should autonomously establish new peer relationships).

One may not want to spool all the enclosures, but rather defer them
until someone requests them. Just flood the metadata, rather like the
way leafnode/suck type news servers work; the .overview should contain
as much of the RSS data as possible.

See above for an idea on how to flood metadata and enclosure via
different paths. The .overview could contain the content type and maybe
other data, but I don't think that's necessary - the RSS files are small
enough compared to the media files.

Would be "nice" if you could go RSS -> NRTP -> RSS, losing as little
data as possible AND provide "discussion boards" on NNTP instead of
those crummy web-based things.

Regular postings in plain text can be intermixed between RSS and media
postings. (Hey, you can even have a discussion thread consisting of
audio or video postings :).

NNTP isn't 8-bit clean and doesn't really provide a way to xfer enclosures.

NNTP has been in practice 8-bit clean (although not binary-clean) since
the early 1990's. Even before, binaries were distributed just fine using
uuencoding. Today base64 and yenc exist as alternatives. Distributing
binaries over NNTP is really not a problem - people have been doing it
since the beginning of Usenet.

hp
 

Jamie

Peter J. Holzer said:
I'm probably missing something because I know little about podcasts, but
AFAIK they consist basically of:

* individual media files
* RSS (or Atom, or whatever) files which refer to the media files and
  add a bit of information about them.

The RSS file normally is at a fixed URL, and clients poll the URL to see
if the file has changed, and either notify the user or automatically
download the new media files so that the user can listen to them or view
them, as appropriate.

That is more or less correct. In theory, one could have multiple media
files per item, but I've never seen it done.

So as a first step you can set up a set of newsgroups with categories.
Then instead of (or in addition to) posting your RSS file on a web site,
you post it in the appropriate newsgroup(s). Clients subscribed to at
least one of the newsgroups will retrieve the RSS and can continue to
act just the same as if they had retrieved it via HTTP. (The user might
want to specify additional filters, for example automatically download
only files by author X or with subject Y.)

Hmm.. I was thinking you'd post the individual items in some manner,
so they scroll off. Sort of like an item is a post, the channel is
??? (and this is where it seems like it'd break).

Seems like this would involve re-posting the same channel data for
each item? (Maybe that's a good thing?)

.... OR.. (and this feels a little crazy):

LIST CHANNEL http://foo.example.com/bar/none/feed.rss

Which would, I presume, fetch the channel info for that particular
feed. The URL would serve as the unique ID and as I'm thinking
about it now, the result would be a channel with no items.

I don't like it.

The harder part is to distribute the media files themselves over NNTP:
You can just post them to the same newsgroups, but that means that every
file is flooded to every server subscribed to a newsgroup. Well, let's
just say there's a reason why most newsserver operators don't carry
binary groups :).

Like this?

rss.binaries.technology.*
rss.podcast.feed.technology.*
rss.podcast.feed.discuss.*
That's where the "more finegrained" subscription model I was talking
about comes in. Traditionally, feeds in NNTP are configured
"out-of-band": If I want a feed for some newsgroup, I send mail to my
neighbour news admins, and tell them I want to get a feed for that
newsgroup. Of course that's quite a bit of work, so it's almost never

Ah, subscription as in, peer subscription. (I was thinking client)

I've set up simple news servers but have no experience with larger
"real" news servers that participate in networks. (I do like
the technology behind them though, always thought it was cool)
So you could have one newsgroup per channel, and each server only
requests a feed for that channel when there are users actually reading
items from it (Incidentally, that gets rid of the problem that neither
news: nor nntp: URIs allow specifying both a message id and a newsgroup)

Then anyone would have to be able to create new channels. (I don't know
how Usenet does this in practice but, as I understand it, one can't just
create a new channel and expect it to be carried - certainly not without
actually having their own peer on the network.)

I can't just post with a header of "Newsgroups: bogus.newsgroup" and
expect the new newsgroup to be created for me. (least I /HOPE/ I can't!)
So, if I wanted to make podcasts, I'd start by creating a new newsgroup
for my channel on my news server [...]
                  ^^^^^^^^^^^^^^

That's the trouble: podcasters probably won't go through the trouble.

Hmm.. is this a business model? Offer to create channels for podcasters,
with the promise of how much they'll save on bandwidth by doing it this
way instead of web-based. (It'd be a hard sell though; SOMEONE has to
pay for the bandwidth.)

I'd rather multiple channels appear on the same newsgroup though; with
a "one newsgroup per channel" model, there would be a LOT of dead
newsgroups. If you go to podcastalley.com and do searches, you'll see
podcasts on practically every subject in the known universe, but most
of them are dead.

A newsgroup as a category really fits in well with RSS.

In the RSS:

<category domain="news.example.com">podcast/channels/foo</category>

Alas, the "domain" attribute is hardly known, wish someone would have
told Apple about it..

naming scheme to avoid collisions, for example by using the reversed
domain name as prefix: "podcasts.channels.at.hjp") and post my first
opus there:

Multiple podcasts can (and do) appear on the same domain. Some places
(at least with RSS) will have several feeds related to particular
subjects; the BBC and I believe CNN have many streams.
From: <[email protected]>
Newsgroups: podcasts.channels.at.hjp
Subject: Writing a newsserver in perl
Content-Type: video/mpeg
Message-Id: <[email protected]>

And this would be the binary data (and /all/ the binary data) in one
single post? Some of those media files are HUGE.
Then I'll post an RSS file to the appropriate categories newsgroups:

[RSS example of a channel with one item snipped]
<!-- or something like that - please ignore any errors in the RSS -->

I like it. :) But somehow, I should think the <channel> data should be
stored elsewhere.

The natural order in which articles expire with NNTP would be very
useful if items could be individual posts. (The same could be said if
channels themselves could "expire" when there have been no items in
them for a period of time.)
The RSS will be flooded out to all servers which carry either
podcasts.categories.programming.perl or podcasts.categories.comm.usenet.
[...] The server will notice a request for a non-existing newsgroup
podcasts.channels.at.hjp and request a feed from its peers. If none has
it, it may parse the RSS files to find that this newsgroup is available
from news.hjp.at and request it there (it is a matter of policy if a
server should autonomously establish new peer relationships).

Hmm.. I like that idea, I don't really like binaries as posts.

I wrote a news reader once (using it now, actually); parsing and
downloading binary postings is really a black art, never perfect and
full of gotchas. (Same with binary posting tools - a lot of 'yenc' tools
won't handle the leading '.' problem very well, resulting in corrupt
downloads.)

Binaries split across multiple postings with (NN/NN) patterns (and now
[AA/BB] (NN/NN) file.part012) are an extremely hit or miss procedure.
You might get the first 12 parts, but as you were fetching them, part 13
was deleted and the whole thing is a miss.
Regular postings in plain text can be intermixed between RSS and media
postings. (Hey, you can even have a discussion thread consisting of
audio or video postings :).

Yea, you could! :)
NNTP has been in practice 8-bit clean (although not binary-clean) since
the early 1990's. Even before, binaries were distributed just fine using
uuencoding. Today base64 and yenc exist as alternatives. Distributing
binaries over NNTP is really not a problem - people have been doing it
since the beginning of Usenet.

I really like the technology and the ideas of NNTP; I just wish people
would realize the internet is /NOT/ the web. The ONLY advantage I've
ever seen in those goofy web-based message boards is that webmasters can
charge PPC advertising more easily. If people re-discovered NNTP, they'd
never want to use a web-based gizmo again.

Jamie
 

Peter J. Holzer

[I think we're getting quite off-topic here. I've set Followup-To:
poster, but feel free to use a more appropriate newsgroup instead]


Hmm.. I was thinking you'd post the individual items in some manner,
so they scroll off. Sort of like an item is a post, the channel is
??? (and this is where it seems like it'd break).

Seems like this would involve re-posting the same channel data for
each item? (Maybe that's a good thing?)

Yes, or for a small number of items. Since newsservers normally keep
articles for some time before expiring them, you can just post each item
individually, and don't have to keep the last n items in the RSS file to
implement "scrolling" - the newsserver does that for you.
In this case the channel data would be repeated for each item. But you
could put a few items into the same posting, either just the last 3 or 4
(to guard against message-loss) or some related items.

I think the additional bandwidth is negligible.

Like this?

rss.binaries.technology.*
rss.podcast.feed.technology.*
rss.podcast.feed.discuss.*

Yes, something like that.

Ah, subscription as in, peer subscription. (I was thinking client)

I've set up simple news servers but have no experience with larger
"real" news servers that participate in networks. (I do like
the technology behind them though, always thought it was cool)

I run two small newsservers (one at work, the other at a local user
group), with a few peers each. Text only, no binaries (we used to have a
full feed at work, but we had to stop that in the late 1990's - took
way too much bandwidth).

Then anyone would have to be able to create new channels. (I don't know
how Usenet does this in practice but, as I understand it, one can't just
create a new channel and expect it to be carried - certainly not without
actually having their own peer on the network.)

I can't just post with a header of "Newsgroups: bogus.newsgroup" and
expect the new newsgroup to be created for me. (least I /HOPE/ I can't!)

You can send out Control: newgroup messages at any time. The question
is, will anybody honor them? That depends on the hierarchy. In most
hierarchies, Control messages must be signed with a specific PGP key to
be honored by most newsservers. In some (e.g., alt.*) anybody can send
out newgroups, but if it hasn't been discussed before, other people will
send rmgroups. And in some (e.g., free.*, oesterreich.*, ...) every
newgroup will be honored.

But I wasn't thinking of using control messages for this, but of letting
each server create a newsgroup when a) a local user tries to access it
and b) a feed for it is available. If a newsgroup hasn't been accessed
for some time, it can be removed again.

That's the trouble: podcasters probably won't go through the trouble.

I wasn't thinking that everybody would have to run their own newsserver,
just that they'd have access to a newsserver which lets them create
their own groups (or maybe one that automatically creates a new group
for each registered user).

Hmm.. is this a business model? Offer to create channels for podcasters,
with the promise of how much they'll save on bandwidth by doing it this
way instead of web-based. (It'd be a hard sell though; SOMEONE has to
pay for the bandwidth.)

Yep, but the cost is distributed, and for the poster it is (almost)
constant. If his casts are popular, they will just be flooded out over
the whole net and most people can get them from their newsservers and
don't have to get them from his. (BitTorrent has a similar property,
BTW.)

I'd rather multiple channels appear on the same newsgroup though, with
a "one newsgroup pr. channel" model, there would be a LOT of dead
newsgroups.

Yes, but that doesn't matter. If a newsgroup in podcasts.channels.* (to
stay with my naming scheme) hasn't been accessed for some time it can be
removed locally. It may still exist as a dead group on the server which
created it in the first place but everywhere else it will vanish if it
is dead (it will also vanish if there is traffic but nobody is reading
it, which is IMHO a big advantage over current usenet).

A newsgroup as a category really fits in well with RSS.

Yep.



Multiple podcasts can (and do) appear on the same domain. Some places (at
least with RSS) will have several feeds related to particular subjects,
BBC and I believe CNN have many streams.

The domain name system is hierarchical, so you can make that as fine
grained as you want (well, not quite: There's a limit of 255
octets in DNS, but DNS isn't used here). Using my own domain was
probably a bad example (because there is only one user there), so I'll
use my employer's: wsr.ac.at. All the channel newsgroups created on this
server would start with "podcasts.channels.at.ac.wsr" to avoid conflicts
with those created by other servers. As a matter of policy, we could then
create one per user, so I would get podcasts.channels.at.ac.wsr.hjp, and
if I wanted to create several channels, I could do that below that,
e.g., podcasts.channels.at.ac.wsr.hjp.computers,
podcasts.channels.at.ac.wsr.hjp.sf, etc.

(If you're familiar with Java, you know where I stole that idea :).

And this would be the binary data, (and /all/ the binary data) in one single
post? Some of those media files are HUGE.

Yes. NNTP doesn't have a size limit, although most newsservers do (but
since they would have to be modified to implement dynamic feed
configuration anyway, that's the least problem). I think sending a file
as one huge posting is better than sending it in lots of little chunks
(as is currently done in binary newsgroups), for the reasons you
mentioned.

Of course the URLs in the RSS file can also be http or bittorrent URLs,
in which case NNTP is only used to distribute the metadata. But since I
claimed that NNTP could be used, I had to demonstrate that it can be
used to distribute *all* the data. And with a fine-grained group
structure and dynamic configuration of feeds I think it would also
reduce bandwidth requirements to a manageable level (although probably
not quite as low as bittorrent).

I really like the technology and the ideas of NNTP; I just wish people
would realize the internet is /NOT/ the web. The ONLY advantage I've
ever seen in those goofy web-based message boards is that webmasters can
charge PPC advertising more easily. If people re-discovered NNTP, they'd
never want to use a web-based gizmo again.

While I also vastly prefer Usenet to web forums (haven't ever seen one I
liked), I don't think that's true for all or even most of the people.
There are people who prefer to use a web mailer when they could use a
real MUA with IMAP.

hp
 
