C needs a URL*

  • Thread starter Ronald S. Nesbit

Rui Maciel

Kenneth said:
In the meantime, why not use one of the multitude of widely-available
libraries already out there? Just because the language itself doesn't
include a "standard" function, doesn't mean that there aren't de facto
"standard" functions out there.


Indeed. Moreover, I don't see the point of wanting to add to the standards a library which fails to
have any widespread practical application. In fact, it all sounds like a solution in search of a
problem.


Rui Maciel
 

Chris H

Rui Maciel said:
You don't need to wait for a standards committee to implement such a
library.
Agreed.

You can simply develop it on your own, release it under a free software
license and then build a case to add your library to the standards.

Yes. But it won't be added, any more than USB, CAN, TCP/IP or any other
such system is part of the C standard.
There's nothing like a fully working prototype to evaluate a
technology.

Agreed.
 

Tom St Denis

In message <[email protected]>, Mark Bluemel said:
The other problem is what exactly is this wopen going to talk to? How
is it going to see the Internet? Remember, many (most) C compilers target
bare metal. Many OSes don't have a file system as standard, let alone a
TCP/IP stack...

Most platforms that do support IP networking have a BSD sockets
compatible API for it (even Windows does). I think this question/idea
has been answered long ago. libcurl is the way to go. And the fact
it's been ported to a few dozen platforms means it's fairly portable.
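
For the record, a minimal read via libcurl's easy interface is only a
handful of lines. This is just a sketch (placeholder URL, minimal error
handling), not a recommendation of specific options:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    /* Placeholder URL; with no write callback set, the body is written
     * via fwrite() to the FILE * given as CURLOPT_WRITEDATA. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/foo.txt");
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, stdout);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "curl: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}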

The problem with adding this to the standard is where does it stop?
We'll need libraries for graphics, sound, DSP work, FFT, etc...

It's much easier to define C as what it is and point to the highly
maintained libraries written in it.

Tom
 

Rui Maciel

Kenneth said:
Because there are platforms where "http://example.com/foo.txt" is a
perfectly valid path to a local file.

This is irrelevant. It's one thing for a string such as "http://example.com/foo.txt" to be a valid
path to a local file, but a URI is an entirely different concept. For example, if we rely on URIs
to refer to resources, then the following would be two entirely different resources:

http://example.com/foo.txt
file:///http://example.com/foo.txt


with the former referring to an Internet resource available at example.com/foo.txt through the HTTP
protocol, while the latter would refer to a local file at /http://example.com/foo.txt.

In order to avoid ambiguity, it would be possible to default to interpreting a given text string as
a path to a local file if no URI scheme name was provided.
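
A rough sketch of what that default could look like. The function name is
invented here, and the test simply follows RFC 3986's scheme syntax
(a letter, then letters, digits, "+", "-" or ".", then a colon):

#include <ctype.h>

/* Hypothetical helper: nonzero if the string starts with a URI scheme
 * ("http:", "file:", ...); zero if it should be treated as a plain
 * local path. */
static int has_uri_scheme(const char *s)
{
    if (!isalpha((unsigned char)*s))
        return 0;
    s++;
    while (isalnum((unsigned char)*s) || *s == '+' || *s == '-' || *s == '.')
        s++;
    return *s == ':';
}

Note that a DOS-style path such as "C:\foo.txt" also matches the scheme
grammar, which is exactly the sort of ambiguity being discussed.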

Nevertheless, I don't believe that fudging stdio.h and its pals is remotely desirable or
justifiable. If such functionality is really that useful, which I don't see why or how, it should
simply be made available through a library.


Rui Maciel
 

Tim Harig

Are you referring to java applets?

Despite the initial hype, I have only ever seen two Java applets; and they
were painfully slow to use. Java is, however, heavily used for the server-side
component of a *huge* number of corporate web applications.
 

Tim Harig

"http:" is a perfectly good name for a directory, and "//" is expected to act
like "/", i.e. a simple directory separator, because appending "/file" to a
directory string must work whether or not the directory string already had a
trailing slash.

If somebody really wanted to do something like that, they should use the
standard set by the Apache Portable Runtime (which the OP might be
interested in looking at). There, everything must be expressed using the
same URL format. Accessing local files simply uses the "file://" protocol
specifier so as to avoid any such confusion. All of this can be useful; but
none of it is necessary in the standard library. There are libraries with
this kind of functionality for those who wish to use it.
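
To make the quoted point concrete: on a POSIX system, the following opens
a perfectly ordinary local file whenever a directory literally named
"http:" exists in the current directory. This is only an illustration of
the naming clash, nothing more:

#include <stdio.h>

int main(void)
{
    /* "http:" is treated as a directory name, and the "//" collapses
     * to "/" as usual, so this is just ./http:/example.com/foo.txt. */
    FILE *f = fopen("http://example.com/foo.txt", "r");
    if (f != NULL) {
        puts("opened a local file, not a web resource");
        fclose(f);
    }
    return 0;
}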
 

Keith Thompson

Rui Maciel said:
This is irrelevant.
[...]

It's very relevant if fopen() is to accept URLs as file names, as
several people have suggested in this thread. But ...

[...]
Nevertheless, I don't believe that fudging stdio.h and its pals is
remotely desirable or justifiable. If such functionality is really
that useful, which I don't see why or how, it should simply be made
available through a library.

Agreed; if it's not done through fopen(), the possibility of a URL
also being a valid file name is irrelevant.

A urlopen() function that takes a URL and returns a FILE* might be
useful for some *simple* applications. (I could write one myself in
a few minutes, using popen() and curl, though finding the right curl
executable and deciding what arguments to pass would be an issue.)
But there are things you want to do with URLs that don't make sense
for file names. The fact that it's already perfectly possible to
provide a urlopen() function, but nobody I know of has bothered to
do so, probably indicates something.
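
Something along those lines, presumably. A sketch only: it assumes a POSIX
popen() and a curl binary somewhere on $PATH, the function name is made up,
the quoting of the URL is deliberately naive, and the caller has to remember
to use pclose() rather than fclose():

#include <stdio.h>

/* Hypothetical urlopen(): read-only access to a URL by piping the
 * output of an external curl process back as a FILE *. */
FILE *urlopen(const char *url)
{
    char cmd[4096];
    int n = snprintf(cmd, sizeof cmd, "curl -s '%s'", url);
    if (n < 0 || (size_t)n >= sizeof cmd)
        return NULL;
    return popen(cmd, "r");
}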
 

Nobody

Indeed. Many BSD systems provide the funopen(3) interface, which takes a
"cookie" argument and function pointers for read, write, seek, and close
operations, returning a FILE pointer. It would be trivial to use this to
provide standard C-like I/O to HTTP, FTP, etc. resources.
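
A skeleton of that wrapping, to show the shape of it (BSD-specific; the
cookie here is just a heap-allocated, already-connected socket descriptor,
and all of the protocol work, such as sending the request and parsing the
headers, is omitted):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int sock_read(void *cookie, char *buf, int len)
{
    return (int)read(*(int *)cookie, buf, (size_t)len);
}

static int sock_close(void *cookie)
{
    int rc = close(*(int *)cookie);
    free(cookie);
    return rc;
}

/* Hypothetical: expose a connected socket as a read-only FILE *.
 * Passing NULL for the write and seek functions makes the stream
 * unwritable and unseekable, as funopen(3) documents. */
FILE *fsockopen(int connected_fd)
{
    int *cookie = malloc(sizeof *cookie);
    if (cookie == NULL)
        return NULL;
    *cookie = connected_fd;
    return funopen(cookie, sock_read, NULL, NULL, sock_close);
}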

Interestingly, I've never seen this done, either by the platform developers
or in any third-party libraries. And POSIX has chosen to standardize the far
less capable fmemopen() GNU interface. This is strong evidence against any
claims of urgency.

Note that GNU also has fopencookie(), which (AFAICT) provides equivalent
functionality to funopen().

But standardising something as general as fopencookie() would impose a
greater burden on both the standardisers and implementors than
standardising fmemopen(). In particular, they would need to specify the
execution environment for the various callbacks, including situations such
as the implicit fclose() performed by exit(), concurrency issues, etc.
 

Nobody

The next C standard urgently needs to include URL* as a first-class
object built right into the language,

No it doesn't. Internet access is far too complex a subject to be
incorporated into the language. The documentation of such a feature would
occupy 90% of the language specification, with the remainder of the
language making up the other 10%.

Complex tasks require complex interfaces. Simplified interfaces invariably
end up being useless for anything beyond learning exercises. If existing
libraries are too complex for you, then you just need to become a better
programmer, rather than expecting reality to simplify itself for your
benefit.
 

Alan Curry

You seem to be assuming POSIX file naming rules. In that case,
just take advantage of the POSIX rule that // at the beginning of
a file name is special, and write "//http://..." instead of
"http://...".

The leading double slash as a special case is a bad idea. POSIX in theory
allows it, but POSIX systems actually in use can't implement it. It kills the
simple case of

prefix=/
# ...later...
file=$prefix/etc/thingy.conf   # expands to //etc/thingy.conf

Leading double slash (or triple or more) must mean root directory if sanity
is to remain. POSIX declarations can't remove that fact.

In practice, if you want a new magical prefix, you start it with /dev/
 

Ian Collins

It's been interesting reading the responses to this rather obvious troll.

I guess it's typical of C programmers (and possibly programmers in
general) to get hung up over a detail (the format of a URL) rather than
the bigger picture: what would you do with one once you opened it?

A simple unauthenticated read makes reasonable sense, but what about
write? What about authentication? There's a myriad of differences
between accessing a local file and an Internet resource.

Standardise libcurl? Sure, it's used everywhere Internet access makes
sense, but libcurl uses sockets, so they should be standardised as well.
But wait, sockets use naming services, let's standardise those as well!

Good troll.
 

Chris H

In message <[email protected]>, Tom St Denis said:
Most platforms that do support IP networking have a BSD sockets
compatible API for it (even Windows does).

Agreed. But that is "most platforms that do support IP networking"; the
trouble is most platforms don't have IP networking. And most platforms
don't have an OS. So making a URL open part of standard C is as
pointless as demanding a CAN library be part of the standard.
The problem with adding this to the standard is where does it stop?
We'll need libraries for graphics, sound, DSP work, FFT, etc...

CAN, USB, Fieldbus(es), etc.

It's much easier to define C as what it is and point to the highly
maintained libraries written in it.

Agreed. This was the downfall of C99... too much in the standard that
most did not want or need.
 

Keith Thompson

Chris H said:
Agreed. But that is "most platforms that do support IP networking"; the
trouble is most platforms don't have IP networking. And most platforms
don't have an OS. So making a URL open part of standard C is as
pointless as demanding a CAN library be part of the standard.

(Controller-area network?)

Platforms that don't have an OS typically have freestanding
implementations, right? Such implementations aren't even required
to support stdio. I suggest that only hosted implementations are
relevant to this discussion.

My guess is that most platforms that have file systems and support
hosted C implementations also support IP networking. And for those
that don't, a call to urlopen() or whatever can simply fail.

Again, I don't advocate adding networking to the C standard; I'm
merely quibbling with some of your arguments against it.
 

Chris H

Keith Thompson said:
(Controller-area network?)

Yes. My point being that there are many things many groups might want to add
to the C standard that are not universally required, open URL being one
of them.
Platforms that don't have an OS typically have freestanding
implementations, right? Such implementations aren't even required
to support stdio. I suggest that only hosted implementations are
relevant to this discussion.

I would hope so.
My guess is that most platforms that have file systems and support
hosted C implementations also support IP networking. And for those
that don't, a call to urlopen() or whatever can simply fail.

Trouble is many RTOSes don't have file systems or TCP/IP except as add-on
items. You can add one without the other.
Again, I don't advocate adding networking to the C standard; I'm
merely quibbling with some of your arguments against it.

Fair enough.
 

Nobody

For Unix-like systems, defining /url/ (or /uri/, or whatever) as
the root directory of a tree of pseudo-files mapping to URLs would
not be a totally insane idea,

OTOH, people used to joke that NFS stood for "Not a FileSystem", as it
didn't strictly adhere to Unix semantics (e.g. operations which should be
atomic weren't). A URI filesystem would be much, much worse.

Realistically, a URI filesystem needs to be implemented at the application
level, so that the user can configure the vast number of options involved
for all but the most trivial protocols.
 

Nobody

It's been interesting reading the responses to this rather obvious troll.

I'm not so certain. There are people out there who genuinely believe that
such a feature makes sense.
 

Ian Collins

I'm not so certain. There are people out there who genuinely believe that
such a feature makes sense.

If that were the case, they would engage in a debate. Where are the
follow-ups from "Ronald S. Nesbit"?
 
