C needs a URL*

  • Thread starter Ronald S. Nesbit

Ronald S. Nesbit

We live in an internet age. The distinction between files and
hyperdocuments on an external network hardly means anything any more.

Yet C is stuck with only FILE* access to local filesystems. Want to read
a HTML document or an FTP file? You need an external library for that -
different and incompatible libraries on Windows or Unix or Apple.

The next C standard urgently needs to include URL* as a first-class
object built right into the language, with uopen(), uclose(), uread(),
uwrite() functions to open and close internet documents right within the
C language. It's vital to the continuation of C's popularity as a DeskTop
programming language.
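
For illustration only, the interface might look something like this
(nothing below exists anywhere today; it is just the shape of the proposal):

#include <stddef.h>   /* for size_t */

/* Purely hypothetical declarations, invented for this post. */
typedef struct URL URL;                      /* opaque, like FILE */

URL   *uopen(const char *url, const char *mode);
size_t uread(void *buf, size_t size, size_t n, URL *u);
size_t uwrite(const void *buf, size_t size, size_t n, URL *u);
int    uclose(URL *u);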

Sincerely
RSN
 

Keith Thompson

Ronald S. Nesbit said:
We live in an internet age. The distinction between files and
hyperdocuments on an external network hardly means anything any more.

Yet C is stuck with only FILE* access to local filesystems. Want to read
a HTML document or an FTP file? You need an external library for that -
different and incompatible libraries on Windows or Unix or Apple.

The next C standard urgently needs to include URL* as a first-class
object built right into the language, with uopen(), uclose(), uread(),
uwrite() functions to open and close internet documents right within the
C language. It's vital to the continuation of C's popularity as a DeskTop
programming language.

The language already permits fopen() to take a URL as a name
argument; there's no requirement that a "file" has to be a disk file.
(For example, on Unix-like systems many file names actually refer
to devices, not to physical files.)

If implementations don't take advantage of that, it's not because
the language standard forbids it.
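
For instance, nothing stops a conforming implementation from accepting the
call below; on most systems today fopen() will simply return a null pointer,
which is equally conforming:

#include <stdio.h>

int main(void)
{
    /* The standard only says the argument is a string naming a file;
       which names are meaningful is up to the implementation. */
    FILE *f = fopen("http://example.com/index.html", "r");
    int c;

    if (f == NULL) {
        fputs("this implementation doesn't map that name to anything\n", stderr);
        return 1;
    }
    while ((c = getc(f)) != EOF)
        putchar(c);
    fclose(f);
    return 0;
}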
 

Ben Pfaff

Ronald S. Nesbit said:
The next C standard urgently needs to include URL* as a first-class
object built right into the language, with uopen(), uclose(), uread(),
uwrite() functions to open and close internet documents right within the
C language. It's vital to the continuation of C's popularity as a DeskTop
programming language.

This sounds to me like an implementation issue: I can't think of
a reason that fopen() can't accept an arbitrary URI as its file
name string.
 

Seebs

We live in an internet age. The distinction between files and
hyperdocuments on an external network hardly means anything any more.

*snerk*

Yet C is stuck with only FILE* access to local filesystems. Want to read
a HTML document or an FTP file? You need an external library for that -
different and incompatible libraries on Windows or Unix or Apple.

1. Actually, I'm pretty sure curl works everywhere (sketch below).
2. There are a whole lot of systems that are none of the above.
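
For the record, here's roughly what fetching a document over HTTP with
libcurl looks like (an untested sketch; link with -lcurl, and the URL is
just a placeholder):

#include <stdio.h>
#include <curl/curl.h>

/* libcurl hands the write callback each chunk of the body;
   here we just dump it to stdout. */
static size_t dump(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    (void)userdata;
    return fwrite(ptr, size, nmemb, stdout);
}

int main(void)
{
    CURL *h;
    CURLcode rc;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    h = curl_easy_init();
    if (h == NULL)
        return 1;

    curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, dump);
    rc = curl_easy_perform(h);
    if (rc != CURLE_OK)
        fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}

Same source on Windows, Unix, or a Mac; that's rather the point of a library.
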
The next C standard urgently needs to include URL*

No, it really, really, doesn't.

as a first-class
object built right into the language, with uopen(), uclose(), uread(),
uwrite() functions to open and close internet documents right within the
C language. It's vital to the continuation of C's popularity as a DeskTop
programming language.

9/10, beautifully done. AAA, fast shipping, would be trolled again.

-s
 

jacob navia

On 07/09/10 22:34, William Ahern wrote:
I would also argue against the contention that C remains a ``desktop''
programming language. C++ seems to have taken over that spot, at least in
terms of mindshare if not in lines of new code. This is further evidenced by
the fact that people increasingly confuse C++'s "extern C" language with ISO
C. Point being, not much urgency here either, since the train has left the
station.

I have seen this kind of thinking in a thousand different forms. Somebody
proposes an improvement of the language and some C++ fan answers that "C
is declining and anyway any development should be done in C++ because C
is declining".

A simple language is doomed to failure because a failed language that
has grown beyond any measurable complexity level is preferred. No matter
that there are now gigabytes of unmaintainable C++ code that nobody will
ever understand.

Software written in C is easier to debug because there are a billion fewer
things to care about. It can be maintained and improved with MUCH less
effort. But maintenance is not the problem of C++ fans, nor is the
learning curve, nor is the fact that even the creator of the language
can't modify it any more.

Those aren't issues. Or at least they do not lead to any reflection about
what they are doing.

But anyway... I do not want to start again this flame war and I will not
answer any replies from the regulars in this group.
 

Seebs

On 07/09/10 22:34, William Ahern wrote:
I have seen this kind of thinking in a thousand different forms. Somebody
proposes an improvement of the language and some C++ fan answers that "C
is declining and anyway any development should be done in C++ because C
is declining".

That is not what the person you're responding to said.

But anyway... I do not want to start again this flame war and I will not
answer any replies from the regulars in this group.

Maybe it'd be even more efficient to not make these posts in the first
place, on the grounds that misrepresenting what other people say is usually
a very efficient way to start a flame war?

-s
 

Alan Curry

This sounds to me like an implementation issue: I can't think of
a reason that fopen() can't accept an arbitrary URI as its file
name string.

"http:" is a perfectly good name for a directory, and "//" is expected to act
like "/", i.e. a simple directory separator, because appending "/file" to a
directory string must work whether or not the directory string already had a
trailing slash.

Any interface that allows a single string to be either a filename or a URL
is ambiguous.
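
To make the ambiguity concrete (POSIX-only; the names are chosen purely for
effect), this creates a perfectly ordinary local file whose name looks
exactly like a URL:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    FILE *f;

    /* "http:" is just a directory name, and "//" collapses to "/" */
    mkdir("http:", 0777);
    mkdir("http://example.com", 0777);

    f = fopen("http://example.com/index.html", "w");
    if (f != NULL) {
        fputs("a perfectly ordinary local file\n", f);
        fclose(f);
    }
    return 0;
}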

But uopen could return FILE *, just as popen does. And popen's
standardization status should provide a clue to how likely this is.
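
Something along these lines even "works" today on a POSIX box with the curl
command installed; strictly a sketch, and not something to feed untrusted
input:

#include <stdio.h>
#include <string.h>

/* Hypothetical uopen(): hand the URL to the curl command via popen()
   (POSIX, not ISO C). The caller must pclose(), not fclose(). */
FILE *uopen(const char *url)
{
    char cmd[1024];

    if (strchr(url, '\'') != NULL || strlen(url) > 1000)
        return NULL;            /* refuse anything awkward to quote */
    snprintf(cmd, sizeof cmd, "curl -s '%s'", url);
    return popen(cmd, "r");
}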
 

Ben Pfaff

"http:" is a perfectly good name for a directory, and "//" is expected to act
like "/", i.e. a simple directory separator, because appending "/file" to a
directory string must work whether or not the directory string already had a
trailing slash.

You seem to be assuming POSIX file naming rules. In that case,
just take advantage of the POSIX rule that // at the beginning of
a file name is special, and write "//http://..." instead of
"http://...".
 

Felix Palmen

* jacob navia said:
A simple language is doomed to failure because a failed language that
has grown beyond any measurable complexity level is preferred. No matter
that there are now gigabytes of unmaintainable C++ code that nobody will
ever understand.

Maybe you should learn some C++, so you know what you're talking about.
The only "unmaintainable" C++ code I have seen so far was unmaintainable
because it was "C with objects" coding style, together with this
braindead "CFoo" naming scheme for classes, probably inspired by MFC.

It's just a fact that C++ is much more popular for desktop software than
C nowadays, and there's nothing wrong with that. The concepts of C++
match those of GUIs quite well...

But anyway... I do not want to start again this flame war and I will not
answer any replies from the regulars in this group.

Oh, really? Then I should help you resist: Right now, I'm writing a
little game in ISO C89, using SDL. It's strictly object-oriented and
uses an Event propagation scheme similar to the one found in .NET.
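
Not my actual game code, but a rough C89 sketch of the idea: an "event" is
just a list of handler callbacks that an object invokes when something
happens.

#include <stdio.h>

#define MAX_HANDLERS 8

typedef void (*EventHandler)(void *sender, void *args);

typedef struct Event {
    EventHandler handlers[MAX_HANDLERS];
    int count;
} Event;

static void event_subscribe(Event *e, EventHandler h)
{
    if (e->count < MAX_HANDLERS)
        e->handlers[e->count++] = h;
}

static void event_raise(Event *e, void *sender, void *args)
{
    int i;
    for (i = 0; i < e->count; i++)
        e->handlers[i](sender, args);
}

static void on_click(void *sender, void *args)
{
    (void)sender;
    (void)args;
    puts("button clicked");
}

int main(void)
{
    Event click = { {0}, 0 };
    event_subscribe(&click, on_click);
    event_raise(&click, NULL, NULL);
    return 0;
}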

Regards,
Felix
 

Felix Palmen

* Ronald S. Nesbit said:
We live in an internet age. The distinction between files and
hyperdocuments on an external network hardly means anything any more.

Yet C is stuck with only FILE* access to local filesystems. Want to read
a HTML document or an FTP file? You need an external library for that -
different and incompatible libraries on Windows or Unix or Apple.

I think I get it ... let's include protocol handling for http, https,
ftp, sftp, gopher, rtsp (at least!) in the standard lib. And, of course,
let's include OpenSSL, too. Great idea, wise guy, maybe you should
protect your intellectual property!
 

Chris H

Ronald S. Nesbit said:
We live in an internet age. The distinction between files and
hyperdocuments on an external network hardly means anything any more.

Yet C is stuck with only FILE* access to local filesystems. Want to read
a HTML document or an FTP file? You need an external library for that -
different and incompatible libraries on Windows or Unix or Apple.

The next C standard urgently needs to include URL* as a first-class
object built right into the language, with uopen(), uclose(), uread(),
uwrite() functions to open and close internet documents right within the
C language. It's vital to the continuation of C's popularity as a DeskTop
programming language.


C is not a desktop language these days... C++ overtook it and is itself
being replaced by C#, C++/CLI, Java and other languages.

The main use of C is in the embedded world (and this is a huge market),
but that world has stayed with C90/95 rather than C99.

Most embedded systems don't have Internet access or a file system,
though there is a swing towards it. For those that need it, there are
libraries.

You do NOT want to add things to the core of C; no matter what you add,
most people will not want it. So libraries are the answer. Otherwise we
should obviously make CAN part of C.....
 

Malcolm McLean

"http:" is a perfectly good name for a directory, and "//" is expected to act
like "/", i.e. a simple directory separator, because appending "/file" to a
directory string must work whether or not the directory string already had a
trailing slash.

You're thinking in set theory, not like a human. Loss of "http:" as a
directory name is a very small price to pay for transparent access to
the internet.
 

Rui Maciel

Chris said:
C is not a desktop language these days... C++ overtook it and is itself
being replaced by C#, C++/CLI, Java and other languages.

Can you point out which mainstream desktop applications were written in C#, C++/CLI or Java? I
can't think of a single relevant desktop application which is written in any of those languages.


<snip/>


Rui Maciel
 

Rui Maciel

Malcolm said:
You're thinking in set theory, not like a human. Loss of "http:" as a
directory name is a very small price to pay for transparent access to
the internet.

What's wrong with parsing a given string and, depending on whether it's a path to a local file or a
URI, using the appropriate code to access the resource?
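
Something like this is what I mean (a sketch; url_fetch() is a made-up
stand-in for whatever network code you plug in):

#include <stdio.h>
#include <string.h>

/* Placeholder for the network side; real code might use libcurl here. */
static FILE *url_fetch(const char *uri)
{
    (void)uri;
    return NULL;
}

/* Dispatch on the string: known scheme prefixes go to the URI code,
   everything else is treated as a local path. */
FILE *open_resource(const char *name)
{
    if (strncmp(name, "http://", 7) == 0 ||
        strncmp(name, "https://", 8) == 0 ||
        strncmp(name, "ftp://", 6) == 0)
        return url_fetch(name);

    return fopen(name, "rb");
}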


Rui Maciel
 

Chris H

Rui Maciel said:
Can you point out which mainstream desktop applications were written in
C#, C++/CLI or Java? I
can't think of a single relevant desktop application which is written
in any of those languages.

Define "relevant" :) I thought most MS products were done using
C++/CLI and C#? A lot of web stuff is done with Java? I thought Eclipse
was Java?

In any event C is alive and well and still the most common for embedded
work. C++ is coming in on the larger MCUs in the less critical areas.
 

jacob navia

On 08/09/10 12:32, Rui Maciel wrote:
Can you point out which mainstream desktop applications were written in C#, C++/CLI or Java? I
can't think of a single relevant desktop application which is written in any of those languages.


<snip/>


Rui Maciel

Please do not argue with facts. They are stubborn and it is impossible
to "optimize" them away.

:)
 

Kenny McCormack

You're thinking in set theory, not like a human. Loss of "http:" as a
directory name is a very small price to pay for transparent access to
the internet.

True, but valid nonetheless (particularly in the context of this silly
newsgroup).

Not a big concern in the MS world, which long ago declared that it was
too much trouble to write "/dev/" when referencing a device, so they
allowed you to just use things like "nul", "prn", "aux", etc. Compared
to that boondoggle, reserving strings starting with "http:" is minor.

Footnote to the above: I think (but haven't tested lately) that in early
versions of DOS, any string *beginning* with any of the above 3 character
strings became reserved as well. This (luckily) isn't true anymore in
current versions of DOS.

--
"The anti-regulation business ethos is based on the charmingly naive notion
that people will not do unspeakable things for money." - Dana Carpender

Quoted by Paul Ciszek (pciszek at panix dot com). But what I want to know
is why is this diet/low-carb food author doing making pithy political/economic
statements?

Nevertheless, the above quote is dead-on, because, the thing is - business
in one breath tells us they don't need to be regulated (which is to say:
that they can morally self-regulate), then in the next breath tells us that
corporations are amoral entities which have no obligations to anyone except
their officers and shareholders, then in the next breath they tell us they
don't need to be regulated (that they can morally self-regulate) ...
 

Stephen Sprunk

You're thinking in set theory, not like a human. Loss of "http:" as a
directory name is a very small price to pay for transparent access to
the internet.

The problem is not the loss of one filename but the loss of dozens
already and potentially hundreds or even thousands in the future as more
URL schemes are defined--which current software and users don't know
anything about.

If you want to hack this ability into fopen(), it would have to be done
using a syntax that is guaranteed to be illegal today, and AFAIK there
can't be any such guarantee because the contents of the filename string
are implementation-defined.

S
 

Ben Bacarisse

It's not lost altogether. "./http:" would, presumably, get access to it.

What's wrong with parsing a given string and, depending on whether it's
a path to a local file or a URI, using the appropriate code to access
the resource?

It depends how it's done. Alan Curry's point is that

fopen("http://www.google.com", "r")

already means something (on many systems) that has nothing to do with
access to the WWW so some existing code has to have its meaning changed.

If you mean that all local files would now need the "file:" prefix, then
the problem would be the huge volume of broken code that would need to
be patched.

If you planned on using some hybrid system where only ambiguous names need
to be altered, for example by insisting that the above be re-written as

fopen("./http://www.google.com", "r")

then the disruption would probably be manageable.

But these are the surface problems. There are deeper questions about
HTTP headers, character encodings and so on. Given the simplicity and
ubiquity of libraries like cURL, it seems inappropriate to mandate an
interface that would, by necessity, be so very limited.
 

Shao Miller

Felix said:
... ... ... Right now, I'm writing a
little game in ISO C89, using SDL. It's strictly object-oriented and
uses an Event propagation scheme similar to the one found in .NET.

Is this open source, by any chance, or not? :)
 
