Which book is better for a beginner to study the C language?


Ben Bacarisse

JosephKK said:
I would not be too sure of that. Backus-Naur Form is Turing complete
is it not?

No. BNF describes context free grammars which define the languages
recognised by push-down automata. This is a larger set than the
regular languages recognised by REs, but is strictly smaller than the
set of languages recognised by Turing machines.
JosephKK said:
And no possible RE system is not representable in BNF.
It is an attribute of the language, not one of the parser. I have yet
to meet a language (expressed in an LR script) that I could not write a
LALR(1) parser for. This includes C, C++, BASIC, ALGOL, FORTRAN,
COBOL, Ada, Pascal, FORTH, APL and every assembler I have seen. I
have heard reports that early FORTRAN and COBOL versions were not
Turing Complete; I do not know myself.

Tiny bit of on-topicness: neither C nor C++ is LALR(1). They can
be fudged with a bit of fiddling so you can squeeze them through
automatic systems like YACC, but they are not technically LALR(1).
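The classic sticking point is the declaration/expression ambiguity. Consider this fragment (a standard textbook example, not from the original post):

    T * p;

If T has been declared as a type with typedef, this declares p as a pointer to T; otherwise it is a multiplication whose result is discarded. A parser can only tell the two apart with feedback from the symbol table into the lexer (the so-called "lexer hack"), which falls outside what a pure LALR(1) grammar can express.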

It's been a long time, but I suspect FORTRAN is even worse.
 

Richard Bos

Keith Thompson said:
Doesn't true Turing-completeness require infinite storage?

In theory, yes, but of course nobody has infinite storage, and nobody
_can_ have infinite storage. Therefore, a language is usually considered
Turing-complete for practical purposes if it would be theoretically
Turing-complete given an infinite universe to work with. So C is Turing-
complete, even though no single C implementation is, or can be; the
fault lies not with dmr, but with our finitely-sized universe.
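One way to see the finiteness in C itself: pointer width is fixed at translation time, so the addressable store is bounded. A minimal illustration (my sketch, not part of the original post):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* At most 2^(CHAR_BIT * sizeof(void *)) distinct addresses
           can exist, so no C implementation has unbounded storage. */
        printf("void * is %zu bits wide\n", CHAR_BIT * sizeof(void *));
        return 0;
    }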

Richard
 

Richard Bos

Richard said:
What is this total nonsense?

It is something which anyone with even a superficial education in
computation theory would recognise.

Tell me, does the name Markov mean anything to you? Church? Curry?

Richard
 

Richard Bos

Richard said:
To despise a language because of the indentation strikes me as rather
silly.

Which is why I don't, and didn't write that. I _dislike_ Python, which
is not the same thing as _despising_ it.

Richard
 

pdpi

Ben Bacarisse said:
No.  BNF describes context free grammars which define the languages
recognised by push-down automata.  This is a larger set than the
regular languages recognised by REs, but is strictly smaller than the
set of languages recognised by Turing machines.



Tiny bit of on-topicness: neither C nor C++ is LALR(1).  They can
be fudged with a bit of fiddling so you can squeeze them through
automatic systems like YACC, but they are not technically LALR(1).

It's been a long time, but I suspect FORTRAN is even worse.

And we've now gone from "what the hell is Turing completeness" to
LALR(1) grammars without actually explaining what a Turing Machine is.

In the simplest terms, a Turing Machine is something with arbitrarily
large amounts of memory (if you can't hit the upper bound, it's large
enough), that can do additions and can branch execution conditionally.
Essentially, if it can do increments, GOTOs, and IFs, it's a Turing
Machine. (This is _not_ an exact definition, but should be "good
enough").

A Turing Complete language is one in which you can express (and
compute) anything that you could do with a Turing Machine. Most modern
programming languages aim at simulating a Turing Machine with loads of
syntactic sugar. As in, a while loop is essentially a formalization of
"if something, goto top of loop", a for loop is a very specific case
of a while loop, etc etc.
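For instance (my illustration, not from the original post), these two C functions compute the same thing, the second being roughly what the first desugars to:

    void drain(int n)
    {
        while (n > 0)       /* the sugared form */
            n--;
    }

    void drain_desugared(int n)
    {
    top:                    /* what the compiler effectively sees */
        if (n > 0) {
            n--;
            goto top;
        }
    }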

Church's Lambda calculus is based on recursive functions, and the idea
that mostly "anything" can be computed as the composition of recursive
functions. For example, you can do x + y as: "if x = 0, return y, else
return (--x) + (++y)", and then build x * y on top as "if x = 1,
return y, else return (((--x) * y) + y)". The Church-Turing thesis
later proved the equivalence between Turing Machines and Lambda
Calculus. That is, what can be computed with one can be computed with
the other. LisP is basically an implementation of sugared-up Lambda
Calculus.
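pdpi's definitions transcribe directly into C (a sketch assuming non-negative operands; note that the x * y base case, like the post's, only covers x >= 1):

    /* x + y by recursion on x: if x == 0 the sum is y. */
    unsigned add(unsigned x, unsigned y)
    {
        return x == 0 ? y : add(x - 1, y + 1);
    }

    /* x * y built on add(): if x == 1 the product is y. */
    unsigned mul(unsigned x, unsigned y)
    {
        return x == 1 ? y : add(mul(x - 1, y), y);
    }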

Also, all useful programming languages are Turing Complete (some, or
perhaps most, markup languages, like HTML, CSS, etc etc are not).
Turing Completeness, when referring to actual languages you're liable
to use, is only useful when it's actually surprising: POV-Ray's SDL is
Turing Complete, whereas VRML is not. TeX is Turing Complete, versus
HTML, which is not.

Directly on topic, in my mind the question of whether C is a newbie-
friendly language can be answered thus: "it's a brilliant language if
you want to learn how computers work, it's crap if you want to learn
how to program". If you want to know about computers, dealing with
pointers, having absolutely no safety nets around buffer overflows,
etc. are all features. If you want to learn how to program, you
probably want a language that lets you worry about the logic rather
than fussing over the computer's innards (at least just yet).
Personally, I'd suggest Python (yes, indenting delineates blocks, but
you'd likely indent the code like that anyway); some academics would
lead you towards Scheme (which is a LisP dialect).
 

Ben Bacarisse

pdpi said:
Church's Lambda calculus is based on recursive functions, and the idea
that mostly "anything" can be computed as the composition of recursive
functions. For example, you can do x + y as: "if x = 0, return y, else
return (--x) + (++y)", and then build x * y on top as "if x = 1,
return y, else return (((--x) * y) + y)". The Church-Turing thesis
later proved the equivalence between Turing Machines and Lambda
Calculus. That is, what can be computed with one can be computed with
the other. LisP is basically an implementation of sugared-up Lambda
Calculus.

I'd like to correct a term in this otherwise excellent summary. The
Church-Turing _thesis_ is an unproved (and unprovable) speculation
about what it means to compute (or be computable).

The equivalence of lambda calculus and Turing machines is, as you say,
an established theorem (first published by Turing in 1937 I think) but
that is not quite the same thing.
 

pdpi

Ben Bacarisse said:
I'd like to correct a term in this otherwise excellent summary.  The
Church-Turing _thesis_ is an unproved (and unprovable) speculation
about what it means to compute (or be computable).

The equivalence of lambda calculus and Turing machines is, as you say,
an established theorem (first published by Turing in 1937 I think) but
that is not quite the same thing.

Good point. I think it was Kleene who proved it.
 

Ben Bacarisse

pdpi said:
On May 21, 11:30 pm, Ben Bacarisse <[email protected]> wrote:
Good point. I think it was Kleene who proved it.

That was my recollection too, but in a 1939 paper, J B Rosser (who can
be considered well-informed on the matter) claimed it was done in:

A. M. Turing, "Computability and λ-definability", Journal of
Symbolic Logic, vol. 2 (1937), pp. 153-163.

In fact, the outline of the equivalence was sketched out in Turing's
original 1936/37 paper (something I'd forgotten) and this later paper
seems to be a very detailed argument. There would not have been much
time for Kleene to get a publication out between these two.
 

pdpi

Ben Bacarisse said:
That was my recollection too, but in a 1939 paper, J B Rosser (who can
be considered well-informed on the matter) claimed it was done in:

  A. M. Turing, "Computability and λ-definability", Journal of
  Symbolic Logic, vol. 2 (1937), pp. 153-163.

In fact, the outline of the equivalence was sketched out in Turing's
original 1936/37 paper (something I'd forgotten) and this later paper
seems to be a very detailed argument.  There would not have been much
time for Kleene to get a publication out between these two.

According to Wikipedia[1]: "The three computational processes
(recursion, λ-calculus, and Turing machine) were shown to be
equivalent by Alonzo Church, Stephen Kleene and J.B. Rosser (1934–6)
and by Alan Turing (1936–7)", which is what I based my comment on.
Digging this stuff up brings back memories, like implementing Lambda
calculus by hand (for a functional programming course in college).
This is still an old favourite of mine; very few programming
assignments you can get in college can teach you the importance of
programming for performance while still teaching you loads of
_computer science_.

[1] - http://en.wikipedia.org/wiki/Church-Turing_thesis
 

pdpi

pdpi said:
According to Wikipedia[1]: "The three computational processes
(recursion, λ-calculus, and Turing machine) were shown to be
equivalent by Alonzo Church, Stephen Kleene and J.B. Rosser (1934–6)
and by Alan Turing (1936–7)", which is what I based my comment on.

[1] - http://en.wikipedia.org/wiki/Church-Turing_thesis

Hmmm, seems my reading comprehension failed. Rereading, it seems Turing
proved his machine was equivalent to recursion and λ-calculus.
 

Tim Rentsch

pdpi said:
[snip]

And we've now gone from "what the hell is Turing completeness" to
LALR(1) grammars without actually explaining what a Turing Machine is.

In the simplest terms, a Turing Machine is something with arbitrarily
large amounts of memory (if you can't hit the upper bound, it's large
enough), that can do additions and can branch execution conditionally.
Essentially, if it can do increments, GOTOs, and IFs, it's a Turing
Machine. (This is _not_ an exact definition, but should be "good
enough").

A Turing Complete language is one in which you can express (and
compute) anything that you could do with a Turing Machine. Most modern
programming languages aim at simulating a Turing Machine with loads of
syntactic sugar. As in, a while loop is essentially a formalization of
"if something, goto top of loop", a for loop is a very specific case
of a while loop, etc etc.

Church's Lambda calculus is based on recursive functions, and the idea
that mostly "anything" can be computed as the composition of recursive
functions. For example, you can do x + y as: "if x = 0, return y, else
return (--x) + (++y)", and then build x * y on top as "if x = 1,
return y, else return (((--x) * y) + y)". The Church-Turing thesis
later proved the equivalence between Turing Machines and Lambda
Calculus. That is, what can be computed with one can be computed with
the other. LisP is basically an implementation of sugared-up Lambda
Calculus.

Normally I stay away from postings like this one, but there were
just too many misstatements here.

A Turing Machine is an abstract concept that was intended to
mirror human capabilities for computation; the idea is that if a
Turing Machine could calculate something then a person certainly
could (given enough time, of course). The Turing Machine model
is /very/ simple; there is no direct notion of addition,
incrementing, GOTO, or IFs. Turing Machines do include a
fundamental concept of "state change", but that isn't the same as
either GOTO or IF as Turing Machines don't have sequential
execution in the way that real computers do.
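To make the model concrete, here is a toy simulator in C (my own sketch, not Tim's; the machine below just flips bits until it reads a blank, and the tape is bounded only because real memory is):

    #include <stdio.h>
    #include <string.h>

    #define TAPE  32
    #define BLANK '_'

    int main(void)
    {
        char tape[TAPE];
        int head = 0;
        enum { FLIP, HALT } state = FLIP;

        memset(tape, BLANK, TAPE);
        memcpy(tape, "1011", 4);        /* initial tape contents */

        /* The transition function maps (state, symbol read) to
           (symbol written, head movement, next state). */
        while (state != HALT) {
            if (tape[head] == '0')      { tape[head] = '1'; head++; }
            else if (tape[head] == '1') { tape[head] = '0'; head++; }
            else                        state = HALT;  /* blank: stop */
        }
        printf("%.*s\n", TAPE, tape);   /* prints 0100 then blanks */
        return 0;
    }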

It's true that most programming languages (as the term is
generally understood) are Turing complete, but no widespread
programming language works by simulating a Turing Machine, with
or without syntactic sugar. Being Turing complete is a property
that is proved after the fact, not designed in at the beginning.

The Lambda Calculus does have "functions" and it does have
function application, but it doesn't have any notion of recursive
functions (functions in the Lambda Calculus are anonymous); it
also has no built-in notion of 'if'. Certainly Lisp drew from
the Lambda Calculus, but Lisp is not just a sugared version of
the Lambda Calculus.
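Recursion is instead recovered indirectly, through a fixed-point combinator such as

    Y = λf. (λx. f (x x)) (λx. f (x x))

which satisfies Y g = g (Y g), so an anonymous function can be handed "itself" as an argument without ever being named.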

Abstract function theory is an /axiomatic/ basis for what
constitutes a computable function. Abstract function theory
is not a programming model, just a basis for reasoning about
what functions must be included in certain abstract sets of
functions.

General rewrite grammars are another model for specifying
computation. General rewrite grammars are just like the
familiar BNF (context free) grammars, except that general
rewrite grammars allow multiple symbols on the left hand
side of a production, including rules that have more symbols
on the left hand side than on the right.
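A standard example needing multiple symbols on the left is a grammar for the language { a^n b^n c^n : n >= 1 }, which no context-free grammar can generate:

    S   -> a S B C | a B C
    C B -> B C
    a B -> a b
    b B -> b b
    b C -> b c
    c C -> c c

(This one is context-sensitive, i.e. non-contracting; fully general rewrite grammars also allow rules that shrink the string.)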

The remarkable thing is that all of these different models of
computation produce exactly the same set of possible
computations. It's harder to prove in some cases than in
others, but in fact the different formal models are all
equivalent in terms of their computational power.

One more comment, on a more humorous note.

Remembered from a talk I went to, /many/ years ago:

FORTRAN is LR(infinity).
 
