No. BNF describes context free grammars which define the languages
recognised by push-down automata. This is a larger set than the
regular languages recognised by REs, but is strictly smaller than the
set of languages recognised by Turing machines.
Tiny bit of on-topicness: neither C nor C++ is LALR(1). They can
be fudged with a bit of fiddling so you can squeeze them through
automatic systems like YACC, but they are not technically LALR(1).
It's been a long time, but I suspect FORTRAN is even worse.
And we've now gone from "what the hell is Turing completeness" to
LALR(1) grammars without actually explaining what a Turing Machine is.
In the simplest terms, a Turing Machine is something with arbitrarily
large amounts of memory (if you can't hit the upper bound, it's large
enough), that can do additions and can branch execution conditionally.
Essentially, if it can do increments, GOTOs, and IFs, it's a Turing
Machine. (This is _not_ an exact definition, but should be "good
enough").
A Turing Complete language is one in which you can express (and
compute) anything that you could do with a Turing Machine. Most modern
programming languages aim at simulating a Turing Machine with loads of
syntactic sugar. As in, a while loop is essentially a formalization of
"if something, goto top of loop", a for loop is a very specific case
of a while loop, etc etc.
Church's Lambda Calculus is based on recursive functions, and the idea
that mostly "anything" can be computed as the composition of recursive
functions. For example, you can do x + y as: "if x = 0, return y, else
return (--x) + (++y)", and then build x * y on top as "if x = 1,
return y, else return (((--x) * y) + y)". Turing later proved that
Turing Machines and Lambda Calculus are equivalent: what can be
computed with one can be computed with the other. (The Church-Turing
thesis is the broader conjecture that these models capture everything
that is effectively computable.) LisP is basically an implementation
of sugared-up Lambda Calculus.
Also, all useful programming languages are Turing Complete (some, or
perhaps most, markup languages, like HTML and CSS, are not).
Turing Completeness, when referring to actual languages you're liable
to use, is only useful when it's actually surprising: POV-Ray's SDL is
Turing Complete, whereas VRML is not. TeX is Turing Complete, versus
HTML, which is not.
Directly on topic, in my mind the question of whether C is a newbie-
friendly language can be answered thus: "it's a brilliant language if
you want to learn how computers work, it's crap if you want to learn
how to program". If you want to know about computers, dealing with
pointers, having absolutely no safety nets around buffer overflows,
etc. are all features. If you want to learn how to program, you
probably want a language that lets you worry about the logic rather
than fussing over the computer's innards (at least not just yet).
Personally, I'd suggest Python (yes, indenting delineates blocks, but
you'd likely indent the code like that anyway); some academics would
steer you towards Scheme (which is a LisP dialect).