Good book

Discussion in 'C Programming' started by Bill Cunningham, May 13, 2014.

  1. I just thought I'd post this if anyone had a real interest. If not, well,
    fine. The flame trolls should love it. kandr2 is out the window for me.
    Well, I might keep it as a reference, but it's /terrible/ for learning.
    About 2-3 weeks ago I found a much simpler, older book from '94 called "C
    Programming in 12 Easy Lessons" by Greg Perry. It's *great*. It actually is
    a tutorial and not a reference. Good for amateurs and it's a start in the
    right direction.

    Bill
    FYI for those who care.
     
    Bill Cunningham, May 13, 2014
    #1

  2. I suppose it depends on "where your mind is at". I have usually found books
    on programming languages that were written by the creators of the
    language... to be the best for explaining the language. For some people,
    these types of books do *not* seem to resonate though.
     
    Charles Richmond, May 13, 2014
    #2

  3. (snip, someone wrote)
    I think I learned C mostly from K&R (1, though not so long before 2).

    I also learned Fortran from the IBM Fortran IV Reference Manual,
    though I know many people who dislike IBM manuals.

    In general, I like the original source, but many people like
    other references better.

    -- glen
     
    glen herrmannsfeldt, May 13, 2014
    #3
  4. Some people have said that K&R is not a good book to use if C is your
    first programming language, because it assumes a certain minimum amount
    of familiarity with programming. I wouldn't know - I had already learned
    Fortran I, Basic, and APL before I learned C. I found K&R clear, easy to
    understand, and fairly comprehensive.
     
    James Kuyper, May 13, 2014
    #4
  5. I have spent *a lot* of time with kandr2 and it didn't even read well.
    It was like a book to look at to refresh a language you already know -
    like a reference, as I said /supra/. But this other book made pointers
    very clear. Now the next step is exactly how to use them to change values
    and such - pointer notation, pointer arithmetic (see the sketch below).
    I'll take it lesson by lesson now that I have some time to devote to C
    again.

    Bill
     
    Bill Cunningham, May 13, 2014
    #5
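    (A minimal sketch of the pointer basics Bill mentions above - using a
    pointer to change a value, and stepping through an array with pointer
    arithmetic. A generic illustration, not an excerpt from either book:)

        #include <stdio.h>

        int main(void)
        {
            int x = 10;
            int *p = &x;                  /* p holds the address of x */

            *p = 42;                      /* writing through p changes x */
            printf("x is now %d\n", x);

            int a[4] = {1, 2, 3, 4};
            for (int *q = a; q < a + 4; q++)   /* pointer arithmetic walks a */
                printf("%d ", *q);
            putchar('\n');
            return 0;
        }
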
  6. (snip)
    Well, even more, C pointers are easy to learn if you have been
    doing assembly programming beforehand. You get used to thinking
    in terms of addresses of things, instead of just things.

    There are books for many languages that start out from the most basic
    features of computing, such as binary arithmetic, and build up
    from there. If you do already know some programming, you get tired
    of them pretty fast.

    -- glen
     
    glen herrmannsfeldt, May 14, 2014
    #6
  7. Exactly.
    I learnt C after I had learnt assembly programming. So it was "what's
    that funny unary multiplication sign doing?" "oh, it's the indirection
    operator" (see the sketch below).
    If you're new to programming, of course, being told that C uses an
    asterisk for indirection isn't in the least bit helpful.
     
    Malcolm McLean, May 14, 2014
    #7
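    (A tiny illustration of the two meanings of the asterisk Malcolm
    describes - binary * is multiplication, unary * is indirection. Generic
    example:)

        #include <stdio.h>

        int main(void)
        {
            int x = 6;
            int *p = &x;     /* unary * in a declaration: p points to int */
            int y = x * 2;   /* binary *: multiplication, y == 12 */
            int z = *p + 1;  /* unary * in an expression: indirection, z == 7 */
            printf("%d %d\n", y, z);
            return 0;
        }
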
  8. When I first learned C, I found K&R to be a very useful book. However, I
    already had experience with several languages prior to C, and wanted a book
    that explained "The C Language", and not "programming".

    Maybe it's just me, but I don't think so.
     
    Ken Brody, May 14, 2014
    #8
  9. On 5/13/2014 5:22 PM, James Kuyper wrote:
    [...]
    I first learned programming with "Basic BASIC".

    Fortunately, it (and APL, too) didn't ruin me for "real" languages. :)
     
    Ken Brody, May 14, 2014
    #9
  10. Unfortunately, many people learning programming nowadays have no concept of
    what "assembly language" is, let alone are able to actually code in it.

    And a lot have the attitude of "why would you want to code in $FOO, when
    $BAR can do everything you want?"
     
    Ken Brody, May 14, 2014
    #10
  11. On 05/14/2014 05:55 PM, Ken Brody wrote:
    ....
    I'm not entirely sure that's a bad thing. I've learned three assembly
    languages in my day, used all three in projects of varying complexity,
    and have made little or no use of anything that I learned while working
    on those projects at any time in the past two decades (except in this
    newsgroup, where some of my comments have been informed by those
    experiences). There needs to be someone who knows the details of
    assembly language, but as more and more programming is done in
    higher-level languages, I don't think it's necessary that all
    programmers have such knowledge. Basic concepts that underlie assembly
    language programming can be helpful to programmers using higher-level
    languages, but detailed knowledge of the syntax and semantics of a
    particular assembly language is needed only by those who will actually
    be writing it.
    Again, that strikes me as a fairly reasonable attitude. The only good
    reason I can think of for bothering to use $FOO rather than $BAR is that
    there is something I can do in $FOO that I can't do in $BAR (or, at
    least, that I can do more easily in $FOO than in $BAR).
     
    James Kuyper, May 14, 2014
    #11
  12. No, I think you're exactly right on this. I found the original C memo
    written by K&R (that later mutated into a chapter in the book) to be
    exactly what I needed for learning the language. If it had been my
    first language, it would have been hopeless (I don't even remember how
    many languages I'd learned by the time I came across C -- I think I'd
    even had my "language of the week" style programming languages course
    first).
     
    Joe Pfeiffer, May 14, 2014
    #12
  13. My experience is similar and I agree. I see a lot of incorrect
    assumptions made by people who "know assembly language" from the days of
    the 68K or 386 about the performance of code on modern descendants of
    those processors. The core assembly may be the same, but the machines
    underneath bear next to no resemblance to their forebears.
     
    Ian Collins, May 15, 2014
    #13
  14. I usually believe that people do better at something when they
    have a reasonable understanding of the level below the one that they
    are actually working in.

    In this case, I probably believe that knowing an assembly language,
    even if not for the actual processor in use, is enough. That is
    especially true as processors get RISCier and it is more difficult
    to understand what is actually happening, though it isn't really
    all that different.
    It helps to be able to read the generated code listings (see the
    example below), which may or may not have the same syntax as actual
    assemblers.

    (snip)

    -- glen
     
    glen herrmannsfeldt, May 15, 2014
    #14
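    (One common way to get at the generated code listings glen mentions,
    assuming gcc or a gcc-compatible compiler: compile with -S to produce an
    assembly listing. The file name is just for illustration:)

        /* example.c - compile with:  gcc -O2 -S example.c
           This writes example.s, the generated assembly, which you can read
           to see what the compiler actually emitted for this function. */
        int sum(const int *a, int n)
        {
            int total = 0;
            for (int i = 0; i < n; i++)
                total += a[i];
            return total;
        }
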
  15. (snip, I wrote)
    Hmm. I think my Java programs look more like C than those of others.

    You need a balance between being really wasteful with object creation
    and destruction (GC), and way overdoing it in hand optimizing.

    -- glen
     
    glen herrmannsfeldt, May 15, 2014
    #15
  16. The biggest incorrect assumption I see about assembly is the idea that
    any C code could be re-written in assembly to make it faster.
    Occasionally it makes sense to do this, but only occasionally.

    I think knowledge of assembly is more important for smaller processors -
    if you are using bigger and more complex processors, you can usually
    ignore the low-level details. But on smaller devices, understanding the
    assembly - and in particular, understanding the processor architecture -
    can make a significant difference to the size and performance of your
    code. Obvious examples: if you know your processor has only
    single-precision hardware floating point, don't use doubles if you can
    avoid them (see the sketch below). If your chip is 16-bit, don't use
    32-bit types if you don't need them.

    Less obvious examples would be knowing the number of pointer registers
    available, and taking that into account in the code, or knowing the range
    of "static pointer + index" addressing, and considering that when deciding
    whether a temporary array goes on the stack or is statically allocated.

    And of course when things don't work, or don't work fast enough, it can
    be useful to examine the generated assembly code.

    I also think that a background in assembly programming gives a developer
    better insight into what is happening under the hood, and the resulting
    code is often more efficient. But there is a danger that people get too
    carried away, and write code full of "micro-optimisations" which are
    detrimental to the clarity, correctness or maintainability of the code,
    which are unnecessary with modern tools, and might be pessimisms on
    newer processors. There is a balance to be struck.
     
    David Brown, May 15, 2014
    #16
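    (A sketch of the float-versus-double point above. On a chip with only
    single-precision hardware floating point, keeping the whole expression in
    float - including the constant - avoids promoting to double and pulling
    in software double-precision routines. Hypothetical example, not tied to
    any particular part:)

        float scale(float x)
        {
            /* 2.5f keeps the arithmetic in single precision; plain 2.5 is a
               double constant and would promote the whole expression to
               double, dragging in soft-float double support on many small
               targets. */
            return x * 2.5f;
        }
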
  17. That was basically what I was hinting at.
    Indeed. If I were to look at some of my old 8- and 16-bit device code, I
    would find it full of code tailored to the processor architecture.
    That can be the top of a slippery slope, given that different members of
    the same family have different register sets. Writing code that
    constrains performance on newer or bigger versions of the processor
    should not be undertaken lightly.
    This often helps on smaller or older processors, where assembly maps
    directly to the hardware, but on RISC, or pseudo-CISC on a RISC core
    (current x86 chips), it can lead you down the path of pseudo-understanding!
    Not really on bigger CPUs. Understanding the processor and system
    architecture is way more important when writing in a higher level
    language (or C!).
    I would say nearly always rather than might.
     
    Ian Collins, May 15, 2014
    #17
  18. "Obvious" is in the eye of the beholder. I know that /I/ like to know
    things like the numbers and types of registers in a cpu, but many other
    people don't. But things like floating point support are often listed
    clearly as features for chips - therefore I rated it as more "obvious".
    This will vary. Among other things, it depends on the complexity of the
    chip and the experience of the developer. If you are familiar with the
    assembly language in question, you will find it natural to look at the
    generated assembly more often because it is easy to do. But if you are
    not familiar with it, and especially if it is a complicated cpu, then
    you usually want to avoid seeing the assembly.
    Agreed - and also tempered with the knowledge that compilers know lots
    of tricks too, so you don't need to hand-optimise the C code. There are
    people who write "(x << 2) + x" because they "know" that this will be
    faster on their cpu with limited multiply support than writing "x * 5".
    Usually, they are wrong - if shifts and adds are faster, the C compiler
    will generate them, and it can possibly take greater advantage of the
    clearer and simpler source code for other optimisations (see the example
    below).
     
    David Brown, May 15, 2014
    #18
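    (The multiply example David mentions, for concreteness. Write the obvious
    form and let the compiler pick shifts and adds if they really are faster
    on the target:)

        /* Preferred: clear source; the compiler will lower this to
           shift-and-add (or an lea on x86) when that is faster. */
        unsigned times5(unsigned x)
        {
            return x * 5;
        }

        /* The hand-"optimised" version obscures the intent and is usually
           no faster with a modern compiler. */
        unsigned times5_by_hand(unsigned x)
        {
            return (x << 2) + x;
        }
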
  19. I am planning a project in which an old 8-bit processor with
    12-year-old C code using a rather limited compiler is going to be
    replaced with a Cortex M4 and modern gcc. The old code is full of things
    like copying data from structures into local variables because the
    compiler was not smart enough to re-use data that was loaded into
    registers (see the sketch below). I'm glad I've left that sort of thing
    behind me.
    Agreed - and it is rare that you should sacrifice clarity and
    maintainability in the name of optimisation. But sometimes, especially
    with smaller cpus, this sort of thing can make a very big difference.
    It is not uncommon for small processors to be able to work fairly well
    with two pointers at a time - but a third pointer makes code much
    slower. Keeping target limitations in mind is not a bad thing.
    I see understanding assembly as a major part of understanding the cpu
    and system architecture. But for bigger processors, it is perhaps a
    less vital part of the process than things like caches or memory
    structures, as memory and bus bandwidth is often the bottleneck rather
    than the cpu instructions.
     
    David Brown, May 15, 2014
    #19
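    (A sketch of the old-compiler workaround David describes - manually
    caching a structure member in a local so it stays in a register.
    Hypothetical structure and field names; with a modern optimising compiler
    the direct version generates equally good code:)

        struct device { int gain; int samples[64]; };

        /* Old style: copy the member into a local so a weak compiler keeps
           it in a register across the loop. */
        int total_old(const struct device *d)
        {
            int gain = d->gain;
            int sum = 0;
            for (int i = 0; i < 64; i++)
                sum += d->samples[i] * gain;
            return sum;
        }

        /* Modern style: just use the member; the optimiser hoists the load
           itself, since nothing in the loop can modify it. */
        int total_new(const struct device *d)
        {
            int sum = 0;
            for (int i = 0; i < 64; i++)
                sum += d->samples[i] * d->gain;
            return sum;
        }
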
  20. It used to be a lot faster to pad a 2D array out to a sum of two powers
    of 2, then access it via (y << 8) + (y << 2) + x. I haven't actually
    checked, but I wouldn't be at all surprised if plain y * width + x, with a
    soft (runtime) width, is now equally fast or even faster (see the sketch
    below).

    However it could easily go back again as chip architecture develops. If
    you have experience of working at the instruction level, you're aware of
    these potential issues.
     
    Malcolm McLean, May 15, 2014
    #20
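    (A sketch of the padded-row trick Malcolm describes, assuming a row width
    padded to 260 = 256 + 4 so the row offset becomes two shifts and an add,
    next to the plain multiply that modern hardware usually handles just as
    well:)

        enum { PADDED_WIDTH = 260 };   /* 256 + 4, a sum of two powers of 2 */

        /* Old trick: replace y * 260 with shifts and adds. */
        int index_padded(int x, int y)
        {
            return (y << 8) + (y << 2) + x;
        }

        /* Straightforward form: on current hardware the multiply is
           typically just as fast, and the width need not be padded. */
        int index_plain(int x, int y, int width)
        {
            return y * width + x;
        }
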