"null Considered Harmful"

Discussion in 'C Programming' started by Lynn McGuire, Dec 11, 2013.

  1. The reason is always that either the customer didn't have enough memory
    installed, or some other process was taking up the memory.
     
    Malcolm McLean, Dec 13, 2013
    #41

  2. So if a program has a memory leak, the solution is to just keep adding
    more memory?!

    It can also happen that a wrong choice of algorithm is using up too much
    memory, perhaps for intermediate results. In either case the right
    response is to fix the software, not to upgrade the hardware.
     
    BartC, Dec 13, 2013
    #42

  3. That is a planning failure masquerading as a software bug.
     
    Les Cargill, Dec 13, 2013
    #43
  4. OTOH: it can also be because the app is 32-bit, and there is only 3GB of
    address space available for 32-bit apps, effectively limiting the
    maximum amount of allocated memory to around 2.0 - 2.5 GB (the remaining
    space being needed for things like stacks and the program
    binaries/DLLs/... and similar).


    also, the app can be designed in such a way that it actually *uses* most
    of the address space, without that being a leak.

    for example, a voxel-based 3D engine can eat up lots of RAM for things
    like voxel data and similar (lots of 3D arrays).

    in such a case, memory allocation failure may then effectively mean "no,
    your heap isn't getting any bigger", and may have to be dealt with
    gracefully.

    nevermind that it pages really badly on older computers that don't have a
    lot of RAM installed, and requires ~8GB-16GB of swap space in these
    cases, ...


    I have used it successfully on an old laptop with 1GB of RAM though (set
    up for 8GB swap), and it sort of runs passably (if the player doesn't
    run around too much, the swap can mostly keep up).

    FWIW, this laptop can't really run Minecraft either...
     
    BGB, Dec 13, 2013
    #44
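
    A minimal sketch (in C++) of the graceful-failure handling BGB describes;
    the names VoxelChunk and evict_distant_chunks are hypothetical:

        #include <new>      // std::nothrow

        struct VoxelChunk { unsigned char voxels[32 * 32 * 32]; };

        // Hypothetical eviction hook: a real engine would free chunks far
        // from the player before giving up.
        static void evict_distant_chunks() { /* ... */ }

        VoxelChunk *alloc_chunk()
        {
            VoxelChunk *c = new (std::nothrow) VoxelChunk;  // nullptr on failure, no throw
            if (!c) {
                evict_distant_chunks();             // "your heap isn't getting any bigger"
                c = new (std::nothrow) VoxelChunk;  // retry once after freeing what we can
            }
            return c;   // caller must still tolerate nullptr: skip the chunk, don't crash
        }

        int main()
        {
            VoxelChunk *c = alloc_chunk();
            delete c;   // deleting nullptr is a no-op
            return 0;
        }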
  5. LC> So I have habits that *preclude* that sort of thing. It's too
    LC> detailed, but enforce your constraints with the furniture
    LC> provided by the language system.

    But that's exactly my point: for some problem domains, taking 25% as long
    to write the code (because things can be expressed tersely) but having the
    code take 4 times longer to run (because the runtime is handling
    exceptions, checking array bounds, and validating after each operation
    that constraints are true) is a very desirable tradeoff.

    This is why there are many languages and many development frameworks and
    many development environments. The sweet spot for tradeoffs on a
    multi-user 248K PDP-11 is not the same as the sweet spot for tradeoffs
    on a 1MB Mac Plus is not the same as the sweet spot for tradeoffs on a
    512MB iPhone 5.

    Charlton
     
    Charlton Wilbur, Dec 13, 2013
    #45
  6. That's true for a GUI app that essentially provides a nice user interface
    to a few trivial calculations. Which is a lot of software, but not
    everything.

    If you start attacking NP-complete, O(2^N) problems, then you often find
    that you can get a reasonable answer in reasonable time, at the cost of a
    huge amount of memory. Quite often that memory is in a tree or similar
    structure that naturally maps to billions of allocations of small chunks.
    However large your machine, you can swiftly exhaust the memory.
     
    Malcolm McLean, Dec 13, 2013
    #46
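
    Back-of-envelope support for this (a sketch; the node layout and the ~16
    bytes of per-allocation malloc overhead are assumptions, not measurements):

        #include <cstdio>

        struct SearchNode          // the kind of small chunk described above
        {
            SearchNode *child[2];  // 16 bytes on a 64-bit machine
            int bound;             // 4 bytes of payload (padded to 24 total)
        };

        int main()
        {
            const double nodes    = 2e9;                       // "billions of allocations"
            const double per_node = sizeof(SearchNode) + 16.0; // payload + assumed overhead
            std::printf("%.0f nodes x %.0f bytes = ~%.0f GB\n",
                        nodes, per_node, nodes * per_node / 1e9);
            return 0;   // ~80 GB: enough to exhaust any machine of the era
        }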
  7. Sure - that's included in what I meant. I am pretty sure it all comes
    out in the wash. I doubt you'd get 400% speedups in development
    time just from exceptions, though.

    I just don't find explicit constraint checks to be that slow to put in
    nor to test.

    If I'm *really* pressed for time, I tend to use Tcl*,
    and that sort of thing just doesn't matter, outside of
    pulling data from files or sockets.

    *path dependency...
    May the ghost of Grace Murray Hopper haunt you with her nanosecond! :)

    I mean, since it's getting to be "A Christmas Carol" season and all :)
     
    Les Cargill, Dec 14, 2013
    #47
  8. Except that on today's machines it's more often address space than
    physical memory which runs out. At least for the large chunk of programs
    still compiled to 32-bit execution environments.

    Thomas
     
    Thomas Jahns, Dec 16, 2013
    #48
  9. For some programs that might be true. For the big automatons we run,
    error recovery is usually not possible without user intervention (meaning
    the users change input or program logic). Failing early and hard is
    usually the most sane option.

    Thomas
     
    Thomas Jahns, Dec 16, 2013
    #49
  10. I suppose, but just barely.

    Reminds me of about 20 years ago when I was using a 486 machine
    with 8MB as a router (no-one wanted it for anything else) and
    put a brand new 2GB disk in it to run FreeBSD. I allocated 1GB
    for swap, so 128 times physical memory.

    But 4GB physical (installed) memory is pretty common now, though
    it doesn't cost all that much for more. With the memory used by the
    OS and other things that have to run, you really don't want more
    than 2GB allocated for a user program. Many 32-bit systems limit
    user address space to 2GB, leaving 2GB for the OS.

    If programs made better use of virtual memory, larger address
    space would be more useful.

    -- glen
     
    glen herrmannsfeldt, Dec 16, 2013
    #50
  11. TJ> For some programs that might be true. For the big automatons we
    TJ> run, error recovery is usually not possible without user
    TJ> intervention (meaning the users change input or program
    TJ> logic). Failing early and hard is usually the most sane option.

    And since there are many possible sets of circumstances, there are many
    programmers. And many programming languages.

    Charlton
     
    Charlton Wilbur, Dec 16, 2013
    #51
  12. in most computers I have seen in recent years, 8GB or 16GB has gotten a
    lot more common, with some higher-end "gamer rigs" with 32GB and similar
    (ex: 4x 8GB modules...).

    newer PCs coming with 4GB is at this point mostly laptop territory.


    my desktop PC has 16GB of RAM in it, FWIW (4x 4GB).
     
    BGB, Dec 16, 2013
    #52
  13. That's real memory, not virtual memory. I have 4 GB real memory and
    currently 180 GB virtual. Apple has switched to 64-bit virtual byte
    addresses, but the limit on the virtual address space may be smaller
    because of restricted address translation hardware. The real memory
    address is currently up to about 35 bits; hardware restrictions might
    impose a limit below 64 bits.

    The kernel and address translation hardware convert the potentially
    64-bit virtual address down to page faults or the much smaller 32-bit
    real address space on my Mac. Or the slightly larger real address space
    on the Mac next to it.
     
    Siri Cruz, Dec 16, 2013
    #53
  14. and almost completely non-viable for end-user graphical application
    software...


    if the app just exits and dumps the user off at the desktop, they are
    more likely to have a response like "WTF?!".

    better is, at least, to put up an error box ("hey, this crap has died on
    you."), or more often to attempt error recovery, often while playing a
    "ding" sound effect and/or popping up a notification box.


    many other types of applications are largely autonomous and will try to
    handle any recovery on their own, filling in any holes with a plausible
    substitute.

    this is much more common in things like games and graphical software
    (such as 3D modeling software, ...).
    "hey, this 3D model uses a material which can't be loaded?! well, just
    use some sort of generic checkerboard placeholder pattern or similar
    instead, and maybe print an error message to the in-program console."



    but, it doesn't really make much sense to have completely different
    infrastructure for command-line tools vs end-user application software,
    so usually a general compromise is needed.

    most often, this is either some sort of exception mechanism, or
    returning status indicators of some sort.
     
    BGB, Dec 16, 2013
    #54
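
    A minimal sketch of that compromise (the names here -- LoadStatus,
    load_material, report -- are all hypothetical): the core returns a status
    indicator, and each front end decides how to present it.

        #include <cstdio>

        enum class LoadStatus { Ok, NotFound, Corrupt };

        LoadStatus load_material(const char *name)
        {
            (void)name;
            return LoadStatus::NotFound;   // stand-in for a real loader
        }

        void report(LoadStatus s, const char *name, bool gui)
        {
            if (s == LoadStatus::Ok)
                return;
            if (gui)    // graphical app: substitute a placeholder, keep running
                std::printf("[dialog] '%s' missing; using checkerboard\n", name);
            else        // command-line tool: fail early and hard
                std::fprintf(stderr, "error: cannot load '%s'\n", name);
        }

        int main()
        {
            report(load_material("brick.mat"), "brick.mat", true);
            report(load_material("brick.mat"), "brick.mat", false);
            return 0;
        }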
  15. but, yes, the specific topic there was physical memory installed, which
    is at this point typically 8GB or 16GB, rather than 4GB (at least in
    newer desktop PCs, nevermind older desktop PCs or laptops).

    also, nevermind that a 32-bit process is normally limited to 2GB or 3GB,
    and an application will use up the 32-bit virtual space well before the
    available physical RAM is exhausted.


    it is a very different situation on my old 2003-era laptop, which has a
    larger virtual-address space than physical RAM (and as such using up the
    whole 2-3GB of VA space will result in considerable swapping), and this
    is only doable really because I went and turned up the swap to 8GB
    (which is about 1/6 of said laptop's HDD space as well...).
     
    BGB, Dec 16, 2013
    #55
  16. Exceptions are a good option in this case. If you can't handle the
    exception, the program will abort (fail early and fast). When you can
    (say, when there is a user to prompt), the result is better than a crash.
     
    Ian Collins, Dec 21, 2013
    #56
  17. When implemented well, they only add overhead when they are thrown. The
    normal code path should be faster and clearer without the overhead (to
    both the human reader and the machine) of error-checking code.
     
    Ian Collins, Dec 21, 2013
    #57
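
    A small sketch of both points (#56 and #57) in C++; divide and mean_ratio
    are made-up examples:

        #include <cstdio>
        #include <stdexcept>

        double divide(double a, double b)
        {
            if (b == 0.0)
                throw std::invalid_argument("division by zero");
            return a / b;
        }

        // The success path reads straight through: no status variable and no
        // "if (err) goto fail" after every call.
        double mean_ratio(const double *xs, const double *ys, int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; ++i)
                sum += divide(xs[i], ys[i]);
            return sum / n;
        }

        int main()
        {
            double xs[] = {1, 2, 3}, ys[] = {4, 5, 0};
            try {
                std::printf("%f\n", mean_ratio(xs, ys, 3));
            } catch (const std::exception &e) {
                // with no handler the program would abort (fail early and fast);
                // here there is a "user" to tell, so we do better than a crash
                std::fprintf(stderr, "failed: %s\n", e.what());
                return 1;
            }
            return 0;
        }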
  18. Null-pointer errors are characteristic of C variants, which use
    1970s-era typechecking. Modern typesafe languages eliminate this entire
    class of bugs at compile time, basically by treating "foo*, might be
    NULL" as a different type from "foo*, known to be non-NULL". In Mythryl
    (which I happen to maintain... :) the distinction is between the types
    Null_Or(Foo) and Foo. (Null_Or() is a type constructor, another feature
    of modern languages based on Hindley-Milner-Damas type inference and
    type checking.) Dereferencing a pointer is not allowed unless it is known
    to be non-null.

    In a language with this sort of typechecking, returning NULL pointers is actually safer than throwing an exception, in general: It is easy to forget to catch the exception at the appropriate leaf in the code, but the typechecker guarantees that the leaf has to check for NULL before dereferencing.

    This might sound clumsy and intrusive, but actually it works very
    smoothly. The type inference means that one rarely has to actually
    specify types except at compilation-unit interfaces (the code looks more
    like Ruby than Java, due to the pervasive lack of type declarations), and
    the overwhelming majority of pointers are guaranteed by the typechecker
    to be non-NULL, so it is fairly rare to have to explicitly check for NULL
    -- it typically happens when calling a library routine that may fail due
    to filesystem issues (permission, missing file) or such.

    I've been programming in Mythryl pretty intensively for about ten years
    now, and I must say in all that time I've never seen a null pointer bug,
    and I haven't missed the experience one bit. :)

    This technology is slowly trickling down to legacy languages like Java.
    See for example
    http://help.eclipse.org/juno/index....oc.user/tasks/task-using_null_annotations.htm.
    So even if your installed base prevents you from upgrading to a modern
    language, there is still hope!
     
    protherojeff, Feb 14, 2014
    #58
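
    A rough C++ analogue of the Null_Or(Foo)-vs-Foo distinction (a sketch of
    the idea, not Mythryl's actual semantics; Maybe and lookup_user are
    invented names): the only way at the value is to supply both the present
    and absent cases, so "forgot to check for NULL" cannot compile.

        #include <cstdio>
        #include <string>
        #include <utility>

        template <typename T>
        class Maybe          // plays the role of Null_Or(T); a bare T plays Foo
        {
            bool has_;
            T value_;
        public:
            Maybe() : has_(false), value_() {}
            explicit Maybe(T v) : has_(true), value_(std::move(v)) {}

            // deliberately no get() or operator*: callers must pattern-match
            template <typename Some, typename None>
            void match(Some some, None none) const
            {
                if (has_) some(value_); else none();
            }
        };

        Maybe<std::string> lookup_user(int id)   // may fail, like a filesystem call
        {
            if (id == 42)
                return Maybe<std::string>("arthur");
            return Maybe<std::string>();         // the typed stand-in for NULL
        }

        int main()
        {
            lookup_user(7).match(
                [](const std::string &name) { std::printf("found %s\n", name.c_str()); },
                []()                        { std::printf("no such user\n"); });
            return 0;
        }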
  19. Hmm. I have written a number of simple tree processing routines
    in Java, and usually check for null at the top, instead of before
    each recursive call. For one, it means one if instead of two,
    and usually in a simpler place. That means more recursion depth,
    though.
    Well, Java won't turn off the null object test, so you will always
    get the exception if you miss.

    -- glen
     
    glen herrmannsfeldt, Feb 14, 2014
    #59
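
    The same pattern in C++ terms (count_nodes is an invented example): one
    test at the top instead of a test before each recursive call, at the cost
    of one extra stack frame per null child.

        struct Node { Node *left; Node *right; };

        int count_nodes(const Node *n)
        {
            if (n == nullptr)   // the single check, at the top
                return 0;
            return 1 + count_nodes(n->left) + count_nodes(n->right);
        }

        int main()
        {
            Node leaf = { nullptr, nullptr };
            Node root = { &leaf, nullptr };
            return count_nodes(&root) == 2 ? 0 : 1;
        }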
  20. In C++ a reference is known to be non-null.
    But in some ways that's a bit of a nuisance, because references always
    have to be initialized at the point they come into scope.
    So we have

    class Node
    {
        Node &link1;
        Node &link2;

        Node()                      // constructor: both references must be bound here
          : link1( /* what do we put here? */ ),
            link2( /* same problem */ )
        {
        }
    };

    If you're not careful you end up creating a dummy node, which is
    effectively a null pointer, except that the system inherently cannot
    catch the bug if you try to use it.
     
    Malcolm McLean, Feb 14, 2014
    #60
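
    One conventional way out of that dilemma (a sketch, not the only design):
    keep references for links that must exist and plain pointers for links
    that are genuinely optional, so absence is spelled nullptr and checked
    explicitly instead of smuggled in as a dummy node.

        struct Node
        {
            Node *link1;    // nullptr means "no child", and is checked explicitly
            Node *link2;
            Node() : link1(nullptr), link2(nullptr) {}   // no dummy object needed
        };

        int depth(const Node *n)
        {
            if (n == nullptr)   // the absence case is visible, not a fake node
                return 0;
            int d1 = depth(n->link1), d2 = depth(n->link2);
            return 1 + (d1 > d2 ? d1 : d2);
        }

        int main()
        {
            Node root, child;
            root.link1 = &child;
            return depth(&root) == 2 ? 0 : 1;
        }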
