Re: Class in another file

Discussion in 'Java' started by Robert Maas, see http://tinyurl.com/uh3t, May 14, 2005.

  1. > Newsgroups: comp.lang.java
    (Added comp.lang.java.programmer because comp.lang.java is not valid.)
    > From: "Ted Dunning" <>
    > I should point out that using static methods is often an indication
    > of poor design.


    I respectfully disagree with your generalization. If you are defining
    a new data structure, different from anything already available in the
    Java API, then it's appropriate to have one or more constructors to
    create such structures and one or more methods to provide access or
    mutation upon such structures. But if you are merely defining methods
    that work with already-existing data structures and/or primitive data
    types, that were defined in classes whose source you are not allowed to
    modify (and might not even be able to see), then it is impossible for
    you to make instance methods for processing those objects, so you have
    no choice but to use static methods, which are completely appropriate
    for such use.

    Note: You might still choose to sub-class the API class, not define any
    new data members if you don't need any, but do define additional
    instance methods that are available only for types of your sub-class.
    One problem with that tactic is that you can't directly apply your
    methods to objects of the base class, as you can for static methods.
    You can't even down-cast objects of the base class to apply your
    derived-class instance methods to them. You need to construct objects
    *originally* of your derived class, then you can apply methods of both
    base and derived classes. Or if you are given an object of the base
    class you need to pick apart all the data members and re-build an
    object of your derived class, a royal pain. Better to just use a static
    method which can be directly applied to base-class objects.

    What indicates poor design is using an inappropriate type of method,
    static method when instance method is more appropriate, or vice versa.
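    A minimal sketch of the point above (class and method names are
    hypothetical): a static utility method applies directly to objects
    handed to us by someone else's code, where a subclass with extra
    instance methods could not be applied without rebuilding the object.

    ```java
    import java.util.Arrays;
    import java.util.List;

    // A hypothetical static utility: works directly on any List,
    // including lists we did not construct ourselves.
    class ListUtil {
        static int sumOfLengths(List words) {
            int total = 0;
            for (Object w : words) total += ((String) w).length();
            return total;
        }
    }

    class StaticVsSubclass {
        public static void main(String[] args) {
            // Someone else's code hands us a plain List. A subclass of
            // ArrayList with extra instance methods could not be applied
            // here (no down-cast will succeed), but the static method can.
            List given = Arrays.asList(new String[] { "foo", "quux" });
            System.out.println(ListUtil.sumOfLengths(given)); // prints 7
        }
    }
    ```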
     
    Robert Maas, see http://tinyurl.com/uh3t, May 14, 2005
    #1

  2. Ross Bamford Guest

    On Sat, 2005-05-14 at 01:17 -0700, Robert Maas, see
    http://tinyurl.com/uh3t wrote:
    > > From: "Ted Dunning" <>
    > > I should point out that using static methods is often an indication
    > > of poor design.
    >
    > I respectfully disagree with your generalization. [...]
    > Better to just use a static
    > method which can be directly applied to base-class objects.


    To quote someone (far) smarter than I, "A simple definition of object
    oriented design is that it is a technique that focuses design on data,
    and on the interfaces to it. To make an analogy with carpentry, an
    'Object Oriented' carpenter would be mostly concerned with the chair he
    was building, and secondarily with the tools used to make it; a 'non-OO'
    carpenter would think primarily of his tools. This is also the mechanism
    for defining how objects 'plug & play'"
    (James Gosling in the early Oak docs).

    The design you're advocating is a classic example of being too involved
    in the tools (classes) rather than simply using them to do the job. I
    can see where you're coming from, but *trust me* there's no mileage in
    it.

    Some pointers to things you might find interesting given some of your
    points (google for 'em):

    1) Class factories (and the Singleton pattern) - the exception to the
    rule.

    2) Dependency Injection (with particular reference to decoupling
    implementations from interfaces).

    3) Antipatterns: Functional Decomposition, Abstraction Inversion, Big
    Ball of Mud, etc.


    Cheers,
    Ross

    >
    > What indicates poor design is using an inappropriate type of method,
    > static method when instance method is more appropriate, or vice versa.

    --
    [Ross A. Bamford] [ross AT the.website.domain]
    Roscopeco Open Tech ++ Open Source + Java + Apache + CMF
    http://www.roscopec0.f9.co.uk/ + in
     
    Ross Bamford, May 14, 2005
    #2

  3. > From: Ross Bamford <>
    > To quote someone (far) smarter than I, "A simple definition of object
    > oriented design is that it is a technique that focuses design on
    > data, and on the interfaces to it.


    By that definition, the very original LISP, long before it had strings
    and keywords and hashtables and arrays etc., back when it had only four
    primitive data types: integers, floats, symbols, and standard pairs
    (car-cdr pairs, called CONSes in jargon, but I consider that jargon to
    be begging the question what the thing is you're talking about, so
    please allow my new and better term), was OOP. The typical thing a
    programmer did was: (1) design a way of using a structure of standard
    pairs to emulate the mathematical structure you had in your mind, (2)
    write functions to build such an object starting from either a
    different but related object (i.e. what's now called a "constructor
    with parameter") or from empty collection (i.e. what's now called a
    "default constructor") and then add items to it, and functions to
    perform modifications upon it, and functions to perform lookups or
    searches etc. upon it, and functions to read out its contents for nice
    printing or conversion to an older kind of object, and then (3) write
    all code regarding such a simulated data type using that set of
    constructor/mutator/accessor/printer/etc. functions.

    There was no need to artificially hide the innards. You documented the
    standard functions for dealing with your new data structure, and if
    anybody bypassed your advertised functions to directly diddle the
    innards, that was their problem or the boss's problem.

    There was no need to change the compiler to treat the new kind of
    structure as if it were a primitive type. There was no need to use a
    different syntax (such as Java's object.method(otherargs)), static
    function calls sufficed. There was no need to implement polymorphism in
    the language itself. If you really wanted a function to be generic, it
    was good enough to dispatch on the type of argument passed to it. One
    common convention was to emulate new data types by having the CAR of
    the standard-pair be an identifier (symbol) identifying the type of
    emulated object and the CDR be the details of the structure, so
    dispatching on the CAR was trivial to code.
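    The CAR-tag dispatch just described has a rough Java analog, sketched
    below; the names are hypothetical and the example exists only to show
    the mechanism of dispatching by hand on a type tag stored in the first
    slot of a pair.

    ```java
    // The tag plays the role of the CAR (identifies the emulated type);
    // the data plays the role of the CDR (the details of the structure).
    class Tagged {
        final String tag;
        final Object data;
        Tagged(String tag, Object data) { this.tag = tag; this.data = data; }
    }

    class Dispatcher {
        // A "generic" operation: dispatch on the tag by hand, as the
        // early-Lisp convention did on the CAR.
        static String describe(Tagged t) {
            if (t.tag.equals("point")) return "a point: " + t.data;
            if (t.tag.equals("label")) return "a label: " + t.data;
            return "unknown: " + t.data;
        }
    }
    ```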

    Sure, that's not as nice as true parameter overloading that Common Lisp
    and Java now have, where you can define the different versions of
    static functions separately and let the compiler pick which one to
    generate code for, and not as nice as runtime polymorphism which Java
    has on one parameter (the instance before the dot in that special
    instance.method(otherargs) notation) and Common Lisp has on *all*
    parameters of generic functions, but still it was OOP per the
    definition you quoted.

    I'm sure your quoted definition is seriously different from the
    standard definitions used today. I'm curious whom you got it from.

    > To make an analogy with carpentry, an 'Object Oriented' carpenter
    > would be mostly concerned with the chair he was building, and
    > secondarily with the tools used to make it; a 'non-OO' carpenter
    > would think primarily of his tools.


    If *all* the tools the carpenter ever needed already existed available
    in stores for purchase, and all were cheap enough he could afford them,
    then of course he'd simply figure out which tools he needed that he
    didn't already have, buy those, and from that point onward he'd just
    use the tools he had, maybe learn better skills as he practiced with
    newly purchased tools he never used before, but over time use of those
    tools would seem natural.

    But what if the tools he already had plus the tools available in stores
    weren't really sufficient for his needs? What if it'd be a royal pain
    to try to make this special kind of chair with only the pre-existing
    tools, but he had an idea for a brand-new tool that would make the
    construction of this new kind of chair much better (easier, less
    hassle, better result)? If he knew how to make tools, he might first
    make a new tool, and then start making the chair using that new tool
    plus all the old tools.

    In Lisp it's easy to make tools in virtually every case, and in Java
    it's easy to make tools in many cases. (The primary place where Lisp
    beats Java is when needing a new print-representation for a new data
    structure that you plan to write out and then work with manually in a
    text editor and then read the edited print-representation back in. In
    any language, printing is trivial, but parsing the print representation
    back in is nontrivial in any language except Lisp, except if you use
    Java's serialization where then you find it impossible to perform a
    manual edit without trashing your data structure.)

    > The design you're advocating is a classic example of being too
    > involved in the tools (classes) rather than simply using them to do
    > the job. I can see where you're coming from, but *trust me* there's
    > no mileage in it.


    Huh? All I did was disagree with somebody who basically implied that
    static methods are a sign of poor design, and I indicated some times
    you have no reasonable choice except using static methods (when
    writing a method to perform data processing on objects that are instances
    of class(es) that are already set in stone so you can't get into the
    source of that/those class(es) to add new instance methods there, or if
    you're defining new methods that act on primitive types for which
    *nobody* can possibly add new instance methods because there's no such
    thing as instance methods for them).

    If you have such a task you must perform, what would you personally
    choose instead of static methods to accomplish that task? Let's take a
    specific example: Suppose you need to compute the checksum of a string
    in a new way. Would you really define a new class of checksummable
    strings, which was just a wrapper on strings, so anybody using your
    package would have to first call a constructor:
    MyChecksummableString cs = new MyChecksummableString(s); // s is a String
    and then call the instance method:
    int cksm = cs.computeChecksum();
    instead of just calling a static method:
    int cksm = MyStringChecksummer.computeChecksum(s); // s is a String
    Or what would you propose instead of either?
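    The two calling styles just contrasted, made concrete with a
    placeholder checksum (summing char codes; the actual algorithm is
    beside the point here, and both classes are hypothetical):

    ```java
    // The static-method style: applies directly to any String.
    class MyStringChecksummer {
        static int computeChecksum(String s) {
            int sum = 0;
            for (int i = 0; i < s.length(); i++) sum += s.charAt(i);
            return sum;
        }
    }

    // The wrapper-class style: callers must construct a wrapper first,
    // then call an instance method, to get the identical result.
    class MyChecksummableString {
        private final String s;
        MyChecksummableString(String s) { this.s = s; }
        int computeChecksum() { return MyStringChecksummer.computeChecksum(s); }
    }
    ```

    Both produce the same number; the wrapper only adds an extra
    construction step for every string a caller wants to checksum.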

    How exactly is disagreeing about whether to always use instance methods
    or sometimes use static methods instead, being *too* worried about
    tools? Surely you must pick an appropriate tool before you can use that
    tool? If a carpenter didn't care what tool he was using for a given
    task, if he *never* paid a moment's thought to appropriate tools, he
    might use a hammer claw to try to turn a screw, and might use a chisel
    as a way to carry nails (carefully balanced on the flat side) from site
    to site, etc. If somebody said "never use a hammer, always use a
    screwdriver instead, it's a symptom of bad work to ever use a hammer",
    do you think he should follow that advice, or maybe he should contest
    it and point out when a hammer is more appropriate than a screwdriver?
    Do you really think that contesting the never-hammer advice means he's
    "too concerned about tools"?

    > Class factories (and the Singleton pattern)


    I already know about both:
    (1) When you can't know at compile time what specific constructor you
    will need to call, it varies with circumstances determinable only at
    runtime, so you use a static method (usually) which dispatches to the
    appropriate constructor at runtime. This works if there's some
    super-class or interface that includes all the possible kinds of
    objects you might need to construct. Is that essentially correct?
    (2) If you absolutely must never make more than one instance of a
    particular class of object, but it's expensive to compute so you don't
    want to compute it at class-load time if you're never going to use it.
    So you have a static variable that is null if you haven't yet made the
    object, and is the object if you've made it already, and you have a
    static method that checks that variable and calls the private
    constructor if and only if it's not already made. Right?
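    That lazy-initialization description can be sketched as follows (the
    class name is hypothetical; the counter exists only to make the
    "constructed at most once, and only on demand" property visible):

    ```java
    class ExpensiveThing {
        // Null until first use: the "haven't made it yet" state.
        private static ExpensiveThing instance = null;
        static int constructions = 0; // for illustration only

        // Private constructor: nobody outside can make extra instances.
        private ExpensiveThing() { constructions++; }

        // The static accessor constructs the single instance on first
        // call, and returns the same object ever after.
        static synchronized ExpensiveThing getInstance() {
            if (instance == null) instance = new ExpensiveThing();
            return instance;
        }
    }
    ```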

    So if I was correct in both of those, why did you think I didn't know
    about them? (To quote Commodore Decker, played by William Windom, in
    the original Star Trek episode "The Doomsday Machine", when Decker said
    he sent his whole crew to the third planet, and Captain Kirk said there
    is no third planet any more, "DON'T YOU THINK I ALREADY KNOW THAT?")

    > the exception to the rule.


    The *only* exception, no other exception? Bullshit! You should know
    better than that. Take a look at:
    http://java.sun.com/j2se/1.3/docs/api/java/lang/Math.html
    static double abs(double a)
    static float abs(float a)
    static int abs(int a)
    static long abs(long a)
    static double acos(double a)
    static double asin(double a)
    static double atan(double a)
    static double atan2(double a, double b)
    static double ceil(double a)
    static double cos(double a)
    static double exp(double a)
    static double floor(double a)
    static double IEEEremainder(double f1, double f2)
    static double log(double a)
    static double max(double a, double b)
    static float max(float a, float b)
    static int max(int a, int b)
    static long max(long a, long b)
    static double min(double a, double b)
    static float min(float a, float b)
    static int min(int a, int b)
    static long min(long a, long b)
    static double pow(double a, double b)
    static double random()
    static double rint(double a)
    static long round(double a)
    static int round(float a)
    static double sin(double a)
    static double sqrt(double a)
    static double tan(double a)
    static double toDegrees(double angrad)
    static double toRadians(double angdeg)
    I count 32 exceptions to the "static is usually bad design" there, and
    not one of them fits the factory or singleton pattern you say is *the*
    exception i.e. the *only* exception.

    > 2) Dependency Injection (with particular reference to decoupling
    > implementations from interfaces).


    I haven't heard of that one before. I'll look it up ...
    http://www.martinfowler.com/articles/injection.html
    Part of it simply evades the main point that what we really want is
    co-routines between supplier and consumer or event-maker and
    event-handler which are running on separate threads, but most
    programming languages (including Java) don't support the building of
    co-routines, except in the special case of serial I/O byte streams
    (such as Unix pipe, TCP stream, Unix FIFO, etc.) where you have to
    serialize and deserialize the data making such methods inappropriate
    for handling GUI events for example. The only language I've ever used
    that truly supported co-routines was assembly language, and indeed I
    implemented co-routines a few times way back when I did that sort of
    thing.

    Without co-routines, one of the two routines must call the other, and
    the whole organization of the program changes depending on which is the
    caller and which is the callee. For GUI of course, because actions are
    asynchronous, with the GUI component, not the business logic,
    determining
    which event will occur in sequence, you have no choice but to have the
    GUI component call the business logic, so the business logic must be
    written as a state machine where events cause state changes, and the
    particular state at any moment determines which GUI components are
    active and/or visible and/or expanded vs. terse etc. So anyway I
    already know about that.
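    The state-machine organization described above might look like the
    sketch below (a hypothetical wizard-style dialog; the GUI layer, not
    shown, would call handle() once for each asynchronous event, and the
    current state decides what happens):

    ```java
    class Wizard {
        enum State { START, FORM, DONE }
        State state = State.START;

        // Called by the GUI layer for each event; the business logic
        // never calls the GUI, it only reacts and changes state.
        void handle(String event) {
            switch (state) {
                case START: if (event.equals("next"))   state = State.FORM; break;
                case FORM:  if (event.equals("submit")) state = State.DONE; break;
                case DONE:  break; // terminal: further events are ignored
            }
        }
    }
    ```

    The state field is what would drive which components are active,
    visible, expanded vs. terse, and so on.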

    But then the article gets into constructors for multi-level dataflow
    structures, such as a file producing formatted data which passes
    through a parser to yield internal structured data which then passes
    through a filter which eliminates non-matching records and finally the
    remaining records are then vended to the consumer. This again is
    nothing new to me. Way back in the late 1970's, using MacLisp, I
    noticed a feature called something like "FileArray", which was just
    like a normal I/O stream except instead of reading directly from a file
    or writing directly to a file using built-in primitives READ or WRITE,
    it allowed the application programmer to construct his own kind of
    pseudo-I/O stream where he defined his own methods for getting a byte
    or sending a byte, and then the FileArray wrapper called those instead
    of calling normal I/O in the process of reading or writing data. But
    FileArray supported only one level of such user definition of how a
    stream works. To make a two-level FileArray, for example the kind of
    thing discussed in the article, one would have to first explicitly
    create the open bottom-level FileArray, defining the readbyte method
    for it, and then create a functional closure containing the
    already-created first-level FileArray as the value of a lexical
    (instance) variable within the second-level method, and then the
    FileArray you get from that could actually be used by the program. It
    was a slight mess to get all the code correct, and so I devised a
    notation for doing the whole thing by a single call using a LIST (daisy
    chain of CDR-connected standard pairs i.e. CONS cells) that specified
    the various levels of processing in the pipeline.

    I notice the difference between constructor injection and setter
    injection is simply the difference in philosophy between PITA (Prefer
    Initialization To Assignment), i.e. do all the initialization in the
    constructor, so upon return from the constructor you have an object
    already set up in a valid state ready to use, vs. JavaBeans where you
    have only a no-arg constructor and you must use explicit setter methods
    for all data members before the object will be ready to actually use
    for anything, in order that XML/JSP syntax is sufficient to create a
    JavaBean simply by declaring the bean (which in effect calls the no-arg
    constructor) and then make it suitable for use (by calling setter
    methods). So who wins the debate, XML/JSP, or PITA?
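    The two injection styles just contrasted, side by side (interface and
    class names are hypothetical, loosely echoing the example in Fowler's
    article):

    ```java
    interface Finder { String find(String key); }

    class FinderImpl implements Finder {
        public String find(String key) { return "result for " + key; }
    }

    // Constructor injection ("PITA" style): the object is in a valid
    // state, ready to use, the moment the constructor returns.
    class Lister1 {
        private final Finder finder;
        Lister1(Finder finder) { this.finder = finder; }
        String list(String key) { return finder.find(key); }
    }

    // Setter injection (JavaBeans style): no-arg constructor, then the
    // setter must be called before the object is actually usable.
    class Lister2 {
        private Finder finder;
        Lister2() {}
        void setFinder(Finder finder) { this.finder = finder; }
        String list(String key) { return finder.find(key); }
    }
    ```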

    Note that using an XML configuration file to specify each individual
    decision in hooking up the various levels is similar to my s-expression
    spec for nested FileArrays. The major difference, other than notational
    of s-expression vs. XML, is that I put my config spec as a literal
    constant in the application program itself, which is practical in Lisp
    but not in Java because only Lisp supports literal nested lists. I.e.
    in Lisp you can say:
    (setq config '(("filter" "Producer" "Stephen Spielberg")
                   ("delim-parser" ":")
                   ("file" "movies.txt")))
    whereas in Java it'd be a royal pain to build an equivalent structure
    like this:
    List config = new ArrayList(3);
    { List c1 = new ArrayList(3);
      c1.add("filter");
      c1.add("Producer");
      c1.add("Stephen Spielberg");
      config.add(c1); }
    { List c2 = new ArrayList(2);
      c2.add("delim-parser");
      c2.add(":");
      config.add(c2); }
    { List c3 = new ArrayList(2);
      c3.add("file");
      c3.add("movies.txt");
      config.add(c3); }
    so parsing an XML file is done instead.
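    For what it's worth, java.util.Arrays.asList (in the standard library
    since Java 1.2) shortens the hand-built version considerably, at the
    cost of fixed-size sublists, which is fine for a config literal:

    ```java
    import java.util.Arrays;
    import java.util.List;

    class Config {
        // The same nested structure as the hand-built ArrayList version,
        // in one expression per level.
        static List build() {
            return Arrays.asList(new Object[] {
                Arrays.asList(new Object[] { "filter", "Producer", "Stephen Spielberg" }),
                Arrays.asList(new Object[] { "delim-parser", ":" }),
                Arrays.asList(new Object[] { "file", "movies.txt" })
            });
        }
    }
    ```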

    By the way, how do you like my use of nested {blocks} to force
    indentation of the little sections that are added to the main object? I
    started doing that when building GUI structures, such as when adding
    various components to various JPanels and adding each of the JPanels to
    the content frame of the toplevel JFrame, to make the code easier to
    read, and it seems appropriate in this horrendous example too.

    So anyway, why did you think I needed to learn that particular jargon
    for a topic I already knew about in general, or why did you think I
    needed to learn these kinds of specifics at this time?

    > 3) Antipatterns: Functional Decomposition, Abstraction Inversion, Big
    > Ball of Mud, etc.


    Abstraction Inversion was already covered above.
    I'll take a quick look at the other two now:

    Elementary Method "Functional Decomposition" (FCTD)
    http://www.informatik.uni-bremen.de/gdpa/methods/m-fctd.htm
    That seems to be nothing more than top-down analysis combined with
    making new abstractions anywhere appropriate. Did I miss anything?
    Lisp is the only language I know of that makes FCTD really easy and
    powerful. In Java you can create new kinds of objects but you can't
    create new kinds of syntax via structured macros like you can in Lisp.
    In Lisp, for example, you already have multi-level literals of
    built-in object, usually trees of standard pairs (CONS cells), but it's
    easy to provide a mapping from an easy-to-edit structure to another
    structure that is semantically the same but easier for the program to
    traverse. For example, you could list as a literal, or equivalently
    as the content of a configuration file, simple list of the names
    of nodes in a graph and the neighbors of each (per directed edges).
    Then a translator could build a hash table that made it easy to find
    each node and list of links from it, avoiding the need to sequentially
    search the originally input data, or if hash lookups are slowing down
    your program you could translate the data to a complex cross-linked
    structure that made it easy to jump directly from one node to any of
    its neighbors without needing to perform any lookups of any kind. But
    in Lisp only, not Java, you could generate a whole new syntax for
    specifying the function/method calls typically done, so you can express
    tasks in a more natural language to the users, so they can
    interactively make queries without having to learn SQL or Java source
    code or even XML/JSP. If the language has parentheses around atoms or
    lists of atoms, and the first element in the toplevel list is the name
    of the kind of task to be performed, then Lisp macros can convert such
    input into executable code, so somebody who knows just the
    parenthesized list notation but nothing about Lisp function names can
    nevertheless build scripts that compile into executable
    functions/methods/programs.
    (login) (power on) (manufacture cars until time (5 PM)) (power off) (logout)
    That could be the toplevel script for a robotic auto-assembly factory.
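    The node-and-neighbors translation described above can be sketched in
    Java (names are hypothetical): a flat edge list, easy to edit or read
    from a config file, is translated once into a hash table of adjacency
    lists, so no sequential search of the input is needed at query time.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class GraphBuilder {
        // Input: pairs of (from, to) naming directed edges.
        // Output: node name -> list of neighbor names.
        static Map build(String[][] edges) {
            Map adjacency = new HashMap();
            for (int i = 0; i < edges.length; i++) {
                String from = edges[i][0], to = edges[i][1];
                List neighbors = (List) adjacency.get(from);
                if (neighbors == null) {
                    neighbors = new ArrayList();
                    adjacency.put(from, neighbors);
                }
                neighbors.add(to);
            }
            return adjacency;
        }
    }
    ```

    Replacing the hash lookups with direct cross-links, as the text
    suggests for the speed-critical case, would be a further translation
    step over this map.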

    Re: Antipattern Big Ball of Mud
    http://www.antipatterns.com/dev_cat.htm
    Software development is a chaotic activity, therefore
    the implemented structure of systems tends to stray from the planned
    structure as determined by the architecture, analysis and design.

    I think that's a gross understatement. When writing code for a brand new
    task, where the programmer doesn't know in advance whether it's even
    possible, and if possible then which tools are most appropriate for
    accomplishing the task, and given a tool how exactly it works if it's a
    tool the programmer has never before used (after all even the J2SE API
    is really large and hardly anyone has personally used every last bit of
    it), the best route toward implementation of the required task may be
    to first identify the key potential roadblocks and write test code to
    verify that it is actually possible to get past those roadblocks, then
    to write test code to make sure all the major tools function as
    expected or if not then to learn how they actually do perform one way
    despite the JavaDocs making it sound like they work slightly
    differently, and also to get experience with the nuances of their
    detailed functioning in the midst of a complex situation. Sometime
    along that line of experiments, it becomes easier to start working with
    live data, writing code starting from input of raw data through parsers
    to processing the parsed data and finally producing desired final
    output, rather than continuing to write an independent set of test
    rigs all using test-rig-canned toy data. So through all this process
    there *is* no overall plan because not enough is known initially to
    have any clear idea which of many potential plans is best. The only
    meta-plan is bottom-up input-to-output tool-building. After all the
    tools are built up to the point where raw input can be parsed and
    processed and formatted to make correct output, so the program is
    basically working already, *then* the programmer finally has enough
    knowledge of what works and what doesn't work or is not worth trying,
    and *then* it's possible to think about refactoring the code to use the
    same basic tools but organized in a new more flexible way. (And in some
    cases the design that evolved from bottom-up input-to-output
    tool-building already is totally good enough and no major refactoring
    is needed, although usually refactoring of some of the medium-size
    tools will help make the code easier to comprehend, but there's a
    tradeoff involved in taking a chance of breaking something that was
    already working if it's not really in need of refactoring to make it
    easier to add new features.)

    Regarding spaghetti code: Bottom-up tool building generally does not
    produce spaghetti code, except within the innards of an individual tool
    implementation that calls too-low-level tools or which doesn't use an
    appropriate abstraction for dealing with its level in the hierarchy.
    So no largescale refactoring of the application is usually needed.
    Only what might be called "peephole refactoring" would be needed.

    Hmm, this article has a fine list of bad features of software, but none
    of them is what you mentioned. So I searched for that phrase, and it
    turned up only in a reference to "Foote 97", but the number 97 doesn't
    appear anywhere else in the WebPage. I clicked the Up link
    twice but couldn't find any other mention of 97. Back to Google ...

    OK, indirect link got me to:
    http://www.laputan.org/mud/mud.html#BigBallOfMud

    A shantytown is a place where hardly anybody has any money for
    purchasing anything in advance, and even if somebody purchased
    something they wouldn't have any way to protect it from theft or
    destruction, so there's no point spending effort on infrastructure
    instead of on immediate value.

    That's not a good metaphor for a business which does have some
    available income (although perhaps not much), and does have relatively
    secure buildings where purchased products could be stored between uses,
    and does have the legal right to own anything it purchases or develops,
    but despite all that it chooses not to invest in infrastructure.

    Shantytown is a better metaphor for my personal case, where I have no
    money even for food, and I have to make do with what I already have or
    what I can make myself. Until my modem on my laptop stopped working, I
    might have downloaded free stuff, as I already did with BeanShell a
    couple weeks before the modem died, and as I was trying with abyss and
    the required glibc when it actually died.

    Data structures may be haphazardly constructed,
    or even next to non-existent.

    If a useful data structure is obvious on a few minutes' thought, why
    should the decision be prolonged for weeks instead of immediately
    chosen and implemented so that you can put that tool behind you and
    start using the tool to build higher-up tools? Why the derogatory word
    "haphazard" for anything that takes less than a month to decide upon
    the design and implement and test? I can typically design and implement
    and unit-test several new tools per workday, using either Lisp's usual
    ReadEvalPrint loop, or Java's BeanShell ParseEvalPrint loop. What's
    wrong with that?

    Everything talks to everything else.

    Nonsense!! Only a poor programmer works like that. Each function calls
    the tools it needs, passing parameters to them and getting results
    back, and occasionally setting a global as a side-effect (caching the
    result of a calculation), and referencing a global whenever that info
    is needed.

    Every shred of important state data may be global.

    Nonsense. On my bad days I use global variables for things that are set
    once and referenced from many places. On my good days I use fewer
    global variables that point to structures that contain groups of
    different global values, or I pass around a state parameter that points
    to such a group of related values. But most state data is either local
    or parameter/return or a field within a larger collection even when the
    toplevel pointer to the collection is global.

    Variable and function names might be uninformative, or even
    misleading.

    What kind of strawman is that author attacking? Do you have that same
    attitude, which is why you referred me to this topic? What does choice
    of names of variables have to do with the way modules connect together
    or how quickly the program was designed? With an appropriate
    programming language that doesn't keep getting in the way of
    programmer's efforts, and a tool-building approach by the programmer,
    rapid prototyping and agile development and TDD (Test-Driven
    Development) with unit testing can yield much better designs than the
    strawman described here.

    One thing that isn't the answer is rigid, totalitarian, top-down
    design.

    Well at least we agree here. Top-down analysis of the general way to
    solve the problem and provide the various "features" and fit them into
    a general scheme is good, keeping most of the details vague, something
    looser than pseudocode to describe the major data structures and
    operations upon them and relations between different structures within
    the program, and possibly to create a hierarchical list of the major
    tools that need to be created. But that's just to make a general
    framework for planning the bottom-up input-to-output tool building that
    will commence.

    Make it work. Make it right. Make it fast [Beck 1997].

    That's basically what I do, although with a good programming language I
    can usually make it mostly "right" right at the start while I'm still
    concentrating on making it work, and with efficient data structures
    chosen to fit the typical operations, it's already halfway decently
    fast from the start too.

    Anyway, the only place I recall seeing big inscrutable balls of mud is
    in BASIC programs and in Unix shell scripts and Perl that tries to
    outdo shell scripts. Maybe Fortran and assembly language programs were
    like that but it's been so long since I looked at them I don't
    remember.

    I'm cutting my reading a few pages after that point. I've spent too
    much time on this followup and I haven't gotten any feedback on whether
    I'm even reading the right stuff you wanted me to read. Time to
    post and get your feedback on my remarks about these topics.
     
    Robert Maas, see http://tinyurl.com/uh3t, Jun 21, 2005
    #3
