Cracking DES with C++ is faster than Java?

Discussion in 'C++' started by Julie, Apr 25, 2004.

  1. perry Guest

    actually, it's a bit of an illusion to argue that C and C++ produce faster
    implementations than Java. before the days of JIT (just-in-time)
    compilation, Java was primarily an interpreted language, so there was no
    argument there. however, with the introduction of JIT things changed.
    true, technically speaking C/C++ is faster, but only by a margin of less
    than 1%.

    check out:
    http://java.sun.com/products/hotspo...tspot_v1.4.1/Java_HSpot_WP_v1.4.1_1002_4.html

    http://java.sun.com/docs/books/tutorial/post1.0/preview/performance.html
    http://java.sun.com/developer/onlineTraining/Programming/JDCBook/perf2.html
    http://java.sun.com/developer/onlineTraining/Programming/JDCBook/perf2.html#jit
    http://java.sun.com/developer/onlineTraining/Programming/JDCBook/perfTech.html

    "The server VM contains an advanced adaptive compiler that supports many
    of the same types of optimizations performed by optimizing C++
    compilers, as well as some optimizations that can't be done by
    traditional compilers, such as aggressive inlining across virtual method
    invocations. This is a competitive and performance advantage over static
    compilers. Adaptive optimization technology is very flexible in its
    approach, and typically outperforms even advanced static analysis and
    compilation techniques."

    http://java.sun.com/products/hotspot/docs/whitepaper/Java_HotSpot_WP_Final_4_30_01.html

    i know, you're going to stick to your guns over the 1%. however, a
    performance difference at that level is typically insignificant.

    - perry
     
    perry, May 1, 2004
    #81

  2. Paul Schmidt Guest

    You need to look at the conditions of the time, though: you could hire a
    programmer for $5.00 an hour, while computer time cost over $1000 per
    machine second, so if wasting 400 hours of programmer time saved 5
    machine seconds you were ahead of the game.

    Today we look at different conditions: you can get a year of computer
    time for $1,000, but a programmer costs that for a week, so tools need
    to be programmer-efficient rather than machine-efficient. If you waste
    5 hours of machine time and save a week of programmer time, you're ahead
    of the game.

    Java becomes more programmer-efficient through 2 of the 3 Rs (reduce is
    the missing one): reuse and recycle. Because a class is an independent
    entity, you can use the same class over and over again, in different
    programs.

    I think the future will be more descriptive, in that a program will
    describe what an object needs to accomplish rather than how the object
    does it. The compiler will then figure out how to do that.

    Paul
     
    Paul Schmidt, May 2, 2004
    #82

  3. Stroustrup explicitly qualified the relationship between C and C++ with
    the statement "Except for minor details, C++ is a superset of the C
    programming language." Even read informally, that statement cannot be
    interpreted simply as "C++ is a superset of C".

    The intersection of C and C++ is not equal to C, but it is close
    enough to cause problems that are rarely noticed at compile time and
    manifest as hard-to-debug, run-time crashes. Notably, code in either
    language that relies heavily on sizeof (e.g., generic data
    structures), and C code that abuses goto in ways that violate C++
    scoping rules (typically machine-generated), can break if compiled
    using the "other" language.
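
    One well-known instance of the sizeof divergence is the type of a
    character literal (a sketch; the exact sizes are implementation-dependent):

```cpp
#include <cstddef>

// In C++ a character literal has type char, so sizeof('a') == 1.
// Compiled as C, the same expression has type int (commonly 4 bytes),
// so generic code that bakes sizeof('x') into buffer arithmetic can
// silently change meaning when moved between the two languages.
constexpr std::size_t char_literal_size = sizeof('a'); // 1 in C++, often 4 in C
constexpr std::size_t int_size          = sizeof(int);
```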

    George
    =============================================
    Send real email to GNEUNER2 at COMCAST o NET
     
    George Neuner, May 2, 2004
    #83
  4. Could you give an actual example of computer time costing over $1000
    per machine second? That would amount to over $86 _million_ per day
    or $31 _billion_ per year ---- quite a lucrative business for just
    one single computer!!!! And even if the computer would be used by
    paying customers only 10% of the time, it would still mean over $3
    billion per year --- several decades ago when money was worth much
    more than today!

    Also, remember that the computers of the late 60's were, by today's
    standards, quite slow. The amount of computation done during one CPU
    second on one of those machines could be duplicated by a human on a
    mechanical calculator in less than 10 hours. And if a programmer
    cost $5/hour, a human doing calculations could probably be obtained
    for $2.5/hour. So why waste over $1000 on computer time when the
    same amount of computations could be done by a human for less than
    $25 ?????

    There was software reuse before classes -- the subroutine was
    invented for that specific purpose: to wrap code into a package
    making it suitable for reuse in different programs.

    Also: in real life, classes aren't as independent as you describe
    here: most classes are dependent on other classes. And in extreme
    examples, trying to extract a class from a particular program for use
    in another program will force you to bring along a whole tree of
    classes, which can make moving that class to the new program
    infeasible.
    Like Prolog? It was considered "the future" in the 1980's .....

    --
     
    Paul Schlyter, May 2, 2004
    #84
  5. Yes ... your mileage may vary....
    True --- so let's change it to "The main reason COBOL is still used...."
    However the huge amount of legacy code is one very important reason.
    If that legacy code wasn't there, I don't think COBOL would be used
    very much. Fortran could still have a significant use though, due to
    its superiority in producing efficient machine code for heavy number
    crunching programs.
    I don't see much difference between the phrases "any kind of problem"
    and "all kinds of problems", except that the latter would indicate
    an attempt to actually try to solve all conceivable kinds of problems,
    while the former only recognizes the potential of doing so.

    --
     
    Paul Schlyter, May 2, 2004
    #85
  6. perry Guest

    Yes ... your mileage may vary....


    True --- so let's change it to "The main reason COBOL is still used...."



    "However the huge amount of legacy code is one very important reason.
    If that legacy code wasn't there, I don't think COBOL would be used
    very much. Fortran could still have a significant use though, due to
    its superiority in producing efficient machine code for heavy number
    crunching programs."

    we have to get out of the mindset that one language is one-size-fits-all.

    the universe is constantly expanding, and this expansion is constantly
    creating new and unique opportunities for growth. both early and modern
    computer language design are a reflection of this.

    further, what you are talking about now is commonly addressed using
    design patterns; one in particular is the wrapper, which allows legacy
    code to be "wrapped" inside another (typically more advanced)
    implementation in order to harness the best of both worlds without
    nullifying past efforts.
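
    the wrapper idea can be sketched like this in C++ (legacy_reverse is a
    made-up stand-in for an old routine, not a real API):

```cpp
#include <cstring>
#include <string>

// Hypothetical legacy routine (stand-in for old C-style code):
// reverses a NUL-terminated buffer in place.
void legacy_reverse(char *buf) {
    std::size_t n = std::strlen(buf);
    for (std::size_t i = 0; i < n / 2; ++i) {
        char t = buf[i];
        buf[i] = buf[n - 1 - i];
        buf[n - 1 - i] = t;
    }
}

// Wrapper: presents the legacy routine behind a modern interface,
// so new code never touches raw buffers directly while the old,
// proven implementation keeps doing the work.
class Reverser {
public:
    std::string reverse(const std::string &s) const {
        std::string copy = s;
        if (!copy.empty())
            legacy_reverse(&copy[0]);   // delegate to the legacy code
        return copy;
    }
};
```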

    you might sneer at COBOL and FORTRAN, but a great many people have
    accomplished great feats with these tools.... the piece of plastic
    in your back pocket would not be there except for them...

    - perry
     
    perry, May 2, 2004
    #86
  7. [snip]
    COBOL's big enemy is Visual BASIC. This would be a
    big surprise to people in the 1960s.

    Andrew Swallow
     
    Andrew Swallow, May 2, 2004
    #87
  8. Occasionally I'm actually amazed myself that BASIC didn't die many
    years ago......

    Btw should there ever be a COBOL with object oriented extensions to
    the language, the name of that object oriented COBOL would be:

    "Add one to COBOL" ...... :)

    --
     
    Paul Schlyter, May 2, 2004
    #88
    Do you mean the case when f is called by supplying the
    same actual argument to the two formal parameters?
    But then that's also a problem with Fortran, if I don't
    err. (The issue was the difference between these PLs.)

    M. K. Shen
     
    Mok-Kong Shen, May 2, 2004
    #89
  10. I don't understand in which sense is Visual BASIC an enemy
    of COBOL. COBOL has widespread use in certain commercial
    sectors, notably banking, where BASIC is barely used, if
    I don't err.

    As to 'object-oriented' COBOL, I happen to know the title of
    one book (which I have never seen though):

    Ned Chapin, Standard Object-Oriented Cobol

    M. K. Shen
     
    Mok-Kong Shen, May 2, 2004
    #90
  11. [snip]
    The transfer of data entry from punch cards to PCs
    has allowed Visual BASIC to take over as the main
    data processing language in new developments.

    In simple English, that is where the jobs are.

    Although Java is now trying to become the main
    user-friendly MMI language.

    Andrew Swallow
     
    Andrew Swallow, May 3, 2004
    #91
  12. Paul Schmidt Guest

    In trying to prove that my implementation of the theory was wrong, you
    missed the point of the theory. In the 1960's computer time was
    expensive and labour was cheap, so systems attempted to use as little
    computer time as possible. So if writing 400 lines of Fortran, COBOL or
    Assembler saved a few seconds of computer time, it was worth it. Today
    computers are cheap and labour is expensive, so new languages have to
    be oriented toward reducing labour resources at the expense of computer
    resources. Using massive libraries of precanned classes and reusing
    classes is a good way of reducing labour. The fact that the more
    general code may not be as machine-efficient is a small tradeoff.
    Subroutines only dealt with the code; you had to be very careful with
    the data. A lot of programs used global data, and it was common for one
    subroutine to step on another subroutine's data. C and Pascal
    allowed for local data, but still relied largely on global data. Objects
    cured this to a large extent: it's easier to black-box an object than
    to black-box a subroutine.
    You missed the point: you CAN write a class once and then use it over
    and over again in each new project that needs that kind of class; put
    it in a package or class library and just bolt in the library or
    package.

    Okay, so it's not a new idea, and previous implementations have failed.
    Objects were the same way: the first attempt to objectify C was
    Objective-C, and who uses it today? You don't see a big call for
    Smalltalk programmers either. Everybody seemed to like C++, and Java
    has been popular enough. We have objects, and we will eventually move
    away from low-level object handling to high-level object handling.

    Paul
     
    Paul Schmidt, May 3, 2004
    #92
  13. Jerry Coffin Guest

    (Paul Schlyter) wrote in message
    [ ... ]
    I'm not certain I agree, but at least I'm not absolutely certain this
    is wrong.
    I'd tend toward more or less the opposite: Fortran has little real
    advantage for most work. C99 (for one example) allows one to express
    the same concepts, but even without that, well written C++ meets or
    exceeds the standard set by Fortran.

    COBOL, OTOH, provides reasonable solutions for a fairly large class of
    problems that nearly no other language addresses as well. Keep in
    mind that COBOL was intended for use by people who are not primarily
    programmers, and that's still its primary use -- most people who write
    COBOL are business majors and such who rarely have more than a couple
    of classes in programming. The people I've talked to in that field
    seem to think that's the way things should be; they've nearly all
    tried to use real programmers to do the job, but have almost
    universally expressed disappointment in the results (or, often, lack
    thereof).

    Now I'm not sure they're entirely correct, but I'm hard put to
    completely ignore or discount their experiences either.

    [ ... ]
    As I'd interpret your original statement ("none best for any kind of
    problem") it means "there is no problem for which any of them is the
    best". I can hardly believe that's what you intended to say, but as
    it was worded, I can't figure out another interpretation for it
    either.
     
    Jerry Coffin, May 3, 2004
    #93
  14. No.
     
    Douglas A. Gwyn, May 3, 2004
    #94
  15. While thinking about it, I became aware that, for a DES cracker, it
    _may_ be possible to reduce that factor. What kills speed in Java are
    memory allocations (you _have_ to use "new" to allocate a small array
    of bytes, whereas in C or C++ you can often use a buffer on the stack,
    which is way faster, both for allocating and releasing) and array
    accesses (which are checked against the array length).

    A DES cracker (as opposed to a simple DES encryption engine) can be
    programmed without any array at all, thus suppressing both problems. The
    implementation would most likely use so-called "bitslice" techniques,
    where any data is spread over many variables (one bit per variable).
    Thus, S-boxes are no longer tables, but some "circuit", and bit
    permutations become "free" (it is a matter of routing data, solved at
    compilation and not at runtime). With the Java "long" type, 64 instances
    are performed in parallel. In bitslice representation, I/O becomes a
    real problem (you have to bitswap a lot) but a DES cracker does _not_
    perform I/O.
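
    The bitslice idea can be sketched in C++ terms (a toy gate, not an
    actual DES S-box): each 64-bit word carries one bit position of 64
    independent instances, so one boolean operation advances all 64
    candidate keys at once.

```cpp
#include <cstdint>

// Toy bitslice sketch (not a real DES S-box): a real bitslice cracker
// expresses each 6-to-4-bit S-box as a circuit of gates like this one,
// and bit permutations cost nothing at runtime because they are just a
// renaming of variables, resolved when the code is written/compiled.
uint64_t mini_sbox_bit(uint64_t a, uint64_t b, uint64_t c) {
    // one hypothetical output bit: out = (a AND b) XOR c,
    // evaluated for 64 instances in two machine operations
    return (a & b) ^ c;
}
```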

    So an optimized DES cracker in Java would look like a _big_ method with
    a plethora of local variables, no array access, no object, no memory
    allocation. It _may_ be as efficient as a C or C++ implementation,
    provided that the JIT compiler does not drown under the task (even C
    compilers have trouble handling a 55,000-line function with 10,000+ local
    variables -- try it !). Of course, it will be slow as hell on any JVM
    without a JIT (e.g., the Microsoft VM under Internet Explorer 5.5).

    Either way, the Java implementation will not be better than the C
    implementation.


    --Thomas Pornin
     
    Thomas Pornin, May 3, 2004
    #95
  16. Sure --- but still, computer time wasn't THAT expensive!!!! At $1000
    per CPU second, as you claimed, with the slow computers available
    back then, the computer business would have effectively killed itself
    at a very early stage, since hand computation by humans would then
    have been cost effective in comparison.

    $1 per CPU second is more reasonable -- and that's still a lot!

    (btw the word "computer" existed in the English language already
    100+ years ago, but with a different meaning: a human, hired to
    perform computations)
    I never argued against that. However, for the most CPU intensive
    programs, such as numerical weather prediction, it's still cost
    effective to devote programmer time to make the program more
    efficient. And sometimes it's not just a matter of cutting runtime
    costs, but the matter of being able to solve a particular problem at
    all or not. Admittedly, only a small fraction of all existing
    programs are of this kind.

    FORTRAN had data local to subroutines before C and Pascal even existed.
    Yes, this local data was static, but since FORTRAN explicitly disallowed
    recursion, that was not a problem.
    Sure, but still, software was reused before object orientation came
    into fashion....
    You can do the same with subroutines.....

    .....or some new programming paradigm will become popular, making
    objects obsolete. You never know what the future will bring....

    --
     
    Paul Schlyter, May 3, 2004
    #96
  17. Your view and my view are not contradicting one another. Indeed
    Fortran has little real advantage for most work, since most work isn't
    heavy number crunching.
    Unfortunately, there are few C99 implementations out there. Do
    you know of any C99 compiler available for supercomputers, for instance?
    :) .... try the standard problem of writing a subroutine to invert a
    matrix of arbitrary size. Fortran has had the ability to pass a
    2-dimensional array of arbitrary size to subroutines for decades. In
    C++ you cannot do that; you'll have to play games with pointers to
    achieve similar functionality. That's why I once wrote the amalloc()
    function (it's written in C89 but compilable in C++), freely
    available at http://www.snippets.org
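
    Those pointer games can be sketched like so (a hypothetical helper in
    the spirit of amalloc(), not the actual snippets.org code): a
    row-pointer array laid over one contiguous buffer, with both
    dimensions chosen at run time, so mat[i][j] indexing still works.

```cpp
#include <cstddef>
#include <vector>

// Run-time-sized 2-D array emulation: rows[i] points into one
// contiguous data buffer, so m[i][j] works for any r x c chosen
// at run time, much like a Fortran adjustable-size array argument.
struct Matrix2D {
    std::vector<double>  data;
    std::vector<double*> rows;
    Matrix2D(std::size_t r, std::size_t c) : data(r * c, 0.0), rows(r) {
        for (std::size_t i = 0; i < r; ++i)
            rows[i] = &data[i * c];
    }
    double* operator[](std::size_t i) { return rows[i]; }
};
```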
    In principle true, but not particularly relevant: the database
    functionality which is missing from most programming languages is
    instead achieved with a suitable library.
    The vision of COBOL was to enable the programmers to express their
    solutions in plain English. COBOL didn't reach quite that far, but it's
    still a quite "babbling" language in comparison to almost all other
    programming languages.
    I guess real programmers want real problems, or else they'll get
    bored. Try to put a very skilled engineer on an accounting job
    and you'll probably see similar results....
    I believe you --- and COBOL will most likely continue to be used
    for a long time.
    True, I didn't intend to say that -- with "any problem" I meant "any
    problem which could appear" and not "at least one problem"... but
    OK, English isn't my native language....

    --
     
    Paul Schlyter, May 3, 2004
    #97
  18. C doesn't have multidimensional arrays, but it does support
    arrays of arrays and other complex structures. Using these
    tools you get a *choice* of how to represent matrices,
    unlike the native Fortran facility where you're stuck with
    whatever the compiler has wired in.

    In C++ one would normally use a matrix class in order to be
    able to apply the standard operators, e.g. + and *.
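
    A minimal sketch of such a matrix class (illustrative names, with
    only operator+ shown; a real class would add *, etc.):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Wrap run-time-sized storage in a class so the standard operators
// apply, instead of relying on a built-in multidimensional array type.
class Matrix {
    std::size_t r_, c_;
    std::vector<double> v_;
public:
    Matrix(std::size_t r, std::size_t c) : r_(r), c_(c), v_(r * c, 0.0) {}
    double& operator()(std::size_t i, std::size_t j)       { return v_[i * c_ + j]; }
    double  operator()(std::size_t i, std::size_t j) const { return v_[i * c_ + j]; }
    Matrix operator+(const Matrix& o) const {
        assert(r_ == o.r_ && c_ == o.c_);   // shapes must match
        Matrix s(r_, c_);
        for (std::size_t k = 0; k < v_.size(); ++k)
            s.v_[k] = v_[k] + o.v_[k];
        return s;
    }
};
```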
     
    Douglas A. Gwyn, May 3, 2004
    #98
  19. Liwp Guest

    Someone posted the link below to this thread earlier. I'm guessing you
    did not read the article. For example, C has problems with optimizing
    pointers that result in problems similar to those Java has with array
    bounds checking. Also, GC improves memory locality, which reduces the
    number of cache misses and results in better performance. Then again,
    if you don't allocate anything dynamically you don't have to worry
    about that.

    http://www.idiom.com/~zilla/Computer/javaCbenchmark.html

    If you look at the benchmarks, Java goes from being 9 times slower to
    being 4 times faster than C. I think the only conclusion you can draw
    from the stats is that you can seriously muck things up with both Java
    and C unless you know how certain structures affect performance in
    relation to register allocation, memory access, and optimizations.
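
    The pointer-optimization problem can be sketched as follows (the
    function name is made up for illustration): because *out may overlap
    the input array, the compiler must be conservative about keeping the
    accumulator in a register, whereas two distinct Java array references
    cannot alias this way.

```cpp
#include <cstddef>

// Sketch of the pointer-aliasing problem: the compiler cannot assume
// that *out does not overlap a[0..n), so each "*out += a[i]" may force
// a memory re-read instead of accumulating in a register.
long sum_into(const long* a, std::size_t n, long* out) {
    *out = 0;
    for (std::size_t i = 0; i < n; ++i)
        *out += a[i];   // *out might alias some a[i]
    return *out;
}
```

    The aliasing is not just a missed optimization: passing &a[0] as out
    actually changes the result, which is exactly why the compiler must be
    careful.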
     
    Liwp, May 3, 2004
    #99
  20. From at least what I was told, programming in such
    business as banking continues to use COBOL and no
    chance was ever given to BASIC, whether new development
    or not, however.

    M. K. Shen
     
    Mok-Kong Shen, May 3, 2004
