Discussion in 'C++' started by 88888 Dihedral, Feb 28, 2012.

  1. After checking some C++ libs, I think now the problem is not the expensive DRAM
    or the HD measured in terabytes.

    The problem is keeping an executable's working set resident in the on-chip L2 cache during nontrivial program executions; that is what separates serious work from what hobbyists and novices produce.

    I regard cheap DRAM as the equivalent of the slow HD of the '90s.
    88888 Dihedral, Feb 28, 2012

  2. 88888 Dihedral

    Quint Rankid Guest

    It's not clear to me what you want to accomplish.

    Do you wish to reduce the size of your executables? I've heard that
    some compilers have an option for this. Maybe the linkers do too?

    Wouldn't whatever is in L2 cache get rolled out somewhere if another
    program needs it?
    Maybe you should look into some sort of semiconductor based HD? Or is
    that an oxymoron?

    Please clarify.
    Quint Rankid, Feb 28, 2012

  3. I have a nagging suspicion: Are you a bot trying to pass the Turing test?
    Juha Nieminen, Feb 28, 2012
  4. 88888 Dihedral

    Dombo Guest

    Op 28-Feb-12 21:01, Juha Nieminen schreef:
    My thoughts exactly.
    Dombo, Feb 28, 2012
  5. 88888 Dihedral

    Ian Collins Guest

    And failing!
    Ian Collins, Feb 28, 2012
  6. On Wednesday, February 29, 2012 at 3:38:39 (UTC+8), Quint Rankid wrote:
    OK, let's test programs that do repeated random reads and writes on data sized to fit in L1, then L2, then DRAM, and then the HD, first on currently mass-produced hardware.

    Then one can think about how to design programs as those parameters
    change over time.
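    The measurement described above can be sketched as a pointer-chasing microbenchmark: a chain of dependent random loads over buffers sized to each memory tier, so the time per hop approximates that tier's access latency. This is a minimal illustration, not anything from the thread; the tier sizes (16 KiB, 256 KiB, 64 MiB) are assumptions and should be adjusted to the actual cache sizes of the machine under test.

    ```cpp
    // Hypothetical latency sketch: random pointer chasing through buffers
    // sized for L1, L2, and DRAM. Each load depends on the previous one,
    // which defeats hardware prefetching and exposes per-tier latency.
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <utility>
    #include <vector>

    // Average nanoseconds per dependent random access over a buffer of
    // n indices, chased for `hops` steps.
    double chase_ns(std::size_t n, std::size_t hops) {
        std::vector<std::size_t> next(n);
        std::iota(next.begin(), next.end(), 0);

        // Sattolo's algorithm: shuffle into a single-cycle permutation,
        // so the chase visits the whole buffer instead of a short loop.
        std::mt19937_64 rng(42);
        for (std::size_t i = n - 1; i > 0; --i) {
            std::size_t j = rng() % i;  // j in [0, i-1]
            std::swap(next[i], next[j]);
        }

        std::size_t p = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t h = 0; h < hops; ++h)
            p = next[p];  // dependent load: cannot be prefetched ahead
        auto t1 = std::chrono::steady_clock::now();

        volatile std::size_t sink = p;  // keep the loop from being optimized out
        (void)sink;
        return std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
    }

    int main() {
        // Assumed tier sizes in KiB: roughly L1, L2, and DRAM-resident.
        const std::size_t kib[] = {16, 256, 65536};
        for (std::size_t k : kib) {
            std::size_t n = k * 1024 / sizeof(std::size_t);
            std::printf("%8zu KiB: %.2f ns/access\n", k, chase_ns(n, 1000000));
        }
        return 0;
    }
    ```

    On typical hardware the ns/access figure jumps noticeably at each tier boundary, which gives the parameters the post says a program design should be tuned against.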
    88888 Dihedral, Mar 1, 2012
