shared graphics in notebook

Discussion in 'VHDL' started by suz, Jul 20, 2004.

  1. suz

    suz Guest

    I'm planning to buy a notebook to run Verilog simulations. Some
    models come with "shared graphics memory" and offer a better
    price. I'm wondering whether this kind of architecture has any
    impact on simulation performance.
    I think the simulation process itself is memory-access intensive,
    and if there is display activity going on at the same time, the
    two sides may fight for the memory bus.
    Does anyone have ideas or experience with this?
     
    suz, Jul 20, 2004
    #1

  2. Jason Zheng

    Jason Zheng Guest

    suz wrote:

    > I'm planning to buy a notebook to run Verilog simulations. Some
    > models come with "shared graphics memory" and offer a better
    > price. I'm wondering whether this kind of architecture has any
    > impact on simulation performance.
    > I think the simulation process itself is memory-access intensive,
    > and if there is display activity going on at the same time, the
    > two sides may fight for the memory bus.
    > Does anyone have ideas or experience with this?


    I have no experience with your exact situation, but beyond the
    fact that shared-memory graphics eats into your main memory,
    there's little to worry about in terms of the graphics subsystem
    fighting the CPU for memory: if you are running a simulation, the
    graphical work happens after the number-crunching is done. I'm not
    sure about gaming performance, but think about it: would you
    really count on playing Unreal Tournament 2003 smoothly on a cheap
    Intel graphics chip (with shared memory)? Probably not, even on a
    desktop PC, and even if it had its own memory.
     
    Jason Zheng, Jul 20, 2004
    #2

  3. Cliff Brake

    Cliff Brake Guest

    Jason Zheng wrote:

    > suz wrote:
    >
    >> I'm planning to buy a notebook to run Verilog simulations. Some
    >> models come with "shared graphics memory" and offer a better
    >> price. I'm wondering whether this kind of architecture has any
    >> impact on simulation performance.
    >> I think the simulation process itself is memory-access intensive,
    >> and if there is display activity going on at the same time, the
    >> two sides may fight for the memory bus.
    >> Does anyone have ideas or experience with this?

    >
    > I have no experience with your exact situation, but beyond the
    > fact that shared-memory graphics eats into your main memory,
    > there's little to worry about in terms of the graphics subsystem
    > fighting the CPU for memory: if you are running a simulation, the
    > graphical work happens after the number-crunching is done. I'm not
    > sure about gaming performance, but think about it: would you
    > really count on playing Unreal Tournament 2003 smoothly on a cheap
    > Intel graphics chip (with shared memory)? Probably not, even on a
    > desktop PC, and even if it had its own memory.


    Typically, in a shared memory architecture, the display controller has to
    constantly scan the frame buffer in main memory to repaint the display.
    Depending on the size of your display and the memory bandwidth, this
    scanning may use enough memory bandwidth to affect system performance.
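
    A rough sketch of that cost in Python, assuming the display
    controller re-reads an uncompressed linear frame buffer once per
    refresh (the resolution below is just an illustrative example):

        # Scan-out traffic: every pixel is fetched once per refresh.
        def refresh_bandwidth_mb(width, height, bits_per_pixel, refresh_hz):
            """MB/s the display controller pulls from the frame buffer."""
            return width * height * (bits_per_pixel // 8) * refresh_hz / 1e6

        # Example: a modest 800x600 desktop at 16-bit color, 60 Hz.
        print(refresh_bandwidth_mb(800, 600, 16, 60))  # ~57.6 MB/s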

    --

    Cliff Brake
    BEC Systems
    cbrake _at_ bec-systems _dot_ com
     
    Cliff Brake, Jul 21, 2004
    #3
  4. Anthony J Bybell

    Anthony J Bybell Guest

    Jason Zheng <> wrote in message news:<cdju0f$fqp$>...

    > I have no experience with your exact situation, but beyond the
    > fact that shared-memory graphics eats into your main memory,
    > there's little to worry about in terms of the graphics subsystem
    > fighting the CPU for memory: if you are running a simulation, the
    > graphical work happens after the number-crunching is done. I'm not
    > sure about gaming performance, but think about it: would you
    > really count on playing Unreal Tournament 2003 smoothly on a cheap
    > Intel graphics chip (with shared memory)? Probably not, even on a
    > desktop PC, and even if it had its own memory.


    I think you're missing something fundamental: how does the RAMDAC draw
    the display for a plain vanilla 2D framebuffer? By regular fetches to
    mainstore.

    I suppose you could blank the screen by disabling video during
    simulation, but that might not be convenient.

    -t
     
    Anthony J Bybell, Jul 21, 2004
    #4
  5. Jason Zheng

    Jason Zheng Guest

    Anthony J Bybell wrote:
    > Jason Zheng <> wrote in message news:<cdju0f$fqp$>...
    >
    >
    >> I have no experience with your exact situation, but beyond the
    >> fact that shared-memory graphics eats into your main memory,
    >> there's little to worry about in terms of the graphics subsystem
    >> fighting the CPU for memory: if you are running a simulation, the
    >> graphical work happens after the number-crunching is done. I'm not
    >> sure about gaming performance, but think about it: would you
    >> really count on playing Unreal Tournament 2003 smoothly on a cheap
    >> Intel graphics chip (with shared memory)? Probably not, even on a
    >> desktop PC, and even if it had its own memory.

    >
    >
    > I think you're missing something fundamental: how does the RAMDAC draw
    > the display for a plain vanilla 2D framebuffer? By regular fetches to
    > mainstore.
    >
    > I suppose you could blank the screen by disabling video during
    > simulation, but that might not be convenient.
    >
    > -t

    Yeah, but that happens at the 60-100 Hz refresh rate; that's very
    little bandwidth.
     
    Jason Zheng, Jul 23, 2004
    #5
  6. Tim Hubberstey

    Tim Hubberstey Guest

    Jason Zheng wrote:
    > Anthony J Bybell wrote:
    >
    >> Jason Zheng <> wrote in message
    >> news:<cdju0f$fqp$>...
    >>
    >>
    >>> I have no experience with your exact situation, but beyond the
    >>> fact that shared-memory graphics eats into your main memory,
    >>> there's little to worry about in terms of the graphics subsystem
    >>> fighting the CPU for memory: if you are running a simulation, the
    >>> graphical work happens after the number-crunching is done. I'm not
    >>> sure about gaming performance, but think about it: would you
    >>> really count on playing Unreal Tournament 2003 smoothly on a cheap
    >>> Intel graphics chip (with shared memory)? Probably not, even on a
    >>> desktop PC, and even if it had its own memory.

    >>
    >> I think you're missing something fundamental: how does the RAMDAC draw
    >> the display for a plain vanilla 2D framebuffer? By regular fetches to
    >> mainstore.
    >>
    >> I suppose you could blank the screen by disabling video during
    >> simulation, but that might not be convenient.
    >>
    >> -t

    >
    > Yeah, but that happens at the 60-100 Hz refresh rate; that's very
    > little bandwidth.


    That may be the refresh rate, but it is NOT the refresh bandwidth
    requirement. A "typical" display these days is at least 8-bit
    color at 1024x768 with a 75 Hz refresh, which translates to a
    minimum bandwidth requirement of 59 MB/s. A more common CAD setup
    is 24-bit color at 1280x1024 at 85 Hz -> 334 MB/s. To me, that
    doesn't count as "very little bandwidth".

    The performance impact of shared memory depends on many design factors
    in addition to the display resolution so the only true measure of
    whether or not the setup is acceptable is to test it.

    But consider this: you will be spending significant $ on software.
    Does it really make sense to then cripple the performance of that
    software to save $100 on your hardware?
    --
    Tim Hubberstey, P.Eng. . . . . . Hardware/Software Consulting Engineer
    Marmot Engineering . . . . . . . VHDL, ASICs, FPGAs, embedded systems
    Vancouver, BC, Canada . . . . . . . . . . . http://www.marmot-eng.com
     
    Tim Hubberstey, Jul 23, 2004
    #6
  7. Jason Zheng

    Jason Zheng Guest

    Tim Hubberstey wrote:
    > Jason Zheng wrote:
    >
    >> Anthony J Bybell wrote:
    >>
    >>> Jason Zheng <> wrote in message
    >>> news:<cdju0f$fqp$>...
    >>>
    >>>
    >>>> I have no experience with your exact situation, but beyond the
    >>>> fact that shared-memory graphics eats into your main memory,
    >>>> there's little to worry about in terms of the graphics subsystem
    >>>> fighting the CPU for memory: if you are running a simulation, the
    >>>> graphical work happens after the number-crunching is done. I'm not
    >>>> sure about gaming performance, but think about it: would you
    >>>> really count on playing Unreal Tournament 2003 smoothly on a cheap
    >>>> Intel graphics chip (with shared memory)? Probably not, even on a
    >>>> desktop PC, and even if it had its own memory.
    >>>
    >>> I think you're missing something fundamental: how does the RAMDAC draw
    >>> the display for a plain vanilla 2D framebuffer? By regular fetches to
    >>> mainstore.
    >>>
    >>> I suppose you could blank the screen by disabling video during
    >>> simulation, but that might not be convenient.
    >>>
    >>> -t

    >>
    >>
    >> Yeah, but that happens at the 60-100 Hz refresh rate; that's very
    >> little bandwidth.

    >
    >
    > That may be the refresh rate, but it is NOT the refresh bandwidth
    > requirement. A "typical" display these days is at least 8-bit
    > color at 1024x768 with a 75 Hz refresh, which translates to a
    > minimum bandwidth requirement of 59 MB/s. A more common CAD setup
    > is 24-bit color at 1280x1024 at 85 Hz -> 334 MB/s. To me, that
    > doesn't count as "very little bandwidth".


    Just for the sake of argument, a laptop setup is more likely to be
    1024x768 at 24-bit color and 60 Hz (active matrix), which is about
    142 MB/s. Dual-channel DDR266 gives you about 4.3 GB/s of peak
    bandwidth; dual-channel DDR400 gives you 6.4 GB/s. Also consider
    that the memory controller only has to service the graphics
    controller's fetches spread across each ~17 ms frame.
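
    Side by side, the refresh traffic is a few percent of the
    theoretical peak (a sketch; peak DDR numbers are maxima that real
    workloads never sustain, and contention adds arbitration and
    page-miss overhead on top):

        # Refresh traffic vs. theoretical peak memory bandwidth.
        refresh = 1024 * 768 * 3 * 60          # ~142 MB/s, 24-bit @ 60 Hz
        ddr266_dual = 2 * 266.67e6 * 8         # ~4.27 GB/s peak
        ddr400_dual = 2 * 400e6 * 8            # ~6.4 GB/s peak

        print(f"{refresh / ddr266_dual:.1%}")  # ~3.3% of dual DDR266
        print(f"{refresh / ddr400_dual:.1%}")  # ~2.2% of dual DDR400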

    >
    > The performance impact of shared memory depends on many design factors
    > in addition to the display resolution so the only true measure of
    > whether or not the setup is acceptable is to test it.


    Point taken: you'd need a laptop where you can switch between a
    discrete graphics card and the onboard graphics chip to compare,
    and I don't think you can do that unless you have a docking station
    with an AGP slot.

    > But consider this: you will be spending significant $ on software.
    > Does it really make sense to then cripple the performance of that
    > software to save $100 on your hardware?


    I agree, but this is for a low-budget setup, so the person making
    this purchase probably won't spend big bucks on software to begin
    with.
     
    Jason Zheng, Jul 26, 2004
    #7
