Jukka Lahtinen
Dirk Bruere at NeoPax said:
Processing power and bandwidth are only going to get cheaper for the next
30 years
...unless the recent disaster in Japan temporarily changes that.
Dirk Bruere at NeoPax said:
Processing power and bandwidth are only going to get cheaper for the next
30 years
Joshua said:
My understanding is that we are beginning to hit physical limits. Hell,
clock speeds won't go up anymore because we can't cool chips. Cramming
more transistors onto the same size die will become impossible at around
11nm technology or so; any smaller, and quantum effects start to break
things. That means that Moore's Law will finally break down around 2015,
unless we switch from semiconductor-based computing.
Yes.
In other words, within the next decade or so, the only tenable way to
increase processing power is parallelization, and parsing is still
rather serial...
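The quoted point, that a serial phase such as parsing caps what parallel hardware can deliver, is exactly what Amdahl's Law formalizes. A minimal sketch (the 20% serial fraction is an assumed, illustrative number, not something from the thread):

```python
def amdahl_speedup(serial_fraction, n_cores):
    # Amdahl's Law: only the parallel portion of a task
    # benefits from extra cores.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# A task that is 20% serial (say, a parsing phase) can never run
# more than 5x faster, no matter how many cores are added:
for cores in (2, 4, 16, 1024):
    print(cores, round(amdahl_speedup(0.2, cores), 2))
```

With a serial fraction of 0.2 the speedup approaches, but never exceeds, 1/0.2 = 5, which is why shrinking the serial part matters more than adding cores.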
I would not expect Moore's law to be broken in 2015.
They can still shrink a little bit, the number of cores will
increase, and the size of caches will increase (the cost of a cache
miss is equivalent to a lot of instructions using the cache). There
will be smarter CPUs (maybe even more CISCy) and fundamentally new
technologies (even though I suspect they will not be generally
available as soon as 2015).
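The cache-miss remark above can be put in rough numbers. A sketch with assumed figures (a 3 GHz core retiring one instruction per cycle, a 60 ns main-memory access; real values vary widely by machine):

```python
clock_hz = 3e9                       # assumed 3 GHz core
cycle_ns = 1e9 / clock_hz            # about 0.33 ns per cycle
miss_penalty_ns = 60.0               # assumed main-memory latency
# Instructions the core could have retired while waiting on one miss:
instructions_per_miss = miss_penalty_ns / cycle_ns
print(round(instructions_per_miss))  # 180 with these assumptions
```

So a single miss costs on the order of a couple of hundred cache-hitting instructions, which is why bigger caches keep paying off.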
Moore's Law states simply that the *number of transistors* on a chip
doubles every 2 years (implicitly referring to roughly same-size chips,
so perhaps transistor density doubling would be a more precise wording).
Ah - You are correct.
I was really talking about the doubling of computing speed every 2 years
that so far has been a consequence of Moore's law, but does not
necessarily need to be tied to it.
Arne
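The doubling claim is easy to turn into arithmetic. A small sketch (the function name is mine, not from the thread):

```python
def growth_factor(years, doubling_period=2.0):
    # Capacity that doubles every `doubling_period` years grows
    # by 2 ** (years / doubling_period) overall.
    return 2.0 ** (years / doubling_period)

print(growth_factor(10))         # 32.0: five doublings in a decade
print(round(growth_factor(45)))  # 1965 to 2010: millions-fold growth
```

The same formula works for transistor counts or for speed; the thread's disagreement is only about which quantity the doubling actually applies to.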
Lew said:
So?
I was asking about performance, not size. As mentioned, ZIP handles the
performance implications of the size just fine. So that's a red herring.
I think that's what most people think about when talking about Moore's
Law, the processing power doubling time.
I don't see it coming to a halt for at least another 20 years (as
always). Graphene is on the horizon and clock speeds will start to rise
again. It also seems good down to the nm level.
Then there is the possibility of 3D stacking, or wafer scale integration.
Yes.
But Joshua is correct that Moore did really talk about the transistors.
I think he was a chip guy, not a software guy.
I would also expect solutions to be found.
Difficult to say, since there are plans to go to a 12nm node eventually.

Actually, if you really want to be accurate, Moore's Law is merely a
descriptive observation, not a prescriptive rule: Moore noticed that
semiconductors had been following this curve. Then the chip
manufacturers basically set it as their goalposts, and it became a
self-fulfilling prophecy.
I don't expect it to stall forever, but I do expect about 5 years or so
of a gap between technologies.
Joshua said:
Actually, if you really want to be accurate, Moore's Law is merely a
descriptive observation, not a prescriptive rule: Moore noticed that
semiconductors had been following this curve. Then the chip
manufacturers basically set it as their goalposts, and it became a
self-fulfilling prophecy.
And he was talking about economics: the number of transistors that
could be put on a chip at a reasonable price, for mass production.
And, of course, Moore himself predicted the curve would persist for
"about ten years". In 1965.
Then about five years later Carver Mead coined the phrase "Moore's
Law", and mythology set in. Later, as you say, IC designers started
using it to set product-release schedules - which makes economic
sense, but arguably makes the observation somewhat less interesting.
Dirk said:
More interesting is its generalization to computing power per dollar.
Also whether there will be a plateau for the next few years as power
consumption and parallel programming issues are addressed. I suspect
that if we add GPUs into the equation then Moore's Law will continue.
You raise a very good point - it's not only symmetric multiprocessing
that improves performance. The old Commodore Amiga used that
principle to good effect, and of course modern PCs do, too. It can be
very effective to add specialty processors like GPUs (which are used
to good effect in supercomputing, incidentally).
GPUs aren't just graphics processors. They're optimized for
calculations that are useful in graphics, like vector operations (four
32-bit floats per 128-bit register!), multiply-add, bit-blt and
other operations that turn out to be useful for all kinds of work.
So a general-purpose CPU can ask a specialty one to calculate, say,
the product of a scalar and a 64-by-64 matrix and get substantial
performance boosts.
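A pure-Python stand-in for the scalar-matrix product mentioned above. Every output element is computed independently, which is what makes the operation a natural fit for a GPU's many parallel lanes (the 64-by-64 size and any actual offload API are not shown, just the shape of the computation):

```python
def scale_matrix(s, m):
    # Scalar-matrix product: each element is independent of the
    # others, so a GPU could assign one lane per element.
    return [[s * x for x in row] for row in m]

print(scale_matrix(2.5, [[1, 2], [3, 4]]))  # [[2.5, 5.0], [7.5, 10.0]]
```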
Maybe.
I think that when CUDA and friends came out PC power jumped an order or
two of magnitude (in theory). A home machine with 1TFLOPS became easy to
do. So I guess Moore's Law kind of spiked at that point.
Still we are factors of billions away from any realistic fundamental
computing limits. The human brain does around 1 PFLOPS/W, and I doubt
it's particularly well optimized.
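The 1 TFLOPS figure above can be sanity-checked with the usual peak-throughput arithmetic: cores times clock times FLOPs per cycle. The numbers below are assumed, illustrative ones roughly in line with a CUDA-era consumer GPU, not taken from a specific datasheet:

```python
def peak_gflops(cores, clock_ghz, flops_per_core_per_cycle):
    # Theoretical peak throughput, ignoring memory bandwidth
    # and occupancy, which usually dominate in practice.
    return cores * clock_ghz * flops_per_core_per_cycle

# ~240 stream processors at ~1.3 GHz, 3 FLOPs per cycle each:
print(peak_gflops(240, 1.3, 3))  # on the order of 1000 GFLOPS
```

Which is why "a home machine with 1 TFLOPS" was suddenly plausible, at least as a theoretical peak.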
Arne Vajhøj said:
Feel free to come up with a more optimized design!
It's fun to make new prototypes.