Java’s Future Lies In Mobile?

  • Thread starter Lawrence D'Oliveiro

Jukka Lahtinen

Dirk Bruere at NeoPax said:
Processing power and bandwidth are only going to get cheaper for the next
30 years

...unless the recent disaster in Japan temporarily changes that.
 

Joshua Cranmer

Dirk Bruere at NeoPax said:
Processing power and bandwidth are only going to get cheaper for the
next 30 years

My understanding is that we are beginning to hit physical limits. Hell,
clock speeds won't go up anymore because we can't cool chips. Cramming
more transistors onto the same size die will become impossible at around
11nm technology or so; any smaller, and quantum effects start to break
things. That means that Moore's Law will finally break down around 2015,
unless we switch from semiconductor-based computing.

In other words, within the next decade or so, the only tenable way to
increase processing power is parallelization, and parsing is still
rather serial...
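
To make the contrast concrete, here is a minimal Java sketch (using
the java.util.stream API for brevity; the workload is made up, only
the structure matters):

    import java.util.stream.LongStream;

    public class ParallelVsSerial {
        public static void main(String[] args) {
            // Embarrassingly parallel work splits cleanly across cores:
            // each term is independent of every other term.
            long sum = LongStream.rangeClosed(1, 1_000_000)
                                 .parallel()
                                 .map(n -> n * n)
                                 .sum();
            System.out.println("sum of squares: " + sum);

            // A parser, by contrast, carries state from byte to byte
            // (nesting depth, current token, ...), so the equivalent
            // loop cannot simply be handed to .parallel().
        }
    }

Whether the parallel version actually wins depends on the workload;
the point is only that the first loop parallelizes trivially and a
parser's main loop does not.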
 

Lew

Joshua said:
My understanding is that we are beginning to hit physical limits. Hell,
clock speeds won't go up anymore because we can't cool chips. Cramming
more transistors onto the same size die will become impossible at around
11nm technology or so; any smaller, and quantum effects start to break
things. That means that Moore's Law will finally break down around 2015,
unless we switch from semiconductor-based computing.

Yes.

In other words, within the next decade or so, the only tenable way to
increase processing power is parallelization ...

No.

You said it yourself - unless we switch from semiconductor-based
computing.

Which we'll do, I believe within this decade.

We might also improve chip density and clock speed, even with
semiconductors, well enough to keep up with Moore's Law. Diamond
substrates show some promise.

Anyhow, I challenge "only tenable way". There are too many others
under development that might turn out to be tenable.
 

Arne Vajhøj

Joshua Cranmer said:
My understanding is that we are beginning to hit physical limits. Hell,
clock speeds won't go up anymore because we can't cool chips. Cramming
more transistors onto the same size die will become impossible at around
11nm technology or so; any smaller, and quantum effects start to break
things. That means that Moore's Law will finally break down around 2015,
unless we switch from semiconductor-based computing.

In other words, within the next decade or so, the only tenable way to
increase processing power is parallelization, and parsing is still
rather serial...

I would not expect Moore's Law to be broken in 2015.

They can still shrink a little bit, the number of cores will
increase, and cache sizes will increase (the cost of a single cache
miss is equivalent to executing a lot of instructions that hit the
cache). We may also see smarter CPUs (maybe even more CISCy) and
fundamentally new technologies (though I suspect those will not be
generally available as soon as 2015).
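
To make the cache point concrete, a rough sketch (not a serious
benchmark; the exact ratio varies by machine):

    public class CacheMiss {
        static final int N = 4096;

        public static void main(String[] args) {
            int[][] table = new int[N][N];   // 64 MB of ints
            long sum = 0;

            // Row-major traversal walks memory sequentially,
            // so most accesses hit the cache.
            long t0 = System.nanoTime();
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    sum += table[i][j];
            long t1 = System.nanoTime();

            // Column-major traversal jumps to a different row array
            // on every access, so most accesses miss the cache.
            for (int j = 0; j < N; j++)
                for (int i = 0; i < N; i++)
                    sum += table[i][j];
            long t2 = System.nanoTime();

            System.out.printf("row-major %d ms, column-major %d ms (sum=%d)%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum);
        }
    }

Same instruction count in both loops; the difference is almost
entirely cache misses.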

Arne
 

Joshua Cranmer

Arne Vajhøj said:
I would not expect Moore's Law to be broken in 2015.

They can still shrink a little bit, the number of cores will
increase, and cache sizes will increase (the cost of a single cache
miss is equivalent to executing a lot of instructions that hit the
cache). We may also see smarter CPUs (maybe even more CISCy) and
fundamentally new technologies (though I suspect those will not be
generally available as soon as 2015).

Moore's Law states simply that the *number of transistors* on a chip
doubles every 2 years (implicitly referring to roughly same-size chips,
so perhaps transistor density doubling would be a more precise wording).
Improving ISAs would certainly help in terms of performance, but most of
the research I've seen recently has focused on helping parallelization,
not improving serial code execution.
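
The law itself is just geometric growth. As a back-of-the-envelope
(taking the Intel 4004's roughly 2,300 transistors in 1971 as the
starting point):

    public class Doubling {
        public static void main(String[] args) {
            double transistors = 2_300;   // baseline: Intel 4004, 1971
            for (int year = 1971; year <= 2011; year += 2) {
                System.out.printf("%d: ~%,.0f transistors%n",
                        year, transistors);
                transistors *= 2;         // one doubling per two years
            }
        }
    }

Twenty doublings over those 40 years is a factor of about a million,
landing in the low billions of transistors - roughly where today's
chips actually sit.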
 

Arne Vajhøj

Joshua Cranmer said:
Moore's Law states simply that the *number of transistors* on a chip
doubles every 2 years (implicitly referring to roughly same-size chips,
so perhaps transistor density doubling would be a more precise wording).

Ah - You are correct.

I was really talking about the doubling of computing speed every 2
years that so far has been a consequence of Moore's Law, but which
does not necessarily need to be tied to it.

Arne
 

Dirk Bruere at NeoPax

Arne Vajhøj said:
Ah - You are correct.

I was really talking about the doubling of computing speed every 2
years that so far has been a consequence of Moore's Law, but which
does not necessarily need to be tied to it.

Arne

I think that's what most people mean when they talk about Moore's
Law: the processing-power doubling time.

I don't see it coming to a halt for at least another 20 years (as
always). Graphene is on the horizon, clock speeds will start to rise
again, and graphene seems to behave well down to the nanometre scale.

Then there is the possibility of 3D stacking, or wafer scale integration.
 

Michael Wojcik

Lew said:
So?

I was asking about performance, not size. As mentioned, ZIP handles the
performance implications of the size just fine. So that's a red herring.

"ZIP handles the performance implications of the size" for
transmission purposes (and, perhaps, long-term storage purposes,
depending on the application). Size has other effects on performance:
paging, cache lines, locality of reference.

I've seen customers operate on some rather large XML documents, up in
the multiple-gigabyte range. (Yes, there are people creating ETL
applications that will unload entire database tables into a single XML
file and then ship them around and process them.) There, a factor of
5 or 10 can make a big difference to an application's working set,
cache misses, and so forth.

And if you have a system that's handling a lot of XML documents
simultaneously (say, a server with a lot of clients), and it's not
feasible to zip those out of band (say, because they're dynamically
generated), it may not be feasible to zip them for transmission, either.
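
Where in-band compression is feasible, it is at least cheap to do; a
minimal sketch with the standard java.util.zip classes (the file
names are made up):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public class CompressXml {
        public static void main(String[] args) throws IOException {
            // Gzip a (hypothetical) XML dump before shipping it.
            try (FileInputStream in = new FileInputStream("dump.xml");
                 GZIPOutputStream out = new GZIPOutputStream(
                         new FileOutputStream("dump.xml.gz"))) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);  // verbose markup compresses well
                }
            }
        }
    }

Note that this only shrinks the bytes on the wire; once parsed, the
document occupies its full size in memory, so the working-set effects
above remain.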
 

Arne Vajhøj

Dirk Bruere at NeoPax said:
I think that's what most people mean when they talk about Moore's
Law: the processing-power doubling time.

Yes.

But Joshua is correct that Moore really was talking about
transistors. I think he was a chip guy, not a software guy.

Dirk Bruere at NeoPax said:
I don't see it coming to a halt for at least another 20 years (as
always). Graphene is on the horizon, clock speeds will start to rise
again, and graphene seems to behave well down to the nanometre scale.

Then there is the possibility of 3D stacking, or wafer scale integration.

I would also expect solutions to be found.

Arne
 

Joshua Cranmer

Arne Vajhøj said:
Yes.

But Joshua is correct that Moore really was talking about
transistors. I think he was a chip guy, not a software guy.

Actually, if you really want to be accurate, Moore's Law is merely a
descriptive observation, not a prescriptive rule: Moore noticed that
semiconductors had been following this curve. Then the chip
manufacturers basically set it as their goalposts, and it became a
self-fulfilling prophecy.

Arne Vajhøj said:
I would also expect solutions to be found.

I don't expect it to stall forever, but I do expect about 5 years or so
of a gap between technologies.
 

Dirk Bruere at NeoPax

Joshua Cranmer said:
Actually, if you really want to be accurate, Moore's Law is merely a
descriptive observation, not a prescriptive rule: Moore noticed that
semiconductors had been following this curve. Then the chip
manufacturers basically set it as their goalposts, and it became a
self-fulfilling prophecy.

I don't expect it to stall forever, but I do expect about 5 years or
so of a gap between technologies.

Difficult to say, since there are plans to go to a 12 nm node
eventually. IIRC we are currently at around 30 nm for leading-edge
stuff.
 

Michael Wojcik

Joshua said:
Actually, if you really want to be accurate, Moore's Law is merely a
descriptive observation, not a prescriptive rule: Moore noticed that
semiconductors had been following this curve. Then the chip
manufacturers basically set it as their goalposts, and it became a
self-fulfilling prophecy.

And he was talking about economics: the number of transistors that
could be put on a chip at a reasonable price, for mass production.

And, of course, Moore himself predicted the curve would persist for
"about ten years". In 1965.

Then about five years later Carver Mead coined the phrase "Moore's
Law", and mythology set in. Later, as you say, IC designers started
using it to set product-release schedules - which makes economic
sense, but arguably makes the observation somewhat less interesting.
 

Dirk Bruere at NeoPax

Michael Wojcik said:
And he was talking about economics: the number of transistors that
could be put on a chip at a reasonable price, for mass production.

And, of course, Moore himself predicted the curve would persist for
"about ten years". In 1965.

Then about five years later Carver Mead coined the phrase "Moore's
Law", and mythology set in. Later, as you say, IC designers started
using it to set product-release schedules - which makes economic
sense, but arguably makes the observation somewhat less interesting.

More interesting is its generalization to computing power per
dollar, and whether there will be a plateau for the next few years
while power-consumption and parallel-programming issues are
addressed. I suspect that if we add GPUs into the equation, Moore's
Law will continue.
 

Lew

Dirk said:
More interesting is its generalization to computing power per
dollar, and whether there will be a plateau for the next few years
while power-consumption and parallel-programming issues are
addressed. I suspect that if we add GPUs into the equation, Moore's
Law will continue.

You raise a very good point - it's not only symmetric multiprocessing
that improves performance. The old Commodore Amiga used that
principle to good effect, and of course modern PCs do, too. It can be
very effective to add specialty processors like GPUs (which are used
to good effect in supercomputing, incidentally).

GPUs aren't just graphics processors. They're optimized for
calculations that are useful in graphics: vector operations (four
32-bit floats per 128-bit register!), multiply-add, bit-blt, and
other operations that turn out to be useful for all kinds of work.

So a general-purpose CPU can ask a specialty one to calculate, say,
the product of a scalar and a 64-by-64 matrix and get substantial
performance boosts.

Maybe.
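
The "maybe" is easy to see: the kernel itself is trivial on the CPU
side (a plain-Java sketch, no GPU API involved), so offloading only
pays if the arithmetic outweighs the cost of shipping the data to the
device and back.

    public class ScalarTimesMatrix {
        static final int N = 64;

        // Multiply every element of an N x N matrix by a scalar.
        static void scale(double[][] m, double k) {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    m[i][j] *= k;
        }

        public static void main(String[] args) {
            double[][] m = new double[N][N];
            m[0][0] = 1.5;
            scale(m, 2.0);
            System.out.println(m[0][0]);   // prints 3.0
        }
    }

For a 64-by-64 matrix that is only 4,096 multiplies, which a GPU
transfer can easily dwarf; the win shows up at much larger sizes or
when the data already lives on the device.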
 

Dirk Bruere at NeoPax

Lew said:
You raise a very good point - it's not only symmetric multiprocessing
that improves performance. The old Commodore Amiga used that
principle to good effect, and of course modern PCs do, too. It can be
very effective to add specialty processors like GPUs (which are used
to good effect in supercomputing, incidentally).

GPUs aren't just graphics processors. They're optimized for
calculations that are useful in graphics: vector operations (four
32-bit floats per 128-bit register!), multiply-add, bit-blt, and
other operations that turn out to be useful for all kinds of work.

So a general-purpose CPU can ask a specialty one to calculate, say,
the product of a scalar and a 64-by-64 matrix and get substantial
performance boosts.

Maybe.

I think that when CUDA and friends came out, PC power jumped an
order or two of magnitude (in theory). A home machine with 1 TFLOPS
became easy to build. So I guess Moore's Law kind of spiked at that
point.

Still, we are factors of billions away from any realistic fundamental
computing limits. The human brain does around 1 PFLOPS/W, and I doubt
it's particularly well optimized.
 

Arne Vajhøj

Dirk Bruere at NeoPax said:
I think that when CUDA and friends came out, PC power jumped an
order or two of magnitude (in theory). A home machine with 1 TFLOPS
became easy to build. So I guess Moore's Law kind of spiked at that
point.

Still, we are factors of billions away from any realistic fundamental
computing limits. The human brain does around 1 PFLOPS/W, and I doubt
it's particularly well optimized.

Feel free to come up with a more optimized design!

:)

Arne
 
