Patricia Shanahan said:
flexible.
How do you know it is fast? If you have enough of a processor design to
make performance evaluation possible I would be interested in seeing it.
I don't claim to be an expert on processor architecture, but I've been
a performance architect for large multiprocessor systems, and I've
worked enough with processor architects to have some idea of their issues.
In computer hardware, space inefficiency often causes time inefficiency.
For example, consider loading data from memory, one of my favorite
subjects. Each signal wire costs at least one chip pin, and the number
that can be supported by a package is limited. Each wire has a limited
number of times per second that it can change state cleanly. Moving two
bits for each data bit reduces the number of data bits that can be moved
from memory to processor in a second.
If that's the case, then why do computer systems still use bus widths of 32
bits or 64 bits?
Why don't they make it 1,000 or 1,000,000 wires running directly to memory?
They can already make very small wires, so it's not a problem at all.

The reason might be that increasing the bandwidth isn't that interesting,
since most operations are still 32-bit, maybe 64-bit.
So I present to you the chicken-and-egg problem:
another fine issue holding back the development of more powerful computers.
Again, this won't be an issue with variable-bit CPUs.
They can be tiny little 1-bit CPUs running at incredible speed, since they
are so tiny.
Also... could it be done wirelessly?
Similar issues arise in caches. Each cache has size limits that result
e.g. from physical layout limitations, especially caches that are placed
close to the arithmetic unit. Using space in a cache for metadata means
less space for payload data, so more cache misses. That applies at every
level of the memory hierarchy, from buffers in arithmetic units to main
memory.
Even if there is physical space to make something bigger, doing so
generally makes it slower and/or increases its power consumption. Two of
the main problems in computer architecture now are power distribution
and cooling.
Exactly the problem... processors nowadays probably have many circuits to
handle:
move 8 bits, 16 bits
add 16 bits, 8 bits
mov 32 bits, 16 bits
etc., etc., etc.
All kinds of combinations,
which make the chip incredibly large.
Throw away all that junk.
Simply replace it with a 1-bit variable-bit CPU and make it really tiny = FAST,
by your own definition.
And simply pump up the speed at which it can do single-bit operations and
bitstream memory transfers.
As soon as you hit some kind of physical limit doing this... for example:
electrons cannot travel any faster...
the only remaining solution would be to do things in parallel.
Now that dual cores and multi-cores are on the horizon... it makes
perfect sense to ditch all the big slow junk
and replace it with tiny 1-bit CPUs,
and simply slap as many of these tiny CPUs as possible onto the surface of
whatever it is you want to use.
Memory access might be a problem, since it cannot all happen at once.
Each 1-bit CPU would probably require three bitstreams: two bitstreams for
input, one bitstream for output.
It might require a special memory controller to make sure that the CPUs are
not trying to work on the same locations in main memory at the same time,
etc.; that might be bad.
The Cell processor solves this problem differently and has a little bit of
separate memory for each separate "CPU".
Though this design would prevent the flexibility and scalability I have in
mind...
so the Cell design goes out the window.
To add anything to a processor design you need to show that it is a
better use of the resources it will take than competing uses.
Seriously, I suggest the following steps:
1. Unless you already have equivalent knowledge of computer
architecture, read the Hennessy and Patterson book I recommended.
I see no reason to look at old complex junk like this, except when I need to
see how they solved certain thingies etc.
2. Evaluate your idea using the methods they give. Compare to
software-only implementation, as well as not doing it at all.
No thanks, I'll go my own way... if it happens to be the same way they did
it, then they must have done something right; otherwise they suck.
3. If you still think it is a good idea, take it to comp.arch.
I take it everywhere
Now let me ask you a simple question:
Suppose a chip is built using 64 of these tiny 1-bit variable-bit CPUs,
which can handle any operation.
Compare such a chip with a modern 64-bit chip, which probably uses many,
many, many transistors for many, many, many different cases.
In other words, these modern 64-bit chips waste many, many, many transistors
on things which could have been implemented just ONCE for the general case.
So these modern 64-bit chips probably waste lots of space.
And I don't need to read any damn book for that.
I just look at the instruction set and see all the different cases which are
mentioned.
Bye,
Skybuck.