The variable bit cpu

Discussion in 'C Programming' started by Skybuck Flying, Jul 30, 2005.

  1. Hi,

    I think I might have just invented the variable bit cpu :)

    It works simply like this:

    Each "data bit" has a "meta data bit".

    The meta data bit describes if the bit is the ending bit of a possibly large
    structure/field.

    The meta bits together form a sort of bit mask or bit pattern.

    For example the idea is best seen when putting the data bits
    and meta bits below each other.

    data bits: 01110101110101101010101
    meta bits: 00000000100010001100001

    In reality the data bit and meta bit are grouped together as a single entity
    which can be read into the cpu since otherwise the cpu would not know where
    to start reading the data or meta bits. It simply starts with the first
    data + meta bit pair.

    Because a cpu might need to know the length of the bit field up front, the
    cpu/algorithm works simply as follows:

    The cpu starts reading data and meta bits until it reaches a meta bit of 1.

    All bits that form the variable bit field are now read and can be used etc.

    The above example then looks like this:

    data bits: 011101011#1010#1101#0#10101
    meta bits: 000000001#0001#0001#1#00001

    (The # sign is to indicate to you where the variable bit fields are.)

    Notice how even single bit fields are possible.

    The reason for the variable bit cpu with variable bit software is to save
    costs and to make computers/software even more powerful and useful ;)

    For example:

    Currently fixed-bit software has to be re-written or modified, re-compiled,
    re-documented, re-distributed, re-installed, and re-configured when its
    fixed-bit limit is reached and has to be increased, for example from 32 bit
    to 64 bit etc.

    Examples are Windows XP 32 to 64 bit, and the Internet's IPv4 to IPv6.

    Bye,
    Skybuck.
    Skybuck Flying, Jul 30, 2005
    #1

  2. Skybuck Flying

    Michael Mair Guest

    Skybuck Flying wrote:
    > Hi,
    >
    > I think I might have just invented the variable bit cpu :)
    >
    > It works simply like this:
    >
    > Each "data bit" has a "meta data bit".
    >
    > The meta data bit describes if the bit is the ending bit of a possibly large
    > structure/field.
    >
    > The meta bits together form a sort of bit mask or bit pattern.
    >
    > For example the idea is best seen when putting the data bits
    > and meta bits below each other.
    >
    > data bits: 01110101110101101010101
    > meta bits: 00000000100010001100001
    >
    > In reality the data bit and meta bit are grouped together as a single entity
    > which can be read into the cpu since otherwise the cpu would not know where
    > to start reading the data or meta bits. It simply starts with the first
    > data + meta bit pair.
    >
    > Because a cpu might need to know the length of the bit field up front the
    > cpu/algorithm works simply as follows:
    >
    > The cpu starts reading data and meta bits until it reaches a meta bit of 1.
    >
    > All bits that form the variable bit field are now read and can be used etc.
    >
    > The above example then looks like this:
    >
    > data bits: 011101011#1010#1101#0#10101
    > meta bits: 000000001#0001#0001#1#00001
    >
    > (The # sign is to indicate to you where the variable bit fields are.)
    >
    > Notice how even single bit fields are possible.
    >
    > The reason for the variable bit cpu with variable bit software is to save
    > costs and to make computers/software even more powerful and useful ;)
    >
    > For example:
    >
    > Currently fixed-bit software has to be re-written or modified, re-compiled,
    > re-documented, re-distributed, re-installed, and re-configured when its
    > fixed-bit limit is reached and has to be increased, for example from 32 bit
    > to 64 bit etc.
    >
    > Examples are Windows XP 32 to 64 bit, and the Internet's IPv4 to IPv6.


    Why don't you just show us your C implementation and show us how this
    improves or countermands a good, portable (whatever that may mean)
    programming style.
    Otherwise, I fear you are rather off-topic round here and may go on to
    comp.programming or wherever appropriate.

    -Michael
    --
    E-Mail: Mine is an /at/ gmx /dot/ de address.
    Michael Mair, Jul 30, 2005
    #2

  3. Skybuck Flying

    Flash Gordon Guest

    Michael Mair wrote:
    > Skybuck Flying wrote:


    <snip rubbish>

    > Why don't you just show us your C implementation and show us how this
    > improves or countermands a good, portable (whatever that may mean)
    > programming style.
    > Otherwise, I fear you are rather off-topic round here and may go on to
    > comp.programming or wherever appropriate.


    I suggest you search the archives for other rubbish posted by skybuck.
    Then kill file it.
    --
    Flash Gordon
    Living in interesting times.
    Although my email address says spam, it is real and I read it.
    Flash Gordon, Jul 30, 2005
    #3
  4. Skybuck Flying

    Michael Mair Guest

    Flash Gordon wrote:
    > Michael Mair wrote:
    >
    >> Skybuck Flying wrote:

    >
    >
    > <snip rubbish>
    >
    >> Why don't you just show us your C implementation and show us how this
    >> improves or countermands a good, portable (whatever that may mean)
    >> programming style.
    >> Otherwise, I fear you are rather off-topic round here and may go on to
    >> comp.programming or wherever appropriate.

    >
    >
    > I suggest you search the archives for other rubbish posted by skybuck.
    > Then kill file it.


    Thanks for the suggestion -- Skybuck already has spent some time
    in my killfile but got out on probation. Guess this settles it.

    Cheers
    Michael
    --
    E-Mail: Mine is an /at/ gmx /dot/ de address.
    Michael Mair, Jul 30, 2005
    #4
  5. Skybuck Flying

    CBFalconer Guest

    Michael Mair wrote:
    > Flash Gordon wrote:
    >> Michael Mair wrote:
    >>> Skybuck Flying wrote:

    >>
    >> <snip rubbish>
    >>
    >>> Why don't you just show us your C implementation and show us
    >>> how this improves or countermands a good, portable (whatever
    >>> that may mean) programming style.
    >>> Otherwise, I fear you are rather off-topic round here and may
    >>> go on to comp.programming or wherever appropriate.

    >>
    >> I suggest you search the archives for other rubbish posted by
    > > skybuck. Then kill file it.

    >
    > Thanks for the suggestion -- Skybuck already has spent some time
    > in my killfile but got out on probation. Guess this settles it.


    Makes almost as much sense as Reaganomics. He even multi-posted
    it. He's gone.

    --
    "If you want to post a followup via groups.google.com, don't use
    the broken "Reply" link at the bottom of the article. Click on
    "show options" at the top of the article, then click on the
    "Reply" at the bottom of the article headers." - Keith Thompson
    CBFalconer, Jul 31, 2005
    #5
  6. "Michael Mair" <> wrote in message
    news:...
    > Skybuck Flying wrote:
    > > Hi,
    > >
    > > I think I might have just invented the variable bit cpu :)
    > >
    > > It works simply like this:
    > >
    > > Each "data bit" has a "meta data bit".
    > >
    > > The meta data bit describes if the bit is the ending bit of a
    > > possibly large structure/field.
    > >
    > > The meta bits together form a sort of bit mask or bit pattern.
    > >
    > > For example the idea is best seen when putting the data bits
    > > and meta bits below each other.
    > >
    > > data bits: 01110101110101101010101
    > > meta bits: 00000000100010001100001
    > >
    > > In reality the data bit and meta bit are grouped together as a single
    > > entity which can be read into the cpu, since otherwise the cpu would
    > > not know where to start reading the data or meta bits. It simply
    > > starts with the first data + meta bit pair.
    > >
    > > Because a cpu might need to know the length of the bit field up front,
    > > the cpu/algorithm works simply as follows:
    > >
    > > The cpu reads data and meta bits until it reaches a meta bit of 1.
    > >
    > > All bits that form the variable bit field are now read and can be
    > > used etc.
    > >
    > > The above example then looks like this:
    > >
    > > data bits: 011101011#1010#1101#0#10101
    > > meta bits: 000000001#0001#0001#1#00001
    > >
    > > (The # sign is to indicate to you where the variable bit fields are.)
    > >
    > > Notice how even single bit fields are possible.
    > >
    > > The reason for the variable bit cpu with variable bit software is to
    > > save costs and to make computers/software even more powerful and
    > > useful ;)
    > >
    > > For example:
    > >
    > > Currently fixed-bit software has to be re-written or modified,
    > > re-compiled, re-documented, re-distributed, re-installed, and
    > > re-configured when its fixed-bit limit is reached and has to be
    > > increased, for example from 32 bit to 64 bit etc.
    > >
    > > Examples are Windows XP 32 to 64 bit, and the Internet's IPv4 to IPv6.

    >
    > Why don't you just show us your C implementation and show us how this
    > improves or countermands a good, portable (whatever that may mean)
    > programming style.
    > Otherwise, I fear you are rather off-topic round here and may go on to
    > comp.programming or wherever appropriate.


    There is no implementation yet.

    Though you are free to think about this concept/idea and maybe find a way
    how to implement it in C ;)

    Bye,
    Skybuck.
    Skybuck Flying, Jul 31, 2005
    #6
  7. Skybuck Flying

    Malcolm Guest

    "Skybuck Flying" <> wrote
    >
    > I think I might have just invented the variable bit cpu :)
    >
    > It works simply like this:
    > Each "data bit" has a "meta data bit".
    >

    This is on topic for an electronic engineering, CPU design type newsgroup,
    but not on comp.lang.c.

    It sounds like an idea that might be worth exploring further, but I'm not a
    hardware person. You would double memory costs, but these days that is only
    a slight penalty. You would certainly be able to catch a lot of illegal
    operations at a very low level. What I couldn't tell you is what the effect
    would be on the speed and costs of building such a chip, which is why this
    ng isn't the best place to discuss this.

    However you might want to look at Fibonacci computers to see a related,
    though different, idea.
    Malcolm, Jul 31, 2005
    #7
  8. Yeah after some feedback a more efficient encoding could be used for large
    values/data.

    type field + type marker + length field + length marker + data

    For example:

    1 bit + 1 bit + 20 bits + 20 bits + data

    The first 4 fields all use the original idea of gaining flexible fields.

    Since the length is now known the remaining data does not need any markers.

    (Instead of meta bits I now call them markers, a term borrowed from the IBM
    1401 reference manual ;))

    The type field is to indicate the encoding type.

    Bye,
    Skybuck.

    "Malcolm" <> wrote in message
    news:dchqoq$f70$-infra.bt.com...
    >
    > "Skybuck Flying" <> wrote
    > >
    > > I think I might have just invented the variable bit cpu :)
    > >
    > > It works simply like this:
    > > Each "data bit" has a "meta data bit".
    > >

    > This is on topic for an electronic engineering, CPU design type newsgroup,
    > but not on comp.lang.c.
    >
    > It sounds like an idea that might be worth exploring further, but I'm
    > not a hardware person. You would double memory costs, but these days
    > that is only a slight penalty. You would certainly be able to catch a
    > lot of illegal operations at a very low level. What I couldn't tell you
    > is what the effect would be on the speed and costs of building such a
    > chip, which is why this ng isn't the best place to discuss this.
    >
    > However you might want to look at Fibonacci computers to see a related,
    > though different, idea.
    >
    >
    Skybuck Flying, Jul 31, 2005
    #8
  9. Skybuck Flying wrote:
    > Hi,
    >
    > I think I might have just invented the variable bit cpu :)


    Apart from being off-topic, you haven't invented it yet... You simply
    proposed a scheme that cannot necessarily be implemented - well, it
    might be, but I am not very sure that it would be helpful at all.

    > It works simply like this:
    >
    > Each "data bit" has a "meta data bit".
    >
    > The meta data bit describes if the bit is the ending bit of a possibly large
    > structure/field.
    >
    > The meta bits together form a sort of bit mask or bit pattern.
    >
    > For example the idea is best seen when putting the data bits
    > and meta bits below each other.
    >
    > data bits: 01110101110101101010101
    > meta bits: 00000000100010001100001


    You have already doubled the memory you are using.

    > In reality the data bit and meta bit are grouped together as a single entity
    > which can be read into the cpu since otherwise the cpu would not know where
    > to start reading the data or meta bits. It simply starts with the first
    > data + meta bit pair.


    The CPU does not read bits... If it has some kind of a cache, it moves
    memory blocks of many bytes from memory to the cache and several bytes
    from the cache to the registers (4 bytes if 32-bit, 8 bytes if 64-bit).

    How big would the registers be in this hypothetical machine of variable
    bits? And it would be better if called "variable word".

    And if you were to suggest getting rid of the cache, it would severely
    harm performance.

    > Because a cpu might need to know the length of the bit field up front the
    > cpu/algorithm works simply as follows:
    >
    > The cpu starts reading data and meta bits until it reaches a meta bit of 1.
    >
    > All bits that form the variable bit field are now read and can be used etc.
    >
    > The above example then looks like this:
    >
    > data bits: 011101011#1010#1101#0#10101
    > meta bits: 000000001#0001#0001#1#00001
    >
    > (The # sign is to indicate to you where the variable bit fields are.)
    >
    > Notice how even single bit fields are possible.
    >
    > The reason for the variable bit cpu with variable bit software is to save
    > costs and to make computers/software even more powerful and useful ;)


    You wouldn't accomplish all that; you'd just remove the fuss for the
    programmer of choosing whether to use int, char, long, long long etc.

    However, you could more easily write a profiler that would scan the
    program, estimate the range of values of every variable and assign the
    proper type to each variable. That would be a nice one...


    --
    one's freedom stops where other's begin

    Giannis Papadopoulos
    http://dop.users.uth.gr/
    University of Thessaly
    Computer & Communications Engineering dept.
    Giannis Papadopoulos, Jul 31, 2005
    #9
  10. Skybuck Flying

    Eric Sosman Guest

    Skybuck Flying wrote:
    >>
    >>Skybuck Flying wrote:
    >>>
    >>>I think I might have just invented the variable bit cpu :)

    >
    > There is no implementation yet.


    Such machines existed in the 1960's; I personally used
    one in 1966 and 1967. If you can show that your invention
    predated that era, you may be able to sue IBM for patent
    infringement and make yourself a very rich man. (Or woman.
    Or mollusc, or slime mold; whatever.)

    However tenuous the legal argument may be, the potential
    rewards are surely enormous. Were I in your enviable position,
    I would strain every sinew to wresting my pot of gold from
    plutocratic corporate Amerika, even if the effort left me no
    time to pollute Usenet with nonsense.

    --
    Eric Sosman
    lid
    Eric Sosman, Jul 31, 2005
    #10
  11. In article <dcgia6$l6r$1.ov.home.nl>,
    Skybuck Flying <> wrote:
    >I think I might have just invented the variable bit cpu :)


    >The reason for the variable bit cpu with variable bit software is to save
    >costs and to make computers/software even more powerful and useful ;)


    >For example:


    >Currently fixed-bit software has to be re-written or modified, re-compiled,
    >re-documented, re-distributed, re-installed, and re-configured when its
    >fixed-bit limit is reached and has to be increased, for example from 32 bit
    >to 64 bit etc.


    The implication of your example is that the software would dynamically
    expand fields as necessary to hold results -- e.g., if a result
    would overflow 16 bits, then use 17 (or 18 or whatever) instead.

    You weren't just thinking in terms of application programs that might
    have to deal with the OS deciding to use larger fields [file sizes, for
    example]: the example you gave was Windows -itself-. Thus, you aren't
    just thinking "Ah, Adobe Acrobat wouldn't have to be recompiled to go
    from Windows XP Pro to Windows XP Pro64, even though XP Pro64 had to be
    changed", you are thinking "There would be only one Windows XP Pro"
    and that implies that the system would be expected to adjust
    size fields according to the needs of calculations, not just
    that it would be able to work within upper-bound size limits
    handed down by other parts of the system that were [inherently]
    compiled with fixed sizes.

    So... dynamic sizing of results.


    Now, dynamic sizing of results has some... interesting... properties
    when it comes to C programs. One cannot, for example, allocate
    a fixed amount of storage for array entries, because the different
    entries might have different sizes, and a field that is (say) 3 bits
    wide now might suddenly become 115 bits wide; in a traditional
    linear-storage machine, that would require that all the other entries
    "scoot over" and that pointers somehow auto-adjust.... unless you
    want to spend most of your time just running through from the
    beginning of memory trying to figure out where the 29563'rd field starts
    (Repeat the calculation after every STORE that changes a field size...)

    This suggests that in order to use such a scheme, that an address /
    "pointer" would have to be a field number, and that *behind the scenes*
    the processor would be keeping track of where in the linear bit store
    the data really was. Memory fragmentation would be a way of life,
    but fortunately because -every- access would be indirect, the
    processor could pause and do de-fragmentation without affecting
    the addresses / pointers as known to the programs. Could present
    some interesting challenges for real-time programming...


    Dynamic sizing of results has some real semantic challenges.
    In the below x#y indicates data bitfield x with "marker" bits y.

    - Signed and unsigned additive operators can no longer be treated
    equivalently in internal logic. For example, in a standard 8 bit
    2's complement machine, 0xFF + 0x01 is 0x00 no matter whether the
    0xFF is signed or unsigned. In a variable-bit automatic-extension
    machine, signed 11111111#00000001 + 1#1 -> 0#1
    but unsigned 11111111#00000001 + 1#1 -> 100000000#000000001

    The compiler does, though, know the difference, and could generate
    appropriate instruction sequences, but I suspect that a lot of existing
    code relies upon 2's complement equivalences.

    - Quick, what is the bitwise "not" of 0#1 ?
    Is it 1#1 ?
    Is it 11111111#00000001 ?
    Is it 11111111111111111111111111111111#00000000000000000000000000000001 ?
    Is it a string of 1's that fills up all of available memory?

    The answer pretty much has to be that (~ unsigned 0#1) is
    "extensible" 1#1. It would be tempting to say that "extensible 1#1" is
    the same as "signed 1#1" -- but extensible unsigned 1#1 and signed 1#1
    have different properties for addition, as indicated above.

    In a system with dynamic sizing, what should
    1 + ~ (unsigned 0) *be* ?

    One cannot say that this is simply a degenerate case that will not
    come up in practice, as ~ and | and subtraction (and I do not mean xor)
    come up a fair bit in IP address / netmask / broadcast address
    calculations.


    - What do you do about floating point numbers, seeing as you want
    XP and XP64 (with different floating point limitations) to be the
    same binary image? Your thesis of not needing to recompile implies
    that code cannot assume 32 bit single-precision float, since the same code
    might, 2 or 3 hardware generations down the road, be expected to
    operate on 512 bit floating point *without any change or recompilation*.


    - In order to avoid locking in formats for values that could change
    in size, [v]printf() formats would generally have to be
    variable. %d vs %ld would generally have to disappear, in favour
    of a generalized integral format. (Fixed-width reports would take
    rather a severe hit...) But then what do you do if (for example)
    asked to print out the hex representation of that
    "extensible unsigned 1#1" ? How do you know when to stop extending
    it? The correct answer today on "64 bit hardware" might be the wrong
    answer for tomorrow on "256 bit hardware". Indeed, I would suggest
    to you that the "correct" answer would depend upon the individual
    reader of the output...
    --
    Look out, there are llamas!
    Walter Roberson, Jul 31, 2005
    #11
  12. "Giannis Papadopoulos" <> wrote in message
    news:dcicbk$5fe$...
    > Skybuck Flying wrote:
    > > Hi,
    > >
    > > I think I might have just invented the variable bit cpu :)

    >
    > Apart from the off-topic, you haven't invented it yet... You simply
    > proposed a scheme that cannot necessarily be implemented - well, it
    > might be, but I am not very sure that it would be helpful at all.


    Ok then I invented the scheme... yeeeeah for me :D

    >
    > > It works simply like this:
    > >
    > > Each "data bit" has a "meta data bit".
    > >
    > > The meta data bit describes if the bit is the ending bit of a
    > > possibly large structure/field.
    > >
    > > The meta bits together form a sort of bit mask or bit pattern.
    > >
    > > For example the idea is best seen when putting the data bits
    > > and meta bits below each other.
    > >
    > > data bits: 01110101110101101010101
    > > meta bits: 00000000100010001100001

    >
    > You have already doubled the memory you are using.


    Yes I did.

    Take a look at the new encoding though.

    header: encoding type + encoding markers + interleaved bit

    payload: length field + length markers + data

    Example:

    1 bit + 1 bit + 1 bit + 20 bits + 20 bits + 1 million bits

    Overhead is 43 bits for 1 million bits of data. Not bad ;)

    The header and the length fields are encoded using the original encoding.

    The data needs no encoding since the length is now known ;)

    Nice eh :)

    It still depends on the original encoding idea ;)

    So it's still the original encoding idea that gives it the infinite
    flexibility that it has =D, WOW ;)

    >
    > > In reality the data bit and meta bit are grouped together as a single
    > > entity which can be read into the cpu, since otherwise the cpu would
    > > not know where to start reading the data or meta bits. It simply
    > > starts with the first data + meta bit pair.

    >
    > The CPU does not read bits... If it has some kind of a cache, it moves
    > memory blocks of many bytes from memory to the cache and several bytes
    > from the cache to the registers (4 bytes if 32-bit, 8 bytes if 64-bit).
    >
    > How big would the registers be in this hypothetical machine of variable
    > bits? And it would be better if called "variable word".


    Registers would be variable as well and located in main memory (virtual
    registers).

    The CPU would mostly operate on main memory.

    Alternatively

    The CPU could use its embedded registers behind the scenes.

    That kind of CPU could use a sliding register to slide over the big fields
    in main memory.

    >
    > And if you were to suggest getting rid of the cache, it would severely
    > harm performance.


    A cache is an optimization technique; you are free to add whatever hardware
    optimizations you like.

    >
    > > Because a cpu might need to know the length of the bit field up front,
    > > the cpu/algorithm works simply as follows:
    > >
    > > The cpu reads data and meta bits until it reaches a meta bit of 1.
    > >
    > > All bits that form the variable bit field are now read and can be
    > > used etc.
    > >
    > > The above example then looks like this:
    > >
    > > data bits: 011101011#1010#1101#0#10101
    > > meta bits: 000000001#0001#0001#1#00001
    > >
    > > (The # sign is to indicate to you where the variable bit fields are.)
    > >
    > > Notice how even single bit fields are possible.
    > >
    > > The reason for the variable bit cpu with variable bit software is to
    > > save costs and to make computers/software even more powerful and
    > > useful ;)

    >
    > You wouldn't accomplish all that; you'd just remove the fuss for the
    > programmer of choosing whether to use int, char, long, long long etc.


    Exactly, the user is not affected by the shortsightedness of the
    programmer/designer.

    >
    > However, you could more easily write a profiler that would scan the
    > program, estimate the range of values of every variable and assign the
    > proper type to each variable. That would be a nice one...


    Look, what you're saying might be a nice solution for you, the programmer,
    but the user is fucked.

    Bye,
    Skybuck ;)

    >
    >
    > --
    > one's freedom stops where other's begin
    >
    > Giannis Papadopoulos
    > http://dop.users.uth.gr/
    > University of Thessaly
    > Computer & Communications Engineering dept.
    Skybuck Flying, Aug 1, 2005
    #12
  13. "Walter Roberson" <-cnrc.gc.ca> wrote in message
    news:dcj0g2$d7n$...
    > In article <dcgia6$l6r$1.ov.home.nl>,
    > Skybuck Flying <> wrote:
    > >I think I might have just invented the variable bit cpu :)

    >
    >The reason for the variable bit cpu with variable bit software is to save
    >costs and to make computers/software even more powerful and useful ;)

    >
    > >For example:

    >
    >Currently fixed-bit software has to be re-written or modified,
    >re-compiled, re-documented, re-distributed, re-installed, and
    >re-configured when its fixed-bit limit is reached and has to be
    >increased, for example from 32 bit to 64 bit etc.

    >
    > The implication of your example is that the software would dynamically
    > expand fields as necessary to hold results -- e.g., if a result
    > would overflow 16 bits, then use 17 (or 18 or whatever) instead.


    Correct.

    > You weren't just thinking in terms of application programs that might
    > have to deal with the OS deciding to use larger fields [file sizes, for
    > example]: the example you gave was Windows -itself-. Thus, you aren't
    > just thinking "Ah, Adobe Acrobat wouldn't have to be recompiled to go
    > from Windows XP Pro to Windows XP Pro64, even though XP Pro64 had to be
    > changed", you are thinking "There would be only one Windows XP Pro"
    > and that implies that the system would be expected to adjust
    > size fields according to the needs of calculations, not just
    > that it would be able to work within upper-bound size limits
    > handed down by other parts of the system that were [inherently]
    > compiled with fixed sizes.
    >
    > So... dynamic sizing of results.


    Correct.

    >
    >
    > Now, dynamic sizing of results has some... interesting... properties
    > when it comes to C programs. One cannot, for example, allocate
    > a fixed amount of storage for array entries, because the different
    > entries might have different sizes, and a field that is (say) 3 bits
    > wide now might suddenly become 115 bits wide; in a traditional
    > linear-storage machine, that would require that all the other entries
    > "scoot over" and that pointers somehow auto-adjust.... unless you
    > want to spend most of your time just running through from the
    > beginning of memory trying to figure out where the 29563'rd field starts
    > (Repeat the calculation after every STORE that changes a field size...)


    .NET does something similar, shifting over memory etc.; it's not too
    shabby ;)

    > This suggests that in order to use such a scheme, that an address /
    > "pointer" would have to be a field number, and that *behind the scenes*
    > the processor would be keeping track of where in the linear bit store
    > the data really was. Memory fragmentation would be a way of life,
    > but fortunately because -every- access would be indirect, the
    > processor could pause and do de-fragmentation without affecting
    > the addresses / pointers as known to the programs. Could present
    > some interesting challenges for real-time programming...


    Nice idea ;) Indeed a little gremlin is behind the scenes taking care of
    things ;) and I don't mean killing :D

    Personally I don't like gremlins.

    But it could be too much to ask of programmers to fix the system when
    fragmentation becomes high or when memory is low etc...

    So the damn little gremlin can take care of it :)

    >
    > Dynamic sizing of results has some real semantic challenges.
    > In the below x#y indicates data bitfield x with "marker" bits y.
    >
    > - Signed and unsigned additive operators can no longer be treated
    > equivalently in internal logic. For example, in a standard 8 bit
    > 2's complement machine, 0xFF + 0x01 is 0x00 no matter whether the
    > 0xFF is signed or unsigned. In a variable-bit automatic-extension
    > machine, signed 11111111#00000001 + 1#1 -> 0#1
    > but unsigned 11111111#00000001 + 1#1 -> 100000000#000000001


    So far I have only thought about positive stuff ;)

    I never liked the idea of using half the bits for negative and half the bits
    for positive in an integer, while the same amount of bits is used for full
    positive in an unsigned integer.

    Both examples look exactly the same and I don't like that because it's
    confusing.

    So I rather see that the first bit is used to indicate positive or negative.

    Positive would be a leading 1
    Negative would be a leading zero

    But since I also want extra space for performance reasons (avoiding
    fragmentation)

    I think I would add a special bit field to indicate positive and negative
    values :) saves me a whole lot of shit :)

    sign bit ;)

    For data fields this bit is irrelevant and could maybe be used for other
    purposes ;)

    Maybe I am just gonna add a data type field as well... I kinda like the idea
    of the processor being more aware of what is flowing through it ;)

    So it could look as follows:

    data type + data

    0 is unknown/general data
    1 is numerical data

    here a branch takes place
    seeing a 1 then next field would be:

    the sign bit
    0 negative
    1 positive

    following that are the numerical bits.

    Because fragmentation can be a real problem etc. I am going to allow the
    following :)

    000000000000000#000000000000001

    and also
    000000101011101#000000000000001

    So any number of leading zeros can be applied as long as the "marker mask"
    is of equal length.

    So this format exploits the knowledge of the smart programmer.

    The smart programmer can still specify how many bits he thinks he needs
    for each field.

    However the programmer should always keep in the back of his mind that
    these fields are allowed to grow...

    As soon as the field looks like this:

    1111#0001

    and +1 is added, it will have to grow to:

    10000#00001
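This growth rule can be sketched in C. This is only an illustration of the scheme described in the thread (the helper names are made up, and bits are stored least-significant first here for simplicity): a field is a sequence of (data, meta) bit pairs, and the pair whose meta bit is 1 ends the field.

```c
#include <assert.h>
#include <stdint.h>

/* Encode an unsigned value as (data, meta) bit pairs, one bit per pair,
 * least-significant bit first. The final pair gets meta = 1 to mark the
 * end of the field. Returns the number of pairs written. */
static int encode_field(uint64_t value, uint8_t data[], uint8_t meta[])
{
    int n = 0;
    do {
        data[n] = (uint8_t)(value & 1u);
        meta[n] = 0;
        value >>= 1;
        n++;
    } while (value != 0);
    meta[n - 1] = 1;   /* mark the last bit of the field */
    return n;
}

/* Decode: keep reading pairs until a meta bit of 1 is seen. */
static uint64_t decode_field(const uint8_t data[], const uint8_t meta[],
                             int *consumed)
{
    uint64_t value = 0;
    int i = 0;
    do {
        value |= (uint64_t)data[i] << i;
    } while (meta[i++] == 0);
    *consumed = i;
    return value;
}
```

Encoding 15 (1111) takes 4 pairs; adding 1 gives 16 (10000), which forces the field to grow to 5 pairs, exactly as in the example above.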

    Compilers could be smart and reserve some space between the fields so that
    fields can grow in both directions.

    For example the compiler might place this 4 bit field like this:

    1010101#0000001#00000000000#1111#0001#000000000000000000

    Then when the field needs to grow it looks like:

    1010101#0000001#0000000000#10000#00001#00000000000000000

    There should also be directives and maybe even routines to specify that this
    is unwanted behaviour for some cases
    where the bits need to be packed.

    For example network communications could be slow, and things can be sped
    up by removing the extra bits. (packing)

    1. This option implies special routines which can be called by the
    programmer/program at any time, for example to pack a structure ;)
    right before it is sent or stored etc... However if speed or space is not
    important, all the extra bits in between can be stored as well... though
    this could be risky if there are lots of them in between ;) :) though
    since it would be part of a structure this would be unlikely ;) and if it
    does happen, too bad... pack it ;) :)
    The program could be fed extra information about the structure, for
    example:
    The content size of the structure (counting only the content bits)
    The real size of the structure (counting content and extra bits in between)

    The program can then decide that if the percentage of extra bits is
    low... whatever low means ;) it can simply write the extra bits in between
    as well, to allow future growth directly in the file... and to save cpu
    processing etc.

    2. Or by simply making sure that the fields are packed from the start.
    (packed)

    The last option implies that stuff needs to be shifted/moved around
    etc... this means a directive that the structure should be packed at all
    times.

    >
    > The compiler does, though, know the difference, and could generate
    > appropriate instruction sequences, but I suspect that a lot of existing
    > code relies upon 2's complement equivalences.
    >
    > - Quick, what is the bitwise "not" of 0#1 ?
    > Is it 1#1 ?
    > Is it 11111111#00000001 ?
    > Is it 11111111111111111111111111111111#00000000000000000000000000000001 ?
    > Is it a string of 1's that fills up all of available memory?
    >
    > The answer pretty much has to be that (~ unsigned 0#1) is
    > "extensible" 1#1. It would be tempting to say that "extensible 1#1" is
    > the same as "signed 1#1" -- but extensible unsigned 1#1 and signed 1#1
    > have different properties for addition, as indicated above.
    >
    > In a system with dynamic sizing, what should
    > 1 + ~ (unsigned 0) *be* ?


    Ok I'll interpret this as:

    positive 1 plus the reverse of zero

    since I have added a sign bit... positive zero... will turn to negative zero
    and negative zero will turn to positive zero...

    For the cpu it's all the same.. since zero is zero ;) at least for us humans
    and that's how the cpu will work too ;) unless somebody has any objections ?
    :):):)

    so we humans do +1 + - 0 = 1
    or
    so we humans do +1 + + 0 = 1

    >
    > One cannot say that this is simply a degenerate case that will not
    > come up in practice, as ~ and | and subtraction (and I do not mean xor)
    > come up a fair bit in IP address / netmask / broadcast address
    > calculations.


    I think you mean the same question but with a different operator?

    Like

    1 or (unsigned 0) ? = 1 ;)

    I see absolutely no problems at all at this point....

    Since we are designing a new cpu we can build in any special handling
    where needed...

    Maybe we'll get problems with multiplication or division or something
    like that...

    But so far I see no problems at all ;) :)

    >
    > - What do you do about floating point numbers, seeing as you want
    > XP and XP64 (with different floating point limitations) to be the
    > same binary image? Your thesis of not needing to recompile implies
    > that code cannot assume 32 bit single-precision float, since the same code
    > might, 2 or 3 hardware generations down the road, be expected to
    > operate on 512 bit floating point *without any change or recompilation*.


    Floating point numbers are a big no-no, since they are imperfect and fixed.

    So floating point numbers go out the window ;)

    No floating point numbers for me :)

    Instead the software can use special routines which perform calculations
    with stuff like 1 / 8 + 2 / 8 = 3 / 8 etc...

    Maybe this special kind of software/calculation can be built into the cpu.

    I am not going to infect my cpu with imperfection :) floating points are
    imperfection ;) :):):):):)

    However I haven't thought yet about how to handle floating point like
    stuff user friendly etc... with dots in them etc...
    like 56.89

    Maybe this will just be translated into 56 + 89 / 100

    56 is stored in memory, and 89 and 100 are stored in memory.

    I don't know the English word for this kind of thing... like a fraction ?
    or broken number ? I don't know ;)

    Anyway since the numbers can now be infinite in theory they can be very
    large in practice as well.

    So huuuugggeee floating point like stuff would be possible :)

    It would look like a floating point but it's not really a floating point :)

    At least not the floating point format...

    The floating point format will be something special which also uses the
    variable bit fields ;)

    Hmm, let's see...

    Maybe something like this:

    # whole number # marker # broken number # marker # thousands # marker ;)

    with a data type and a sign type in front of them.

    0 was unknown data type
    1 was numerical data type... maybe I should call this integer type after all
    ;)
    2 will be floating point data type :)
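The 1/8 + 2/8 = 3/8 idea above maps onto exact rational arithmetic. A minimal C sketch (illustrative names, no overflow handling), which also represents 56.89 as 56 + 89/100:

```c
#include <assert.h>

/* A "broken number" (rational): an exact numerator/denominator pair. */
typedef struct { long num, den; } rational;

/* Greatest common divisor, used to reduce results to lowest terms. */
static long gcd_l(long a, long b)
{
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

/* Exact addition: a/b + c/d = (a*d + c*b) / (b*d), then reduce. */
static rational rat_add(rational a, rational b)
{
    rational r = { a.num * b.den + b.num * a.den, a.den * b.den };
    long g = gcd_l(r.num, r.den);
    if (g != 0) { r.num /= g; r.den /= g; }
    return r;
}
```

With this, {1,8} + {2,8} reduces to {3,8} exactly, and 56.89 can be held as {56,1} + {89,100} = {5689,100} with no rounding.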

    >
    > - In order to avoid locking in formats for values that could change
    > in size, [v]printf() formats would generally have to be
    > variable. %d vs %ld would generally have to disappear, in favour
    > of a generalized integral format. (Fixed-width reports would take
    > rather a severe hit...) But then what do you do if (for example)
    > asked to print out the hex representation of that
    > "extensible unsigned 1#1" ? How do you know when to stop extending
    > it? The correct answer today on "64 bit hardware" might be the wrong
    > answer for tomorrow on "256 bit hardware". Indeed, I would suggest
    > to you that the "correct" answer would depend upon the individual
    > reader of the output...


    Euhm the above stuff about printf etc is too c library like specific, you
    lost me ;)

    Thanks for your post, it forced me to think about a lot of new stuff which I
    hadn't thought about before...

    I think I have given you some great answers...

    Now it's time for others to analyze it to find any flaws in them :D

    Bye,
    Skybuck =D
    Skybuck Flying, Aug 1, 2005
    #13
  14. "Skybuck Flying" <> writes:
    [...]
    > Euhm the above stuff about printf etc is too c library like specific, you
    > lost me ;)


    Then why are you posting in comp.lang.c?

    I don't expect "Skybuck Flying" to pay any attention, but to everyone
    else:

    +-------------------+
    | PLEASE DO NOT     |
    | FEED THE TROLLS   |
    |                   |
    | Thank you,        |
    | Management        |
    +-------------------+

    (followed by ASCII art of an owl perched beside the sign, signed "jgs";
    the art does not survive reformatting)

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Aug 1, 2005
    #14
  15. Skybuck Flying

    Richard Bos Guest

    Eric Sosman <> wrote:

    > Skybuck Flying wrote:


    > > There is no implementation yet.

    >
    > Such machines existed in the 1960's; I personally used
    > one in 1966 and 1967. If you can show that your invention
    > predated that era, you may be able to sue IBM for patent
    > infringement and make yourself a very rich man. (Or woman.
    > Or mollusc, or slime mold; whatever.)


    Hey, don't insult slime molds! They provide a great service to the
    hungry hacker, unlike the OP.

    Richard
    Richard Bos, Aug 1, 2005
    #15
  16. Skybuck Flying

    Grumble Guest

    Skybuck Flying wrote:
    > I think I might have just invented the variable bit cpu :)


    Then you would probably want to discuss it in comp.arch

    (comp.arch stands for computer architecture.)
    Grumble, Aug 1, 2005
    #16
  17. Skybuck Flying

    Grumble Guest

    Grumble wrote:

    > Skybuck Flying wrote:
    >
    >> I think I might have just invented the variable bit cpu :)

    >
    > Then you would probably want to discuss it in comp.arch
    >
    > (comp.arch stands for computer architecture.)


    I see you multi-posted... (No cookie for you.)

    Newsgroups: comp.arch
    Subject: The variable bit cpu
    Date: Sat, 30 Jul 2005 20:54:02 +0200

    Newsgroups: comp.lang.c
    Subject: The variable bit cpu
    Date: Sat, 30 Jul 2005 20:54:26 +0200

    If you *must* post to several groups, then you should cross-post,
    and set the followup-to field to the more appropriate group(s).
    Grumble, Aug 1, 2005
    #17
  18. In article <dck9lt$ebk$1.ov.home.nl>,
    Skybuck Flying <> wrote:

    >"Walter Roberson" <-cnrc.gc.ca> wrote in message
    >news:dcj0g2$d7n$...


    >> 1 + ~ (unsigned 0) *be* ?


    >Ok I ll interpret this as:


    >positive 1 plus the reverse of zero


    This is comp.lang.c . ~ is the "bitwise not" operator. It doesn't
    mean "reverse", and it doesn't mean "negate", it means "turn every
    0 into a 1, and every 1 into a 0." This is sometimes called
    "1's complement".

    >since I have added a sign bit... positive zero... will turn to negative zero
    >and negative zero will turn to positive zero...


    No, that would be the - operator, not the ~ operator.


    >> One cannot say that this is simply a degenerate case that will not
    >> come up in practice, as ~ and | and subtraction (and I do not mean xor)
    >> come up a fair bit in IP address / netmask / broadcast address
    >> calculations.


    >I think you mean same question but different operator ?


    >Like


    >1 or (unsigned 0) ? = 1 ;)


    ~ and | and & are specific operators in C -- bitwise-not, bitwise-or,
    and bitwise-and.

    If you have a 32 bit IP address, A, and a 32 bit netmask, M, then
    the broadcast address can be calculated as (A | (~M)). For example,
    for 192.168.5.18 with netmask 255.255.255.0, then that's
    0xc0a80512 | (~0xffffff00) which is
    0xc0a80512 | 0x000000ff which is
    0xc0a805ff also known as 192.168.5.255
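In standard fixed-width C, that calculation is simply:

```c
#include <assert.h>
#include <stdint.h>

/* Broadcast address for a 32-bit IPv4 address and netmask:
 * broadcast = address | ~mask, i.e. all host bits forced to 1. */
static uint32_t broadcast_addr(uint32_t addr, uint32_t mask)
{
    return addr | ~mask;
}
```

Here the fixed 32-bit width makes `~mask` unambiguous; the point of the paragraphs below is that a variable-width machine loses exactly that property.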

    Now, on your variable bit machine, start by taking the bitwise-not
    of 0xffffff00 . The result will -end- in 0x000000ff, but the
    presumption of your design is that the width of variables is not fixed
    because later data might come along which needs more space. For example,
    the algorithm might be trying to cope with an IPv6 address, which is
    considerably wider. Consider that that netmask that is 255.255.255.0
    (0xffffff00) now might, in a later version of IP, really be
    0.0.0.0.255.255.255.0 -- so the "right" bitwise-not for it would then be
    0xffffffff000000ff . But then the same program might be used for an
    even later IP version in which network masks are 128 bits long, so
    the "right" bitwise-not might be 0xffffffffffffffffffffffff000000ff
    and the "right" broadcast address might be
    0xffffffffffffffffffffffffc0a805ff

    At the time the machine is doing the bitwise not, the only thing
    it knows is that the -source- value was 32 bits wide, 0xffffff00,
    but as you are positing dynamic expansion of values as needed,
    the machine cannot know whether the right result for the bitwise-not
    is 0x000000ff or 0xffffffff000000ff or 0xffffffffffffffffffffffff000000ff
    or something else. Thus as far as the desired semantics of the machine
    are concerned, because any input value is "an indefinite number of 0's
    followed by a certain definite number of bits", the output of the
    bitwise-not must be "an indefinite number of 1's followed by a
    certain definite number of bits". The program can always deliberately
    throw away those indefinite 1's later, if that's what it is sure
    the algorithm calls for, but the machine semantics must include it
    in the value representation -- because if it stops the expansion
    at any particular point, then the program cannot handle any wider field.
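One way to picture these semantics in C (a hypothetical sketch, not any real machine's representation): a value carries a definite low part plus a fill bit that notionally repeats forever above it. Bitwise-not flips both, and the program must later choose a width to truncate to.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint32_t low;  /* the definite low 32 bits */
    int fill;      /* 0 or 1: the bit that notionally repeats above them */
} ext;

/* Bitwise-not flips the definite bits AND the indefinite fill bit. */
static ext ext_not(ext v)
{
    ext r = { ~v.low, !v.fill };
    return r;
}

/* Only when the program picks a width can the value be materialized. */
static uint64_t ext_trunc(ext v, int bits)
{
    uint64_t fill = v.fill ? ~UINT64_C(0) : 0;
    uint64_t r = (fill << 32) | v.low;
    return bits >= 64 ? r : (r & ((UINT64_C(1) << bits) - 1));
}
```

So `~0xffffff00` becomes "indefinite 1s followed by 0x000000ff", and whether that reads back as 0x000000ff or 0xffffffff000000ff is decided only at truncation time, which is exactly the ambiguity described above.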


    The only alternative to this is that there must be another parameter
    to each low-level instruction, with the parameter indicating the
    limit on the number of bits to produce. This parameter must not
    (in keeping with your desired expansion-proof semantics) be a constant.
    There is the question of how the program would calculate that number
    in the first place... For example, if the user enters an IP address
    of 192.168.5.18.49.37 then how is the program to decide whether
    the user entered an invalid IP address or entered a "new improved"
    IP address that needs more space?



    >like 56.89


    >Maybe this will just be translated into 56 + 89 / 100


    >56 is stored in memory and 89 and 100 is stored in memory.


    >I don't know the english word for this kind of thing... like a fraction ? or
    >broken number ? I dont know ;)


    "rational number". Which is useful for some things but not for others.
    For example, the great majority of expressions to do with Pi, logarithms,
    trigonometry, and fractional exponentials (e.g. square root),
    all produce irrational numbers. There are "closed" "algebraic forms"
    for some of the irrational numbers, but others of them
    ("transcendental numbers") have no -possible- "closed algebraic"
    expression, so there is simply no way to accurately represent those
    values using any finite number of rational numbers.


    >> - In order to avoid locking in formats for values that could change
    >> in size, [v]printf() formats would generally have to be
    >> variable. %d vs %ld would generally have to disappear, in favour
    >> of a generalized integral format.


    >Euhm the above stuff about printf etc is too c library like specific, you
    >lost me ;)


    This is comp.lang.c . If you wish to discuss the proposed machine
    here, you have to be prepared to discuss how the machine would
    implement fundamental C semantics; otherwise, the discussion is
    Off Topic.
    --
    "Who Leads?" / "The men who must... driven men, compelled men."
    "Freak men."
    "You're all freaks, sir. But you always have been freaks.
    Life is a freak. That's its hope and glory." -- Alfred Bester, TSMD
    Walter Roberson, Aug 1, 2005
    #18
  19. "Eric Sosman" <> wrote in message
    news:...
    > Skybuck Flying wrote:
    > >>
    > >>Skybuck Flying wrote:
    > >>>
    > >>>I think I might have just invented the variable bit cpu :)

    > >
    > > There is no implementation yet.

    >
    > Such machines existed in the 1960's; I personally used
    > one in 1966 and 1967. If you can show that your invention
    > predated that era, you may be able to sue IBM for patent
    > infringement and make yourself a very rich man. (Or woman.
    > Or mollusc, or slime mold; whatever.)


    These machines used a similar concept but a different implementation.

    They used 6 bits for a BCD character plus extra bits per character to
    identify field/word boundaries... (something like that)

    However they were limited in the amount of memory they could address
    because of limited sized registers.

    So while similar in concept, the implementation is different.

    The IBM machines probably used 1 bit to identify where "bytes" started
    and ended.

    My idea is about identifying where "bits" start and end :) and it uses
    more bits to do that ;)

    > However tenuous the legal argument may be, the potential
    > rewards are surely enormous. Were I in your enviable position,
    > I would strain every sinew to wresting my pot of gold from
    > plutocratic corporate Amerika, even if the effort left me no
    > time to pollute Usenet with nonsense.


    Oh well, suppose intel/amd implements this idea lol... if my claim might
    hold up in court I could always patent it and demand compensation or sue
    them later lol :D

    Since in the USA it's possible to patent an idea after publication ;)

    Bye,
    Skybuck.
    Skybuck Flying, Aug 1, 2005
    #19
  20. Skybuck Flying wrote:
    Lots and lots of lines full of nonsense and smileys and... bad taste
    in general.
    <snip everything, absurd discussion about an impractical idea about a
    "variable bit CPU">

    I'd like to make a couple of comments:

    1. Smileys are not a mark for end of paragraph, as you seem to believe.
    Your sentences were not funny, and obviously not intended as jokes, so
    please stop using them after every sentence.

    2. This group is not appropriate for this discussion.

    3. If you're going to troll... show some style. There are trolls, and
    then there are boring trolls like you.
    Antonio Contreras, Aug 2, 2005
    #20