An implementation where sizeof(short int) does not divide sizeof(int)

Keith Thompson

Dik T. Winter said:
How about where sizeof int is not evenly divisible by sizeof char?
CDC Cyber UTexas C compiler
[snip]
That is possible (but gives problems with floats that are also 60 bits).
But on those machines it is much more reasonable to have 48 bit ints
with the remaining 16 bits (possibly) garbage. But integer arithmetic
on those machines had quite a few strange points. That is, I could
easily construct examples where:
(a + a) * 2 != a + a + a + a
in integer arithmetic, even when a < 2**48.

I'm having trouble imagining that that's anything other than WRONG
(assuming there's no overflow).
 
user923005

Keith said:
Dik T. Winter said:
How about where sizeof int is not evenly divisible by sizeof char?
CDC Cyber UTexas C compiler
[snip]
That is possible (but gives problems with floats that are also 60 bits).
But on those machines it is much more reasonable to have 48 bit ints
with the remaining 16 bits (possibly) garbage. But integer arithmetic
on those machines had quite a few strange points. That is, I could
easily construct examples where:
(a + a) * 2 != a + a + a + a
in integer arithmetic, even when a < 2**48.

I'm having trouble imagining that that's anything other than WRONG
(assuming there's no overflow).

This URL will need a bit of unwrapping. Read from here down:
http://groups.google.com/group/comp...6bbc874b5db?tvc=1&hl=en&#doc_593da52104d0ae03
 
Keith Thompson

user923005 said:
Keith said:
Dik T. Winter said:
How about where sizeof int is not evenly divisible by sizeof char?
CDC Cyber UTexas C compiler [snip]
That is possible (but gives problems with floats that are also 60 bits).
But on those machines it is much more reasonable to have 48 bit ints
with the remaining 16 bits (possibly) garbage. But integer arithmetic
on those machines had quite a few strange points. That is, I could
easily construct examples where:
(a + a) * 2 != a + a + a + a
in integer arithmetic, even when a < 2**48.

I'm having trouble imagining that that's anything other than WRONG
(assuming there's no overflow).

This URL will need a bit of unwrapping. Read from here down:
http://groups.google.com/group/comp...6bbc874b5db?tvc=1&hl=en&#doc_593da52104d0ae03

Or <http://preview.tinyurl.com/ykcnee>.

But that thread talks about floating-point arithmetic.
 
Jean-Marc Bourguet

Keith Thompson said:
sizeof(int) is divisible by sizeof(char) in any conforming C
implementation; sizeof(char) is 1 by definition.

Implementing C on a system with 8-bit characters and 60-bit integers
would be, um, interesting.

KCC, a C compiler for the PDP-10, had an option to use 7-bit chars
instead of the standard-conforming default of 9-bit chars (the PDP-10
is a 36-bit computer). This was probably to help interoperability, as
chars on the PDP-10 were usually packed 5 per 36-bit word with an
unused bit. I've never used this option, so I can't answer any of the
numerous questions that it raises.

Yours,
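The guarantees Keith cites above (sizeof(char) == 1, and every object
size measured in chars) can be checked at compile time. A minimal
sketch, assuming a C11 compiler for _Static_assert:

```c
#include <limits.h>

/* Compile-time checks of the guarantees quoted above (C11 syntax).
   sizeof(char) is 1 by definition, and every object's size is a
   whole number of chars, so sizeof(int) is trivially a multiple of
   sizeof(char) on any conforming implementation. */
_Static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");
_Static_assert(CHAR_BIT >= 8, "a char has at least 8 bits");
_Static_assert(sizeof(int) % sizeof(char) == 0,
               "trivially true, since sizeof(char) == 1");
```

None of this constrains CHAR_BIT itself, which is exactly where the
9-bit and 12-bit machines in this thread differ.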
 
Dik T. Winter

> Dik T. Winter said:
> > We shifted to a CDC Cyber around 1976, or perhaps earlier. The first time
> > I even heard about the UTexas C compiler was about 10 years later. That was
> > when the computing centre asked us whether it should be available as a
> > standard compiler kit, I do not think it was much older. My answer to
> > the question was: *no*. The reason was that it was not suitable for
> > interactive work (the compiler required too much memory).
> > [...]
>
> Out of curiosity: how much memory was considered too much for a
> compiler to consume on that machine at the time?

The machine had 131,072 words of memory for over 100 simultaneous
interactive users. The limit for an interactive session was
28,672 words. If I remember well, the C compiler needed 40,960
words.
 
Dik T. Winter

> user923005 wrote: ....
>
> I suspect you are confusing CHAR_BIT with sizeof char.

I have mused a bit about it. I now think that the UTexas C compiler
used 12-bit chars (one of the possible variants on that system).
 
Dik T. Winter

> Dik T. Winter said:
> > In article <[email protected]>
> > > How about where sizeof int is not evenly divisible by sizeof char?
> > > CDC Cyber UTexas C compiler
> [snip]
> > That is possible (but gives problems with floats that are also 60 bits).
> > But on those machines it is much more reasonable to have 48 bit ints
> > with the remaining 16 bits (possibly) garbage. But integer arithmetic
> > on those machines had quite a few strange points. That is, I could
> > easily construct examples where:
> > (a + a) * 2 != a + a + a + a
> > in integer arithmetic, even when a < 2**48.
>
> I'm having trouble imagining that that's anything other than WRONG
> (assuming there's no overflow).

If the compiler tells you an int is 60 bits, there is no overflow, and
it is indeed wrong. If the compiler tells you an int is 48 bits, there
is overflow, and it is not wrong. So it is much more reasonable to say
that ints are 48 bits. (The reason behind it is that the multiply
instruction also serves as the instruction to calculate the second
half of the product of two floating-point numbers; what is actually
done depends on the operands.)
 
Richard Bos

user923005 said:
I wonder if it was written between 1969 (~ earliest origins of C) and
1978 (K&R I published), in which case K&R I does not apply.

I have here PDFs of texts (from the look of it drafts of K&R I) by
Messrs Kernighan and Ritchie, apparently from 1974, which seem to claim
that in those days, a char was one byte large, and sizeof reported in
bytes. OTOH, the same documents also indicate that char used strict
ASCII, and only actually used 7 bits out of those 8-bit bytes. Draw your
own conclusion; I punt.

Richard
 
Giorgio Silvestri

christian.bau said:
What about...

An implementation where long has 40 value bits... (TI DSPs)


If you are thinking about "TI DSP C6X family":

WIDTH(int) = 32
WIDTH(long) = 40

but

sizeof (int) = 4
sizeof (long) = 8

In general, "WIDTH(T1) does not divide WIDTH(T2)"

does not imply

"sizeof (T1) does not divide sizeof (T2)".
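Taking Giorgio's C6X figures as given, the point reduces to simple
arithmetic: 32 does not divide 40, yet 4 divides 8. A small sketch
using those quoted figures as constants (they are the post's numbers,
not something verified here):

```c
/* The TI C6X figures quoted above, taken as given:
   WIDTH(int) = 32, WIDTH(long) = 40, sizeof(int) = 4,
   sizeof(long) = 8.  Widths failing to divide does not stop the
   sizeofs from dividing. */
static const int width_int = 32, width_long = 40;
static const int size_int  = 4,  size_long  = 8;

int widths_divide(void)  { return width_long % width_int == 0; }
int sizeofs_divide(void) { return size_long % size_int == 0; }
```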
 
Spiros Bousbouras

Giorgio said:
If you are thinking about "TI DSP C6X family":

WIDTH(int) = 32
WIDTH(long) = 40

but

sizeof (int) = 4
sizeof (long) = 8

In general, "WIDTH(T1) does not divide WIDTH(T2)"

does not imply

"sizeof (T1) does not divide sizeof (T2)".

By width do you mean the number of value bits?
 
Spiros Bousbouras

Peter said:
But why are you concerned about size?

I had the following problem in mind:

Assume you have a function unsigned long rnd(void)
which returns pseudorandom numbers uniformly
distributed in the range covered by unsigned long.
(I was under the mistaken impression that rand()
returns an unsigned long). I was also willing to assume
that I'm working on a platform where there are no padding
bits in the representation of unsigned long (long).
The problem was to construct a function which
returns uniformly distributed pseudorandom values
in the range covered by unsigned long long.

#define LL sizeof(unsigned long long)
#define L sizeof(unsigned long)

If LL == 2*L then the solution to the problem is

return (unsigned long long) rnd() << L*CHAR_BIT | rnd() ;

If LL == m*L where m is a positive integer then
you'll need a loop. But if L does not divide LL then
it gets slightly messier. I had actually thought of
how to do it portably with various #if's to cover the
various cases so that only the necessary code gets generated
for each platform but since rand() does not return
unsigned long it seems pointless to write the code.

By the way, if you don't assume that the representation
of unsigned long (long) only has value bits then it gets
really messy.
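The loop Spiros alludes to can be sketched portably by counting value
bits instead of using sizeof, which sidesteps both the divisibility
and the padding-bit worries. rnd() below is a stand-in stub for the
uniform unsigned long generator the post assumes; it returns a fixed
value only so the composition is easy to trace:

```c
#include <limits.h>

/* Stand-in for the rnd() the post assumes: a uniform unsigned long
   source.  A fixed value is used here so the composition is easy to
   follow; a real generator would go in its place. */
static unsigned long rnd(void)
{
    return 0x12345678UL;
}

/* Compose an unsigned long long from repeated rnd() calls.  The
   widths are counted from ULONG_MAX/ULLONG_MAX directly, so neither
   sizeof divisibility nor padding bits matter. */
unsigned long long rndull(void)
{
    unsigned long lm = ULONG_MAX;
    unsigned long long um = ULLONG_MAX, r = 0;
    unsigned lbits = 0, ubits = 0, got;

    while (lm) { lm >>= 1; lbits++; }   /* width of unsigned long */
    while (um) { um >>= 1; ubits++; }   /* width of unsigned long long */

    for (got = 0; got < ubits; got += lbits) {
        unsigned long long chunk = rnd();
        if (got == 0) {
            r = chunk;
        } else {
            /* take only as many fresh bits as are still needed, so
               the shift count never reaches the type's width */
            unsigned s = ubits - got < lbits ? ubits - got : lbits;
            r = (r << s) | (chunk & (((unsigned long long)1 << s) - 1));
        }
    }
    return r;
}
```

When unsigned long is 32 bits and unsigned long long is 64, this
reduces to the (unsigned long long) rnd() << 32 | rnd() form given
above.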
 
Guest

Spiros said:
By width do you mean the number of value bits ?

The width of an integer type is the number of non-padding bits. This
includes the value bits as well as (for signed types) the sign bit.
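That width can be measured directly for an unsigned type by shifting
its maximum value right until it reaches zero (unsigned types have no
sign bit, so every non-padding bit is a value bit). A minimal sketch:

```c
#include <limits.h>

/* Width of unsigned long, computed from its maximum value rather
   than from sizeof.  On common platforms this equals
   sizeof(unsigned long) * CHAR_BIT, but on implementations with
   padding bits (like the 40-bit longs discussed above) the two
   differ. */
unsigned ulong_width(void)
{
    unsigned long v = ULONG_MAX;
    unsigned bits = 0;
    while (v) {
        v >>= 1;
        bits++;
    }
    return bits;
}
```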
 
Peter Nilsson

Spiros said:
I had the following problem in mind:

Assume you have a function unsigned long rnd(void)
which returns pseudorandom numbers uniformly
distributed in the range covered by unsigned long.
(I was under the mistaken impression that rand()
returns an unsigned long).

rand()'s return type is irrelevant. You only need to know
that it returns a value in the range 0..RAND_MAX.
I was also willing to assume that I'm working on a platform
where there are no padding bits in the representation of
unsigned long (long).

You mean you weren't willing to discard that unnecessary
assumption.
The problem was to construct a function which
returns uniformly distributed pseudorandom values
in the range covered by unsigned long long.

#define LL sizeof(unsigned long long)
#define L sizeof(unsigned long)

If LL == 2*L then the solution to the problem is

return (unsigned long long) rnd() << L*CHAR_BIT | rnd() ;

If LL == m*L where m is a positive integer then
you'll need a loop.

Is that a problem?
But if L does not divide LL then it gets slightly messier.

How so?
I had actually thought of
how to do it portably with various #if's to cover the
various cases so that only the necessary code gets generated
for each platform but since rand() does not return
unsigned long it seems pointless to write the code.

By the way, if you don't assume that the representation
of unsigned long (long) only has value bits then it gets
really messy.

If you want a simple option...

unsigned rand8bits(void)
{
    unsigned r, m = RAND_MAX - (RAND_MAX & 0xFFu);
    while ((r = rand()) >= m) continue;
    return r & 0xFFu;
}

unsigned long randul(void)
{
    unsigned long m, r = 0;
    for (m = 0xFF; m; m <<= 8) r = (r << 8) | rand8bits();
    return r;
}

unsigned long long randull(void)
{
    unsigned long long m, r = 0;
    for (m = 0xFF; m; m <<= 8) r = (r << 8) | rand8bits();
    return r;
}

or...

unsigned long long randull(void)
{
    unsigned long long m, r = 0, ulsh = -1ul + 1ull;
    for (m = -1ul; m; m *= ulsh) r = (r * ulsh) | randul();
    return r;
}

More robust (statistical) options involve implementing the PRNG across
a wider range than 8 bits, but again it can easily be done by ignoring
the size, using only values.
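The rand8bits() above is rejection sampling: the top sliver of
rand()'s range is discarded so that what remains splits into 256
equal buckets. The same idea with the arithmetic spelled out; this
sketch assumes RAND_MAX < ULONG_MAX, which holds on common platforms:

```c
#include <stdlib.h>

/* Unbiased 8-bit values from rand() by rejection sampling.
   limit is the largest multiple of 256 not exceeding RAND_MAX + 1;
   values at or above it are discarded, so every residue mod 256 is
   equally likely.  Assumes RAND_MAX < ULONG_MAX so the + 1 below
   cannot wrap. */
unsigned rand8(void)
{
    unsigned long limit = ((unsigned long)RAND_MAX + 1) / 256 * 256;
    unsigned long r;
    do {
        r = (unsigned long)rand();
    } while (r >= limit);
    return (unsigned)(r % 256);
}
```

If RAND_MAX + 1 is itself a multiple of 256 (as on common
implementations), the loop never rejects anything.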
 
CBFalconer

Peter said:
Spiros Bousbouras wrote:
.... snip ...

rand()'s return type is irrelevant. You only need to know
that it returns a value in the range 0..RAND_MAX.

Just for sport I decided to test cokusMT for generation of 0
through RAND_MAX (although its sequence length is much longer than
that). I used:

#include <stdio.h>
#include "cokusmt.h"

#define MAXMT ((unsigned long)-1)
int main(void)
{
    unsigned long i, r;

    i = 0;
    do {
        if (0 == (r = randomMT()))
            printf("randomMT() == 0 after %lu tries\n", i);
        else if (MAXMT == r)
            printf("randomMT() == %lu after %lu tries\n", MAXMT, i);
    } while (++i);
    printf("No more zeroes found in %lu+1 tries\n", MAXMT);
    return 0;
}

[1] c:\c\random>cc -o zerotest.exe zerotest.c cokusmt.o

[1] c:\c\random>timerun zerotest
Timer 3 on: 15:04:17
randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Timer 3 off: 15:10:55 Elapsed: 0:06:37.60

showing that both 0 and max are generated. The execution time is
about 75 nanosecs per value, on my 450 MHz P3. The cokusmt module
is the one included in hashlib.lib for regression testing. See:

<http://cbfalconer.home.att.net/download/hashlib.zip>
 
jaysome

Peter said:
Spiros Bousbouras wrote:
... snip ...

rand()'s return type is irrelevant. You only need to know
that it returns a value in the range 0..RAND_MAX.

Just for sport I decided to test cokusMT for generation of 0
through RAND_MAX (although its sequence length is much longer than
that). I used:

#include <stdio.h>
#include "cokusmt.h"

#define MAXMT ((unsigned long)-1)
int main(void)
{
    unsigned long i, r;

    i = 0;
    do {
        if (0 == (r = randomMT()))
            printf("randomMT() == 0 after %lu tries\n", i);
        else if (MAXMT == r)
            printf("randomMT() == %lu after %lu tries\n", MAXMT, i);
    } while (++i);
    printf("No more zeroes found in %lu+1 tries\n", MAXMT);
    return 0;
}

[1] c:\c\random>cc -o zerotest.exe zerotest.c cokusmt.o

[1] c:\c\random>timerun zerotest
Timer 3 on: 15:04:17
randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Timer 3 off: 15:10:55 Elapsed: 0:06:37.60

showing that both 0 and max are generated. The execution time is
about 75 nanosecs per value, on my 450 MHz P3. The cokusmt module
is that included in hashlib.lib for regression testing. See:

<http://cbfalconer.home.att.net/download/hashlib.zip>

Excellent.

I downloaded this and extracted the ZIP and copied cokusmt.c and
cokusmt.h to my project directory and included cokusmt.c in my project
and updated my main() with yours and rebuilt the project and executed
it and it all worked. That's how it should be.

Compiled with VC++ 6.0 and running under Windows Vista RTM on a dual
core AMD 4800+ processor with 2 GB of RAM, these are my results:

randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Elapsed time: 43.948 seconds

The output is exactly like yours, except that my execution time is
about 8 nanoseconds per value--about 9 times faster than yours :^)

Here's the source to my main():

#include <stdio.h>
#include <time.h>
#include "cokusmt.h"

#define MAXMT ((unsigned long)-1)
int main(void)
{
    unsigned long i, r;
    clock_t t1, t2;

    t1 = clock();
    i = 0;
    do {
        if (0 == (r = randomMT()))
            printf("randomMT() == 0 after %lu tries\n", i);
        else if (MAXMT == r)
            printf("randomMT() == %lu after %lu tries\n", MAXMT, i);
    } while (++i);
    t2 = clock();
    printf("No more zeroes found in %lu+1 tries\n", MAXMT);
    printf("Elapsed time: %.3f seconds\n",
           (double)(t2 - t1)/CLOCKS_PER_SEC);
    return 0;
}

Best regards
 
CBFalconer

jaysome said:
CBFalconer said:
Peter said:
Spiros Bousbouras wrote:
... snip ...

Assume you have a function unsigned long rnd(void)
which returns pseudorandom numbers uniformly
distributed in the range covered by unsigned long.
(I was under the mistaken impression that rand()
returns an unsigned long).

rand()'s return type is irrelevant. You only need to know
that it returns a value in the range 0..RAND_MAX.

Just for sport I decided to test cokusMT for generation of 0
through RAND_MAX (although its sequence length is much longer than
that). I used:

#include <stdio.h>
#include "cokusmt.h"

#define MAXMT ((unsigned long)-1)
int main(void)
{
    unsigned long i, r;

    i = 0;
    do {
        if (0 == (r = randomMT()))
            printf("randomMT() == 0 after %lu tries\n", i);
        else if (MAXMT == r)
            printf("randomMT() == %lu after %lu tries\n", MAXMT, i);
    } while (++i);
    printf("No more zeroes found in %lu+1 tries\n", MAXMT);
    return 0;
}

[1] c:\c\random>cc -o zerotest.exe zerotest.c cokusmt.o

[1] c:\c\random>timerun zerotest
Timer 3 on: 15:04:17
randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Timer 3 off: 15:10:55 Elapsed: 0:06:37.60

showing that both 0 and max are generated. The execution time is
about 75 nanosecs per value, on my 450 MHz P3. The cokusmt module
is that included in hashlib.lib for regression testing. See:

<http://cbfalconer.home.att.net/download/hashlib.zip>

Excellent.

I downloaded this and extracted the ZIP and copied cokusmt.c and
cokusmt.h to my project directory and included cokusmt.c in my project
and updated my main() with yours and rebuilt the project and executed
it and it all worked. That's how it should be.

Compiled with VC++ 6.0 and running under Windows Vista RTM on a dual
core AMD 4800+ processor with 2 GB of RAM, these are my results:

randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Elapsed time: 43.948 seconds

The output is exactly like yours, except that my execution time is
about 8 nanoseconds per value--about 9 times faster than yours :^)

To be expected, considering that your CPU is running over 10 times
faster than mine. The identical output is why I include cokusmt in
the hashlib release, so that the test sequences for hashlib should
not vary with installation.

However, I note that you are using Vista. That is extremely
dangerous. See the URL in my sig below.

--
"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discover of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
 
mensanator

CBFalconer said:
jaysome said:
CBFalconer said:
Peter Nilsson wrote:
Spiros Bousbouras wrote:

... snip ...

Assume you have a function unsigned long rnd(void)
which returns pseudorandom numbers uniformly
distributed in the range covered by unsigned long.
(I was under the mistaken impression that rand()
returns an unsigned long).

rand()'s return type is irrelevant. You only need to know
that it returns a value in the range 0..RAND_MAX.

Just for sport I decided to test cokusMT for generation of 0
through RAND_MAX (although its sequence length is much longer than
that). I used:

#include <stdio.h>
#include "cokusmt.h"

#define MAXMT ((unsigned long)-1)
int main(void)
{
    unsigned long i, r;

    i = 0;
    do {
        if (0 == (r = randomMT()))
            printf("randomMT() == 0 after %lu tries\n", i);
        else if (MAXMT == r)
            printf("randomMT() == %lu after %lu tries\n", MAXMT, i);
    } while (++i);
    printf("No more zeroes found in %lu+1 tries\n", MAXMT);
    return 0;
}

[1] c:\c\random>cc -o zerotest.exe zerotest.c cokusmt.o

[1] c:\c\random>timerun zerotest
Timer 3 on: 15:04:17
randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Timer 3 off: 15:10:55 Elapsed: 0:06:37.60

showing that both 0 and max are generated. The execution time is
about 75 nanosecs per value, on my 450 MHz P3. The cokusmt module
is that included in hashlib.lib for regression testing. See:

<http://cbfalconer.home.att.net/download/hashlib.zip>

Excellent.

I downloaded this and extracted the ZIP and copied cokusmt.c and
cokusmt.h to my project directory and included cokusmt.c in my project
and updated my main() with yours and rebuilt the project and executed
it and it all worked. That's how it should be.

Compiled with VC++ 6.0 and running under Windows Vista RTM on a dual
core AMD 4800+ processor with 2 GB of RAM, these are my results:

randomMT() == 0 after 1171079842 tries
randomMT() == 0 after 1960155399 tries
randomMT() == 4294967295 after 3331043402 tries
No more zeroes found in 4294967295+1 tries
Elapsed time: 43.948 seconds

The output is exactly like yours, except that my execution time is
about 8 nanoseconds per value--about 9 times faster than yours :^)

To be expected, considering that your CPU is running over 10 times
faster than mine. The identical output is why I include cokusmt in
the hashlib release, so that the test sequences for hashlib should
not vary with installation.

However, I note that you are using Vista. That is extremely
dangerous. See the URL in my sig below.

You can't imagine how disappointed I am that I won't be able
to connect a $4000 monitor to a pair of $1500 video cards.
 
jaysome

[snip]
However, I note that you are using Vista. That is extremely
dangerous. See the URL in my sig below.

It's misleading, IMHO, to make an assertion like Vista "is extremely
dangerous", without providing a sufficient context or even arguments
to substantiate your assert().

The URL in your sig has to do with, in general, "A Cost Analysis of
Windows Vista Content Protection", and in specific, how:

"Windows Vista includes an extensive reworking of core OS elements in
order to provide content protection for so-called "premium content",
typically HD data from Blu-Ray and HD-DVD sources."

The above citation is from the section titled "Executive Executive
Summary" (note the superfluous, anomalous recitation of the word
"Executive", which is enough to make one wonder if this web site was
authored by a teenager, or an uneducated adult, or a zealot, or etc.).

Here's the URL everyone, in case we should forget:

http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.html

What makes this so ironic for me is the wording "in order to", which
implies that the primary objective of the "extensive reworking of core
OS elements" (of Windows Vista) was to "provide content protection for
so-called "premium content"".

BS and Poppycock.

I don't own a Blu-Ray or even HD-DVD source, let alone intend to ever
connect devices that provide that type of content to my PC, let alone
view the contents of such sources in Windows Vista. And even if
someday I did do all that, I have a hunch that I'd feel comfortable
with playing something that is legitimate and legal, and that Windows
Vista assured me of that. As I like to say: "you gotta keep the honest
people honest".

IE7 in Windows Vista plays Flash content just fine. I'm still waiting
for the day when a 64-bit Flash player for Firefox running under
64-bit Ubuntu 6.10 Linux is released. I haven't booted into 64-bit
Ubuntu 6.10 for quite a while. That leads me to the epitome ...

... if you believe in what was said on that web site, may we interest
you in another article of ours, whose Executive Summary is:

"Ubuntu 6.10 includes the latest Linux kernel, which is an extensive
reworking of core OS elements in order to prevent there from being a
64-bit Flash player for Firefox. "

Hopefully others will form their own opinions, based on the facts, and
see through the proverbial smokescreen.

Best regards
 
santosh

jaysome said:
[snip]
However, I note that you are using Vista. That is extremely
dangerous. See the URL in my sig below.

It's misleading, IMHO, to make an assertion like Vista "is extremely
dangerous", without providing a sufficient context or even arguments
to substantiate your assert().

The context was the link he provided.
The URL in your sig has to do with, in general, "A Cost Analysis of
Windows Vista Content Protection", and in specific, how:

"Windows Vista includes an extensive reworking of core OS elements in
order to provide content protection for so-called "premium content",
typically HD data from Blu-Ray and HD-DVD sources."

Yes. We can read that. So? Isn't it possible that one aspect of an
operating system's job, done inadvisably enough, can affect the
entire usability and perception of the system, and of the networks
connected to it?

The paper examines the damaging effects of Vista's over-engineered
DRM sub-system upon hardware, performance, usability, stability and
other systems.
The above citation is from the section titled "Executive Executive
Summary" (note the superfluous, anomalous recitation of the word
"Executive", which is enough to make one wonder if this web site was
authored by a teenager, or an uneducated adult, or a zealot, or etc.).

Do you have any counter-arguments to the technical details of the
paper, rather than ad hominem against the author?
I don't own a Blu-Ray or even HD-DVD source, let alone intend to ever
connect devices that provide that type of content to my PC, let alone
view the contents of such sources in Windows Vista.

It's not about any one user. The paper is looking at the repercussions
of Vista's DRM on the PC industry, its user base, etc. Just because you
don't play premium content doesn't refute or invalidate a single point
in that article.
And even if
someday I did do all that, I have a hunch that I'd feel comfortable
with playing something that is legitimate and legal, and that Windows
Vista insured me of that. As I like to say: "you gotta keep the honest
people honest".

So you want a company known for its monopolistic practices and brass
knuckles to play the global vigilante? How quaint.

<irrelevant details snipped>
 
CBFalconer

jaysome said:
[snip]
However, I note that you are using Vista. That is extremely
dangerous. See the URL in my sig below.

It's misleading, IMHO, to make an assertion like Vista "is extremely
dangerous", without providing a sufficient context or even arguments
to substantiate your assert().

The URL in your sig has to do with, in general, "A Cost Analysis of
Windows Vista Content Protection", and in specific, how:

"Windows Vista includes an extensive reworking of core OS elements in
order to provide content protection for so-called "premium content",
typically HD data from Blu-Ray and HD-DVD sources."

The above citation is from the section titled "Executive Executive
Summary" (note the superfluous, anomalous recitation of the word
"Executive", which is enough to make one wonder if this web site was
authored by a teenager, or an uneducated adult, or a zealot, or etc.).

Here's the URL everyone, in case we should forget:

http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.html

What makes this so ironic for me is the wording "in order to", which
implies that the primary objective of the "extensive reworking of core
OS elements" (of Windows Vista) was to "provide content protection for
so-called "premium content"".

BS and Poppycock.

I don't own a Blu-Ray or even HD-DVD source, let alone intend to ever
connect devices that provide that type of content to my PC, let alone
view the contents of such sources in Windows Vista. And even if
someday I did do all that, I have a hunch that I'd feel comfortable
with playing something that is legitimate and legal, and that Windows
Vista insured me of that. As I like to say: "you gotta keep the honest
people honest".

Scan that URL for 'medical'. Around here the medical world is
highly dependent on images transmitted over the Internet. I am
aware of this because of recent problems. The Vista system can
quietly degrade those images behind your back, preventing you from
spotting problems. It is inherently dangerous. Note the words
"behind your back" and "quietly".

--
"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discover of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
 
