Blog: C++ Code Smells


Jerry Coffin

[ ... ]
IIRC, it used to be normal practice (possibly still now) to use the
strongest asymmetric encryption only in order to exchange symmetric
encryption keys. Certainly symmetric encryption shouldn't expand the
data.

Not much anyway -- essentially any block cipher requires that you pad
the data to the block size. The block size will typically be on the
order of 128-256 bits, and on average, the padding should be about
half that...
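
For concreteness, here is a minimal C++ sketch of that overhead,
assuming PKCS#7-style padding (no particular scheme is named above, so
take this as an illustration only):

  #include <cstddef>
  #include <iostream>

  // PKCS#7-style padding: extend the plaintext so its length is a
  // multiple of the block size, always adding 1..block_size bytes.
  std::size_t padded_size(std::size_t plaintext_bytes,
                          std::size_t block_size)
  {
      return (plaintext_bytes / block_size + 1) * block_size;
  }

  int main()
  {
      // With a 16-byte (128-bit) block, a 100-byte message grows to
      // 112 bytes; averaged over message lengths the padding comes
      // out to about half a block, as noted above.
      std::cout << padded_size(100, 16) << '\n';   // prints 112
  }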
 

Gerhard Fiedler

Yannick said:
Gerhard said:
Richard said:
[Please do not mail me a copy of your followup]

I didn't, and I wouldn't. I don't need this. Did I pay for it? :)
[...]

Most of that is probably irrelevant to you personally, but these are
still "costs" that do exist and are being met by someone.

Can you guesstimate the network traffic this post of yours on this
matter caused? And can you relate it to the HTTPS overhead, expressed
as a number of accesses to the page in question?

Gerhard
 

Stephen Horne

[ ... ]
IIRC, it used to be normal practice (possibly still now) to use the
strongest asymmetric encryption only in order to exchange symmetric
encryption keys. Certainly symmetric encryption shouldn't expand the
data.

Not much anyway -- essentially any block cipher requires that you pad
the data to the block size. The block size will typically be on the
order of 128-256 bits, and on average, the padding should be about
half that...

Good point - but the padding will only be, on average, half of that
for the tail block of each packet, surely.
 

Stephen Horne

LOL!!!!

Sorry, just can't help it...this is too funny.

Don't laugh - that power is probably generated from fossil fuels,
resulting in whole *MOLECULES* of carbon dioxide released into the
atmosphere.

We should immediately set up a protest and have millions of people
travel there, and we should burn every computer security book we can
find as a symbol of our hatred of this *EVIL* encryption!

Invite everyone on Earth by spamming this message everywhere you can
immediately!

THINK OF THE CHILDREN!!!!

;-)
 

Jerry Coffin

[ ... ]
Good point - but the padding will only be, on average, half of that
for the tail block of each packet, surely.

Not even necessarily for each packet -- quite possibly for an entire
encrypted stream.

Perhaps instead of saying "Not much" I should have said "Not enough
to care about". The only time you'd really care would be if you
compared the two and worried because there WAS a discrepancy. It
would take strange circumstances for the extra bandwidth to matter.
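
To put rough numbers on it (assuming a 16-byte block, as with AES): a
1 MB stream padded once at the end carries at most 16 extra bytes,
about 0.0015% overhead. Even if every ~1400-byte packet payload were
padded separately, the worst case is roughly 1% and the average about
0.5%.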
 

Jorgen Grahn

If that's true, I'm pretty surprised. AFAIK there are some metadata
overheads and some protocol overheads, but the encrypted data should
be exactly the same size as the original data.

I haven't done any research, but I'd expect a compression step to be
part of the encryption (i.e. be applied first). PGP does that, I
guess partly because the extra CPU usage is minimal, and partly
to compensate for the lack of good compression /after/ the encryption
when everything looks like white noise.
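
For what it's worth, here is a rough C++ sketch of that ordering,
using zlib's compress() with a toy XOR placeholder standing in for the
cipher (just to show where a real cipher such as AES would sit -- this
is not PGP's actual pipeline):

  #include <zlib.h>
  #include <iostream>
  #include <string>
  #include <vector>

  // Compress first, while the data still has exploitable structure,
  // then encrypt; the ciphertext should look like noise afterwards.
  std::vector<unsigned char> compress_then_encrypt(const std::string& text,
                                                   unsigned char key)
  {
      uLongf out_len = compressBound(text.size());
      std::vector<unsigned char> out(out_len);
      if (compress(out.data(), &out_len,
                   reinterpret_cast<const Bytef*>(text.data()),
                   text.size()) != Z_OK)
          return {};                       // compression failed
      out.resize(out_len);
      for (unsigned char& b : out)
          b ^= key;                        // toy stand-in for a real cipher
      return out;
  }

  int main()
  {
      std::string msg(10000, 'a');         // highly compressible input
      std::cout << compress_then_encrypt(msg, 0x5A).size() << " bytes\n";
  }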
IIRC, it used to be normal practice (possibly still now) to use the
strongest asymmetric encryption only in order to exchange symmetric
encryption keys. Certainly symmetric encryption shouldn't expand the
data.


Lots of data sent over the internet doesn't compress well - all that
already-compressed video, photo, audio and archive data - and to be
honest, I'd have thought the costs of compressing and decompressing
lots of often incompressible data in real time at either end of a pipe
would probably outweigh the benefits.

I'd expect a browser to go "oh, and by the way, if you feel like it
you may give me the data compressed" only for certain MIME types,
and/or the server to only try it for certain types -- like text/plain and
text/html. The web server knows the type of all its resources, and
trying to compress a JPEG image would be pointless.
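
As an illustration (a made-up exchange, not captured from any real
server), that negotiation might look like:

  GET /index.html HTTP/1.1
  Host: example.com
  Accept-Encoding: gzip, deflate

  HTTP/1.1 200 OK
  Content-Type: text/html
  Content-Encoding: gzip

whereas a JPEG would typically come back as Content-Type: image/jpeg
with no Content-Encoding at all.
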
I'm not disagreeing with the overall argument - just a bit surprised
by those particular points.

/Jorgen
 

Stephen Horne

I haven't done any research, but I'd expect a compression step to be
part of the encryption (i.e. be applied first). PGP does that, I
guess partly because the extra CPU usage is minimal, and partly
to compensate for the lack of good compression /after/ the encryption
when everything looks like white noise.

I know that... I once had a "conversation" with a guy who insisted
that the compression must mean that zip-like headers would be an easy
target for cryptanalysis. He simply couldn't get his head around the
idea of the compression algorithm divorced from a zip-like archive
format, or the idea of headerless compression.

That said, I've often wondered myself whether a common compression
scheme creates an easy target for cryptanalysis. In the early part of the
file, a large part of the dictionary is empty (?), meaning that
presumably a substantial portion of the possible codes in the output
stream aren't used until later, when the dictionary is filled.

I assume that can be fixed by ensuring the dictionary is always full
of something, but I'm still curious as to whether it could be a real
issue with the wrong choice of compression algorithm.
 

Juha Nieminen

Stephen said:
I know that... I once had a "conversation" with a guy who insisted
that the compression must mean that zip-like headers would be an easy
target for cryptanalysis. He simply couldn't get his head around the
idea of the compression algorithm divorced from a zip-like archive
format, or the idea of headerless compression.

If the encryption algorithm is of sufficiently high cryptographic
quality, it doesn't really matter if you know parts of the original data
(eg. some standard header data) and the exact algorithm used for the
encryption. That doesn't help you recover the decryption key or the
rest of the data. Lots of research has been done in cryptography for
exactly this purpose.

(Ok, I must confess that I haven't studied cryptography enough to tell
if this is true for any amount of known data. For example, if from a 1MB
encrypted file you know 800kB of the original data, can you recover the
remaining 200kB? I'm pretty certain, however, that if you encrypt a
100kB jpeg, knowing the original jpeg header will not help you
decrypt the rest of the image.)
 

Jerry Coffin

[ ... ]
That said, I've often wondered myself whether a common compression
scheme creates an easy target for cryptanalysis. In the early part of the
file, a large part of the dictionary is empty (?), meaning that
presumably a substantial portion of the possible codes in the output
stream aren't used until later, when the dictionary is filled.

I assume that can be fixed by ensuring the dictionary is always full
of something, but I'm still curious as to whether it could be a real
issue with the wrong choice of compression algorithm.

For encryption purposes, you wouldn't want to use LZ-based
compression by itself. You'd want to use something like Huffman
compression.
 
