|
The iv field in des_ctx/des3_ede_ctx/serpent_ctx has never been used.
This was noticed by Dag Arne Osvik.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Implementation:
===============
The encrypt/decrypt code is based on an x86 implementation I did a while
ago and never published. That unpublished implementation does include an
assembler-based key schedule and precomputed tables. For simplicity and
broader acceptance, however, I took Gladman's in-kernel code for table
generation and key schedule for the kernel port of my assembler code and
modified it to produce the key schedule in the form my assembler
implementation requires. File locations and Kconfig entries are kept
similar to those of the i586 AES assembler implementation.
It may seem a little strange to use 32-bit I/O and registers in the
assembler implementation, but this gives the best code size. My
implementation takes one more instruction per round than Gladman's x86
assembler, but it doesn't require any stack for local variables or saved
registers, and it is less serialized than Gladman's code.
Note that all comparisons with Gladman's code were done after my code was
implemented. I used only FIPS PUB 197 for the implementation, so my
implementation is independent work.
If anybody has a better assembler solution for x86_64 I'll be pleased to
have my code replaced with the better solution.
Testing:
========
The implementation passes the in-kernel crypto testing module and I'm
running it without any problems on my laptop where it is mainly used for
dm-crypt.
Microbenchmark:
===============
The microbenchmark was done in userspace with similar compile flags as
used during kernel compile.
Encrypt/decrypt is about 35% faster than the generic C implementation.
As the generic C implementation and my assembler implementation are both
table-driven, I don't really expect that there is much room for further
improvement, though I'll be glad to be corrected here.
The key schedule is about 5% slower than the generic C implementation.
This is due to the fact that some more work has to be done in the key
schedule routine to fit the schedule to the assembler implementation.
Code Size:
==========
Encrypt and decrypt are together about 2.1 Kbytes smaller than the
generic C implementation which is important with regard to L1 cache
usage. The key schedule routine is about 100 bytes larger than the
generic C implementation.
Data Size:
==========
There's no difference in data size requirements between the assembler
implementation and the generic C implementation.
License:
========
Gladman's code is dual BSD/GPL whereas my assembler code is GPLv2 only
(I'm not going to change the license for my code). So I had to change the
module license for the x86_64 AES module from 'Dual BSD/GPL' to 'GPL' to
reflect the most restrictive license within the module.
Signed-off-by: Andreas Steinmetz <ast@domdv.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
As far as I'm aware there's a general consensus that functions that are
responsible for freeing resources should be able to cope with being passed
a NULL pointer. This makes sense as it removes the need for all callers to
check for NULL, thus eliminating the bugs that happen when some forget
(it's safer to just check centrally in the freeing function), and it also
makes for smaller code overall due to the lack of all those NULL checks.
This patch makes it safe to pass the crypto_free_tfm() function a NULL
pointer. Once this patch is applied we can start removing the NULL checks
from the callers.
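A minimal sketch of the change, assuming the existing teardown code stays
as it is:

    void crypto_free_tfm(struct crypto_tfm *tfm)
    {
            /* Tolerate NULL so callers need no check of their own. */
            if (tfm == NULL)
                    return;

            /* ... existing teardown and freeing of the tfm ... */
    }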
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Even though cit_iv is now always aligned, the user can still supply an
unaligned iv through crypto_cipher_encrypt_iv/crypto_cipher_decrypt_iv.
This patch will check the alignment of the user-supplied iv and copy
it if necessary.
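A minimal sketch of the idea; the helper names used here are assumptions:

    unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
    u8 *iv = info;

    /* Bounce a misaligned caller iv through the aligned cit_iv. */
    if ((unsigned long)iv & alignmask) {
            memcpy(tfm->crt_cipher.cit_iv, iv,
                   crypto_tfm_alg_ivsize(tfm));
            iv = tfm->crt_cipher.cit_iv;
    }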
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch ensures that cit_iv is aligned according to cra_alignmask
by allocating it as part of the tfm structure. As a side effect the
crypto layer will also guarantee that the tfm ctx area has enough space
to be aligned by cra_alignmask. This allows us to remove the extra
space reservation from the Padlock driver.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch makes a needlessly global function static.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The VIA Padlock device requires the input and output buffers to
be aligned on 16-byte boundaries. This patch adds the alignmask
attribute for low-level cipher implementations to indicate their
alignment requirements.
The mid-level crypt() function will copy the input/output buffers
if they are not aligned correctly before they are passed to the
low-level implementation.
Strictly speaking, some of the software implementations require
the buffers to be aligned on 4-byte boundaries as they do 32-bit
loads. However, it is not clear whether it is better to copy
the buffers or pay the penalty for unaligned loads/stores.
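A hedged sketch of the copy path in the mid-level crypt() routine; the
variable and function names here are illustrative:

    unsigned int alignmask = crypto_tfm_alg_alignmask(tfm);

    if ((unsigned long)src & alignmask) {
            u8 buffer[bsize + alignmask];
            u8 *tmp = (u8 *)ALIGN((unsigned long)buffer,
                                  alignmask + 1);

            /* Copy in, process aligned, copy back out. */
            memcpy(tmp, src, bsize);
            fn(tfm, tmp, tmp);
            memcpy(dst, tmp, bsize);
    } else {
            fn(tfm, dst, src);
    }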
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds hooks for cipher algorithms to implement multi-block
ECB/CBC operations directly. This is expected to provide a significant
performance boost to the VIA Padlock.
It could also be used for improving software implementations such as
AES where operating on multiple blocks at a time may enable certain
optimisations.
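A sketch of what such hooks might look like on struct cipher_alg; the
member names and signatures are assumptions based on the description:

    struct cipher_desc;

    struct cipher_alg {
            /* ... existing single-block entry points ... */

            /* Optional multi-block fast paths; decrypt hooks
             * would mirror these.
             */
            unsigned int (*cia_encrypt_ecb)(const struct cipher_desc *desc,
                                            u8 *dst, const u8 *src,
                                            unsigned int nbytes);
            unsigned int (*cia_encrypt_cbc)(const struct cipher_desc *desc,
                                            u8 *dst, const u8 *src,
                                            unsigned int nbytes);
    };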
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The VIA Padlock device is able to perform much better when multiple
blocks are fed to it at once. As this device offers an exceptional
throughput rate it is worthwhile to optimise the infrastructure
specifically for it.
We shift the existing page-sized fast path down to the CBC/ECB functions.
We can then replace the CBC/ECB functions with functions provided by the
underlying algorithm that perform the multi-block operations.
As a side-effect this improves the performance of large cipher operations
for all existing algorithm implementations. I've measured the gain to be
around 5% for 3DES and 15% for AES.
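A hedged sketch of the dispatch described above, reusing the hypothetical
hook names from the previous commit's sketch:

    /* Use the algorithm's own multi-block routine when present,
     * otherwise fall back to walking the data block by block.
     */
    if (alg->cia_encrypt_cbc)
            return alg->cia_encrypt_cbc(desc, dst, src, nbytes);

    while (nbytes >= bsize) {
            cbc_encrypt_one_block(desc, dst, src); /* illustrative */
            src += bsize;
            dst += bsize;
            nbytes -= bsize;
    }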
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Checking a pointer for NULL before calling kfree() on it is redundant.
This patch removes such checks from crypto/
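For reference, the redundant pattern and its replacement (kfree() is
documented to be a no-op on NULL):

    /* Redundant: */
    if (ptr)
            kfree(ptr);

    /* Sufficient: */
    kfree(ptr);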
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
After using this facility for a while to test my changes to the cipher
crypt() layer, I realised that I should've listened to Dave and made this
thing use CPU cycle counters :) As it is, it's too jittery for me to feel
safe about relying on the results.
So here is a patch to make it use CPU cycles by default but fall
back to jiffies if the user specifies a non-zero sec value.
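A minimal sketch of the cycle-based measurement, assuming get_cycles();
the surrounding names are illustrative:

    cycles_t start, end;

    start = get_cycles();
    crypto_cipher_encrypt(tfm, sg, sg, blen);
    end = get_cycles();

    printk("one operation in %lu cycles (%u bytes)\n",
           (unsigned long)(end - start), blen);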
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The existing keys used in the speed tests do not pass the 3DES quality check.
This patch makes it use the template keys instead.
Other algorithms can supply template keys through the same interface if needed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
From: Reyk Floeter <reyk@vantronix.net>
I recently had a requirement to do some benchmarking on cryptoapi, and I
found Reyk's very useful performance test patch [1].
However, I could not find any discussion on why that extension (or
something providing a similar feature but different implementation) was
not merged into mainline. If there was such a discussion, can someone
please point me to the archive[s]?
I've now merged the old patch into 2.6.12-rc1, the result can be found
attached to this email.
[1] http://lists.logix.cz/pipermail/padlock/2004/000010.html
Signed-off-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It seems that bad code tends to get copied (see test_cipher_speed). So let's
kill this idiom before it spreads any further.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The netlink gfp_any() problem made me double-check the uses of in_softirq()
in crypto/*. It seems to me that we should be checking in_atomic() instead
of in_softirq() in crypto_yield. Otherwise people calling the crypto ops
with spin locks held or preemption disabled will get burnt, right?
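The check being proposed, as a sketch:

    static inline void crypto_yield(void)
    {
            /* in_atomic() also catches callers holding spinlocks
             * or running with preemption disabled, which
             * in_softirq() alone would miss.
             */
            if (!in_atomic())
                    cond_resched();
    }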
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
null_encrypt() needs to copy the data when src and dst are distinct
buffers, while null_compress() needs to copy the data in any case as far
as I can tell. I joined compress/decompress and encrypt/decrypt to avoid
duplicating code.
Without this patch ESP null_enc packets look like this:
IP (tos 0x0, ttl 64, id 23130, offset 0, flags [DF], length: 128)
10.0.0.1 > 10.0.0.2: ESP(spi=0x0f9ca149,seq=0x4)
0x0000: 4500 0080 5a5a 4000 4032 cbef 0a00 0001 E...ZZ@.@2......
0x0010: 0a00 0002 0f9c a149 0000 0004 0000 0000 .......I........
0x0020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0050: 0000 ..
IP (tos 0x0, ttl 64, id 256, offset 0, flags [DF], length: 128)
10.0.0.2 > 10.0.0.1: ESP(spi=0x0e4f7b51,seq=0x2)
0x0000: 4500 0080 0100 4000 4032 254a 0a00 0002 E.....@.@2%J....
0x0010: 0a00 0001 0e4f 7b51 0000 0002 a8a8 a8a8 .....O{Q........
0x0020: a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 ................
0x0030: a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 ................
0x0040: a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 ................
0x0050: a8a8 ..
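A minimal sketch of the joined routine described above; the helper name
is hypothetical:

    static int null_crypt_copy(u8 *dst, const u8 *src,
                               unsigned int len)
    {
            /* Copy only when dst and src are distinct buffers. */
            if (dst != src)
                    memcpy(dst, src, len);
            return 0;
    }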
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
We want to make it possible for the user to enable the i586 AES
implementation. This requires some restructuring:
- Add CONFIG_UML_X86 to indicate that we are building a UML for i386.
- Rename CONFIG_64_BIT to CONFIG_64BIT, as used by all other archs.
- Tell crypto/Kconfig that UML_X86 is as good as X86.
- Tell it to exclude not X86_64 but 64BIT, which gives the same results
  (see the Kconfig sketch after this list).
- Tell kbuild to descend down into arch/i386/crypto/ to build what's needed.
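A hedged sketch of the resulting crypto/Kconfig dependency (the exact
wording is an assumption):

    config CRYPTO_AES_586
            tristate "AES cipher algorithms (i586)"
            depends on CRYPTO && ((X86 || UML_X86) && !64BIT)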
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
In the deflate_[compress|uncompress|pcompress] functions we call the
zlib_[in|de]flateReset function at the beginning. This is OK. But when we
unload the deflate module we don't call zlib_[in|de]flateEnd to free the
zlib internal data. This looks like a bug to me. Please consider the
attached patch.
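A minimal sketch of the missing teardown, with illustrative names:

    static void deflate_comp_exit(struct deflate_ctx *ctx)
    {
            /* Reset alone leaves zlib's internal state allocated;
             * End releases it before the workspace is freed.
             */
            zlib_deflateEnd(&ctx->comp_stream);
            vfree(ctx->comp_stream.workspace);
    }

    static void deflate_decomp_exit(struct deflate_ctx *ctx)
    {
            zlib_inflateEnd(&ctx->decomp_stream);
            kfree(ctx->decomp_stream.workspace);
    }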
Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
|