Re: scrypt ASICs - litecoin N, r, p settings - Re: Revisiting the SHA1 default password hash
On 07/03/2017 18:35, Howard Chu wrote:
> Lorenzo M. Catucci wrote:
>> Just to add some context to Howard Chu's message from Sat, 25 Feb 2017,
>> I'd like to point out that the scrypt settings chosen for litecoin PoW,
>> which I think are the ones commercial mining hardware is optimized for, are
>> several orders of magnitude different from the ones recommended for user
>> authentication, and are not, in my humble opinion, a sufficient reason to
>> exclude the scrypt construct from consideration:
>>
>> - https://litecoin.info/Scrypt
>> uses N = 1024, r = 1, p = 1
>> which means using
>> 1024 * 1 * 1 * 128 = 128 KiBytes of memory
>> and doing
>> 2 * (1024 * 1) = 2 Ki hashing rounds
>>
>> - the traditional default values for the parameters are
>> N = 16384, r = 8, p = 1 , which means using
>> 2^16 * 8 * 1 * 128 = 16 MiBytes of memory
>> and doing
>> 32 Ki hashing rounds
>>
>> and these values can be customized (almost) at will to increase both the
>> computational effort and the memory footprint of the password hashing; e.g.
>> libsodium's crypto_pwhash_scryptsalsa208sha256 sets N = 2^20 in the
>> "{OPS,MEM}LIMIT_SENSITIVE" case, leading to a memory occupation of 1GiB and
>> an hashing rounds count of 2 * 2^20.
>
> Requiring 1GB for a password hash will preclude using it on small devices,
> e.g. raspberry pi.
>
> Even 16MB is excessive.
>
First of all, I need to correct a typo in the memory consumption formula for
the default scrypt parameters: I wrongly typed 16 instead of 14; sorry for not
double-checking before posting. The correct lines are
"""
N = 16384, r = 8, p = 1 , which means using
2^14 * 8 * 1 * 128 = 16 MiBytes of memory
"""
Now, let me point out that both argon2i and scrypt are meant to hinder
GPU/ASIC use in password cracking by forcing the consumption of "big" blocks
of RAM for each password check; choosing too low a memory cost would give
multi-gigabyte GPUs and ASICs the same advantage over the defending CPU that
they already enjoy against memory-light algorithms like PBKDF2 or
scrypt(128 KiB, 2 Ki iterations).
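For concreteness, here is a sketch of how the libsodium construct mentioned
above is typically driven; the INTERACTIVE limits correspond to the 16 MiB
traditional default, and swapping in the SENSITIVE constants gives the 1 GiB
case. The password string is of course just a placeholder:
"""
#include <sodium.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (sodium_init() < 0)
        return 1;                       /* libsodium failed to initialise */

    const char *password = "placeholder, not a real secret";
    char hashed[crypto_pwhash_scryptsalsa208sha256_STRBYTES];

    /* INTERACTIVE limits (16 MiB) match the traditional default above;
     * use the SENSITIVE constants for the 1 GiB case. */
    if (crypto_pwhash_scryptsalsa208sha256_str(
            hashed, password, strlen(password),
            crypto_pwhash_scryptsalsa208sha256_OPSLIMIT_INTERACTIVE,
            crypto_pwhash_scryptsalsa208sha256_MEMLIMIT_INTERACTIVE) != 0)
        return 1;                       /* fails only on memory exhaustion */

    /* The parameters travel inside the crypt(3)-style string, so
     * verification needs nothing but the string and the password. */
    if (crypto_pwhash_scryptsalsa208sha256_str_verify(
            hashed, password, strlen(password)) == 0)
        printf("ok: %s\n", hashed);
    return 0;
}
"""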
As for the Raspberry Pi example, I think it's just as extreme as the numbers I
quoted, which are the ones recommended for using the scrypt construction as a
key derivation function for potentially sensitive keys; there we are talking
about a delay of multiple seconds to derive a working key after password
input. If you really had to use a Raspberry Pi as an LDAP authentication
server for a large number of clients, you could choose N = 2^12, leading to a
memory cost of 4 MiB, but this would again grant GPU/ASIC-equipped adversaries
the same advantage they enjoy against memory-light constructs.
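If one really had to scale down like that, libsodium also accepts explicit
limits in place of the named constants; here is a sketch under the 4 MiB
assumption above (with libsodium keeping r = 8, a 4 MiB memlimit corresponds
to N = 2^12; the opslimit value is my own choice, intended to keep p = 1, not
a library recommendation):
"""
#include <sodium.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (sodium_init() < 0)
        return 1;

    const char *password = "placeholder";
    char hashed[crypto_pwhash_scryptsalsa208sha256_STRBYTES];

    /* Hypothetical budget for a small box: 4 MiB of memory, i.e.
     * N = 2^12 given libsodium's fixed r = 8. The opslimit is an
     * assumption chosen to keep p = 1 (roughly 4 * N * r). */
    size_t             memlimit = 4 * 1024 * 1024;
    unsigned long long opslimit = 131072;

    if (crypto_pwhash_scryptsalsa208sha256_str(hashed, password,
                                               strlen(password),
                                               opslimit, memlimit) != 0)
        return 1;

    printf("%s\n", hashed);   /* the chosen parameters are encoded here */
    return 0;
}
"""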
Thank you,
lorenzo