Merge #13393: Enable double-SHA256-for-64-byte code on 32-bit x86

57ba401abcfe564a2c4d259e0f758401ed74616d Enable double-SHA256-for-64-byte code on 32-bit x86 (Pieter Wuille)

Pull request description:

  The SSE4 and AVX2 double-SHA256-for-64-byte input code from #13191 compiles fine on 32-bit x86 systems, but the autodetection logic in sha256.cpp doesn't enable it. Fix this.

  Note that these instruction sets are only available on CPUs that support 64-bit mode as well, so it is only beneficial in the (perhaps unlikely) scenario where a 64-bit CPU is running a 32-bit Bitcoin Core binary.
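
  For illustration, the detection in question boils down to a single CPUID query that behaves the same in 32-bit and 64-bit mode: CPUID leaf 1 reports SSE4.1 support in bit 19 of ECX, which is exactly the bit tested in the diff below. The following is a minimal self-contained sketch, not the sha256.cpp code itself; the helper names cpuid_count and HasSSE41 are invented for this example.

    #include <cstdint>
    #include <cstdio>

    #if defined(__x86_64__) || defined(__amd64__) || defined(__i386__)
    // Raw CPUID with an explicit subleaf (cpuid.h's __get_cpuid takes none).
    // Note: very old 32-bit PIC compilers may reject the "=b" constraint
    // because EBX is the GOT register; modern GCC/Clang handle it.
    static inline void cpuid_count(uint32_t leaf, uint32_t subleaf,
                                   uint32_t& a, uint32_t& b, uint32_t& c, uint32_t& d)
    {
        __asm__ ("cpuid" : "=a"(a), "=b"(b), "=c"(c), "=d"(d) : "0"(leaf), "2"(subleaf));
    }

    // CPUID leaf 1, subleaf 0: ECX bit 19 is the SSE4.1 feature flag.
    static bool HasSSE41()
    {
        uint32_t eax, ebx, ecx, edx;
        cpuid_count(1, 0, eax, ebx, ecx, edx);
        return (ecx >> 19) & 1;
    }
    #else
    static bool HasSSE41() { return false; } // Non-x86 target: no SSE4.1.
    #endif

    int main()
    {
        std::printf("SSE4.1: %s\n", HasSSE41() ? "yes" : "no");
    }

  Compiled with -m32 on a 64-bit CPU, this prints "SSE4.1: yes". That is why extending the preprocessor guard to __i386__ (see the diff below) is enough to enable the 4-way SSE4.1 code, while the one-way sse4 assembly, which is 64-bit only, gets its own __x86_64__/__amd64__ guard inside the branch.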

Tree-SHA512: 39d5963c1ba8c33932549d5fe98bd184932689a40aeba95043eca31dd6824f566197c546b60905555eccaf407408a5f0f200247bb0907450d309b0a70b245102
commit 1df17c02e5
parent d07f54b967
Author: Wladimir J. van der Laan (2018-06-12 18:52:00 +02:00)
Committer: Alexander Block


@@ -530,7 +530,7 @@ bool SelfTest() {
 }
 
 
-#if defined(USE_ASM) && (defined(__x86_64__) || defined(__amd64__))
+#if defined(USE_ASM) && (defined(__x86_64__) || defined(__amd64__) || defined(__i386__))
 // We can't use cpuid.h's __get_cpuid as it does not support subleafs.
 void inline cpuid(uint32_t leaf, uint32_t subleaf, uint32_t& a, uint32_t& b, uint32_t& c, uint32_t& d)
 {
@@ -551,13 +551,15 @@ bool AVXEnabled()
 std::string SHA256AutoDetect()
 {
     std::string ret = "standard";
-#if defined(USE_ASM) && (defined(__x86_64__) || defined(__amd64__))
+#if defined(USE_ASM) && (defined(__x86_64__) || defined(__amd64__) || defined(__i386__))
     (void)AVXEnabled; // Silence unused warning (in case ENABLE_AVX2 is not defined)
     uint32_t eax, ebx, ecx, edx;
     cpuid(1, 0, eax, ebx, ecx, edx);
     if ((ecx >> 19) & 1) {
+#if defined(__x86_64__) || defined(__amd64__)
         Transform = sha256_sse4::Transform;
         TransformD64 = TransformD64Wrapper<sha256_sse4::Transform>;
+#endif
 #if defined(ENABLE_SSE41) && !defined(BUILD_BITCOIN_INTERNAL)
         TransformD64_4way = sha256d64_sse41::Transform_4way;
         ret = "sse4(1way+4way)";