fix possible block db breakage during re-index

When re-indexing, there are a few cases where garbage data may be skipped in
the block files. In these cases, the indices are correctly written to the index
db; however, the pointer to the next write position in the current block file
is calculated by adding up the sizes of the valid blocks found.

As a result, when the re-index is finished, the index db is correct for all
existing blocks, but the next block will be written to an incorrect offset,
likely overwriting existing blocks.

Rather than using the sum of the sizes of all valid blocks to determine the
next write position, use the end of the last block written to the file. Don't
assume that the current block is the last one in the file, since blocks may be
read out of order.
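
To make the difference concrete, here is a small standalone sketch (not taken
from the Bitcoin Core sources; the block offsets and sizes are invented) of the
two ways of computing the next write offset when garbage bytes have been
skipped between blocks:

    // Standalone illustration; offsets and sizes are invented for the example.
    #include <algorithm>
    #include <cstdio>

    struct FoundBlock {
        unsigned int nPos;   // offset of the block within the block file
        unsigned int nSize;  // serialized size of the block
    };

    int main() {
        // Re-index finds two valid blocks; 100 bytes of garbage between them
        // were skipped, so offsets run ahead of the running size sum.
        const FoundBlock blocks[] = { {0, 1000}, {1100, 2000} };

        unsigned int nSizeBySum = 0;  // old behaviour: sum of valid block sizes
        unsigned int nSizeByEnd = 0;  // fixed behaviour: end of last block seen

        for (const FoundBlock& b : blocks) {
            nSizeBySum += b.nSize;
            nSizeByEnd = std::max(nSizeByEnd, b.nPos + b.nSize);
        }

        // Prints 3000 vs 3100: writing the next block at offset 3000 would
        // clobber the tail of the second block, while 3100 appends past
        // everything already on disk.
        std::printf("next offset, sum of sizes:      %u\n", nSizeBySum);
        std::printf("next offset, end of last block: %u\n", nSizeByEnd);
        return 0;
    }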
Cory Fields 2015-03-06 20:26:09 -05:00
parent 7c3fbc34ae
commit bb6acff079


@@ -2455,8 +2455,11 @@ bool FindBlockPos(CValidationState &state, CDiskBlockPos &pos, unsigned int nAdd
         }
         nLastBlockFile = nFile;
-        vinfoBlockFile[nFile].nSize += nAddSize;
         vinfoBlockFile[nFile].AddBlock(nHeight, nTime);
+        if (fKnown)
+            vinfoBlockFile[nFile].nSize = std::max(pos.nPos + nAddSize, vinfoBlockFile[nFile].nSize);
+        else
+            vinfoBlockFile[nFile].nSize += nAddSize;
 
         if (!fKnown) {
             unsigned int nOldChunks = (pos.nPos + BLOCKFILE_CHUNK_SIZE - 1) / BLOCKFILE_CHUNK_SIZE;