Merge #16986: doc: Doxygen-friendly CuckooCache comments

7aad3b68e7e1680870ca70d945eee88f790d6454 doc: Doxygen-friendly CuckooCache comments (Jon Layton)

Pull request description:

  Similar theme to #16947.

  - `invalid`, `contains` now appear in Doxygen docs
  - `setup` now refers to the correct argument name `b`
  - Argument references in `code blocks`
  - Lists are Markdown-conformant, with uniform line endings

  Tested with `make docs`

ACKs for top commit:
  laanwj:
    ACK 7aad3b68e7e1680870ca70d945eee88f790d6454
  practicalswift:
    ACK 7aad3b68e7e1680870ca70d945eee88f790d6454

Tree-SHA512: 70b38c10e534bad9c6ffcd88cc7a4797644afba5956d47a6c7cc655fcd5857a91f315d6da60e28ce9678d420ed4a51e22267eb8b89e26002b99cad63373dd349
fanquake 2019-10-28 09:18:24 -04:00 committed by Pasta
parent 6a980d92d9
commit 9b8250c7c6
2 changed files with 55 additions and 54 deletions

View File

@@ -14,42 +14,40 @@
 #include <vector>
- /** namespace CuckooCache provides high performance cache primitives
+ /** High-performance cache primitives.
  *
  * Summary:
  *
- * 1) bit_packed_atomic_flags is bit-packed atomic flags for garbage collection
+ * 1. @ref bit_packed_atomic_flags is bit-packed atomic flags for garbage collection
  *
- * 2) cache is a cache which is performant in memory usage and lookup speed. It
- * is lockfree for erase operations. Elements are lazily erased on the next
- * insert.
+ * 2. @ref cache is a cache which is performant in memory usage and lookup speed. It
+ * is lockfree for erase operations. Elements are lazily erased on the next insert.
  */
 namespace CuckooCache
 {
- /** bit_packed_atomic_flags implements a container for garbage collection flags
+ /** @ref bit_packed_atomic_flags implements a container for garbage collection flags
  * that is only thread unsafe on calls to setup. This class bit-packs collection
  * flags for memory efficiency.
  *
- * All operations are std::memory_order_relaxed so external mechanisms must
+ * All operations are `std::memory_order_relaxed` so external mechanisms must
  * ensure that writes and reads are properly synchronized.
  *
- * On setup(n), all bits up to n are marked as collected.
+ * On setup(n), all bits up to `n` are marked as collected.
  *
  * Under the hood, because it is an 8-bit type, it makes sense to use a multiple
  * of 8 for setup, but it will be safe if that is not the case as well.
- *
  */
 class bit_packed_atomic_flags
 {
 std::unique_ptr<std::atomic<uint8_t>[]> mem;
 public:
- /** No default constructor as there must be some size */
+ /** No default constructor, as there must be some size. */
 bit_packed_atomic_flags() = delete;
 /**
  * bit_packed_atomic_flags constructor creates memory to sufficiently
- * keep track of garbage collection information for size entries.
+ * keep track of garbage collection information for `size` entries.
  *
  * @param size the number of elements to allocate space for
  *
@@ -68,7 +66,7 @@ public:
 };
 /** setup marks all entries and ensures that bit_packed_atomic_flags can store
- * at least size entries
+ * at least `b` entries.
  *
  * @param b the number of elements to allocate space for
  * @post bit_set, bit_unset, and bit_is_set function properly forall x. x <
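To make the flag lifecycle described in this hunk concrete, here is a minimal usage sketch of `bit_packed_atomic_flags`; the entry count, the indices, and the `cuckoocache.h` include path are illustrative assumptions, not taken from the diff.

```cpp
#include <cuckoocache.h> // assumed include path

// Sketch: garbage-collection flags for 64 entries. setup() starts with every
// entry marked collected; bit_unset() protects an entry from being
// overwritten, and bit_set() makes it discardable again.
void flags_example()
{
    CuckooCache::bit_packed_atomic_flags flags(64); // space for 64 entries
    flags.setup(64);                          // all 64 bits begin "collected"
    flags.bit_unset(10);                      // entry 10 must be kept
    bool discardable = flags.bit_is_set(10);  // false: entry 10 is protected
    flags.bit_set(10);                        // entry 10 may be discarded again
    (void)discardable;
}
```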
@@ -84,19 +82,18 @@ public:
 /** bit_set sets an entry as discardable.
  *
- * @param s the index of the entry to bit_set.
+ * @param s the index of the entry to bit_set
  * @post immediately subsequent call (assuming proper external memory
  * ordering) to bit_is_set(s) == true.
- *
  */
 inline void bit_set(uint32_t s)
 {
 mem[s >> 3].fetch_or(1 << (s & 7), std::memory_order_relaxed);
 }
- /** bit_unset marks an entry as something that should not be overwritten
+ /** bit_unset marks an entry as something that should not be overwritten.
  *
- * @param s the index of the entry to bit_unset.
+ * @param s the index of the entry to bit_unset
  * @post immediately subsequent call (assuming proper external memory
  * ordering) to bit_is_set(s) == false.
  */
@@ -105,10 +102,10 @@ public:
 mem[s >> 3].fetch_and(~(1 << (s & 7)), std::memory_order_relaxed);
 }
- /** bit_is_set queries the table for discardability at s
+ /** bit_is_set queries the table for discardability at `s`.
  *
- * @param s the index of the entry to read.
- * @returns if the bit at index s was set.
+ * @param s the index of the entry to read
+ * @returns true if the bit at index `s` was set, false otherwise
  * */
 inline bool bit_is_set(uint32_t s) const
 {
@@ -116,15 +113,15 @@ public:
 }
 };
- /** cache implements a cache with properties similar to a cuckoo-set
+ /** @ref cache implements a cache with properties similar to a cuckoo-set.
  *
- * The cache is able to hold up to (~(uint32_t)0) - 1 elements.
+ * The cache is able to hold up to `(~(uint32_t)0) - 1` elements.
  *
  * Read Operations:
- * - contains(*, false)
+ * - contains() for `erase=false`
  *
  * Read+Erase Operations:
- * - contains(*, true)
+ * - contains() for `erase=true`
  *
  * Erase Operations:
  * - allow_erase()
@@ -141,10 +138,10 @@ public:
  *
  * User Must Guarantee:
  *
- * 1) Write Requires synchronized access (e.g., a lock)
- * 2) Read Requires no concurrent Write, synchronized with the last insert.
- * 3) Erase requires no concurrent Write, synchronized with last insert.
- * 4) An Erase caller must release all memory before allowing a new Writer.
+ * 1. Write requires synchronized access (e.g. a lock)
+ * 2. Read requires no concurrent Write, synchronized with last insert.
+ * 3. Erase requires no concurrent Write, synchronized with last insert.
+ * 4. An Erase caller must release all memory before allowing a new Writer.
  *
  *
  * Note on function names:
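One way to satisfy the four guarantees above is sketched below; the wrapper type and the choice of `std::shared_mutex` are illustrative assumptions, not the locking pattern Bitcoin Core itself uses. Every Write goes through an exclusive lock and every Read or Erase through a shared lock, so the two can never overlap.

```cpp
#include <cstdint>
#include <mutex>
#include <shared_mutex>
#include <utility>
#include <cuckoocache.h> // assumed include path

// Hypothetical wrapper: Writes take an exclusive lock, Reads/Erases a shared
// lock, so no Read or Erase ever runs concurrently with a Write and the lock
// hand-off provides the "synchronized with last insert" requirement.
template <typename Element, typename Hash>
class GuardedCache
{
    CuckooCache::cache<Element, Hash> m_cache;
    std::shared_mutex m_mutex;

public:
    uint32_t Setup(uint32_t size)
    {
        std::unique_lock lock{m_mutex};
        return m_cache.setup(size);
    }
    void Insert(Element e)
    {
        std::unique_lock lock{m_mutex}; // Write: synchronized access
        m_cache.insert(std::move(e));
    }
    bool Contains(const Element& e, bool erase)
    {
        std::shared_lock lock{m_mutex}; // Read/Erase: no concurrent Write
        return m_cache.contains(e, erase);
    }
};
```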
@@ -177,7 +174,7 @@ private:
 mutable std::vector<bool> epoch_flags;
 /** epoch_heuristic_counter is used to determine when an epoch might be aged
  * & an expensive scan should be done. epoch_heuristic_counter is
  * decremented on insert and reset to the new number of inserts which would
  * cause the epoch to reach epoch_size when it reaches zero.
  */
@@ -194,24 +191,25 @@ private:
 uint32_t epoch_size;
 /** depth_limit determines how many elements insert should try to replace.
- * Should be set to log2(n)*/
+ * Should be set to log2(n).
+ */
 uint8_t depth_limit;
 /** hash_function is a const instance of the hash function. It cannot be
  * static or initialized at call time as it may have internal state (such as
  * a nonce).
- * */
+ */
 const Hash hash_function;
 /** compute_hashes is convenience for not having to write out this
  * expression everywhere we use the hash values of an Element.
  *
  * We need to map the 32-bit input hash onto a hash bucket in a range [0, size) in a
  * manner which preserves as much of the hash's uniformity as possible. Ideally
  * this would be done by bitmasking but the size is usually not a power of two.
  *
  * The naive approach would be to use a mod -- which isn't perfectly uniform but so
  * long as the hash is much larger than size it is not that bad. Unfortunately,
  * mod/division is fairly slow on ordinary microprocessors (e.g. 90-ish cycles on
  * haswell, ARM doesn't even have an instruction for it.); when the divisor is a
  * constant the compiler will do clever tricks to turn it into a multiply+add+shift,
@@ -223,10 +221,10 @@ private:
  * somewhat complicated and the result is still slower than other options:
  *
  * Instead we treat the 32-bit random number as a Q32 fixed-point number in the range
- * [0,1) and simply multiply it by the size. Then we just shift the result down by
+ * [0, 1) and simply multiply it by the size. Then we just shift the result down by
  * 32-bits to get our bucket number. The result has non-uniformity the same as a
  * mod, but it is much faster to compute. More about this technique can be found at
- * http://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
+ * http://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/ .
  *
  * The resulting non-uniformity is also more equally distributed which would be
  * advantageous for something like linear probing, though it shouldn't matter
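For reference, the fixed-point reduction this hunk describes can be written in isolation; this is a stand-alone sketch of the technique, not code from the file.

```cpp
#include <cstdint>

// Map a uniformly distributed 32-bit hash onto [0, size) without a modulo:
// treat the hash as a Q32 fixed-point fraction in [0, 1), multiply by size,
// and keep the integer part by shifting the 64-bit product down 32 bits.
inline uint32_t reduce(uint32_t hash, uint32_t size)
{
    return static_cast<uint32_t>((static_cast<uint64_t>(hash) * static_cast<uint64_t>(size)) >> 32);
}
```

For example, `reduce(0x80000000u, 100)` yields 50, since `0x80000000` corresponds to the fraction 0.5.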
@@ -237,8 +235,8 @@ private:
  * 32*32->64 multiply, which means the operation is reasonably fast even on a
  * typical 32-bit processor.
  *
- * @param e the element whose hashes will be returned
- * @returns std::array<uint32_t, 8> of deterministic hashes derived from e
+ * @param e The element whose hashes will be returned
+ * @returns Deterministic hashes derived from `e` uniformly mapped onto the range [0, size)
  */
 inline std::array<uint32_t, 8> compute_hashes(const Element& e) const
 {
@@ -252,14 +250,14 @@ private:
 (uint32_t)(((uint64_t)hash_function.template operator()<7>(e) * (uint64_t)size) >> 32)}};
 }
- /* end
- * @returns a constexpr index that can never be inserted to */
+ /** invalid returns a special index that can never be inserted to
+ * @returns the special constexpr index that can never be inserted to */
 constexpr uint32_t invalid() const
 {
 return ~(uint32_t)0;
 }
- /** allow_erase marks the element at index n as discardable. Threadsafe
+ /** allow_erase marks the element at index `n` as discardable. Threadsafe
  * without any concurrent insert.
  * @param n the index to allow erasure of
  */
@@ -268,7 +266,7 @@ private:
 collection_flags.bit_set(n);
 }
- /** please_keep marks the element at index n as an entry that should be kept.
+ /** please_keep marks the element at index `n` as an entry that should be kept.
  * Threadsafe without any concurrent insert.
  * @param n the index to prioritize keeping
  */
@@ -336,7 +334,7 @@ public:
  *
  * @param new_size the desired number of elements to store
  * @returns the maximum number of elements storable
- **/
+ */
 uint32_t setup(uint32_t new_size)
 {
 // depth_limit must be at least one otherwise errors can occur.
@@ -360,7 +358,7 @@ public:
  * negligible compared to the size of the elements.
  *
  * @param bytes the approximate number of bytes to use for this data
- * structure.
+ * structure
  * @returns the maximum number of elements storable (see setup()
  * documentation for more detail)
  */
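A usage sketch for the byte-budget setup path documented in this last hunk: the function name `setup_bytes`, its `uint32_t setup_bytes(size_t)` signature, and the 32 MiB budget are assumptions for illustration.

```cpp
#include <cstddef>
#include <cstdint>

// Size a cache by memory budget rather than element count; the return value
// is the number of elements the cache will actually be able to hold.
template <typename Element, typename Hash>
uint32_t SetupByBudget(CuckooCache::cache<Element, Hash>& cache, size_t budget_bytes = 32 << 20)
{
    return cache.setup_bytes(budget_bytes); // assumed helper, see hunk above
}
```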
@@ -376,10 +374,12 @@ public:
  * It drops the last tried element if it runs out of depth before
  * encountering an open slot.
  *
- * Thus
+ * Thus:
  *
+ * ```
  * insert(x);
  * return contains(x, false);
+ * ```
  *
  * is not guaranteed to return true.
  *
@@ -387,7 +387,6 @@ public:
  * @post one of the following: All previously inserted elements and e are
  * now in the table, one previously inserted element is evicted from the
  * table, the entry attempted to be inserted is evicted.
- *
  */
 inline void insert(Element e)
 {
@@ -416,9 +415,9 @@ public:
 /** Swap with the element at the location that was
  * not the last one looked at. Example:
  *
- * 1) On first iteration, last_loc == invalid(), find returns last, so
+ * 1. On first iteration, last_loc == invalid(), find returns last, so
  * last_loc defaults to locs[0].
- * 2) On further iterations, where last_loc == locs[k], last_loc will
+ * 2. On further iterations, where last_loc == locs[k], last_loc will
  * go to locs[k+1 % 8], i.e., next of the 8 indices wrapping around
  * to 0 if needed.
  *
@@ -439,17 +438,19 @@ public:
 }
 }
- /* contains iterates through the hash locations for a given element
+ /** contains iterates through the hash locations for a given element
  * and checks to see if it is present.
  *
  * contains does not check garbage collected state (in other words,
  * garbage is only collected when the space is needed), so:
  *
+ * ```
  * insert(x);
  * if (contains(x, true))
  * return contains(x, false);
  * else
  * return true;
+ * ```
  *
  * executed on a single thread will always return true!
  *
@@ -458,7 +459,7 @@ public:
  * contains returns a bool set true if the element was found.
  *
  * @param e the element to check
- * @param erase
+ * @param erase whether to attempt setting the garbage collect flag
  *
  * @post if erase is true and the element is found, then the garbage collect
  * flag is set
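Pulling the pieces of this diff together, a self-contained toy use of the cache might look as follows. `ToyHasher` is purely illustrative: its templated call operator is shaped after the `hash_function.template operator()<7>(e)` calls shown in compute_hashes above, and it is not a serious hash function.

```cpp
#include <cstdint>
#include <cuckoocache.h> // assumed include path

// Toy hasher: the cache expects a functor that can produce 8 different
// 32-bit hashes of an element, selected by a template index 0..7.
struct ToyHasher {
    template <uint8_t hash_select>
    uint32_t operator()(const uint64_t& e) const
    {
        // Deliberately simple mixing, just so the 8 hashes differ.
        uint64_t x = e * (2 * hash_select + 1) + 0x9e3779b97f4a7c15ULL;
        return static_cast<uint32_t>(x ^ (x >> 32));
    }
};

void cache_example()
{
    CuckooCache::cache<uint64_t, ToyHasher> cache;
    cache.setup(1024);                                        // room for ~1024 elements

    cache.insert(42);                                         // Write
    bool hit = cache.contains(42, /*erase=*/false);           // plain Read
    bool hit_and_erase = cache.contains(42, /*erase=*/true);  // Read+Erase
    // As documented above, once the cache starts evicting, a hit after an
    // insert is not guaranteed; contains(x, true) also marks the entry
    // discardable, so a later insert may overwrite it.
    (void)hit;
    (void)hit_and_erase;
}
```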

View File

@@ -10,11 +10,11 @@
 /** Test Suite for CuckooCache
  *
- * 1) All tests should have a deterministic result (using insecure rand
+ * 1. All tests should have a deterministic result (using insecure rand
  * with deterministic seeds)
- * 2) Some test methods are templated to allow for easier testing
+ * 2. Some test methods are templated to allow for easier testing
  * against new versions / comparing
- * 3) Results should be treated as a regression test, i.e., did the behavior
+ * 3. Results should be treated as a regression test, i.e., did the behavior
  * change significantly from what was expected. This can be OK, depending on
  * the nature of the change, but requires updating the tests to reflect the new
  * expected behavior. For example improving the hit rate may cause some tests
@@ -99,9 +99,9 @@ static double test_cache(size_t megabytes, double load)
  *
  * Examples:
  *
- * 1) at load 0.5, we expect a perfect hit rate, so we multiply by
+ * 1. at load 0.5, we expect a perfect hit rate, so we multiply by
  * 1.0
- * 2) at load 2.0, we expect to see half the entries, so a perfect hit rate
+ * 2. at load 2.0, we expect to see half the entries, so a perfect hit rate
  * would be 0.5. Therefore, if we see a hit rate of 0.4, 0.4*2.0 = 0.8 is the
  * normalized hit rate.
  *
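The normalization these two examples describe can be captured in one helper; this sketch (with an illustrative function name) reproduces both numbers.

```cpp
#include <algorithm>

// Below load 1.0 a perfect cache hits everything; above it, a perfect cache
// can only hold 1/load of the entries, so scale the observed hit rate by the
// load to compare it against a perfect-cache baseline of 1.0.
double NormalizeHitRate(double hit_rate, double load)
{
    return hit_rate * std::max(load, 1.0);
}
// NormalizeHitRate(0.4, 2.0) == 0.8; NormalizeHitRate(1.0, 0.5) == 1.0
```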