Developer Notes
===============
<!-- markdown-toc start -->
**Table of Contents**

- [Developer Notes](#developer-notes)
    - [Coding Style (General)](#coding-style-general)
    - [Coding Style (C++)](#coding-style-c)
    - [Coding Style (Python)](#coding-style-python)
    - [Coding Style (Doxygen-compatible comments)](#coding-style-doxygen-compatible-comments)
    - [Development tips and tricks](#development-tips-and-tricks)
        - [Compiling for debugging](#compiling-for-debugging)
        - [Compiling for gprof profiling](#compiling-for-gprof-profiling)
        - [`debug.log`](#debuglog)
        - [Testnet and Regtest modes](#testnet-and-regtest-modes)
        - [DEBUG_LOCKORDER](#debug_lockorder)
        - [Valgrind suppressions file](#valgrind-suppressions-file)
        - [Compiling for test coverage](#compiling-for-test-coverage)
        - [Performance profiling with perf](#performance-profiling-with-perf)
        - [Sanitizers](#sanitizers)
    - [Locking/mutex usage notes](#lockingmutex-usage-notes)
    - [Threads](#threads)
    - [Ignoring IDE/editor files](#ignoring-ideeditor-files)
- [Development guidelines](#development-guidelines)
    - [General Dash Core](#general-dash-core)
    - [Wallet](#wallet)
    - [General C++](#general-c)
    - [C++ data structures](#c-data-structures)
    - [Strings and formatting](#strings-and-formatting)
    - [Shadowing](#shadowing)
    - [Threads and synchronization](#threads-and-synchronization)
    - [Scripts](#scripts)
        - [Shebang](#shebang)
    - [Source code organization](#source-code-organization)
    - [GUI](#gui)
    - [Subtrees](#subtrees)
    - [Scripted diffs](#scripted-diffs)
        - [Suggestions and examples](#suggestions-and-examples)
    - [Release notes](#release-notes)
    - [RPC interface guidelines](#rpc-interface-guidelines)

<!-- markdown-toc end -->
Coding Style (General)
----------------------
Various coding styles have been used during the history of the codebase,
and the result is not very consistent. However, we're now trying to converge to
a single style, which is specified below. When writing patches, favor the new
style over attempting to mimic the surrounding style, except for move-only
commits.

Do not submit patches solely to modify the style of existing code.
Coding Style (C++)
------------------
- **Indentation and whitespace rules** as specified in
  [src/.clang-format](/src/.clang-format). You can use the provided
  [clang-format-diff script](/contrib/devtools/README.md#clang-format-diffpy)
  tool to clean up patches automatically before submission.
- Braces on new lines for classes, functions, methods.
- Braces on the same line for everything else.
- 4 space indentation (no tabs) for every block except namespaces.
- No indentation for `public`/`protected`/`private` or for `namespace`.
- No extra spaces inside parenthesis; don't do `( this )`.
- No space after function names; one space after `if`, `for` and `while`.
- If an `if` only has a single-statement `then`-clause, it can appear
  on the same line as the `if`, without braces. In every other case,
  braces are required, and the `then` and `else` clauses must appear
  correctly indented on a new line.
- There's no hard limit on line width, but prefer to keep lines to < 100
  characters if doing so does not decrease readability. Break up long
  function declarations over multiple lines using the Clang Format
  [AlignAfterOpenBracket](https://clang.llvm.org/docs/ClangFormatStyleOptions.html)
  style option.
- **Symbol naming conventions**. These are preferred in new code, but are not
  required when doing so would need changes to significant pieces of existing
  code.
  - Variable (including function arguments) and namespace names are all lowercase and may use `_` to
    separate words (snake_case).
  - Class member variables have a `m_` prefix.
  - Global variables have a `g_` prefix.
  - Constant names are all uppercase, and use `_` to separate words.
  - Class names, function names, and method names are UpperCamelCase
    (PascalCase). Do not prefix class names with `C`.
  - Test suite naming convention: The Boost test suite in file
    `src/test/foo_tests.cpp` should be named `foo_tests`. Test suite names
    must be unique.
- **Miscellaneous**
  - `++i` is preferred over `i++`.
  - `nullptr` is preferred over `NULL` or `(void*)0`.
  - `static_assert` is preferred over `assert` where possible. Generally, compile-time checking is preferred over run-time checking.
  - Align pointers and references to the left, i.e. use `type& var` and not `type &var`.
Block style example:
```c++
int g_count = 0;

namespace foo {
class Class
{
    std::string m_name;

public:
    bool Function(const std::string& s, int n)
    {
        // Comment summarising what this section of code does
        for (int i = 0; i < n; ++i) {
            int total_sum = 0;
            // When something fails, return early
            if (!Something()) return false;
            ...
            if (SomethingElse(i)) {
                total_sum += ComputeSomething(g_count);
            } else {
                DoSomething(m_name, total_sum);
            }
        }

        // Success return is usually at the end
        return true;
    }
}
} // namespace foo
```
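
The naming conventions above can also be combined into a short sketch. All the names here (`PeerTracker`, `AddPeer`, and so on) are made up for illustration; they are not part of the codebase:

```cpp
#include <string>

//! Constant: all uppercase with `_` separating words.
static constexpr int MAX_RETRY_COUNT = 3;
static_assert(MAX_RETRY_COUNT > 0, "retry count must be positive");

//! Global: g_ prefix (shown only to illustrate the prefix).
int g_connection_count = 0;

namespace net_utils { // namespace: lowercase snake_case
//! Class: PascalCase, no C prefix.
class PeerTracker
{
    int m_peer_count{0}; //!< Member: m_ prefix

public:
    //! Method: PascalCase; argument: snake_case; reference aligned left.
    bool AddPeer(const std::string& peer_name)
    {
        if (peer_name.empty()) return false; // single-statement then-clause may share the line
        ++m_peer_count;                      // ++i preferred over i++
        ++g_connection_count;
        return true;
    }
    int PeerCount() const { return m_peer_count; }
};
} // namespace net_utils
```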
Coding Style (Python)
---------------------
Refer to [/test/functional/README.md#style-guidelines](/test/functional/README.md#style-guidelines).
Coding Style (Doxygen-compatible comments)
------------------------------------------

Dash Core uses [Doxygen](http://www.doxygen.nl/) to generate its official documentation.
Use Doxygen-compatible comment blocks for functions, methods, and fields.
For example, to describe a function use:
```c++
/**
 * ... text ...
 * @param[in] arg1 A description
 * @param[in] arg2 Another argument description
 * @pre Precondition for function...
 */
bool function(int arg1, const char *arg2)
```
A complete list of `@xxx` commands can be found at http://www.stack.nl/~dimitri/doxygen/manual/commands.html.
As Doxygen recognizes the comments by the delimiters (`/**` and `*/` in this case), you don't
*need* to provide any commands for a comment to be valid; just a description text is fine.
To describe a class, use the same construct above the class definition:
```c++
/**
 * Alerts are for notifying old versions if they become too obsolete and
 * need to upgrade. The message is displayed in the status bar.
 * @see GetWarnings()
 */
class CAlert
{
```
To describe a member or variable use:
```c++
int var; //!< Detailed description after the member
```
or

```c++
//! Description before the member
int var;
```
Also OK:
```c++
///
/// ... text ...
///
bool function2(int arg1, const char *arg2)
```
Not OK (used plenty in the current source, but not picked up):
```c++
//
// ... text ...
//
```
A full list of comment syntaxes picked up by Doxygen can be found at https://www.stack.nl/~dimitri/doxygen/manual/docblocks.html,
but the above styles are favored.
Documentation can be generated with `make docs` and cleaned up with `make clean-docs`. The resulting files are located in `doc/doxygen/html`; open `index.html` to view the homepage.
Before running `make docs`, you will need to install the dependencies `doxygen` and `dot`. For example, on macOS via Homebrew:
```
brew install graphviz doxygen
```
Development tips and tricks
---------------------------
### Compiling for debugging
Run configure with `--enable-debug` to add additional compiler flags that
produce better debugging builds.
### Compiling for gprof profiling
Run configure with the `--enable-gprof` option, then make.
### `debug.log`
If the code is behaving strangely, take a look in the `debug.log` file in the data directory;
error and debugging messages are written there.
The `-debug=...` command-line option controls debugging; running with just `-debug` or `-debug=1` will turn
on all categories (and give you a very large `debug.log` file).
The Qt code routes `qDebug()` output to `debug.log` under category "qt": run with `-debug=qt`
to see it.
### Testnet and Regtest modes
Run with the `-testnet` option to run with "play coins" on the test network, if you
are testing multi-machine code that needs to operate across the internet.
If you are testing something that can run on one machine, run with the `-regtest` option.
In regression test mode, blocks can be created on-demand; see [test/functional/](/test/functional) for tests
that run in `-regtest` mode.
### DEBUG_LOCKORDER
Dash Core is a multi-threaded application, and deadlocks or other
multi-threading bugs can be very difficult to track down. The `--enable-debug`
configure option adds `-DDEBUG_LOCKORDER` to the compiler flags. This inserts
run-time checks to keep track of which locks are held and adds warnings to the
`debug.log` file if inconsistencies are detected.
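
The kind of bug this catches is a lock-order inversion: one thread takes mutex A then B while another takes B then A, and the two can deadlock. A minimal sketch of the safe pattern with plain `std::mutex` (illustrative only; Dash Core uses its own lock primitives and annotations):

```cpp
#include <mutex>

std::mutex g_mutex_a;
std::mutex g_mutex_b;

// Every code path acquires the two locks in the same (a, b) order, so no
// lock-order cycle can form. DEBUG_LOCKORDER-style checking records each
// observed (first, second) acquisition pair at run time and warns if the
// reverse pair is ever seen.
int LockedAdd(int x, int y)
{
    std::scoped_lock lock(g_mutex_a, g_mutex_b); // C++17: locks both without deadlock
    return x + y;
}
```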
### Valgrind suppressions file
Valgrind is a programming tool for memory debugging, memory leak detection, and
profiling. The repo contains a Valgrind suppressions file
([`valgrind.supp`](https://github.com/dashpay/dash/blob/master/contrib/valgrind.supp))
which includes known Valgrind warnings in our dependencies that cannot be fixed
in-tree. Example use:
```shell
$ valgrind --suppressions=contrib/valgrind.supp src/test/test_dash
$ valgrind --suppressions=contrib/valgrind.supp --leak-check=full \
      --show-leak-kinds=all src/test/test_dash --log_level=test_suite
$ valgrind -v --leak-check=full src/dashd -printtoconsole
$ ./test/functional/test_runner.py --valgrind
```
### Compiling for test coverage
LCOV can be used to generate a test coverage report based upon `make check`
execution. LCOV must be installed on your system (e.g. the `lcov` package
on Debian/Ubuntu).
To enable LCOV report generation during test runs:
```shell
./configure --enable-lcov
make
make cov
# A coverage report will now be accessible at `./test_dash.coverage/index.html`.
```
### Performance profiling with perf

Profiling is a good way to get a precise idea of where time is being spent in
code. One tool for doing profiling on Linux platforms is called
[`perf`](http://www.brendangregg.com/perf.html), and has been integrated into
the functional test framework. Perf can observe a running process and sample
(at some frequency) where its execution is.

Perf installation is contingent on which kernel version you're running; see
[this thread](https://askubuntu.com/questions/50145/how-to-install-perf-monitoring-tool)
for specific instructions.
Certain kernel parameters may need to be set for perf to be able to inspect the
running process's stack.
```sh
$ sudo sysctl -w kernel.perf_event_paranoid=-1
$ sudo sysctl -w kernel.kptr_restrict=0
```
Make sure you [understand the security
trade-offs](https://lwn.net/Articles/420403/) of setting these kernel
parameters.
To profile a running dashd process for 60 seconds, you could use an
invocation of `perf record` like this:
```sh
$ perf record \
-g --call-graph dwarf --per-thread -F 140 \
-p `pgrep dashd` -- sleep 60
```
You could then analyze the results by running:
```sh
perf report --stdio | c++filt | less
```
or using a graphical tool like [Hotspot](https://github.com/KDAB/hotspot).
See the functional test documentation for how to invoke perf within tests.
### Sanitizers
Dash Core can be compiled with various "sanitizers" enabled, which add
instrumentation for issues regarding things like memory safety, thread race
conditions, or undefined behavior. This is controlled with the
`--with-sanitizers` configure flag, which should be a comma-separated list of
sanitizers to enable. The sanitizer list should correspond to supported
`-fsanitize=` options in your compiler. These sanitizers have runtime overhead,
so they are most useful when testing changes or producing debugging builds.

Some examples:
```bash
# Enable both the address sanitizer and the undefined behavior sanitizer
./configure --with-sanitizers=address,undefined
# Enable the thread sanitizer
./configure --with-sanitizers=thread
```
If you are compiling with GCC you will typically need to install corresponding
"san" libraries to actually compile with these flags, e.g. libasan for the
address sanitizer, libtsan for the thread sanitizer, and libubsan for the
undefined sanitizer. If you are missing required libraries, the configure script
will fail with a linker error when testing the sanitizer flags.
The test suite should pass cleanly with the `thread` and `undefined` sanitizers,
but there are a number of known problems when using the `address` sanitizer. The
address sanitizer is known to fail in
[sha256_sse4::Transform ](/src/crypto/sha256_sse4.cpp ) which makes it unusable
unless you also use `--disable-asm` when running configure. We would like to fix
sanitizer issues, so please send pull requests if you can fix any errors found
by the address sanitizer (or any other sanitizer).
Not all sanitizer options can be enabled at the same time, e.g. trying to build
with `--with-sanitizers=address,thread` will fail in the configure script as
these sanitizers are mutually incompatible. Refer to your compiler manual to
learn more about these options and which sanitizers are supported by your
compiler.
Additional resources:
* [AddressSanitizer ](https://clang.llvm.org/docs/AddressSanitizer.html )
* [LeakSanitizer ](https://clang.llvm.org/docs/LeakSanitizer.html )
* [MemorySanitizer ](https://clang.llvm.org/docs/MemorySanitizer.html )
* [ThreadSanitizer ](https://clang.llvm.org/docs/ThreadSanitizer.html )
* [UndefinedBehaviorSanitizer ](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html )
* [GCC Instrumentation Options ](https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html )
* [Google Sanitizers Wiki ](https://github.com/google/sanitizers/wiki )
* [Issue #12691: Enable -fsanitize flags in Travis ](https://github.com/bitcoin/bitcoin/issues/12691 )
Locking/mutex usage notes
-------------------------
The code is multi-threaded and uses mutexes and the
`LOCK` and `TRY_LOCK` macros to protect data structures.
Deadlocks due to inconsistent lock ordering (thread 1 locks `cs_main` and then
`cs_wallet` , while thread 2 locks them in the opposite order: result, deadlock
as each waits for the other to release its lock) are a problem. Compile with
`-DDEBUG_LOCKORDER` (or use `--enable-debug` ) to get lock order inconsistencies
reported in the `debug.log` file.
Re-architecting the core code so there are better-defined interfaces
between the various components is a goal, with any necessary locking
done by the components (e.g. see the self-contained `CBasicKeyStore` class
and its `cs_KeyStore` lock).
Threads
-------
- ThreadScriptCheck : Verifies block scripts.
- ThreadImport : Loads blocks from blk*.dat files or bootstrap.dat.
- ThreadDNSAddressSeed : Loads addresses of peers from the DNS.
- ThreadMapPort : Universal plug-and-play startup/shutdown.
- ThreadSocketHandler : Sends/Receives data from peers on port 9999.
- ThreadOpenAddedConnections : Opens network connections to added nodes.
- ThreadOpenConnections : Initiates new connections to peers.
- ThreadOpenMasternodeConnections : Opens network connections to masternodes.
- ThreadMessageHandler : Higher-level message handling (sending and receiving).
- DumpAddresses : Dumps IP addresses of nodes to `peers.dat` .
- ThreadRPCServer : Remote procedure call handler, listens on port 9998 for connections and services them.
- Shutdown : Does an orderly shutdown of everything.
- CSigSharesManager::WorkThreadMain : Processes pending BLS signature shares.
- CInstantSendManager::WorkThreadMain : Processes pending InstantSend locks.
Thread pools
------------
- CBLSWorker : A highly parallelized worker/helper for BLS/DKG calculations.
- CDKGSessionManager : A thread pool for processing LLMQ messages.
Ignoring IDE/editor files
--------------------------
In closed-source environments in which everyone uses the same IDE, it is common
to add temporary files it produces to the project-wide `.gitignore` file.
However, in open source software such as Dash Core, where everyone uses
their own editors/IDE/tools, it is less common. Only you know what files your
editor produces, and this may change from version to version. The canonical way
to handle this is therefore to create your own global gitignore file. Add this to `~/.gitconfig` :
```
[core]
excludesfile = /home/.../.gitignore_global
```
(alternatively, type the command `git config --global core.excludesfile ~/.gitignore_global`
on a terminal)
Then put your favourite tool's temporary filenames in that file, e.g.
```
# NetBeans
nbproject/
```
Another option is to create a per-repository excludes file `.git/info/exclude` .
These are not committed but apply only to one repository.
If a set of tools is used by the build system or scripts of the repository (for
example, lcov), it is perfectly acceptable to add its files to `.gitignore`
and commit them.
Development guidelines
============================
A few non-style-related recommendations for developers, as well as points to
pay attention to for reviewers of Dash Core code.
General Dash Core
----------------------
- New features should be exposed on RPC first, then can be made available in the GUI.
- *Rationale*: RPC allows for better automatic testing. The test suite for
the GUI is very limited.
- Make sure pull requests pass Travis CI before merging.
- *Rationale*: Makes sure that they pass thorough testing, and that the tester will keep passing
on the master branch. Otherwise, all new pull requests will start failing the tests, resulting in
confusion and mayhem.
- *Explanation*: If the test suite is to be updated for a change, this has to
be done first.
Wallet
-------
- Make sure that no crashes happen with run-time option `-disablewallet` .
- Include `db_cxx.h` (BerkeleyDB header) only when `ENABLE_WALLET` is set.
- *Rationale*: Otherwise compilation of the disable-wallet build will fail in environments without BerkeleyDB.
General C++
-------------
For general C++ guidelines, you may refer to the [C++ Core
Guidelines](https://isocpp.github.io/CppCoreGuidelines/).
Common misconceptions are clarified in those sections:
- Passing (non-)fundamental types in the [C++ Core
Guideline](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rf-conventional).
- Assertions should not have side-effects.
- *Rationale*: Even though the source code is set to refuse to compile
with assertions disabled, having side-effects in assertions is unexpected and
makes the code harder to understand.
- If you use the `.h` , you must link the `.cpp` .
- *Rationale*: Include files define the interface for the code in implementation files. Including one but
not linking the other is confusing. Please avoid that. Moving functions from
the `.h` to the `.cpp` should not result in build errors.
- Use the RAII (Resource Acquisition Is Initialization) paradigm where possible. For example, by using
`unique_ptr` for allocations in a function.
- *Rationale*: This avoids memory and resource leaks, and ensures exception safety.
- Use `MakeUnique()` to construct objects owned by `unique_ptr` s.
- *Rationale*: `MakeUnique` is concise and ensures exception safety in complex expressions.
`MakeUnique` is a temporary, project-local implementation of `std::make_unique` (C++14).
C++ data structures
--------------------
- Never use the `std::map []` syntax when reading from a map, but instead use `.find()` .
- *Rationale*: `[]` does an insert (of the default element) if the item doesn't
2015-11-13 11:49:12 +01:00
exist in the map yet. This has resulted in memory leaks in the past, as well as
race conditions (expecting read-read behavior). Using `[]` is fine for *writing* to a map.
- Do not compare an iterator from one data structure with an iterator of
another data structure (even if of the same type).
- *Rationale*: Behavior is undefined. In C++ parlance this means "may reformat
the universe", in practice this has resulted in at least one hard-to-debug crash bug.
- Watch out for out-of-bounds vector access. `&vch[vch.size()]` is illegal,
including `&vch[0]` for an empty vector. Use `vch.data()` and `vch.data() +
vch.size()` instead.
- Vector bounds checking is only enabled in debug mode. Do not rely on it.
- Initialize all non-static class members where they are defined.
If this is skipped for a good reason (i.e., optimization on the critical
path), add an explicit comment about this.
- *Rationale*: Ensure determinism by avoiding accidental use of uninitialized
values. Also, static analyzers balk at this.
Initializing the members in the declaration makes it easy to
spot uninitialized ones.
```cpp
class A
{
    uint32_t m_count{0};
};
```
- By default, declare constructors `explicit` .
- *Rationale*: This is a precaution to avoid unintended
[conversions ](https://en.cppreference.com/w/cpp/language/converting_constructor ).
- Use explicitly signed or unsigned `char` s, or even better `uint8_t` and
`int8_t` . Do not use bare `char` unless it is to pass to a third-party API.
This type can be signed or unsigned depending on the architecture, which can
lead to interoperability problems or dangerous conditions such as
out-of-bounds array accesses.
- Prefer explicit constructions over implicit ones that rely on 'magical' C++ behavior.
- *Rationale*: Easier to understand what is happening, thus easier to spot mistakes, even for those
that are not language lawyers.
- Prefer signed ints and do not mix signed and unsigned integers. If an unsigned int is used, it should have a good
reason. The fact that a value will never be negative is not a good reason. The most common valid reason is that modular
(wraparound) arithmetic is needed, such as in cryptographic primitives. If you need to make sure that some value is
always non-negative, use an assertion or exception instead.
- *Rationale*: When signed ints are mixed with unsigned ints, the signed int is converted to an unsigned
int. If the signed int has a negative value `-N` , it becomes `UINT_MAX + 1 - N` , which might cause unexpected consequences.
- Prefer `enum class` (scoped enumerations) over `enum` (traditional enumerations) where possible.
- *Rationale*: Scoped enumerations avoid two potential pitfalls/problems with traditional C++ enumerations: implicit conversions to `int` , and name clashes due to enumerators being exported to the surrounding scope.
- `switch` statement on an enumeration example:
```cpp
enum class Tabs {
    INFO,
    CONSOLE,
    GRAPH,
    PEERS
};
int GetInt(Tabs tab)
{
    switch (tab) {
    case Tabs::INFO: return 0;
    case Tabs::CONSOLE: return 1;
    case Tabs::GRAPH: return 2;
    case Tabs::PEERS: return 3;
    } // no default case, so the compiler can warn about missing cases
    assert(false);
}
```
*Rationale*: The comment documents the intentional omission of a `default:` label, and it complies with `clang-format` rules. The assertion prevents the `-Wreturn-type` warning from firing on some compilers.
Strings and formatting
------------------------
- Be careful of `LogPrint` versus `LogPrintf` . `LogPrint` takes a `category` argument, `LogPrintf` does not.
- *Rationale*: Confusion of these can result in runtime exceptions due to
formatting mismatch, and it is easy to get wrong because of subtly similar naming.
- Use `std::string` , avoid C string manipulation functions.
- *Rationale*: C++ string handling is marginally safer, less scope for
buffer overflows, and surprises with `\0` characters. Also, some C string manipulations
tend to act differently depending on platform, or even the user locale.
- Use `ParseInt32` , `ParseInt64` , `ParseUInt32` , `ParseUInt64` , `ParseDouble` from `utilstrencodings.h` for number parsing.
- *Rationale*: These functions do overflow checking and avoid pesky locale issues.
- Avoid using locale dependent functions if possible. You can use the provided
[`lint-locale-dependence.sh` ](/test/lint/lint-locale-dependence.sh )
to check for accidental use of locale dependent functions.
- *Rationale*: Unnecessary locale dependence can cause bugs that are very tricky to isolate and fix.
- These functions are known to be locale dependent:
`alphasort` , `asctime` , `asprintf` , `atof` , `atoi` , `atol` , `atoll` , `atoq` ,
`btowc` , `ctime` , `dprintf` , `fgetwc` , `fgetws` , `fprintf` , `fputwc` ,
`fputws` , `fscanf` , `fwprintf` , `getdate` , `getwc` , `getwchar` , `isalnum` ,
`isalpha` , `isblank` , `iscntrl` , `isdigit` , `isgraph` , `islower` , `isprint` ,
`ispunct` , `isspace` , `isupper` , `iswalnum` , `iswalpha` , `iswblank` ,
`iswcntrl` , `iswctype` , `iswdigit` , `iswgraph` , `iswlower` , `iswprint` ,
`iswpunct` , `iswspace` , `iswupper` , `iswxdigit` , `isxdigit` , `mblen` ,
`mbrlen` , `mbrtowc` , `mbsinit` , `mbsnrtowcs` , `mbsrtowcs` , `mbstowcs` ,
`mbtowc` , `mktime` , `putwc` , `putwchar` , `scanf` , `snprintf` , `sprintf` ,
`sscanf` , `stoi` , `stol` , `stoll` , `strcasecmp` , `strcasestr` , `strcoll` ,
`strfmon` , `strftime` , `strncasecmp` , `strptime` , `strtod` , `strtof` ,
`strtoimax` , `strtol` , `strtold` , `strtoll` , `strtoq` , `strtoul` ,
`strtoull` , `strtoumax` , `strtouq` , `strxfrm` , `swprintf` , `tolower` ,
`toupper` , `towctrans` , `towlower` , `towupper` , `ungetwc` , `vasprintf` ,
`vdprintf` , `versionsort` , `vfprintf` , `vfscanf` , `vfwprintf` , `vprintf` ,
`vscanf` , `vsnprintf` , `vsprintf` , `vsscanf` , `vswprintf` , `vwprintf` ,
`wcrtomb` , `wcscasecmp` , `wcscoll` , `wcsftime` , `wcsncasecmp` , `wcsnrtombs` ,
`wcsrtombs` , `wcstod` , `wcstof` , `wcstoimax` , `wcstol` , `wcstold` ,
`wcstoll` , `wcstombs` , `wcstoul` , `wcstoull` , `wcstoumax` , `wcswidth` ,
`wcsxfrm` , `wctob` , `wctomb` , `wctrans` , `wctype` , `wcwidth` , `wprintf`
- For `strprintf` , `LogPrint` , `LogPrintf` formatting characters don't need size specifiers.
- *Rationale*: Dash Core uses tinyformat, which is type safe. Leave them out to avoid confusion.
- Use `.c_str()` sparingly. Its only valid use is to pass C++ strings to C functions that take NULL-terminated
strings.
- Do not use it when passing a sized array (so along with `.size()` ). Use `.data()` instead to get a pointer
to the raw data.
- *Rationale*: Although this is guaranteed to be safe starting with C++11, `.data()` communicates the intent better.
- Do not use it when passing strings to `tfm::format` , `strprintf` , `LogPrint[f]` .
- *Rationale*: This is redundant. Tinyformat handles strings.
- Do not use it to convert to `QString` . Use `QString::fromStdString()` .
- *Rationale*: Qt has built-in functionality for converting their string
type from/to C++. No need to roll your own.
- In cases where you do call `.c_str()` , you might want to additionally check that the string does not contain embedded '\0' characters, because
it will (necessarily) truncate the string. This might be used to hide parts of the string from logging or to circumvent
checks. If a use of strings is sensitive to this, take care to check the string for embedded NULL characters first
and reject it if there are any (see `ParsePrechecks` in `strencodings.cpp` for an example).
Shadowing
--------------
Although the shadowing warning (`-Wshadow`), which prevents issues arising from
accidentally using a different variable with the same name, is not enabled by default,
please name variables so that their names do not shadow variables defined elsewhere in the source code.
When using nested loops, do not give the inner loop variable the same name as
the outer loop variable, and so on.
Threads and synchronization
----------------------------
- Build and run tests with `-DDEBUG_LOCKORDER` to verify that no potential
deadlocks are introduced.
- When using `LOCK` /`TRY_LOCK` be aware that the lock exists in the context of
the current scope, so surround the statement and the code that needs the lock
with braces.
OK:
```c++
{
TRY_LOCK(cs_vNodes, lockNodes);
...
}
```
Wrong:
```c++
TRY_LOCK(cs_vNodes, lockNodes);
{
...
}
```
Scripts
--------------------------
### Shebang
- Use `#!/usr/bin/env bash` instead of obsolete `#!/bin/bash` .
- [*Rationale* ](https://github.com/dylanaraps/pure-bash-bible#shebang ):
`#!/bin/bash` assumes it is always installed to /bin/ which can cause issues;
`#!/usr/bin/env bash` searches the user's PATH to find the bash binary.
OK:
```bash
#!/usr/bin/env bash
```
Wrong:
```bash
#!/bin/bash
```
Source code organization
--------------------------
- Implementation code should go into the `.cpp` file and not the `.h` , unless necessary due to template usage or
when inlining is critical for performance.
- *Rationale*: Shorter and simpler header files are easier to read and reduce compile time.
- Use only the lowercase alphanumerics (`a-z0-9`), underscore (`_`) and hyphen (`-`) in source code filenames.
- *Rationale*: `grep` :ing and auto-completing filenames is easier when using a consistent
naming pattern. Potential problems when building on case-insensitive filesystems are
avoided when using only lowercase characters in source code filenames.
- Every `.cpp` and `.h` file should `#include` every header file it directly uses classes, functions or other
definitions from, even if those headers are already included indirectly through other headers.
- *Rationale*: Excluding headers because they are already indirectly included results in compilation
failures when those indirect dependencies change. Furthermore, it obscures what the real code
dependencies are.
- Don't import anything into the global namespace (`using namespace ...`). Use
fully specified types such as `std::string` .
- *Rationale*: Avoids symbol conflicts.
- Terminate namespaces with a comment (`// namespace mynamespace`). The comment
should be placed on the same line as the brace closing the namespace, e.g.
```c++
namespace mynamespace {
...
} // namespace mynamespace
namespace {
...
} // namespace
```
- *Rationale*: Avoids confusion about the namespace context.
- Use `#include <primitives/transaction.h>` bracket syntax instead of
`#include "primitives/transaction.h"` quote syntax.
- *Rationale*: Bracket syntax is less ambiguous because the preprocessor
searches a fixed list of include directories without taking location of the
source file into account. This allows quoted includes to stand out more when
the location of the source file actually is relevant.
- Use include guards to avoid the problem of double inclusion. The header file
`foo/bar.h` should use the include guard identifier `BITCOIN_FOO_BAR_H` , e.g.
```c++
#ifndef BITCOIN_FOO_BAR_H
#define BITCOIN_FOO_BAR_H
...
#endif // BITCOIN_FOO_BAR_H
```
GUI
-----
- Do not display or manipulate dialogs in model code (classes `*Model` ).
- *Rationale*: Model classes pass through events and data from the core, they
should not interact with the user. That's where View classes come in. The converse also
holds: try to not directly access core data structures from Views.
- Avoid adding slow or blocking code in the GUI thread. In particular, do not
add new `interfaces::Node` and `interfaces::Wallet` method calls, even if they
may be fast now, in case they are changed to lock or communicate across
processes in the future.
Prefer to offload work from the GUI thread to worker threads (see
`RPCExecutor` in console code as an example) or take other steps (see
https://doc.qt.io/archives/qq/qq27-responsive-guis.html) to keep the GUI
responsive.
- *Rationale*: Blocking the GUI thread can increase latency, and lead to
hangs and deadlocks.
Subtrees
----------
Several parts of the repository are subtrees of software maintained elsewhere.
Some of these are maintained by active developers of Bitcoin Core, in which case changes should probably go
directly upstream without being PRed directly against the project. They will be merged back in the next
subtree merge.
Others are external projects without a tight relationship with our project. Changes to these should also
be sent upstream, but bugfixes may also be prudent to PR against Dash Core so that they can be integrated
quickly. Cosmetic changes should be purely taken upstream.
There is a tool in `test/lint/git-subtree-check.sh` to check a subtree directory for consistency with
its upstream repository.
Current subtrees include:
- src/leveldb
- Upstream at https://github.com/google/leveldb ; Maintained by Google, but
open important PRs to Core to avoid delay.
- **Note**: Follow the instructions in [Upgrading LevelDB ](#upgrading-leveldb ) when
merging upstream changes to the LevelDB subtree.
- src/crc32c
- Used by leveldb for hardware acceleration of CRC32C checksums for data integrity.
- Upstream at https://github.com/google/crc32c ; Maintained by Google.
- src/libsecp256k1
- Upstream at https://github.com/bitcoin-core/secp256k1/ ; actively maintained by Core contributors.
- src/crypto/ctaes
- Upstream at https://github.com/bitcoin-core/ctaes ; actively maintained by Core contributors.
- src/univalue
- Upstream at https://github.com/bitcoin-core/univalue ; actively maintained by Core contributors, deviates from upstream https://github.com/jgarzik/univalue
Upgrading LevelDB
---------------------
Extra care must be taken when upgrading LevelDB. This section explains issues
you must be aware of.
### File Descriptor Counts
In most configurations, we use the default LevelDB value for `max_open_files` ,
which is 1000 at the time of this writing. If LevelDB actually uses this many
file descriptors, it will cause problems with Bitcoin's `select()` loop, because
it may cause new sockets to be created where the fd value is >= 1024. For this
reason, on 64-bit Unix systems, we rely on an internal LevelDB optimization that
uses `mmap()` + `close()` to open table files without actually retaining
references to the table file descriptors. If you are upgrading LevelDB, you must
sanity check the changes to make sure that this assumption remains valid.
In addition to reviewing the upstream changes in `env_posix.cc`, you can use `lsof` to
check this. For example, on Linux this command will show open `.ldb` file counts:
```bash
$ lsof -p $(pidof dashd) |\
awk 'BEGIN { fd=0; mem=0; } /ldb$/ { if ($4 == "mem") mem++; else fd++ } END { printf "mem = %s, fd = %s\n", mem, fd}'
mem = 119, fd = 0
```
The `mem` value shows how many files are mmap'ed, and the `fd` value shows how
many file descriptors these files are using. You should check that `fd` is a
small number (usually 0 on 64-bit hosts).
See the notes in the `SetMaxOpenFiles()` function in `dbwrapper.cc` for more
details.
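For quick checks outside of a shell, the same mem/fd classification done by the awk one-liner above can be mirrored in Python. This is an illustrative sketch; the sample `lsof` lines below are made up for illustration:

```python
def count_ldb(lsof_lines):
    """Count open .ldb files, split by whether lsof reports them as
    memory-mapped ("mem" in the FD column) or as a real descriptor."""
    mem = fd = 0
    for line in lsof_lines:
        fields = line.split()
        # lsof columns: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        if len(fields) >= 9 and fields[-1].endswith("ldb"):
            if fields[3] == "mem":
                mem += 1
            else:
                fd += 1
    return mem, fd

# Hypothetical lsof output for two table files:
sample = [
    "dashd 1234 user mem REG 8,1 2097152 42 /home/user/.dashcore/blocks/index/000005.ldb",
    "dashd 1234 user 17r REG 8,1 2097152 43 /home/user/.dashcore/blocks/index/000006.ldb",
]
print(count_ldb(sample))  # (1, 1): one mmap'ed file, one real descriptor
```

On a healthy 64-bit host you would expect the second count to stay near zero, matching the `fd = 0` output shown above.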
### Consensus Compatibility
It is possible for LevelDB changes to inadvertently change consensus
compatibility between nodes. This happened in Bitcoin 0.8 (when LevelDB was
first introduced). When upgrading LevelDB, you should review the upstream changes
to check for issues affecting consensus compatibility.
For example, if LevelDB had a bug that accidentally prevented a key from being
returned in an edge case, and that bug was fixed upstream, the bug "fix" would
be an incompatible consensus change. In this situation, the correct behavior
would be to revert the upstream fix before applying the updates to Bitcoin's
copy of LevelDB. In general, you should be wary of any upstream changes affecting
what data is returned from LevelDB queries.
Scripted diffs
--------------
For reformatting and refactoring commits where the changes can be easily automated using a bash script, we use
scripted-diff commits. The bash script is included in the commit message and our Travis CI job checks that
the result of the script is identical to the commit. This aids reviewers since they can verify that the script
does exactly what it is supposed to do. It is also helpful for rebasing (since the same script can just be re-run
on the new master commit).
To create a scripted-diff:
- start the commit message with `scripted-diff:` (and then a description of the diff on the same line)
- in the commit message include the bash script between lines containing just the following text:
- `-BEGIN VERIFY SCRIPT-`
- `-END VERIFY SCRIPT-`
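Putting this together, a scripted-diff commit message might look like the following (the class rename and file globs here are hypothetical):

```
scripted-diff: Rename CFoo to CBar

-BEGIN VERIFY SCRIPT-
sed -i 's/\<CFoo\>/CBar/g' $(git grep -l CFoo -- 'src/*.cpp' 'src/*.h')
-END VERIFY SCRIPT-
```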
The scripted-diff is verified by the tool `test/lint/commit-script-check.sh`. The tool's default behavior, when supplied
with a commit, is to verify all scripted-diffs from the beginning of time up to said commit. Internally, the tool passes
the first supplied argument to `git rev-list --reverse` to determine which commits to verify script-diffs for, ignoring
commits that don't conform to the commit message format described above.
For development, it might be more convenient to verify all scripted-diffs in a range `A..B`, for example:
```bash
test/lint/commit-script-check.sh origin/master..HEAD
```
### Suggestions and examples
If you need to replace in multiple files, prefer `git ls-files` to `find` or globbing, and `git grep` to `grep`, to
avoid changing files that are not under version control.
For efficient replacement scripts, reduce the selection to the files that potentially need to be modified, so for
example, instead of a blanket `git ls-files src | xargs sed -i s/apple/orange/` , use
`git grep -l apple src | xargs sed -i s/apple/orange/`.
Also, it is good to keep the selection of files as specific as possible — for example, replace only in directories where
you expect replacements — because it reduces the risk that a rebase of your commit by re-running the script will
introduce accidental changes.
Some good examples of scripted-diff:
- [scripted-diff: Rename InitInterfaces to NodeContext ](https://github.com/bitcoin/bitcoin/commit/301bd41a2e6765b185bd55f4c541f9e27aeea29d )
uses an elegant script to replace occurrences of multiple terms in all source files.
- [scripted-diff: Remove g_connman, g_banman globals ](https://github.com/bitcoin/bitcoin/commit/8922d7f6b751a3e6b3b9f6fb7961c442877fb65a )
replaces specific terms in a list of specific source files.
- [scripted-diff: Replace fprintf with tfm::format ](https://github.com/bitcoin/bitcoin/commit/fac03ec43a15ad547161e37e53ea82482cc508f9 )
does a global replacement but excludes certain directories.
To find all previous uses of scripted diffs in the repository, do:
```
git log --grep="-BEGIN VERIFY SCRIPT-"
```
Release notes
-------------
Release notes should be written for any PR that:
- introduces a notable new feature
- fixes a significant bug
- changes an API or configuration model
- makes any other visible change to the end-user experience.
Release notes should be added to a PR-specific release note file at
`/doc/release-notes-<PR number>.md` to avoid conflicts between multiple PRs.
All `release-notes*` files are merged into a single
[/doc/release-notes.md ](/doc/release-notes.md ) file prior to the release.
RPC interface guidelines
--------------------------
A few guidelines for introducing and reviewing new RPC interfaces:
- Method naming: use consecutive lower-case names such as `getrawtransaction` and `submitblock`.
- *Rationale*: Consistency with the existing interface.
- Argument naming: use snake case `fee_delta` (and not, e.g., camel case `feeDelta`)
- *Rationale*: Consistency with the existing interface.
- Use the JSON parser for parsing; don't manually parse integers or strings from
arguments unless absolutely necessary.
- *Rationale*: Introduces hand-rolled string manipulation code at both the caller and callee sites,
which is error-prone, and it is easy to get things such as escaping wrong.
JSON already supports nested data structures, so there is no need to re-invent the wheel.
- *Exception*: `AmountFromValue` can parse amounts as a string. This was introduced because many JSON
parsers and formatters hard-code handling decimal numbers as floating-point
2017-05-02 07:52:34 +02:00
values, resulting in potential loss of precision. This is unacceptable for
monetary values. **Always** use `AmountFromValue` and `ValueFromAmount` when
inputting or outputting monetary values. The only exceptions to this are
`prioritisetransaction` and `getblocktemplate` because their interface
is specified as-is in BIP22.
- Missing arguments and 'null' should be treated the same: as default values. If there is no
default value, both cases should fail in the same way. The easiest way to follow this
guideline is to detect unspecified arguments with `params[x].isNull()` instead of
2017-08-22 09:24:31 +02:00
`params.size() <= x`. The former returns true if the argument is either null or missing,
while the latter returns true if it is missing, and false if it is null.
- *Rationale*: Avoids surprises when switching to name-based arguments. Missing name-based arguments
are passed as 'null'.
- Try not to overload methods on argument type. E.g. don't make `getblock(true)` and `getblock("hash")`
do different things.
- *Rationale*: This is impossible to use with `dash-cli`, and can be surprising to users.
- *Exception*: Some RPC calls can take both an `int` and `bool` , most notably when a bool was switched
to a multi-value, or due to other historical reasons. **Always** have false map to 0 and
true to 1 in this case.
- Don't forget to fill in the argument names correctly in the RPC command table.
  - *Rationale*: If not, the call cannot be used with name-based arguments.
- Add every non-string RPC argument `(method, idx, name)` to the table `vRPCConvertParams` in `rpc/client.cpp` .
- *Rationale*: `dash-cli` and the GUI debug console use this table to determine how to
convert a plaintext command line to JSON. If the types don't match, the method can be unusable
from there.
- An RPC method must either be a wallet method or a non-wallet method. Do not
introduce new methods that differ in behavior based on the presence of a wallet.
2017-05-02 07:52:34 +02:00
- *Rationale*: as well as complicating the implementation and interfering
with the introduction of multi-wallet, wallet and non-wallet code should be
separated to avoid introducing circular dependencies between code units.
- Try to make the RPC response a JSON object.
- *Rationale*: If an RPC response is not a JSON object, then it is harder to avoid API breakage if
2017-09-06 19:38:33 +02:00
new data in the response is needed.
- Wallet RPCs call `BlockUntilSyncedToCurrentChain` to maintain consistency with
`getblockchaininfo`'s state immediately prior to the call's execution. Wallet
RPCs whose behavior does *not* depend on the current chainstate may omit this
call.
- *Rationale*: In previous versions of Dash Core, the wallet was always
2017-11-15 14:14:20 +01:00
in-sync with the chainstate (by virtue of them all being updated in the
same `cs_main` lock). In order to maintain the behavior that wallet RPCs
return results as of at least the highest best-known block an RPC
client may be aware of prior to entering a wallet RPC call, we must block
until the wallet is caught up to the chainstate as of the RPC call's entry.
This also makes the API much easier for RPC clients to reason about.
- Be aware of RPC method aliases and generally avoid registering the same
callback function pointer for different RPCs.
- *Rationale*: RPC methods registered with the same function pointer will be
considered aliases and only the first method name will show up in the
`help` RPC command list.
- *Exception*: Using RPC method aliases may be appropriate in cases where a
new RPC is replacing a deprecated RPC, to avoid both RPCs confusingly
showing up in the command list.
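The floating-point hazard behind the `AmountFromValue` guideline above is easy to demonstrate. This Python sketch (not Dash Core code; the `COIN` constant is assumed to be 10^8 duffs per DASH) shows a one-duff error from naive float conversion:

```python
from decimal import Decimal

COIN = 100_000_000  # duffs per DASH (assumed for illustration)

# Naive conversion through a binary double: 0.29 is not exactly
# representable, so the product falls just below 29000000 and the
# truncation loses a duff.
duffs_float = int(0.29 * COIN)
print(duffs_float)    # 28999999

# Parsing the amount as a decimal string, as AmountFromValue does on the
# C++ side, preserves the exact value.
duffs_decimal = int(Decimal("0.29") * COIN)
print(duffs_decimal)  # 29000000
```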
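The difference between the `params[x].isNull()` and `params.size() <= x` checks described above can be sketched in Python, with `None` standing in for JSON null (the function and argument names here are hypothetical, not a real RPC):

```python
def get_flag(params, x=1, default=True):
    """Contrast the two ways of detecting an unspecified RPC argument."""
    # Size-based check (the params.size() <= x pattern): only a *missing*
    # argument gets the default, so an explicit null leaks through as-is.
    size_based = default if len(params) <= x else params[x]
    # Null-or-missing check (the params[x].isNull() pattern): an omitted
    # argument and an explicit null both get the default.
    null_based = default if x >= len(params) or params[x] is None else params[x]
    return size_based, null_based

print(get_flag(["somehash"]))        # (True, True) -- argument omitted
print(get_flag(["somehash", None]))  # (None, True) -- explicit null diverges
```

Only the null-or-missing form gives the behavior required for name-based arguments, where an omitted argument arrives as null.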