All checks were successful
Gitea Actions Demo / Explore-Gitea-Actions (push) Successful in 1m12s

This commit is contained in:
SikkieNL 2024-12-17 20:59:46 +01:00
parent 5b1e0a61e0
commit d5f3b55544

5 changed files with 38 additions and 38 deletions

	modified:   ci/Dockerfile.builder
	modified:   ci/Dockerfile.gitian-builder
	modified:   ci/matrix.sh
	modified:   ci/test_unittests.sh
	modified:   test/README.md

ci/Dockerfile.builder

@@ -16,9 +16,9 @@ RUN apt-get update && apt-get install -y python3-pip
 RUN pip3 install pyzmq # really needed?
 RUN pip3 install jinja2
-# dash_hash
-RUN git clone https://github.com/dashpay/dash_hash
-RUN cd dash_hash && python3 setup.py install
+# neobytes_hash
+RUN git clone https://github.com/neobytes-project/neobytes_hash
+RUN cd neobytes_hash && python3 setup.py install
 ARG USER_ID=1000
 ARG GROUP_ID=1000
@@ -26,8 +26,8 @@ ARG GROUP_ID=1000
 # add user with specified (or default) user/group ids
 ENV USER_ID ${USER_ID}
 ENV GROUP_ID ${GROUP_ID}
-RUN groupadd -g ${GROUP_ID} dash
-RUN useradd -u ${USER_ID} -g dash -s /bin/bash -m -d /dash dash
+RUN groupadd -g ${GROUP_ID} neobytes
+RUN useradd -u ${USER_ID} -g neobytes -s /bin/bash -m -d /neobytes neobytes
 # Extra packages
 ARG BUILD_TARGET=linux64
@@ -45,13 +45,13 @@ RUN \
     update-alternatives --set x86_64-w64-mingw32-g++ /usr/bin/x86_64-w64-mingw32-g++-posix; \
     exit 0
-RUN mkdir /dash-src && \
+RUN mkdir /neobytes-src && \
     mkdir -p /cache/ccache && \
     mkdir /cache/depends && \
     mkdir /cache/sdk-sources && \
-    chown $USER_ID:$GROUP_ID /dash-src && \
+    chown $USER_ID:$GROUP_ID /neobytes-src && \
     chown $USER_ID:$GROUP_ID /cache && \
     chown $USER_ID:$GROUP_ID /cache -R
-WORKDIR /dash-src
-USER dash
+WORKDIR /neobytes-src
+USER neobytes
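
Renaming the in-container user and source directory does not change how the builder image is consumed: the `USER_ID`/`GROUP_ID` build arguments still exist so that files written to mounted volumes end up owned by the host user. A minimal sketch of building and entering the image under that assumption (the `neobytes-builder` tag and the mount invocation are illustrative, not taken from the CI scripts):

```shell
# Build the CI builder image so the in-container "neobytes" user
# mirrors the host uid/gid (avoids root-owned files on mounted volumes).
docker build \
  --build-arg USER_ID="$(id -u)" \
  --build-arg GROUP_ID="$(id -g)" \
  -t neobytes-builder \
  -f ci/Dockerfile.builder .

# Mount the checkout at the image's WORKDIR and open a shell as that user.
docker run --rm -it -v "$(pwd):/neobytes-src" neobytes-builder /bin/bash
```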

ci/Dockerfile.gitian-builder

@@ -10,8 +10,8 @@ ARG GROUP_ID=1000
 # add user with specified (or default) user/group ids
 ENV USER_ID ${USER_ID}
 ENV GROUP_ID ${GROUP_ID}
-RUN groupadd -g ${GROUP_ID} dash
-RUN useradd -u ${USER_ID} -g dash -s /bin/bash -m -d /dash dash
-WORKDIR /dash
-USER dash
+RUN groupadd -g ${GROUP_ID} neobytes
+RUN useradd -u ${USER_ID} -g neobytes -s /bin/bash -m -d /neobytes neobytes
+WORKDIR /neobytes
+USER neobytes

ci/matrix.sh

@@ -7,7 +7,7 @@ export BUILD_TARGET=${BUILD_TARGET:-linux64}
 export PULL_REQUEST=${PULL_REQUEST:-false}
 export JOB_NUMBER=${JOB_NUMBER:-1}
-export BUILDER_IMAGE_NAME="dash-builder-$BUILD_TARGET-$JOB_NUMBER"
+export BUILDER_IMAGE_NAME="neobytes-builder-$BUILD_TARGET-$JOB_NUMBER"
 export HOST_SRC_DIR=${HOST_SRC_DIR:-$(pwd)}
 export HOST_CACHE_DIR=${HOST_CACHE_DIR:-$(pwd)/ci-cache-$BUILD_TARGET}
@@ -57,7 +57,7 @@ elif [ "$BUILD_TARGET" = "linux32" ]; then
   export HOST=i686-pc-linux-gnu
   export PACKAGES="g++-multilib bc python3-zmq"
   export BITCOIN_CONFIG="--enable-zmq --enable-glibc-back-compat --enable-reduce-exports --enable-stacktraces LDFLAGS=-static-libstdc++"
-  export USE_SHELL="/bin/dash"
+  export USE_SHELL="/bin/neobytes"
   export PYZMQ=true
   export RUN_UNITTESTS=true
   export RUN_INTEGRATIONTESTS=true
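
ci/matrix.sh leans throughout on the `${VAR:-default}` expansion, so a CI job can override any of these knobs from its environment while standalone runs fall back to defaults. A small illustration of the idiom as used for the builder image name (values shown are the script's own defaults):

```shell
# Caller-supplied values win; otherwise the fallback after ":-" is used.
export BUILD_TARGET=${BUILD_TARGET:-linux64}
export JOB_NUMBER=${JOB_NUMBER:-1}
export BUILDER_IMAGE_NAME="neobytes-builder-$BUILD_TARGET-$JOB_NUMBER"

# With neither variable set in the environment this prints:
#   neobytes-builder-linux64-1
echo "$BUILDER_IMAGE_NAME"
```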

ci/test_unittests.sh

@@ -22,7 +22,7 @@ cd build-ci/dashcore-$BUILD_TARGET
 if [ "$DIRECT_WINE_EXEC_TESTS" = "true" ]; then
   # Inside Docker, binfmt isn't working so we can't trust in make invoking windows binaries correctly
-  wine ./src/test/test_dash.exe
+  wine ./src/test/test_neobytes.exe
 else
   make $MAKEJOBS check VERBOSE=1
 fi
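
The `DIRECT_WINE_EXEC_TESTS` switch exists because `binfmt_misc` is typically unavailable inside the Docker builder, so `make check` cannot transparently launch `.exe` binaries. A hypothetical invocation for a Windows target under that assumption (the script path follows this repository's layout; wine must be installed in the image):

```shell
# Run the win64 unit tests by exec'ing wine directly instead of
# relying on make/binfmt to launch Windows binaries.
export BUILD_TARGET=win64
export DIRECT_WINE_EXEC_TESTS=true
./ci/test_unittests.sh
```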

test/README.md

@@ -1,4 +1,4 @@
-This directory contains integration tests that test dashd and its
+This directory contains integration tests that test neobytesd and its
 utilities in their entirety. It does not contain unit tests, which
 can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test),
 etc.
@@ -6,10 +6,10 @@ etc.
 There are currently two sets of tests in this directory:
 - [functional](/test/functional) which test the functionality of
-dashd and dash-qt by interacting with them through the RPC and P2P
+neobytesd and neobytes-qt by interacting with them through the RPC and P2P
 interfaces.
-- [util](test/util) which tests the dash utilities, currently only
-dash-tx.
+- [util](test/util) which tests the neobytes utilities, currently only
+neobytes-tx.
 The util tests are run as part of `make check` target. The functional
 tests are run by the travis continuous build process whenever a pull
@@ -70,29 +70,29 @@ options. Run `test_runner.py -h` to see them all.
 ##### Resource contention
-The P2P and RPC ports used by the dashd nodes-under-test are chosen to make
-conflicts with other processes unlikely. However, if there is another dashd
+The P2P and RPC ports used by the neobytesd nodes-under-test are chosen to make
+conflicts with other processes unlikely. However, if there is another neobytesd
 process running on the system (perhaps from a previous test which hasn't successfully
-killed all its dashd nodes), then there may be a port conflict which will
+killed all its neobytesd nodes), then there may be a port conflict which will
 cause the test to fail. It is recommended that you run the tests on a system
-where no other dashd processes are running.
+where no other neobytesd processes are running.
 On linux, the test_framework will warn if there is another
-dashd process running when the tests are started.
-If there are zombie dashd processes after test failure, you can kill them
+neobytesd process running when the tests are started.
+If there are zombie neobytesd processes after test failure, you can kill them
 by running the following commands. **Note that these commands will kill all
-dashd processes running on the system, so should not be used if any non-test
-dashd processes are being run.**
+neobytesd processes running on the system, so should not be used if any non-test
+neobytesd processes are being run.**
 ```bash
-killall dashd
+killall neobytesd
 ```
 or
 ```bash
-pkill -9 dashd
+pkill -9 neobytesd
 ```
@@ -103,11 +103,11 @@ functional test is run and is stored in test/cache. This speeds up
 test startup times since new blockchains don't need to be generated for
 each test. However, the cache may get into a bad state, in which case
 tests will fail. If this happens, remove the cache directory (and make
-sure dashd processes are stopped as above):
+sure neobytesd processes are stopped as above):
 ```bash
 rm -rf cache
-killall dashd
+killall neobytesd
 ```
 ##### Test logging
@@ -120,13 +120,13 @@ default:
 - when run directly, *all* logs are written to `test_framework.log` and INFO
 level and above are output to the console.
 - when run on Travis, no logs are output to the console. However, if a test
-fails, the `test_framework.log` and dashd `debug.log`s will all be dumped
+fails, the `test_framework.log` and neobytesd `debug.log`s will all be dumped
 to the console to help troubleshooting.
 To change the level of logs output to the console, use the `-l` command line
 argument.
-`test_framework.log` and dashd `debug.log`s can be combined into a single
+`test_framework.log` and neobytesd `debug.log`s can be combined into a single
 aggregate log by running the `combine_logs.py` script. The output can be plain
 text, colorized text or html. For example:
@@ -153,9 +153,9 @@ import pdb; pdb.set_trace()
 ```
 anywhere in the test. You will then be able to inspect variables, as well as
-call methods that interact with the dashd nodes-under-test.
-If further introspection of the dashd instances themselves becomes
+call methods that interact with the neobytesd nodes-under-test.
+If further introspection of the neobytesd instances themselves becomes
 necessary, this can be accomplished by first setting a pdb breakpoint
 at an appropriate location, running the test to that point, then using
 `gdb` to attach to the process and debug.
@@ -169,8 +169,8 @@ For instance, to attach to `self.node[1]` during a run:
 use the directory path to get the pid from the pid file:
 ```bash
-cat /tmp/user/1000/testo9vsdjo3/node1/regtest/dashd.pid
-gdb /home/example/dashd <pid>
+cat /tmp/user/1000/testo9vsdjo3/node1/regtest/neobytesd.pid
+gdb /home/example/neobytesd <pid>
 ```
 Note: gdb attach step may require `sudo`