author    Matan Kushner <hello@matchai.me>    2020-01-06 00:35:46 -0500
committer GitHub <noreply@github.com>    2020-01-06 00:35:46 -0500
commit    09fe0afc140873f060ab18fd02a35245143e5841 (patch)
tree      edbbd015347e34f041fcdc88508fb250bad2903a /CONTRIBUTING.md
parent    45e6b3e05284444e3237448515cfe599461b4c96 (diff)
ci: Remove Docker test environment from CI (#806)
Diffstat (limited to 'CONTRIBUTING.md')
-rw-r--r--    CONTRIBUTING.md    23
1 file changed, 6 insertions, 17 deletions
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index dee90d1f4..b74cacc8e 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -61,8 +61,8 @@ cargo fmt
Testing is critical to making sure starship works as intended on systems big and small. Starship interfaces with many applications and system APIs when generating the prompt, so there's a lot of room for bugs to slip in.
-Unit tests and a subset of acceptance tests can be run with `cargo test`.
-The full acceptance test suite can be run in a Docker container with the included [`./acceptance_test`](acceptance_test) script.
+Unit tests and a subset of integration tests can be run with `cargo test`.
+The full integration test suite is run as part of our GitHub Actions continuous integration workflow.
### Unit Testing
@@ -72,24 +72,13 @@ Unit tests should be fully isolated, only testing a given function's expected ou
The previous point should be emphasized: even seemingly innocuous ideas like "if we can see the directory, we can read it" or "nobody will have their home directory be a git repo" have bitten us in the past. Having even a single test fail can completely break installation on some platforms, so be careful with tests!
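As a rough sketch (the helper and test names below are hypothetical, not taken from the starship codebase), a fully isolated unit test builds all of its input inside the test and touches no filesystem, environment, or network state:

```rust
// Hypothetical helper: a pure function that formats a duration for the prompt.
fn format_duration(ms: u64) -> String {
    format!("{}ms", ms)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn formats_milliseconds() {
        // All input is constructed here; no preexisting state is assumed.
        assert_eq!(format_duration(250), "250ms");
    }
}
```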
-### Acceptance Testing
+### Integration Testing
-Acceptance tests are located in the [`tests/`](tests) directory and are also written using the built-in Rust testing library.
+Integration tests are located in the [`tests/`](tests) directory and are also written using the built-in Rust testing library.
-Acceptance tests should test full modules or the entire prompt. All acceptance tests expecting the testing environment to have preexisting state or making permanent changes to the filesystem should have the `#[ignore]` attribute. All tests that don't depend on any preexisting state will be run alongside the unit tests with `cargo test`.
+Integration tests should test full modules or the entire prompt. Any integration test that expects the testing environment to have preexisting state, or that makes permanent changes to the filesystem, should have the `#[ignore]` attribute. All tests that don't depend on any preexisting state will be run alongside the unit tests with `cargo test`.
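For illustration only (this test is not part of the starship suite; its name and the tool it checks are assumptions), an integration test that relies on preexisting state might look like this:

```rust
// Hypothetical sketch: this test expects git to already be installed on the
// machine (preexisting state), so it is marked #[ignore] and is skipped by a
// plain `cargo test`.
use std::process::Command;

#[test]
#[ignore]
fn reports_git_version_when_git_is_installed() {
    let output = Command::new("git")
        .arg("--version")
        .output()
        .expect("git should be installed in the test environment");
    assert!(output.status.success());
}
```

Ignored tests like this can still be run locally with `cargo test -- --ignored`.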
-Acceptance tests require Docker to be installed, as they are run inside a Docker container. This can be done as described in the official [documentation](https://docs.docker.com/install/). The acceptance tests can then be executed by running the included [`./acceptance_test`](acceptance_test) script. It might be necessary to run [`./acceptance_test`](acceptance_test) with `sudo` if your user is not part of the `docker` group.
-
-
-For tests that depend on having preexisting state, whatever needed state will have to be added to the project's Dockerfile ([`tests/Dockerfile`](tests/Dockerfile)) as well as the project's Azure Pipelines configuration ([`azure-pipelines.yml`](azure-pipelines.yml)).
-
-The reason for having _both_ the Dockerfile as well as the Azure Pipelines configuration is in order to allow acceptance tests to be run on your local development environment via Docker, while also running our test suite on all supported OSes (Windows, Mac, Linux) on Azure Pipelines.
-
-### Benchmarking
-
-Benchmarks are located in the [`benches/`](benches) directory and are written using the [Criterion](https://crates.io/crates/criterion) library.
-
-For the time being, benchmarks aren't actively used, but we plan to integrate benchmark comparison reporting into our CI pipeline in the near future. For the time being, they can be manually run with `cargo bench`.
+For tests that depend on preexisting state, any required state will have to be added to the project's GitHub Actions workflow file ([`.github/workflows/workflow.yml`](.github/workflows/workflow.yml)).
## Running the Documentation Website Locally