Guidelines for test developers
==============================

How to add recipes
------------------

For any test that you want to perform, you write a script located in
`test/recipes/`, named `{nn}-test_{name}.t`, where `{nn}` is a two-digit
number and `{name}` is a unique name of your choice.

Please note that if a test involves a new testing executable, you will need to
make some additions in `test/build.info`.  Please refer to the section
["Changes to test/build.info"](#changes-to-testbuildinfo) below.

Naming conventions
------------------

A test executable is named `test/{name}test.c`.

A test recipe is named `test/recipes/{nn}-test_{name}.t`, where `{nn}` is a
two-digit number and `{name}` is a unique name of your choice.

The number `{nn}` is (somewhat loosely) grouped as follows:

    00-04  sanity, internal and essential API tests
    05-09  individual symmetric cipher algorithms
    10-14  math (bignum)
    15-19  individual asymmetric cipher algorithms
    20-24  openssl commands (some otherwise not tested)
    25-29  certificate forms, generation and verification
    30-35  engine and evp
    60-79  APIs:
       60  X509 subsystem
       61  BIO subsystem
       65  CMP subsystem
       70  PACKET layer
    80-89  "larger" protocols (CA, CMS, OCSP, SSL, TSA)
    90-98  misc
    99     most time consuming tests [such as test_fuzz]
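
For example, a hypothetical miscellaneous test named `foo` would have a recipe
called:

    test/recipes/90-test_foo.t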

A recipe that just runs a test executable
-----------------------------------------

A script that just runs a program looks like this:

    #! /usr/bin/perl

    use OpenSSL::Test::Simple;

    simple_test("test_{name}", "{name}test", "{name}");

`{name}` is the unique name you have chosen for your test.

The second argument to `simple_test` is the test executable, and `simple_test`
expects it to be located in `test/`.
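
For instance, following the template above for a hypothetical test named
`foo` (with its executable built from `test/footest.c`), the recipe body
would be:

    use OpenSSL::Test::Simple;

    simple_test("test_foo", "footest", "foo");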

For documentation on `OpenSSL::Test::Simple`,
do `perldoc util/perl/OpenSSL/Test/Simple.pm`.

A recipe that runs a more complex test
--------------------------------------

For more complex tests, you will need to read up on Test::More and
OpenSSL::Test.  Test::More is normally preinstalled, do `man Test::More` for
documentation.  For OpenSSL::Test, do `perldoc util/perl/OpenSSL/Test.pm`.

A script to start from could be this:

    #! /usr/bin/perl

    use strict;
    use warnings;
    use OpenSSL::Test;

    setup("test_{name}");

    plan tests => 2;                # The number of tests being performed

    ok(test1, "test1");
    ok(test2, "test2");

    sub test1
    {
        # test feature 1
    }

    sub test2
    {
        # test feature 2
    }
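
As a more concrete (and entirely hypothetical) sketch, the test subroutines
typically use `run`, `test` and `app` from OpenSSL::Test to execute a test
program or an `openssl` command and check its exit status:

    #! /usr/bin/perl

    use strict;
    use warnings;
    use OpenSSL::Test;

    setup("test_foo");

    plan tests => 2;

    # Run the test executable built from a hypothetical test/footest.c
    ok(run(test(["footest"])), "running footest");

    # Exercise the openssl command line
    ok(run(app(["openssl", "version"])), "running openssl version");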

Changes to test/build.info
--------------------------

Whenever a new test involves a new test executable, you need to do the
following (at all times, replace `{name}` with the name of your test):

 * add `{name}` to the list of programs under `PROGRAMS_NO_INST`

 * create a three-line description of how to build the test; you will have
   to modify the include paths and source files if you don't want to use the
   basic test framework (see the complete example below):

       SOURCE[{name}]={name}.c
       INCLUDE[{name}]=.. ../include ../apps/include
       DEPEND[{name}]=../libcrypto libtestutil.a
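
Putting these together for a hypothetical test executable `footest` (built
from `test/footest.c`), the additions to `test/build.info` would look
something like this (in practice, `footest` is appended to the existing
`PROGRAMS_NO_INST` list):

    PROGRAMS_NO_INST=footest

    SOURCE[footest]=footest.c
    INCLUDE[footest]=.. ../include ../apps/include
    DEPEND[footest]=../libcrypto libtestutil.a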

Generic form of C test executables
----------------------------------

    #include "testutil.h"

    static int my_test(void)
    {
        int testresult = 0;                 /* Assume the test will fail    */
        int observed;

        observed = function();              /* Call the code under test     */
        if (!TEST_int_eq(observed, 2))      /* Check the result is correct  */
            goto end;                       /* Exit on failure - optional   */

        testresult = 1;                     /* Mark the test case a success */
    end:
        cleanup();                          /* Any cleanup you require      */
        return testresult;
    }

    int setup_tests(void)
    {
        ADD_TEST(my_test);                  /* Add each test separately     */
        return 1;                           /* Indicate success             */
    }
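
As a concrete (though hypothetical) instance of this pattern, a test of
`OPENSSL_strnlen()` from libcrypto might look like this:

    #include <openssl/crypto.h>
    #include "testutil.h"

    /* Check that OPENSSL_strnlen() honours its length limit */
    static int test_strnlen(void)
    {
        int testresult = 0;

        if (!TEST_size_t_eq(OPENSSL_strnlen("abcdef", 3), 3))
            goto end;
        if (!TEST_size_t_eq(OPENSSL_strnlen("ab", 10), 2))
            goto end;

        testresult = 1;
    end:
        return testresult;
    }

    int setup_tests(void)
    {
        ADD_TEST(test_strnlen);
        return 1;
    }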

You should use the `TEST_xxx` macros provided by `testutil.h` to test all failure
conditions.  These macros produce an error message in a standard format if the
condition is not met (and nothing if the condition is met).  Additional
information can be presented with the `TEST_info` macro that takes a `printf`
format string and arguments.  `TEST_error` is useful for complicated conditions;
it also takes a `printf` format string and arguments.  In all cases the `TEST_xxx`
macros are guaranteed to evaluate their arguments exactly once.  This means
that expressions with side effects are allowed as parameters.  Thus,

    if (!TEST_ptr(ptr = OPENSSL_malloc(..)))

works fine and can be used in place of:

    ptr = OPENSSL_malloc(..);
    if (!TEST_ptr(ptr))

The former produces a more meaningful message on failure than the latter.
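
Similarly, `TEST_info` can attach extra context to a failure.  A hypothetical
helper (all names are illustrative, assuming the `testutil.h` setup above)
might look like:

    /* Compare a result against a known answer, reporting context on failure */
    static int check_kat(const unsigned char *got, const unsigned char *want,
                         size_t len, const char *desc)
    {
        if (TEST_mem_eq(got, len, want, len))
            return 1;
        TEST_info("known answer test failed for %s", desc);
        return 0;
    }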

Note that the test infrastructure automatically sets up all required environment
variables (such as `OPENSSL_MODULES`, `OPENSSL_CONF`, etc.) for the tests.
Individual tests may choose to override the default settings as required.
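
For example, a recipe that needs a particular configuration file can override
the default before running anything; a minimal sketch (the file name here is
hypothetical):

    use OpenSSL::Test qw/:DEFAULT srctop_file/;

    $ENV{OPENSSL_CONF} = srctop_file("test", "some.cnf");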