No, not the same two macros as MinUnit. These macros have a few more features, and thus require a bit more from the runtime. Together, the two macros are used to define test functions that run code in a new process, and that have consistent return values to indicate whether the test succeeded, failed, or had an error. All of the code in this post is available on GitHub, under the terms of the MIT License. What follows is a description of the code and how it works.

The first macro is called BEGIN_TEST, and it is used to begin a new test definition:

#define BEGIN_TEST(name)                        \
    static int test_##name(void)                \
    {                                           \
        pid_t pid = fork();                     \
        if (pid < 0) {                          \
            return -1;                          \
        }                                       \
        if (pid == 0) {                         \
            int failed = 0;

It starts by defining a new function, whose name is created using the ## token pasting operator. The function is declared static, which isn’t really necessary, but it fits in well with how I write tests: a single, “public” function that calls a set of related static test functions and aggregates their results.

The test function forks a new process, so that if the test code segfaults, it won’t take the whole test harness down, and the remaining tests can still run. Forking also allows us to test functions that should cause the process to exit. If the fork fails, then -1 is returned. This is the “error” return value, used to indicate that something has gone wrong while running the test.

The rest of the macro is the start of the child process code, which is where the test code itself lives. The failed variable is a flag, which is provided for convenience. It works with the END_TEST macro:

#define END_TEST(expected)                                      \
            exit(failed ? !expected : expected);                \
        } else {                                                \
            int status;                                         \
            waitpid(pid, &status, 0);                           \
            if (WIFEXITED(status)) {                            \
                return WEXITSTATUS(status) == expected ? 0 : 1; \
            }                                                   \
            return -1;                                          \
        }                                                       \
    }

The expected argument is the expected exit status of the child process after a successful run. By making this a parameter, things like the popular “exit with an error status if malloc returns NULL” pattern become testable. However, the most common case is for the exit status to depend on the value of the failed variable, so that’s what the call to exit does. If failed is still 0 by the time the exit line is reached, then the expected exit status is used. Otherwise, !expected (the “unexpected” exit status) is used instead.
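To make the exit-testing case concrete, here is a sketch of my own (not from the original code): a hypothetical require function that exits with EXIT_FAILURE when handed a null pointer, in the style of the malloc pattern, plus a test that expects exactly that status. The two macros are repeated from above so the example is self-contained.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The two macros, repeated from the post. */
#define BEGIN_TEST(name)                        \
    static int test_##name(void)                \
    {                                           \
        pid_t pid = fork();                     \
        if (pid < 0) {                          \
            return -1;                          \
        }                                       \
        if (pid == 0) {                         \
            int failed = 0;

#define END_TEST(expected)                                      \
            exit(failed ? !expected : expected);                \
        } else {                                                \
            int status;                                         \
            waitpid(pid, &status, 0);                           \
            if (WIFEXITED(status)) {                            \
                return WEXITSTATUS(status) == expected ? 0 : 1; \
            }                                                   \
            return -1;                                          \
        }                                                       \
    }

/* Hypothetical function under test: bails out on NULL, in the style of
   the "exit with an error status if malloc returns NULL" pattern. */
static void require(void *p)
{
    if (p == NULL) {
        fprintf(stderr, "allocation failed\n");
        exit(EXIT_FAILURE);
    }
}

BEGIN_TEST(require_exits_on_null) {
    require(NULL);  /* should terminate the child here... */
    failed = 1;     /* ...so this line should never run */
} END_TEST(EXIT_FAILURE)
```

If require behaves, the child exits with EXIT_FAILURE, the statuses match in the parent, and the test function returns 0, the “success” value.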

The else block is run in the parent process, and it calls waitpid on the child process’s PID, catching the exit status in a variable. If the child process exits normally, then its exit status is compared to the expected value. If they match, then the test function returns 0, which is the “success” value; otherwise, it returns 1, which is the “failed” value. If the child process did not exit normally because it was killed by a signal, then -1 (the “error” value) is returned.
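To see that error path in action, here is another sketch of my own (not from the original code): a test whose body calls abort, standing in for a crash. The child is killed by SIGABRT rather than exiting normally, so WIFEXITED is false and the test function returns -1. The macros are repeated so the example is self-contained.

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The two macros, repeated from the post. */
#define BEGIN_TEST(name)                        \
    static int test_##name(void)                \
    {                                           \
        pid_t pid = fork();                     \
        if (pid < 0) {                          \
            return -1;                          \
        }                                       \
        if (pid == 0) {                         \
            int failed = 0;

#define END_TEST(expected)                                      \
            exit(failed ? !expected : expected);                \
        } else {                                                \
            int status;                                         \
            waitpid(pid, &status, 0);                           \
            if (WIFEXITED(status)) {                            \
                return WEXITSTATUS(status) == expected ? 0 : 1; \
            }                                                   \
            return -1;                                          \
        }                                                       \
    }

BEGIN_TEST(aborts) {
    abort();  /* the child is killed by SIGABRT and never exits normally */
} END_TEST(EXIT_SUCCESS)
```

Because the parent takes the crash down the WIFEXITED-false path, the harness itself survives and can report the error.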

The two macros are intended to be used with a block of code in between, which defines a new scope for local variables:

BEGIN_TEST(contrived_example) {
    int a = 1;
    int b = 2;

    if (min(a, b) == b) {
        failed = 1;
    }
} END_TEST(EXIT_SUCCESS)

The preprocessor expands this to something like the following:1

static int test_contrived_example(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        return -1;
    }
    if (pid == 0) {
        int failed = 0;
        {
            int a = 1;
            int b = 2;

            if (min(a, b) == b) {
                failed = 1;
            }
        }
        exit(failed ? !EXIT_SUCCESS : EXIT_SUCCESS);
    } else {
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status)) {
            return WEXITSTATUS(status) == EXIT_SUCCESS ? 0 : 1;
        }                      
        return -1;
    }
}

Enhancements

Okay, so I lied a bit in the title. These two macros are sufficient for creating a set of test functions, and once those are created, it’s pretty easy to test their return values in some automated fashion. But by themselves, they make for a pretty bare-bones testing framework.

One easy enhancement is to define some macros for working with the failed variable inside the test blocks. I like FAIL_IF, which sets failed to 1 if some condition holds, and FAIL_UNLESS, which sets it to 1 if the condition doesn’t hold:

#define FAIL_IF(cond)     failed = (cond) ? 1 : failed
#define FAIL_UNLESS(cond) failed = (cond) ? failed : 1

BEGIN_TEST(total_failure) {
    FAIL_IF(2 == 2);
    FAIL_UNLESS(3 == 4);
} END_TEST(EXIT_SUCCESS)

These new macros are useful for writing tests, but what about running them and gathering results? I use a struct and a macro for this.2

typedef struct {
    int passed;  /* the number of passed tests */
    int failed;  /* the number of failed tests */
    int errors;  /* the number of errors       */
} tests_summary;

#define RUN_TEST(name, summary)                 \
    do {                                        \
        int ret;                                \
        printf("%s: ", #name);                  \
        fflush(stdout);                         \
        ret = test_##name();                    \
        if (ret == 0) {                         \
            printf("passed\n");                 \
            (summary).passed++;                 \
        } else if (ret == 1) {                  \
            printf("failed\n");                 \
            (summary).failed++;                 \
        } else {                                \
            printf("failed due to an error\n"); \
            (summary).errors++;                 \
        }                                       \
    } while (0)

The tests_summary struct just holds counts for the number of passed and failed tests, and the number of errors that have occurred. The RUN_TEST macro takes a test name and a summary struct, runs the named test, and increments the appropriate field in the summary struct based on the test’s return value. Along the way, it prints some status information. Using these, one can easily run a suite of related tests and gather the aggregate results for reporting:

void run_test_suite(void)
{
    tests_summary summary = { 0, 0, 0 };

    RUN_TEST(contrived_example, summary);
    RUN_TEST(total_failure, summary);

    if (summary.failed + summary.errors > 0) {
        printf("Oh no! Some tests failed!\n");
    }
}
  1. I’ve left macros from the standard library unexpanded, and cleaned up the indentation.

  2. Unlike the other macros in this post, RUN_TEST could be replaced with an inline function. The (very slight) advantage of using a macro is that we can get the test function from just the name, instead of having to pass a string and a function pointer separately. One could write a macro that calls the inline function with the appropriate arguments: #define RUN_TEST(name, summary) run_test(#name, test_##name, summary), but I’m not sure that really buys you much…
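For completeness, here is a sketch of that inline-function alternative, under my own naming assumptions. One wrinkle the footnote glosses over: since run_test must update the counts, it needs a pointer to the summary, so the wrapper macro passes &(summary) rather than summary. The test_always_passes function is a hypothetical stand-in for one generated by BEGIN_TEST/END_TEST.

```c
#include <stdio.h>

typedef struct {
    int passed;  /* the number of passed tests */
    int failed;  /* the number of failed tests */
    int errors;  /* the number of errors       */
} tests_summary;

/* Inline-function alternative to the RUN_TEST macro: runs one test
   function, prints its status, and bumps the matching counter. */
static int run_test(const char *name, int (*fn)(void), tests_summary *summary)
{
    int ret;
    printf("%s: ", name);
    fflush(stdout);
    ret = fn();
    if (ret == 0) {
        printf("passed\n");
        summary->passed++;
    } else if (ret == 1) {
        printf("failed\n");
        summary->failed++;
    } else {
        printf("failed due to an error\n");
        summary->errors++;
    }
    return ret;
}

/* The thin wrapper from the footnote, adjusted to pass the summary
   by address so the counts actually get updated. */
#define RUN_TEST(name, summary) run_test(#name, test_##name, &(summary))

/* Hypothetical trivial test function, standing in for a generated one. */
static int test_always_passes(void) { return 0; }
```

With this in place, RUN_TEST(always_passes, summary) behaves like the macro version, and run_test itself can be stepped through in a debugger.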