Test library with function discovery in C.
- Functions whose names start with a certain prefix are automatically
  executed as test functions. The default prefix is `test_`.
- No special test declaration or registration is needed.
- Console or TAP report.
- The test application must be a non-stripped ELF executable.
Create a test file, for example, `test.c`. In the test file, include
`testprefix.h` and create a test function. The name of the test function
must start with the prefix you want to use. For the example below, we will
use the default prefix, `test_`.
```c
#include "testprefix.h"

void test_that_will_fail()
{
    ASSERT_TRUE(1 == 2, "one is not equal to two");
}
```
Below you can find the available `ASSERT_` and `EXPECT_` macros.
Now you need to create an ELF executable from `testprefix.c`, the test file, and
your source code.

Note: `testprefix.c` defines a `main()` function, so you need to keep your
original `main()` out of the test binary.
```sh
# For a real test, you would add your other source files here
gcc -std=gnu99 testprefix.c test.c -o test_app
```
Execute the test application:

```sh
./test_app
```

The results are printed to the console.
When an `ASSERT_` macro fails, the test function is abandoned and marked as a
failure. When an `EXPECT_` macro fails, execution of the test function
continues, but the test is still marked as a failure.
```c
// Boolean tests
ASSERT_TRUE(COND, ...)
EXPECT_TRUE(COND, ...)
ASSERT_FALSE(COND, ...)
EXPECT_FALSE(COND, ...)

// Compare unsigned values
ASSERT_UINT_EQ(VAL1, VAL2, ...)
EXPECT_UINT_EQ(VAL1, VAL2, ...)
ASSERT_UINT_NE(VAL1, VAL2, ...)
EXPECT_UINT_NE(VAL1, VAL2, ...)
ASSERT_UINT_LT(VAL1, VAL2, ...)
EXPECT_UINT_LT(VAL1, VAL2, ...)
ASSERT_UINT_GT(VAL1, VAL2, ...)
EXPECT_UINT_GT(VAL1, VAL2, ...)
ASSERT_UINT_LE(VAL1, VAL2, ...)
EXPECT_UINT_LE(VAL1, VAL2, ...)
ASSERT_UINT_GE(VAL1, VAL2, ...)
EXPECT_UINT_GE(VAL1, VAL2, ...)

// Compare signed values
ASSERT_INT_EQ(VAL1, VAL2, ...)
EXPECT_INT_EQ(VAL1, VAL2, ...)
ASSERT_INT_NE(VAL1, VAL2, ...)
EXPECT_INT_NE(VAL1, VAL2, ...)
ASSERT_INT_LT(VAL1, VAL2, ...)
EXPECT_INT_LT(VAL1, VAL2, ...)
ASSERT_INT_GT(VAL1, VAL2, ...)
EXPECT_INT_GT(VAL1, VAL2, ...)
ASSERT_INT_LE(VAL1, VAL2, ...)
EXPECT_INT_LE(VAL1, VAL2, ...)
ASSERT_INT_GE(VAL1, VAL2, ...)
EXPECT_INT_GE(VAL1, VAL2, ...)

// Compare pointer values
ASSERT_PTR_EQ(PTR1, PTR2, ...)
EXPECT_PTR_EQ(PTR1, PTR2, ...)
ASSERT_PTR_NE(PTR1, PTR2, ...)
EXPECT_PTR_NE(PTR1, PTR2, ...)

// Compare the content of null-terminated strings
ASSERT_STR_EQ(STR1, STR2, ...)
EXPECT_STR_EQ(STR1, STR2, ...)
ASSERT_STR_NE(STR1, STR2, ...)
EXPECT_STR_NE(STR1, STR2, ...)

// Compare memory regions
ASSERT_MEM_EQ(PTR1, PTR2, SIZE, ...)
EXPECT_MEM_EQ(PTR1, PTR2, SIZE, ...)
ASSERT_MEM_NE(PTR1, PTR2, SIZE, ...)
EXPECT_MEM_NE(PTR1, PTR2, SIZE, ...)
```
Tip: all `ASSERT_` and `EXPECT_` macros can take a printf-style message as
extra arguments, which will be included in the error message in case of
failure.

```c
ASSERT_TRUE(false, "iteration: %u", i);
```
To skip a test, use the macro `SKIP(...)` before any other macro. Optionally,
a string can be passed to `SKIP()`; this string will be part of the report.

```c
SKIP("Skipped for some reason.");
```
To explicitly fail a test, use the macro `FAIL(...)`. The test will be aborted
immediately and marked as a failure. A string parameter, which will be included
in the report, can be provided.

```c
FAIL("no other option, failing the test");
```
When an `ASSERT_` macro fails, the test is aborted immediately. To avoid
resource leaks, it is possible to register a failure handler for a specific
test. The failure handler is a function with the following signature:

```c
void handler_function(void *);
```

To set the failure handler, use the macro
`SET_TEST_FAILURE_HANDLER(HANDLER, HANDLER_ARG)`.
When `HANDLER` is invoked, `HANDLER_ARG` is passed as its argument. Here is
an example where `HANDLER_ARG` is a `FILE *`.

```c
// Custom function that closes a FILE
void file_closer(void *ptr)
{
    FILE *stream = (FILE *)ptr;
    if (stream != NULL) {
        (void)fclose(stream);
    }
}

void test_something()
{
    FILE *stream = tmpfile();
    ASSERT_PTR_NE(stream, NULL);
    SET_TEST_FAILURE_HANDLER(file_closer, stream);
    ASSERT_TRUE(false);
}
```
If you just want to deallocate a single pointer, you can simply use `free` as
your failure handler.

```c
void test_something_else()
{
    uint8_t *buf = malloc(10);
    SET_TEST_FAILURE_HANDLER(free, buf);
    // If this fails, `free` will be invoked with `buf` as its argument.
    ASSERT_TRUE(...);
}
```
Optionally, global setup and teardown functions can be defined.

The global setup function is executed once, before the first test. If its
return value is not `0`, the test application exits immediately without
invoking `TP_global_teardown`.

```c
// Global setup prototype
int TP_global_setup();
```

The global teardown function is executed once, after the last test.

```c
// Global teardown prototype
void TP_global_teardown();
```
When an assertion macro fails in a function that is called from multiple
places, it can be hard to determine which call failed, because the error
message contains only the location where the macro failed.

The macro `TRACE_CALL` stores the file name and line number of a function
call. This information will be part of the error message for any failure
that occurs inside of the function call.

```c
TRACE_CALL(my_helper_function());
```

Any failure (`ASSERT_`/`EXPECT_`) that happens in `my_helper_function()` will
include a message like this:

```
<FileName>:<LineNumber>: my_helper_function() -- traced call
```
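How a macro can capture the call site at all is worth a quick illustration. The sketch below (a hypothetical `TOY_TRACE_CALL`, not the library's code) relies on `__FILE__` and `__LINE__` expanding where the macro is *used*, so the recorded location is the caller's, not the helper's:

```c
// Toy call-site tracing: __FILE__/__LINE__ expand at the macro use site.
#include <stdio.h>

static const char *trace_file;
static int trace_line;

// Record the call site, then perform the call.
#define TOY_TRACE_CALL(call) \
    do { trace_file = __FILE__; trace_line = __LINE__; call; } while (0)

static void my_helper_function(void)
{
    // A failing assertion here could prepend the traced call site:
    printf("%s:%d: my_helper_function() -- traced call\n",
           trace_file, trace_line);
}

int main(void)
{
    TOY_TRACE_CALL(my_helper_function());
    return 0;
}
```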
Some POSIX-specific functions are used in `testprefix.c`. If possible, specify
`-std=gnu99` to build it cleanly.
When executed with no arguments, the test application executes all test
functions whose names start with `test_` and prints the results to the
standard output. The following options are available to change this default
behavior.

```
Usage: ./test_app [-p PREFIX] [-l | -h | -o FILE]
  -p  Set the test function PREFIX. The default is 'test_'.
  -l  List the tests that match PREFIX.
  -h  Show this help message.
  -o  Write a test report to FILE (TAP format).
  -v  Verbose (do not silence stdout/stderr).

Examples:
  ./test_app                    # Run all tests
  ./test_app -o output_tap.txt  # Write the results to a file (TAP format)
  ./test_app -p test_suite1_    # Execute test functions whose names start with 'test_suite1_'
  ./test_app -l                 # Print the names of all test functions
```
I would like to thank @kholeg; most of the changes in the current API were
based on his feedback.