Testwell CTC++

Test Coverage Analyzer for C/C++

Information in this document corresponds to CTC++ version 9.0

1. INTRODUCTION

Testwell CTC++ is a powerful instrumentation-based code coverage and dynamic analysis tool for C and C++ code. With certain add-on components, CTC++ can also be used on C#, Java and Objective-C code. Further, again with certain add-on components, CTC++ can be used to analyse code on basically any embedded target machine, including very small ones (limited memory, no operating system, ...).
 
CTC++ provides Line Coverage, Statement Coverage, Function Coverage, Decision Coverage, Multicondition Coverage, Modified Condition/Decision Coverage (MC/DC), and Condition Coverage.
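As a rough illustration of how these coverage levels differ (an illustrative sketch only, not CTC++ output; the function is hypothetical), consider a decision with two elementary conditions:

/* For the decision "a > 0 && b > 0" below:
 *  - Decision coverage:       the whole condition has evaluated to both true and false.
 *  - Condition coverage:      each of a > 0 and b > 0 has evaluated to both true and false.
 *  - MC/DC:                   each of a > 0 and b > 0 is additionally shown to independently
 *                             change the outcome of the whole condition.
 *  - Multicondition coverage: all true/false combinations of a > 0 and b > 0 have been exercised.
 */
int both_positive(int a, int b)
{
    if (a > 0 && b > 0)
        return 1;
    return 0;
}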
 
As a dynamic analysis tool, CTC++ shows the execution counters (how many times executed) in the code, i.e. more than plain executed/not-executed information. You can also use CTC++ to measure function execution costs (normally time) and to enable function entry/exit tracing at test time.

CTC++ is easy to use. When used in command-line mode (from makefiles or other build scripts), the instrumentation is just a front-end phase of the compile/link command. No changes to the source files or build scripts are needed. Test runs are done with the instrumented program version, in the same way as with the original program. Coverage and other execution profiling reports can be obtained easily in plain text, HTML, XML, JSON (JavaScript Object Notation) and Excel input form. In some environments, e.g. Microsoft Visual Studio, CTC++ can be used directly from the compiler IDE.

CTC++'s overhead on the size and execution speed of the instrumented code is very reasonable. CTC++'s reporting is informative and well-organised. The reports give both a top-level view, which shows the coverage percentages at various summary levels, and a detailed view, where the executed/not executed information is mapped to the actual source code locations.

CTC++ is delivered and licensed per host. The host here means the machine architecture and operating system where the ready-made CTC++ tool components can be run, for example Windows or Linux. See CTC++ availability.
 
The basic CTC++ delivery package, "CTC++ host-only", facilitates CTC++ use on the host: the code to be measured is instrumented on the host (e.g. Windows) with a compiler that is supported in the delivery package and that generates code for the host (e.g. Microsoft Visual C++; some other Windows compilers are also supported), the instrumented programs are run on the host, and the coverage reports are generated there.
 
A CTC++ delivery package, except for certain entry-level licenses, also contains the "CTC++ Host-Target add-on" (HOTA) and "CTC++ Bitcov add-on" (Bitcov) packages. They facilitate instrumenting and cross-compiling the code with essentially any C/C++ compiler for any target machine or execution context, and getting the coverage data back to the host for reporting. CTC++ adaptation packages are available for many commonly used cross-compilers and targets. You can also build such an adaptation yourself for your specific compiler/target combination.
 
Logically, the HOTA package gives the same CTC++ measuring capabilities on the target as on the host. The Bitcov package is meant for limited-memory targets. It measures code coverage only as "executed/not executed", not "how many times executed", and it does not support function execution time measuring.

There are further add-on packages for C# and Java. From the CTC++ point of view, C# and Java are seen as special dialects of C++, and with certain arrangements the CTC++ instrumentation is connected to the C#/Java compilation phase, similarly to instrumenting and compiling C/C++ code. The C#/Java run-time context is modeled as a special type of target for which the CTC++ support library of the HOTA components has been reimplemented, i.e. rewritten in C#/Java. The net result is that CTC++ gives similar coverage and dynamic analysis information for C#/Java code as described here for C/C++ code.

Testwell CTC++ Qualification Kit is available for customers who need to certify the tool use against safety standards ISO 26262, IEC 61508, EN-50128, or DO-178C.

CTC++ is an industrial-strength tool, which has been used in the IT industry for over 25 years.

Here is a quick overview of some of the CTC++ capabilities.

CTC++ facilitates
  • Measuring code coverage => ensure thorough testing, know when to stop testing, etc.
    • Function coverage (instrumentation mode: functions called)
    • Decision coverage (instrumentation mode: in addition to function coverage, conditional expressions true and false in program branches, case branches in switch statements, catch exception handlers in C++, control transfers)
    • Statement coverage (analyzed from program control flow: percent of statements executed in functions/files/overall)
    • Line coverage (when program control flow analysis was possible, the code lines that were executed/not executed are marked with green/red background color in the HTML report)
    • Multicondition coverage (instrumentation mode: in addition to decision coverage, whether all the possible ways of evaluating conditional expressions containing && and || operators have been exercised in program branches. Direct assignment statements like "variable = ... && ... || ... ;" are also measured for multicondition coverage.)
    • MC/DC coverage (like multicondition coverage, but it suffices that each elementary condition is shown to independently affect the conditional expression outcome -- less demanding than full multicondition coverage; used in DO-178B/DO-178C level A projects)
    • Condition coverage (like multicondition coverage, but it suffices that each elementary condition is shown to have been evaluated to both true and false -- less demanding than full multicondition coverage)
  • Searching execution bottlenecks => no more guessing in algorithm tuning
    • Function execution timing (instrumentation mode: total, average, maximum time -- if needed, the user may introduce their own time-taking function for measuring whatever is of interest regarding function resource consumption)
    • Execution counters (how many times the CTC++ instrumentation probes have been executed)
  • Displaying function call trace => helps in analyzing program behavior
    • You provide the function that does the call tracing (perhaps displaying it on the screen). At the instrumentation phase, CTC++ is made to call your trace function at the entry/exit of each function under test.
  • Comparing coverage reports of two test runs
    • For finding out the code locations that one test run covered and the other did not
    • Helps in combining test suites into one, which is faster to run but yields the same total coverage
    • Running two test cases, one of which suggests a bug in the code: take a coverage difference report and see the differences in what code was executed. This helps in locating the bug.
  • Conveniently presented test results
    • Hierarchical, color-coded, HTML-browsable coverage reports
    • Pure textual reports
    • Coverage data can be converted to Excel input file
    • XML report
    • JSON report
  • Ease of use
    • Instrumentation phase as a "front end" to the compilation command => very simple to use
    • No changes are needed to the C/C++ source files to be measured
    • "Ctc-builds" can be done with your existing build makefiles, which normally can be used as is
    • Automated script-based use from the command line
  • Mature product
    • Has been in demanding use in the IT industry for over 25 years
    • Long experience with the most commonly used C/C++ compilers (VC++, gcc/g++, ...) and their versions. Many of them have their own "extreme corner specialities", which the tool can handle.
    • Also a proven record of working with numerous (~30+) cross-compilers
    • Full C and C++ support, including new C++ additions (lambda functions, trailing return type, range-based loop, etc.)
  • Usable "in the large"
    • Instrumentation overhead very reasonable
    • You can select what source files to instrument and with what instrumentation options
    • Besides full executables, static and dynamically loaded libraries (.lib/.a/.dll/.so) can also be measured
    • Capturing coverage data of never-ending processes is conveniently supported
    • A combined coverage report can be obtained from test runs of different programs, and there are powerful means to select which source files are reported and in what coverage view.
    • A combined coverage report can be obtained from test runs that are run on different machines.
    • A combined coverage report can be obtained for a code base that has been built and tested in different "configurations" (conditional compilation and other macro trickery in the code files have made the actually executed program variants different).
  • On some environments usable via IDE
  • Usable at embedded targets (CTC++ Host-Target or Bitcov add-on is needed)
    • The target can be effectively "whatever"
  • Good management and visibility of testing
    • Easy to read listings (textual and HTML)
    • In terms of the original source code
    • Untested code highlighted
    • Various summary level reports (in HTML)
    • TER-% (test effectiveness ratio) calculated per function, source file, directory, and overall

2. HOW CTC++ WORKS

There are basically three steps in the use of CTC++ (command-line mode of use):
  • You use the CTC++ Preprocessor (ctc) utility for instrumenting and compiling the C or C++ source files of interest and for linking the instrumented program with the CTC++ run-time library. At this phase ctc maintains a symbol file, MON.sym by default, where it records the names of the instrumented files and what they contained. If you build your program with a makefile, you just prefix the make command with ctcwrap and the ctc-options, and all the compile and link commands that the make emits will be done "under ctc control".
  • You execute the test runs with the instrumented program as you see necessary, in the same way as you would do test runs on the original non-instrumented program. When the instrumented code portions are executed, CTC++ collects the coverage and function timing history in memory. Normally at the end of the program, and automatically by CTC++, the collected counters are written to a data file, MON.dat by default. If there were previous counters in the data file, they are summed up.
  • You use the CTC++ Postprocessor (ctcpost) utility to put one or more symbol files and data files together and produce the human-readable textual reports. One of them, the Execution Profile Listing, can be further processed with the ctc2html utility to get an easy-to-view hierarchical and color-coded HTML representation of the coverage information. You can also obtain the coverage information in XML and Excel form.
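As a minimal end-to-end sketch of these three steps (the makefile here is hypothetical; the commands, options and file names are the ones used in the examples of this document):
ctcwrap -i mte nmake -f Makefile
prime
ctcpost MON.sym MON.dat -p profile.txt
ctc2html -i profile.txt
start CTCHTML\index.html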

2.1. Building the Instrumented Program

Use of ctc is connected to the command by which you compile and link your programs. Adding 'ctc' and possible 'ctc-options' in front of the original compilation command is all that is needed to instrument and compile the source files. If the command also links, ctc automatically adds the CTC++ run-time library to the linkage. The ctc on-line help is:
Usage:
ctc [ctc-options] comp/link-command comp/link-options-and-files

[ctc-options]:
   [-i {f|d|m|te|ti}...] [-n symbolfile]
   [-v] [-V] [-k] [-h] [@optionsfile]...
   [-c conf-file[;conf-file]...]... [-C conf-param{=value|+value|-value}]...
   [-2comp] [-no-comp] [-no-templates] [-no-warnings]
On Windows, with the example source files calc.c, io.c and prime.c and the Visual C++ compiler in command-line mode, we could give the command:
ctc -i mte cl -Feprime.exe calc.c io.c prime.c
which instruments these three C files with multicondition and (exclusive) timing instrumentation modes, compiles the instrumented code with the 'cl' compiler, and links the instrumented target 'prime.exe' with 'cl' (the CTC++ run-time library is automatically added to the linkage). The same could also have been obtained with the following sequence of commands:
ctc -i mte cl -c calc.c
ctc -i mte cl -c io.c
ctc -i mte cl -c prime.c
ctc link -out:prime.exe calc.obj io.obj prime.obj
The first three commands instrument their argument source file. ctc also invokes the compiler on the instrumented version of the source file, resulting in an object file in the same place as the original compilation would have generated it. No changes to the source file are needed by the user, and the file remains intact.

In the last command ctc just repeats the linking command and adds the CTC++ run-time library to it. The result is an instrumented executable in the same place as the original linking would have generated it.

Normally real programs are not built by explicitly issuing compile and link commands. Instead they are built with a makefile, for example as follows:
nmake -f Makefile
which uses its own rules to emit the elementary compile and link commands. In this case the Makefile can be modified to emit the compile and link commands prefixed with ctc, for example:
nmake -f Makefile "CC=ctc -i mte cl" "LNK=ctc link"
A simpler way is to use the ctcwrap command to turn the "build" into a "ctc-build", as follows:
ctcwrap -i mte nmake -f Makefile
The ctcwrap command executes its argument command (here 'nmake') in a special context in which all compile and link commands are changed to behave "ctc-wise" with the given instrumentation options (here '-i mte'), i.e. they are run as if they had 'ctc -i mte' in front of them. The net effect is that the build is done "with CTC++". The makefiles do not need modifications for use with CTC++. There are also means to specify to CTC++ which of all the files that the makefile compiles are to be instrumented.

2.2. Running the Tests with the Instrumented Program

You run the tests with the instrumented program. Due to the instrumentation it is somewhat bigger and slower than the original. How much? It depends on what kind of program control structures you have in your code, what instrumentation mode you have selected, what compiler optimization has been used, and whether you have instrumented all code files of your program or only some of them. The increase in program size is normally a concern only in some embedded target cases with limited memory. When the heaviest multicondition instrumentation mode is used, a size increase of 50-70% could result on the instrumented code portions. But note that programs normally also have non-instrumented portions, like system libraries that are linked in. Lighter instrumentation gives lower overhead. In the Windows example Cube.exe case, which contains many system libraries (not instrumented) and where the CTC++ run-time library is in a separate DLL (see the HTML form coverage report), the actual program size grew by only 6%.
 
CTC++'s impact on the execution speed has been found to be very modest.

When the program logic executes the code in the instrumented files, the inserted instrumentation probes collect execution history in main memory. When the program ends (or at some explicit user-determined places), the execution counters are automatically written to a data file on disk. If the instrumented program is a never-ending process, there are simple means to add to the instrumented executable an auxiliary thread, which periodically writes the coverage data to the disk. Multiple executions accumulate the counters in the file as long as the instrumented file is the same as before.

To continue the example, the instrumented executable could be run as follows:

prime
Enter a number (0 for stop program): 2
2 IS a prime.

Enter a number (0 for stop program): 5
5 IS a prime.

Enter a number (0 for stop program): 20
20 IS NOT a prime.

Enter a number (0 for stop program): 0

The program was used, and it behaved just like the original program. At the program end, the CTC++ run-time system, which has been linked into the program, wrote the collected execution counter data to a data file, here MON.dat.

2.3. Getting the Results of the Test Runs

Finally you use the ctcpost and ctc2html utilities to get the results for analysis, i.e. the various types of listings showing the information you initially asked for. ctcpost is used first. Its on-line help is:
Usage:
ctcpost [general-options] [symbolfile]... [datafile]...
        [-ff|-fd|-fc|-fmcdc] [-nhe] [-w rpwidth] {{-p|-u|-t|-x|-j} rptfile}...
ctcpost [general-options] datafile... -a target-datafile
ctcpost [general-options] symbolfile... -a target-symbolfile
ctcpost [general-options] {-l|-L} {symbolfile|datafile}...


[general-options]:
   [-h] [-V] [@optionsfile]...
   [-c conf-file[;conf-file]...]... [-C conf-param{=value|+value}]...
   [-f source-file[;source-file]...]... [-nf source-file[;source-file]...]...
An Execution Profile Listing could be obtained as follows:
ctcpost MON.sym MON.dat -p profile.txt

Functions of a header file (if instrumentation of the header was asked for) are extracted from the code files where the header was included. The header file is reported as a separate file entity with its own coverage summary percentages.

Here is a more complex example, where a coverage report is generated from two independently instrumented programs, and some files, even though they have been instrumented, are excluded from the report:

ctcpost MON.sym MON.dat ..\prj2\MON.sym ..\prj2\MON.dat \
-nf "*\harnessdir\*" -nf "*\moc_*.cpp" -p profile.txt
With ctcpost you can get the following textual reports:
  • Execution Profile Listing shows the missing coverage as well as how many times each code location has been visited. This is the primary CTC++ report, normally processed further into HTML form. The report can also be generated with somewhat reduced coverage information compared to how the code was instrumented. See examples.
  • Untested Code Listing is similar to the Execution Profile Listing but shows only the places where test coverage is inadequate. The HTML form report also shows the untested information, at summary levels (TER%) and at individual code locations.
  • Execution Time Listing shows the total (times of all calls summed up), average and maximum (longest) execution times of functions. In timing instrumentation there are two submodes to select from: inclusive timing (the time spent in called instrumented functions is included in the time of the calling function) and exclusive timing (the time spent in called instrumented functions is excluded from the time of the calling function).
  • XML report contains the information of the Execution Profile Listing and the Execution Time Listing, but in XML form. This report is meant for postprocessing the coverage data with your own XML utility. It is also used for getting a combined coverage report of a code base that has been built and tested in different configurations.
  • JSON report contains the same information as the XML report, but in JSON format. The JSON format is more compact, more lightweight, and easier to use from JavaScript than the XML format.
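For example, the individual textual reports could be requested as follows (a sketch assuming that the option letters -p, -u, -t, -x and -j of the usage text above correspond to the report types in the order listed here; the output file names are arbitrary):
ctcpost MON.sym MON.dat -u untested.txt
ctcpost MON.sym MON.dat -t timing.txt
ctcpost MON.sym MON.dat -x report.xml -j report.json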
Normally the Execution Profile Listing is directly processed further into HTML form using the ctc2html utility. Its on-line help is:
Usage:
  ctc2html [-i inputfile] [-o outputdir] [-t threshold]
           [-s sourcedir[;sourcedir]...]... [-nsb]
  ctc2html [-h] [--enable-help]
 
Command-line options:
  -i inputfile   Input Execution Profile Listing file. Default stdin
  -o outputdir   Output HTML directory. Default CTCHTML
  -t threshold   Set coverage TER% threshold, 0-100. Default 100
  -s sourcedir   Source files are searched also from this directory
  -nsb           Do not start HTML browser automatically (only Windows)
  -h             Display this command line help
  --enable-help  Display help of --enable-XXX options
Start browsing from file 'outputdir/index.html'
An example:
ctc2html -i profile.txt -t 85 -nsb
start CTCHTML\index.html
ctc2html converts the Execution Profile Listing information to a hierarchical, easily navigable, color-coded HTML representation. The actual source files are also incorporated into the HTML. The generated HTML files can be viewed with normal web browsers.

The HTML representation is called "CTC++ Coverage Report". It is hierarchical and has six levels:
  • Overall Summary: General header information in full (from what data it was generated, when it was generated, what options were used, etc. -- in heavy use these can amount to quite many lines) and an overall summary of the code base with TER percentages.
  • Directory Summary: General header information (cut to the first few lines if it would otherwise be longer), directory TERs shown as histograms and numerically; coverage percentages not meeting the suggested threshold (-t option) are shown in red. TER over all directories.
  • Files Summary: Zoom-in to the files in the directories. Similar TERs and color-coding are shown but at file levels.
  • Functions Summary: Zoom-in to the methods and functions in the files. Similar TERs and color-coding are shown but at function levels.
  • Untested Code: Compact listing of untested code locations, with links to the pertinent lines of the actual source files.
  • Execution Profile: Zoom-in to the detailed view where the execution counters are shown with the source code. Lines that are not fully exercised with respect to the selected coverage criteria are highlighted in red.
See the example HTML report.

2.4. Getting combined coverage reports

When the same instrumented program is run multiple times, CTC++ automatically accumulates the execution coverage data onto the results of the previous test runs.

When you have many independently instrumented programs and wish to get a combined coverage report of them, you can do it for example as follows:
ctcpost MON.sym ..\test2\MON.sym MON.dat ..\test2\MON.dat -p profile.txt
If your independently instrumented separate programs contain partially the same source files, the above kind of ctcpost command will give you combined coverage of those files, too. That is, in the profile.txt file there is one entry for such a file, containing the aggregated coverage from its execution in the different programs. The requirement for this to succeed is that the common files really are identical as CTC++ sees them, e.g. compiled with the same flags and instrumented in the same way.
 
When getting the coverage report you can specify that only some selected files are shown in the report and that some others are not. Further, you can specify that the coverage information is shown at a lower coverage measure (e.g. in a compact function coverage view only) than the one the file was actually instrumented with.
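For example (a sketch assuming, from the ctcpost usage text above, that -ff reduces the reported view to function coverage and that -nf excludes the named instrumented files, as in the earlier example; the directory pattern and output file name are hypothetical):
ctcpost MON.sym MON.dat -ff -nf "*\thirdparty\*" -p profile_functions.txt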
 
Another usage scenario, which can occur in bigger development organisations, is the following: there is a code base that is compiled and tested in many configurations. The original C/C++ files are unchanged as such, but when compiling the different configurations, conditional compilation and macro resolution make the actual code (the C translation unit after C preprocessing) different. The "basic CTC++" refuses to generate a combined coverage report of files that differ in this way. Management may however want to see one report, especially its "bottom line TER%", which sums up the coverage of the whole code base over all its configurations. For this need there is the ctcxmlmerge utility. Its on-line help is:
Usage:
ctcxmlmerge input.xml... [-p profilefile] [-x xmlfile] [@optionsfile]
            [-f file[;file]...]... [-nf file[;file]...]... [-ndl]
ctcxmlmerge [-h]

Options:
   input.xml       Input XML file. One or more CTC++ XML coverage reports.
   -p profilefile  Output merged textual execution profile listing.
   -x xmlfile      Output merged summary XML report.
   @optionsfile    Input and options can be given also at optionsfile.
   -f file         Only these instrumented files are reported.
   -nf file        These instrumented files are not reported.
   -ndl            No drive letter (removed from reported filenames).
   -h              Display command line help.
An example:
ctcxmlmerge input1.xml input2.xml -p profile.txt -x summary.xml
The idea is that each configuration is first instrumented and tested independently, and an XML form report of its coverage is taken. Then the XML form coverage reports are merged with this utility to get one combined text form execution profile listing (-p). An XML summary report can also be obtained (-x), if you want to further process the TER%s from the XML form.
 
Execution hits on the same code files from different program configurations, even if due to macros and conditional compilation the code was a bit different as CTC++ sees it, are summed up, and the TER% is recalculated in the merged report.
 
The merged execution profile listing can be further converted to HTML form with the ctc2html utility.
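For example (using the ctc2html options shown earlier; the output directory name is arbitrary):
ctc2html -i profile.txt -o CTCHTML_merged -nsb
start CTCHTML_merged\index.html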

2.5. Connection to Excel

The coverage data from an Execution Profile Listing can be converted to a file that is suitable input for Excel (or another spreadsheet application). The ctc2excel utility is used for this. Its on-line help is:
Usage:
  ctc2excel [-i infile] [-o outfile] [-u] [-efs 'c'] [-full] [-nopath]
  ctc2excel -h

Command-line options:
  -i infile  Input Execution Profile Listing file . Default 'stdin'.
  -o outfile Output Excel TSV file. Contains coverage data of the functions,
             one line of each. Default 'stdout'.
  -u         Only untested (under 100% TER) functions are reported in outfile.
  -efs 'c'   Use c as Excel field separator. Default tab. Example -efs ';'.
  -full      Write to outfile also the header information and the summary
             information of the instrumented files and of overall.
  -nopath    Of the reported instrumented files drop off the path portion.
  -h         Displays this help text.
And an example of the use:
ctc2excel -i profile.txt -o excelinput.txt
start excel excelinput.txt

2.6. Function call trace

With certain simple arrangements you can instrument your code to produce a function call trace. This means that you provide your own trace function (and link it into the executable), which CTC++ calls at the entry and exit of each instrumented function. As a parameter the trace function gets the name of the function that was just called. Your trace function can then do whatever you see necessary, for example display the called function name on the screen.

As an example, the trace function might be the following:

#include <stdio.h>
void mytrace(char* fname, char* start_or_end) {
    printf("mytrace: function %s %s\n", fname, start_or_end);
}

You may find this useful when you want to analyze the dynamic behavior of your program. Another situation might be when your program crashes and you do not know where it happens: you just instrument the code to produce this "brute force call trace" and see how far your program managed to get.

3. IDE INTEGRATIONS

CTC++ has been integrated with a couple of compilation system IDEs, all on the Windows platform. The integration means that new commands have been added to the IDE Tools menu, such as
  • CTC++ Set... : for setting "instrumentation mode on" and specifying the ctc-options under which the subsequent builds in the IDE will be done. Later this command is used for setting "instrumentation mode off", i.e. changing the builds back to normal/ctc-free.
  • CTC++ Report...: for getting the various coverage reports and starting an appropriate viewer (like Notepad, an HTML browser, or Excel) on them.
See CTC++/Visual Studio Integration for how the integration looks in the Visual Studio IDE.

Some IDEs support building from the command line. With many of them, as in the next example with Visual Studio 2003, the "ctc-build" can be done with ctcwrap as follows:
ctcwrap -i d devenv mySolution.sln /rebuild debug

4. SUPPORT FOR TARGET TESTING

HOTA:
Information in this chapter corresponds to the CTC++ Host-Target add-on (HOTA) version 6.0.

With "target testing" it is here meant that you have a host environment, where you do builds for some target machine, typically an embedded system. The used C/C++ cross-compiler and the target machine can be something with which CTC++ may not have been used before. You anyway want to instrument the code, compile it with the cross-compiler, run it at the target machine, and get the coverage data back to the host for reporting. The "CTC++ Host-Target add-on" (HOTA) package  provides this capability.

Technically the arrangement goes roughly as follows:
  • On the host you need to have a normal CTC++ (host-only) copy running. Additionally you need this HOTA package.
  • You "teach" CTC++ the cross-compiler/linker command names and options. This is a one-time job. If the cross-compiler is a decent one and follows common conventions, this is not a big problem. We also have working configuration files for a number of cross-compilers.
  • The instrumented code needs a little CTC++ run-time support layer at the target. The HOTA package contains it in C source code form. It is "vanilla C", about 1000 lines of well-commented code, and it compiles with any C compiler, also with your cross-compiler. You need not touch that code.
  • However, for the run-time layer's use you need to implement the low-level data transfer layer, with which the coverage data is transferred to the host. The coverage data is in CTC++-internal encoded form as a sequence of printable ASCII characters. No CTC++ internal knowledge is needed to handle it, just writing it one character at a time to somewhere. Ultimately the character stream needs to end up in a text file on the host machine. If you can write the data to a local text file on the target machine (and separately move it to the host later), the work is simple; for this alternative the delivery package has a compile-ready implementation, which uses normal C text file I/O. In some cases the coverage data needs to be written to some communication channel, which your program on the host listens to and writes to a file. Developing this low-level data transfer layer is a one-time job (a hypothetical sketch of such a character-writing routine is shown after this list). At test time the coverage data writing is normally a one-time act at the end of the instrumented program execution, and the data volumes that need to be transferred are rather modest.
  • The instrumentation phase at the host for the target is similar to instrumentation for the host, except that the cross-compiler is used (not the compiler that compiles for the host) and the slightly modified target-specific run-time layer is linked to the instrumented code (not the one that is linked in ctc-builds for the host).
  • You run the tests on the target. Depending on how you have arranged the data transfer, you get the coverage data to the host side, where you feed it to the ctc2dat utility and get a MON.dat file. After that the reports can be generated normally at the host with the ctcpost and ctc2html utilities.
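As a hypothetical sketch of the simplest case, writing the coverage data character stream to a local text file on the target could look roughly like the following. The function name, the file name and the exact hook signature expected by the HOTA run-time layer are assumptions here; the authoritative interface is in the C sources delivered with the HOTA package.

#include <stdio.h>

/* Hypothetical low-level transfer routine: appends one character of the
   CTC++-encoded coverage data stream to a local text file on the target.
   The real hook name and signature come from the delivered HOTA run-time
   layer sources, not from this example. */
void my_ctc_write_char(char c)
{
    static FILE *out = NULL;               /* opened on first call */
    if (out == NULL)
        out = fopen("coverage.txt", "a");  /* hypothetical file name */
    if (out != NULL)
        fputc(c, out);
}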

CTC++ Host-Target is designed to work also in contexts where it is not known

  • what hardware architecture the target has
  • what operating system, if any, the target runs
  • what brand and version the C/C++ cross-compiler for the target is
  • whether the host and target machines use the same endianness in binary data
  • whether the target machine (cross-compiler) uses the same number of bits for the basic C integral types used for storing the execution counters
The Host-Target add-on package is also usable when the code under test is operating system kernel code. Read more at Kernelcoverage.

BITCOV:
Bitcov is a derivative work based on HOTA. It is meant for small embedded microcontroller-type targets, which have very little free data memory for CTC++'s use, or where the HOTA style of transferring the coverage data as an encoded ASCII stream to the host is difficult.

In Bitcov there is one global bit array in the target main memory where the execution hits are recorded, one bit per probe. For example, with a 1000-byte array 8000 probes can be recorded, which may well be enough for a reasonably sized instrumented program, around 30000 lines of instrumented code. With normal instrumentation the CTC++ run-time data area consumption in a similar case would be 8000 * 4 bytes (one counter is normally 4 bytes) = 32000 bytes, plus something more for CTC++'s internal control data needs. On the target there is essentially no CTC++ run-time layer at all.

After the test run the bit array is captured to a file on the host, either so that the initiative is on the host side, which pulls the coverage data (e.g. by debugger means) from the target memory, or so that the initiative is on the target side, which pushes the coverage data to the host (as far as this is possible at the target, e.g. embedded in the instrumented code). On the host there is a utility (dmp2txt), which is used to convert the memory dump to ctc-understandable character-encoded coverage data. After that the normal CTC++ tool chain (ctc2dat, ctcpost, ctc2html) is used to get the human-readable coverage report.

In Bitcov, timing instrumentation is not supported. In the coverage reports the counters are reduced to 0 (not executed) and 1 (executed), while in normal coverage reports the counter value also tells how many times the code at the probe location was executed.

BYTECOV:
There is also a Bytecov variant of Bitcov. In it the not-hit/hit (0/1) information is stored in a byte array, one byte per probe. In typical machine instruction sets, plainly setting a byte in memory takes fewer instructions than setting an individual bit in memory ("code bloat"), and this can save more than what is lost when a byte array (rather than a bit array) is used for storing the coverage information ("data bloat").

5. SUMMARY OF BENEFITS

CTC++ is a versatile tool to be used in testing and tuning of all kinds of applications written in C or C++.
Testing becomes more efficient
The execution profile listing reveals the parts of the code that have not yet been executed. The coverage information helps in designing the missing test cases. On the other hand, CTC++ helps to determine when to stop testing (from a code coverage point of view), thus preventing the waste of costly human resources.
Testing becomes a measurable, well-managed activity
The summary listing with TER-% histograms characterizing the reached test coverage gives valuable information for project management purposes at a glance.
Usable for program dynamic analysis and performance tuning
When CTC++ is used for improving the performance of a program, it can easily show where the bottlenecks of the program are (functions that are executed most often and functions that consume most time). Also production level applications can be monitored when they are used in their real environment. The function call "brute force" tracing capability can be useful in analyzing the program behavior when no better tools are available.
Use of CTC++ is easy and fast
CTC++ is easy to use. No modifications of the user's code are needed. Instrumentation takes place by just adding 'ctc' in front of the compilation/link command. Getting the existing makefiles to build instrumented targets instead of the original non-instrumented ones is very straightforward and does not need any changes to the makefiles themselves. Browsing the coverage results in HTML is very easy. The overall picture is shown in color-coded histograms of coverage percentages. Zooming to the detailed level can be done with only a few mouse clicks, and the untested code locations are clearly shown as mapped to the original source code.
Independent instrumentation of source files
The source files of the executable program are instrumented independently of each other, or left as non-instrumented. The source files can be instrumented with such instrumentation modes that are appropriate in the situation at hand. Instrumented source files can be linked to different executables and yet the merged coverage of their executions in different executables can be obtained.
Configurability
CTC++ is easy to configure for specific needs by simply editing the textual configuration file. At instrumentation time and when getting the reports, there are powerful means to fine-tune the process to obtain the desired result.
Usable for many purposes
CTC++ can be used for many purposes: measuring code coverage at various testing phases (module testing, integration testing, system testing), performance testing, optimization, comparing efficiency of algorithms, locating dead code, ...
Support for host-target and kernel code testing
CTC++ has powerful support for measuring code coverage on embedded targets. The instrumentation overhead is very moderate. The C/C++ cross-compiler used for the target, the target operating system and the target hardware type can be of virtually any brand and type. Applicability to kernel code is something unique among coverage tools.
Supports both C and C++
With one CTC++ tool you can work both with C and C++ code.
Work motivation
The programmers/testers are likely to design more and better test cases to achieve higher test coverage when they have an easy-to-use tool for measuring it.
Usable with CTA++
CTC++ can be used together with Testwell's test harnessing tool CTA++ (C++ Test Aider), see the CTA++ description. Such usage combines the "black box" (behavioral, functional) and "white box" (structural) testing strategies for the purposes of systematic module testing and reducing testing costs.
Usable with other vendors' test execution frameworks
CTC++ can be seamlessly used with other vendors' unit test and system (GUI) test execution frameworks.
CTC++ has been compared to US Army Jeep...
Simple to use. Works. Can be driven on almost any terrain (especially when the Host-Target add-on package is used).


6. OPERATING ENVIRONMENTS

CTC++ is available on several machine / operating system / C/C++ compiler environments, see the detailed list: CTC++ availability

The resource requirements of your normal C or C++ development environment are sufficient for using CTC++.

Since July 2013 the "Testwell CTC++" tool has been owned by Verifysoft Technology GmbH. See What's new.

