[source,makefile]
------------------
if test_vald
LOG_COMPILER = $(top_srcdir)/platform/linux-generic/test/wrapper-script
TESTS = pktio/pktio_run \
	...
------------------

Here follows a dummy example of what the wrapper-script could look like:

[source,bash]
------------------
#!/bin/bash

# The parameter, $1, is the name of the test executable to run

echo "WRAPPER!!!"
echo "running $1!"

# run the test:
$1

# remember the test result:
res=$?

echo "Do something to clean up the mess here :-)"

# return the test result.
exit $res
------------------

Note how the above script stores the return code of the test executable to
return it properly to the automake test harness.

[[platform-specific-tests]]
==== Defining platform specific tests

Sometimes it may be necessary to use platform-specific system calls to check
some functionality. For instance, testing `odp_cpumask_*` could involve
checking the underlying system CPU mask. On Linux, such a test would require
the `CPU_ISSET` macro, which is Linux-specific. Such a test would be written
in `<PLATFORM_SPECIFIC>/<test-group>/<interface>/cpumask/...` The contents of
this directory would be very similar to the contents of the platform-agnostic
cpumask tests (including a `Makefile.am`...), but the tests written there
would be platform specific.
`<PLATFORM_SPECIFIC>/Makefile.am` would then trigger the building of the
platform-specific tests (by listing their module name in `SUBDIRS`, and
therefore calling the appropriate `Makefile.am`) and would then run both the
platform-agnostic executable(s) and the platform-specific test executable.
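As an illustration, the platform-side `Makefile.am` could look like the
following sketch. The module and executable names here are hypothetical, not
taken from an actual platform:

```makefile
# <PLATFORM_SPECIFIC>/Makefile.am -- hypothetical sketch

# build the platform specific test modules listed here:
SUBDIRS = cpumask

# run both the platform agnostic and the platform specific executables:
TESTS = $(top_builddir)/test/common_plat/validation/api/cpumask/cpumask_main \
	cpumask/cpumask_platform_main
```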

The shm module of the linux-generic ODP implementation has a validation
test written this way. You can see it at:
`test/linux-generic/validation/api/shmem`

==== Marking validation tests as inactive

The general policy is that a full run of the validation suite (a `make check`)
must pass at all times. However, a particular platform may have one or more
test cases that are known to be unimplemented, either during development or
permanently, so to avoid these test cases being reported as failures it is
useful to be able to skip them. This can be achieved by creating a new test
executable (still on the platform side), giving the platform-specific
initialization code the opportunity to modify the registered tests and mark
the unwanted ones as inactive while leaving the remaining tests active. It is
important that the unwanted tests are still registered with the test
framework, so that the fact that they are not being run is recorded.

The `odp_cunit_update()` function is intended for this purpose: it is used to
modify the properties of previously registered tests, for example to mark them
as inactive. Inactive tests remain registered with the test framework but are
not executed, and will be recorded as inactive in test reports.

In `test/common_plat/validation/api/foo/foo.c`, define all
validation tests for the 'foo' module:

[source,c]
------------------
odp_testinfo_t foo_tests[] = {
	ODP_TEST_INFO(foo_test_a),
	ODP_TEST_INFO(foo_test_b),
	ODP_TEST_INFO_NULL
};

odp_suiteinfo_t foo_suites[] = {
	{"Foo", foo_suite_init, foo_suite_term, foo_tests},
	ODP_SUITE_INFO_NULL
};
------------------

In `<platform>/validation/api/foo/foo_main.c`, register all the tests defined in
the `foo` module, then mark a single specific test case as inactive:

[source,c]
------------------
static odp_testinfo_t foo_tests_updates[] = {
	ODP_TEST_INFO_INACTIVE(foo_test_b),
	ODP_TEST_INFO_NULL
};

static odp_suiteinfo_t foo_suites_updates[] = {
	{"Foo", foo_suite_init, foo_suite_term, foo_tests_updates},
	ODP_SUITE_INFO_NULL
};

int foo_main(void)
{
	int ret = odp_cunit_register(foo_suites);

	if (ret == 0)
		ret = odp_cunit_update(foo_suites_updates);

	if (ret == 0)
		ret = odp_cunit_run();

	return ret;
}
------------------

As a result, `foo_test_a` will be executed while `foo_test_b` is inactive.

It's expected that early in the development cycle of a new implementation the
inactive list will be quite long, but it should shrink over time as more parts
of the API are implemented.

==== Conditional Tests

Some tests may require specific conditions to make sense: for instance, on
pktio, checking that sending a packet larger than the MTU is rejected only
makes sense if, on that ODP implementation, packets can indeed exceed the MTU.
A test can be marked conditional as follows:

[source,c]
------------------
odp_testinfo_t foo_tests[] = {
	...
	ODP_TEST_INFO_CONDITIONAL(foo_test_x, foo_check_x),
	...
	ODP_TEST_INFO_NULL
};

odp_suiteinfo_t foo_suites[] = {
	{"Foo", foo_suite_init, foo_suite_term, foo_tests},
	ODP_SUITE_INFO_NULL
};
------------------

`foo_test_x` is the usual test function. `foo_check_x` is the test
precondition, i.e. a function returning a Boolean (int).
It is called before the test suite is started. If it returns true, the
test (`foo_test_x`) is run; if the precondition function (`foo_check_x` above)
returns false, the test is not relevant (or impossible to perform) and will
be skipped.

=================
*Note*

Conditional tests can be marked as inactive while keeping the precondition
function. Both the test and the precondition function will then be skipped,
and re-activating the test is just a matter of changing the macro back from
`ODP_TEST_INFO_INACTIVE` to `ODP_TEST_INFO_CONDITIONAL`:

[source,c]
------------------
	...
	/* active conditional test */
	ODP_TEST_INFO_CONDITIONAL(foo_test_x, foo_check_x),

	/* inactive conditional test */
	ODP_TEST_INFO_INACTIVE(foo_test_y, foo_check_y),
	...
------------------
=================

==== Helper usage

The tests (both platform-agnostic and platform-dependent ones) make use of a
set of functions defined in a helper library. The helper library tries to
abstract and group common actions that applications may perform but which are
not part of the ODP API (i.e. mostly OS system calls). Using these functions
is recommended, as running the tests on a different OS could (hopefully) be as
simple as switching to the helper library for that OS.

In the Linux helper, two functions are provided to create and join ODP threads:

`odph_odpthreads_create()`

`odph_odpthreads_join()`

These two functions abstract what an ODP thread really is, and their usage
is recommended as they would also be implemented in any other OS's helper
library.

Five older functions exist to tie an ODP thread to a specific implementation:

`odph_linux_pthread_create()`

`odph_linux_pthread_join()`

`odph_linux_process_fork_n()`

`odph_linux_process_fork()`

`odph_linux_process_wait_n()`

These functions should not be used within ODP examples or tests, and their
use in other applications is not recommended.

[[typedefs]]
== ODP Abstract Types and Implementation Typedefs
ODP APIs are defined to be abstract and operate on abstract types. For example,
ODP APIs that perform packet manipulation manipulate objects of type
`odp_packet_t`. Queues are represented by objects of type `odp_queue_t`, etc.

Since the C compiler cannot compile code that has unresolved abstract types,
the first task of each ODP implementation is to decide how it wishes to
represent each of these abstract types and to supply appropriate `typedef`
definitions for them to make ODP applications compilable on this platform.

It is recommended that, however a platform wishes to represent ODP abstract
types, it do so in a strongly typed manner. Using strong types means
that an application that tries to pass a variable of type `odp_packet_t` to
an API that expects an argument of type `odp_queue_t`, for example, will result
in a compilation error rather than some difficult to debug runtime failure.

The *odp-linux* reference implementation defines all ODP abstract types strongly
using a set of utility macros contained in
`platform/linux-generic/include/odp/api/plat/strong_types.h`. These macros
can be used or modified as desired by other implementations to achieve strong
typing of their typedefs.

=== Typedef approaches
ODP abstract types serve two distinct purposes that each implementation must
consider. First, they shield applications from implementation internals, thus
facilitating ODP application portability. Equally important, however, is that
implementations choose typedefs and representations that permit the
implementation to realize ODP APIs efficiently. This typically means that the
handles defined by typedefs are either a pointer to an implementation-defined
struct or else an index into an implementation-defined resource table. The two
LNG-provided ODP reference implementations illustrate both of these approaches.
The *odp-dpdk* implementation follows the former approach (pointers) as this
offers the highest performance. For example, in *odp-dpdk* an
`odp_packet_t` is a pointer to an `rte_mbuf` struct, which is how DPDK
represents packets. The *odp-linux* implementation, by contrast, uses indices
as this permits more robust validation support while still being highly
efficient. In general, software-based implementations will typically favor
pointers while hardware-based implementations will typically favor indices.

=== ABI Considerations
An _Application Binary Interface_ is a specification of the _representation_
of an API that guarantees that applications can move between implementations
of an API without recompilation. ABIs thus go beyond the basic source-code
portability guarantees provided by APIs to permit binary portability as well.

It is important to note that ODP neither defines nor prohibits the specification
of ABIs. This is because ODP itself is an _Abstract API Specification_. As
noted earlier, abstract APIs cannot be compiled in the absence of completion
by an implementation that instantiates them, so the question of ABIs is
really a question of representation agreement between multiple ODP
implementations. If two or more ODP implementations agree on things like
typedefs, endianness, alignments, etc., then they are defining an ABI which
would permit ODP applications compiled to that common set of instantiations
to interoperate at a binary as well as a source level.

==== Traditional ABI
ABIs can be defined at two levels. The simplest ABI is within a specific
Instruction Set Architecture (ISA). So, for example, an ABI might be defined
among ODP implementations for the AArch64 or x86 architectures. This
traditional approach is shown here:

.Traditional ABI Structure
image::abi_traditional.svg[align="center"]

In the traditional approach, multiple target platforms agree on a common set
of typedefs, etc. so that the resulting output from compilation is directly
executable on any platform that subscribes to that ABI. Adding a new platform
in this approach simply requires that platform to accept the existing ABI
specification. Note that since the output of compilation in a traditional ABI
is an ISA-specific binary, applications cannot be binary compatible across
platforms that use different ISAs.

==== Bitcode based ABI
An ABI can also be defined at a higher level by moving to a more sophisticated
tool chain (such as is possible using LLVM) that implements a split
compilation model. In this model, the output from a compiler is not directly
executable. Rather it is a standardized intermediate representation called
_bitcode_ that must be further processed to result in an executable image as
shown here:

.Bitcode ABI Structure
image::abi_llvm.svg[align="center"]

The key to this model is that the platform linking and optimization that is
needed to create a platform executable is a system rather than a developer
responsibility. The developer's output is a universal bitcode binary. From
here, the library system creates a series of _managed binaries_ that result
from performing link-time optimization against a set of platform definitions.
When a universal application is to run on a specific target platform, the
library system selects the appropriate managed binary for that target platform
and loads and runs it.

Adding a new platform in this approach involves adding the definition for that
platform to the library system so that a managed binary for it can be created
and deployed as needed. This occurs without developer involvement since the
bitcode format that is input to this backend process is independent of the
specific target platform. Note also that since bitcode is not tied to any ISA,
applications using bitcode ABIs are binary portable between platforms that use
different ISAs. This occurs without loss of efficiency because the process of
creating a managed binary is itself a secondary compilation and optimization
step. The difference is that performing this step is a system rather than a
developer responsibility.

== Configuration
Each ODP implementation will choose various sizes, limits, and similar
internal parameters that are well matched to its design and platform
capabilities. However, it is often useful to expose at least some of these
parameters and allow users to select specific values to use either
at compile time or runtime. This section discusses options for doing this,
using the configuration options offered in the `odp-linux` reference
implementation as an example.

=== Static Configuration
Static configuration requires the ODP implementation to be recompiled. The
reasons for choosing static configuration vary but can involve both design
simplicity (_e.g.,_ arrays can be statically configured) or performance
considerations (_e.g.,_ including optional debug code). Two approaches to
static configuration are `#define` statements and use of autotools.

==== `#define` Statements
Certain implementation limits can best be represented by `#define` statements
that are set at compile time. Examples of this can be seen in the `odp-linux`
reference implementation in the file
`platform/linux-generic/include/odp_config_internal.h`.

.Compile-time implementation limits (excerpt)
[source,c]
-----
/*
 * Maximum number of CPUs supported. Maximum CPU ID is CONFIG_NUM_CPU - 1.
 */
#define CONFIG_NUM_CPU 256

/*
 * Maximum number of pools
 */
#define ODP_CONFIG_POOLS 64
-----

Here, two fundamental limits are defined: the number of CPUs supported, and
the maximum number of pools that can be created via the `odp_pool_create()`
API. By using `#define`, the implementation can configure supporting
structures (bit strings and arrays) statically, and can also allow static
compile-time validation/consistency checks to be done using facilities like
`ODP_STATIC_ASSERT()`. This results in more efficient code since these limits
need not be computed at runtime.

Users are able to change these limits (potentially within documented absolute
bounds) by changing the relevant source file and recompiling that ODP
implementation.

==== Use of `autotools configure`
The ODP reference implementation, like many open source projects, makes use of
https://www.gnu.org/software/automake/faq/autotools-faq.html[autotools]
to simplify project configuration and support for various build targets.
These same tools permit compile-time configuration options to be specified
without requiring changes to source files.

In addition to the "standard" `configure` options for specifying prefixes,
target install paths, etc., the `odp-linux` reference implementation supports
a large number of static configuration options that control how ODP is
built. Use the `./configure --help` command for a complete list. Here we
discuss just a few for illustrative purposes:

`--enable-debug`::
The ODP API specification simply says that "results are undefined" when
invalid parameters are passed to ODP APIs. This is done for performance
reasons so that implementations don't need to insert extraneous parameter
checking that would impact runtime performance in fast-path operations. While
this is a reasonable trade off, it can complicate application debugging.
To address this, the ODP implementation makes use of the `ODP_ASSERT()` macro
that by default disappears at compile time unless the `--enable-debug`
configuration option was specified. Running with a debug build of ODP trades
off performance for improved parameter/bounds checking to make application
debugging easier.

`--enable-user-guides`::
By default, building ODP only builds the code. When this option is specified,
the supporting user documentation (including this file) is also built.

`--disable-abi-compat`::
By default ODP builds with support for the ODP ABI, which permits application
binary portability across different ODP implementations targeting the same
Instruction Set Architecture (ISA). While this is useful in cloud/host
environments, it does involve some performance cost to provide binary
compatibility. For embedded use of ODP, disabling ABI compatibility means
tighter code can be generated by inlining more of the ODP implementation
into the calling application code. When built without ABI compatibility,
moving an application to another ODP implementation requires that the
application be recompiled. For most embedded uses this is a reasonable
trade off in exchange for better application performance on a specific
target platform.

=== Dynamic Configuration
While compile-time limits have the advantage of simplicity, they are also
not very flexible since they require an ODP implementation to be regenerated
to change them. The alternative is for implementations to support _dynamic
configuration_ that enables ODP to change implementation behavior without
source changes or recompilation.

The options for dynamic configuration include: command line parameters,
environment variables, and configuration files.

==== Command line parameters
Applications that accept a command line passed to their `main()` function can
use it to tailor how they use ODP. This may involve self-imposed limits driven
by the application, or command-line arguments to be passed on to ODP
initialization via the `odp_init_global()` API. The syntax of that API is:
[source,c]
-----
int odp_init_global(odp_instance_t *instance,
		    const odp_init_t *params,
		    const odp_platform_init_t *platform_params);
-----
and the `odp_init_t` struct is used to pass platform-independent parameters
that control ODP behavior, while the `odp_platform_init_t` struct is used to
pass platform-specific parameters. The `odp-linux` reference platform does
not make use of these platform-specific parameters; the `odp-dpdk` reference
implementation, however, uses them to allow applications to pass DPDK
initialization parameters to it.

ODP itself uses the `odp_init_t` parameters to allow applications to supply
override logging and abort functions. These routines are called to perform
those functions on behalf of the ODP implementation, allowing ODP to
interoperate better with application-defined logging and error handling
facilities.

==== Environment variables
Linux environment variables set via the shell provide a convenient means of
passing dynamic configuration values. Each ODP implementation defines which
environment variables it looks for and how they are used. For example, the
`odp-dpdk` implementation uses the variable `ODP_PLATFORM_PARAMS` as an
alternate means of passing DPDK initialization parameters.

Another important environment variable that ODP uses is `ODP_CONFIG_FILE`,
which specifies the file path of a _configuration override file_, as
described in the next section.
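As a sketch, setting both variables before launching an application might
look like this; the parameter string, file path, and application name are
hypothetical:

```shell
#!/bin/sh
# hypothetical values; the variable names come from the text above
ODP_PLATFORM_PARAMS="-n 4"          # extra DPDK initialization parameters
ODP_CONFIG_FILE="$PWD/my-odp.conf"  # configuration override file
export ODP_PLATFORM_PARAMS ODP_CONFIG_FILE

echo "platform params: $ODP_PLATFORM_PARAMS"
# ./my_odp_app  (hypothetical application binary)
```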

==== Configuration files
The https://hyperrealm.github.io/libconfig/[libconfig] library provides a
standard set of APIs and tools for parsing configuration files. ODP uses this
to provide a range of dynamic configuration options that users may
wish to specify.

ODP uses a _base configuration file_ that contains system-wide defaults, and
is located in the `config/odp-linux-generic.conf` file within the ODP
distribution. This specifies a range of overridable configuration options that
control things like shared memory usage, queue and scheduler limits and tuning
parameters, timer processing options, as well as I/O parameters for various
pktio classes.

While users of ODP may modify this base file before building ODP, they can
also supply an override configuration file that sets specific values of
interest while leaving other parameters at their defaults as defined by the
base configuration file. As noted earlier, the `ODP_CONFIG_FILE` environment
variable is used to point to the override file to be used.

=== Summary
There is a place for both static and dynamic configuration in any ODP
implementation. This section described some of the most common and
discussed how the ODP-supplied reference implementations make use of them.
Other ODP implementations are free to copy and/or build on these, or use
whatever other mechanisms are native to the platforms supported by those ODP
implementations.

== Glossary
[glossary]
worker thread::
    A worker is a type of ODP thread. It will usually be isolated from
    the scheduling of any host operating system and is intended for fast-path
    processing with a low and predictable latency. Worker threads will not
    generally receive interrupts and will run to completion.
control thread::
    A control thread is a type of ODP thread. It will be isolated from the
    host operating system's housekeeping tasks but will be scheduled by it
    and may receive interrupts.
ODP instantiation process::
    The process calling `odp_init_global()`, which is probably the
    first process which is started when an ODP application is started.
    There is one single such process per ODP instantiation.
thread::
    The word thread (without any further specification) refers to an ODP
    thread.
ODP thread::
    An ODP thread is a flow of execution that belongs to ODP:
    Any "flow of execution" (i.e. OS process or OS thread) calling
    `odp_init_global()`, or `odp_init_local()` becomes an ODP thread.
    This definition currently limits the number of ODP instances on a given
    machine to one. In the future `odp_init_global()` will return something
    like an ODP instance reference and `odp_init_local()` will take such
    a reference in parameter, allowing threads to join any running ODP instance.
    Note that, in a Linux environment, an ODP thread can be either a Linux
    process or a Linux thread (i.e. a Linux process calling `odp_init_local()`
    will be referred to as an ODP thread, not an ODP process).
event::
    An event is a notification that can be placed in a queue.
queue::
    A communication channel that holds events.