
The Magic of Macros

Authors: András Gáspár, Dr. László Gianone, Dr. Gábor Tevesz

Paper - Embedded Software Engineering Kongress 2015

 

Developing software for embedded systems restricts the use of possibilities that are trivially available in a PC environment. One such field is the handling of consistent data structures, e.g. identifying all CAN messages and applying their attributes consistently. One way of managing this type of configuration is to create PC-based configurator applications that generate all required embedded code constructs. But there is another way: using the preprocessor for the same purpose.

Software development for embedded systems in series production usually suffers from a shortage of resources, since hardware design is always optimized to save costs. Thus, RAM and ROM usage as well as the necessary CPU load must be kept as low as possible. Optimizing the usage of resources can be challenging, and advanced software development methods may be required. This is especially true if reusable code parts shall be included in the software.

Since reused software modules always contain parts that are not needed in the current project, support for conditionally compiled source code – i.e. the use of compiler switches – is obviously necessary. But it is not only the intense use of compiler switches that makes source code hard to maintain. Some software modules handle a relatively large amount of data. If the large data structures contain consistent segments, it can be rather difficult to manage them manually.

Typical functions of this type found in an embedded system are ECU diagnostics, NVRAM parameter handling, layout of communication channels, handling of diagnostic trouble codes etc. Though the actual implementation of these software modules can always be evaluated based on the agreed guidelines of the project, the proper maintenance of all data can be problematic if the data content changes on a frequent basis.

Instead of direct source code editing, storing all necessary data attributes in a proper database and generating all related source code sections for the embedded CPU with a code generator tool can be a clean and preferred solution in the long term. Due to the practically unlimited data storage and programming capabilities of a PC environment, it is easy to find or even create a source code generator tool. However, the option to use a source code generator tool is not always available.

If an embedded application is related to safety-critical functions, further requirements must be fulfilled, which limits the available development methods even more. Apart from following strict software coding guidelines, there may be severe restrictions regarding the build and maintenance environment. One of the most common restrictions concerns assuring the reliability of the tools used. This means that not only the development of the source code of the embedded system itself must be secured, but also all tools that actively participate in the build environment must be properly certified.

The ISO 26262 standard defines the necessary level of certification of the tools used in the development of safety-critical applications; see part 8, section 11: Confidence in the use of software tools. If a system features components at ASIL-C or ASIL-D, achieving the expected confidence in a generator tool is rather difficult. This confidence must be verified for the compilation toolchain creating the binaries to be downloaded into the flash of the embedded system. However, the final decision of a project may be to completely reject the use of further code generator tools. In this case, all source files must be maintained manually, meeting all rules of the regular software development process.

If a complex software structure has to be maintained manually, without the active support by any tool, there is a high risk of making mistakes. Even if a proper development process, appropriate reviews and tests are defined to avoid possible failures, software development may become difficult.

This article presents a possible solution for software projects written in ANSI C to resolve or mitigate the conflicting requirements of optimization, safety and maintenance. The central idea is to generate source code by using the C preprocessor. Since the C preprocessor is part of the already certified C compiler, its operation is reliable and its use is allowed even in the most safety-critical applications.

The Magic of Macros

To demonstrate how the C preprocessor can perform source code generation, we analyze a simple example of keeping data structures consistent. File a.c contains a list of constants which shall match their index names, defined in file a.h as literals of an enumeration:

a.c

const uint16 ad[A_SIZE] = {
  15,
  8
};

a.h

enum {
  A_1,
  A_2,
  A_SIZE
};

Certainly, our target is to gain access to the constant values by means of the literals:

x = ad[A_2];

To be able to generate both the constant array and the enumeration from the same source in an easily maintainable way, we need an assignment between indices and values. This assignment can be expressed in C syntax as follows:

MAC(A_1, 15)

MAC(A_2, 8)

This looks like a function call, but the literal MAC can also be understood as a macro name by the C preprocessor. By using a C preprocessor macro instead of a direct C function call, calculations can be performed during compilation instead of consuming the computational power of the target CPU.
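As a minimal illustration of such compile-time calculation (the names MS_PER_TICK and TICKS are our own and not part of the article's example), a macro can fold arithmetic into a constant expression that never executes on the target:

```c
#include <assert.h>

/* Illustrative sketch: MS_PER_TICK and TICKS are hypothetical names.
   Because TICKS(ms) expands to a constant expression, the division is
   evaluated during compilation and costs no cycles on the target CPU. */
#define MS_PER_TICK 10u
#define TICKS(ms)   ((ms) / MS_PER_TICK)

/* The result is usable wherever a compile-time constant is required,
   e.g. as an array dimension: */
static unsigned char timeout_log[TICKS(500u)];
```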

Now the question is how to transfer the information from these C macros to real C code. The key is the #include directive of the C preprocessor, which technically simply merges the referenced file contents at the current location. The macros above shall be moved to a dedicated file mac.inc, and instead of the data, this file shall be merged into the original files via #include directives:

a.c

#define MAC(a,b) b,

const uint16 ad[A_SIZE] = {
  #include "mac.inc"
};

a.h

#define MAC(a,b) a,

enum {
  #include "mac.inc"
  A_SIZE
};

mac.inc

MAC(A_1, 15)
MAC(A_2, 8)

To gain working source code, the macro MAC must be defined by #define directives before the #include directives are reached. The required definitions can also be seen above.

Source file a.c must include its header a.h, because the definition of A_SIZE is necessary for efficient compilation. In this case, the first macro definition of MAC inherited from file a.h must be removed by an #undef directive before the new definition can be made in a.c:

#include "a.h"
...
#undef MAC
#define MAC(a,b) b,

const uint16 ad[A_SIZE] = {
  #include "mac.inc"
};

 

It is also possible to define partially different behavior for specific items. In case the macro MAC cannot always configure all necessary details, a further macro EXT can be introduced in mac.inc:

a.c

#include "a.h"
...
#undef MAC
#undef EXT
#define MAC(a,b) b,
#define EXT(a,b,c) b+c,

const uint16 ad[A_SIZE] = {
  #include "mac.inc"
};

mac.inc

MAC(A_1, 15)
MAC(A_2, 8)
EXT(A_3, 2, 4)

This is identical to:

a.c

const uint16 ad[3] = {
  15,
  8,
  2+4,
};

 

At this point, file a.c has already become rather hard to follow. Even more macros and more data structures in the same file would make the source code even more difficult to understand. Therefore, further restructuring is suggested so that such a structure can be used efficiently. We introduce a new file a.inc for all macro definitions:

a.inc

#ifdef MAC
  #undef MAC
#endif
#ifdef DEF_A_TABLE
  #define MAC(a,b) b,
#elif defined DEF_A_INDEX
  #define MAC(a,b) a,
#else
  #define MAC(a,b)
#endif

#ifdef EXT
  #undef EXT
#endif
#ifdef DEF_A_TABLE
  #define EXT(a,b,c) b+c,
#elif defined DEF_A_INDEX
  #define EXT(a,b,c) a,
#else
  #define EXT(a,b,c)
#endif

#include "mac.inc"

 

Now, the original source code becomes this simple:

a.c

#include "a.h"
...

const uint16 ad[A_SIZE] = {
  #define DEF_A_TABLE
  #include "a.inc"
  #undef DEF_A_TABLE
};

a.h

enum {
  #define DEF_A_INDEX
  #include "a.inc"
  #undef DEF_A_INDEX
  A_SIZE
};

At this stage, we have eliminated the manual maintenance of consistent data in the original source files, yet they remain readable. All variable information is collected in one common file, mac.inc, which is very simple to maintain. By simply adding a new macro statement to this file or removing an old one, all dependent source code constructs are updated automatically without requiring further attention. The macro definition file a.inc is also well structured, and it is easy to add or remove a macro definition group.
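The complete structure above can be condensed into one compilable sketch. Here the contents of mac.inc are kept in a single list macro so the example is self-contained; A_ENTRIES is our own illustrative name, not part of the original structure:

```c
#include <assert.h>

typedef unsigned short uint16;

/* The entry list plays the role of mac.inc (A_ENTRIES is a name
   introduced only for this sketch). */
#define A_ENTRIES \
    MAC(A_1, 15)  \
    MAC(A_2, 8)

/* First expansion: generate the index enumeration, as in a.h. */
#define MAC(a,b) a,
enum { A_ENTRIES A_SIZE };
#undef MAC

/* Second expansion: generate the constant table, as in a.c. Both
   constructs are derived from the same list, so adding or removing an
   entry keeps them consistent automatically. */
#define MAC(a,b) b,
static const uint16 ad[A_SIZE] = { A_ENTRIES };
#undef MAC
```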

Operating with macros in this way implements real source code generation. This means that not only constant arrays and enumerations can be maintained this way; by using #include in a different context, real executable code can also be created. The complexity of the generated output can be scaled up considerably. However, the more complicated the structure definitions, the more difficult the software becomes to trace in a debugger.
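As a sketch of generating executable code with the same mechanism (all names here are illustrative and not taken from the article's example), the list macro can also emit the cases of a switch statement that maps each index to its printable name:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical signal list in the style of mac.inc. */
#define SIGNALS      \
    SIG(SIG_SPEED)   \
    SIG(SIG_TEMP)

/* First expansion: index enumeration. */
#define SIG(name) name,
enum { SIGNALS SIG_COUNT };
#undef SIG

/* Second expansion inside a function body: each SIG(name) becomes a
   complete case label returning the stringized identifier, i.e. real
   executable code. */
static const char *signal_name(int id)
{
    switch (id) {
#define SIG(name) case name: return #name;
    SIGNALS
#undef SIG
    default:
        return "?";
    }
}
```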

Evaluation of Implementation

Looking at the original source files and comparing them to the final code, the increase in complexity is obvious. Without experience in using preprocessor statements in such a way, the source code is more difficult to understand. Debugging software parts, e.g. constant tables generated by macros, can also be somewhat awkward.

On the other hand, the original target has been accomplished - source code with consistent data structures can be generated by means of a certified tool. Furthermore, maintenance of the data becomes really simple. Even complicated relations and many parallel structures can be easily maintained now, and potential human mistakes can be reduced dramatically.

It is also important to notice the allocation of data in the source code. While the original solution contains data distributed over different files, the new structure keeps all data in one common place. Furthermore, the centralized data does not contain any additional overhead, which can often not be avoided due to the limitations of the ANSI C syntax. This way, the file mac.inc becomes a very clean configuration file for the current project.

To be able to assess whether using this technique gives real benefits or only makes the source code more complex, the expected amount of data must be considered.  Convincing advantages can be experienced in case of large data quantities that must be updated frequently over the lifetime of a software project.

Using macros and #include directives in this way is unusual – but not prohibited. Syntactically, it is fully compliant with the ANSI C standard. In practice, all tested C compilers and static code analyzer tools accept these statements without errors or warnings.

Considering the regulations of MISRA C 2012, three guidelines are violated:

  1. MISRA 2012, Dir 4.9 (advisory): A function should be used in preference to a function-like macro where they are interchangeable.
  2. MISRA 2012, Rule 20.1 (advisory): #include directives should only be preceded by preprocessor directives or comments
  3. MISRA 2012, Rule 20.5 (advisory): #undef should not be used

 

It is important to note that all affected rules are advisory. Advisory rules are defined by MISRA this way:

6.2.3 Advisory guidelines
These are recommendations. However, the status of "advisory" does not mean that these items can be ignored, but rather that they should be followed as far as is reasonably practical. Formal deviation is not necessary for advisory guidelines but, if the formal deviation process is not followed, alternative arrangements should be made for documenting non-compliances. […]

This means that the presented structure is not fully in line with the MISRA recommendations; however, the violations are only minor, and based on a decision by software quality assurance, these deviations can be accepted.

Summary

The message of this article is not the invention of a new guideline. It should not be considered an explicitly recommended solution for state-of-the-art software technology. The new software structure can be considered more difficult to understand and to debug. However, as long as it is difficult to integrate source code generator tools in safety-critical embedded applications, such or similar "compromises" may be accepted. A project decision may be either to accept or to reject such a programming style.

Acceptance of the method can be based on individual preference, but the benefits are undeniable. A well-structured overview of most constant data elements is achieved, clearly separated from the core software functions. All other software parts can remain completely untouched in case of data-related modifications.

The clean and well-separated structure offers the most essential benefit: reviewing and testing modifications of the common data file is straightforward and simple. In the long term, this faster and more reliable software modification practice can lead to professionally organized and well-managed software development.

Abbreviations

ASIL – Automotive Safety Integrity Level (ISO 26262)

ECU – Electronic Control Unit

NVRAM – Non-Volatile Random Access Memory

 

Bibliography

ISO 26262: Road vehicles – Functional safety

MISRA C 2012: Guidelines for the use of the C language in critical systems

 

 
