The basics of embedded software testing

For example, suppose you have an if statement without an else part. You would know whether the TRUE condition was tested because you would see that the then statements were executed, but you would not know directly whether the condition was ever evaluated FALSE, because no statements execute in that case. Likewise, if the decision statement combines several conditions, such as if (A && B), decision coverage tells you only the outcome of the whole expression; MCDC (modified condition/decision coverage) would also show you the logical conditions that lead to the decision outcome.
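To make this concrete, here is a small sketch in C (the function, variable names, and threshold are invented for illustration). Statement coverage of check_sensor() proves only that the TRUE paths ran; the three calls in main() form a minimal MC/DC test set for the compound decision, because each condition is shown to independently change the outcome.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sensor check, invented for this example. */
    static void check_sensor(int reading, bool alarm_enabled)
    {
        /* if without an else: seeing the then statements execute proves
           the TRUE case was tested, but says nothing about runs where
           the condition was FALSE and the body was skipped. */
        if (reading > 100) {
            printf("over-range: %d\n", reading);
        }

        /* Compound decision: decision coverage needs the whole expression
           to evaluate both TRUE and FALSE; MCDC additionally requires that
           each condition independently flips the outcome. */
        if (reading > 100 && alarm_enabled) {
            printf("alarm raised\n");
        }
    }

    int main(void)
    {
        check_sensor(150, true);   /* both conditions TRUE: decision TRUE */
        check_sensor(150, false);  /* alarm_enabled alone flips it FALSE  */
        check_sensor(50,  true);   /* reading alone flips it FALSE        */
        return 0;
    }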

Hardware Instrumentation. Emulation memories, logic analyzers, and IDEs are potentially useful for test-coverage measurements. In addition to these three general-purpose tools, special-purpose tools are used just for performance and test-coverage measurements.

Emulation Memory. Some vendors include a coverage bit among the attribute bits in their emulation memory. When a memory location is accessed, its coverage bit is set. One problem with this technique is that it can be fooled by microprocessors with on-chip instruction or data caches.

If a memory section, called a refill line, is read into the cache but only a fraction of it is actually accessed by the program, the coverage-bit test will be overly optimistic in the coverage values it reports. Even so, this is a good upper-limit test and is relatively easy to implement, assuming you have an ICE at your disposal.
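Conceptually, the coverage bit is a one-bit-per-location map that the emulator sets on every access. A software analogue, as a rough sketch with an invented address range, looks like this:

    #include <stdint.h>

    #define CODE_BASE 0x08000000u        /* invented code base address */
    #define CODE_SIZE 0x00010000u        /* 64 KB of code under test   */

    static uint8_t coverage_bits[CODE_SIZE / 8];   /* one bit per byte */

    /* In the real tool, the emulation hardware does this on each access. */
    static void mark_accessed(uint32_t addr)
    {
        uint32_t offset = addr - CODE_BASE;
        if (offset < CODE_SIZE)
            coverage_bits[offset / 8] |= (uint8_t)(1u << (offset % 8));
    }

    /* After the test run: how many code locations were ever touched? */
    static uint32_t locations_covered(void)
    {
        uint32_t count = 0;
        for (uint32_t i = 0; i < CODE_SIZE; i++)
            if (coverage_bits[i / 8] & (1u << (i % 8)))
                count++;
        return count;
    }

The cache problem described above shows up directly in this picture: if a refill touches a whole line, every byte of it gets marked, executed or not.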

Logic Analyzers. Usually, to use a logic analyzer for coverage measurements, you must resort to statistical sampling. For this type of measurement, the logic analyzer is slaved to a host computer. The host computer sends trigger commands to the logic analyzer at random intervals. The logic analyzer then fills its trace buffer without waiting for any other trigger conditions. The trace buffer is uploaded to the computer, where the memory addresses accessed by the processor while the trace buffer was capturing data are added to a database.

Thus, the host computer needs to process a lot of redundant data. For example, when the processor is running in a tight loop, the logic analyzer collects a lot of redundant accesses. If access behavior is sampled over long test runs (the test suite can be repeated to improve sampling accuracy), the sampled coverage begins to converge to the actual coverage.
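A sketch of the host-side bookkeeping (the trace format and sizes are invented; real analyzers upload richer records): each randomly triggered trace is folded into a set of addresses seen so far, and coverage is estimated as the fraction of code addresses in that set.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CODE_WORDS 4096u             /* invented: code size in words */

    static bool seen[CODE_WORDS];        /* addresses observed so far */

    /* Fold one uploaded trace buffer into the coverage database.
       Redundant accesses from tight loops just set the same flag. */
    static void add_trace(const uint32_t *trace, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (trace[i] < CODE_WORDS)
                seen[trace[i]] = true;
    }

    /* The estimate only grows, converging upward toward the truth. */
    static double estimated_coverage(void)
    {
        unsigned hits = 0;
        for (uint32_t a = 0; a < CODE_WORDS; a++)
            hits += seen[a];
        return (double)hits / CODE_WORDS;
    }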

Of course, memory caches also can distort the data collected by the logic analyzer. On-chip caches can mask coverage holes by fetching refill lines that were only partly executed. However, many logic analyzers record additional information provided by the processor. Still, the problem remains that the data capture and analysis process is statistical and might need to run for hours or days to produce a meaningful result.

A good ISR is fast. If an ISR is infrequent, the probability of capturing it during any particular trace event is correspondingly low. Thus, coverage of ISRs and other low-frequency code can be measured by making a separate run through the test suite with the logic analyzer set to trigger and trace just that code.

Software Performance Analyzers. Finally, a hardware-collection tool is commercially available that provides the low-intrusion collection of hardware assist without the logic analyzer's disadvantage of intermittent collection.

Many ICE vendors manufacture hardware-based tools specifically designed for analyzing test coverage and software performance. They are also designed to collect data continuously, so no gaps appear in the data capture as they do with a logic analyzer.

Sometimes these tools come bundled into an ICE; others can be purchased as hardware or software add-ons for the basic ICE. These tools are described in more detail later in this series.

Performance Testing. The last type of testing to discuss in this series is performance testing. It is discussed last because performance testing, and consequently performance tuning, are important not only as part of your functional testing but also as tools for the maintenance and upgrade phase of the embedded life cycle. Performance testing is crucial for embedded system design and, unfortunately, is the one type of software characterization test that is most often ignored.

Measuring performance is one of the most crucial tests you need to make on your embedded system. A common response to a performance shortfall is simply to specify faster hardware. For products that are incredibly cost sensitive, however, this is an example of engineering at its worst. Why overdesign a system with a faster processor and more and faster RAM and ROM, which adds to the manufacturing cost, lowers the profit margin, and makes the product less competitive, when the solution may be as simple as finding and eliminating the hot spots in the code?

On any cost-sensitive embedded system design, one of the most dramatic events is the decision to redesign the hardware because you believe you are at the limit of performance gains from software redesign. Mostly, this is a gut decision rather than a decision made on hard data. On many occasions, intuition fails. Modern software, especially in the presence of an RTOS, is extremely difficult to fully unravel and understand.

How to Test Performance. In performance testing, you are interested in the amount of time that a function takes to execute. Many factors come into play here; anything from interrupts and RTOS task switches to cache hits and misses can change the execution time each time the function is executed. Thus, the best you can hope for is some statistical measure of the minimum, maximum, average, and cumulative execution times for each function that is of interest. The figure below shows a performance analysis test tool, which uses software instrumentation to provide the stimulus for the entry-point and exit-point measurements.

These tags can be collected via hardware tools or RTOS services.

Figure: Performance analysis tool display showing the minimum, maximum, average, and cumulative execution times for the functions shown in the leftmost column.
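In C, such entry-point and exit-point tags might look like the following sketch (read_cycle_counter() is an assumed free-running timer on the target; real tools typically emit the tags to a trace port or the RTOS rather than computing statistics in place):

    #include <stdint.h>

    /* Assumed: a free-running cycle counter provided by the target. */
    extern uint32_t read_cycle_counter(void);

    typedef struct {
        uint32_t min, max;     /* shortest / longest observed run  */
        uint64_t total;        /* cumulative time across all calls */
        uint32_t calls;        /* invocation count                 */
    } perf_stats_t;

    static void perf_enter(uint32_t *start)
    {
        *start = read_cycle_counter();            /* entry-point tag */
    }

    static void perf_exit(perf_stats_t *s, uint32_t start)
    {
        uint32_t elapsed = read_cycle_counter() - start;  /* exit tag */
        if (s->calls == 0 || elapsed < s->min) s->min = elapsed;
        if (elapsed > s->max) s->max = elapsed;
        s->total += elapsed;               /* average = total / calls */
        s->calls++;
    }

Instrumenting a function of interest then takes two calls: perf_enter() on the way in and perf_exit() on the way out.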

Dynamic Memory Use. Dynamic memory use is another valuable test provided by many of the commercial tools. Watching a memory leak or creeping heap fragmentation develop during a test run is infinitely preferable to dealing with a nonreproducible system lock-up once every two or three weeks.
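A sketch of the bookkeeping such a tool performs, written here as allocator wrappers (real tools also record call sites and allocation lifetimes; the size header below is just so the wrapper knows how much each free returns):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Size header, padded so the returned pointer stays aligned. */
    typedef union { size_t size; max_align_t pad; } hdr_t;

    static size_t live_blocks;   /* allocations not yet freed      */
    static size_t live_bytes;    /* bytes currently outstanding    */
    static size_t peak_bytes;    /* high-water mark of heap demand */

    void *dbg_malloc(size_t n)
    {
        hdr_t *h = malloc(sizeof(hdr_t) + n);
        if (!h) return NULL;
        h->size = n;
        live_blocks++;
        live_bytes += n;
        if (live_bytes > peak_bytes) peak_bytes = live_bytes;
        return h + 1;
    }

    void dbg_free(void *p)
    {
        if (!p) return;
        hdr_t *h = (hdr_t *)p - 1;
        live_blocks--;
        live_bytes -= h->size;
        free(h);
    }

    /* Call at a point where everything should have been released. */
    void dbg_report(void)
    {
        printf("%zu blocks / %zu bytes still allocated (peak %zu)\n",
               live_blocks, live_bytes, peak_bytes);
    }

A steadily growing live_blocks count across test iterations is the signature of a leak.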

The figure below shows one such memory management test tool.

Figure: Memory management test tool.

Note from the trenches: Performance testing and coverage testing are not entirely separate activities. Coverage testing not only uncovers the amount of code your test is exercising, it also shows you code that is never exercised (dead code) that could easily be eliminated from the product. Dead code creeps in easily: on one project, for example, the command file worked well enough that no one bothered to remove some of the extraneous libraries that it pulled in.

Thus, you can see how coverage testing can provide you with clues about where you can excise code that does not appear to be participating in the program. Excising it will probably have no effect on the code that remains. I say probably because on some architectures, the dead code can force the compiler to generate more time-consuming long jumps and branches.

Moreover, larger code images and more frequent jumps can certainly affect cache performance.

Conceptually, performance testing is straightforward. You use the link map file to identify the memory addresses of the entry points and exit points of functions. You then watch the address bus and record the time whenever you have address matches at these points.

However, suppose your function calls other functions, which call more functions. What is the elapsed time for the function you are trying to measure? Also, if interrupts come in when you are in a function, how do you factor that information into your equation? Fortunately, the commercial tool developers have built in the capability to unravel even the gnarliest of recursive functions.
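One common approach (a sketch, with invented function IDs and an assumed target timer; depth checking is omitted for brevity) is to keep a shadow call stack and charge elapsed time to whichever function is currently on top, so a callee's time is never billed to its caller:

    #include <stdint.h>

    #define MAX_DEPTH 64
    #define MAX_FUNCS 256

    extern uint32_t read_cycle_counter(void);  /* assumed target timer */

    static struct { int func; uint32_t since; } stack[MAX_DEPTH];
    static int top = -1;
    static uint64_t self_time[MAX_FUNCS];  /* time per function,
                                              excluding its callees */

    /* Entry-point address match: pause the caller, start the callee. */
    void on_entry(int func)
    {
        uint32_t now = read_cycle_counter();
        if (top >= 0)
            self_time[stack[top].func] += now - stack[top].since;
        top++;
        stack[top].func = func;
        stack[top].since = now;
    }

    /* Exit-point address match: bill the callee, resume the caller. */
    void on_exit(void)
    {
        uint32_t now = read_cycle_counter();
        self_time[stack[top].func] += now - stack[top].since;
        top--;
        if (top >= 0)
            stack[top].since = now;     /* caller's clock restarts */
    }

Because each stack frame carries its own start time, the scheme handles recursion naturally, and an interrupt handler can be treated as just another entry/exit pair.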

Hardware-based tools provide an attractive way to measure software performance.

Def-use analysis can be performed on a program using iterative algorithms; a def-use pair links a statement that defines (assigns) a variable's value to a statement that uses that value. Data flow testing chooses tests that exercise chosen def-use pairs.

The test first causes a certain value to be assigned at the definition and then observes the result at the use point to be sure that the desired value arrived there. Frankl and Weyuker [Fra88] have defined criteria for choosing which def-use pairs to exercise to satisfy a well-behaved adequacy criterion.
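A tiny invented example: classify() below has two definitions of x and one use, and each assertion in main() exercises a different def-use pair by steering a particular defined value to the use point.

    #include <assert.h>

    static int classify(int a)
    {
        int x = 0;           /* d1: first definition of x  */
        if (a > 0)
            x = a * 2;       /* d2: second definition of x */
        return x + 1;        /* u1: use of x               */
    }

    int main(void)
    {
        assert(classify(3) == 7);    /* exercises pair (d2, u1) */
        assert(classify(-1) == 1);   /* exercises pair (d1, u1) */
        return 0;
    }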

Testing Loops

We can write some specialized tests for loops. Since loops are common and often perform important steps in the program, it is worth developing loop-centric testing methods. If the number of iterations is fixed, then testing is relatively simple. However, many loops have bounds that are computed at run time, and it would be too expensive to evaluate such a loop for all possible termination conditions. There are, however, several important cases that we should try at a minimum. Consider first the case of a single loop, such as the sketch below.
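As a representative stand-in (invented for illustration), here is a single loop whose bound is known only at run time:

    /* A single loop whose iteration count n is computed at run time. */
    void scale(int *data, int n)
    {
        for (int i = 0; i < n; i++)
            data[i] *= 2;
    }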

For such a loop, the important minimum cases are: skipping the loop entirely (zero iterations); exactly one iteration; two iterations; a typical mid-range number of iterations; and, if the loop has a maximum bound n, one fewer than, exactly, and one more than n iterations.

There are many possible strategies for testing nested loops. One thing to keep in mind is which loops have fixed versus variable numbers of iterations. Beizer [Bei90] suggests an inside-out strategy for testing loops with multiple variable iteration bounds.

First, concentrate on testing the innermost loop as above; the outer loops should be controlled to their minimum numbers of iterations. After the inner loop has been thoroughly tested, the next outer loop can be tested more thoroughly, with the inner loop executing a typical number of iterations. This strategy can be repeated until the entire loop nest has been tested.

Clearly, nested loops can require a large number of tests. It may be worthwhile to insert testing code to allow greater control over the loop nest for testing.

Black-Box Testing

Black-box tests are generated without knowledge of the code being tested.

When used alone, black-box tests have a low probability of finding all the bugs in a program. But when used in conjunction with clear-box tests, they help provide a well-rounded test set, since black-box tests are likely to uncover errors that are unlikely to be found by tests extracted from the code structure. Black-box tests can really work. For instance, when asked to test an instrument whose front panel was run by a microcontroller, one acquaintance of the author used his hand to depress all the buttons simultaneously.

The front panel immediately locked up. This situation could occur in practice if the instrument were placed face-down on a table, but discovery of this bug would be very unlikely via clear-box tests.

One important technique is to take tests directly from the specification for the code under design. The specification should state which outputs are expected for certain inputs. Tests should be created that apply the specified inputs and evaluate whether the results satisfy the specified outputs.

We can't test every possible input combination, but some rules of thumb help us select reasonable sets of inputs. When an input can range across a set of values, it is a very good idea to test at the ends of the range. For example, if an input must be between 1 and 10, then 0, 1, 10, and 11 are all important values to test.
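A sketch of those boundary tests against a hypothetical validator (the function is invented for illustration):

    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical range check for an input that must be 1..10. */
    static bool in_range(int x) { return x >= 1 && x <= 10; }

    int main(void)
    {
        assert(!in_range(0));    /* just below the lower bound */
        assert( in_range(1));    /* the lower bound itself     */
        assert( in_range(10));   /* the upper bound itself     */
        assert(!in_range(11));   /* just above the upper bound */
        return 0;
    }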

We should be sure to consider tests both within and outside the valid range, including values well outside it as well as boundary-condition tests.

Random tests form one category of black-box test. Random values are generated with a given distribution. The expected values are computed independently of the system, and then the test inputs are applied.

A large number of tests must be applied for the results to be statistically significant, but the tests are easy to generate.
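A sketch of such a harness: inputs are drawn from a distribution, expected values come from an independent reference model, and the two are compared (both functions here are invented for illustration).

    #include <assert.h>
    #include <stdlib.h>

    /* Unit under test: branch-free absolute value.
       Assumes 32-bit int with arithmetic right shift. */
    static int fast_abs(int x) { int m = x >> 31; return (x ^ m) - m; }

    /* Independent reference model supplying expected values. */
    static int ref_abs(int x)  { return x < 0 ? -x : x; }

    int main(void)
    {
        srand(12345);                      /* fixed seed: reproducible */
        for (long i = 0; i < 100000; i++) {
            int x = rand() - RAND_MAX / 2; /* roughly uniform around 0 */
            assert(fast_abs(x) == ref_abs(x));
        }
        return 0;
    }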

Another scenario is to test certain types of data values. For example, integer-valued inputs can be generated at interesting values such as 0, 1, and values near the maximum end of the data range. Illegal values can be tested as well.

Regression tests form an extremely important category of tests. When tests are created during earlier stages in the system design or for previous versions of the system, those tests should be saved to apply to the later versions of the system. Clearly, unless the system specification changed, the new system should be able to pass old tests.

In some cases old bugs can creep back into systems, such as when an old version of a software module is inadvertently installed. In other cases regression tests simply exercise the code in different ways than would be done for the current version of the code and therefore possibly exercise different bugs.

Some embedded systems, particularly digital signal processing systems, lend themselves to numerical analysis.

Signal processing algorithms are frequently implemented with limited-range arithmetic to save hardware costs. Aggressive data sets can be generated to stress the numerical accuracy of the system. These tests can often be generated from the original formulas without reference to the source code.

Evaluating Function Tests

How much testing is enough? Horgan and Mathur [Hor96] evaluated the coverage of two well-known programs, TeX and awk. They used functional tests for these programs that had been developed over several years of extensive testing.

Upon applying those functional tests to the programs, they obtained the code coverage statistics shown in the figure below. The columns refer to various types of test coverage: block refers to basic blocks, decision to conditionals, p-use to a use of a variable in a predicate (decision), and c-use to variable use in a nonpredicate computation.

These results are at least suggestive that functional testing does not fully exercise the code and that techniques that explicitly generate tests for various pieces of code are necessary to obtain adequate levels of code coverage.

Methodological techniques are important for understanding the quality of your tests. For example, if you keep track of the number of bugs found each day, the data you collect over time should show you some trends: the number of errors per page of code to expect on average, how many bugs are caught by certain kinds of tests, and so on.

One interesting method for analyzing the coverage of your tests is error injection. First, take your existing code and add bugs to it, keeping track of where the bugs were added. Then run your existing tests on the modified program. By counting the number of added bugs your tests found, you can get an idea of how effective the tests are in uncovering the bugs you haven't yet found. For example, if the tests catch 10 of 20 injected bugs, it is reasonable to suspect they catch only about half of the naturally occurring bugs as well.

This method assumes that you can deliberately inject bugs that are of similar varieties to those created naturally by programming errors. If the bugs are too easy or too difficult to find, or simply require different types of tests, then bug injection's results will not be relevant. Of course, it is essential that you finally use the correct code, not the code with added bugs.

Performance Testing

Because embedded systems often have real-time deadlines, we must concern ourselves with testing for performance, not just functionality. Performance testing determines whether the required result was generated within a certain amount of time.

In many cases, we are interested in the worst-case execution time, although in some cases we may want to verify the best-case or average-case execution time. Performance analysis is very important here: it can tell us which path causes the worst-case (or other case of interest) execution time.


