
Verification Planning

CHAPTER 2: VERIFICATION PLANNING

As stated in the previous chapter—and in several other published works—more effort is required to verify a design than to write the RTL code for it. As early as 1974, Brian Kernighan, co-author of The C Programming Language, stated that "Everyone knows debugging is twice as hard as writing a program in the first place." A lot of effort goes into specifying the requirements of the design. Given that verification is a larger task, even more effort should go into specifying how to make sure the design is correct.

Every design team signs up for first-time success. No one plans for failures and multiple design iterations. But how is first-time success defined? How can resources be appropriately allocated to ensure critical functions of the design are not jeopardized without a definition of what functionality is critical? The verification plan is that specification. And that plan must be based on the intent of the design, not its implementation. Of course, corner cases created by the implementation, which are not apparent from the initial intent, have to be verified, but that should be done once the initial intent-based verification plan has been completed.

This chapter will be of interest to verification lead engineers and project managers. It will help them define the project requirements, allocate resources, create a work schedule and track progress of the project over time. Examples in this chapter are based on the OpenCore Ethernet IP Core Specification, Revision 1.19, November 27, 2002. This document can be found in the Examples section of the companion Web site: http://vmm-sv.org

PLANNING PROCESS

The traditional approach of writing verification plans should be revised to take advantage of new verification technologies and methodologies. The new verification constructs in SystemVerilog offer new promises of productivity, but only if the verification process is designed to take advantage of them. Individual computers, like individual language features, can improve the productivity of the person sitting in front of them. But taking advantage of the network, like the synergies that exist among those language features, can be achieved only by redesigning the way an entire business processes information; and it can dramatically improve overall efficiency.

The traditional verification planning process—when there is one—involves identifying testcases targeting a specific function of the design and describing a specific set of stimulus to apply to the design. Sometimes, the testcase may be self-checking and looks for specific symptoms of failures in the output stream observed from the design. Each testcase is then allocated to individual engineers for implementation. Other than low-level bus-functional models directly tied to the design's interfaces, little is reused between testcases. This approach to verification is similar to specifying a complete design block by block and hoping that, once put together, it will meet all requirements.

A design is specified in stages: first requirements, then architecture and finally detailed implementation. The verification planning process should follow similar steps. Each step may be implemented as separate cross-referenced documents, or by successive refinement of a single document.
Functional Verification Requirements

The purpose of defining functional verification requirements is to identify the verification requirements necessary for the design to fulfill the intended function. These requirements will form the basis from which the rest of the verification planning process will proceed. These requirements should be identified as early as possible in the project life cycle, ideally while the architectural design is being carried out. It should be part of a project's technical assessment reviews.

It is recommended that the requirements be identified and reviewed by a variety of stakeholders from both inside and outside the project team. The contributors should include experienced design, verification and software engineers so that the requirements are defined from a hardware and a software perspective. The reviews are designed to ensure that the identified functional verification requirements are complete.

Rule 2-1 — A definition of what the design does shall be specified.

Defining what the design does—what type of input patterns it can handle, what errors it can sustain—is part of the verification requirements. These requirements ensure that the design implements the intended functionality. These requirements are based on a functional specification document of the design agreed upon by the design and verification teams. These requirements are outlined, separate from the functional specification document.

Example 2-1. Ethernet IP Core Verification Requirements
R3.1/14/0  Packets are limited to MAXFL bytes
R3.1/13/0  Does not append a CRC
R3.1/13/1  Appends a valid CRC
R4.2.3/1   Frames are transmitted

Rule 2-2 — A definition of what the design must not do shall be specified.

Defining what makes the behavior of the design incorrect is also part of the verification requirements. These requirements ensure that functional errors in the design will not go unnoticed. The section titled "Response Checking" on page 31 specifies guidelines on how to look for errors.

A functional specification document is concerned with specifying the intended behavior. That is captured by Rule 2-1. Verification is concerned with detecting errors. But it can only detect errors that it is looking for. The verification requirements must outline which errors to look for. There is an infinite number of ways something can go wrong. The verification requirements enumerate only those errors that are relevant and probable, given the functionality and architecture of the design.

Rule 2-3 — Any functionality not covered by the verification process shall be defined.

It is not possible to verify the behavior of the design under conditions it is not expected to experience in real life. The conditions considered to be outside the usage space of the design must be outlined to clearly delineate what the design is and is not expected to handle.

For example, a datacom design may be expected to automatically recover from a parity error, a complete failure of the input protocol or a loss of power. But a processor may not be expected to recover from executing invalid instruction codes or a loss of program memory.
Example 2-2. Ethernet IP Core Verification Requirements
R3.1/9/0   Frames are lost only if attempt limit is reached
R4.2.3/2   Frames are transmitted in BD order

Rule 2-4 — Requirements shall be uniquely identified.

Each verification requirement must have a unique identifier. That identifier can then be used in cross-referencing the functional specification document, testcase implementation and functional coverage points.

Rule 2-5 — Requirement identifiers shall never be reused within the same project.

As the project progresses and the design and verification specifications are modified, design requirements will be added, modified or removed. Corresponding verification requirements will have to be added, modified or removed. When adding new requirements, never reuse the identifier of previously removed verification requirements to avoid confusion between the obsolete and new requirements.

Rule 2-6 — Requirements shall refer to the design requirement or specification documents.

The completeness of the functional verification requirements is a critical aspect of the verification process. Any verification requirement that is missing may cause a functional failure in the design to go unnoticed. Cross-referencing the functional verification requirements with the design specification will help ensure that all functional aspects of the design are included in the verification plan.

Furthermore, verification is complex enough without verifying something that is not ultimately relevant to the final design. If something is not specified, don't verify it. Do not confuse this idea with an incomplete specification. The former is a "don't care." The latter is a problem that must be fixed.

Recommendation 2-7 — Requirements should be ranked.

Not all requirements are created equal. Some are critical to the correct operation of the design, others may be worked around if they fail to operate. Yet others are optional and included for speculative functionality of the end product.

Ranking the requirements lets them be prioritized. Resources should be allocated to the most important requirements first. The decision to tape-out the design should similarly be taken when the most important functional verification requirements have been met.

Example 2-3. Ethernet IP Core Verification Requirements Ranking
R3.1/14/0  Packets are limited to MAXFL bytes   SHOULD
R3.1/14/1  Packets can be up to 64kB            SHOULD
R3.1/14/2  Packets can be up to 1500 bytes      MUST

Recommendation 2-8 — Requirements should be ordered.

Many requirements depend on the correct operation of other requirements. The latter requirements must be verified first. Dependencies between requirements should be documented.

For example, verifying that all configuration registers can be correctly written to must be completed before verifying the different configurations.

Example 2-4. Ethernet IP Core Verification Requirements Order
R4.2.3/1   Frames are transmitted
R3.1/13/0  Does not append a CRC
R3.1/14/2  Packets can be up to 1500 bytes
R3.1/13/1  Appends a valid CRC

Recommendation 2-9 — The requirements should be translated into a functional coverage model.

A functional coverage model allows the automatic tracking of the progress of the verification implementation. This model also enables a coverage-driven verification strategy that can leverage automation in the verification environment to minimize the amount of code necessary to implement all of the requirements of the functional verification. More details on functional coverage modeling and coverage-driven verification can be found in Chapter 6. For example, Example 6-8 shows how the requirements shown in Example 2-5 can be translated into a functional coverage model.

Example 2-5. Ethernet IP Core Verification Requirements Coverage Model
R3.1/14/0  Packets are limited to MAXFL bytes
           - At least one packet transmitted with length: MAXFL-4, MAXFL-1, MAXFL, MAXFL+1, MAXFL+4, 65535
           - MAXFL set to: Default (1536), 1518, 1500
           - HUGEN set to: 0, 1
           - Cross coverage of frame length x MAXFL x HUGEN value
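As a sketch of how such a requirement might map onto SystemVerilog coverage constructs, the covergroup below samples the transmitted frame length together with the MAXFL and HUGEN settings. The class, variable and bin names are invented for illustration (the 65535 case is omitted for brevity); the actual model is developed in Chapter 6 (Example 6-8).

    class eth_tx_coverage;
      // Values sampled by the environment each time a frame has been transmitted
      int unsigned frame_len;
      int unsigned maxfl;
      bit          hugen;

      covergroup cg_tx_frame;
        // Frame length expressed relative to the configured MAXFL
        len: coverpoint signed'(frame_len - maxfl) {
          bins minus4 = {-4};
          bins minus1 = {-1};
          bins at_max = {0};
          bins plus1  = {1};
          bins plus4  = {4};
        }
        maxfl_cfg: coverpoint maxfl {
          bins dflt     = {1536};
          bins ieee_max = {1518};
          bins payload  = {1500};
        }
        huge_en: coverpoint hugen;
        // Cross coverage of frame length x MAXFL x HUGEN value
        len_x_cfg: cross len, maxfl_cfg, huge_en;
      endgroup

      function new();
        cg_tx_frame = new();
      endfunction

      // Called by the environment after each transmitted frame
      function void sample(int unsigned len_i, int unsigned maxfl_i, bit hugen_i);
        frame_len = len_i;
        maxfl     = maxfl_i;
        hugen     = hugen_i;
        cg_tx_frame.sample();
      endfunction
    endclass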
Recommendation 2-10 — Implementation-specific requirements should be specified using coverage properties.

Some functional verification requirements are dictated by the implementation chosen by the designer and are not apparent in the functional specification of the design. These functional verification requirements create corner cases that only the designer is aware of. These requirements should be specified in the RTL code itself through coverage properties.

For example, the nature of the chosen implementation may have introduced a FIFO (first-in, first-out). Even though the presence of this FIFO is not apparent in the design specification, it still must be verified. The designer should specify coverage properties to ensure that the design was verified to operate properly when the FIFO was filled and emptied.

Note that if the FIFO was never supposed to be completely filled, an assertion should be used on the FIFO Full state instead of a coverage property.
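A minimal sketch of such designer-embedded properties is shown below, assuming hypothetical signal names (clk, rst_n, fifo_cnt and the FIFO_DEPTH parameter) inside the RTL module containing the FIFO:

    // Corner cases introduced by the implementation: the FIFO has been
    // completely filled, and it has emptied after having contained data.
    cov_fifo_full:  cover property (@(posedge clk) fifo_cnt == FIFO_DEPTH);
    cov_fifo_empty: cover property (@(posedge clk) fifo_cnt != 0 ##1 fifo_cnt == 0);

    // If the micro-architecture guarantees the FIFO can never fill up, the
    // same condition becomes an assertion instead of a coverage property.
    ast_fifo_never_full: assert property (@(posedge clk) disable iff (!rst_n)
                                          fifo_cnt < FIFO_DEPTH);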
Verification Environment Requirements

The primary aim of this step is to define the requirements of the verification infrastructure necessary to produce a design with a high degree of probability of being bug-free. Based on the requirements identified in the previous step, this step identifies the resources required to implement the verification requirements.

Rule 2-11 — Design partitions to be verified independently shall be identified.

Not all design partitions are created equal. Some implement critical functionality with little system-level controllability or observability. Others implement day-to-day transformations that are immediately observable on the output streams. The physical hierarchy of a design is architected to make the specification of the individual components more natural and to ease their integration, not to balance the relative complexity of their functional verification.

Some functional verification requirements will be easier to meet by verifying a portion of the design on its own. Greater controllability and observability can be achieved on smaller designs. But smaller partitions also increase their number, which increases the number of verification environments that must be created and increases the verification requirements of their integration.

The functional controllability and observability needs for each verification requirement must be weighed against the cost brought about by creating additional partitions.

Recommendation 2-12 — Reusable verification components should be identified.

Every independently verified design presents a set of interfaces that must be driven or monitored by the verification environment. A subset of those interfaces will also be presented by system-level verification of combinations of the independently-verified designs.

Interfaces that are shared across multiple designs—whether industry-standard or custom-designed—should share the same transactors to drive or monitor them. This sharing will reduce the number of unique verification components that will need to be developed to build all of the required verification environments. To that end, it will often be beneficial for the designs themselves to use common interfaces to facilitate the implementation of the functional verification task.

Different designs that share the same physical interfaces may have different verification requirements. The verification components for those interfaces must be able to meet all of those requirements to be reusable across these environments.

The opportunity for reusable verification components may reside at higher layers of abstraction. Even if physical interfaces are different, they may transport the same protocol information. The protocol layer that is common to those interfaces should be captured in a separate, reusable verification component. For example, MII (media-independent interface) and RMII (reduced media-independent interface) are two physical interfaces for transporting Ethernet media access controller (MAC) frames. Although they have different signals, they transport the same data formats and obey the same MAC-layer protocol rules. Thus, they should share the same MAC frame descriptor and MAC-layer transactor.

Recommendation 2-13 — Models of the design at various levels of abstraction should be identified.

Many functional verification requirements do not need a model of the detailed implementation—such as an RTL or gate-level model—to be met. Furthermore, requirements that can be met with a model at a higher level of abstraction can do so with much greater performance. Some requirements, such as software validation, can be met only with difficulty if only an implementation-level model is available. Having a choice of models of the design at various levels of abstraction can greatly improve the efficiency of the verification process.

Rule 2-14 — The supported design configurations shall be identified.

Designs often support multiple configurations. The verification may focus on only a subset of the configurable parameters first and expand later on. Similarly, the design may implement some configurable aspects before others, with the verification process designed to follow a similar evolution.

The configuration selection mechanism should also be identified. Are only a handful of predetermined configurations going to be verified? Or is the configuration going to be randomly selected? If the configuration is randomly selected, what are the relevant combinations of configurable parameters that should be covered? Constraints can be used to limit a random configuration to a supported configuration. Functional coverage should be used to record which configurations were verified.

Some configurable aspects require compile-time modification of the RTL code. For example, setting a top-level parameter to define the number of ports on the device or the number of bits in a physical interface is performed at compile time. The model of the design is then elaborated to match. In these conditions, it is not possible to randomize the configuration because, by the time it is possible to invoke the randomize() method on a configuration descriptor, the model of the design has already been elaborated—and configured.

A two-pass randomization of the design configuration may have to be used. On the first pass, the configuration descriptor is randomized and an RTL parameter-setting configuration file is written. On the second pass, the RTL parameter-setting configuration file is loaded with the model of the design, and the same seed as in the first pass is reused to ensure that the same configuration descriptor—used by the verification environment—is randomly generated.
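The sketch below illustrates the idea with a hypothetical configuration descriptor for the Ethernet core; the class name, fields, constraint and output file name are illustrative only and are not taken from the specification or from later chapters.

    class eth_cfg;
      rand bit [7:0]  tx_bd_num; // number of transmit buffer descriptors
      rand bit [15:0] maxfl;     // maximum frame length (PACKETLEN.MAXFL)
      rand bit        huge_en;   // MODER.HUGEN
      rand bit        crc_en;    // MODER.CRCEN

      constraint c_supported {
        tx_bd_num <= 8'h80;
        maxfl inside {1500, 1518, 1536};
      }

      // First pass: write the compile-time parameters so the RTL can be
      // elaborated to match the randomized configuration. The second pass
      // reruns the simulation with the same seed so randomize() recreates
      // the identical descriptor for the verification environment.
      function void write_rtl_params(string fname = "cfg_params.vh");
        int fd = $fopen(fname, "w");
        $fdisplay(fd, "`define TX_BD_NUM 8'h%0h", tx_bd_num);
        $fdisplay(fd, "`define MAXFL     16'd%0d", maxfl);
        $fclose(fd);
      endfunction
    endclass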
Example 2-6. Supported Ethernet IP Core Configurations
- Variable number of TxBD (TX_BD_NUM)
  - If TX_BD_NUM == 0x00: TX disabled
  - If TX_BD_NUM == 0x80: Rx disabled
- MAXFL (PACKETLEN.15-0, MODER.14)
- Optional CRC auto-append (MODER.13)

Rule 2-15 — The response-checking mechanisms shall be identified.

The functional verification requirements describe what the design is supposed to do when operating correctly. They should also specify what kind of failures the design should not exhibit. The self-checking structure can easily determine if the right thing was performed. But it is much more difficult to determine that no wrong things were done in the process. For example, matching an observed packet with one of the expected packets is simple: If none match, the observed packet is obviously wrong. But if one matches, the question remains: Was it the packet that should have come out next?

The self-checking mechanisms can report only failures against expectations. The more obvious the symptoms of failures are, the easier it is to verify the response of the design. The self-checking mechanisms should be selected based on the anticipated failures they are designed to catch and the symptoms they present.

Some failures may be obvious at the signal level. This type of response is most efficiently verified using assertions. Some failures may be obvious as soon as an output transaction is observed. This type of response checking can be efficiently implemented using a scoreboarding mechanism. Other failures, such as performance measurements or fairness of accesses to shared resources, require statistical analysis of an output trace. This type of response checking is more efficiently implemented using an offline checking mechanism.

The available self-checking mechanisms are often dictated by the available means of predicting the expected response of the design. The existence of a reference or mathematical model may make an offline checking mechanism more cost effective than using a scoreboard.

Rule 2-16 — Stimulus requirements shall be identified.

It will be necessary to apply certain stimulus sequences or put the design into certain states to meet many of the functional verification requirements. Putting the design into a certain state is performed by applying a certain stimulus sequence. Thus, the problem is reduced to the ability to create specific stimulus sequences to meet the functional verification requirements.

Traditionally, a directed sequence of stimulus was used to meet those requirements. However, the methodology presented in this book recommends the use of random generators to automatically generate stimulus to avoid writing a large number of directed testcases. Should the random stimulus fail to produce the required stimulus sequences, the generators are to be constrained to increase the probability that they will generate them in subsequent simulations.

The ability to constrain a random generator to create the required stimulus sequences does not happen by accident. Generators must be designed based on the stimulus sequences required by the verification requirements. They must offer the mechanisms to express constraints that will ultimately force the designed stimulus patterns to be generated as part of a longer random stimulus stream. If it remains unlikely that the required stimulus sequence will be randomly generated, then a directed stimulus sequence has to be used.

Example 2-7. Ethernet IP Core Stimulus Requirements
- Random configuration
  - Maximum packet length (PACKETLEN.MAXFL, MODER.HUGEN)
  - Appending of CRC (MODER.CRCEN)
  - Number of transmit buffer descriptors (TX_BD_NUM)
- Transmitted packets
  - With and without CRC (TxBD.CRC)
  - Good & bad CRC
    - Bad CRC only if MODER.CRCEN or TxBD.CRC == 0
  - Various lengths (tied to maximum packet length)
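A generator built around these requirements could randomize a frame descriptor along the lines sketched below. The class and field names (eth_frame, add_crc, crc_ok) and the exact constraint expressions are invented for illustration and do not come from the book:

    class eth_frame;
      eth_cfg cfg;                // configuration descriptor (see Example 2-6)
      rand int unsigned len;      // frame length in bytes
      rand bit          add_crc;  // TxBD.CRC: ask the MAC to append the CRC
      rand bit          crc_ok;   // generate a good or a corrupted CRC

      constraint c_len {
        // exercise lengths around the configured maximum (see Example 2-5)
        len inside {[64 : cfg.maxfl + 4]};
      }
      constraint c_crc {
        // a corrupted CRC is meaningful only if the MAC does not append its own
        (cfg.crc_en || add_crc) -> (crc_ok == 1);
      }
    endclass

    // A specific corner case is obtained by further constraining the same
    // descriptor rather than writing a separate directed testcase:
    //   eth_frame fr = new(); fr.cfg = cfg;
    //   void'(fr.randomize() with { len == cfg.maxfl + 1; });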
Rule 2-17 — Trivial tests shall be identified.

Trivial tests are those that are run on the design first. Their objective is not to meet functional verification requirements, but to ascertain that the basic functionality of the design operates correctly before performing more exhaustive tests. Performing a write cycle followed by a read cycle, injecting a single packet or executing a series of null opcodes are examples of trivial tests.

Trivial tests need not be directed. Just as they can be used to determine the basic liveliness of the design, they can be used to determine the basic correctness of the verification environment. A trivial test may be a simple constrained-random test constrained to run for only a very short stimulus sequence. For example, a trivial test could be constrained to last for only two cycles: the first one being a write cycle, the second one a read cycle and both cycles constrained to use the same address.

The verification environment must be designed to support the creation of the trivial tests.

Example 2-8. Trivial Tests for Ethernet IP Core
- Rx disabled, transmit 1 packet
- Tx disabled, receive 1 packet
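The write-then-read trivial test mentioned above could be expressed as constraints on an otherwise ordinary random sequence, as sketched below with hypothetical transaction and sequence classes:

    class bus_cycle;
      typedef enum {WRITE, READ} kind_e;
      rand kind_e     kind;
      rand bit [31:0] addr;
      rand bit [31:0] data;
    endclass

    // Trivial test: the normal random sequence, constrained to two cycles,
    // a write followed by a read of the same address.
    class trivial_seq;
      rand bus_cycle cycles[2];

      constraint c_trivial {
        cycles[0].kind == bus_cycle::WRITE;
        cycles[1].kind == bus_cycle::READ;
        cycles[1].addr == cycles[0].addr;
      }

      function new();
        foreach (cycles[i]) cycles[i] = new();
      endfunction
    endclass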
Recommendation 2-18 — Error injection mechanisms should be defined.

Designs may be able to sustain certain types of errors or stimulus exceptions. These exceptions must be identified, as well as the mechanism for injecting them. Then, it is necessary to specify how correctness of the response to the exception will be determined.

Based on the verification requirements, it is necessary to identify the error injection mechanisms required in the verification environment. For low-level designs, it may be possible to inject errors at the same time as stimulus is being generated. For more complex systems with layered protocols, low-level errors are often impossible to accurately describe from the top of the protocol stack. Furthermore, there may be errors that have to be injected independent of the presence of high-level stimulus. Exceptions may need to be synchronized with other stimulus—such as interrupt requests synchronized with various stages in the decode pipeline. Synchronizing an exception stream with a data stream may require using a multi-stream generator (see "Multi-Stream Generation" on page 236).

Example 2-9. Ethernet IP Core Error Injections
- Collisions at various symbol offsets (early, latest early, earliest late, late)
- Collide on all transmit attempts
- Bad CRC on Rx frame
- Bad DA on Rx frame when MODER.PRO == 0

Recommendation 2-19 — Data sampling interfaces for functional coverage should be identified.

A functional coverage model should be used to track the progress toward the fulfilment of the functional verification requirements. The functional coverage model will monitor the verification environment and the design to ensure that each requirement has been met.

This monitoring requires that a signature is used in the design or verification environment to indicate that a particular verification requirement has been met. By observing the signature, the functional coverage model can record that the requirement corresponding to the signature's coverage point has been met.

For the coverage model to be able to observe those signatures, data sampling mechanisms must be put in place. These mechanisms let the relevant data be observed by the functional coverage model. Data that is in different sampling domains or at the wrong level of abstraction will require a significant amount of processing before it can be considered suitable for the functional coverage model. Planning for a suitable data sampling interface up front will simplify the implementation of the functional coverage model.

Example 2-10. Coverage Sampling Interfaces for Ethernet IP Core
- DUT configuration descriptor
- Tx frame after writing to TxBD
- TxBD configuration descriptor after Tx frame written

Recommendation 2-20 — Tests to be ported across environments should be identified.

To verify the correctness of the design integration or its integrity at different implementation stages, it may be necessary to port some tests to different environments. For example, some block-level tests may need to be ported to the system-level environment. Another example is the ability to take a simulation test and reuse it in a lab setup on the actual device. Yet another example is the ability to reproduce a problem identified in the lab in the simulation environment.

Tests that must be portable will likely have to be restricted to common features available in the different environments they will execute in. It is unlikely that these tests will be able to arbitrarily stress the capability of the design as much as a particular environment allows them to. Due to the different functional verification requirements met by the different verification environments, it is not realistic to expect to be able to port all tests from one environment to another.

Verification Implementation Plan

The primary aim of implementing the functional verification plan is to ensure that the implementation culminates in exhaustive coverage of the design and its functionality within the project time scales. The implementation is based on the requirements of the verification environments as outlined above.

This implementation plan should be started as early as possible in the project life cycle. Ideally, it should be completed before the start of the RTL-coding phase of the project and before any verification testbench code is written. This step is necessary to produce a design with a high degree of probability of being bug-free.
Recommendation 2-21 — Functional coverage groups and coverage properties should be identified.

The functional verification requirements should be translated into a functional coverage model to automatically track the progress of the verification project. A functional coverage model is implemented using a combination of covergroups and cover properties. Which one is used depends on the nature of the available data sampling interface and the complexity of the coverage points.

Coverage properties are better at sampling signal-level data in the design based on a clock signal. But they can implement only a single coverage point. Coverage groups are better at sampling high-level data in the verification environment and can implement multiple coverage points that use the same sampling interface. Chapter 6 provides more guidelines for implementing functional coverage models.

Recommendation 2-22 — Configuration and simulation management mechanisms should be defined.

It must be easy—not just possible—to reproduce a simulation. It is necessary that there be a simple mechanism for ensuring that the exact model configuration used in a simulation be known. Which version of what source files, tools and libraries were used? Similarly, it must be simple to record and reissue the exact simulation command that was previously used—especially using the same random seed.

Command-line options cannot be source-controlled. Therefore, a script should be used to set detailed command-line options based on broad, high-level simulation options.

Recommendation 2-23 — Constrainable dimensions in random generators should be defined.

The random generators must be able to be constrained to generate the required stimulus sequences. Constraining the generators may involve defining sequences of data. But it also may involve coordinating multiple independent data streams onto a single physical channel or parallel channels, each stream itself made up of data sequence patterns. Constraints, state variables and synchronization events may need to be shared by multiple generator instances.

Controllability of the randomization process requires the careful design of the data and transaction descriptor structures that are randomized and the generators that randomize them. The ability to constrain the generated data to create detailed stimulus scenarios tends to require more complex randomization processes. It may be more efficient to leave a few complex stimulus sequences as directed stimulus, and leave the bulk of the data generation to a simple randomization process.

Recommendation 2-24 — Stimulus sequences unlikely to be randomly generated should be identified.

There are some stimulus requirements that will remain unlikely to be automatically generated. Rather than complicate the random generators to create them or have to specify an overly complicated set of constraints to coerce the generators, it may be easier to specify them as directed stimulus sequences.

Directed stimulus sequences need not be for the entire duration of a simulation. They may be randomly injected as part of a random stimulus stream.
Recommendation 2-25 — End-of-test conditions should be identified.

When a test is considered done is an apparently simple but important question. Running for a constant amount of time or data may hide a problem located in a deeper state, or one hidden by data being constantly pushed out by the forward pressure created by subsequent stimulus. Additional termination conditions could be defined: once a certain number of error messages have been reported, once a certain level of coverage has been hit, a watchdog timer has expired or the design going idle—whatever "idle" is must also be defined. The end-of-test condition could be created by only one condition or require a combination of the termination conditions.

Even when the end-of-test condition has been identified, how the simulation ends gracefully should be specified. There may be data that must be drained from the design or statistics registers to be read. The contents of memories may have to be dumped. The testbench may have to wait until the design becomes idle.

Example 2-11. End-of-Test Conditions for Ethernet IP Core
- After N frames have been transmitted
- After M frames have been received
- If TxEN and interrupt not asserted for more than X cycles
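One possible way to combine such conditions in the test harness is sketched below; the counters, thresholds and port names (n_tx_frames, 10_000 idle cycles, txen, irq) are invented for illustration and would be maintained by the monitors:

    program end_of_test_ctl(input bit clk, input bit txen, input bit irq);
      int n_tx_frames;  // incremented by the transmit monitor
      int n_rx_frames;  // incremented by the receive monitor
      int idle_cycles;

      initial begin
        fork
          wait (n_tx_frames >= 1000);        // after N frames transmitted
          wait (n_rx_frames >= 1000);        // after M frames received
          forever @(posedge clk) begin       // TxEN and interrupt idle too long
            if (txen || irq) idle_cycles = 0;
            else if (++idle_cycles > 10_000) break;
          end
        join_any
        disable fork;
        // Graceful end of test: drain the design, read statistics, then stop.
        $display("End-of-test condition reached at %0t", $time);
        $finish;
      end
    endprogram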
RESPONSE CHECKING

Rule 2-2 requires the enumeration of all errors that must be detected by the verification environment. These detection mechanisms require a strategy for predicting the expected response and comparing the observed response against those expectations. This section focuses on these strategies. Guidelines for the implementation of the self-checking structure can be found in the section titled "Self-Checking Structures" on page 246.

It is difficult to describe a methodology for checking the response of a design because that response is unique to that design. Response checking can be described only in general terms. A broad outline of various self-checking structures can be specified. The availability in the SystemVerilog language of high-level data structures greatly facilitates the implementation of response checking. But it is not possible to describe the details of its overall implementation without targeting it to a specific design.

With traditional directed testcases, because the stimulus and functionality of the design are known, the expected response may be intellectually derived up front and hard-coded as part of the directed test. With random stimulus, although the functionality is known, the applied stimulus is not. The expected response must be computed based on the configuration and functionality of the design. The observed response is then compared against the computed response for correctness.

It is important to realize that the response-checking structure in a verification environment can only identify problems. Correctness is inferred from the failure to find inconsistencies. If the response-checking structure does not explicitly check for a particular symptom of failure, it will remain undetected. The functional verification requirements must include a definition of all possible symptoms of failure.

Recommendation 2-26 — Response checking should be separate from stimulus.

In directed tests, the response can be hardcoded in parallel with the stimulus. Thus, it can be implemented in a more ad-hoc or distributed fashion, in the same program that implements the stimulus. However, it is better to treat the response checking as an independent function.

Response checking that is hardcoded with the stimulus tends to focus on the symptoms of failure of the feature targeted by the directed test. This coding style causes functionality to be repeatedly checked in tests that focus on the same feature. But a test targeting a specific feature may happen to exercise an unrelated fault. If the response checking is concerned only with the feature being verified, then the failure will not be detected. This style of response checking may allow errors to go unnoticed if they occur in another functional aspect of the design.

By separating the checking from the stimulus, all symptoms of failures can be verified at all times.

Embedded Monitors

Response is generally understood as being observed on the external outputs of the design under verification. However, limiting response to external interfaces only may make it difficult to identify some symptoms of failure. If the verification environment does not have a sufficient degree of observability over the design, much effort may be spent trying to determine the correctness of an internal design structure because it is too far removed from the external interfaces. This problem is particularly evident in systems where internal buses or functional units may not be directly observable from the outside.

Suggestion 2-27 — Design components can be replaced by transactors.

Transactors need not be limited to interfacing with external interfaces. Like the embedded generators described in the section titled "Embedded Stimulus" on page 226, monitors can mirror or even replace an internal design unit and provide observability over that unit's interfaces. The transaction-level interface of the embedded monitor remains externally accessible, making the mirrored or replaced unit interfaces logically external.

For example, an embedded RAM block could be replaced with a reactive transactor (slave), as illustrated in Figure 2-1. Correctness could be determined, not by dumping or replicating the entire content of the memory, but by observing and fulfilling—potentially injecting errors—each memory access in real time.

Figure 2-1. Replacing a Slave Unit with a Reactive Transactor

This approach is implementation-dependent. As recommended by Recommendation 2-32, assertions should be used to verify the response based on internal signals. However, assertions may not provide the high-level capabilities required to check the response. Assertions are also unable to provide stimulus, and thus cannot be used to replace a reactive transactor.

Assertions

The term assertion means a statement that is true. From a verification perspective, an assertion is a statement of the expected behavior. Any detected discrepancy in the observed behavior results in an error. Based on that definition, the entire testbench is just one big assertion: It is a statement of the expected behavior of the entire design. But in design verification—and in this book—assertion refers to a property expressed using a temporal expression.

Using assertions to detect and debug functional errors has proven to be very effective as the errors are reported near—both in space and time—the ultimate cause of the functional defect. But despite their effectiveness, assertions are limited to the types of properties that can be expressed using clocked temporal expressions. Some statements about the expected behavior of the design must still be expressed—or are easier to express—using behavioral code. Assertions and behavioral checks can be combined to work cooperatively. For example, a protocol checker can use assertions to describe the lower-level signaling protocol and use behavioral code to describe the higher-level, transaction-oriented properties.
Assertions work well for verifying local signal relationships. They can efficiently detect errors in handshaking, state transitions and physical-level protocol rules. They can also easily identify unexpected or spurious conditions. On the other hand, assertions are not well suited for detecting data transformation, computation and ordering errors. For example, assertions have difficulties verifying that all valid packets are routed to the appropriate output ports according to their respective priorities.

The following guidelines will help identify which response-checking requirements should be implemented using assertions or behaviorally in the verification environment. More details on using assertions can be found in Chapters 3 and 7. Typical response-checking structures in verification environments are described in "Scoreboarding" on page 38, "Reference Model" on page 39 and "Offline Checking" on page 40.

Recommendation 2-28 — Assertions should be limited to verifying physical-level assumptions and responses.

Temporal expressions are best suited to cycle-based physical-level relationships. Although temporal expressions can be stated in terms of events representing high-level protocol events, the absence of a clock reference makes them more difficult to state correctly. Furthermore, the information may already be readily available in a transactor, making the implementation of a behavioral check often simpler.

Rule 2-29 — A response-checking requirement that must be met on different levels of abstraction of the design shall be implemented using procedural code in the verification environment.

Assertions are highly dependent on RTL or gate-level models. They cannot be easily ported to transaction-level models. Any response checking that must be performed at various abstraction levels of the design is better implemented in the testbench.

Recommendation 2-30 — Response checking involving data storage, computations, transformations or ordering should be implemented in the verification environment.

Temporal expressions are not very good at expressing large data storage requirements (such as ordering checks) and complex computations (such as cyclic redundancy checks). Data transformation usually involves complex computations and some form of data storage. Data ordering checks involve multiple-dimension queuing models. These types of checks are usually better implemented in the verification environment.

Rule 2-31 — Responses to be checked using formal analysis shall be implemented using assertions.

Formal tools cannot reason on arbitrary procedural code. They usually understand RTL coding style and temporal expressions. Some design structures are best verified using formal tools. Their expected response must be specified using assertions.

Recommendation 2-32 — Response checking involving signals internal to the design should be implemented using assertions.

Some symptoms of failures are not obvious at the boundary of the design. The failure is better detected on the internal structure implementing the desired functionality. The expressiveness of the temporal expressions usually makes such white-box verification easier to implement using assertions. Furthermore, being functionality internal to the design, it is unlikely that the equivalent check already exists in—or could be leveraged from—procedural code in a transactor. The response may also be interesting to verify using formal tools, as described in Chapter 7.
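For instance, a white-box check on an internal arbiter might be written directly in the RTL module; the signal names (clk, rst_n, req, gnt) are hypothetical:

    // Internal-signal assertions: a grant is never given without a request,
    // and at most one grant is active at any time.
    ast_gnt_implies_req: assert property (@(posedge clk) disable iff (!rst_n)
                                          |gnt |-> ((gnt & req) == gnt));
    ast_gnt_onehot:      assert property (@(posedge clk) disable iff (!rst_n)
                                          $onehot0(gnt));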
Recommendation 2-33 — Assumptions made or required by the implementation on input signals should be checked using assertions.

Often, the implementation depends on some assumed or required behavior of its input signals. Any violation of these assumptions or requirements will likely cause the implementation to misbehave. Tracing the functional failure of a design, which is observed on its outputs, to invalid assumptions or behavior on its input signals is time-consuming. These assertions capture the designer's assumptions and knowledge. They can be reused whenever the design is reused and detect errors should it be reused in a context where an assumption or requirement no longer holds. The assumptions and requirements may also be required to successfully verify the implementation using formal tools, as described in Chapter 7.

Recommendation 2-34 — Implementation-based assertions should be specified in-line with the design by implementation engineers.

Assertions that are implied or assumed by a particular implementation of the design are specific to that implementation. Different implementations may have different implications or assumptions. Therefore, they should be embedded with the RTL code. They can be captured only by the designer as they require specific knowledge about the implementation that only the designer has.

Rule 2-35 — Assertions shall be based on a requirement of the design or the implementation.

Assertions must be used to verify the intent of the design, not the language used to implement it. They must be considered similar to comments and not simply capture the obvious behavior of the language, as illustrated in the following example:

Example 2-12. Trivial and Obvious Assertion
always @ (posedge clk) i <= i + 1;
...
a: assert property ( @(posedge clk) i == $past(i) + 1);
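By contrast, an assertion that captures a requirement rather than restating the code might look like the following sketch; the signal names and the 16-cycle bound are invented for illustration:

    // Intent-based assertion: every request is acknowledged within 16 cycles.
    // This states a property of the design requirements, not of the RTL code.
    ast_req_acked: assert property (@(posedge clk) disable iff (!rst_n)
                                    req |-> ##[1:16] ack);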
Accuracy

The simplest comparison function compares the observed output of the design with the predicted output on a cycle-by-cycle basis. But this approach requires that the response be accurately predicted down to the cycle level, a complex task. If the design specification does not specify a particular end-to-end latency, why verify at a more accurate level of precision?

The layered verification environment (see the section titled "Testbench Architecture" on page 104) allows the separation of verifying the timing from the content. The verification of the content of the design output can easily be performed with complete accuracy: Either the content of the output matches the expected content or it does not. The verification of the timing of the design output can easily sustain irrelevant variations. It may occur at different times, but as long as the output eventually comes within acceptable time boundaries, no error is reported.

The timing of physical interfaces can also be verified separately from the data being transported. Transactors can verify that the relative placement of signal transitions falls within acceptable bounds, as specified by the protocol. But they do not verify that these transitions occur at specific points in absolute time.

Ordering and sequencing are other aspects of accuracy. In some classes of designs, it may be difficult to predict the exact order in which the output transactions will be observed. Similarly, it may be difficult to determine in advance which particular transactions will be dropped to maintain some higher priority functions in the design. Rather than trying to predict the exact sequence of the output, it may be sufficient to predict the relative order of independent streams of transactions or simply assume that any transaction not observed on the output was dropped. Of course, any assumption that could mask a functional defect should be independently confirmed through other means during the verification process.

Rule 2-36 — Response checking shall not be more accurate than necessary.

If it is not specified, don't check for it. Suggestion 2-41 and Suggestion 2-42 describe types of behavior that may be checked with varying degrees of accuracy.

Recommendation 2-37 — Response checking should be transaction accurate.

The response should be verified based on the correctness of the transaction data. The timing of transactions should only be verified with respect to the occurrence of other transactions, i.e., sequencing, ordering and maximum latency.

Recommendation 2-38 — Only interfaces should be checked for timing accuracy.

Transactors monitoring an interface should check that the timing of the signals on that interface is internally consistent and timing accurate. The relative position of signal transitions should fall within acceptable bounds but not be verified against an absolute time reference.

Interfaces should not be checked cycle by cycle, to allow for nonfunctional variations. For example, whether a read cycle introduces zero or several wait states is not functionally relevant—unless the function being verified is the performance of the interface.

Recommendation 2-39 — The relative timing of different interfaces should not be verified.

The relative timing of signal transitions on different interfaces should not be verified, unless some specified relationship exists between the interfaces.

Recommendation 2-40 — Cycle-level accuracy should be checked only when the specification is stated at the cycle level.

If the functional verification requirement includes a cycle-level check of the response or throughput of a design, then these requirements trump all previously stated recommendations in this section. If it is specified, it must be verified.

Suggestion 2-41 — It may not be necessary to predict the exact transaction execution order.

This suggestion is a special case of Rule 2-36. It may be sufficient to verify that some relative order is maintained. For example, check that independent streams multiplexed onto a single output stream are in order, but do not attempt to predict the exact inter-stream ordering. Another example would be out-of-order processor instructions: As long as instructions are executed in order of data dependencies, the exact execution order may not need to be predicted.

Suggestion 2-42 — It may not be necessary to predict exactly which transaction will be dropped.

This suggestion is a special case of Rule 2-36. In some applications—e.g., network routers—transactions can be dropped as part of the normal operation of the design. Is it important to predict which transaction will be dropped? Or that, if transactions are observed to have been dropped, the minimum number of transactions were dropped and for the right reasons, regardless of which ones were dropped? For example, would it be important to predict which packets were dropped to meet quality-of-service requirements? Or would it be sufficient to check that those packets that were dropped belong to the lowest quality-of-service class?

Instead of predicting which transaction will be dropped, it may be sufficient to identify that transactions were dropped and that it occurred if and only if a valid condition was present. Assertions can be used to detect the occurrence and duration of drop conditions, isolating the verification environment from implementation details.
Scoreboarding

A scoreboard is used to dynamically predict the response of the design. As illustrated in Figure 2-2, the stimulus applied to the design is concurrently provided to a transfer function. The transfer function performs all transformation operations on the stimulus to produce the form of the final response, then inserts it in a data structure. The observed response from the design is forwarded to the compare function to verify that it is an expected response.

Figure 2-2. Scoreboarding

The transfer function is a transaction-level reference model that usually operates in zero time. It may also be implemented using a reference or golden model. The data structure stores the expected response until it can be compared against the observed output. The compare function looks up the expected response in the data structure to identify if the observed response matches expectations. The data structure and compare function handle any acceptable discrepancy between the observed response and the expected output, such as ordering or latency.

The transfer function and data structure are usually configurable to match the configuration of the DUT: Different configurations may yield different responses. Transfer functions may be implemented in C. The Direct Programming Interface may be used to integrate them in the SystemVerilog environment. A directed test may implement its expected response using a test-specific transfer function that models only the necessary subset of the functionality that is exercised.

The term "scoreboard" is not well-defined in the industry. It sometimes refers to the storage data structure only, sometimes it includes the transfer function as well, and sometimes it includes the comparison function. In this book, the term scoreboard is used to refer to the entire dynamic response-checking structure.

Scoreboarding works well for verifying the end-to-end response of a design and the integrity of the output data. It can efficiently detect errors in data computations, transformation and ordering. It can also easily identify missing or spurious data. On the other hand, it is not well suited for detecting errors whose symptoms of failure are not obvious at the granularity of a single response. For example, scoreboarding has difficulty verifying the fairness of internal resource allocations and quality-of-service arbitrations. It may also be difficult to use a scoreboard to measure the overall performance of the design under verification.
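A minimal scoreboard along the lines of Figure 2-2 is sketched below. It assumes a frame descriptor class with copy(), append_crc() and compare() methods and models only the CRC-append transformation; all names are illustrative:

    class eth_scoreboard;
      eth_cfg   cfg;          // DUT configuration
      eth_frame expected[$];  // data structure holding expected responses

      // Transfer function: predict what the DUT should emit for this stimulus.
      function void sent(eth_frame fr);
        eth_frame exp = fr.copy();
        if (cfg.crc_en) exp.append_crc();
        expected.push_back(exp);
      endfunction

      // Compare function: called by the output monitor for each observed frame.
      function void received(eth_frame fr);
        eth_frame exp;
        if (expected.size() == 0) begin
          $error("Unexpected frame received");
          return;
        end
        exp = expected.pop_front();  // in-order comparison assumed here
        if (!fr.compare(exp)) $error("Frame mismatch");
      endfunction
    endclass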
Reference Model

A reference model, like a scoreboard, is used to dynamically predict the response of the design. As illustrated in Figure 2-3, the stimulus applied to the design is concurrently provided to the reference model. The output of the reference model is compared against the observed response.

Figure 2-3. Reference Model

Reference models have the same capabilities and challenges as scoreboards. Unlike a scoreboard, the comparison function works directly from the output of the reference model. The reference model must thus produce output in the same order as the design itself. However, there is no need to produce the output with the same latency or cycle accuracy: The comparison function can handle latency and cycle discrepancies between the expected and observed response. A reference model need not be pin-accurate with the design. A reference model can be at the transaction level, with a high-level transaction interface: The comparison of the observed response with the response of the reference model is performed at the transaction level, not at the cycle-by-cycle level.

Using reference models depends heavily on their availability. If they are available, they should be used. If they are not available, scoreboarding techniques will be more efficient to implement. More often than transfer functions, reference models are implemented in C. The Direct Programming Interface may be used to integrate them in the SystemVerilog environment.
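For example, a C reference model could be brought into the SystemVerilog environment through DPI import declarations along these lines; the function names and argument lists are hypothetical:

    // Hypothetical C reference model integrated through the DPI.
    import "DPI-C" function void ref_model_configure(int unsigned maxfl, bit crc_en);
    import "DPI-C" function void ref_model_stimulus (input byte unsigned frame[]);
    // Returns 0 if the observed frame matches the model's next predicted output.
    import "DPI-C" function int  ref_model_compare  (input byte unsigned frame[]);

The output monitor then simply calls ref_model_compare() for every observed frame, leaving the prediction and ordering entirely to the C model.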
Offline Checking

Offline checking is used to predict the response of the design before or after the simulation of the design. As illustrated in Figure 2-4, in a pre-simulation prediction, the offline checker produces a description of the expected response, which is dynamically verified against the observed response during simulation. The compare function can dynamically compare the predicted response to the observed response, or a utility can perform the comparison post-simulation. As illustrated in Figure 2-5, in a post-simulation prediction, the recorded stimulus and response of the design are compared against the predicted result by the offline response checker. In both cases, the response can be checked at varying degrees of detail and accuracy, from cycle-by-cycle to transaction-level with reordering.

Using pre-simulation response prediction with dynamic response checking lets a simulation report any discrepancy while the design is in or near the state where the error occurs. It also avoids needlessly running long simulations when a fatal error occurs early in the run. Pre-simulation checking cannot generate stimulus based on the dynamic state of the design—such as the insertion of wait states—and may not exercise the design under all possible conditions.

Offline checking works well for verifying the end-to-end response of a design and the integrity of the output data based on executable system-level specifications or mathematical models. It can efficiently detect errors in data computations, transformation and ordering. Offline checking can also easily identify missing or spurious data. Post-simulation offline checking is also well suited for detecting errors whose symptoms of failure are not obvious at the granularity of a single response. For example, it can verify the fairness of internal resource allocations and quality-of-service arbitrations by performing statistical analysis over the entire recorded response.

Figure 2-4. Pre-Simulation Offline Checking

Figure 2-5. Post-Simulation Offline Checking

Offline checking need not be implemented separately from the runtime simulation environment. The invocation of external programs necessary to generate the input, predict the response and compare it with the observed response can be done by the simulator at the start or the end of the simulations. Although offline checking is usually used with a reference model, it can be used with scoreboarding techniques implemented as a separate offline program.

SUMMARY

This chapter described the necessary steps required to plan a verification project. First, the requirements that must be met by the verification project are defined. These requirements create specifications for the stimulus, response-checking and functional coverage aspects of the verification environment.

Next, various strategies for computing or specifying the expected response of a design under verification were presented. The different strategies have different advantages and limitations when comparing the observed response against expected results. A particular strategy may have to be used to identify certain classes of failures, which may not be as easily identifiable in another approach.

Assertions are best for verifying implementation-specific and physical-level relationships, whereas testbenches are best for verifying transaction-level responses. Unlike testbenches, assertions are not limited to the primary DUT outputs to check its response. The complete response of a design will be verified using a combination of assertions and one or more verification environments.
