
A Template For Evaluating Portable Stimulus

September 24, 2018

by Neil Johnson, Chief Technologist

I’ve written a lot about portable stimulus over the last year, all of it theoretical. It’s been an exclusively high-level view: the potential value of portable stimulus, how it completes the verification continuum, opportunities to integrate it with verification flows, and high-value entry points.

This post is my first step beyond the theory. I’d like to supply verification engineers with a template for portable stimulus evaluations. My hope is that it’ll help people plan and justify an evaluation with their management, identify a reasonable goal and stay focused on an outcome that motivates an objective decision. The recipe I’m sharing here is one I followed in my own portable stimulus evaluation over the past week. Here are the steps I went through…


Step 1: Identify A Goal

The good news is that tool evaluations can provide fun, open-ended opportunities for engineers to acquaint themselves with new technology. The bad news is they often culminate in unscientific, shoulder-shrug recommendations based just as much on industry propaganda as on knowledge gained. I want portable stimulus evaluations to be different. I’d like to see evaluations fixed to a specific outcome that guides teams toward an objective decision on tooling.

I’ve recommended accelerated functional coverage closure as the best of three possible entry points for portable stimulus. That theme carries over to this evaluation; its purpose is to evaluate portable stimulus as a means for accelerating functional coverage closure.

Step 2: Identify A Test Subject

The reason for an evaluation is to gauge whether or not portable stimulus makes your team more productive. The simplest way to do that seems to be to measure and compare how a testbench performs with and without portable stimulus. More specifically, you can retrofit just one stimulus path in an existing testbench with portable stimulus, then compare the performance on that path with that of the original version.

Step 2 in your evaluation, therefore, is choosing a testbench subject with at least one easily accessible stimulus path. Considerations to help determine a suitable candidate:

  • Loose coupling between the stimulus generation and the rest of the testbench; relative isolation with no dependencies is ideal.
  • Stimulus is neatly modeled as a stream of transactions that originate from a single point (i.e. transactions are created entirely within a UVM sequence/sequencer).
  • Transactions don’t require a response (it’ll be easier to manage one line of communication from the portable stimulus tool to the simulator and ignore the need for a response going back).
  • The testbench includes functional coverage that directly corresponds to the generated transaction stream (we’ll be looking to compare the coverage collected from a graph-based model relative to what is collected via constrained random from the legacy testbench, so a coverage model that directly corresponds to the transaction stream is important).
  • Transactions are complicated enough to get a good feel for performance but not so complicated that we risk sinking more time than necessary. A target could be transactions with 8-15 properties and a well-contained legal constraint space.
  • Transactions should function well as atomic transactions (graph-based modeling of atomic transactions will be easier than modeling streams of transactions that depend on history and/or recursion).
  • Existing tests/regressions take 1-2 hours max, ideally less, so we can spend time evaluating, not waiting for tests to finish.

In my own evaluation, I didn’t have a suitable legacy testbench starting point so I quickly built my own: a severely stripped-down, hand-crafted UVM AXI agent. The agent included a sequencer, driver and monitor, but because I wasn’t concerned with the actual pin-level activity, I cheated and connected the monitor directly to the driver via TLM. In fact, the only “real” part of the agent was the sequence item I built, though even that I constrained to just the address channel. Beyond the agent proper, I built a functional coverage model for the address channel properties, an atomic transaction sequence and a test. With that, I could run a test for N transactions then spit out the coverage score; a benchmark against which I could compare the graph-based stimulus.
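To make that starting point a little more concrete, here’s a rough sketch of the kind of stripped-down address-channel sequence item I’m describing. Treat the class name, fields and constraints as illustration rather than the exact code from my eval; only the 5-bit awid matches the property I mention later.

```systemverilog
// Illustration only: a stripped-down AXI write address channel
// sequence item along the lines described above.
class axi_aw_item extends uvm_sequence_item;
  rand logic [4:0]  awid;     // transaction ID
  rand logic [31:0] awaddr;   // write address
  rand logic [7:0]  awlen;    // burst length (beats - 1)
  rand logic [2:0]  awsize;   // bytes per beat = 2**awsize
  rand logic [1:0]  awburst;  // FIXED/INCR/WRAP

  // Keep the legal constraint space well contained.
  constraint c_legal {
    awburst inside {2'b00, 2'b01, 2'b10}; // no reserved encoding
    awsize  <= 3'd2;                      // up to 4-byte beats
    awlen   <= 8'd15;                     // short bursts only
  }

  `uvm_object_utils_begin(axi_aw_item)
    `uvm_field_int(awid,    UVM_ALL_ON)
    `uvm_field_int(awaddr,  UVM_ALL_ON)
    `uvm_field_int(awlen,   UVM_ALL_ON)
    `uvm_field_int(awsize,  UVM_ALL_ON)
    `uvm_field_int(awburst, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "axi_aw_item");
    super.new(name);
  endfunction
endclass
```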

Schedule Guideline: anywhere from a couple minutes to a couple hours. Likely to require discussion with teammates to find the most suitable candidate.

Step 3: Refactor Legacy Code

Once you’ve chosen a testbench and a stimulus path within the testbench, you’ll need to lay some of the groundwork for opening the testbench to the portable stimulus tool. This entails creating an empty socket, figuratively not literally, into which you’ll later add the bits of code that handle the two-way communication with the new tooling.

In my AXI agent this was quite easy: it boiled down to adding a new sequence (inherited from my atomic transaction sequence, with an empty body) and the plumbing required to run that sequence. I’m sure you could do it in the transaction itself instead of the sequence if you wanted. Another alternative would be tapping into the transaction stream with an extension to the driver. Or maybe a new callback if that’s the way you roll. For a first crack, I’m not sure it matters exactly how you do it; you just need a place to inject transactions from the portable stimulus tool. For me a new sequence made the most sense; I’m sure there are 101 other ways to do it.
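For the sequence route, the “empty socket” is roughly the sketch below: a new sequence derived from the atomic transaction sequence with a deliberately empty body. The names extend the earlier illustration (axi_aw_atomic_seq is assumed to be the atomic sequence), so again, a sketch rather than my actual code.

```systemverilog
// Sketch of the "empty socket": a new sequence derived from the
// atomic transaction sequence, body left empty for now.
// axi_aw_atomic_seq is an assumed name from the illustration above.
class axi_aw_pss_seq extends axi_aw_atomic_seq;
  `uvm_object_utils(axi_aw_pss_seq)

  function new(string name = "axi_aw_pss_seq");
    super.new(name);
  endfunction

  // Deliberately empty; step 5 fills this in with the code that
  // pulls transactions from the portable stimulus tool.
  virtual task body();
  endtask
endclass
```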

Schedule Guideline: Less than a day to refactor code you wrote yourself. A couple days to refactor code written by a teammate. If refactoring takes longer than 2 days it’s a good indication you’ve chosen the wrong testbench and/or stimulus path.

Step 4: Graph A Prototype

Next is the fun part: building graphs and generating stimulus. How you do this will vary based on who you are and how you learn best. Personally, I prefer to dig and dig and dig through code and packaged examples until I figure out for myself how to run a tool; that’s the way I learn best. Of course, you could go the other way and start with some formal training. Given the point we’re at in portable stimulus tool adoption, I’m assuming vendors have built training of some kind into the price of the tooling.

However you learn best, formally or informally, what you’re looking for here is the bare minimum required to build a graph within the portable stimulus tool, run a simulation within the tool and see a result. The complexity of the graph at this point doesn’t matter; in fact, the simpler the better. I started with one property from my AXI transaction (logic [4:0] awid) and built a graph for generating all of its possible settings (i.e. 0-31).

Schedule Guideline: A couple days to a couple weeks. If you’re like me and you can let the details slide, you’ll probably be done in a couple days. If you’re detail-oriented and learn best from formal training, it’ll probably be closer to a week or more. I’d set a hard limit of 2 weeks for an eval, though. We’re shooting for good enough, no further.

Step 5: Integrate The Prototype and Testbench

You’ve laid the integration groundwork and you have a simple graph generating stimulus; now it’s time to plug the two together. This should be relatively straightforward, though I’m assuming it’s also tool dependent. You need the simulator and the portable stimulus tool both running and communicating; you need to be able to generate data with your graph, send that data from the portable stimulus tool to the injection point you created in the testbench, use it to populate a transaction field, then provide any required acknowledgement or handshake back to the portable stimulus tool.
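How the plumbing actually looks will depend entirely on your vendor, but in sequence terms it amounts to something like the sketch below: the empty body from step 3 filled in with calls into the portable stimulus tool. The DPI-C imports pss_next_aw and pss_ack are placeholders I’ve made up to stand in for whatever API your tool really provides.

```systemverilog
// Hypothetical sketch only: the hookup mechanism is tool-specific.
// pss_next_aw()/pss_ack() are placeholder DPI-C imports standing in
// for your vendor's actual API.
import "DPI-C" function int pss_next_aw(output int awid,
                                        output int awaddr,
                                        output int awlen);
import "DPI-C" function void pss_ack(input int status);

// Same sequence as in step 3, body now filled in.
class axi_aw_pss_seq extends axi_aw_atomic_seq;
  `uvm_object_utils(axi_aw_pss_seq)

  function new(string name = "axi_aw_pss_seq");
    super.new(name);
  endfunction

  virtual task body();
    int awid, awaddr, awlen;
    axi_aw_item item;

    // Pull generated data from the portable stimulus tool until it
    // reports there is nothing left to give.
    while (pss_next_aw(awid, awaddr, awlen) != 0) begin
      item = axi_aw_item::type_id::create("item");
      start_item(item);
      item.awid   = awid[4:0];
      item.awaddr = awaddr;
      item.awlen  = awlen[7:0];
      finish_item(item);
      // Handshake back so the tool knows the transaction was consumed.
      pss_ack(1);
    end
  endtask
endclass
```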

In my evaluation, it took me a little while to get all the steps of the communication flow right. I found sending two transactions, rather than just one, especially helpful for validating that my understanding of the protocol between the portable stimulus tool and the simulator didn’t include any lockups or race conditions.

Schedule Guideline: a couple days max.

Schedule Disclaimer: as you learn the protocol between the portable stimulus tool and simulator there will be opportunities during integration to get sucked into a debug spiral. Have your AE on speed dial to avoid wasting too much time debugging protocol issues.

Step 6: Refine The Prototype

With the integration effectively pipe-cleaned, the easiest part of the evaluation is filling in the rest of the model with the other properties in your transaction. My graph ended up being a visual realization of the original transaction constraints and coverage bins; I’m assuming yours will be similar.
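For reference, the constraints and coverage bins the graph mirrors are just the ordinary ones from the legacy testbench; something along the lines of the covergroup below, sampled from the monitor. As before, the names and bins are illustrative and build on the axi_aw_item sketch from earlier.

```systemverilog
// Illustrative only: the kind of legacy covergroup the graph ends up
// mirroring. Sample it from the monitor's analysis path, e.g.
// aw_cov.sample(item.awid, item.awlen, item.awburst);
covergroup aw_cg with function sample(logic [4:0] awid,
                                      logic [7:0] awlen,
                                      logic [1:0] awburst);
  cp_awid   : coverpoint awid;   // all 32 ID values
  cp_awlen  : coverpoint awlen {
    bins short_burst  = {[0:3]};
    bins medium_burst = {[4:7]};
    bins long_burst   = {[8:15]};
  }
  cp_awburst: coverpoint awburst {
    bins fixed_mode = {2'b00};
    bins incr_mode  = {2'b01};
    bins wrap_mode  = {2'b10};
  }
  id_x_len  : cross cp_awid, cp_awlen;
endgroup
```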

Schedule Guideline: half a day.

Step 7: Collect And Analyze Your Results

The moment of truth: run your graph-based stimulus through your testbench and see what happens. Compare your results with what you see from your unmodified testbench. Experiment with your model to see how different structures affect performance and results. Look for tool features and switches in the user guide that further improve the model’s effectiveness. Check back with your AE for low-hanging fruit that’ll improve your result.
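If it helps to reduce each run to a single number, the legacy test can print its coverage score after its N transactions; something like the hypothetical report below, which you can then line up against the same report from the graph-based run. The handles (num_txns, aw_cov) are assumptions tied to the earlier sketches, and the run_phase that actually starts the sequences is omitted.

```systemverilog
// Hypothetical end-of-test report for the legacy test: after running
// num_txns transactions (run_phase omitted here), print the covergroup
// score so constrained-random and graph-based runs compare directly.
class axi_aw_cov_test extends uvm_test;
  `uvm_component_utils(axi_aw_cov_test)

  int unsigned num_txns = 1000;  // N transactions per run
  aw_cg        aw_cov;           // instance of the covergroup above

  function new(string name, uvm_component parent);
    super.new(name, parent);
    aw_cov = new();
  endfunction

  virtual function void report_phase(uvm_phase phase);
    super.report_phase(phase);
    `uvm_info("COV",
              $sformatf("AW coverage after %0d transactions: %.2f%%",
                        num_txns, aw_cov.get_inst_coverage()),
              UVM_LOW)
  endfunction
endclass
```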

Schedule Guideline: a day or less.

Step 8: Packaging, Presentation And Follow-up

I’m a big believer in packaging so others can reproduce your results and see for themselves how a tool works. It doesn’t have to be formal or bulletproof, but take the time to tidy up when you’re done so your eval is easy to follow. Commit it to your code repository. Post instructions for running it on a wiki. In short, tidy up the playground to give other people a chance to get in there.

And remember, the point of all this was to help your team make a decision. The best thing you can do is demonstrate your eval to the team in real time, share your findings, give your opinions and begin the decision-making process.

Schedule Guideline: TBD.


As I type, I’ve made it up to ‘Accelerate Your Coverage’ and it’s been four days so far. I expect it’ll take a couple more to package and present. Looking back, I think this was a productive way to approach an evaluation. Traditionally, I’ve taken more of an open-ended-learn-the-tool type of approach purely out of interest. This has been decidedly different. Having a plan and purpose makes it feel more constructive and the results feel far more objective. A must, it seems, if you’re looking for an objective tooling decision.

-neil

