Portable Stimulus

Author: Neil Johnson, Principal Verification Consultant

A Template for Evaluating Portable Stimulus

A lot of papers have been written about portable stimulus over the years, most of them theoretical. It’s been an almost exclusively high-level view: the potential value of portable stimulus, how it completes the verification continuum, opportunities to integrate it with verification flows and high-value entry points.

This post is my first step beyond the theory. I’d like to supply verification engineers with a template for portable stimulus evaluations. My hope is that it’ll help people plan and justify an evaluation with their management, identify a reasonable goal and stay focused on an outcome that motivates an objective decision. The recipe I’m sharing here is one I followed in my own portable stimulus evaluation over the past week. Here are the steps I went through…

Step 1: Identify A Goal

The good news is that tool evaluations can provide fun, open-ended opportunities for engineers to acquaint themselves with new technology. The bad news is they often culminate in unscientific, shoulder-shrug recommendations based just as much on industry propaganda as on knowledge gained. I want portable stimulus evaluations to be different. I’d like to see evaluations fixed to a specific outcome that guides teams toward an objective decision on tooling.

I’ve recommended accelerated functional coverage closure as the best of three possible entry points for portable stimulus. That theme carries over to this evaluation; its purpose is to evaluate portable stimulus as a means for accelerating functional coverage closure.

Step 2: Identify A Test Subject

The reason for an evaluation is to gauge whether or not portable stimulus makes your team more productive. The simplest way to do that seems to be measuring and comparing how a testbench performs with and without portable stimulus. More specifically, you can retrofit just one stimulus path in an existing testbench with portable stimulus, then compare the performance of that path with that of the original version.

Step 2 in your evaluation, therefore, is choosing a testbench subject with at least one easily accessible stimulus path. Considerations to help determine a suitable candidate:

  • Loose coupling between the stimulus generation and the rest of the testbench; relative isolation with no dependencies is ideal.
  • Stimulus is neatly modeled as a stream of transactions that originate from a single point (i.e. transactions are created entirely within a UVM sequence/sequencer).
  • Transactions don’t require a response (it’ll be easier to manage one line of communication from the portable stimulus tool to the simulator and ignore the need for a response going back).
  • The testbench includes functional coverage that directly corresponds to the generated transaction stream (we’ll be comparing the coverage collected from a graph-based model against what’s collected via constrained random in the legacy testbench, so a coverage model that directly corresponds to the transaction stream is important).
  • Transactions are complicated enough to give a good feel for performance but not so complicated that we risk sinking more time than necessary. A good target is transactions with 8-15 properties and a well-contained legal constraint space.
  • Transactions should function well as atomic transactions (graph-based modeling of atomic transactions will be easier than modeling streams of transactions that depend on history and/or recursion).
  • Existing tests/regressions take 1-2 hours max, ideally less, so we can spend time evaluating, not waiting for tests to finish.

In my own evaluation, I didn’t have a suitable legacy testbench starting point so I quickly built my own: a severely stripped-down, hand-crafted UVM AXI agent. The agent included a sequencer, driver and monitor, but because I wasn’t concerned with the actual pin-level activity I cheated and connected the monitor directly to the driver via TLM. In fact, the only “real” part of the agent was the sequence item I built, and even that I constrained to just the address channel. Beyond the agent proper, I built a functional coverage model for the address channel properties, an atomic transaction sequence and a test. With that, I could run a test for N transactions then spit out the coverage score: a benchmark against which I could compare the graph-based stimulus.
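
To make that starting point concrete, here’s roughly what it looked like. Treat this as a minimal sketch rather than my actual code: the class names, field widths and constraints are assumptions, and the item carries fewer properties than the 8-15 suggested above to keep the example short.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Stripped-down AXI address-channel sequence item (sketch; names/widths
    // are assumptions).
    class axi_aw_item extends uvm_sequence_item;
      rand logic [4:0]  awid;
      rand logic [31:0] awaddr;
      rand logic [7:0]  awlen;    // beats per burst - 1
      rand logic [2:0]  awsize;   // bytes per beat = 2**awsize
      rand logic [1:0]  awburst;  // 0:FIXED 1:INCR 2:WRAP

      // Well-contained legal constraint space, per the considerations above.
      constraint c_legal {
        awburst inside {[0:2]};                          // 2'b11 reserved in AXI
        awsize  <= 3'd2;                                 // cap at 4-byte beats
        awburst == 2'd2 -> awlen inside {1, 3, 7, 15};   // WRAP: 2/4/8/16 beats
      }

      `uvm_object_utils(axi_aw_item)

      function new(string name = "axi_aw_item");
        super.new(name);
      endfunction
    endclass

    // Atomic transaction sequence: N independent, fully randomized items.
    class axi_atomic_seq extends uvm_sequence #(axi_aw_item);
      `uvm_object_utils(axi_atomic_seq)
      int unsigned num_items = 1000;

      function new(string name = "axi_atomic_seq");
        super.new(name);
      endfunction

      virtual task body();
        repeat (num_items) begin
          req = axi_aw_item::type_id::create("req");
          start_item(req);
          if (!req.randomize()) `uvm_error("RAND", "randomization failed")
          finish_item(req);
        end
      endtask
    endclass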

Step 3: Refactor Legacy Code

Once you’ve chosen a testbench and a stimulus path within it, you’ll need to lay some of the groundwork for opening the testbench to the portable stimulus tool. This entails creating an empty socket (figuratively, not literally) into which you’ll later add the bits of code that handle communication to and from the new tooling.

In my AXI agent this was quite easy because it boiled down to adding a new sequence (inherited from my atomic transaction sequence, with the body empty) and the required plumbing for running that sequence. I’m sure you could do it in the transaction itself instead of the sequence if you wanted. Another alternative would be tapping into the transaction stream with an extension to the driver. Or maybe a new callback, if that’s the way you roll. For a first crack I’m not sure it matters exactly how you do it; you just need a place to inject transactions from the portable stimulus tool. For me a new sequence made the most sense; I’m sure there are 101 other ways to do it.
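
In code, the socket can be as simple as the sketch below, assuming the atomic sequence from the previous step (the names here are mine, not anything standard):

    // The "empty socket": inherits the atomic sequence, stubs out body().
    // Step 5 fills this in with the receive/populate/acknowledge loop.
    class axi_pss_inject_seq extends axi_atomic_seq;
      `uvm_object_utils(axi_pss_inject_seq)

      function new(string name = "axi_pss_inject_seq");
        super.new(name);
      endfunction

      virtual task body();
        // intentionally empty for now
      endtask
    endclass

    // The "required plumbing" amounts to selecting this sequence from the
    // test, e.g. (hypothetical hierarchy path):
    //   uvm_config_db#(uvm_object_wrapper)::set(this,
    //     "env.agent.sqr.main_phase", "default_sequence",
    //     axi_pss_inject_seq::type_id::get());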

Step 4: Graph A Prototype

Next is the fun part: building graphs and generating stimulus. How you do this will vary based on who you are and how you learn best. Personally, I prefer to dig and dig and dig through code and packaged examples until I figure out for myself how to run a tool; that’s the way I learn best. Of course, you could go the other way and start with some formal training. Given the point we’re at in portable stimulus tool adoption, I’m assuming vendors have built training of some kind into the price of the tooling.

However you best learn, formally or informally, what you’re looking for here is the bare minimum required to build a graph within the portable stimulus tool, run a simulation with it and see a result. The complexity of the graph at this point doesn’t matter; in fact, the simpler the better. I started with one property from my AXI transaction (logic [4:0] awid) and built a graph for generating all of its possible settings (i.e. 0-31).
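
Even this one-field example hints at why a graph can accelerate coverage closure. A graph walk covers all 32 awid values in exactly 32 transactions, while unbiased random selection needs roughly 32 * (1 + 1/2 + … + 1/32), or about 130 draws on average (the classic coupon collector result). Here’s a quick standalone sanity check of that, independent of any tool:

    // Standalone sketch: count how many unconstrained random draws it takes
    // to hit every value of a 5-bit field once, versus the 32 a graph walk
    // needs.
    module awid_coverage_cost;
      initial begin
        int unsigned draws = 0, hits = 0;
        bit seen [32];
        logic [4:0] awid;
        while (hits < 32) begin
          awid = $urandom_range(31);
          if (!seen[awid]) begin
            seen[awid] = 1;
            hits++;
          end
          draws++;
        end
        $display("random draws to cover all 32 awid values: %0d (graph: 32)",
                 draws);
      end
    endmodule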

Step 5: Integrate the Prototype and Testbench

You’ve laid the integration groundwork and you have a simple graph generating stimulus; now it’s time to plug the two together. This should be relatively straightforward, though I assume it’s also tool dependent. You need the simulation and portable stimulus tools both running and communicating; you need to be able to generate data with your graph, send that data from the portable stimulus tool to the injection point you created in the testbench, use it to populate a transaction field, then provide any required acknowledgement or handshake back to the portable stimulus tool.

In my evaluation, it took me a little while to get all the steps of the communication flow right. I found sending two transactions, rather than just one, especially helpful for validating that the protocol between the portable stimulus tool and the simulator was free of lockups and race conditions.
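
For illustration, here’s the Step 3 stub filled in the way I’d sketch it over DPI-C. To be clear, the psg_* functions are invented placeholder names standing in for whatever bridge your vendor provides, not any real API:

    // Hypothetical DPI-C bridge to the portable stimulus tool; placeholders
    // for vendor-specific integration code, not a real API.
    import "DPI-C" function int  psg_connect(string cfg);                 // 0 = ok
    import "DPI-C" function int  psg_next_item(output int unsigned awid); // 0 = item available
    import "DPI-C" function void psg_ack(int unsigned status);            // handshake back

    // The injection sequence from Step 3, with body() filled in.
    class axi_pss_inject_seq extends axi_atomic_seq;
      `uvm_object_utils(axi_pss_inject_seq)

      function new(string name = "axi_pss_inject_seq");
        super.new(name);
      endfunction

      virtual task body();
        int unsigned awid_val;
        if (psg_connect("graph.cfg") != 0)
          `uvm_fatal("PSG", "could not connect to portable stimulus tool")
        while (psg_next_item(awid_val) == 0) begin
          req = axi_aw_item::type_id::create("req");
          start_item(req);
          // populate the graph-generated field; everything else stays random
          if (!req.randomize() with { awid == local::awid_val; })
            `uvm_error("RAND", "randomization failed")
          finish_item(req);
          psg_ack(0);  // acknowledge so the tool can generate the next item
        end
      endtask
    endclass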

Step 6: Refine the Prototype

With the integration effectively pipe-cleaned, the easiest part of the evaluation is filling in the rest of the model with the remaining properties of your transaction. My graph ended up being a visual realization of the original transaction constraints and coverage bins; I’m assuming yours will be similar.
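
As a sketch of what a visual realization of the constraints and coverage bins means on the testbench side, here’s the kind of coverage model the finished graph mirrors, with bins lining up one-for-one with nodes in the graph (again, names and bins are assumptions carried over from the earlier item sketch):

    // Coverage subscriber whose bins the finished graph mirrors node-for-node.
    class axi_aw_coverage extends uvm_subscriber #(axi_aw_item);
      `uvm_component_utils(axi_aw_coverage)

      axi_aw_item item;

      covergroup aw_cg;
        cp_awid:    coverpoint item.awid    { bins ids[]   = {[0:31]}; }
        cp_awlen:   coverpoint item.awlen   { bins single  = {0};
                                              bins short_b = {[1:15]};
                                              bins long_b  = {[16:255]}; }
        cp_awburst: coverpoint item.awburst { bins fixed   = {0};
                                              bins incr    = {1};
                                              bins wrap    = {2}; }
        // under c_legal, wrap only ever pairs with short_b
        x_len_burst: cross cp_awlen, cp_awburst;
      endgroup

      function new(string name, uvm_component parent);
        super.new(name, parent);
        aw_cg = new();
      endfunction

      virtual function void write(axi_aw_item t);
        item = t;
        aw_cg.sample();
      endfunction
    endclass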

Step 7: Collect And Analyze Your Results

The moment of truth: run your graph-based stimulus through your testbench and see what happens. Compare your results with what you see from your unmodified testbench. Experiment with your model to see how different structures affect performance and results. Look for tool features and switches in the user guide that further accelerate the model’s effectiveness. Check back with your application engineer (AE) for low-hanging fruit that’ll improve your results.
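
One small addition that helps here: print the coverage score at end of test so identical transaction budgets can be compared side by side across the two flows. A sketch, extending the subscriber from Step 6:

    // Report the covergroup score so graph-based and constrained-random runs
    // can be compared at the same transaction count.
    class axi_aw_coverage_report extends axi_aw_coverage;
      `uvm_component_utils(axi_aw_coverage_report)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      virtual function void report_phase(uvm_phase phase);
        `uvm_info("COV",
                  $sformatf("aw_cg score: %0.1f%%", aw_cg.get_coverage()),
                  UVM_LOW)
      endfunction
    endclass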

Step 8: Packaging, Presentation and Follow-up

I’m a big believer in packaging so others can reproduce your results and see for themselves how a tool works. It doesn’t have to be formal or bulletproof, but take the time to tidy up when you’re done so your eval is easy to follow. Commit it to your code repository. Post instructions for running it on a wiki. In short, tidy up the playground to give other people a chance to get in there.

And remember, the point of all this is to help your team make a decision. The best thing you can do is demonstrate your eval to the team in real time, share your findings, give your opinions and begin the decision-making process.
