Portable stimulus is the new hot topic in verification, and it only got hotter with the February release of the early adopter portable stimulus standard. As news and tooling for portable stimulus filter through the verification world, I thought it would be good to help potential users set some minimum expectations as they demo tools to find what works for them. Each vendor has their bells and whistles, but the following four motivating factors are a good low-bar starting point for people new to the technology.
Replace UVM Scenarios
I appreciate the structure that UVM imposes on verification engineers when it comes to building testbench infrastructure. Once you are used to it, it’s comforting to see environments built from agents, agents built from sequencers, drivers and monitors, components communicating via TLM and coverage fed from analysis ports. In fact, if we could boil UVM down to just the infrastructure and a UVM data type, I would take it. That would give us the nuts and bolts for look-and-feel usability considerations without imposing unnecessary functional restrictions.
To be clear, a big part of what I’d let go is everything UVM sequence related. UVM sequences are a poor solution to the coordinated behaviour required to model interactions with an SoC. They can work for a single stream of transactions. More than that and suddenly we have component, configuration and event handles intertwined with multiple levels of constraints and various threads through stimulus history. It can be a convoluted mess that is difficult to apply and even harder to understand.
Which takes us to our first must-have for vendors building portable stimulus tools: the stimulus. At an absolute minimum, portable stimulus should supply teams with a user-friendly way of modelling SoC stimulus that replaces our dependency on UVM sequences.
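To make that concrete, here is a rough sketch of what replacing a chained UVM sequence might look like in the early adopter PSS language. The component and action names (dma_c, mem2mem_a, multi_xfer_a) and the fields are hypothetical, and exact syntax details vary across early adopter tool implementations; treat this as an illustration of the declarative style, not a definitive model.

```pss
// Hypothetical DMA component modelled in PSS-style syntax.
// A single randomizable transfer action replaces a flat UVM sequence item...
component dma_c {
    action mem2mem_a {
        rand bit[31:0] src_addr;
        rand bit[31:0] dst_addr;
        rand bit[31:0] size;
        constraint { src_addr != dst_addr; }
    }

    // ...and an activity replaces the hand-coded virtual sequence that
    // would otherwise coordinate multiple streams of transactions.
    action multi_xfer_a {
        activity {
            repeat (3) {
                do mem2mem_a;
            }
        }
    }
}
```

The point of the declarative activity block is that scheduling, resource management and constraint solving move into the tool, instead of being hand-wired through sequencer handles and nested `uvm_do` macros.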
Reduce Simulation Cycles
The other problem with UVM sequences, and constrained random verification in general, is that the shotgun approach to driving traffic, no matter how well designed, is terribly inefficient in terms of simulation cycles. Especially toward the end of a project, it’s a lot of repetition with little new ground covered.
Portable stimulus tools should give teams an easy opportunity to refactor their simulator usage by re-deploying repetitive simulation cycles toward the dark, unexplored corners of the state space.
Bridge The Gap Between Hardware And Software
While it’s being vigorously addressed by EDA tool providers, there’s still a fairly large technology gap between simulation and higher level emulation and prototyping platforms. Doing lots of work in simulation and then porting it forward is a non-trivial affair to say the least. Portable stimulus tools should help in this area by making it easier to port tests from one technology to another.
A caveat to bridging the gap is that I still see the onus being on users to do most of the work. Development teams will still need to develop APIs that are suitable for low level hardware tests and higher level software tests. They’ll also have to make smart decisions about what gets ported and what isn’t worth the effort. But at a minimum, the tooling should go a long way toward enabling portability between simulation and higher level emulation, FPGA or software platforms.
Minimize Time-To-Initial-Results
A long-time pet peeve of mine regarding constrained random verification is that it requires a massive infrastructure investment to reap its advertised benefits. For teams transitioning from directed testing to constrained random, the time-to-initial-results (aka time-to-first-passing-test) was lengthened dramatically – not to mention the fact that early results tend to severely underwhelm. If you measure progress relative to how much of a design has been verified, the weeks or months eaten by testbench development tend to look like a progress black hole.
Above and beyond the must-haves of replacing UVM scenarios, reducing simulation cycles and bridging the gap between hardware and software verification, I sincerely hope portable stimulus tool providers learn from the results blackout we accepted with the development of constrained random infrastructure and re-embrace the value of minimizing the time-to-initial-results. Call it rapid deployment, call it incremental deployment, call it whatever you want, so long as we don’t continue down the path of suggesting users need to wait any longer than we already do for measurable results.