Wednesday, March 4, 2015

Report from DVCon 2015

I attended DVCon over the past two days; it was my second time attending. I took some notes while I was there, which I have (quickly) summarized below. Keep in mind that these notes are preliminary and are based on the presentations alone; I have yet to read any of the papers.

"Reuse C Test and UVM Sequence Utilizing TLM2, Register Model and Interrupt Handler" - HongLiang Liu - Advanced Micro Devices, Inc.

The author had system-level tests written in C that originally targeted PCIe. The design moved to AXI, and he demonstrates how he leveraged the existing test cases for the new bus protocol. Essentially, the existing PCIe layer was modified to send AXI packets over TLM to the AXI UVM UVC.

For interrupt handling, the author proposes using the UVM resource DB: any UVM object or component can query a resource (i.e. an interrupt) and call wait_changed() on the resource handle.
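As a rough sketch of this pattern (not the author's actual code: the standard UVM resource API task is wait_modified(), which I assume is what the talk's wait_changed() corresponds to, and the "irq_scope"/"irq0" names below are invented):

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Whatever component observes the interrupt publishes it into the resource DB:
    //   uvm_resource_db#(bit)::set("irq_scope", "irq0", 1'b0, this);                    // create once
    //   void'(uvm_resource_db#(bit)::write_by_name("irq_scope", "irq0", 1'b1, this));   // on each interrupt

    class irq_wait_seq extends uvm_sequence #(uvm_sequence_item);
      `uvm_object_utils(irq_wait_seq)

      function new(string name = "irq_wait_seq");
        super.new(name);
      endfunction

      task body();
        uvm_resource#(bit) irq_rsrc;
        // Look up the interrupt resource by scope and name
        irq_rsrc = uvm_resource_db#(bit)::get_by_name("irq_scope", "irq0");
        forever begin
          irq_rsrc.wait_modified();      // block until the resource is written
          if (irq_rsrc.read())           // read the current interrupt state
            `uvm_info("IRQ", "interrupt observed, servicing", UVM_LOW)
          // ...send interrupt-service items here...
        end
      endtask
    endclass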

"Engineered SystemVerilog Constraints" - Jeremy Ridgeway - Avago Technologies

Jeremy attempts to solve the grand-challenge problem of run-time modification of SystemVerilog constraints. He breaks a SystemVerilog constraint down into CNF form and adds prefix/postfix terms (sorry, probably not the correct terminology) that can be modified at run time. For example, query the coverage database, then update the constraint terms.

Theoretically, his approach is sound, but there is overhead in implementation.  I was skeptical whether it could be used in a real project and spoke to Jeremy after the presentation.  He is indeed using the approach in his small team.

My impression (not having tried to implement it) is that the technique would restrict the user to a subset of what the SystemVerilog constraint language supports.
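To make the general idea concrete, here is a minimal sketch of run-time-tunable constraint terms; this only illustrates the flavor of the approach, not Jeremy's CNF decomposition, and all names are made up:

    class packet;
      rand int unsigned len;

      // Plain state variables act as run-time knobs on the constraint terms.
      int unsigned min_len = 1;
      int unsigned max_len = 64;
      bit          favor_small = 1;

      constraint c_len   { len inside {[min_len:max_len]}; }
      constraint c_small { favor_small -> len < 16; }
    endclass

    module tb;
      initial begin
        packet p = new();
        void'(p.randomize());
        // ...query the coverage database here, then steer future randomization:
        p.favor_small = 0;
        p.min_len     = 32;
        void'(p.randomize());
      end
    endmodule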

"Automated Performance Verification to Maximize Your ARMv8 Pulling Power" - Nicholas A. Heaton - Cadence Design Systems, Inc.

I had high hopes for this talk, but I feel that it fell short. The author demonstrates how a "fake" testbench can be generated for the purposes of performance modelling. The agents can be configured for specific workloads, and are connected to the CCI interface and (I believe) the memory controller. Performance data is collected from the system at run time, and checks are done on the collected data, e.g. does agent 1 meet its QoS requirements? It was hard to tell, but it seems that the methodology covered performance verification of only the interconnect (CCI), not the full memory subsystem. I will need to read the paper.

"Design and Verification of a Multichip Coherence Protocol" - Shahid Ikram - Cavium, Inc.
and
"Table-Based Functional Coverage Management for SOC Protocols" - Shahid Ikram - Cavium, Inc.

This was interesting. The author describes his methodology for specifying a cache coherence protocol in a table format. From his custom table format, he feeds the architectural spec of the coherence protocol to the Jasper architectural modeling tool and does model checking. He also points in the direction of a free model checker called Spin, which I will definitely check out.

From the same tables, Cavium can generate assertions and functional coverage models. This automated approach allowed the architects to iterate on their coherence protocol very quickly.
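Purely to illustrate what such generated collateral might look like (the table row, signal names, and state encoding below are hypothetical, not from the paper), a single row like "Invalid + read request => Shared" could be rendered as an SVA assertion with a matching cover property:

    module coherence_row_check (
      input logic       clk,
      input logic [1:0] state,     // hypothetical encoding: I=0, S=1, M=2
      input logic       rd_req
    );
      localparam logic [1:0] ST_I = 2'd0, ST_S = 2'd1;

      // Table row: Invalid line + read request => line becomes Shared next cycle
      property p_i_rd_to_s;
        @(posedge clk) (state == ST_I && rd_req) |=> (state == ST_S);
      endproperty

      a_i_rd_to_s: assert property (p_i_rd_to_s);
      c_i_rd_to_s: cover  property (p_i_rd_to_s);
    endmodule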

"Randomizing UVM Config DB Parameters" - Jeremy Ridgeway - Avago Technologies

This was loosely related to Jeremy's first talk and seems to be an earlier effort. He builds his own constraint language on top of SystemVerilog classes.
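For context, the generic pattern looks roughly like the sketch below (randomize a configuration object in the test, then publish it through uvm_config_db); this is not Jeremy's class-based constraint language, and all names are made up:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class bus_cfg extends uvm_object;
      rand int unsigned num_masters;
      rand bit          enable_ecc;

      constraint c_masters { num_masters inside {[1:4]}; }

      `uvm_object_utils(bus_cfg)
      function new(string name = "bus_cfg");
        super.new(name);
      endfunction
    endclass

    // In the test's build_phase (illustrative):
    //   bus_cfg cfg = bus_cfg::type_id::create("cfg");
    //   if (!cfg.randomize()) `uvm_fatal("CFG", "cfg randomization failed")
    //   uvm_config_db#(bus_cfg)::set(this, "env*", "cfg", cfg);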

"Coverage Data Exchange Is No Robbery, Or Is It?" - Samiran Laha - Mentor Graphics Corp.

This was a discussion about whether coverage data can be shared across tools. In my personal experience, the answer is no. The author states that progress is being made and that Mentor is committed to open standards.

"Standard Regression Testing Does Not Work" - Daniel Hansson - Verifyter AB

Daniel gave what amounted to a marketing presentation. PinDown has an interesting value proposition.

"Advanced Usage Models for Continuous Integration in Verification Environments" - John Dickol - Samsung Austin R&D Center

John gave a good talk here. He presented three practical examples of a continuous integration setup using Jenkins and Git.

"Mining Coverage Data for Test Set Coverage Efficiency" - Monica C. Farkash - Univ. of Texas at Austin

The talk was a summary of Monica's PhD work, presenting the results of a data mining exercise for CPU verification. Given random test generation, the associated scenario files, and functional coverage, could any conclusions be drawn? The author was able to state with confidence that reducing the number of instructions per random test would not substantially reduce the amount of coverage collected.

I thought this was a very interesting talk. It would be interesting to collect some data from our own random generators and do some visualization. It would also be helpful to begin classifying functional coverage points as easy or hard to hit; this information was factored into her analysis of the data.

"PANEL: SystemC –– Forever a Niche Player Or Rising Star of Chip Design?"

My recap: until someone starts paying real money for SystemC, it will remain a niche player.

"Coverage Driven Generation of Constrained Random Stimuli" - Raz Azaria - Cadence Design Systems, Inc.

Another unfortunate marketing presentation. The author attempts to solve a grand-challenge problem (can coverage data steer constraints?) using Specman. He did not clearly address how to correlate coverage data with a random variable.

"Navigating the Functional Coverage Black Hole: Be More Effective at Functional Coverage Modeling" - Paul Marriott - Verilab, Inc.

Paul presented practical advice for functional coverage planning, review, and modelling. I had to leave halfway through this one, but I am looking forward to reading the paper.