Start Date
5-12-2025 12:00 PM
End Date
5-12-2025 1:00 PM
Description
LLMs are currently a very hot topic in the EDA space. Most research efforts focus on building better models for Verilog code-writing tasks, along with datasets for training and fine-tuning. A critically important part of this research area is the creation of benchmarks to evaluate the performance of these models once they are built. Over time, through sustained research effort, models get significantly better and start clearing these benchmarks at very high rates, too high for the benchmarks to remain discriminative. Benchmark creation is a laborious task: it always involves devising novel sample design ideas, writing the golden code solutions for each, and writing test harnesses. Moreover, existing benchmarks typically do not include PPA (power, performance, area) analysis when establishing performance metrics. Our work aims to address these shortcomings by creating a corpus of synthetically generated samples with demonstrated novelty, together with a unified framework for PPA analysis.
SELFBENCH-V and Unified PPA Analysis Framework