Title: Fermilab AI associates discuss their research
Date and time: Thursday, May 27, 3 p.m. – 4 p.m. Central Time
Speakers: Benjamin Hawks, Diana Kafkes, Pavlo Lyalyutskyy, Fermilab
Hear three short presentations in one seminar as the Fermilab AI associates present their work.
In our research, we explored methods to shrink and optimize neural networks to increase compute efficiency without significant loss of model performance. Specifically, we introduced a straightforward combination of two existing optimization methods, Quantization Aware Training and Iterative Pruning, which we call “Quantization Aware Pruning (QAP).” In this presentation, we also highlight practical use cases for QAP through industry benchmarks from MLCommons’ TinyMLPerf V0.1 on low-power, low-cost, off-the-shelf FPGA products.
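As a rough illustration of the idea (a minimal numpy sketch, not the speakers' implementation), one QAP-style step can be written as magnitude pruning followed by simulated fixed-point quantization of the surviving weights; the function names, bit width, and sparsity level here are illustrative assumptions, and the actual training loop is elided.

```python
import numpy as np

def fake_quantize(w, n_bits=6):
    """Simulate symmetric fixed-point quantization (round to nearest level)."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w, np.ones_like(w, dtype=bool)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))  # stand-in weight matrix

# One iteration: prune, then continue training with quantized weights
# (the retraining between iterations is omitted in this sketch).
w_pruned, mask = magnitude_prune(w, sparsity=0.5)
w_qap = fake_quantize(w_pruned, n_bits=6)

print(float(np.mean(w_qap == 0)))  # pruned sparsity survives quantization
```

In the iterative scheme, this prune-quantize-retrain cycle repeats with gradually increasing sparsity, so the network adapts to both constraints at once rather than being compressed after training.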
We will discuss the offline machine learning development for an effort to precisely regulate the Gradient Magnet Power Supply (GMPS) at the Fermilab Booster accelerator complex. As part of this effort, we created and validated a digital twin of the Booster-GMPS control system as a safe environment in which to train a reinforcement learning agent. Additionally, we will cover the use of two domain adaptation techniques, Maximum Mean Discrepancy (MMD) and Domain Adversarial Neural Networks (DANNs). These techniques substantially improved model classification performance across different galaxy merger datasets, and they show promise for improving any model that must be applied to two discrepant datasets.
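For readers unfamiliar with MMD, the quantity being minimized is a kernel-based distance between the source and target feature distributions. Below is a minimal numpy sketch of the (biased) squared-MMD estimator with a Gaussian kernel; the toy "source" and "target" samples are invented for illustration and have no relation to the galaxy merger datasets.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(200, 2))       # "source domain" features
tgt_near = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(200, 2))   # shifted "target domain"

print(mmd2(src, tgt_near) < mmd2(src, tgt_far))  # True: shift raises MMD
```

In MMD-based domain adaptation, a term like `mmd2` (computed on learned features) is added to the classification loss, pushing the network toward features whose distributions match across the two domains; DANNs achieve a similar alignment adversarially, with a domain classifier in place of the kernel statistic.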
We investigate the use of Bayesian optimization for accelerator control. Using both the virtual accelerator simulation software TraceWin and a live accelerator at Fermilab, we show that Bayesian optimization converges to sufficiently good results in fewer than 100 optimization steps.
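To illustrate the Bayesian optimization loop itself (a generic numpy sketch, not the tuning setup used with TraceWin or the live machine), the code below fits a Gaussian process surrogate to past evaluations and picks the next point by expected improvement on a toy one-dimensional objective; the objective, kernel lengthscale, and step count are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.2):
    """1-D RBF kernel matrix."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(x_obs, y_obs, x_query, jitter=1e-6):
    """GP posterior mean/std with an RBF kernel and zero prior mean."""
    k = rbf(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    k_s = rbf(x_obs, x_query)
    mu = k_s.T @ np.linalg.solve(k, y_obs)
    v = np.linalg.solve(k, k_s)
    var = np.clip(1.0 - np.sum(k_s * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    """EI acquisition for minimization."""
    z = (y_best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z**2) / sqrt(2 * pi)
    return (y_best - mu) * cdf + sigma * pdf

def objective(x):
    # Stand-in for a beam-quality figure of merit; true minimum at x = 0.3.
    return (x - 0.3) ** 2

rng = np.random.default_rng(2)
x_obs = rng.uniform(0, 1, 3)       # a few initial random settings
y_obs = objective(x_obs)
grid = np.linspace(0, 1, 201)      # candidate control settings

for _ in range(20):                # far fewer than 100 steps for this toy
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(y_obs.min())  # best value found; the true minimum is 0.0
```

The appeal for accelerator tuning is sample efficiency: each "evaluation" is a real (or simulated) beam measurement, so an acquisition function that balances exploring uncertain settings against exploiting promising ones keeps the number of machine interventions small.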
Zoom info can be found at https://fermipoint.fnal.gov/service/seminars/SitePages/Home.aspx