About Me

I am a final-year PhD student in the Computer Science department at the University of Illinois at Urbana-Champaign, working in the LLVM group supervised by Dr. Vikram Adve. My research interests lie at the intersection of compilers, approximate computing, systems, deep learning, and static analysis. I am particularly enthusiastic about building compiler infrastructure for approximate computing, with the goal of improving application performance and energy efficiency on resource-constrained edge systems. I am interested in developing abstractions, analyses, and techniques that enable the flexible use of approximate computing with minimal user involvement.

Research Areas

  • Compilers

  • Systems for Machine Learning

  • Approximate Computing

  • Static Analysis

News

  • [March 2021] Presented ApproxTuner at PPoPP'21 (Virtual). Video is here.

  • [January 2021] Passed Ph.D. Final Thesis Defense

  • [November 2020] ApproxTuner paper accepted at PPoPP'21

  • [October 2020] Presented ApproxTuner at LLVM-dev'20

  • [March 2020] Passed Ph.D. Preliminary Exam

  • [January 2020] Released HPVM, a retargetable compiler infrastructure for heterogeneous systems: https://gitlab.engr.illinois.edu/llvm/hpvm-release

  • [October 2019] ApproxHPVM paper presented at OOPSLA'19

  • [September 2019] ApproxHPVM paper accepted at OOPSLA'19

  • [September 2018] TRIMMER presented at ASE'18

  • [July 2018] TRIMMER accepted at ASE'18

  • [September 2017] Presented OpenMP-UVM at OpenMPCon'17

  • [January 2016] Joined the LLVM group at UIUC supervised by Dr. Vikram Adve

  • [August 2014] Joined the Computer Science PhD program at UIUC


Projects


ApproxTuner is an automatic framework for accuracy-aware optimization of tensor-based applications that requires only high-level, end-to-end quality specifications. ApproxTuner implements and manages approximations in algorithms, system software, and hardware. The key contribution in ApproxTuner is a novel three-phase approach to approximation tuning that consists of development-time, install-time, and run-time phases. To enable efficient autotuning, we present a novel technique called predictive approximation-tuning, which significantly speeds up autotuning by analytically predicting the accuracy impacts of approximations. We evaluate ApproxTuner across 10 convolutional neural networks (CNNs) and a combined CNN and image processing benchmark. For the evaluated CNNs, using only hardware-independent approximation choices, we achieve a mean speedup of 2.1x (max 2.7x) on a GPU and a mean speedup of 1.3x (max 1.9x) on a CPU, while staying within 1 percentage point of inference accuracy loss. For two different accuracy-prediction models, ApproxTuner speeds up tuning by 12.8x and 20.4x compared to conventional empirical tuning while achieving comparable benefits.
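To give a flavor of predictive tuning, here is a toy sketch (illustrative only, not ApproxTuner's actual algorithm): each hypothetical per-layer approximation knob carries a pre-profiled accuracy-loss and speedup estimate, per-layer losses are assumed to compose additively, and the tuner analytically scores every configuration instead of running it empirically.

```python
# Toy sketch of predictive accuracy-aware tuning (illustrative only;
# knob names and numbers are made up, and additive loss composition
# is a simplifying assumption, not ApproxTuner's actual model).
from itertools import product

# Hypothetical per-layer knobs: (name, predicted accuracy loss in
# percentage points, predicted per-layer speedup factor).
KNOBS = {
    "conv1": [("fp32", 0.0, 1.0), ("fp16", 0.2, 1.5), ("perforated", 0.8, 2.2)],
    "conv2": [("fp32", 0.0, 1.0), ("fp16", 0.1, 1.4), ("sampled", 0.6, 1.9)],
}

def predict(config):
    """Analytically predict whole-network loss and speedup for a config."""
    loss = sum(knob[1] for knob in config)
    speedup = 1.0
    for knob in config:
        speedup *= knob[2]
    # Geometric mean as a toy stand-in for end-to-end speedup.
    return loss, speedup ** (1 / len(config))

def tune(max_loss=1.0):
    """Return the fastest predicted config within the quality threshold."""
    best = None
    for config in product(*KNOBS.values()):
        loss, speedup = predict(config)
        if loss <= max_loss and (best is None or speedup > best[1]):
            best = (config, speedup, loss)
    return best

config, speedup, loss = tune()
print([knob[0] for knob in config], round(speedup, 2))
```

Because every candidate is scored analytically, the search touches no hardware until the final configurations are validated, which is where the large tuning-time savings come from.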

ApproxHPVM is a portable compiler IR and framework for accuracy-aware optimizations. ApproxHPVM includes analyses that automatically trade off acceptable levels of accuracy for significant gains in performance and energy. On popular deep learning workloads, ApproxHPVM shows performance improvements of up to 9x and energy reductions of up to 11x by exploiting reduced-precision compute on GPUs and special-purpose analog compute accelerators for deep learning.

TRIMMER is a software debloating infrastructure that removes application features that are unused with respect to a given user specification. TRIMMER includes sophisticated analyses for specializing programs with respect to application-specific configuration files, interprocedural constant propagation, and aggressive loop unrolling. For real-world benchmarks, we observe code size reductions of 20% on average.
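The core idea behind configuration-driven specialization can be sketched in miniature (illustrative only; TRIMMER operates on LLVM IR, not Python ASTs, and the function and flag names below are made up): once a configuration value is known to be constant, conditionals guarding the unused feature fold away and the dead code is deleted.

```python
# Toy sketch of specializing a program on a known configuration flag
# (illustrative only; `serve`, `handshake`, `plain`, and `enable_tls`
# are hypothetical names, not from TRIMMER).
import ast

SOURCE = """
def serve(request, enable_tls):
    if enable_tls:
        return handshake(request)
    return plain(request)
"""

class SpecializeFlag(ast.NodeTransformer):
    """Fold `if <flag>:` branches once the flag's value is known."""

    def __init__(self, flag, value):
        self.flag, self.value = flag, value

    def visit_If(self, node):
        self.generic_visit(node)
        if isinstance(node.test, ast.Name) and node.test.id == self.flag:
            # Replace the If with the taken branch's statements.
            return node.body if self.value else (node.orelse or [])
        return node

tree = ast.parse(SOURCE)
specialized = SpecializeFlag("enable_tls", False).visit(tree)
out = ast.unparse(specialized)  # requires Python 3.9+
print(out)
```

After specialization, the TLS path is gone entirely, shrinking the code and removing its attack surface; TRIMMER's interprocedural constant propagation applies the same principle across function boundaries.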

Contact Information:


4307 Siebel Center,

201 North Goodwin Avenue, Urbana, IL, 61801