
Suchinthaka Wanninayaka
AI/ML Researcher · UC Davis
AI/ML researcher with 9+ years of experience bridging academic research and real-world deployment. My expertise spans deep learning, generative AI, large language models (LLMs), vision-language models (VLMs), semantic communication, and federated learning. I focus on building scalable, production-grade systems that deliver measurable performance improvements.
I combine strong academic foundations with hands-on industry experience, enabling me to translate cutting-edge research into reliable, efficient solutions that create real impact in production environments.
Experience
Graduate Student Researcher
University of California, Davis · United States
Sep 2022 - Present
Software Engineer
Cut+Dry · Sri Lanka
Jun 2021 - Aug 2022
Software Engineer
Axiata Digital Labs · Sri Lanka
Mar 2020 - Jun 2021
Research Internship
Singapore University of Technology and Design (SUTD) · Singapore
Jun 2018 - Dec 2018
Education
Ph.D. in Electrical and Computer Engineering
University of California, Davis · Davis, CA, USA
Sep 2022 - Jun 2026 (Exp.)
M.S. in Electrical and Computer Engineering
University of California, Davis · Davis, CA, USA
Sep 2022 - Sep 2025
B.Sc. Engineering (Hons) in Electronic and Telecommunication
University of Moratuwa · Sri Lanka
Dec 2015 - Jan 2020
Projects
Diff-GO — Diffusion Goal-Oriented Communications
A noise-optimized diffusion framework with a Noise-Restricted Forward Diffusion (NR-FD) process for ultra-high-bandwidth-efficiency semantic communication.
Diff-GO+ — Enhanced Generative Feedback Framework
Improved semantic image generation quality through local generative feedback (LGF) with dictionary learning for effective noise codebook design.
LaMI-GO — Latent Mixture Integration Framework
Task-driven latent integration using VQ-Diffusion models and VQGAN for ultra-high bandwidth efficiency while maintaining semantic fidelity.
TACO — Task Adaptation and Context Embedding
A Vector-Quantized VAE for task-driven semantic quantization and imitation learning, supporting autonomous driving applications.
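The core lookup in a VQ-VAE bottleneck like TACO's can be illustrated in a few lines; this is a minimal, framework-free sketch (the function name and toy codebook are illustrative, not the project's actual implementation):

```python
def vector_quantize(z, codebook):
    """Map a latent vector z to its nearest codebook entry
    (squared-L2 distance) -- the discrete bottleneck step of a VQ-VAE."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: dist2(z, codebook[i]))
    return idx, codebook[idx]
```

Only the integer `idx` needs to be transmitted; the receiver recovers the latent from its own copy of the codebook, which is what makes such quantization bandwidth-efficient.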
Ignition — High-Performance Transformer Training with CUDA
From-scratch transformer training pipeline with custom CUDA kernels, mixed-precision training, gradient accumulation, and distributed data parallelism with integrated profiling and benchmarking.
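The gradient-accumulation technique Ignition lists can be sketched framework-agnostically; this is a hedged toy example with a scalar "weight" and precomputed gradients (hypothetical names, not the pipeline's real API):

```python
def train_with_accumulation(grads, accum_steps, lr=0.1, w=0.0):
    """Accumulate micro-batch gradients and take an optimizer step
    only every `accum_steps` micro-batches, emulating a larger batch
    than fits in memory at once."""
    buf = 0.0
    for i, g in enumerate(grads, start=1):
        buf += g / accum_steps      # pre-scale so the buffer holds the mean
        if i % accum_steps == 0:
            w -= lr * buf           # SGD step on the averaged gradient
            buf = 0.0               # reset the accumulator
    return w
```

In a real mixed-precision pipeline the same pattern appears with loss scaling around the backward pass and the step/reset guarded by the accumulation counter.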
Compass — RLHF and DPO Alignment Toolkit
End-to-end alignment toolkit covering the full pipeline from reinforcement learning fundamentals through reward model training, RLHF, and Direct Preference Optimization, with LLM-as-judge evaluation.
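The DPO objective at the heart of such a toolkit fits in one function; this is a minimal sketch of the standard per-pair loss (illustrative inputs, not Compass's actual code):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.
    Arguments are summed log-probs of the chosen/rejected responses
    under the trained policy (pi_*) and the frozen reference (ref_*)."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # loss = -log sigmoid(beta * margin), written in a numerically stable form
    return math.log1p(math.exp(-beta * margin))
```

The loss shrinks as the policy raises the chosen response's likelihood relative to the rejected one faster than the reference model does, which is exactly the preference signal RLHF extracts via a reward model.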
Prism — Systematic LoRA/QLoRA Fine-Tuning Framework
Structured framework for parameter-efficient fine-tuning experiments using LoRA and QLoRA, providing reproducible ablation analysis across rank, alpha, quantization, and learning rate configurations.
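The LoRA update such a framework ablates over is compact enough to sketch directly; this toy forward pass (plain lists, hypothetical names) shows the role of the rank `r` and scaling `alpha` hyperparameters:

```python
def lora_forward(x, W, A, B, alpha, r):
    """y = (W + (alpha / r) * B @ A) @ x, where the base weight W is
    frozen and only the low-rank factors A (r x d_in) and B (d_out x r)
    are trained."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))   # low-rank update path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]
```

With B zero-initialized (the usual LoRA convention), the adapted layer starts out exactly equal to the frozen base layer, so fine-tuning begins from the pretrained behavior.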
Skills & Technologies
Programming Languages
AI/ML Frameworks
Generative AI & LLMs
GenAI Ecosystem
Infrastructure & MLOps
Research Domains
Publications
Diff-GO+: An Efficient Diffusion Goal-Oriented Communication System with Local Feedback
A. Wijesinghe, S. Zhang, S. Wanninayaka, W. Wang, Z. Ding
LaMI-GO: Latent Mixture Integration for Goal-Oriented Communications Achieving High Spectrum Efficiency
A. Wijesinghe, S. Wanninayaka, W. Wang, Y. Chao, S. Zhang, Z. Ding
Diff-GOn: Enhancing Diffusion Models for Goal-Oriented Communications
S. Wanninayaka, A. Wijesinghe, W. Wang, Y. Chao, S. Zhang, Z. Ding
Task-Driven Semantic Quantization and Imitation Learning for Goal-Oriented Communications
Y. Chao, Y. Chen, W. Wang, A. Wijesinghe, S. Wanninayaka, S. Zhang, Z. Ding
TACO: Rethinking Semantic Communications with Task Adaptation and Context Embedding
A. Wijesinghe, W. Wang, S. Wanninayaka, S. Zhang, Z. Ding
Diff-GO: Diffusion Goal-Oriented Communications with Ultra-High Spectrum Efficiency
A. Wijesinghe, S. Zhang, S. Wanninayaka, W. Wang, Z. Ding
Interested in collaborating?
I'm open to research collaborations, consulting, and new opportunities in AI/ML.