Suchinthaka Wanninayaka

AI/ML Researcher · UC Davis

AI/ML researcher with 9+ years of experience bridging academic research and real-world deployment. My expertise spans deep learning, generative AI, large language models (LLMs), vision-language models (VLMs), semantic communication, and federated learning. I focus on building scalable, production-grade systems that deliver measurable performance improvements.

I combine strong academic foundations with hands-on industry experience, enabling me to translate cutting-edge research into reliable, efficient solutions that create real impact in production environments.

Generative AI · Computer Vision · Semantic Communications · Multimodal Learning · Federated Learning · Neural Network Optimization

Experience

Graduate Student Researcher

University of California, Davis · United States

Sep 2022 - Present

Software Engineer

Cut+Dry · Sri Lanka

Jun 2021 - Aug 2022

Software Engineer

Axiata Digital Labs · Sri Lanka

Mar 2020 - Jun 2021

Research Intern

Singapore University of Technology and Design (SUTD) · Singapore

Jun 2018 - Dec 2018

Education

Ph.D. in Electrical and Computer Engineering

University of California, Davis · Davis, CA, USA

Sep 2022 - Jun 2026 (Exp.)

M.S. in Electrical and Computer Engineering

University of California, Davis · Davis, CA, USA

Sep 2022 - Sep 2025

B.Sc. Engineering (Hons.) in Electronic and Telecommunication Engineering

University of Moratuwa · Sri Lanka

Dec 2015 - Jan 2020

Projects

Diff-GO: Diffusion Goal-Oriented Communications

A noise-optimized diffusion framework with a Noise-Restricted Forward Diffusion (NR-FD) process for ultra-high bandwidth efficiency in semantic communication.

PyTorch · Diffusion Models · LPIPS · HPC

Diff-GO+: Enhanced Generative Feedback Framework

Improved semantic image generation quality through local generative feedback (LGF) with dictionary learning for effective noise codebook design.

PyTorch · VQ-VAE · Diffusion Models

LaMI-GO: Latent Mixture Integration Framework

Task-driven latent integration using VQ-Diffusion models and VQGAN for ultra-high bandwidth efficiency while maintaining semantic fidelity.

PyTorch · VQ-Diffusion · VQGAN

TACO: Task Adaptation and Context Embedding

A Vector-Quantized VAE for task-driven semantic quantization and imitation learning, targeting autonomous driving applications.

PyTorch · VQ-VAE · Imitation Learning

Ignition: High-Performance Transformer Training with CUDA

A from-scratch transformer training pipeline with custom CUDA kernels, mixed-precision training, gradient accumulation, and distributed data parallelism, plus integrated profiling and benchmarking.

PyTorch · CUDA C++ · DeepSpeed · Accelerate

Compass: RLHF and DPO Alignment Toolkit

End-to-end alignment toolkit covering the full pipeline from reinforcement learning fundamentals through reward model training, RLHF, and Direct Preference Optimization, with LLM-as-judge evaluation.

PyTorch · HuggingFace TRL · Gymnasium · W&B

Prism: Systematic LoRA/QLoRA Fine-Tuning Framework

A structured framework for parameter-efficient fine-tuning experiments using LoRA and QLoRA, providing reproducible ablation analysis across rank, alpha, quantization, and learning-rate configurations.

HuggingFace · LoRA/QLoRA · bitsandbytes · W&B

Skills & Technologies

Programming Languages

Python · C++ · CUDA · Java · JavaScript · TypeScript · MATLAB

AI/ML Frameworks

PyTorch · JAX · TensorFlow · Hugging Face (Transformers, PEFT, TRL) · DeepSpeed · Accelerate · TransformerLens · Weights & Biases

Generative AI & LLMs

Diffusion Models · GANs · VAEs · LLMs · VLMs · LoRA/QLoRA Fine-tuning · RLHF · DPO · RAG · Multi-Agent Systems · Inference Optimization · LLM Evaluation

GenAI Ecosystem

LangChain · LangGraph · CrewAI · MCP · vLLM · TensorRT-LLM · ChromaDB · Qdrant · LangSmith · OpenAI API · Anthropic API

Infrastructure & MLOps

Docker · Kubernetes · Terraform · Helm · AWS · FastAPI · GitHub Actions CI/CD · Prometheus · Linux

Research Domains

Computer Vision · Mechanistic Interpretability · Federated Learning · Multimodal Learning · Semantic Communications · Distributed Training · HPC

Publications

Diff-GO+: An Efficient Diffusion Goal-Oriented Communication System with Local Feedback

A. Wijesinghe, S. Zhang, S. Wanninayaka, W. Wang, Z. Ding

IEEE Transactions on Wireless Communications · 2025 · Journal

LaMI-GO: Latent Mixture Integration for Goal-Oriented Communications Achieving High Spectrum Efficiency

A. Wijesinghe, S. Wanninayaka, W. Wang, Y. Chao, S. Zhang, Z. Ding

IEEE Transactions on Neural Networks and Learning Systems · 2025 · Journal

Diff-GOⁿ: Enhancing Diffusion Models for Goal-Oriented Communications

S. Wanninayaka, A. Wijesinghe, W. Wang, Y. Chao, S. Zhang, Z. Ding

IEEE International Conference on Communications (ICC) · 2025 · Conference

Task-Driven Semantic Quantization and Imitation Learning for Goal-Oriented Communications

Y. Chao, Y. Chen, W. Wang, A. Wijesinghe, S. Wanninayaka, S. Zhang, Z. Ding

IEEE International Conference on Communications (ICC) · 2025 · Conference

TACO: Rethinking Semantic Communications with Task Adaptation and Context Embedding

A. Wijesinghe, W. Wang, S. Wanninayaka, S. Zhang, Z. Ding

IEEE Global Communications Conference (GLOBECOM) · 2024 · Conference

Diff-GO: Diffusion Goal-Oriented Communications with Ultra-High Spectrum Efficiency

A. Wijesinghe, S. Zhang, S. Wanninayaka, W. Wang, Z. Ding

IEEE International Conference on Communications Workshops (ICC Workshops) · 2024 · Conference

Interested in collaborating?

I'm open to research collaborations, consulting, and new opportunities in AI/ML.