Projects & Publications
Predictive Maintenance Intelligence for U.S. Navy Fleets
Research Scientist · New Jersey Institute of Technology · 2023 – Present
I architected a pipeline that fuses shipyard telemetry, historical maintenance logs, and expert annotations to forecast schedule risk. Python services orchestrate feature generation and conformal calibration, Postgres enforces the data contract with upstream systems, and Tableau dashboards close the loop with human decision-makers. The platform increased readiness-planning accuracy by 40% while satisfying classified-data controls.
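To give a flavor of the calibration step, here is a minimal sketch of split conformal prediction wrapped around a point forecaster. The names (`predict_delay`, the calibration arrays) are hypothetical stand-ins for illustration, not the production service.

```python
# Minimal split conformal calibration sketch around a point forecaster.
# predict_delay and the data below are placeholders, not the fielded system.
import numpy as np

def conformal_interval(predict_delay, cal_features, cal_targets, new_features, alpha=0.1):
    """Wrap a point forecaster in distribution-free prediction intervals."""
    # Nonconformity scores: absolute residuals on a held-out calibration split.
    residuals = np.abs(cal_targets - predict_delay(cal_features))
    n = len(residuals)
    # Finite-sample corrected quantile (capped at 1.0 for small calibration sets).
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level, method="higher")
    point = predict_delay(new_features)
    return point - q, point + q  # interval intended to cover the true delay ~90% of the time

# Toy usage with a stand-in "model" so the sketch runs end to end.
rng = np.random.default_rng(0)
predict_delay = lambda X: X.sum(axis=1)                  # placeholder point forecaster
cal_X, new_X = rng.normal(size=(200, 3)), rng.normal(size=(5, 3))
cal_y = predict_delay(cal_X) + rng.normal(scale=2.0, size=200)
lo, hi = conformal_interval(predict_delay, cal_X, cal_y, new_X)
```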
The research behind the system is captured in the peer-reviewed paper “A Computational Framework for Estimating Days of Maintenance Delay of Naval Ships” presented at EDBT 2025, where we detail the probabilistic modeling stack that underpins the forecasts.
Inverse Design Platform for Polymer Nanocomposites
Machine Learning Engineer · Materium Technologies · 2024 – 2025
Materium’s challenge was inverse design: start with optical performance goals and work backwards to candidate polymer nanocomposite formulations. I developed a Flask application that couples learned embeddings with physics-informed priors, giving materials scientists a living design notebook. Behind the scenes, we containerized the training and inference stack so new assay data could reshape the latent space in hours, not weeks.
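A sketch of how that inverse-design loop can look: sample the latent space, decode candidate formulations, and rank them against the requested optical target. The decoder, surrogate, and physics prior below are random placeholders standing in for the learned models, and the endpoint name is hypothetical.

```python
# Hedged sketch of an inverse-design endpoint; all "models" here are random
# placeholders, not Materium's actual decoder, surrogate, or prior.
import numpy as np
from flask import Flask, request, jsonify

rng = np.random.default_rng(0)
W_dec, W_opt = rng.normal(size=(16, 6)), rng.normal(size=(6, 32))

def decode_formulation(z):
    # Placeholder decoder: latent vector -> formulation parameters in [-1, 1].
    return np.tanh(z @ W_dec)

def surrogate_optical(x):
    # Placeholder surrogate: formulation -> predicted optical response curve.
    return x @ W_opt

def physics_prior_penalty(x):
    # Placeholder physics-informed prior: penalize implausible formulations.
    return 0.1 * np.square(x).sum(axis=1)

app = Flask(__name__)

@app.route("/design", methods=["POST"])
def design():
    target = np.asarray(request.get_json()["optical_target"], dtype=float)
    z = rng.normal(size=(2048, 16))               # sample candidate latent vectors
    candidates = decode_formulation(z)
    predicted = surrogate_optical(candidates)
    # Rank candidates by distance to the optical goal plus the prior penalty.
    scores = np.linalg.norm(predicted - target, axis=1) + physics_prior_penalty(candidates)
    best = np.argsort(scores)[:10]
    return jsonify(formulations=candidates[best].tolist(), scores=scores[best].tolist())
```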
Visit the Website
RLHF as a Knowledge Distillation Mechanism
Research Assistant · Toyota Technological Institute at Chicago · 2023
This study asked how much teacher choice matters when distilling preferences into large language models. By instrumenting PPO-based RLHF loops with comparative judgment tasks, I showed that pairing policies with stronger reference models yields 5–15% gains on METEOR, BLEU, and ROUGE. The work also produced a reproducible GPU experiment framework for the lab to extend into multimodal settings.
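A toy version of the comparative-judgment harness, assuming two competing policies and reference texts. The metric mix here (NLTK BLEU plus a hand-rolled ROUGE-1) is illustrative rather than the full METEOR/BLEU/ROUGE suite used in the study.

```python
# Comparative-judgment sketch: which policy's output scores higher per prompt?
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def rouge1_f(reference, hypothesis):
    # Clipped unigram-overlap ROUGE-1 F1, implemented inline for illustration.
    ref, hyp = reference.split(), hypothesis.split()
    overlap = sum(min(ref.count(t), hyp.count(t)) for t in set(hyp))
    if overlap == 0:
        return 0.0
    p, r = overlap / len(hyp), overlap / len(ref)
    return 2 * p * r / (p + r)

def compare_policies(prompts, policy_a, policy_b, references):
    """Per-prompt comparative judgments between two generation policies."""
    smooth = SmoothingFunction().method1
    def score(ref, out):
        bleu = sentence_bleu([ref.split()], out.split(), smoothing_function=smooth)
        return 0.5 * bleu + 0.5 * rouge1_f(ref, out)
    wins_a = sum(score(ref, policy_a(p)) > score(ref, policy_b(p))
                 for p, ref in zip(prompts, references))
    return wins_a / len(prompts)  # win rate of policy A over policy B

# Toy usage with canned "policies" so the sketch runs without a GPU.
prompts = ["summarize the maintenance log"]
refs = ["the ship requires hull repairs before deployment"]
win_rate = compare_policies(prompts,
                            lambda p: "the ship requires hull repairs",
                            lambda p: "repairs are needed soon",
                            refs)
print(f"policy A win rate: {win_rate:.2f}")
```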
View the Code
Self-Supervised Learning with Denoising Autoencoders
M.S. Thesis · University of Chicago · 2022 – 2023
For my graduate thesis I revisited self-supervised learning with a question: could we escape the large-batch requirements of contrastive methods without sacrificing downstream performance? The resulting denoising autoencoder architecture matched SimCLR on core benchmarks while exposing how augmentation strength modulates representation quality.
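A compact sketch of the idea, assuming flattened image inputs: corrupt each example, reconstruct the clean view, and train on per-example reconstruction error so no large negative-pair batches are required. The layer sizes and noise level are illustrative, not the exact thesis configuration.

```python
# Denoising autoencoder sketch in PyTorch; dimensions and noise are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoencoder(nn.Module):
    def __init__(self, dim=784, latent=128, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, dim))

    def forward(self, x):
        # Corrupt the input (the "augmentation"), then reconstruct the clean view.
        noisy = x + self.noise_std * torch.randn_like(x)
        z = self.encoder(noisy)                  # representation used for downstream tasks
        return self.decoder(z), z

model = DenoisingAutoencoder()
x = torch.rand(32, 784)                          # stand-in batch; small batches suffice
recon, z = model(x)
loss = F.mse_loss(recon, x)                      # reconstruction objective, no negative pairs
loss.backward()
```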