Hello! I’m Charlie

My work sits at the intersection of natural language processing, reinforcement learning, and programming education. A core thread running through my research is building lightweight, locally deployable tools powered by small models, rather than relying on large proprietary systems.

I recently completed my Ph.D. in Computer Science at Aalto University, where my research focused on automating programming feedback using open-source language models.

You can find my full dissertation here.


Recent News

5 December 2025
My Ph.D. thesis has been accepted for defense! Defense information

15 September 2025
Two papers accepted at SIGCSE TS 2026!

  • Aligning Small Language Models for Programming Feedback: Towards Scalable Coding Support in a Massive Global Course (first author)
  • Fine-Tuning Open-Source Models as a Viable Alternative to Proprietary LLMs for Explaining Compiler Messages (second author)

The first paper came out of my research visit at Stanford, where we also put together a presentation page. The second is a collaboration with researchers at The University of New South Wales.

27 May 2025
Paper accepted at CSEDM 2025: “Reinforcement Learning for Programming Feedback: Aligning Small Language Models Without Human Preferences” (first author).

24 May 2025
Paper accepted at BEA 2025: “Direct Repair Optimization: Training Small Language Models for Educational Program Repair Improves Feedback Abilities” (first author).

2 May 2025
Gave a talk at Berkeley’s ACE Lab on feedback generation with small language models.

1 April 2025
Started a research visit at the Piech Lab at Stanford University (through June 2025).