Peng Jiang 蒋鹏
PhD Student, Computational Neuroscience · Tsinghua University
About
I'm a PhD student in Computational Neuroscience at Tsinghua University, Beijing, supervised by Prof. Xiaoxuan Jia in the Neural Coding Lab. My work sits at the intersection of neuroscience and AI, asking how intelligent systems — brains and models alike — represent, reuse, and generalize knowledge.
My current focus is Brain Foundation Model research: building large-scale pre-trained models of neural spiking data that generalize across sessions, subjects, and brain regions. The goal is both to understand neural encoding mechanisms across modalities and to leverage the learned universal representations for decoding tasks such as reconstructing perceived images and videos from neural activity (see Now, below). I also continue to explore real-time interactive digital humans in parallel.
Research Interests
Brain foundation models · Neural coding across brain regions · Visual decoding (image and video reconstruction from neural activity) · Compositional generalization and continual learning · Real-time interactive digital humans
Education
PhD, Computational Neuroscience
Tsinghua University · Supervisor: Prof. Xiaoxuan Jia
B.Sc., Life Sciences (Minor: Computer Science)
Tsinghua University
Academic Experience
Multi-task Neural Representation Research
Neural Coding Lab, Tsinghua University
- Analyzed multi-task neural representations across brain regions using invasive Neuropixels recordings from mice
- Built multi-area RNN models to study hierarchical task representation in flexible cognition
- Investigated compositional generalization of task representations via continual learning with LoRA fine-tuning of LLMs
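The last item mentions LoRA fine-tuning; a minimal sketch of the underlying low-rank adaptation trick looks like this (the names, rank, and sizes are illustrative, not our experimental setup):

```python
# Minimal LoRA-style adapter sketch: the frozen base weight is augmented
# with a trainable low-rank update B @ A, so each new task adds only a
# small number of parameters. Illustrative only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + scale * x A^T B^T; B starts at zero, so the
        # adapted layer initially matches the frozen model exactly.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(512, 512))
y = layer(torch.randn(2, 512))   # only A and B receive gradients
```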
Entrepreneurship
Co-Founder & Algorithm Lead
Startup · Leave of absence from PhD
I took a leave of absence to co-found a startup building a real-time, audio-driven interactive digital human system based on 3D Gaussian Splatting (3DGS). We developed the full pipeline, from multi-camera capture and calibration through 3DGS-based head reconstruction to diffusion-based audio-to-expression driving.
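To make the final stage of that pipeline concrete, here is a minimal sketch of what diffusion-based audio-to-expression driving can look like. Everything in it is an illustrative assumption rather than the system we shipped: the 52-dim blendshape targets, the 256-dim audio features, the network, and the DDPM-style schedule.

```python
# Sketch: train a model to denoise facial expression coefficients,
# conditioned on per-frame audio features (all names/sizes hypothetical).
import torch
import torch.nn as nn

EXPR_DIM, AUDIO_DIM, T = 52, 256, 1000

# Linear noise schedule (DDPM-style).
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class NoisePredictor(nn.Module):
    """Predicts the noise added to expression coefficients, conditioned
    on the diffusion timestep and an audio feature vector."""
    def __init__(self):
        super().__init__()
        self.t_embed = nn.Embedding(T, 64)
        self.net = nn.Sequential(
            nn.Linear(EXPR_DIM + AUDIO_DIM + 64, 512), nn.SiLU(),
            nn.Linear(512, 512), nn.SiLU(),
            nn.Linear(512, EXPR_DIM),
        )
    def forward(self, x_t, t, audio):
        return self.net(torch.cat([x_t, audio, self.t_embed(t)], dim=-1))

def training_step(model, x0, audio):
    """Add noise at a random timestep, then regress that noise from the
    audio-conditioned model (standard denoising-diffusion objective)."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
    return nn.functional.mse_loss(model(x_t, t, audio), eps)

model = NoisePredictor()
loss = training_step(model, torch.randn(8, EXPR_DIM), torch.randn(8, AUDIO_DIM))
```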
Now
After returning to school in early 2026, I have been focusing on Brain Foundation Model research: building large-scale pre-trained models for neural spiking data that generalize across sessions, subjects, and brain regions.
The core ambition is twofold. On the encoding side, I want to understand how the brain encodes multimodal information: what representations emerge across different brain regions, how they are structured, and what computational principles unify them. On the decoding side, I believe that universal neural representations learned during pre-training can serve as a strong foundation for visual decoding tasks — reconstructing the images and videos a subject is perceiving directly from their neural activity. Together, these two directions form a coherent loop: encoding teaches the model the brain's language; decoding proves that the model has truly understood it.
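For a concrete picture of what masked pre-training on spiking data can look like, here is a minimal sketch: binned spike counts are embedded per time bin, a fraction of bins is masked, a session embedding lets one transformer serve many recordings, and the model reconstructs the masked counts. The sizes, masking scheme, and module names are assumptions for illustration, not the actual model.

```python
# Sketch: masked-reconstruction pre-training on binned spike counts,
# with a session embedding for cross-recording generalization.
import torch
import torch.nn as nn

N_UNITS, N_BINS, D = 128, 100, 256   # neurons, time bins, model width

class SpikeMAE(nn.Module):
    def __init__(self, n_sessions: int = 10):
        super().__init__()
        self.embed = nn.Linear(N_UNITS, D)            # counts per bin -> token
        self.session_embed = nn.Embedding(n_sessions, D)
        self.pos = nn.Parameter(torch.zeros(1, N_BINS, D))
        layer = nn.TransformerEncoderLayer(D, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D, N_UNITS)             # reconstruct masked bins

    def forward(self, spikes, session_id, mask):
        # spikes: (B, N_BINS, N_UNITS) binned counts; mask: (B, N_BINS) bool
        tok = self.embed(spikes) + self.pos
        tok = torch.where(mask.unsqueeze(-1), torch.zeros_like(tok), tok)
        tok = tok + self.session_embed(session_id).unsqueeze(1)
        return self.head(self.encoder(tok))

model = SpikeMAE()
spikes = torch.poisson(torch.full((4, N_BINS, N_UNITS), 0.5))  # toy data
mask = torch.rand(4, N_BINS) < 0.5                   # mask half the bins
recon = model(spikes, torch.zeros(4, dtype=torch.long), mask)
loss = nn.functional.mse_loss(recon[mask], spikes[mask])
```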
I remain deeply interested in real-time interactive digital humans and continue exploring this direction in parallel.