Ruijie Zheng
I am a second-year Ph.D. student in Computer Science at the University of Maryland, College Park, where I am fortunate to be advised by Prof. Furong Huang and Prof. Hal Daumé III. Before that, I obtained my Bachelor’s degree in Computer Science and Mathematics with high honors from the University of Maryland, College Park. My research spans a variety of topics in sequential decision making and reinforcement learning (RL), including multitask offline pretraining (foundation models for sequential decision making), self-supervised representation learning in visual RL, model-based RL, and adversarial RL. My long-term goal is to develop a generally capable, robust, and self-adaptive embodied agent, endowed with extensive prior knowledge from a broad spectrum of structured and unstructured data. You can find my CV here.
In visual RL, I developed a temporal contrastive representation learning mechanism, TACO, which simultaneously learns state and action representations for online and offline visual RL algorithms. Building on TACO, Premier-TACO scales this approach up to large-scale multitask offline pretraining, learning a universal visual representation that adapts efficiently to new tasks via few-shot imitation learning. Additionally, my recent work DrM introduces the first visual RL algorithm to master a diverse range of complex locomotion and manipulation tasks, guided by the concept of the dormant ratio.
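For a rough sense of the kind of objective behind TACO, below is a minimal PyTorch sketch of a temporal action-driven contrastive (InfoNCE-style) loss, where a state representation paired with an encoded action sequence is matched against the representation of a future state. The module names, shapes, and the use of plain linear layers in place of convolutional encoders are illustrative assumptions of mine, not the actual TACO implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalActionContrastive(nn.Module):
    """Sketch of a TACO-style temporal contrastive loss (hypothetical names/shapes)."""

    def __init__(self, state_dim, action_dim, latent_dim=128):
        super().__init__()
        self.state_enc = nn.Linear(state_dim, latent_dim)    # stand-in for a visual encoder
        self.action_enc = nn.Linear(action_dim, latent_dim)  # encodes a flattened action sequence
        self.proj = nn.Linear(2 * latent_dim, latent_dim)    # fuses state and action latents
        self.W = nn.Parameter(torch.eye(latent_dim))         # bilinear similarity matrix

    def forward(self, s_t, action_seq, s_future):
        z_t = self.state_enc(s_t)                             # (B, D) current-state latent
        u_t = self.action_enc(action_seq)                     # (B, D) action-sequence latent
        query = self.proj(torch.cat([z_t, u_t], dim=-1))      # (B, D) state-action query
        key = self.state_enc(s_future)                        # (B, D) future-state latent
        logits = query @ self.W @ key.T                       # (B, B) pairwise similarities
        labels = torch.arange(logits.size(0), device=logits.device)
        # InfoNCE: matching (state, actions) -> future-state pairs lie on the diagonal;
        # the other entries in the batch act as negatives.
        return F.cross_entropy(logits, labels)

# Example usage with random data: 32 transitions, 50-dim states, a 3-step
# sequence of 6-dim actions flattened to 18 dims.
loss_fn = TemporalActionContrastive(state_dim=50, action_dim=18)
loss = loss_fn(torch.randn(32, 50), torch.randn(32, 18), torch.randn(32, 50))
```

Because the negatives come from the other transitions in the same batch, no separate negative sampling is needed in this sketch.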
Beyond visuo-motor policy learning, I have also worked on model-based RL, transfer RL across different observation spaces, and adversarial RL to make policies robust against observation and communication attacks.