Jiayi Pan

潘家怡

University of California, Berkeley

Hi 👋

I am a first-year PhD student at Berkeley AI Research. I work with Alane Suhr at Berkeley NLP Group.

I love playing with complex, general systems. Recently, this has primarily involved evaluating and improving (multi-modal) language models as decision-making agents. I also explore other interesting topics in ML/AI.

From 2019 to 2023, I was a happy undergrad at the University of Michigan and Shanghai Jiao Tong University, where I worked with Joyce Chai, Dmitry Berenson, and Fan Wu.

I continuously reassess my lifestyle/objectives. Feedback is always welcome :)

Publications & Manuscripts

* denotes equal contribution
Autonomous Evaluation and Refinement of Digital Agents

Jiayi Pan, Yichi Zhang, Nickolas Tomlin, Yifei Zhou, Sergey Levine, Alane Suhr. Preprint 2024.

We design model-based evaluators to both evaluate and autonomously refine the performance of digital agents. We show that these open-ended evaluators can significantly improve agents' performance, through either fine-tuning or inference-time guidance, without any extra supervision.
ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL

Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar. ICML 2024.

We present ArCHer, a framework for building multi-turn RL algorithms for training LLM agents. It preserves the flexibility of existing single-turn RL methods for LLMs like PPO, while accommodating multiple turns, long horizons, and delayed rewards effectively.
Inversion-Free Image Editing with Natural Language

Sihan Xu*, Yidong Huang*, Jiayi Pan, Ziqiao Ma, Joyce Chai. CVPR 2024.

We present an inversion-free editing (InfEdit) method that allows for consistent natural language guided image editing. InfEdit excels in complex editing tasks and is ~10X faster than prior methods.
Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?

Yichi Zhang, Jiayi Pan, Yuchen Zhou, Rui Pan, Joyce Chai. EMNLP 2023.

Do Vision-Language Models, an emerging human-computer interface, experience visual illusions the way humans do, or do they accurately depict reality? We created the GVIL dataset to study this question. Among other findings, we discover that larger models align more closely with human perception.
World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models

Ziqiao Ma*, Jiayi Pan*, Joyce Chai. ⭐️ ACL 2023 Outstanding Paper.

We study grounding and bootstrapping in open-world language learning through Grounded Open Vocabulary Acquisition. Our visually-grounded language model, OctoBERT, excels in learning grounded words quickly and robustly, particularly with unseen words.
SEAGULL: An Embodied Agent for Instruction Following through Situated Dialog

Team SEAGULL at UMich. 🏆 1st Place in the inaugural Alexa Prize SimBot Challenge.

We introduce SEAGULL, an interactive embodied agent which completes complex tasks in the Arena simulation environment through dialog with users. SEAGULL is engineered to be efficient, user-centric, and continuously improving.
Data-Efficient Learning of Natural Language to Linear Temporal Logic Translators for Robot Task Specification

Jiayi Pan, Glen Chou, Dmitry Berenson. ICRA 2023.

We present a learning-based approach to translate from natural language commands to LTL specifications with only a handful of labeled data. It enables few-shot learning of LTL translators while achieving state-of-the-art performance.
DANLI: Deliberative Agent for Following Natural Language Instructions

Yichi Zhang, Jianing Yang, Jiayi Pan, Shane Storks, Nikhil Devraj, Ziqiao Ma, Keunwoo Peter Yu, Yuwei Bao, Joyce Chai. EMNLP 2022, Oral.

We introduce DANLI, a neural-symbolic deliberative agent that proactively reasons and plans according to its past experiences. DANLI achieves a 70% improvement on the challenging TEACh benchmark while improving transparency and explainability in its behaviors.

Contact

  • Email: jiayipan [AT] berkeley [DOT] edu

Misc

  • I recently started to track what I consume and learn from. You can find them here.
  • I try to develop some habits. Currently, I am learning guitar and music theory.
  • Growing up, I lived in quite a few places: Chongqing, Xinyang, Chengdu, Shanghai, Ann Arbor, and now the Bay Area.
  • These days, I think a lot about how to align my research with a positive, counterfactual impact on a near future where AGI becomes a reality.
  • Before doing AI research, I was quite into physics and participated in the Physics Olympiad during my high school years (although I wasn't exceptionally strong). I still occasionally read physics books.
  • I am a big fan of Elden Ring and Hollow Knight. I also play League of Legends with friends sometimes.