Research

Our research focuses on building grounded AI systems that let users interact with digital and physical environments through natural language. We develop AI agents that translate language and perception into executable code and actions, empowering people to perform data science, control computers, and collaborate with robots. Our work spans three core areas: code generation for data science, grounding language in the digital world, and grounding language in the physical world.

Papers

Code Generation for Data Science
Grounding Language in the Digital World
Grounding Language in the Physical World
Others
AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials

Yiheng Xu, Dunjie Lu, Zhennan Shen, Junli Wang, Zekun Wang, Yuchen Mao, Caiming Xiong, Tao Yu

Preprint

Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction

Yiheng Xu, Zekun Wang, Junli Wang, Dunjie Lu, Tianbao Xie, Amrita Saha, Doyen Sahoo, Tao Yu, Caiming Xiong

Preprint

Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows

Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, Tao Yu

Preprint

Attacking Vision-Language Computer Agents via Pop-ups

Yanzhe Zhang, Tao Yu, Diyi Yang

Preprint

BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval

Hongjin Su, Howard Yen, Mengzhou Xia, Weijia Shi, Niklas Muennighoff, Han-yu Wang, Haisu Liu, Quan Shi, Zachary S. Siegel, Michael Tang, Ruoxi Sun, Jinsung Yoon, Sercan O. Arik, Danqi Chen, Tao Yu

Preprint

Generative Representational Instruction Tuning

Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, Douwe Kiela

ICLR 2024 AGI Workshop, Best Paper Award

OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments

Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, Yitao Liu, Yiheng Xu, Shuyan Zhou, Silvio Savarese, Caiming Xiong, Victor Zhong, Tao Yu

NeurIPS 2024

Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?

Ruisheng Cao, Fangyu Lei, Haoyuan Wu, Jixuan Chen, Yeqiao Fu, Hongcheng Gao, Xinzhuang Xiong, Hanchong Zhang, Yuchen Mao, Wenjing Hu, Tianbao Xie, Hongshen Xu, Danyang Zhang, Sida Wang, Ruoxi Sun, Pengcheng Yin, Caiming Xiong, Ansong Ni, Qian Liu, Victor Zhong, Lu Chen, Kai Yu, Tao Yu

NeurIPS 2024, Spotlight

EvoR: Evolving Retrieval for Code Generation

Hongjin Su, Shuyang Jiang, Yuhang Lai, Haoyuan Wu, Boao Shi, Che Liu, Qian Liu, Tao Yu

EMNLP Findings 2024

OS-Copilot: Towards Generalist Computer Agents with Self-Improvement

Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, Lingpeng Kong

Preprint

OpenAgents: An Open Platform for Language Agents in the Wild

Tianbao Xie*, Fan Zhou*, Zhoujun Cheng*, Peng Shi*, Luoxuan Weng*, Yitao Liu*, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, Tao Yu

COLM 2024

Does Collaborative Human-LM Dialogue Generation Help Information Extraction from Human Dialogues?

Bo-Ru Lu, Nikita Haduong, Chia-Hsuan Lee, Zeqiu Wu, Hao Cheng, Paul Koester, Jean Utke, Tao Yu, Noah A. Smith, Mari Ostendorf

COLM 2024

Lemur: Harmonizing Natural Language and Code for Language Agents

Yiheng Xu*, Hongjin Su*, Chen Xing*, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu

ICLR 2024, Spotlight

Text2Reward: Automated Dense Reward Function Generation for Reinforcement Learning

Tianbao Xie*, Siheng Zhao*, Chen Henry Wu, Yitao Liu, Qian Luo, Victor Zhong, Yanchao Yang, Tao Yu

ICLR 2024, Spotlight

Instructor Embeddings: One Embedder, Any Task: Instruction-Finetuned Text Embeddings

Hongjin Su*, Weijia Shi*, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu

ACL Findings 2023

DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation

Yuhang Lai*, Chengxi Li*, Yiming Wang*, Tianyi Zhang*, Ruiqi Zhong*, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida Wang, Tao Yu

ICML 2023

Coder Reviewer Reranking for Code Generation

Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang

ICML 2023

Compositional Exemplars for In-context Learning

Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, Lingpeng Kong

ICML 2023

Batch Prompting: Efficient Inference with Large Language Model APIs

Zhoujun Cheng, Jungo Kasai, Tao Yu

EMNLP 2023 Industry Track

Binder: Binding Language Models in Symbolic Languages

Zhoujun Cheng*, Tianbao Xie*, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, Tao Yu

ICLR 2023

Selective Annotation Makes Language Models Better Few-Shot Learners

Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu

ICLR 2023

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

Tianbao Xie*, Chen Henry Wu*, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu

EMNLP 2022

ZeroGen: Efficient Zero-shot Learning via Dataset Generation

Jiacheng Ye*, Jiahui Gao*, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

EMNLP 2022

In-Context Learning for Few-Shot Dialogue State Tracking

Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf

EMNLP Findings 2022

Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task

Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, Dragomir Radev

EMNLP 2018

© 2023 XLANG Lab. All rights reserved.