Research

Here is an overview of our research areas.

Smart Agent-Based Modeling: Leveraging foundation models for agent-based modeling


Smart agents are intelligent, adaptive computational entities. While humans are the canonical smart agents, the advent of foundation models with remarkable language, vision, and reasoning abilities that emulate human behavior enables us to extend the concept of smart agents to agent-based modeling (ABM). This evolution leads to smart agent-based modeling (SABM). Unlike traditional ABM, SABM incorporates foundation models as agents and formulates models in natural language. We employ SABM to investigate natural processes across fields such as economics and behavioral science, and we believe it offers a more nuanced and realistic approach to understanding natural systems.
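To make the idea concrete, the sketch below (in Python, the language of the linked case studies) shows the core SABM loop: each agent is driven by a foundation model, and both its role and the simulation state are expressed in natural language. The query_llm helper, the SmartAgent class, and the two-firm market scenario are illustrative assumptions for this page, not code taken from our repositories.

# Minimal SABM sketch: LLM-driven agents interacting over natural-language rounds.
# query_llm is a hypothetical placeholder for any chat-capable foundation model.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a foundation model (plug in your own client)."""
    raise NotImplementedError("Wire this to your preferred LLM API.")

class SmartAgent:
    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona        # natural-language description of the agent's role
        self.memory: list[str] = []   # running record of past rounds

    def act(self, observation: str) -> str:
        # The model is formulated in natural language: persona + memory + observation.
        prompt = (
            f"You are {self.name}. {self.persona}\n"
            "History so far:\n" + "\n".join(self.memory) + "\n"
            f"Current situation: {observation}\n"
            "State your decision for this round in one sentence."
        )
        decision = query_llm(prompt)
        self.memory.append(f"{self.name} decided: {decision}")
        return decision

def run_simulation(agents, rounds: int = 5):
    observation = "The market opens; no prices have been set yet."
    for _ in range(rounds):
        decisions = {a.name: a.act(observation) for a in agents}
        # Each agent observes the others' natural-language decisions next round.
        observation = " ".join(f"{n} said: {d}" for n, d in decisions.items())

firms = [
    SmartAgent("Firm A", "You sell coffee and want to maximize profit."),
    SmartAgent("Firm B", "You sell coffee and compete with Firm A."),
]
# run_simulation(firms)  # requires query_llm to be connected to a real model

A real experiment would connect query_llm to a chat model and log each round's decisions for analysis; the case studies listed under Resources follow this general pattern with richer scenarios.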

Publication list

[1] Zengqing Wu, Run Peng, Shuyuan Zheng, Qianying Liu, Xu Han, Brian Inhyuk Kwon, Makoto Onizuka, Shaojie Tang, Chuan Xiao. Shall We Team Up: Exploring Spontaneous Cooperation of Competing LLM Agents. EMNLP 2024 Findings.
[2] Jiawei Wang, Renhe Jiang, Chuang Yang, Zengqing Wu, Makoto Onizuka, Ryosuke Shibasaki, Noboru Koshizuka, Chuan Xiao. Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation. arXiv preprint 2402.14744.
[3] Zengqing Wu, Run Peng, Xu Han, Shuyuan Zheng, Yixin Zhang, Chuan Xiao. Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations. arXiv preprint 2311.06330.
[4] Xu Han, Zengqing Wu, Chuan Xiao. "Guinea Pig Trials" Utilizing GPT: A Novel Smart Agent-Based Modeling Approach for Studying Firm Competition and Collusion. CIST 2023.

Resources

SABM case studies: https://github.com/Roihn/SABM
Spontaneous cooperation of LLM agents: https://github.com/wuzengqing001225/SABM_ShallWeTeamUp
Personal mobility generation: https://github.com/Wangjw6/LLMob/

