
Large Language Model Optimization
Large language models (LLMs) are trained to understand and generate natural language, learning the structure, semantics, and knowledge embedded in large-scale human text corpora. Our research focuses on three areas: (1) fundamental technologies for Transformer-based LLMs, (2) adapting LLMs to specialized tasks, and (3) refining methods for LLM agents.
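As one illustration of area (2), a data-preprocessing task such as entity matching can be cast as instruction/response pairs for supervised fine-tuning, in the spirit of the Jellyfish work cited below. The following is a minimal sketch, not the actual Jellyfish implementation; the function name, prompt wording, and record fields are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the Jellyfish code): turn an entity-matching
# instance into one instruction-tuning example for a local LLM.

def make_entity_matching_example(record_a: dict, record_b: dict, label: bool) -> dict:
    """Build one instruction/response pair for supervised fine-tuning."""
    instruction = (
        "You are given two product records. "
        "Decide whether they refer to the same real-world entity. "
        "Answer with Yes or No."
    )

    def serialize(record: dict) -> str:
        # Flatten a record into "key: value" pairs for the prompt.
        return "; ".join(f"{k}: {v}" for k, v in record.items())

    prompt = (
        f"{instruction}\n"
        f"Record A: {serialize(record_a)}\n"
        f"Record B: {serialize(record_b)}"
    )
    return {"instruction": prompt, "response": "Yes" if label else "No"}

example = make_entity_matching_example(
    {"title": "iPhone 13 128GB", "brand": "Apple"},
    {"title": "Apple iPhone 13 (128 GB)", "brand": "Apple"},
    label=True,
)
print(example["response"])  # -> Yes
```

A corpus of such pairs, covering tasks like error detection and schema matching, could then be fed to a standard fine-tuning pipeline so that a local model follows data-preprocessing instructions without relying on a hosted API.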
Related Publications
[1] Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada. Jellyfish: Instruction-Tuning Local Large Language Models for Data Preprocessing. EMNLP 2024.
[2] Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada. Large Language Models as Data Preprocessors. TaDA 2024.
Research Funding
Consultation on applying large language models to data management tasks (NEC Corporation)
Consultation on improving the performance and efficiency of large language models (NEC Corporation)