Research

Here we introduce the various research projects currently underway at the Onizuka Lab.

Large Language Model Optimization


Large language models (LLMs) are designed to handle and produce extensive natural language content. They develop an understanding of the structure, meaning, and knowledge embedded in human language datasets. Our focus includes three specific areas: (1) Fundamental technologies in Transformer-based LLMs, (2) Tailoring LLMs to specialized tasks, and (3) Refining methods for LLM agents.
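To illustrate focus area (2), tailoring LLMs to specialized tasks, the sketch below builds an instruction/response pair for a data preprocessing task (entity matching), in the spirit of the Jellyfish work cited below. The prompt wording, field layout, and helper function are illustrative assumptions, not the actual Jellyfish template.

```python
# Hypothetical sketch: formatting an entity-matching example as an
# instruction-tuning pair. The prompt template is an assumption for
# illustration, not the actual format used by Jellyfish.

def build_instruction_example(record_a: dict, record_b: dict, label: str) -> dict:
    """Format two records as an instruction/response training pair."""
    instruction = (
        "You are an expert in entity matching. Decide whether the two "
        "records refer to the same real-world entity. Answer Yes or No."
    )
    def fmt(record: dict) -> str:
        # Serialize a record as "key: value; key: value".
        return "; ".join(f"{k}: {v}" for k, v in record.items())

    prompt = (
        f"{instruction}\n"
        f"Record A: {fmt(record_a)}\n"
        f"Record B: {fmt(record_b)}"
    )
    return {"prompt": prompt, "response": label}

# Example usage with two product records that match:
example = build_instruction_example(
    {"title": "iPhone 13 128GB", "brand": "Apple"},
    {"title": "Apple iPhone 13 (128 GB)", "brand": "Apple"},
    label="Yes",
)
```

Pairs like this can then be used to fine-tune a local model so that it answers data preprocessing queries directly, rather than relying on a general-purpose hosted LLM.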

Publication list

[1] Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada. Jellyfish: Instruction-Tuning Local Large Language Models for Data Preprocessing. EMNLP 2024.
[2] Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada. Large Language Models as Data Preprocessors. TaDA 2024.

Funding

Consultation on applying large language models to data management challenges (NEC Corporation)
Consultation on performance improvement and acceleration of large language models (NEC Corporation)

Resources

Jellyfish model: https://huggingface.co/NECOUDBFM/Jellyfish
Jellyfish dataset: https://huggingface.co/datasets/NECOUDBFM/Jellyfish-Instruct

