Research

Here's an overview of our research areas.

Optimizations for Large Language Models: Developing core technologies for enhancing the capabilities of large language models


Large language models (LLMs) are trained to understand and generate natural language at scale, learning the structure, meaning, and knowledge embedded in large text corpora. Our research focuses on three areas: (1) fundamental technologies for Transformer-based LLMs, (2) tailoring LLMs to specialized tasks, and (3) refining methods for LLM agents.

Publication list

[1] Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada. Jellyfish: Instruction-Tuning Local Large Language Models for Data Preprocessing. EMNLP 2024.
[2] Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada. Large Language Models as Data Preprocessors. TaDA 2024.

Funding

Consultation on the utilization of Large Language Models for data management challenges (NEC Corporation)
Consultation on performance optimization and acceleration of Large Language Models (NEC Corporation)

Resources

Jellyfish model: https://huggingface.co/NECOUDBFM/Jellyfish
Jellyfish dataset: https://huggingface.co/datasets/NECOUDBFM/Jellyfish-Instruct
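
As a quick illustration, here is a minimal sketch of loading the Jellyfish model with Hugging Face transformers and posing a data-preprocessing question. The model id comes from the link above; the prompt wording, the entity-matching example, and the generation settings are illustrative assumptions, not the authors' prescribed usage.

```python
# Minimal sketch: querying Jellyfish for a data-preprocessing task.
# Model id taken from the resource link above; prompt format is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NECOUDBFM/Jellyfish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical entity-matching prompt: ask whether two records
# describe the same real-world entity.
prompt = (
    "You are an assistant for data preprocessing.\n"
    "Record A: [name: 'Apple iPhone 13, 128GB', price: '799']\n"
    "Record B: [name: 'iPhone 13 (128 GB)', price: '799.00']\n"
    "Do Record A and Record B refer to the same entity? Answer Yes or No."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```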
