Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs

1Allen Institute for AI   
2University of California, Los Angeles   
3University of Washington   

We introduce 🪄 Lumos, Language Agents with Unified Data, Modular Design, and Open-Source LLMs. Lumos unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.

Lumos has the following features:

  • 🧩 A General Agent Modular Framework
    • 🧩 Lumos consists of planning, grounding, and execution modules built on LLAMA-2-7B and off-the-shelf APIs.
    • 🤗 Lumos utilizes a unified data format that encompasses multiple task types, thereby enabling the developed agent framework to conveniently support a range of interactive tasks.
  • 🌍 Diverse Training Data
    • 🌍 Lumos is trained with ~40K diverse, high-quality subgoal/action annotations converted with GPT-4 from ground-truth reasoning steps in existing benchmarks.
    • ⚒️ Lumos data can be instrumental for future research in developing open-source agents for complex interactive tasks.
  • 🚀 Competitive Performance
    • 🚀 Lumos is comparable to, or even beats, GPT-4/3.5-based agents on the web and complex QA tasks Mind2Web and HotpotQA, and larger open-source agents on math tasks.
    • 🚀 Lumos exceeds contemporaneous agents that have been fine-tuned with in-domain HotpotQA and Mind2Web annotations, such as FireAct and AgentLM.
    • 🚀 Lumos outperforms open agent baseline formulations such as chain-of-thought training and integrated agent training.
    • 🚀 Lumos surpasses larger open LLM agents and domain-specific agents by a large margin on an unseen task, WebShop.

BibTeX


    @article{yin2023lumos,
      title={{Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs}},
      author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
      journal={arXiv preprint arXiv:2311.05657},
      year={2023}
    }
    

🪄 Lumos Architecture

Lumos consists of the following modules (a rough code sketch follows the list):

  • Planning Module:
    • Decompose a complex task into a series of high-level subgoals, which are written in natural language.
  • Grounding Module:
    • Convert the high-level subgoals produced by the planning module to low-level executable actions.
  • Execution Module:
    • Parse the grounded actions and execute them with a collection of off-the-shelf tools, including APIs, small neural models, and virtual simulators, interacting with the external environment.
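
A minimal sketch (in Python, using Hugging Face Transformers) of how the three modules could be wired together. The checkpoint names, prompt strings, and the `execute` stub below are assumptions for illustration only; the released checkpoints and tool implementations are documented in the Lumos GitHub repository.

    # Hypothetical wiring of the Lumos planning / grounding / execution modules.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    class Module:
        """Wraps one fine-tuned LLAMA-2-7B checkpoint (planning or grounding)."""

        def __init__(self, model_name: str):
            self.tokenizer = AutoTokenizer.from_pretrained(model_name)
            self.model = AutoModelForCausalLM.from_pretrained(model_name)

        def generate(self, prompt: str, max_new_tokens: int = 256) -> str:
            inputs = self.tokenizer(prompt, return_tensors="pt")
            outputs = self.model.generate(**inputs, max_new_tokens=max_new_tokens)
            # Return only the newly generated tokens, decoded to text.
            return self.tokenizer.decode(
                outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
            )

    planning = Module("lumos-plan-7b")      # hypothetical checkpoint name
    grounding = Module("lumos-ground-7b")   # hypothetical checkpoint name

    def execute(actions: list[str]) -> str:
        """Execution module: dispatch each low-level action to an off-the-shelf
        tool (API, small neural model, or simulator) and return the observation."""
        raise NotImplementedError("Tool dispatch is environment-specific.")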

🪄 Lumos Formulation

We explore the following two Lumos formulations (a sketch of both follows the list):

  • Lumos-Iterative (Lumos-I):
    • Generates one subgoal and its corresponding executable actions in each iteration, conditioned on the external environment and prior memory.
    • When generating the t-th subgoal, the planning module takes as input the previously planned subgoals and the execution results of their grounded actions.
  • Lumos-Onetime (Lumos-O):
    • An efficient formulation that generates all the subgoals and grounded actions at once.
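
A rough sketch of the two formulations, reusing the hypothetical `planning`, `grounding`, and `execute` helpers from the architecture sketch above. The prompt templates and stop condition are placeholders, not the exact ones used to train Lumos.

    # Lumos-I: plan -> ground -> execute one subgoal at a time, feeding the
    # execution results back into the next planning step as memory.
    def lumos_iterative(task: str, max_steps: int = 10) -> list[str]:
        memory: list[str] = []   # previously planned subgoals + execution feedback
        observations: list[str] = []
        for t in range(max_steps):
            subgoal = planning.generate(
                f"Task: {task}\nHistory: {memory}\nSubgoal {t + 1}:"
            )
            if not subgoal.strip():          # placeholder stop condition
                break
            actions = grounding.generate(
                f"Task: {task}\nSubgoal: {subgoal}\nActions:"
            )
            observation = execute(actions.split(";"))
            memory += [subgoal, observation]
            observations.append(observation)
        return observations

    # Lumos-O: generate all subgoals and all grounded actions in one pass,
    # then execute them without intermediate environment feedback.
    def lumos_onetime(task: str) -> str:
        subgoals = planning.generate(f"Task: {task}\nAll subgoals:")
        actions = grounding.generate(
            f"Task: {task}\nSubgoals: {subgoals}\nAll actions:"
        )
        return execute(actions.split(";"))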

🪄 Lumos Training Annotations

Instead of using the Self-Instruct method, we use LLMs to convert ground-truth intermediate reasoning steps from existing benchmarks into high-quality annotations that align with our proposed formulations.

Finally, we generate ~40K annotations to train the Lumos planning and grounding modules (one of the largest resources for language agent fine-tuning). The annotation sources cover the web, complex QA, and math task types. See our final annotation data in the Hugging Face Dataset and prompt details on GitHub.
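
For a sense of what a converted training instance can look like, here is a purely illustrative (not verbatim) subgoal/action pair for a complex QA example. The field names and action syntax below are assumptions made for this sketch; the exact schema and action space are defined in the released dataset and prompts.

    # Illustrative only: field names and action syntax are assumed, not copied
    # from the released Lumos annotations.
    example_annotation = {
        "task": "Which country is the birthplace of the director of Inception?",
        "subgoals": [
            "Subgoal 1: Identify the director of Inception.",
            "Subgoal 2: Determine the birthplace country of that director.",
        ],
        "actions": [
            "R1 = KnowledgeQuery(Inception); R2 = QA([R1], 'Who directed Inception?')",
            "R3 = KnowledgeQuery(R2); R4 = QA([R3], 'Which country was this person born in?')",
        ],
    }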

Results

We first evaluate Lumos on complex QA, web, and math tasks.

We find that Lumos outperforms GPT-4/3.5-based agents on complex QA and web tasks.
In particular, Lumos outperforms GPT-4 by 5.1 points in step success rate on Mind2Web and GPT-3.5-turbo-based ReAct by 5.1 points in LLM accuracy. Lumos also achieves better performance than 2-4x larger language agents on math tasks.

Comparison with Baseline Formulations

We compare the Lumos formulation with other baseline formulations for training open-source agents: Chain-of-Thought Training and Integrated Agent Training.

Lumos performs the best among the baselines on three different complex interactive tasks.

Generalizability of Lumos

We first evaluate Lumos trained with the unified annotations, which combine the task-specific ones. We then test Lumos on an unseen complex interactive task, WebShop.

We find that after unified training, Lumos achieves slightly higher performance on web and complex QA tasks. We also observe that Lumos brings a 5-10 reward improvement over domain-specific agents, and outperforms larger agents of 13B and 30B sizes.

Further Analysis on Annotations

We also conduct a deeper analysis of annotation quality and the choice of annotation formats. We answer the following questions:

  • Q1: How good are our converted training annotations?
  • Q2: Would it be better if we adopt low-level subgoals instead of our proposed high-level subgoals?

We find that, controlling the training annotation size to be the same, our annotations still yield better performance than those produced by the Self-Instruct method, even after the latter pass rigorous execution sanity checking. We also find that having the planning module generate high-level subgoals is a superior choice to generating a very long sequence of low-level subgoals.
