Configure LLM

Configure the LLM model

Modules that use LLM model

Most of the modules that use an LLM model can take the llm parameter to specify which LLM model to use.

The following modules can use the generator module, which includes llama_index_llm.

Supported LLM Models

We support most of the LLMs that LlamaIndex supports. You can use different types of LLM interfaces by configuring the llm parameter:

LLM Model Type    llm parameter    Description
OpenAI            openai           For OpenAI models (GPT-3.5, GPT-4)
OpenAILike        openailike       For models with OpenAI-compatible APIs (e.g., Mistral, Claude)
Ollama            ollama           For locally running Ollama models
Bedrock           bedrock          For AWS Bedrock models
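
For instance, a locally running Ollama model might be configured like the sketch below. The model name and base_url values are illustrative, not defaults; check the LlamaIndex Ollama integration for its exact parameters.

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: ollama
            model: llama2
            base_url: http://localhost:11434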

For example, if you want to use an OpenAILike model, you can set the llm parameter to openailike.

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: openailike
            model: mistralai/Mistral-7B-Instruct-v0.2
            api_base: your_api_base
            api_key: your_api_key

In the above example, you can see the model parameter. This is a parameter for the LLM model itself: parameters you set here are passed to the LlamaIndex LLM at initialization. The most frequently used parameters are model, max_tokens, and temperature. Please check what you can set for each model at the LlamaIndex LLM documentation.

Common Parameters

The most frequently used parameters for LLM configuration are:

  • model: The model identifier or name

  • max_tokens: Maximum number of tokens in the response

  • temperature: Controls randomness in the output (0.0 to 1.0)

  • api_base: API endpoint URL (for hosted models)

  • api_key: Authentication key (if required)

For a complete list of available parameters, please refer to the LlamaIndex LLM documentation.
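
Putting these together, a generator module entry might look like the following sketch (all values are illustrative):

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: openai
            model: gpt-3.5-turbo
            max_tokens: 512
            temperature: 0.7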

Add more LLM models

You can add more LLM models to AutoRAG by adding a new key and value to autorag.generator_models. For example, if you want to add the MockLLM model for testing, execute the following code.

Attention

LlamaIndex v0.10.0 was a major update: LLM integrations must now be installed as separate packages. So, before adding your model, you should find and install the right package for it (for example, Ollama models require the llama-index-llms-ollama package). You can find the packages here.

import autorag
from llama_index.core.llms.mock import MockLLM

# Register the class itself (not an instance) under the key you will use in YAML
autorag.generator_models['mockllm'] = MockLLM

Then you can use mockllm in your config YAML file.
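
For example, a generator module using the newly registered key might look like this sketch:

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: mockllm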

Caution

When you add a new LLM model, you must add the class itself, not an instance.

Plus, it must follow the LlamaIndex LLM interface.
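
As a sketch of what that interface looks like, here is a minimal custom LLM built on LlamaIndex's CustomLLM base class, registered with AutoRAG at the end. The class name OurLLM, the registration key our_llm, and the echo behavior are illustrative, not part of AutoRAG.

from typing import Any

import autorag
from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback

class OurLLM(CustomLLM):
    context_window: int = 3900
    num_output: int = 256
    model_name: str = "custom"

    @property
    def metadata(self) -> LLMMetadata:
        # Describe the model so LlamaIndex can plan prompt sizes
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Illustrative behavior: echo the prompt back
        return CompletionResponse(text=prompt)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Stream the same echo, one character at a time
        response = ""
        for token in prompt:
            response += token
            yield CompletionResponse(text=response, delta=token)

# Register the class itself, not an instance
autorag.generator_models['our_llm'] = OurLLM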

Integration list