autorag.data.qa.query package¶
Submodules¶
autorag.data.qa.query.llama_gen_query module¶
- async autorag.data.qa.query.llama_gen_query.concept_completion_query_gen(row: Dict, llm: BaseLLM, lang: str = 'en') → Dict [source]¶
- async autorag.data.qa.query.llama_gen_query.custom_query_gen(row: Dict, llm: BaseLLM, messages: List[ChatMessage]) → Dict [source]¶
- async autorag.data.qa.query.llama_gen_query.factoid_query_gen(row: Dict, llm: BaseLLM, lang: str = 'en') → Dict [source]¶
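The generators above share a row-in/row-out contract: each coroutine receives a QA row Dict, calls the LLM, and returns the row with a generated query attached, which lets a batch of rows be processed concurrently. A minimal sketch of that contract with a hypothetical stub standing in for the real LLM call (the `retrieval_gt_contents` field name is an assumption for illustration):

```python
import asyncio
from typing import Dict, List

# Hypothetical stand-in for an async query generator such as
# factoid_query_gen: takes a row Dict, returns the row with a "query"
# key added. The real functions call an LLM; this stub does not.
async def fake_query_gen(row: Dict) -> Dict:
    topic = row["retrieval_gt_contents"]  # assumed field name
    return {**row, "query": f"What does the passage about {topic} say?"}

async def main() -> List[Dict]:
    rows = [
        {"retrieval_gt_contents": "Paris"},
        {"retrieval_gt_contents": "Berlin"},
    ]
    # Rows are dispatched concurrently with asyncio.gather.
    return await asyncio.gather(*(fake_query_gen(r) for r in rows))

results = asyncio.run(main())
```

The same pattern applies to the openai_gen_query generators below, which take an AsyncOpenAI client instead of a BaseLLM.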
autorag.data.qa.query.openai_gen_query module¶
- class autorag.data.qa.query.openai_gen_query.Response(*, query: str)[source]¶
Bases: BaseModel
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}¶
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[Dict[str, FieldInfo]] = {'query': FieldInfo(annotation=str, required=True)}¶
Metadata about the fields defined on the model: a mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.
This replaces Model.__fields__ from Pydantic V1.
- query: str¶
- class autorag.data.qa.query.openai_gen_query.TwoHopIncrementalResponse(*, answer: str, one_hop_question: str, two_hop_question: str)[source]¶
Bases: BaseModel
- answer: str¶
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}¶
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[Dict[str, FieldInfo]] = {'answer': FieldInfo(annotation=str, required=True), 'one_hop_question': FieldInfo(annotation=str, required=True), 'two_hop_question': FieldInfo(annotation=str, required=True)}¶
Metadata about the fields defined on the model: a mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.
This replaces Model.__fields__ from Pydantic V1.
- one_hop_question: str¶
- two_hop_question: str¶
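TwoHopIncrementalResponse is a Pydantic model used to parse the LLM's structured output. A sketch that re-creates the schema documented above and validates a JSON payload against it (the re-created class mirrors the fields listed here; the payload text itself is invented for illustration):

```python
import json
from pydantic import BaseModel

# Minimal re-creation of the TwoHopIncrementalResponse schema documented
# above; the real class lives in autorag.data.qa.query.openai_gen_query.
class TwoHopIncrementalResponse(BaseModel):
    answer: str
    one_hop_question: str
    two_hop_question: str

# Example structured-output payload (invented for illustration).
payload = json.dumps({
    "answer": "Paris",
    "one_hop_question": "What is the capital of France?",
    "two_hop_question": "What is the capital of the country "
                        "directly south of Belgium?",
})

# Pydantic v2 validation, matching the model_fields metadata shown above.
parsed = TwoHopIncrementalResponse.model_validate_json(payload)
```

Because all three fields are required (`required=True` in `model_fields`), validation fails fast if the LLM omits any of them.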
- async autorag.data.qa.query.openai_gen_query.concept_completion_query_gen(row: Dict, client: AsyncOpenAI, model_name: str = 'gpt-4o-2024-08-06', lang: str = 'en') → Dict [source]¶
- async autorag.data.qa.query.openai_gen_query.factoid_query_gen(row: Dict, client: AsyncOpenAI, model_name: str = 'gpt-4o-2024-08-06', lang: str = 'en') → Dict [source]¶
- async autorag.data.qa.query.openai_gen_query.query_gen_openai_base(row: Dict, client: AsyncOpenAI, messages: List[ChatMessage], model_name: str = 'gpt-4o-2024-08-06')[source]¶
- async autorag.data.qa.query.openai_gen_query.two_hop_incremental(row: Dict, client: AsyncOpenAI, model_name: str = 'gpt-4o-2024-08-06', lang: str = 'en') → Dict [source]¶
Create a two-hop question using an incremental prompt. An incremental prompt is more effective for creating multi-hop questions. The input retrieval_gt must include more than one passage.
- Returns:
The two-hop question generated with the OpenAI incremental prompt
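Since two_hop_incremental requires more than one ground-truth passage, a caller would check that precondition before dispatching a row. A hedged sketch (the `retrieval_gt_contents` field name and its nested-list layout are assumptions about AutoRAG's QA row schema):

```python
from typing import Dict, List

def has_multiple_passages(row: Dict) -> bool:
    # retrieval_gt_contents is assumed to hold a list of passage groups,
    # one group per retrieval ground-truth entry.
    groups: List[List[str]] = row.get("retrieval_gt_contents", [])
    return sum(len(g) for g in groups) > 1

row = {
    "retrieval_gt_contents": [
        ["Paris is the capital of France."],
        ["France borders Germany."],
    ],
}
# Only rows passing this check qualify for two-hop generation.
ok = has_multiple_passages(row)
```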