autorag.nodes.promptmaker package

Submodules

autorag.nodes.promptmaker.base module

autorag.nodes.promptmaker.base.prompt_maker_node(func)[source]

autorag.nodes.promptmaker.fstring module

autorag.nodes.promptmaker.fstring.fstring(prompt: str, queries: List[str], retrieved_contents: List[List[str]]) List[str][source]

Make a prompt using f-string from a query and retrieved_contents. You must specify a prompt or a list of prompts in the config YAML file like this:

    nodes:
    - node_type: prompt_maker
      modules:
      - module_type: fstring
        prompt: ["Answer this question: {query}\n\n{retrieved_contents}",
                 "Read the passages carefully and answer this question: {query}\n\nPassages: {retrieved_contents}"]

param prompt:
    A prompt string.
param queries:
    List of query strings.
param retrieved_contents:
    List of retrieved contents.
return:
    Prompts made by f-string.
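
For intuition, the substitution behaves roughly like the sketch below. This is an illustrative reimplementation, not the module's exact code; in particular, how the passages for one query are joined into {retrieved_contents} is an assumption here.

    from typing import List

    def fstring_sketch(prompt: str, queries: List[str],
                       retrieved_contents: List[List[str]]) -> List[str]:
        # For each query, join its retrieved passages and fill both
        # placeholders in the prompt template.
        prompts = []
        for query, contents in zip(queries, retrieved_contents):
            joined = "\n\n".join(contents)  # joining with blank lines is an assumption
            prompts.append(prompt.format(query=query, retrieved_contents=joined))
        return prompts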

autorag.nodes.promptmaker.long_context_reorder module

autorag.nodes.promptmaker.long_context_reorder.long_context_reorder(prompt: str, queries: List[str], retrieved_contents: List[List[str]], retrieve_scores: List[List[float]]) List[str][source]

Models struggle to access significant details found in the center of extended contexts. A study (https://arxiv.org/abs/2307.03172) observed that the best performance typically arises when crucial data is positioned at the start or end of the input context. Additionally, as the input context lengthens, performance drops notably, even in models designed for long contexts. You must specify a prompt or a list of prompts in the config YAML file like this:

    nodes:
    - node_type: prompt_maker
      modules:
      - module_type: long_context_reorder
        prompt: ["Answer this question: {query}\n\n{retrieved_contents}",
                 "Read the passages carefully and answer this question: {query}\n\nPassages: {retrieved_contents}"]

param prompt:
    A prompt string.
param queries:
    List of query strings.
param retrieved_contents:
    List of retrieved contents.
param retrieve_scores:
    List of retrieve scores.
return:
    Prompts made by long context reorder.
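
The reordering can be sketched as follows. This is an illustrative reimplementation under the assumption that passages are ranked by retrieve score and then alternately placed toward the front and back, so the most relevant passages sit at the edges of the context and the least relevant ones drift toward the middle; consult the module source for the exact ordering.

    from typing import List

    def reorder_by_score(contents: List[str],
                         scores: List[float]) -> List[str]:
        # Rank passages from most to least relevant.
        ranked = [c for _, c in sorted(zip(scores, contents),
                                       key=lambda pair: pair[0],
                                       reverse=True)]
        # Alternate between front and back so low-scoring passages
        # end up in the middle of the final context.
        front, back = [], []
        for i, content in enumerate(ranked):
            (front if i % 2 == 0 else back).append(content)
        return front + back[::-1]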

autorag.nodes.promptmaker.run module

autorag.nodes.promptmaker.run.evaluate_generator_result(result_df: DataFrame, metric_inputs: List[MetricInput], metrics: List[str] | List[Dict]) DataFrame[source]
autorag.nodes.promptmaker.run.evaluate_one_prompt_maker_node(prompts: List[str], generator_funcs: List[Callable], generator_params: List[Dict], metric_inputs: List[MetricInput], metrics: List[str] | List[Dict], project_dir, strategy_name: str) DataFrame[source]
autorag.nodes.promptmaker.run.make_generator_callable_params(strategy_dict: Dict)[source]
autorag.nodes.promptmaker.run.run_prompt_maker_node(modules: List[Callable], module_params: List[Dict], previous_result: DataFrame, node_line_dir: str, strategies: Dict) DataFrame[source]

Run the prompt maker node. With this function, you can select the best prompt maker module. By default, when only one module is used, evaluation is skipped. If you want to select the best prompt among multiple modules, use strategies. When you use them, you must pass 'generator_modules' and their parameters in strategies, because generator modules and generator metrics are used to evaluate this node. It is recommended to use a single set of modules and parameters for evaluation, but you can use multiple. When you don't set a generator module in strategies, the default generator module is used: llama_index_llm with the OpenAI gpt-3.5-turbo model.

Parameters:
  • modules – Prompt maker modules to run.

  • module_params – Prompt maker module parameters.

  • previous_result – Previous result dataframe. Could be query expansion’s best result or qa data.

  • node_line_dir – This node line’s directory.

  • strategies – Strategies for prompt maker node.

Returns:

The best result dataframe. It contains the previous result columns and the prompt maker's result column, 'prompts'.
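
As a hedged illustration, a strategies dictionary for this node might look like the following. The 'generator_modules' key is named in the description above, and llama_index_llm with gpt-3.5-turbo is the stated default; the 'metrics' key and the specific metric names are assumptions for the example and should be checked against the AutoRAG docs.

    # Hypothetical strategies for run_prompt_maker_node; the metric
    # names and generator parameters below are assumptions.
    strategies = {
        "metrics": ["bleu", "meteor", "rouge"],
        "generator_modules": [
            {
                "module_type": "llama_index_llm",
                "llm": "openai",
                "model": "gpt-3.5-turbo",
            }
        ],
    }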

autorag.nodes.promptmaker.window_replacement module

autorag.nodes.promptmaker.window_replacement.window_replacement(prompt: str, queries: List[str], retrieved_contents: List[List[str]], retrieved_metadata: List[List[Dict]]) List[str][source]

Replace retrieved_contents with their sentence windows to create a prompt (only available for a corpus chunked with the sentence-window method). You must specify a prompt or a list of prompts in the config YAML file like this:

    nodes:
    - node_type: prompt_maker
      modules:
      - module_type: window_replacement
        prompt: ["Answer this question: {query}\n\n{retrieved_contents}",
                 "Read the passages carefully and answer this question: {query}\n\nPassages: {retrieved_contents}"]

param prompt:
    A prompt string.
param queries:
    List of query strings.
param retrieved_contents:
    List of retrieved contents.
param retrieved_metadata:
    List of retrieved metadata.
return:
    Prompts made by window_replacement.
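
The replacement step can be sketched as below. It assumes each metadata dict carries a 'window' key produced by the sentence-window chunker, and falls back to the original content when the key is missing; after replacement, prompt formatting proceeds as in fstring. This is a sketch, not the module's exact code.

    from typing import Dict, List

    def replace_with_window(contents: List[str],
                            metadata: List[Dict]) -> List[str]:
        # Prefer the wider sentence window stored in metadata over the
        # narrowly retrieved sentence; fall back to the content itself.
        return [meta.get("window", content)
                for content, meta in zip(contents, metadata)]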

Module contents