This is the root element for the expert definition file.

  • Displayed name of the expert.
  • Short description of the expert.
  • Whether the expert is confidential and requires authorization to access.
  • RAG configuration.
  • Tools configuration.
  • Knowledge graph configuration.
  • Topic generation configuration.
  • Chat configuration.
  • Prompts configuration.
  • Agents configuration.

Configuration of Retrieval-Augmented Generation as used by this expert.

  • F2 configuration.
  • Embedding configuration.
  • Summary generation configuration.
  • Retrieval configuration.

Configuration of F2 parameters.

  • External ID of the F2 keyword that defines the expert area. This ID is used to identify the records that will be indexed by the expert.
  • Multiple external IDs of F2 keywords that define the expert area. Records that contain at least one of the keywords are indexed by the expert.
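As an illustration, selecting records by keyword external ID can be sketched as follows; the record structure, field names, and function are hypothetical, not part of the actual product:

```python
# Hypothetical sketch: a record is indexed when it carries at least one of
# the configured F2 keyword external IDs. Field names are assumptions.

def records_to_index(records, keyword_ids):
    """Keep records tagged with at least one of the configured keyword IDs."""
    wanted = set(keyword_ids)
    return [r for r in records if wanted & set(r["keywords"])]

records = [
    {"id": "doc-1", "keywords": ["HR", "LEGAL"]},
    {"id": "doc-2", "keywords": ["IT"]},
    {"id": "doc-3", "keywords": ["LEGAL"]},
]

print([r["id"] for r in records_to_index(records, ["HR", "LEGAL"])])
# ['doc-1', 'doc-3']
```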

Configuration of how the expert should create summary chunks for clusters of chunks. The expert will automatically generate these summary chunks and insert them together with the chunks extracted from the documents.

  • Enable or disable summary generation (default is true if SummaryGenerationType is included, otherwise default is false).
  • Number of chunks to include per cluster. Use 0 to avoid generating summaries. A value of 1 is allowed, generating a summary for each chunk, but is probably not very useful.
  • Number of words to generate in the summary.
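The chunks-per-cluster setting can be illustrated with a toy sketch; here chunks are grouped consecutively, whereas the actual implementation presumably clusters by similarity, and all names are hypothetical:

```python
# Toy sketch of the chunks-per-cluster setting. A value of 0 disables
# summary generation; each resulting group would get one summary chunk.

def cluster_chunks(chunks, chunks_per_cluster):
    if chunks_per_cluster <= 0:
        return []
    return [chunks[i:i + chunks_per_cluster]
            for i in range(0, len(chunks), chunks_per_cluster)]

print(cluster_chunks(["c1", "c2", "c3", "c4", "c5"], 2))
# [['c1', 'c2'], ['c3', 'c4'], ['c5']]
```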

Configuration of how the expert retrieves document chunks.

  • Number of chunks to include in the response context.
  • Enable or disable the algorithm used to analyze the query and decide whether or not to search documents to include in the context.
  • Threshold value for the semantic vector search ranking score. Only chunks with a higher score will be included.
  • BM25 text search ranking algorithm configuration.
  • Reranking configuration.
  • Query expansion configuration.
  • Query term extraction configuration.
  • Query decomposition configuration.
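The interplay between the context size and the semantic score threshold can be sketched as follows; the function and data layout are hypothetical, not the actual implementation:

```python
# Hypothetical sketch of chunk selection: keep chunks scoring above the
# semantic threshold, best first, capped at the configured context size.

def select_context_chunks(scored_chunks, context_size, score_threshold):
    """Return at most `context_size` chunk texts scoring above the threshold."""
    passing = [(score, text) for score, text in scored_chunks
               if score > score_threshold]
    passing.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in passing[:context_size]]

scored = [(0.9, "a"), (0.4, "b"), (0.75, "c"), (0.6, "d")]
print(select_context_chunks(scored, context_size=2, score_threshold=0.5))
# ['a', 'c']
```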

Template for the output of questions and answers generated by the query expansion and decomposition process.

Template variables:

  • Question
  • Answer
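The substitution of the template variables above can be illustrated with a minimal, hypothetical renderer; the real template engine may behave differently:

```python
# Minimal sketch of $Name$-style template substitution for the Question
# and Answer variables. The renderer is an assumption, not the product API.

def render(template, variables):
    for name, value in variables.items():
        template = template.replace("$%s$" % name, value)
    return template

qa_template = "Q: $Question$\nA: $Answer$"
print(render(qa_template, {"Question": "What is RAG?",
                           "Answer": "Retrieval-Augmented Generation."}))
```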

Configuration of the tools available for the expert.

Include search tool.

Configuration of a tool for filtered searching of RAG data.

  • Name of the search tool.
  • Description of the search tool.
  • A notification to display while searching.
  • Query parameter.
  • Search parameters.
  • Search result template.

Configuration of query parameter.

Name of parameter to use for search query.

Configuration of parameters for a tool.

Tool parameter.

Configuration of a single tool parameter.

  • Name of the parameter.
  • Description of the parameter.
  • Type of the parameter.
String parameter.

Configuration of how the expert generates its internal knowledge graph for the documents. Knowledge graph creation is disabled by default as it is computationally intensive and slow.

  • Enable or disable the knowledge graph (default is false).
  • Template for injecting the relevant knowledge data into the RAG generation prompt. The default template is "$Source$ ($SourceType$), $Relation$, $Target$ ($TargetType$).".
  • Allowed entity types.
  • Allowed relationship types.
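The default template string is quoted from the description above; the renderer below is a hypothetical sketch of how one knowledge graph triple might be turned into prompt text:

```python
# Hypothetical renderer for the documented default knowledge graph
# template. Only the template string itself comes from the documentation.

def render(template, variables):
    for name, value in variables.items():
        template = template.replace("$%s$" % name, value)
    return template

default_template = "$Source$ ($SourceType$), $Relation$, $Target$ ($TargetType$)."
triple = {"Source": "Ada Lovelace", "SourceType": "Person",
          "Relation": "wrote", "Target": "the Notes", "TargetType": "Document"}
print(render(default_template, triple))
# Ada Lovelace (Person), wrote, the Notes (Document).
```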

List of entity types the expert includes when building the knowledge graph.

Included entity type.

List of relationship types the expert includes when building the knowledge graph.

Included relationship type.
  • Absolute threshold value for the BM25 text search ranking score. Only chunks with a higher score will be included.
  • Threshold value for the BM25 text search ranking score relative to the average score. The value is interpreted as a multiplier of the average, so 1 equals the average and 1.5 equals 1.5 times the average. Only chunks with a higher score will be included.
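The relative threshold can be illustrated with a small sketch; the function and data layout are hypothetical, not the actual implementation:

```python
# Hypothetical sketch of the relative BM25 threshold: the configured value
# multiplies the average score, and only chunks above that product survive.

def filter_bm25_relative(scored_chunks, multiplier):
    if not scored_chunks:
        return []
    average = sum(score for score, _ in scored_chunks) / len(scored_chunks)
    return [text for score, text in scored_chunks
            if score > multiplier * average]

scored = [(4.0, "a"), (2.0, "b"), (0.0, "c")]   # average score is 2.0
print(filter_bm25_relative(scored, 1.5))        # keep scores above 3.0
# ['a']
```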

Configuration of how the expert splits documents into chunks and calculates their embeddings.

  • Chunk splitting mode.
  • Size of pseudo-sentences when doing semantic splitting. This is the number of words included on each side of the potential splitting locations.
  • Number of words to include in chunks. This is also used with semantic splitting, where smaller groups will be merged until they fit the chunk size.
  • Number of overlapping words to include from the previous and next chunks.
  • Name of the embedding model to use.
  • A template for generating chunk text. Use $Prefix$, $Content$ and $Postfix$ for building up the chunks.
  • Split the document into chunks of fixed size.
  • Split the document into chunks based on the semantic coherence of the words in the chunks.
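Fixed-size splitting with overlap can be sketched as follows; this is one plausible reading of the chunk size and overlap settings, not the actual implementation:

```python
# Hypothetical sketch of fixed-size splitting: each chunk holds
# `chunk_size` words and shares `overlap` words with its neighbour.
# Assumes overlap < chunk_size.

def split_fixed(words, chunk_size, overlap):
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(words[start:start + chunk_size])
        if start + chunk_size >= len(words):
            break
    return chunks

words = ["w%d" % i for i in range(10)]
# Three chunks of four words; each shares one word with the previous chunk.
print(split_fixed(words, chunk_size=4, overlap=1))
```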

Configuration of how the expert reranks search results.

Enable or disable reranking (default is false).

Configuration of how the expert identifies specific search terms for use in keyword searching.

Enable or disable query term extraction (default is true).

Configuration of how the expert expands a query into multiple variations of the same query in order to try a broader search.

Enable or disable query expansion (default is false).

Configuration of how the expert decomposes queries into smaller, more manageable queries.

  • Enable or disable query decomposition (default is false).
  • Reference to a prompt used to answer sub-queries from the decomposed query.

Configuration of how the expert calculates topics for the included documents. Topics are calculated by a clustering algorithm that groups similar documents into clusters and then generates a topic name for each cluster.

Topic generation is experimental.

Number of topics to generate. Topic heading configuration.

Configuration of how the expert generates topic headings.

  • Number of most important words to base the heading on.
  • Number of documents to include as examples per topic.

Configuration of chat.

Enable use of context generated by the LLM for the conversation history.

List of prompts the expert can use.

Prompt configuration.
  • Prompt ID used to reference the prompt.
  • Name of the large language model to use.
  • Language model parameter configuration.
  • Filter applied to search results.
  • System template for the prompt.
  • Template for the prompt.

Configuration of a single prompt.

List of agents the expert provides.

Agent configuration.

Configuration of a single agent.

Configuration of parameters for the language model used in a prompt. The parameters correspond to the parameters used by Ollama.

  • The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)
  • Sets the size of the context window used to generate the next token. (Default: 2048)
  • Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = context size)
  • Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)
  • Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. (Default: 0)
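Since the description states these parameters correspond to those used by Ollama, they map onto Ollama's generation option names. A sketch of how such a configuration might be passed in an Ollama request; the model name and payload assembly are illustrative only:

```python
# The option keys below are Ollama's actual generation option names with
# their documented defaults; the model name and payload are illustrative.

options = {
    "temperature": 0.8,     # higher values give more creative answers
    "num_ctx": 2048,        # context window used to generate the next token
    "repeat_last_n": 64,    # look-back window for the repetition penalty
    "repeat_penalty": 1.1,  # strength of the repetition penalty
    "seed": 0,              # fixed seed makes output reproducible per prompt
}
payload = {"model": "llama3", "prompt": "Hello", "options": options}
print(payload["options"]["num_ctx"])
# 2048
```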

Configuration of the filter to apply on search results before generating a response.

Require search result match.

Configuration of a single restriction to apply on search results before generating a response.

  • Name of the column to apply the restriction on.
  • Required value.
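The column/value restriction can be sketched as a simple equality filter; the function name and result structure are hypothetical:

```python
# Hypothetical sketch of the search result restriction: keep only results
# where the configured column equals the required value.

def apply_restriction(results, column, required_value):
    return [row for row in results if row.get(column) == required_value]

results = [
    {"department": "HR", "title": "Handbook"},
    {"department": "IT", "title": "Runbook"},
]
print(apply_restriction(results, "department", "HR"))
# [{'department': 'HR', 'title': 'Handbook'}]
```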