How to configure VT Writer to leverage an LLM for Generative AI

This guide will help you configure Large Language Model (LLM) integration with VT Writer 5.1.3+ so you can enable content generation features.

Important Note: LLM integration requires additional infrastructure and costs beyond the base VT Writer deployment. Before proceeding, please consider:

  • Hardware requirements for self-hosted LLMs (typically requiring GPU resources)
  • API usage costs for cloud-based LLM providers
  • Network connectivity and security considerations
  • Ongoing maintenance responsibilities

See: Frequently Asked Questions: VT Writer, VTRAG, and LLM Integration

Overview

VT Writer can now integrate with LLMs to provide content generation capabilities. The system is designed to be flexible, working with various LLM providers through a standard configuration interface.

Understanding LLM Components

VT Writer works with two separate LLM components:

  1. VT Writer LLM: Used for generating content, rephrasing, and other generative AI tasks
  2. VTRAG Embedding LLM: Used specifically for creating vector embeddings when using document retrieval features. Requires VT Writer LLM to be enabled.

These components must be configured separately, even if you're using the same provider for both.
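
To make the distinction concrete, the sketch below (Python, assuming an Ollama server at a hypothetical host, with example model names) shows the two kinds of calls involved: a generative request, which is the VT Writer LLM's job, and an embedding request, which is the VTRAG embedding LLM's job:

    import requests

    OLLAMA = "http://your-ollama-server:11434"  # hypothetical host

    # Generative call (the VT Writer LLM role): turns a prompt into text.
    gen = requests.post(f"{OLLAMA}/api/generate", timeout=60, json={
        "model": "mistral-nemo",
        "prompt": "Rephrase: The report was written by the team.",
        "stream": False,
    })
    print(gen.json()["response"])

    # Embedding call (the VTRAG embedding role): turns text into a vector for retrieval.
    emb = requests.post(f"{OLLAMA}/api/embeddings", timeout=60, json={
        "model": "nomic-embed-text",  # example embedding model
        "prompt": "A passage from an uploaded document.",
    })
    print(len(emb.json()["embedding"]), "dimensions")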

Prerequisites

Before configuring LLM features, ensure you have:

  • VT Writer 5.1.3 or above installed
  • Access to a compatible LLM provider (e.g., OpenAI, Azure OpenAI, AWS Bedrock, or Ollama)
  • API keys and endpoint information for your chosen LLM provider
  • Administrator access to the VT Writer System Admin portal

Configuration Steps

1. Access the System Admin Portal

  1. Log in to VT Writer with administrator credentials
  2. Click the user menu and select "System Admin"
  3. Navigate to "System Settings" in the left sidebar

2. Locate the Generative AI Section

In the System Admin panel, locate the "Generative AI" section, which contains all LLM-related settings.

3. Enable Generative AI Features

Toggle the "Enable generative AI features" switch to the "On" position.

4. Configure LLM Settings

Complete the following fields (a short validation sketch follows this list):

  • Framework: Select your LLM provider from the dropdown (Ollama, OpenAI, Azure OpenAI, or AWS Bedrock)
  • Endpoint: Enter the API endpoint URL for your LLM provider (typical examples below)
    • For Ollama: http://your-ollama-server:11434
    • For OpenAI: https://api.openai.com/v1
    • For Azure OpenAI: Your deployment-specific endpoint
    • For AWS Bedrock: Your region-specific endpoint
  • Model: Enter the model name you wish to use
    • For Ollama: mistral-nemo (recommended) or another installed model
    • For OpenAI: gpt-4-turbo or similar
    • For Azure OpenAI: Your deployed model name
    • For AWS Bedrock: The model ID
  • API Key: Enter your API key or access token
    • For Ollama, this may be left blank if no authentication is configured
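
Before saving, you can sanity-check the Endpoint, API Key, and Model values with a short script. This is a minimal sketch assuming an OpenAI-compatible provider; all values shown are placeholders:

    import requests

    # Placeholder values; substitute whatever you entered in the fields above.
    ENDPOINT = "https://api.openai.com/v1"
    API_KEY = "sk-your-key"   # hypothetical key
    MODEL = "gpt-4-turbo"

    # Listing models exercises connectivity, the API key, and model availability at once.
    r = requests.get(f"{ENDPOINT}/models",
                     headers={"Authorization": f"Bearer {API_KEY}"},
                     timeout=10)
    r.raise_for_status()
    available = {m["id"] for m in r.json()["data"]}
    print(f"{MODEL} available:", MODEL in available)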

Example configuration for Ollama:
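
    Framework: Ollama
    Endpoint:  http://your-ollama-server:11434
    Model:     mistral-nemo
    API Key:   (blank, assuming no authentication is configured)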

5. Optional: Enable Streaming

If your LLM provider supports streaming responses, you can enable the "Enable Streaming" option to see responses appear in real-time.
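
If you are unsure whether your endpoint supports streaming, one way to check against an Ollama server (hypothetical host) is to request a streamed response directly and watch for newline-delimited JSON chunks:

    import json
    import requests

    # Hypothetical host; Ollama emits newline-delimited JSON chunks when "stream" is true.
    r = requests.post("http://your-ollama-server:11434/api/generate",
                      json={"model": "mistral-nemo", "prompt": "Say hello.", "stream": True},
                      stream=True, timeout=60)
    for line in r.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)
            if chunk.get("done"):
                break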

6. Save Changes

Click the "Save Changes" button to apply your configuration.

Recommended Self-Hosted Configuration: Ollama with Mistral NeMo

For customers who want a fully self-hosted solution, we recommend using Ollama with the Mistral NeMo model:

  1. Install Ollama on a server with appropriate GPU resources
  2. Pull the Mistral NeMo model: ollama pull mistral-nemo
  3. Configure VT Writer as follows:
    • Framework: Ollama
    • Endpoint: http://your-ollama-server:11434
    • Model: mistral-nemo
    • API Key: (leave blank if no authentication is configured)

This configuration gives you a permissively licensed model well suited to text creation and summarization, without requiring any external API services.
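
Before pointing VT Writer at the server, you can confirm it is reachable and that the model has been pulled. A minimal check, assuming a hypothetical host:

    import requests

    # /api/tags lists the models installed on an Ollama server.
    tags = requests.get("http://your-ollama-server:11434/api/tags", timeout=10).json()
    names = [m["name"] for m in tags.get("models", [])]
    print("mistral-nemo installed:", any(n.startswith("mistral-nemo") for n in names))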

See the Ollama configuration guide for more information on self-hosted LLMs.

Using VTRAG for Document Context

If you want to use documents as context for your content generation, you'll need to configure VTRAG separately. Once configured, you can point VT Writer to it via the System Admin settings.

See our VTRAG Configuration Guide for detailed instructions.

Using VTRAG Functionality

Once VTRAG is configured:

  1. In the VT Writer user interface, look for the "Use Files" feature in the LLM prompt window
  2. Select documents to use as context when generating content
  3. VTRAG will process these documents and provide relevant context to your LLM prompts

Important Considerations

  • VT Writer does not include any LLM models; you must provide your own
  • The customer is entirely responsible for deploying, configuring, maintaining, and governing their LLM
  • Performance may vary depending on your chosen LLM provider and model
  • Content generation quality depends on the capabilities of your selected LLM
  • Costs will vary based on your chosen LLM solution:
    • Self-hosted solutions (Ollama) require hardware investment but have no per-token costs
    • Cloud API services (OpenAI, Azure, AWS) have ongoing token usage fees

Troubleshooting

If you encounter issues with your LLM integration:

  1. Verify network connectivity between VT Writer and your LLM provider (a minimal probe follows this list)
  2. Check that your API key and endpoint URL are correct
  3. Ensure your selected model is available through your provider
  4. Verify firewall rules allow communication on the required ports
  5. Check system logs for any error messages
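
For the first two checks, a minimal probe like the following (an Ollama endpoint is shown; the host is hypothetical) helps separate network problems from configuration problems:

    import requests

    ENDPOINT = "http://your-ollama-server:11434"  # substitute your configured endpoint

    try:
        # Ollama's root URL answers "Ollama is running" when reachable; for other
        # providers, any HTTP response at all rules out basic network/firewall issues.
        r = requests.get(ENDPOINT, timeout=5)
        print("Reachable - HTTP", r.status_code, "-", r.text[:40])
    except requests.RequestException as exc:
        print("Cannot reach endpoint:", exc)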

For additional assistance, contact VisibleThread Support at support@visiblethread.com.

