Settings

Configure API keys, models, and system prompts

Debug Mode

Show detailed request/response data for debugging API calls and LLM processing

When enabled, debug mode displays the following (see the sketch after this list):

  • Request JSON sent to backend
  • Raw LLM response before processing
  • Processed response after parsing
  • API endpoint and timing information
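
The exact schema of this debug data is not documented here; the TypeScript sketch below shows one plausible way to capture it around a backend call. The `DebugInfo` shape, its field names, and the `callWithDebug` helper are illustrative assumptions, not the app's actual code.

```typescript
// Hypothetical shape for the data shown in debug mode; field names are
// illustrative, not the app's actual schema.
interface DebugInfo {
  requestJson: unknown;    // request JSON sent to the backend
  rawResponse: string;     // raw LLM response before processing
  parsedResponse: unknown; // processed response after parsing
  endpoint: string;        // API endpoint that was called
  durationMs: number;      // request/response timing
}

// Wrap a backend call and collect debug data when debug mode is enabled.
async function callWithDebug(
  endpoint: string,
  body: unknown,
  debugEnabled: boolean,
): Promise<{ result: unknown; debug?: DebugInfo }> {
  const started = performance.now();
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const raw = await res.text();
  const parsed: unknown = JSON.parse(raw);
  if (!debugEnabled) return { result: parsed };
  return {
    result: parsed,
    debug: {
      requestJson: body,
      rawResponse: raw,
      parsedResponse: parsed,
      endpoint,
      durationMs: performance.now() - started,
    },
  };
}
```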

Debate Display Mode

Choose how the debate arena is displayed

Model Proxy

An intelligent proxy automatically selects the optimal model based on task complexity

Falls back to cheaper models if the primary model fails

Balanced Mode: GPT-4 (~2.5 s, $0.015/request)
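
The proxy's internal selection rules are not spelled out here; the sketch below illustrates the general pattern of "pick by complexity, fall back to cheaper models on failure". The complexity tiers, candidate model lists, and the `call` signature are assumptions for illustration only.

```typescript
// Illustrative selection-with-fallback; tiers and model lists are assumed.
type Complexity = "low" | "medium" | "high";

const CANDIDATES: Record<Complexity, string[]> = {
  low: ["gpt-4o-mini"],
  medium: ["gpt-4", "gpt-4o-mini"], // e.g. a "Balanced Mode" tier
  high: ["gpt-4", "gpt-4o", "gpt-4o-mini"],
};

// Try the preferred model first, then fall back to cheaper candidates
// if a call fails.
async function proxyCall(
  complexity: Complexity,
  prompt: string,
  call: (model: string, prompt: string) => Promise<string>,
): Promise<string> {
  let lastError: unknown;
  for (const model of CANDIDATES[complexity]) {
    try {
      return await call(model, prompt);
    } catch (err) {
      lastError = err; // this model failed; try the next, cheaper one
    }
  }
  throw lastError;
}
```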

API Keys

Provides access to multiple LLM providers
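
Since the section above mentions that chat history and analysis results live in browser storage, one plausible way to hold per-provider keys is a small map persisted the same way. The storage key, provider list, and function names below are assumptions, not TruthForge's actual settings schema.

```typescript
// Illustrative per-provider key store in browser storage.
type Provider = "openai" | "anthropic" | "google";

const STORAGE_KEY = "apiKeys"; // assumed localStorage key

function loadApiKeys(): Partial<Record<Provider, string>> {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Partial<Record<Provider, string>>) : {};
}

function saveApiKey(provider: Provider, key: string): void {
  const keys = loadApiKeys();
  keys[provider] = key;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(keys));
}
```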

Chat Interface

  • OpenAI-compatible streaming endpoint for chat; leave empty to use the TruthForge API (see the request sketch after this list)
  • Model used for interactive chat
  • Maximum width of the chat interface (600-3000 px; 0 for full width)
  • Remove saved analysis results and chat history from browser storage
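
Because the endpoint is OpenAI-compatible, a standard streaming chat-completions request should work against it. The sketch below assumes the endpoint exposes the usual `/v1/chat/completions` path and that each chunk arrives as whole `data:` lines; the URL, key, and model values are placeholders.

```typescript
// Minimal streaming request against an OpenAI-compatible chat endpoint.
async function streamChat(
  endpoint: string,
  apiKey: string,
  model: string,
  userMessage: string,
): Promise<string> {
  const res = await fetch(`${endpoint}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      stream: true, // server responds with server-sent events
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  if (!res.ok || !res.body) throw new Error(`chat request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each SSE line looks like `data: {...}`; collect the delta content.
    // For brevity this assumes chunks are not split mid-line.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      const payload = line.replace(/^data: /, "").trim();
      if (!payload || payload === "[DONE]") continue;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) text += delta;
    }
  }
  return text;
}
```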

Default Provider & Model

Used for all analysis steps unless overridden

Role-Specific Models

Workflow Step Models
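
How these overrides resolve is not spelled out above; the sketch below shows one plausible precedence (workflow-step override, then role-specific override, then the default provider and model). The settings shape, the role and step names, and the model identifiers are all illustrative assumptions.

```typescript
// Assumed settings shape and precedence: step override > role override > default.
interface ModelChoice {
  provider: string;
  model: string;
}

interface ModelSettings {
  defaultModel: ModelChoice;                // Default Provider & Model
  roleModels?: Record<string, ModelChoice>; // Role-Specific Models
  stepModels?: Record<string, ModelChoice>; // Workflow Step Models
}

function resolveModel(settings: ModelSettings, role?: string, step?: string): ModelChoice {
  const stepChoice = step ? settings.stepModels?.[step] : undefined;
  if (stepChoice) return stepChoice;
  const roleChoice = role ? settings.roleModels?.[role] : undefined;
  if (roleChoice) return roleChoice;
  return settings.defaultModel;
}

// Example: the default is used unless a role- or step-specific override exists.
const settings: ModelSettings = {
  defaultModel: { provider: "openai", model: "gpt-4" },
  roleModels: { critic: { provider: "anthropic", model: "claude-3-5-sonnet" } },
};
resolveModel(settings, "critic");     // -> the critic override
resolveModel(settings, "researcher"); // -> the default provider & model
```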

System Prompts