Show detailed request/response data for debugging API calls and LLM processing
When enabled, debug mode displays:
- Request JSON sent to backend
- Raw LLM response before processing
- Processed response after parsing
- API endpoint and timing information
Configure API keys, models, and system prompts
Choose how the debate arena is displayed
Intelligent proxy automatically selects optimal model based on task complexity
Falls back to cheaper models if primary model fails
Provides access to multiple LLM providers
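The fallback behavior described above can be sketched as a simple retry loop. This is an illustrative sketch only; the model names and the `call_model` callback are hypothetical, not the actual TruthForge proxy implementation.

```python
# Hypothetical sketch of proxy fallback: try the primary model,
# then progressively cheaper ones if a call raises an error.
PRIMARY = "gpt-4o"                               # assumed primary model
FALLBACKS = ["gpt-4o-mini", "gpt-3.5-turbo"]     # assumed cheaper fallbacks

def complete(prompt, call_model):
    """Return (model_used, response), falling back on failure."""
    last_error = None
    for model in [PRIMARY, *FALLBACKS]:
        try:
            return model, call_model(model, prompt)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next model
    raise RuntimeError("all models failed") from last_error

# Usage: simulate the primary model being unavailable.
def flaky(model, prompt):
    if model == PRIMARY:
        raise RuntimeError("primary model down")
    return f"{model} answered: {prompt}"

print(complete("hello", flaky)[0])  # prints "gpt-4o-mini"
```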
OpenAI-compatible streaming endpoint for chat
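OpenAI-compatible endpoints stream responses as server-sent events, one `data:` line per chunk. A minimal sketch of parsing such a stream follows; the chunk shape matches the OpenAI chat-completions delta format, and the sample lines are fabricated for illustration (no real endpoint is assumed).

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from OpenAI-style 'data:' stream lines."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Usage with a fabricated sample stream:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_chunks(sample)))  # prints "Hello"
```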
Leave empty to use TruthForge API
Model used for interactive chat
Maximum width of chat interface (600-3000px, 0 for full width)
Remove saved analysis results and chat history from browser storage
Used for all analysis steps unless overridden