Prompt Configuration
Learn how to fine-tune your ChatNexus chatbot's response behavior and performance through advanced prompt settings.

Temperature Control
The temperature parameter controls how creative or focused your chatbot's responses will be, ranging from 0 to 1. A lower temperature (like the default 0.25) produces more consistent, predictable responses ideal for customer support and factual queries. Higher temperatures introduce more variety and creativity in responses.
- Default: 0.25
- Range: 0-1
- Recommended uses:
  - 0.1-0.3: Customer support, technical documentation, factual responses
  - 0.4-0.7: General conversation, content generation
  - 0.7-1.0: Creative writing, brainstorming, unique content
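To build intuition for what this dial does under the hood, here is a minimal sketch of temperature-scaled sampling: lower temperatures sharpen the probability distribution over candidate outputs, higher temperatures flatten it. The `temperature_scale` function and example scores are illustrative, not part of the ChatNexus API.

```python
import math

def temperature_scale(logits, temperature):
    """Convert raw candidate scores to sampling probabilities.

    Dividing by a small temperature exaggerates score differences
    (more predictable picks); dividing by a large temperature
    shrinks them (more varied picks).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]            # illustrative candidate scores
low = temperature_scale(logits, 0.25)   # the default setting
high = temperature_scale(logits, 1.0)
# At 0.25 the top candidate dominates; at 1.0 the alternatives
# retain meaningfully more probability mass.
```

This is why low temperatures suit factual support answers (the strongest candidate wins almost every time) while higher temperatures produce more varied phrasing.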
Top K Configuration
Top K determines how many of the most relevant context chunks your chatbot considers when generating responses. This setting helps balance response accuracy with processing efficiency.
- Default: 10
- Impact: Higher values give the model more context to draw from but may slow response time
- Optimal range: 5-15 for most use cases
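Conceptually, Top K retrieval keeps only the k highest-scoring context chunks and discards the rest before the prompt is assembled. A minimal sketch, assuming chunks arrive as hypothetical `(text, relevance_score)` pairs (this helper is illustrative, not the ChatNexus API):

```python
def top_k_chunks(chunks, k=10):
    """Return the k most relevant context chunks.

    chunks: list of (text, relevance_score) pairs, where a higher
    score means a better match to the user's question.
    """
    return sorted(chunks, key=lambda c: c[1], reverse=True)[:k]

chunks = [
    ("pricing page excerpt", 0.91),
    ("company history",      0.12),
    ("refund policy",        0.87),
    ("shipping FAQ",         0.55),
]
best = top_k_chunks(chunks, k=2)
# → [("pricing page excerpt", 0.91), ("refund policy", 0.87)]
```

A larger k feeds more of these chunks into the prompt, which can improve accuracy on broad questions but adds tokens for the model to process on every request.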
Maximum Input Tokens
This setting defines the total token limit for each prompt, including conversation history and system instructions. Properly configured token limits help maintain performance while managing costs.
- Default: 5000
- Considerations:
  - Includes both user input and conversation history
  - Higher limits allow for more context but increase processing time
  - Recommended to adjust based on your specific use case and plan limits
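Staying under the limit typically means dropping the oldest conversation turns first, since the system instructions and the current user message must always fit. A rough sketch of that trimming logic, using whitespace word count as a stand-in for real tokenization (`trim_history` is illustrative, not the ChatNexus API):

```python
def trim_history(system_prompt, history, user_input, max_tokens=5000):
    """Drop the oldest turns until everything fits the token budget.

    Word count approximates token count here; a production system
    would use the model's actual tokenizer.
    """
    count = lambda text: len(text.split())
    # System instructions and the new user message are non-negotiable.
    budget = max_tokens - count(system_prompt) - count(user_input)
    kept, used = [], 0
    for turn in reversed(history):       # newest turns first
        turn_tokens = count(turn)
        if used + turn_tokens > budget:
            break                        # oldest remaining turns are dropped
        kept.append(turn)
        used += turn_tokens
    return list(reversed(kept))          # restore chronological order

history = [
    "hello there",
    "how can I help you today",
    "I need a refund please",
]
kept = trim_history("You are a support bot", history, "order 123",
                    max_tokens=16)
# Only the most recent turn fits under this tiny budget.
```

This is why higher limits preserve more conversational context at the cost of larger, slower, and more expensive prompts.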
Best Practices
- Start with default values and adjust based on user feedback
- Monitor response quality when modifying temperature
- Balance Top K with response time requirements
- Consider your subscription plan limits when setting maximum tokens
Need help optimizing these settings for your specific use case? Our support team is available to provide personalized guidance and recommendations.