Installation Issues
Module Not Found Error
Critical
Getting "Cannot find module" errors when trying to import the SDK.
Solution
Make sure you've installed all dependencies and are using the correct import path.
# Install dependencies
npm install
// Use the correct import path
const { Agent } = require('./src/index.js');
// or, if the SDK is installed as a package
const { Agent } = require('ai-stream-sdk');
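If the error persists, you can check where Node.js is resolving the module from. This is a minimal sketch using Node's built-in require.resolve; the package name is taken from the import above.
// check-module.js: print where Node resolves the SDK from
try {
  console.log('Resolved to:', require.resolve('ai-stream-sdk'));
} catch (err) {
  console.error('Module not found on the require path:', err.message);
}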
Node.js Version Issues
Medium
Getting errors related to Node.js version compatibility.
Solution
The AI Stream SDK requires Node.js 18.0.0 or higher. Update your Node.js version.
# Check Node.js version
node --version
# Update Node.js (using nvm)
nvm install 18
nvm use 18
# Or download from https://nodejs.org
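You can also check the running Node.js version from code; this small sketch only uses the built-in process object.
// check-node.js: warn if the running Node.js is older than 18
const major = parseInt(process.versions.node.split('.')[0], 10);
if (major < 18) {
  console.error(`Node.js ${process.versions.node} detected; the SDK requires 18.0.0 or higher.`);
  process.exit(1);
}
console.log(`Node.js ${process.versions.node} is supported.`);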
API Key Issues
Missing API Keys
Critical
Getting errors about missing API keys for TTS or LLM providers.
Solution
Set up your API keys in the .env file or use mock providers for testing.
# Create .env file
cp env.example .env
# Add your API keys
echo "ELEVENLABS_API_KEY=sk-your-key-here" >> .env
echo "OPENAI_API_KEY=sk-your-key-here" >> .env
# Or use mock providers for testing
echo "TTS_PROVIDER=mock" >> .env
echo "LLM_PROVIDER=mock" >> .env
Invalid API Keys
High
API keys are set, but you're getting authentication errors.
Solution
Verify your API keys are correct and have the necessary permissions.
# Test API key
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
https://api.openai.com/v1/models
# Check key format
echo $ELEVENLABS_API_KEY | head -c 10
# Should start with 'sk-'
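The same check can be run from Node.js (version 18+ ships a global fetch). This sketch calls the OpenAI models endpoint used in the curl example above; a valid key returns HTTP 200, an invalid one 401.
// test-openai-key.js
fetch('https://api.openai.com/v1/models', {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` }
})
  .then(res => console.log('OpenAI responded with status', res.status))
  .catch(console.error);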
TTS Issues
TTS Generation Fails
High
Text-to-speech generation is failing or producing errors.
Solution
Check your TTS provider configuration and API limits.
# Test TTS providers
npm run test:tts
# Check API limits
# ElevenLabs: Check usage in dashboard
# OpenAI: Check usage in platform
# Fallback to mock TTS
export TTS_PROVIDER=mock
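To see the underlying provider error instead of a generic failure, wrap the speak call in a try/catch. This is a sketch; the Agent constructor and speak method mirror the examples elsewhere in this guide, and the output path is illustrative.
// debug-tts.js: surface the underlying TTS provider error
const { Agent } = require('./src/index.js');

(async () => {
  const agent = new Agent({ id: 'tts-debug' });
  try {
    const audioPath = await agent.speak('TTS smoke test', '/tmp/tts-debug.wav');
    console.log('TTS succeeded, audio written to', audioPath);
  } catch (err) {
    // Often a quota, auth, or network error from the provider
    console.error('TTS failed:', err.message);
  }
})();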
Audio File Not Generated
Medium
TTS completes, but no audio file is created or the file is empty.
Solution
Check file permissions and disk space.
# Check disk space
df -h
# Check file permissions
ls -la /tmp/
// Test with a different output path (run inside an async function)
const audioPath = await agent.speak('test', '/tmp/test-audio.wav');
console.log('Audio path:', audioPath);
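Continuing from the snippet above, you can confirm the file exists and is not empty with Node's built-in fs module:
// Verify the generated file exists and has content
const fs = require('fs');
const stats = fs.statSync(audioPath); // throws if the file was never created
console.log(`Audio file size: ${stats.size} bytes`);
if (stats.size === 0) {
  console.warn('Audio file is empty; check the TTS provider logs and API limits.');
}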
LLM Issues
LLM Generation Fails
High
AI thinking/response generation is failing.
Solution
Check your LLM provider configuration and API limits.
# Test LLM providers
node -e "
const { Agent } = require('./src/index.js');
const agent = new Agent({id: 'test'});
agent.think('Hello').then(console.log).catch(console.error);
"
# Check API limits
# OpenAI: Check usage in platform
# Claude: Check usage in console
# Fallback to mock LLM
export LLM_PROVIDER=mock
Poor AI Responses
Medium
AI responses are poor quality or not relevant.
Solution
Improve your prompts and adjust generation parameters.
// Better prompts
const response = await agent.think(
  'You are a helpful AI assistant. Answer this question: ' + userQuestion,
  { maxTokens: 200, temperature: 0.7 }
);
// Adjust parameters
const response = await agent.think(prompt, {
  maxTokens: 150,   // Shorter responses
  temperature: 0.8, // More creative
  stop: ['\n\n']    // Stop sequences
});
Streaming Issues
Stream Connection Fails
Critical
Cannot connect to RTMP streaming endpoint.
Solution
Check your RTMP URL and ensure ffmpeg is installed.
# Check ffmpeg installation
ffmpeg -version
# Test RTMP URL
ffmpeg -f lavfi -i testsrc=duration=10:size=1280x720:rate=30 \
-f lavfi -i sine=frequency=1000:duration=10 \
-c:v libx264 -c:a aac -f flv rtmp://your-url
# Check RTMP URL format
echo $RTMP_URL
# Should be: rtmp://live.twitch.tv/app/YOUR_KEY
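You can also check for ffmpeg from Node.js before starting a stream; this sketch uses the built-in child_process module.
// check-ffmpeg.js: fail early if ffmpeg is not on the PATH
const { execSync } = require('child_process');

try {
  const firstLine = execSync('ffmpeg -version').toString().split('\n')[0];
  console.log('Found', firstLine);
} catch {
  console.error('ffmpeg is not installed or not on the PATH.');
}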
Stream Quality Issues
Medium
Stream has poor quality, buffering, or dropped frames.
Solution
Adjust streaming parameters for better quality.
// Adjust streaming parameters
const agent = new Agent({
  id: 'quality-bot',
  defaults: {
    resolution: '1280x720', // Lower resolution
    framerate: 30           // Lower framerate
  }
});
// Use faster ffmpeg preset
// This is handled internally by the SDK
Debug Mode
Enable Debug Logging
Low
Get detailed logging information to debug issues.
Solution
Enable debug mode to see detailed logs.
# Enable debug mode
export DEBUG=*
node your-script.js
# Or enable specific debug categories
export DEBUG=ai-stream-sdk:*
node your-script.js
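If you prefer to enable it from code, the same DEBUG variable can be set before the SDK is loaded. This assumes the SDK reads the DEBUG namespaces shown above (the debug-package convention).
// Enable debug output programmatically, before requiring the SDK
process.env.DEBUG = 'ai-stream-sdk:*';
const { Agent } = require('./src/index.js');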
Test Individual Components
Low
Test each component separately to isolate issues.
Solution
Use the built-in test scripts to verify each component.
# Test TTS only
npm run test:tts
# Test basic functionality
npm run test:demo
# Test streaming
npm run start:example
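You can also isolate the pipeline stages in one script: run think first, then speak, so a failure points at a single component. Both methods are used elsewhere in this guide; the output path is illustrative.
// isolate.js: exercise the LLM and TTS stages separately
const { Agent } = require('./src/index.js');

(async () => {
  const agent = new Agent({ id: 'isolate-test' });

  const reply = await agent.think('Say hello in one sentence.');
  console.log('LLM stage OK:', reply);

  const audioPath = await agent.speak(reply, '/tmp/isolate-test.wav');
  console.log('TTS stage OK:', audioPath);
})().catch(console.error);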
Performance Issues
Slow Performance
Medium
The SDK is running slowly or taking too long to respond.
Solution
Optimize your configuration and use faster providers.
# Use a faster TTS provider
export TTS_PROVIDER=openai  # typically faster than ElevenLabs
# Use a faster LLM model
export OPENAI_MODEL=gpt-3.5-turbo  # faster than GPT-4
// Reduce generation parameters
const response = await agent.think(prompt, {
  maxTokens: 100,  // Shorter responses
  temperature: 0.7 // Lower temperature
});
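To see where the time actually goes, time each stage separately; a sketch using console.time:
// time-stages.js: measure how long each stage takes
const { Agent } = require('./src/index.js');

(async () => {
  const agent = new Agent({ id: 'perf-test' });

  console.time('think');
  const reply = await agent.think('Summarise this SDK in one sentence.', { maxTokens: 100 });
  console.timeEnd('think');

  console.time('speak');
  await agent.speak(reply);
  console.timeEnd('speak');
})().catch(console.error);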
Memory Issues
Medium
High memory usage or memory leaks.
Solution
Monitor memory usage and clean up resources.
// Monitor memory usage
console.log('Memory usage:', process.memoryUsage());
// Clean up audio files (inside an async function)
const fs = require('fs');
const audioPath = await agent.speak('test');
// Use the file, then delete it
fs.unlinkSync(audioPath);
// Stop streams properly
agent.stopStream();
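For long-running streams, logging heap usage at an interval makes slow leaks visible; a minimal sketch using process.memoryUsage:
// Log heap usage every 30 seconds while a stream is running
const interval = setInterval(() => {
  const { heapUsed, rss } = process.memoryUsage();
  console.log(`heapUsed=${(heapUsed / 1048576).toFixed(1)}MB rss=${(rss / 1048576).toFixed(1)}MB`);
}, 30000);

// Clear the timer when the stream stops
// clearInterval(interval);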