A comprehensive AI-powered platform for football statistics analysis and real-time insights, built with LangChain v3, multiple LLMs, and advanced agent orchestration.
- OpenAI GPT-4 Turbo: Primary model for complex analysis and reasoning
- Groq Mixtral-8x7b: Used for real-time processing and initial query routing
- Model Selection Logic: Automatic selection based on task complexity and requirements
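A minimal sketch of what that routing could look like with LangChain's JS packages; the model ids, temperatures, and the `selectModel` helper are illustrative assumptions, not the project's actual code:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatGroq } from "@langchain/groq";

// Illustrative routing: latency-sensitive work goes to Groq,
// reasoning-heavy analysis goes to GPT-4 Turbo.
const gpt4Turbo = new ChatOpenAI({ model: "gpt-4-turbo", temperature: 0.2 });
const mixtral = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });

type Task = "routing" | "realtime" | "analysis";

function selectModel(task: Task) {
  return task === "analysis" ? gpt4Turbo : mixtral;
}
```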
- Orchestrates the entire query processing pipeline
- Manages agent delegation and task routing
- Handles fallback scenarios and error recovery
- Maintains processing state and debugging information
- Analysis Agent: Historical data analysis and statistical comparisons
- Realtime Agent: Live scores and current match statistics
- Enhancement Agent: Query refinement and context enrichment
- Security Agent: Query validation and scope verification
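The delegation flow across these agents might be sketched as follows; the `Agent` contract and the keyword-based routing heuristic are assumptions for illustration, not the actual supervisor implementation:

```typescript
// Hypothetical agent contract mirroring the roles listed above.
interface Agent {
  run(query: string): Promise<string>;
}

type Agents = Record<"security" | "enhancement" | "analysis" | "realtime", Agent>;

async function supervise(query: string, agents: Agents): Promise<string> {
  // 1. Validate scope before doing any work.
  if ((await agents.security.run(query)) !== "ALLOW") {
    throw new Error("Query rejected: out of scope");
  }
  // 2. Enrich the query with context.
  const enriched = await agents.enhancement.run(query);
  // 3. Route: live-score phrasing goes to the realtime agent, else analysis.
  const target = /\b(live|score|now|today)\b/i.test(enriched) ? "realtime" : "analysis";
  try {
    return await agents[target].run(enriched);
  } catch {
    // Fallback / error recovery: degrade to the analysis agent.
    return agents.analysis.run(enriched);
  }
}
```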
- Vector store integration for semantic search
- Redis-based document storage
- Dynamic context retrieval based on query relevance
- Automatic document embedding and indexing
- Support for multiple document types (team stats, player stats, tournament data)
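A condensed sketch of that retrieval path using the `@langchain/redis` vector store; the index name and document content are placeholders:

```typescript
import { createClient } from "redis";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RedisVectorStore } from "@langchain/redis";

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Documents are embedded on insert and retrieved by semantic similarity.
const store = new RedisVectorStore(new OpenAIEmbeddings(), {
  redisClient: client,
  indexName: "football-docs", // illustrative index name
});

await store.addDocuments([
  new Document({ pageContent: "…team stats…", metadata: { type: "team-stats" } }),
]);

// Pull the three most relevant documents for the query.
const context = await store.similaritySearch("How did Manchester United perform?", 3);
```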
- Redis-based Chat History: Persistent conversation storage
- Vector Store Memory: Efficient similarity search
- Context Window Management: Handles long-running conversations
- Session Management: User-specific conversation tracking
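For the Redis-backed history, `@langchain/redis` ships a ready-made class; the session id and TTL below are illustrative:

```typescript
import { RedisChatMessageHistory } from "@langchain/redis";

// One history object per user session; messages persist across restarts.
const history = new RedisChatMessageHistory({
  sessionId: "user-123", // illustrative session key
  sessionTTL: 3600,      // expire idle sessions after an hour
  config: { url: process.env.REDIS_URL },
});

await history.addUserMessage("Who was Liverpool's top scorer last season?");
const messages = await history.getMessages(); // replayed into the context window
```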
- Football Data Tool: Historical statistics and records
- Live Scores Tool: Real-time match data
- Stats Calculator Tool: Advanced statistical analysis
- Timeframe Tool: Temporal data processing
- Dynamic Tool Loading: Automatic tool discovery and registration
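A tool in this style can be declared with LangChain's `DynamicStructuredTool`; the schema and the `fetchTeamStats` helper below are hypothetical stand-ins, not the project's real data client:

```typescript
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

// Hypothetical data-access helper standing in for the real football data client.
declare function fetchTeamStats(team: string, season: string): Promise<unknown>;

export const footballTool = new DynamicStructuredTool({
  name: "football_data",
  description: "Look up historical statistics and records for a team and season.",
  schema: z.object({
    team: z.string().describe("Team name, e.g. 'Arsenal'"),
    season: z.string().describe("Season, e.g. '2023/24'"),
  }),
  func: async ({ team, season }) => JSON.stringify(await fetchTeamStats(team, season)),
});
```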
- Interactive command-line interface
- Real-time processing feedback
- Step-by-step execution visibility
- Debug information display
- Color-coded output for better readability
- Express-based HTTP server
- JSON request/response format
- Health check endpoints
- Error handling middleware
- Rate limiting and security features
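A stripped-down sketch of that HTTP surface; `handleQuery` is a stand-in for the supervisor pipeline, and rate limiting is omitted for brevity:

```typescript
import express from "express";

// Stand-in for the supervisor pipeline described above.
declare function handleQuery(query: string): Promise<object>;

const app = express();
app.use(express.json());

app.post("/query", async (req, res, next) => {
  try {
    res.json(await handleQuery(req.body.query));
  } catch (err) {
    next(err); // hand off to the error middleware
  }
});

app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

// The four-argument signature marks this as Express error-handling middleware.
app.use((err: Error, _req: express.Request, res: express.Response, _next: express.NextFunction) => {
  res.status(500).json({ error: err.message });
});

app.listen(Number(process.env.PORT ?? 3000));
```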
- Security Validation
- Context Retrieval
- Query Enhancement
- Agent Selection
- Data Processing
- Response Generation
- JSON-formatted responses
- Confidence scoring
- Source attribution
- Processing metadata
- Error tracing
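One plausible shape for that response envelope, expressed as a TypeScript interface; the field names are assumptions, not the project's actual schema:

```typescript
interface QueryResponse {
  answer: string;
  confidence: number;   // 0–1 confidence score
  sources: string[];    // attribution for the documents used
  metadata: {
    model: string;      // which LLM produced the answer
    agent: string;      // which agent handled the query
    durationMs: number; // processing time
  };
}
```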
- Graceful degradation
- Fallback mechanisms
- Detailed error reporting
- Recovery strategies
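The fallback pattern can be as simple as a wrapper like this sketch:

```typescript
// Graceful degradation: try the primary path, log the failure, then fall back,
// e.g. withFallback(() => gpt4Turbo.invoke(q), () => mixtral.invoke(q)).
async function withFallback<T>(primary: () => Promise<T>, fallback: () => Promise<T>): Promise<T> {
  try {
    return await primary();
  } catch (err) {
    console.error("Primary path failed; falling back:", err);
    return fallback();
  }
}
```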
```
src/agents/
├── base.agent.ts        # Base agent implementation
├── supervisor.agent.ts  # Main orchestration agent
├── internal/            # Internal processing agents
└── user/                # User-facing agents
```

```
src/tools/
├── index.ts             # Tool registry
├── football.tool.ts     # Football data tool
├── live.tool.ts         # Real-time data tools
└── stats.tool.ts        # Statistical analysis tools
```

```
src/services/
├── rag.service.ts       # RAG implementation
├── redis.service.ts     # Memory management
└── cache.service.ts     # Response caching
```
1. Query Input (CLI/API)
2. Security Validation
3. Context Retrieval (RAG)
4. Query Enhancement
5. Agent Selection
6. Tool Execution
7. Response Generation
8. Memory Update
- Node.js >= 18
- Redis server
- OpenAI API key
- Groq API key
```bash
# Clone the repository
git clone <repository-url>

# Install dependencies
npm install

# Configure environment variables
cp .env.example .env
# Edit .env with your API keys

# Start Redis
docker-compose up -d

# Run the application
npm run start
```
```bash
# Start the CLI
npm run cli
```

Example queries:

- "How did Manchester United perform in 2023?"
- "Who was Liverpool's top scorer last season?"
- "Compare Arsenal and Chelsea's recent performance"
```
# Query endpoint
POST /query
{
  "query": "Tell me about Manchester United's performance"
}

# Health check
GET /health
```
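From a client, the same call might look like this (the localhost port is an assumption based on the default dev setup):

```typescript
// Requires Node.js 18+ for the global fetch API.
const res = await fetch("http://localhost:3000/query", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "Tell me about Manchester United's performance" }),
});
console.log(await res.json()); // answer, confidence, sources, metadata
```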
- `OPENAI_API_KEY`: OpenAI API key
- `GROQ_API_KEY`: Groq API key
- `REDIS_URL`: Redis connection string
- `PORT`: API server port
- Adjust temperature and other parameters in `config/config.ts`
- Configure model selection logic in `supervisor.agent.ts`
- Each agent has specific responsibilities and capabilities
- Agents can be extended or modified for custom use cases
- New agents can be added by implementing the base agent interface
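A hypothetical sketch of what implementing that interface might look like; the contract's shape is assumed here, not copied from `base.agent.ts`:

```typescript
// Assumed contract; see src/agents/base.agent.ts for the real one.
interface BaseAgent {
  name: string;
  canHandle(query: string): boolean;
  run(query: string): Promise<string>;
}

class TransferNewsAgent implements BaseAgent {
  name = "transfer-news";

  canHandle(query: string): boolean {
    return /transfer|signing|rumou?r/i.test(query);
  }

  async run(query: string): Promise<string> {
    // ...fetch and summarize transfer data here...
    return `Transfer summary for: ${query}`;
  }
}
```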
- Tools provide specific functionalities
- New tools can be added by implementing the tool interface
- Tools are automatically discovered and registered
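Registration can then be a one-line export from the registry; this sketch mirrors the role of `src/tools/index.ts`, though the exact shape is an assumption:

```typescript
// src/tools/index.ts (sketch): tools exported here are bound to agents at startup.
import type { StructuredToolInterface } from "@langchain/core/tools";
import { footballTool } from "./football.tool";

export const tools: StructuredToolInterface[] = [footballTool];
```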
- RESTful endpoints for query processing
- JSON request/response format
- Error codes and handling
- Fork the repository
- Create a feature branch
- Submit a pull request
MIT License
- LangChain team for the excellent framework
- OpenAI and Groq for their LLM APIs
- Redis for memory management capabilities