🤖
Groq
✓ Official · by groq-official
About
Ultra-fast LLM inference using Groq's LPU hardware. Access Llama 4, Mixtral, and other models at speeds up to 500 tokens/second via MCP.
Frequently Asked Questions
What is the Groq MCP server?
Ultra-fast LLM inference using Groq's LPU hardware. Access Llama 4, Mixtral, and other models at speeds up to 500 tokens/second via MCP.
How do I install Groq?
The server is distributed via npm; visit the GitHub repository for installation instructions.
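As a rough sketch of what installation usually looks like: npm-distributed MCP servers are typically registered in an MCP client's configuration (for example, Claude Desktop's `claude_desktop_config.json`) with an `npx` launch entry. The package name `groq-mcp-server` and the `GROQ_API_KEY` variable below are assumptions for illustration, not confirmed by this listing; check the GitHub repository for the actual values.

```json
{
  "mcpServers": {
    "groq": {
      "command": "npx",
      "args": ["-y", "groq-mcp-server"],
      "env": {
        "GROQ_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

After editing the config, restart the client; the server's tools should then appear in the client's tool list.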
What AI clients work with Groq?
Quick Info
- Install Type: npm
- Author: groq-official
- Categories: 1
- Integrations: 4
Related Servers
🧠✓
Memory
Knowledge graph-based persistent memory system. Store and retrieve contextual information.
🤖✓
Sequential Thinking
Dynamic and reflective problem-solving through thought sequences.
🔍
Exa
Search Engine made for AIs. Neural search with understanding of content meaning.
🗄️
Milvus
Search, Query and interact with data in your Milvus Vector Database.
🗄️
Chroma
Embeddings, vector search, document storage, and full-text search with the open-source AI application database.