Groq

✓ Official

by groq-official

About

Ultra-fast LLM inference using Groq's LPU hardware. Access Llama 4, Mixtral, and other models at speeds up to 500 tokens/second via MCP.

Frequently Asked Questions

What is the Groq MCP server?
The Groq MCP server provides ultra-fast LLM inference on Groq's LPU hardware, exposing Llama 4, Mixtral, and other models over MCP at speeds up to 500 tokens/second.
How do I install Groq?
Visit the GitHub repository for installation instructions.
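Once installed, clients typically reach Groq through its OpenAI-compatible REST API. The sketch below builds a minimal chat-completion request body; the model name and the assumption of an OpenAI-style schema are illustrative, not taken from this page, so check the repository's documentation for the exact details.

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body.

    Groq's API is widely documented as OpenAI-compatible, so the
    payload mirrors the familiar `model` + `messages` shape.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Hypothetical model identifier used only for illustration.
body = build_chat_request("llama-3.3-70b-versatile", "Hello, Groq!")
print(json.dumps(body, indent=2))
```

In practice you would POST this body to the chat-completions endpoint with your Groq API key in the `Authorization` header, or let the MCP server handle that transport for you.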
What AI clients work with Groq?