Authored by tashfeenahmed
Self-hosted, OpenAI-compatible proxy that aggregates the free tiers of fourteen LLM providers (Google, Groq, Cerebras, SambaNova, NVIDIA, Mistral, OpenRouter, GitHub Models, Hugging Face, Cohere, Cloudflare, Zhipu, Moonshot, MiniMax) behind a single `/v1/chat/completions` endpoint.

- **Server:** Express + SQLite; per-provider adapters with streaming and non-streaming support; automatic failover on 429/5xx responses; per-key RPM/RPD/TPM/TPD tracking; sticky sessions for multi-turn conversations; AES-256-GCM encrypted key storage; unified bearer-token auth; periodic health checks.
- **Client:** React + Vite + shadcn/ui admin dashboard: key management, fallback chain (drag to reorder, with a color-coded per-provider monthly token budget), playground, and analytics with per-provider breakdowns.
- **Tooling:** GitHub Actions CI (server tests + client build), MIT license, README with a provider-by-provider ToS review.

For personal experimentation, not production.
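The failover behavior described above can be sketched as a loop over the configured fallback chain: try each provider in order, and move to the next only when the response status is retryable (429 or any 5xx). This is a minimal illustrative sketch, not the project's actual code; `ProviderCall` and `completeWithFailover` are hypothetical names.

```typescript
// Hypothetical sketch of failover over a provider fallback chain.
// Each entry in the chain is a function that performs one provider request
// and resolves to a status code plus an optional response body.
type ProviderCall = () => Promise<{ status: number; body?: string }>;

// 429 (rate limited) and 5xx (server error) trigger fallback to the next provider.
const isRetryable = (status: number) => status === 429 || status >= 500;

async function completeWithFailover(chain: ProviderCall[]): Promise<string> {
  for (const call of chain) {
    const res = await call();
    if (res.status === 200) return res.body ?? "";
    if (!isRetryable(res.status)) {
      // e.g. 401/400: retrying another provider won't help, so surface it
      throw new Error(`non-retryable status ${res.status}`);
    }
    // 429/5xx: fall through and try the next provider in the chain
  }
  throw new Error("all providers in the fallback chain are exhausted");
}

// Example: the first provider is rate-limited, the second succeeds.
completeWithFailover([
  async () => ({ status: 429 }),
  async () => ({ status: 200, body: "hello" }),
]).then((text) => console.log(text)); // prints "hello"
```

A real implementation would also consult the per-key RPM/RPD/TPM/TPD counters before dispatching, skipping providers whose tracked budget is already spent rather than burning a request to discover the 429.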
Latest commit: 04e15037
Repository contents:

- .github/workflows
- client
- repo-assets
- server
- shared
- .env.example
- .gitignore
- LICENSE
- README.md
- free-ai-apis.pdf
- free-ai-apis.typ
- package-lock.json
- package.json