Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
## What is CL4R1T4S?

**CL4R1T4S** (a stylized spelling of "claritas," Latin for "clarity") is an open-source repository collecting extracted and leaked system prompts from major AI products and coding assistants. With 24,400 stars on GitHub and an AGPL-3.0 license, it has become the go-to reference for AI transparency researchers, red teamers, and developers who want to understand what instructions power the AI tools they use daily.

## What's Inside

The repository organizes system prompts by company and product into separate folders:

| Company | Products Covered |
|---------|------------------|
| Anthropic | Claude (various deployments) |
| OpenAI | ChatGPT, Codex |
| Google | Gemini |
| xAI | Grok |
| Perplexity | Perplexity AI |
| Cursor | Cursor IDE AI |
| Windsurf | Windsurf IDE AI |
| Devin | Devin autonomous agent |
| Replit | Replit Ghostwriter |
| Vercel | v0 AI code generator |
| Mistral | Mistral chat interfaces |
| Brave | Brave Leo AI |

## The Transparency Argument

The project's premise is that AI system prompts function as invisible governance layers: they determine how models respond to users, what topics they avoid, and how they represent their own capabilities. CL4R1T4S argues that **"in order to trust the output, one must understand the input."**

For security researchers, having reference system prompts enables:

- **Prompt injection testing**: understanding baseline instructions helps identify injection attack surfaces
- **Behavior consistency auditing**: comparing intended vs. observed behavior across AI deployments
- **Red teaming**: structured adversarial testing informed by actual system constraints

## Community and Research Value

Beyond red teaming, the repository serves AI researchers comparing how different companies approach safety guardrails, persona maintenance, and tool-use instructions. Developers building with AI APIs use it to benchmark their own system prompts against industry practices.
## Why It Matters

As AI systems take on more autonomous roles (writing code, browsing the web, executing tasks), understanding the instructions they operate under becomes a matter of public interest. CL4R1T4S occupies a contested but important space: making the invisible visible, and treating AI system design as something open to scrutiny rather than a permanent trade secret.