Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
Hugging Face Skills is an open-source collection of standardized AI/ML task definitions designed for seamless integration with all major coding agents. Released under Apache-2.0, the project provides eight self-contained skills that turn coding agents like Claude Code, OpenAI Codex, Google Gemini CLI, and Cursor into capable ML engineering assistants.

## Standardized Skill Format

Each skill is a self-contained folder containing a SKILL.md file with YAML frontmatter (name and description) followed by detailed instructions for the coding agent. Helper scripts, configuration templates, and resource files are bundled alongside. This standardized format ensures consistent behavior across different agent platforms without requiring custom integration work.

## Eight Production-Ready Skills

The repository ships with eight skills covering the full ML lifecycle: Hub CLI operations (hugging-face-cli), dataset creation and management (hugging-face-datasets), model evaluation (hugging-face-evaluation), cloud compute jobs (hugging-face-jobs), model fine-tuning via TRL (hugging-face-model-trainer), research paper publishing (hugging-face-paper-publisher), reusable API tool building (hugging-face-tool-builder), and experiment tracking with visualization (hugging-face-trackio).

## Cross-Platform Agent Compatibility

Skills work with Claude Code via plugin manifests, Cursor through native skill support, Gemini CLI via extensions, and OpenAI Codex through an AGENTS.md fallback. The project handles platform differences internally, so skill authors write once and deploy everywhere.

## Model Training Integration

The model-trainer skill enables fine-tuning of language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. It supports SFT (Supervised Fine-Tuning), DPO (Direct Preference Optimization), GRPO, and reward modeling training methods, plus GGUF conversion for local deployment after training.
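To illustrate the SKILL.md layout described above, here is a minimal sketch of what such a file could look like. The skill name, description, and instructions below are hypothetical examples, not contents of the actual repository:

```markdown
---
name: my-custom-skill
description: Hypothetical skill that publishes evaluation results to the Hub.
---

# Instructions

When the user asks to publish evaluation results:

1. Collect the metrics file produced by the evaluation run.
2. Run the bundled helper script (e.g. `scripts/upload.py`) to push it to the Hub.
3. Report the resulting dataset URL back to the user.
```

The YAML frontmatter gives the agent a short, scannable summary for skill discovery, while the markdown body carries the detailed instructions the agent loads once the skill is invoked.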
## Natural Language Invocation

Users invoke skills through natural-language instructions to their coding agent. For example, saying "Use the HF LLM trainer skill to estimate GPU memory for Qwen-2.5-7B" causes the agent to automatically load the corresponding skill instructions and execute the task. No API calls or manual configuration are required.

## Community-Driven Ecosystem

With 13 contributors and 141 commits on the main branch, the project is actively maintained by the Hugging Face team. The standardized skill format is designed to encourage community contributions, with clear documentation on how to author new skills for additional ML workflows.
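To make the GPU-memory example concrete, here is a back-of-the-envelope estimate of full fine-tuning memory for a 7B-parameter model. This is our own illustrative arithmetic, not code from the skill; the function name and the bf16-weights/AdamW assumptions are ours:

```python
def estimate_training_memory_gb(params_billions: float,
                                weight_bytes: int = 2,     # bf16 weights
                                grad_bytes: int = 2,       # bf16 gradients
                                optimizer_bytes: int = 8   # AdamW fp32 moments (2 x 4 bytes)
                                ) -> float:
    """Rough lower bound for full fine-tuning: weights + gradients +
    optimizer states, ignoring activations and framework overhead."""
    per_param = weight_bytes + grad_bytes + optimizer_bytes
    return params_billions * 1e9 * per_param / 1e9

# A 7B model such as Qwen-2.5-7B needs roughly this much memory
# before activations are even counted:
print(f"{estimate_training_memory_gb(7):.0f} GB")  # prints "84 GB"
```

Numbers like this explain why the model-trainer skill targets Hugging Face Jobs cloud GPUs rather than local hardware, and why parameter-efficient methods (which cut the gradient and optimizer terms) are popular for fine-tuning.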