promptfoo-evaluation by daymade

Configures and runs LLM evaluations using the Promptfoo framework. Use it when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing custom Python assertions, implementing llm-rubric for LLM-as-judge grading, or managing few-shot examples in prompts. Triggers on keywords such as "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
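For orientation, a minimal promptfooconfig.yaml of the kind this skill helps author might look like the sketch below. The prompt text, provider ID, and rubric wording are illustrative assumptions, not taken from this listing:

```yaml
# promptfooconfig.yaml -- illustrative sketch, not from this repository.
description: "Summarization quality eval"

prompts:
  - "Summarize the following text in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini   # hypothetical provider choice

tests:
  - vars:
      text: "Promptfoo lets you test prompts against expected outputs."
    assert:
      # LLM-as-judge check via a model-graded rubric
      - type: llm-rubric
        value: "The summary is a single sentence and factually faithful."
      # Deterministic string check
      - type: contains
        value: "Promptfoo"
```

Running `promptfoo eval` in the same directory would execute each test case against each provider and grade the output with both assertions.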

Content & Writing
190 Stars
15 Forks
Updated Jan 15, 2026, 03:36 PM

Why Use This

This skill provides specialized capabilities for building and running Promptfoo-based LLM evaluations.

Use Cases

  • Setting up prompt testing and evaluation configs (promptfooconfig.yaml)
  • Writing custom Python assertions and llm-rubric (LLM-as-judge) graders
  • Managing few-shot examples in prompts and comparing model outputs
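The Python custom assertions mentioned above can be sketched as follows. Promptfoo loads a Python file that defines `get_assert(output, context)`; returning a float in [0, 1] is treated as a score and a bool as pass/fail. The keyword list below is an illustrative assumption, not part of this skill:

```python
# custom_assert.py -- sketch of a Promptfoo Python assertion.
# Promptfoo calls get_assert(output, context) for each test case;
# the required-term list here is hypothetical, chosen for illustration.

def get_assert(output: str, context) -> float:
    """Score the model output by coverage of required terms."""
    required = ["evaluation", "rubric"]  # hypothetical keywords
    hits = sum(1 for term in required if term.lower() in output.lower())
    return hits / len(required)
```

The file would then be referenced from a test's `assert` list with `type: python` and `value: file://custom_assert.py`.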

Install Guide

2 steps

  1. Download Ananke

     Skip this step if Ananke is already installed.

  2. Install inside Ananke

     Click Install Skill, paste the link below, then press Install.

     https://github.com/daymade/claude-code-skills/tree/main/promptfoo-evaluation

Skill Snapshot

Automatic scan of skill assets; informational only.

Valid SKILL.md (checked against the SKILL.md specification)

Source & Community

Repository claude-code-skills
Skill Version main

Skill Stats

SKILL.md 393 Lines
Total Files 1
Total Size 0 B
License NOASSERTION