advanced-evaluation by muratcankoylan

This skill should be used when the user asks to "implement LLM-as-judge", "compare model outputs", "create evaluation rubrics", "mitigate evaluation bias", or mentions direct scoring, pairwise comparison, position bias, evaluation pipelines, or automated quality assessment.

Coding · 6.4K Stars · 517 Forks · Updated Jan 12, 2026, 02:03 AM

Why Use This

This skill provides structured workflows for evaluating LLM outputs: LLM-as-judge scoring, pairwise comparison of model outputs, rubric design, and mitigation of evaluation biases such as position bias.

Use Cases

  • Implementing LLM-as-judge evaluation with direct scoring or pairwise comparison
  • Creating evaluation rubrics and automated quality-assessment pipelines
  • Mitigating evaluation biases such as position bias when comparing model outputs (a minimal sketch follows this list)
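
As context for the pairwise-comparison and position-bias terms above, here is a minimal, hypothetical Python sketch of a pairwise LLM-as-judge that swaps answer order to detect position bias. It is not the skill's actual implementation: the `judge` callable, the prompt wording, and the tie-handling rule are placeholder assumptions you would replace with your own model call and policy.

```python
# Minimal sketch: pairwise LLM-as-judge with position-bias mitigation by
# order swapping. The `judge` callable is a placeholder for whatever model
# call you wire in; it is assumed to return "A" or "B".
from typing import Callable, Literal

Verdict = Literal["A", "B", "tie"]

JUDGE_PROMPT = """You are comparing two answers to the same question.
Question: {question}

Answer A:
{answer_a}

Answer B:
{answer_b}

Reply with exactly one letter: A or B."""


def pairwise_compare(
    question: str,
    answer_1: str,
    answer_2: str,
    judge: Callable[[str], str],
) -> Verdict:
    """Run the judge twice with the answer order swapped; only a consistent
    winner counts, otherwise the comparison is treated as a tie."""
    first = judge(JUDGE_PROMPT.format(
        question=question, answer_a=answer_1, answer_b=answer_2)).strip().upper()
    second = judge(JUDGE_PROMPT.format(
        question=question, answer_a=answer_2, answer_b=answer_1)).strip().upper()

    # Map the second verdict back to the original labels after the swap.
    second_unswapped = {"A": "B", "B": "A"}.get(second, second)

    if first == second_unswapped and first in ("A", "B"):
        return first  # consistent winner across both orderings
    return "tie"      # disagreement suggests position bias; do not trust it


if __name__ == "__main__":
    # Toy judge that always prefers the longer answer, for demonstration only.
    def toy_judge(prompt: str) -> str:
        a = prompt.split("Answer A:\n")[1].split("\n\nAnswer B:")[0]
        b = prompt.split("Answer B:\n")[1].split("\n\nReply")[0]
        return "A" if len(a) >= len(b) else "B"

    print(pairwise_compare("What is 2+2?", "4", "The answer is 4.", toy_judge))
```

Running the judge twice with swapped positions and only trusting a consistent verdict is one common way to keep position bias from silently inflating one side's win rate.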

Skill Snapshot

Automated scan of skill assets (informational only).

  • Valid SKILL.md: passes checks against the SKILL.md specification

Source & Community

Skill Version: main
Community: 6.4K Stars · 517 Forks
Updated: Jan 12, 2026, 02:03 AM

Skill Stats

SKILL.md: 455 lines
Total Files: 1
Total Size: 0 B
License: NOASSERTION