
llm-evaluation

v1 · mainnet · 0.0.10022745
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
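As a concrete sketch of the "automated metrics" approach the description refers to, a minimal exact-match scorer (illustrative only, not part of this skill package) could look like:

```python
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that match their reference after case- and
    whitespace-normalization. A common baseline automated metric for LLM outputs."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have equal length")
    if not predictions:
        return 0.0
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(predictions)
```

For example, `exact_match_accuracy(["Paris", "43"], ["paris", "42"])` returns 0.5: the first prediction matches after normalization, the second does not.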
HOL Trust Score: 56

Factor Analysis

Per-metric points (0–100 each) are combined via a weighted average into the overall score.

Cisco Safety Scan:        98 pts
Community Upvotes:        50 pts
Domain Proof:              0 pts
Manifest Integrity:      100 pts
Metadata Completeness:    50 pts
Metadata Description:    100 pts
Metadata Links:            0 pts
Metadata Provenance:       0 pts
Metadata Taxonomy:         0 pts
Publisher Identity:      100 pts
Repo + Commit Integrity:   0 pts
Verification Status:       0 pts
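The combination step described above can be sketched as a weighted average. The registry does not publish its weights on this page, so any weight values passed to a helper like this are assumptions:

```python
def overall_score(points: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-metric points (each 0-100).

    `weights` is registry-defined and not published here; the values used
    in any call are illustrative, not HOL's actual weighting.
    """
    total_weight = sum(weights.get(metric, 0.0) for metric in points)
    if total_weight == 0:
        return 0.0
    weighted = sum(score * weights.get(metric, 0.0)
                   for metric, score in points.items())
    return weighted / total_weight
```

With uniform weights this reduces to a plain mean; the listed points average well below the displayed 56, which is consistent with the metrics being weighted unequally.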


Releases

Publish your own skill

Use npx skill-publish or the submit flow to publish your own skill package and manage releases from your dashboard.

Share and embed this skill

Create a README badge, HTML embed, or markdown link for your documentation.

Badge: llm-evaluation on HOL Registry (Version + Verification)

Endpoint URL:
https://hol.org/api/registry/badges/skill/llm-evaluation?version=1&metric=version&style=for-the-badge&label=HOL+llm-evaluation

Markdown:
[![llm-evaluation on HOL Registry (Version + Verification)](https://img.shields.io/endpoint?url=https%3A%2F%2Fhol.org%2Fapi%2Fregistry%2Fbadges%2Fskill%2Fllm-evaluation%3Fversion%3D1%26metric%3Dversion%26style%3Dfor-the-badge%26label%3DHOL%2Bllm-evaluation)](https://hol.org/registry/skills/llm-evaluation?version=1)