Show HN: How Well Do Coding Agents Use Your Library? (stackbench.ai)
If coding agents are the new entry point to your library, how sure are you that they’re using it well?

I put this question to about 50 library maintainers and dev tool builders, and most of them had no clear answer.

Existing code-generation benchmarks focus mainly on self-contained code snippets and compare models, not agents. Almost none target library-specific code generation.

So we built a simple app to test how well coding agents interact with libraries. It:

• Takes your library’s docs

• Automatically extracts usage examples

• Tasks AI agents (like Claude Code) with generating those examples from scratch

• Logs mistakes and analyzes performance
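
For the curious, here's a minimal Python sketch of that loop. It's illustrative only: the extraction regex, the UsageExample type, and the agent.generate interface are assumptions for this sketch, not the actual stackbench.ai implementation.

    import re
    from dataclasses import dataclass

    @dataclass
    class UsageExample:
        description: str  # the prose that introduces the snippet in the docs
        reference: str    # the snippet as the maintainers wrote it

    def extract_usage_examples(docs: str) -> list[UsageExample]:
        # Pair each fenced code block in markdown docs with the line of
        # prose immediately above it, which becomes the task prompt.
        pattern = re.compile(r"(?P<desc>[^\n]+)\n+```\w*\n(?P<code>.*?)```",
                             re.DOTALL)
        return [UsageExample(m.group("desc").strip(), m.group("code").strip())
                for m in pattern.finditer(docs)]

    def run_benchmark(docs: str, agent) -> list[dict]:
        # Ask the agent to reproduce each documented example from its
        # description alone, then log how the attempt differs.
        results = []
        for ex in extract_usage_examples(docs):
            attempt = agent.generate(ex.description)  # assumed agent interface
            results.append({
                "task": ex.description,
                "reference": ex.reference,
                "attempt": attempt,
                "matches_docs": attempt.strip() == ex.reference,
            })
        return results

Exact string match is a deliberately crude stand-in for scoring here; in practice you'd compare the API calls made, run the generated snippets, or diff them against the docs.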

We’re testing libraries now, but it’s early days. If you're interested, submit your library, see what breaks, spot patterns, and share the results below.

We plan to expand to more coding agents, more library-specific tasks, and new metrics. Let us know what we should prioritize next.