Production-ready Go library for managing, testing, optimizing, and versioning prompts for Large Language Models (LLMs). Type-safe, performant, provider-agnostic.
```sh
go get github.com/klejdi94/loom
```
End-to-end flow: define versioned prompts → store in Postgres or Redis → promote → run via provider → record to analytics (Postgres/Redis) → view on dashboard.
| Step | What happens |
|---|---|
| Define | Build prompt (ID, version, system, template, variables). |
| Store | Registry: Postgres, Redis, file, or memory. |
| Promote | Mark a version as production. |
| Execute | Load production prompt, call provider (OpenAI, Cerebras, etc.). |
| Record | Analytics: persist run (prompt_id, version, latency, success) in Postgres or Redis. |
| Dashboard | Charts: runs over time, success rate by prompt/version. |
```go
engine := loom.DefaultEngine()

prompt := loom.New("my-prompt").
	WithSystem("You are a helpful assistant.").
	WithTemplate("Answer: {{.question}}").
	WithVariable("question", loom.String(loom.Required())).
	Build(engine)

result, _ := prompt.Render(ctx, loom.Input{"question": "What is 2+2?"})
```
```go
// PostgreSQL
db, _ := sql.Open("postgres", dsn)
reg, _ := registry.NewPostgresRegistry(db, "prompts", true)

// Redis
rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
reg := registry.NewRedisRegistry(rdb, "loom:prompts")

reg.Store(ctx, prompt)
reg.Promote(ctx, "my-prompt", "1.2.0", registry.StageProduction)
prod, _ := reg.GetProduction(ctx, "my-prompt")
```
```go
// Persistent run history (survives restarts)

// Postgres
store, _ := analytics.NewPostgresStore(db, "prompt_runs")

// Redis
store := analytics.NewRedisStore(rdb, "loom:analytics:runs")
```
```sh
# Analytics API server
go run ./cmd/analytics-server -store=postgres -dsn=...

# Dashboard
go run ./cmd/dashboard -api=http://localhost:8080
```