Salvatan

Blog

Thoughts on LLM engineering, evaluation practices, and building reliable AI systems.

Engineering · Best Practices
Why Prompt Versioning Matters
You version your code. Why not your prompts? A case for treating LLM workflows as production systems.
Dec 15, 2024 · 6 min

Evals · Engineering
Building Eval Harnesses That Matter
Most LLM evals are vanity metrics. Here is how to build tests that actually prevent regressions.
Dec 8, 2024 · 8 min

RAG · Monitoring
RAG Pipelines Fail Silently
Retrieval quality degrades over time. How to detect and fix it before users complain.
Nov 30, 2024 · 7 min

Token · Infrastructure
Why Solana for ML Infrastructure
On-chain protocol fees, transparent treasuries, and fast settlement for usage-based pricing.
Nov 22, 2024 · 5 min

Security · Engineering
Defending Against Prompt Injection
Practical strategies for hardening LLM systems against adversarial inputs.
Nov 15, 2024 · 9 min

Observability · DevOps
The LLM Observability Stack
What you need to monitor when LLMs are in production: beyond logs and traces.
Nov 8, 2024 · 7 min

Culture · Berlin
Berlin AI Scene in 2024
Notes from the ground: what early European AI builders are working on.
Oct 28, 2024 · 6 min

Engineering · Best Practices
From Prototype to Production LLM
The missing checklist for taking your LLM feature from demo to scalable product.
Oct 18, 2024 · 10 min