About

Building infrastructure for reliable LLM systems.

Salvatan

Berlin
Prompt Engineer & AI Solutions Manager

I've been working with LLMs since the GPT-3 days, starting with content generation tools and eventually moving into production systems at early-stage AI companies.

The problem I kept hitting: prompts change constantly, but we treat them like config strings. No versioning, no systematic testing, no rollback strategy. When something breaks in production, you're scrambling through git history and Slack messages trying to figure out what changed.
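
To make that concrete, here's a minimal, hypothetical sketch of the alternative: the prompt lives in code as a versioned artifact with its own eval cases, so a bad change is caught before deploy and rollback is just pinning the previous version. The names here (PromptVersion, run_evals, call_model) are illustrative, not the API of PromptOps or any particular library.

```python
# Illustrative sketch: a prompt as a versioned, testable artifact
# instead of a loose config string. Not a real library's API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class PromptVersion:
    version: str                 # e.g. "summarize-v3", pinned in git and in logs
    template: str                # prompt text with {placeholders}
    eval_cases: list = field(default_factory=list)  # (inputs, expected substring) pairs


def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client here."""
    return "stub output mentioning the refund policy"


def run_evals(pv: PromptVersion) -> bool:
    """Run every eval case against this prompt version before it ships."""
    for inputs, expected in pv.eval_cases:
        output = call_model(pv.template.format(**inputs))
        if expected not in output:
            print(f"[{pv.version}] FAIL: expected {expected!r} in output")
            return False
    print(f"[{pv.version}] all {len(pv.eval_cases)} eval cases passed")
    return True


if __name__ == "__main__":
    summarize_v3 = PromptVersion(
        version="summarize-v3",
        template="Summarize this support ticket in one sentence: {ticket}",
        eval_cases=[({"ticket": "Customer asks about the refund policy"}, "refund")],
    )
    run_evals(summarize_v3)  # gate deploys on this; keep v2 around for rollback
```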

PromptOps is the internal tooling I built at my last company, now available as a platform. It's designed for teams who need to ship LLM features reliably and iterate quickly without breaking things.

I'm based in Berlin, working on making prompt engineering less chaotic and more systematic.

Timeline

2021

Started exploring GPT-3 for content generation tools

2022

Joined early AI startup as prompt engineer

2023

Built internal eval infrastructure for production LLM workflows

2024

Moved to Berlin. Started PromptOps to make evals accessible

2025

Open beta launch. Token launch planned for Q1

Principles

Reliability over hype

LLM systems are non-deterministic. Rigorous testing and versioning make them trustworthy.

Transparency by default

Open eval results, public roadmap, on-chain treasury. No black boxes.

Tooling for everyone

Eval infrastructure should not require a team of ML engineers to set up and run. Make it accessible to anyone shipping LLM features.

Why Berlin?

Berlin's AI scene is pragmatic and less hype-driven than San Francisco's. People are building actual products, not pitching visions. The focus is on making things work for real users, with strong attention to privacy and compliance (GDPR-native thinking).

It's also a great place to build infrastructure: lower costs, strong technical talent, and a timezone with full EU coverage and partial US overlap.