TeamCity has been JetBrains' CI/CD server since 2006. While everyone was busy writing Jenkins plugins and debugging YAML indentation in GitHub Actions, TeamCity quietly became one of the most polished build servers available — and most developers have never touched it.

This guide takes you from zero to running pipelines. You'll understand the architecture, the core concepts, and how to wire up a real build chain. No prior TeamCity experience needed — just familiarity with Git and Docker.

Official site →


Why TeamCity?

The CI/CD landscape is crowded. Here's why TeamCity deserves a look:

It's mature. Twenty years of production use. The kind of stability you only get from software that's survived real-world abuse at scale.

No plugin hell. Jenkins requires plugins for basic functionality — Git integration, pipeline visualization, decent UI. TeamCity ships with all of this built in. Fewer moving parts, fewer things to break during upgrades.

Kotlin DSL. Define your entire pipeline in Kotlin code, committed to your repo. Version-controlled CI/CD config that's actually a real programming language, not YAML with clever indentation.

Free tier is generous. The Professional Server License gives you 100 build configurations and 3 build agents at no cost. That's enough for most small-to-medium teams.

The UI is genuinely good. Build logs with real-time streaming, test failure analysis, build chain visualization — all without installing a single extension.


The Architecture: Server + Agents

This is the single most important thing to understand about TeamCity. Get this, and everything else falls into place.

┌─────────────────────────────────────────────┐
│              TeamCity Server                │
│                                             │
│  ┌──────────┐  ┌──────────┐  ┌───────────┐  │
│  │ Web UI   │  │ Build    │  │ VCS       │  │
│  │ :8111    │  │ Queue    │  │ Monitor   │  │
│  └──────────┘  └──────────┘  └───────────┘  │
│  ┌──────────┐  ┌──────────┐  ┌───────────┐  │
│  │ Database │  │ Artifact │  │ User/Auth │  │
│  │(PG/MySQL)│  │ Storage  │  │ Manager   │  │
│  └──────────┘  └──────────┘  └───────────┘  │
└──────────┬──────────┬──────────┬────────────┘
           │          │          │
     ┌─────▼──┐  ┌────▼───┐  ┌──▼───────┐
     │Agent 1 │  │Agent 2 │  │Agent 3   │
     │Linux   │  │Windows │  │macOS     │
     │Java 17 │  │.NET 8  │  │Xcode 15  │
     │Docker  │  │MSBuild │  │Swift     │
     └────────┘  └────────┘  └──────────┘

The Server

The server is the brain. It does not run builds. Read that again — the server never executes your build commands.

What it does:

  • Stores everything — configuration, build history, user accounts, permissions. Uses an internal HSQLDB database for quick setups, but production deployments should use PostgreSQL or MySQL.
  • Distributes work — when a build triggers, the server finds a compatible agent and assigns the job.
  • Serves the UI — the web interface runs on port 8111 by default.
  • Monitors repositories — polls your VCS roots for changes and triggers builds when commits land.
  • Collects results — test reports, code coverage, build artifacts all flow back to the server for storage and display.

Build Agents

Agents are the workers. They do the actual building, testing, and deploying.

Each agent runs on its own machine — could be a bare-metal server, a VM, a Docker container, or a cloud instance. Agents can run Linux, Windows, or macOS, and you can mix them freely. Need to compile C++ on Linux and run UI tests on macOS simultaneously? Two agents, two platforms, one pipeline.

When an agent starts up, it connects to the server and reports its capabilities: installed JDK versions, OS type, available tools (Docker, Maven, .NET SDK, etc.). The server uses this information to match builds to compatible agents automatically. A build that requires .NET won't get assigned to a Linux agent that only has Java.
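When automatic matching isn't enough, you can pin a configuration to specific agents with explicit requirements. A minimal Kotlin DSL sketch, assuming the standard agent-reported parameter names (`teamcity.agent.jvm.os.name` for OS, `docker.version` when Docker is present):

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object LinuxDockerBuild : BuildType({
    name = "Linux Docker Build"

    // Only agents satisfying ALL of these requirements are eligible
    // to pick up this build from the queue.
    requirements {
        contains("teamcity.agent.jvm.os.name", "Linux")  // OS reported by the agent
        exists("docker.version")                         // agent must have Docker installed
    }
})
```

Requirements are the explicit counterpart to the automatic capability matching described above: the queue simply never offers the build to a non-matching agent.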

Communication Flow

  1. Agent starts → connects to server → registers itself
  2. Developer pushes code → server detects change → build enters queue
  3. Server finds idle compatible agent → assigns build
  4. Agent executes build steps → streams logs back in real-time
  5. Build finishes → agent uploads artifacts to server
  6. Server updates UI, sends notifications

Scaling

Scaling TeamCity means adding agents. More agents = more parallel builds. The server handles all coordination — agents are stateless workers that can be spun up and torn down freely.

For dynamic workloads, TeamCity supports cloud profiles: auto-provision agents on AWS, GCP, or Azure when the queue backs up, destroy them when builds finish. You pay for compute only when you're building.


Core Concepts

These are the building blocks. Every TeamCity pipeline is assembled from these pieces.

Project

A container — the top-level organizational unit. Maps to a software project, a team, or a product line. Projects can nest: a "Backend" project might contain sub-projects for each microservice.

Projects hold build configurations, VCS roots, parameters, and templates. Permissions are set at the project level too — you can give Team A access to their project without exposing Team B's configs.

Build Configuration

A job definition. This is the core unit of work in TeamCity — a recipe that says "here's how to build/test/deploy this thing."

A build configuration contains:

  • Which repo to pull from (VCS roots)
  • What commands to run (build steps)
  • When to run them (triggers)
  • What to save afterward (artifact rules)
  • Variables and settings (parameters)

One project can have many build configurations: "Compile," "Unit Tests," "Integration Tests," "Deploy to Staging" — each is a separate build configuration.

VCS Root

The connection to your source code repository. A VCS root specifies the Git URL, branch spec, authentication credentials, and polling interval.

TeamCity watches VCS roots for changes. When a new commit appears, any build configuration with a VCS trigger on that root will fire.

One VCS root can be shared across multiple build configurations. Your "Compile" and "Test" configs can both point to the same repo without duplicating connection settings.
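In the Kotlin DSL, a shared VCS root is just an object that multiple configurations reference. A sketch with a placeholder URL and branch spec:

```kotlin
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.vcs.GitVcsRoot

// One VCS root, shared by every configuration that references it.
object AppRepo : GitVcsRoot({
    name = "App Repository"
    url = "https://example.com/org/app.git"  // placeholder URL
    branch = "refs/heads/main"
    branchSpec = "+:refs/heads/*"            // also watch feature branches
})

object Compile : BuildType({
    name = "Compile"
    vcs { root(AppRepo) }
})

object Test : BuildType({
    name = "Test"
    vcs { root(AppRepo) }  // same root, no duplicated connection settings
})
```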

Build Step

A single task within a build. Steps execute sequentially — step 1 finishes, then step 2 starts.

TeamCity ships with native runners for most build tools:

  • JVM: Maven, Gradle, Ant
  • Microsoft: MSBuild, Visual Studio, .NET CLI
  • Containers: Docker, Docker Compose
  • Scripting: Command Line, PowerShell, Python
  • JavaScript: Node.js, npm, yarn
  • And more: NuGet, Rake, Duplicates Finder, Inspections

Each runner understands its tool deeply. The Gradle runner doesn't just shell out to gradle — it parses test results, extracts build statistics, and reports failures with structured data. That's the difference between native support and "just run a shell command."

Build Trigger

What causes a build to start. Four main types:

  • VCS Trigger — new commit detected on the monitored branch → build starts automatically. This is the most common trigger.
  • Schedule Trigger — cron-style scheduling. Nightly builds, weekly regression suites, Monday morning deploys.
  • Finish Build Trigger — Build A completes → Build B starts. This is how you chain builds into pipelines.
  • Remote Run Trigger — a developer manually triggers a personal build to test their changes before pushing. TeamCity runs the build with their uncommitted changes applied.
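The first three trigger types look roughly like this in the Kotlin DSL (remote runs are started from the IDE or UI, not declared as a trigger; the upstream build ID is a placeholder):

```kotlin
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.triggers.vcs
import jetbrains.buildServer.configs.kotlin.triggers.schedule
import jetbrains.buildServer.configs.kotlin.triggers.finishBuildTrigger

object Nightly : BuildType({
    name = "Nightly Regression"

    triggers {
        // VCS trigger: fire on new commits to main
        vcs {
            branchFilter = "+:main"
        }
        // Schedule trigger: every night at 02:00
        schedule {
            schedulingPolicy = daily { hour = 2 }
            triggerBuild = always()
        }
        // Finish build trigger: also run whenever the upstream
        // configuration completes ("Compile" is a placeholder ID)
        finishBuildTrigger {
            buildType = "Compile"
        }
    }
})
```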

Build Queue

When a build triggers, it doesn't run immediately. It enters the build queue — a waiting room where builds sit until a compatible agent becomes available.

The queue is first-in-first-out by default, but you can adjust priorities. If three builds are queued and two agents are free, two builds start immediately and the third waits.

Build Artifacts

Output files from a build. Compiled binaries, Docker images, test reports, coverage data, installers — anything your build produces that you want to keep.

After a build finishes, the agent uploads artifacts to the server based on artifact rules you define. These artifacts are then downloadable from the UI, accessible via the REST API, and — critically — available as inputs to downstream builds in a pipeline.

Parameters

Variables that configure builds. Three flavors:

  • Configuration Parameters — simple key-value pairs, used in build step settings
  • System Properties — passed to build tools as system properties (-Dproperty=value in Java)
  • Environment Variables — injected into the build process environment

Parameters can be defined at the project level (inherited by all configs), the build configuration level, or the agent level. Password parameters are automatically masked in build logs — TeamCity will never print your secrets.
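In the Kotlin DSL, the three flavors are distinguished by name prefix: `system.` for system properties, `env.` for environment variables, no prefix for configuration parameters. A sketch with placeholder names and a placeholder secret reference:

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object Build : BuildType({
    name = "Build"

    params {
        param("artifact.name", "myapp")          // configuration parameter
        param("system.build.vendor", "acme")     // passed as -Dbuild.vendor=acme
        param("env.NODE_ENV", "production")      // injected as an environment variable
        // Password parameter: the value is a reference to a secret stored
        // on the server, never the secret itself; masked in build logs.
        password("deploy.token", "credentialsJSON:<stored-secret-id>")
    }
})
```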

Templates

Reusable build configuration blueprints. Define a template with common build steps, triggers, and parameters, then create build configurations that inherit from it.

Changed the deployment script? Update the template. Every config based on that template picks up the change automatically. This matters when you have 50 microservices that all deploy the same way.
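A minimal template sketch in the Kotlin DSL (the deploy script and parameter names are hypothetical):

```kotlin
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildSteps.script

// Shared deployment recipe; %service.name% is resolved per configuration.
object DeployTemplate : Template({
    name = "Standard Deploy"
    steps {
        script {
            name = "Deploy"
            scriptContent = "./deploy.sh %service.name%"  // placeholder script
        }
    }
})

// A configuration that inherits the template's steps and settings.
object DeployBilling : BuildType({
    name = "Deploy Billing"
    templates(DeployTemplate)
    params {
        param("service.name", "billing")
    }
})
```

Update `DeployTemplate` once, and every inheriting configuration changes with it.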


Pipelines: Build Chains

This is where TeamCity goes from "a build server" to "a deployment platform."

A build chain links multiple build configurations together using dependencies. Each configuration handles one stage of your pipeline, and TeamCity orchestrates the whole flow.

Example Pipeline

┌──────────┐    ┌────────────┐    ┌───────────────────┐
│ Compile  │───▶│ Unit Tests │───▶│ Integration Tests │
└──────────┘    └────────────┘    └─────────┬─────────┘
                                            │
                                  ┌─────────▼───────────┐
                                  │ Build Docker Image  │
                                  └─────────┬───────────┘
                                            │
                                  ┌─────────▼───────────┐
                                  │ Deploy to Staging   │
                                  └─────────┬───────────┘
                                            │
                                  ┌─────────▼───────────┐
                                  │ Deploy to Production│
                                  │    (manual gate)    │
                                  └─────────────────────┘

Snapshot Dependencies

Build B has a snapshot dependency on Build A. This means: when Build B triggers, TeamCity first ensures Build A has run on the exact same source revision. If Build A hasn't run on that commit yet, TeamCity triggers it automatically.

This guarantees consistency across the entire chain. Your integration tests run against the same code that was compiled, not against whatever happened to be on the branch five minutes later.

Artifact Dependencies

Build B has an artifact dependency on Build A. This means: before Build B starts, it downloads specific artifacts from Build A. The compiled JAR, the Docker image tarball, the test data — whatever Build B needs as input.

Artifact dependencies separate concerns cleanly. Your "Build Docker Image" step doesn't need to recompile the code — it just downloads the JAR from the "Compile" step.

Parallel Execution

Here's where build chains get powerful. If two steps in the chain are independent — say, "Unit Tests" and "Linting" both depend on "Compile" but not on each other — TeamCity runs them in parallel on different agents automatically.

No configuration needed. TeamCity looks at the dependency graph, identifies independent branches, and parallelizes them. More agents = more parallelism = faster pipelines.
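Expressed in the Kotlin DSL, such a diamond is nothing more than two snapshot dependencies pointing at the same upstream configuration:

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object Compile : BuildType({ name = "Compile" })

// UnitTests and Lint both depend on Compile but not on each other,
// so TeamCity runs them in parallel when two agents are free.
object UnitTests : BuildType({
    name = "Unit Tests"
    dependencies { snapshot(Compile) {} }
})

object Lint : BuildType({
    name = "Lint"
    dependencies { snapshot(Compile) {} }
})
```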


Pipeline as Code: Kotlin DSL

Clicking through a UI to configure pipelines doesn't scale. TeamCity's answer: define everything in Kotlin, commit it to your repository, version it with Git.

Create a .teamcity/settings.kts file in your repo root:

import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildSteps.gradle
import jetbrains.buildServer.configs.kotlin.buildSteps.dockerCommand
import jetbrains.buildServer.configs.kotlin.triggers.vcs

version = "2024.12"

project {
    buildType(Build)
    buildType(Test)
    buildType(Deploy)
}

object Build : BuildType({
    name = "Build"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        gradle {
            tasks = "clean build"
            useGradleWrapper = true  // run via the repository's gradlew
        }
    }

    triggers {
        vcs {}
    }

    artifactRules = "build/libs/*.jar => artifacts"
})

object Test : BuildType({
    name = "Test"

    dependencies {
        snapshot(Build) {}
    }

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        gradle {
            tasks = "test"
        }
    }
})

object Deploy : BuildType({
    name = "Deploy to Staging"

    dependencies {
        snapshot(Test) {}
        artifacts(Build) {
            artifactRules = "artifacts/*.jar => deploy/"
        }
    }

    steps {
        dockerCommand {
            commandType = build {
                source = file {
                    path = "Dockerfile"
                }
                namesAndTags = "myapp:%build.number%"
            }
        }
    }
})

This is a real programming language, not YAML with string interpolation. You get IDE autocompletion (IntelliJ, naturally), type safety, refactoring, and the ability to use loops, conditionals, and functions to generate configurations dynamically.

When TeamCity detects a .teamcity directory in your repo, it reads the Kotlin DSL and generates the pipeline configuration. Change the DSL, push the commit, and your pipeline updates automatically. Pull requests can modify the pipeline and reviewers can see exactly what changed.
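As a taste of what "a real programming language" buys you: generating one configuration per microservice with an ordinary loop. The service names and Gradle task layout here are hypothetical:

```kotlin
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildSteps.gradle

// Hypothetical: one build configuration per service, generated in a loop.
fun serviceBuild(service: String) = BuildType({
    id("Build_$service")        // each generated config needs a unique ID
    name = "Build $service"
    steps {
        gradle { tasks = ":$service:build" }
    }
})

project {
    listOf("auth", "billing", "catalog").forEach { svc ->
        buildType(serviceBuild(svc))
    }
}
```

Adding a fiftieth service is one more entry in the list, not fifty clicks in a UI.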


Installation: Docker in Five Minutes

The fastest way to try TeamCity locally:

# Start the server
docker run -d --name teamcity-server \
  -p 8111:8111 \
  -v teamcity_data:/data/teamcity_server/datadir \
  -v teamcity_logs:/opt/teamcity/logs \
  jetbrains/teamcity-server

# Start an agent
docker run -d --name teamcity-agent \
  -e SERVER_URL="http://host.docker.internal:8111" \
  -v teamcity_agent_conf:/data/teamcity_agent/conf \
  jetbrains/teamcity-agent

What just happened:

The first container runs the TeamCity server — the brain. It exposes port 8111 for the web UI and persists its data in Docker volumes so you don't lose configuration between restarts.

The second container runs a build agent — a worker. The SERVER_URL environment variable tells the agent where to find the server. host.docker.internal resolves to your host machine from inside the container (works on Docker Desktop; on Linux, you may need --network host or your host's IP).

Open http://localhost:8111 in your browser. First-time setup walks you through:

  1. Accept the license and choose the internal database (fine for testing)
  2. Create an admin account
  3. Authorize the agent — the agent appears in the "Unauthorized" tab. Click "Authorize." Now TeamCity has a worker ready to run builds.

Takes about two minutes after the containers finish starting.


Your First Pipeline: Step by Step

With the server running and an agent authorized, let's build something real.

Step 1: Create a Project

Click "Create project" from the dashboard. Choose "From a repository URL."

Paste your Git repo URL (GitHub, GitLab, Bitbucket — doesn't matter). Enter credentials if it's private. TeamCity connects, reads the repo, and suggests a project name.

Step 2: Auto-Detection

TeamCity scans your repository and auto-detects build tools. Got a pom.xml? It suggests Maven steps. build.gradle? Gradle. package.json? npm. It's surprisingly accurate.

Accept the suggestions or configure manually. For a typical Java project, TeamCity will suggest a Gradle or Maven build step with reasonable defaults.

Step 3: Configure Build Steps

In the build configuration settings, go to Build Steps. You'll see the auto-detected step. Adjust if needed — change the Gradle task from build to clean build, add JVM arguments, set the working directory.

Add more steps: "Run shell script" for custom commands, "Docker" for container builds, etc. Steps execute in order — top to bottom.

Step 4: Add a VCS Trigger

Go to Triggers → Add new trigger → VCS Trigger. With default settings, TeamCity polls your repository every 60 seconds and triggers a build when new commits appear.

Push a commit. Within a minute, your build queue shows a pending build. The agent picks it up, and you can watch logs stream in real-time.

Step 5: Create a Test Configuration

Create a second build configuration in the same project — call it "Run Tests." Add a build step that runs your test suite (gradle test, npm test, pytest, whatever fits your project).

Step 6: Chain Them

In the "Run Tests" configuration, go to Dependencies → Add new snapshot dependency → select your "Build" configuration.

Now when "Run Tests" triggers, TeamCity ensures "Build" ran on the same commit first. You have a two-stage pipeline.

Step 7: Add Artifact Publishing

In the "Build" configuration, go to General Settings → Artifact paths. Add a rule like:

build/libs/*.jar => artifacts

This tells TeamCity to upload any JAR files from build/libs/ after the build completes. In your "Run Tests" or "Deploy" config, add an artifact dependency to download these files.

Step 8: Watch It Run

Push a commit. TeamCity triggers "Build" → compiles → uploads artifacts → triggers "Run Tests" → downloads artifacts → runs tests → reports results. All visible in the build chain view — a visual graph showing each stage, its status, and its duration.

You now have a working CI pipeline.


Advanced Features Worth Knowing

Personal Builds and Pre-tested Commits

This is a TeamCity feature that doesn't get enough attention. A developer can submit their local changes to TeamCity for a personal build — the server runs the full build with those changes applied, without the developer pushing to the shared branch.

If the build passes, the changes can be automatically committed. If it fails, the shared branch stays clean. This is "pre-tested commits" — your main branch never sees broken code.

Build Cache

Cache Maven repositories, npm's node_modules, Gradle caches between builds. Agents can reuse cached dependencies instead of downloading them fresh every time. Cuts build times significantly for projects with heavy dependency trees.

Cloud Profiles

Define a cloud profile (AWS, GCP, Azure, or even Kubernetes) and TeamCity auto-provisions agents when the build queue grows. Builds finish, agents terminate. You get elastic scaling without maintaining a fleet of idle machines.

REST API

Everything in TeamCity is accessible via a comprehensive REST API. Trigger builds, query status, download artifacts, manage agents — all programmable. Useful for custom dashboards, ChatOps integrations, or scripting complex deployment workflows.

Composite Builds

Group multiple build configurations into a single logical unit. A "Release" composite build might include compile, test, package, and publish steps — but from the outside, it looks like one build with one status. Clean reporting for complex pipelines.
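In the Kotlin DSL, a composite build is a BuildType with a composite type plus snapshot dependencies on its members. A sketch, assuming `Compile`, `Test`, and `Package` are existing BuildType objects in the same project:

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object Release : BuildType({
    name = "Release"
    type = BuildTypeSettings.Type.COMPOSITE  // one status for the whole group

    // The composite itself runs no steps; it aggregates its dependencies.
    dependencies {
        snapshot(Compile) {}
        snapshot(Test) {}
        snapshot(Package) {}
    }
})
```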

Investigations

When a build breaks, TeamCity can automatically assign an investigation to the developer whose commit caused the failure. That developer gets notified and the investigation tracks the issue until someone fixes it or marks it as resolved. Accountability without manual tracking.


TeamCity vs Jenkins vs GitHub Actions

An honest take:

Jenkins is the Swiss Army knife that's been in every DevOps engineer's pocket since 2011. It's free, it's infinitely extensible, and it can do literally anything — if you're willing to maintain 47 plugins, debug Groovy pipeline syntax, and accept that the UI looks like it was designed in 2008. Because it was. Jenkins works. It just requires constant gardening.

GitHub Actions is perfect for open-source projects and teams already living in GitHub. The marketplace has actions for everything, the YAML syntax is approachable, and you don't manage any infrastructure. But complex pipelines get ugly fast, debugging is painful (push-and-pray), and you're locked into GitHub's ecosystem. Running Actions on self-hosted runners for private projects partially negates the convenience.

TeamCity has the best out-of-box experience of the three. The UI is modern and information-dense. The Kotlin DSL is more powerful than YAML-based alternatives. Native runners understand build tools deeply instead of just shelling out. The free tier works for real teams, not just toy projects.

The downsides: it's heavier than Actions (you're running a JVM-based server), the ecosystem is smaller than Jenkins' plugin universe, and you're buying into JetBrains' world. If your team already uses IntelliJ and Kotlin, that's a feature. If not, it's a consideration.

My take: For a new team choosing today — if you're on GitHub and doing simple CI, use Actions. If you need complex pipelines, multi-platform builds, or pipeline-as-code that isn't YAML, TeamCity is the strongest option. Jenkins is for teams that already have it running and have invested in the plugin ecosystem.


Where to Go From Here

  1. Spin it up. Run the Docker commands from above. Ten minutes from reading this to seeing the UI.
  2. Connect a real repo. Point TeamCity at something you're actively working on.
  3. Read the Kotlin DSL docs. Once you define pipelines as code, you won't go back to clicking through UIs. TeamCity Kotlin DSL documentation →
  4. Explore cloud agents. If you're on AWS or GCP, cloud profiles are where TeamCity's scaling story gets interesting.

Full documentation: TeamCity Docs →


Running Builds Inside Docker Containers

By default, builds run directly on the agent's OS. But what if you need a specific environment — a particular Python version, Node 20, or a toolchain your agent doesn't have?

TeamCity supports running build steps inside Docker containers natively. You don't need to install anything special on the agent — just Docker.

Per-Step Docker Wrapper

Any build step can be wrapped in a Docker container. In the build step settings, expand "Docker Settings" and specify an image:

  • Docker image: python:3.12-slim
  • Pull image: Always (or If not exists)

That step now runs inside that container. The agent mounts the checkout directory automatically — your source code is available inside the container at the working directory.

Example: You have a build with 3 steps:

  1. Lint → runs in python:3.12-slim
  2. Test → runs in python:3.12-slim
  3. Build Docker Image → runs on the agent directly (needs Docker socket)

Each step can use a different image. Mix and match.

In Kotlin DSL:

object Build : BuildType({
    name = "Build and Test"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        script {
            name = "Lint"
            scriptContent = "pip install flake8 && flake8 src/"
            dockerImage = "python:3.12-slim"
        }
        script {
            name = "Run Tests"
            scriptContent = "pip install -r requirements.txt && pytest tests/"
            dockerImage = "python:3.12-slim"
        }
        dockerCommand {
            name = "Build Docker Image"
            commandType = build {
                source = file {
                    path = "Dockerfile"
                }
                namesAndTags = "myapp:%build.number%"
            }
        }
    }

    triggers {
        vcs {}
    }
})

Agent-Level Docker

Alternatively, you can run the entire agent as a Docker container. JetBrains provides official agent images with common tools pre-installed:

docker run -d --name teamcity-agent \
  -e SERVER_URL="http://teamcity-server:8111" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jetbrains/teamcity-agent

Mount the Docker socket so the agent can run Docker-in-Docker. This is common for build agents that need to build and push images.

Ansible Projects in TeamCity

TeamCity doesn't have a native "Ansible" build runner — but it doesn't need one. Ansible is a command-line tool, and TeamCity's Command Line runner handles it perfectly.

Auto-Detection

When TeamCity scans your repo, it looks for known build files: pom.xml (Maven), build.gradle (Gradle), package.json (npm), Dockerfile, etc. Ansible playbooks (*.yml) are NOT auto-detected because YAML files could be anything.

You'll need to set up the build steps manually. That's fine — it takes 2 minutes.

Setting Up Ansible in TeamCity

Option 1: Agent has Ansible installed

If your build agent has Ansible installed (or you use a Docker image with Ansible), add a Command Line build step:

ansible-playbook -i inventory/staging playbook.yml --syntax-check
ansible-playbook -i inventory/staging playbook.yml --check
ansible-playbook -i inventory/staging playbook.yml

Three steps: syntax check → dry run → apply. Clean pipeline.

Option 2: Run inside Docker

Use TeamCity's Docker wrapper with an Ansible image:

  • Docker image: cytopia/ansible:latest (or willhallonline/ansible:latest)
  • Command: ansible-playbook -i inventory/staging playbook.yml

No Ansible installation needed on the agent.

Full Ansible Pipeline Example (Kotlin DSL)

Here's a real-world Ansible pipeline with linting, syntax check, dry run, and deployment — with a manual approval gate before production:

project {
    buildType(AnsibleLint)
    buildType(AnsibleDryRun)
    buildType(AnsibleDeployStaging)
    buildType(AnsibleDeployProd)
}

object AnsibleLint : BuildType({
    name = "1. Lint Ansible"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        script {
            name = "YAML Lint"
            scriptContent = "yamllint -d relaxed ."
            dockerImage = "cytopia/yamllint:latest"
        }
        script {
            name = "Ansible Lint"
            scriptContent = "ansible-lint playbook.yml"
            dockerImage = "cytopia/ansible-lint:latest"
        }
        script {
            name = "Syntax Check"
            scriptContent = "ansible-playbook playbook.yml --syntax-check"
            dockerImage = "cytopia/ansible:latest"
        }
    }

    triggers {
        vcs {}  // Run on every push
    }
})

object AnsibleDryRun : BuildType({
    name = "2. Dry Run (Check Mode)"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        script {
            name = "Ansible Check Mode"
            scriptContent = """
                ansible-playbook -i inventory/staging playbook.yml \
                  --check --diff
            """.trimIndent()
            dockerImage = "cytopia/ansible:latest"
        }
    }

    dependencies {
        snapshot(AnsibleLint) {}  // Only runs if lint passes
    }
})

object AnsibleDeployStaging : BuildType({
    name = "3. Deploy to Staging"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        script {
            name = "Deploy"
            scriptContent = """
                ansible-playbook -i inventory/staging playbook.yml \
                  --diff -v
            """.trimIndent()
            dockerImage = "cytopia/ansible:latest"
        }
    }

    dependencies {
        snapshot(AnsibleDryRun) {}  // Only runs if dry run passes
    }
})

object AnsibleDeployProd : BuildType({
    name = "4. Deploy to Production"

    vcs {
        root(DslContext.settingsRoot)
    }

    // No trigger defined: this build starts only when a person runs it,
    // which serves as the manual approval gate. (TeamCity also offers a
    // dedicated Approval build feature for explicit sign-off.)

    steps {
        script {
            name = "Deploy to Production"
            scriptContent = """
                ansible-playbook -i inventory/production playbook.yml \
                  --diff -v
            """.trimIndent()
            dockerImage = "cytopia/ansible:latest"
        }
    }

    dependencies {
        snapshot(AnsibleDeployStaging) {}  // Only after staging succeeds
    }
})

This gives you a 4-stage pipeline:

Push → Lint → Dry Run → Deploy Staging → [Manual Approval] → Deploy Production

Each stage only runs if the previous one passes. Production requires manual approval. The entire thing runs in Docker containers — your agents don't need Ansible installed.

SSH Keys and Secrets

Ansible needs SSH access to your servers. In TeamCity:

  1. Go to Project Settings → SSH Keys and upload your private key
  2. Add a Build Feature → SSH Agent to your build configuration
  3. TeamCity injects the key into the agent's SSH agent — Ansible uses it automatically
  4. For vault passwords, use TeamCity Parameters with type "Password" — they're masked in logs
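Steps 2–3 above correspond to the `sshAgent` build feature in the Kotlin DSL; the key name is a placeholder and must match what you uploaded in step 1:

```kotlin
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildFeatures.sshAgent

object AnsibleDeployStaging : BuildType({
    name = "Deploy to Staging"

    features {
        // Loads the named project SSH key into an ssh-agent for the
        // duration of the build; ansible-playbook picks it up automatically.
        sshAgent {
            teamcitySshKey = "ansible-deploy-key"  // placeholder key name
        }
    }
})
```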

Compiled by AI. Proofread by caffeine. ☕