
Rethinking Local Kubernetes Development


Fast, secure, production-aligned workflows with Tilt.dev and kind.

Local Kubernetes development is often where productivity quietly stalls. Despite Kubernetes being the standard platform for modern applications, many teams still struggle with slow feedback loops, fragile setups, and an over-reliance on shared or remote clusters – especially in regulated capital markets environments.

“Over my time working with Kubernetes, cloud platforms, and platform engineering teams, I’ve seen these issues repeatedly, particularly in organisations operating under strict security and access controls. Developers want fast, realistic local environments; platform teams need governance, consistency, and control. Too often, those goals are treated as mutually exclusive.

This post shares a practical approach we’ve used to bridge that gap. By combining tools such as Tilt, kind, and Helm in a structured way, it’s possible to build local Kubernetes environments that are fast, secure, reproducible, and closely aligned with production – without weakening platform guardrails.”

The problem: why local Kubernetes development breaks down

Kubernetes has become the de facto platform for building and running modern applications. For many teams, however, developing locally on Kubernetes remains slow, complex, and frustrating. Feedback loops are long, deployment setups are heavyweight, and developers often find themselves dependent on shared or remote clusters even for simple code changes. The result is lost time, reduced productivity, and a familiar pattern of environments that behave differently depending on where they run.

These challenges are amplified in highly regulated industries such as financial services. Developers typically operate in strict least-privilege environments, where access to infrastructure, clusters, and registries is intentionally constrained. While these controls are necessary, they make local development significantly harder. Standing up a platform that both complies with regulatory requirements and enables efficient development is a non-trivial task, particularly when teams must balance delivery pressures with day-to-day operational responsibilities.

The impact is felt most acutely during onboarding. New developers are faced with long setup guides, manual configuration steps, and brittle assumptions about local state. Reproducing a realistic cluster environment on a laptop is rarely straightforward, and inconsistencies quickly lead to the familiar “works on my machine” problem. Over time, these frictions accumulate, slowing teams down and increasing reliance on platform or infrastructure teams for routine development workflows.

Why traditional workflows fail in regulated environments

In regulated environments, the challenges of local Kubernetes development are not simply the result of poor tooling or lack of discipline. They are a consequence of operating models that must prioritise control, security, and separation of duties – which often comes at the expense of developer feedback and iteration speed. 

From an infrastructure perspective, supporting development teams typically involves a fragmented workflow. Application manifests, Helm charts, and CI/CD pipelines are frequently maintained as separate concerns. Developers depend on platform or infrastructure teams to apply configuration changes, troubleshoot deployments, and rebuild or redeploy container images after each meaningful code change.

A typical development cycle follows a familiar pattern. A developer updates code and opens a pull request. A CI/CD pipeline builds and pushes a container image to an internal registry. An infrastructure engineer then deploys the image manually into a restricted development or test namespace. Because direct access to clusters is limited, logs and errors are surfaced indirectly – via tickets, chat messages, or screenshots – rather than through hands-on investigation.

While this model is understandable in environments with strict access controls, it introduces significant friction. Feedback cycles stretch from minutes into hours or days. Infrastructure teams become an unintentional bottleneck for routine development tasks. Developers lose visibility into how their applications behave once deployed, and simple experiments require coordination across multiple teams.

Over time, this separation erodes efficiency on both sides. Platform teams spend increasing effort on repetitive deployment and troubleshooting requests, while developers adapt by batching changes, relying on shared clusters, or avoiding local testing altogether. The result is a workflow that satisfies governance requirements but struggles to scale as teams and systems grow in complexity.

The architectural principles of a better local workflow

Addressing these challenges requires more than incremental improvements to existing tooling or faster CI pipelines. What is needed is a different way of thinking about local development on Kubernetes – one that balances developer productivity with the security, control, and governance demands of regulated environments.

At its core, a better local workflow must drastically shorten the feedback loop. Developers should be able to make a change and see the result running in Kubernetes within seconds, not minutes. Long rebuild and redeploy cycles discourage experimentation and push teams towards batching changes, which in turn slows learning and increases risk.

At the same time, local development environments must remain aligned with production. Using different manifests, deployment mechanisms, or configuration models locally almost guarantees divergence over time. A viable approach must allow teams to reuse the same deployment artefacts – such as Helm charts and Kubernetes manifests – across local and production environments, reducing inconsistency and surprise.

Security cannot be an afterthought. In least-privilege environments, developers should not be required to manage credentials manually or store sensitive information on their machines. Access to registries, clusters, and configuration should be handled through well-defined Kubernetes primitives, ensuring that secure practices are embedded into the workflow rather than imposed as external constraints.

Reproducibility is equally critical. A local environment should be predictable and repeatable across the team, so that onboarding a new developer or troubleshooting an issue does not depend on undocumented setup steps or machine-specific state. Ideally, the entire environment – clusters, registries, dependencies, and supporting services – can be bootstrapped in a consistent way with minimal manual intervention.

Finally, any solution must respect the role of platform and infrastructure teams. Rather than bypassing governance controls, a better model provides guardrails through declarative configuration and controlled automation. Developers gain the ability to iterate productively within clearly defined boundaries, while infrastructure teams retain visibility and control over how environments are constructed and operated.

Tilt acts as a bridge between developer intent and secure infrastructure.

From principles to practice: the solution stack

To put these principles into practice, we adopted a local development stack that combines a small number of complementary tools, each with a clearly defined role. Together, they form a coherent system that enables fast feedback, production alignment, security by default, and reproducibility – without weakening governance or control.

  • Tilt.dev acts as the orchestration layer for local development. It continuously watches source code, rebuilds container images when required, and updates running Kubernetes resources automatically. The behaviour of the environment is defined declaratively in a Tiltfile, written in Starlark (a Python-like configuration language), which brings together build logic, Kubernetes manifests, and deployment rules in a single place. This makes local development workflows explicit, repeatable, and easy to extend programmatically.
  • For the Kubernetes runtime itself, we use kind (Kubernetes in Docker). kind provides lightweight, disposable Kubernetes clusters that run locally in Docker, making it well suited to reproducible development environments. Clusters can be created, configured, and torn down programmatically, ensuring that every developer works against a consistent baseline that closely resembles a real Kubernetes environment.
  • A local Docker registry completes the core setup. Rather than pushing images to a remote registry, developers build images locally and push them to a registry running on their machine. The kind cluster is configured to pull images directly from this registry using containerd configuration patches. This approach significantly reduces network overhead, removes unnecessary dependencies on shared infrastructure, and shortens the build-deploy cycle. Everyone works with the same image sources and the same cluster configuration, which means far fewer “it works on my machine” issues.
  • Helm is used to preserve alignment with production deployments. The same Helm charts and values used in production are reused locally, ensuring that application configuration, dependencies, and deployment behaviour remain consistent across environments. This reduces configuration drift and eliminates an entire class of issues that only surface when code is deployed outside a developer’s local setup.
  • Finally, lightweight Python automation invoked from the Tiltfile ties the environment together. These scripts handle tasks such as generating cluster configuration, bootstrapping registries, validating prerequisites, and ensuring that the environment is in a known-good state before development begins.

Combined, these components create a tight feedback loop: code changes trigger image rebuilds, images are pushed to a local registry, the local Kubernetes cluster pulls the updated image, and workloads are rolled out automatically. By using multi-stage builds, cached layers, and BuildKit parallelisation, the end-to-end cycle is reduced to seconds, making Kubernetes-based development feel immediate rather than heavyweight.

Implementation walkthrough: from setup to daily use

This section focuses on the concrete implementation details of the local Kubernetes development environment. The intent is not to introduce new concepts, but to show how the components described earlier are assembled into a working, repeatable setup.

The diagram shows the overall setup, and the step-by-step implementation is described below.

Installing Tilt

On macOS:

Installation via Homebrew is straightforward:

$ brew install tilt

On Linux:

On Linux, Tilt provides native packages and installation scripts. Developers can install it using the official installer:

$ curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash

For distributions supporting Debian packages (Ubuntu, Debian), Tilt also offers .deb packages, while RPM-based systems (CentOS, RHEL, Fedora) can use the corresponding .rpm packages provided on the Tilt GitHub releases page.

On Windows:

Tilt is available as a standalone binary. It can be installed via Scoop:

$ scoop install tilt

or downloaded directly as an executable from the official release page and added to the system PATH. 

Once installed, developers can start a live development session by running:

$ tilt up

Tilt automatically detects file changes, rebuilds and pushes container images, and updates the running environment. Its Starlark-based Tiltfile provides a flexible, declarative mechanism to define resources, automate workflows, and orchestrate how services are built and deployed during development.

Local Registry Setup

Run a local registry:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

Build and push an image:

docker build -t localhost:5000/myapp:dev .
docker push localhost:5000/myapp:dev

You can use a local_resource() together with the deps argument to control exactly which file or code changes trigger the build-and-push process.
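As an illustration, a local_resource of this shape rebuilds and pushes only when the listed paths change. The resource name, image tag, and watched paths are assumptions, not taken from a specific project:

```starlark
# Hypothetical names: adjust the image tag and watched paths to your project.
local_resource(
    'myapp-build-push',
    cmd='docker build -t localhost:5000/myapp:dev . && docker push localhost:5000/myapp:dev',
    deps=['src', 'Dockerfile'],  # only changes under these paths trigger the command
)
```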

Configuring kind to Use Local Registry

kind-config.yaml:
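The exact file is omitted here, but a minimal sketch might look like the following, assuming the registry container from the step above is named registry and attached to the kind Docker network:

```yaml
# A sketch: the registry container name ("registry") is an assumption.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
      endpoint = ["http://registry:5000"]
```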

Create the cluster:
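For example (the cluster name is an assumption):

```shell
$ kind create cluster --name dev --config kind-config.yaml
```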

With this configuration in place, pods running in the kind cluster can pull images from the local registry at localhost:5000. This avoids reliance on a remote registry, reducing network overhead and shortening the overall development cycle.

Tiltfile with Helm and Docker Builds

Whenever source code changes:

  1. Tilt rebuilds the image.
  2. Pushes to the local registry.
  3. Triggers a Helm upgrade.
  4. Kubernetes pulls the new image and rolls out updated pods.
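A minimal Tiltfile wiring these steps together might look like the sketch below; the image name, chart path, and values file are assumptions rather than the exact production charts:

```starlark
# Rewrite image references so locally built images go to the local registry.
default_registry('localhost:5000')

# Rebuild the image whenever the build context changes.
docker_build('myapp', '.')

# Render the same Helm chart used in production and let Tilt apply it;
# when the image changes, Tilt re-renders and rolls out updated pods.
k8s_yaml(helm('./charts/myapp', name='myapp', values=['./charts/myapp/values-local.yaml']))

k8s_resource('myapp', port_forwards=8080)
```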

Dependency Orchestration

Tilt can enforce resource ordering, which is important for applications with supporting infrastructure dependencies. In the example below, aggregated logging relies on a clear dependency chain:

MinIO (object storage) → Loki (logging) → Grafana (observability).

Tilt ensures MinIO is deployed first, Loki configures against it, and Grafana auto-loads Loki as a datasource.
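In Tiltfile terms, this ordering can be expressed with resource_deps (the resource names here are assumptions):

```starlark
k8s_resource('minio')                             # object storage comes up first
k8s_resource('loki', resource_deps=['minio'])     # Loki waits for MinIO
k8s_resource('grafana', resource_deps=['loki'])   # Grafana waits for Loki
```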

Common Issues and Fixes

A few things tend to come up during local Kubernetes development, especially when using Tilt.

Registry access errors are often caused by missing or misconfigured credentials. In these cases, it is important to verify that the appropriate docker-registry secret exists in the target namespace and is correctly referenced by the workloads.
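A quick check, sketched below with placeholder secret name, namespace, and credentials, is to confirm the secret exists and recreate it if necessary:

```shell
$ kubectl get secret regcred -n dev
$ kubectl create secret docker-registry regcred \
    --docker-server=localhost:5000 \
    --docker-username=dev \
    --docker-password=dev-password \
    -n dev
```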

Port collisions can occur when local ports are already in use by other processes. This is typically resolved by adjusting the extraPortMappings section in your kind-config.yaml:
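For example (the port numbers are illustrative):

```yaml
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080
        hostPort: 9090   # move to a free host port instead of the colliding one
        protocol: TCP
```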

Tools such as lsof or netstat can be used to identify conflicting processes. 

In some cases, Tilt successfully builds and pushes out a new image, but Kubernetes does not automatically redeploy the workload. This usually happens when a fixed image tag is reused. A manual restart can be triggered with:
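For a Deployment, the restart is a single kubectl command (the workload name and namespace are assumptions):

```shell
$ kubectl rollout restart deployment/myapp -n dev
```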

If this happens frequently, Tilt can automate the restart through a small local_resource that reacts to image changes.
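One hypothetical sketch, where both the restart resource and the build resource it depends on are assumed names:

```starlark
# Restart the workload after each successful build so the fixed tag is re-pulled.
local_resource(
    'myapp-restart',
    cmd='kubectl rollout restart deployment/myapp -n dev',
    resource_deps=['myapp-build-push'],  # hypothetical build resource name
)
```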

Local clusters may also differ from production in areas such as DNS configuration. When name resolution behaves unexpectedly, running diagnostic tools such as dig, nslookup, or curl inside a debugging container (for example, nicolaka/netshoot) can help identify missing records or incorrect search domains.
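For example, a throwaway netshoot pod can run the usual DNS tools directly inside the cluster (the service name is a placeholder):

```shell
$ kubectl run -it --rm netshoot --image=nicolaka/netshoot -- \
    dig myservice.default.svc.cluster.local
```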

Finally, issues with Tilt live-update synchronisation are often caused by incorrect file filters or build context definitions. Reviewing the live_update configuration in the Tiltfile typically reveals where file path assumptions do not match the actual project structure.

A real-world Tiltfile: scaling to production-like local environments

To give a sense of how all of this comes together, this section illustrates how the approach described so far was used to support a production-like local Kubernetes environment. The goal is not to present a prescriptive template, but to demonstrate how the same principles and tools can be applied to much richer application stacks.

The Tiltfile brings together Docker, Helm, Kubernetes and supporting automation to recreate a multi-service stack that closely mirrors production. Rather than treating local development as a simplified or disposable setup, this approach treats it as a first-class environment, designed to be reliable, repeatable and representative.

Local Docker Registry and Kind Cluster

The Tiltfile starts by setting up a local Docker registry on localhost:5000. All locally built images are pushed to this registry, removing the need for developers to rely on an external registry or developer-specific credentials for every build. Once the registry is running, it’s attached to the same Docker network as the kind cluster, allowing the cluster to pull images directly without any additional networking or proxy configuration. 
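Assuming the container and network names of a standard kind setup (both are assumptions here: a registry container called registry and the Docker network kind creates, called kind), the attachment is a single command:

```shell
$ docker network connect kind registry
```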

Next, a small Python helper script takes care of creating (or reusing) a kind cluster that is already configured to trust the local registry. This logic is integrated into the Tiltfile so that cluster lifecycle management becomes part of the normal development workflow. If the cluster already exists, Tilt continues without interruption; if not, it is created automatically.
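The helper itself is not shown, but its logic can be sketched in a few lines of Python; the cluster name and config path are assumptions:

```python
# A sketch of the cluster bootstrap helper: reuse an existing kind cluster
# or create one. Cluster name and config path are placeholder values.
import subprocess

CLUSTER_NAME = "dev"
CONFIG_PATH = "kind-config.yaml"

def ensure_cluster(name: str = CLUSTER_NAME, config: str = CONFIG_PATH) -> None:
    """Create the kind cluster only if it does not already exist."""
    existing = subprocess.run(
        ["kind", "get", "clusters"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    if name in existing:
        return  # cluster already present: Tilt continues without interruption
    subprocess.run(
        ["kind", "create", "cluster", "--name", name, "--config", config],
        check=True,
    )
```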

Tilt Settings

Once the core infrastructure is in place, the Tiltfile applies a set of baseline Tilt settings, such as the Kubernetes context, update parallelism, and timeouts for resource updates. These tweaks keep the environment responsive and predictable when working with a stack that includes many services.
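These settings are expressed directly in the Tiltfile; a sketch with illustrative values:

```starlark
allow_k8s_contexts('kind-dev')      # refuse to deploy to anything but the local kind context
update_settings(
    max_parallel_updates=3,         # cap concurrent resource updates
    k8s_upsert_timeout_secs=120,    # give slower charts time to apply
)
```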

Image Build and Push

Application images are then built and pushed to the local registry under Tilt’s control. For certain components, image builds are configured to run in manual trigger mode, ensuring that rebuilds occur only when explicitly requested by the developer. This provides additional control when working with large images or expensive build steps.
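In a Tiltfile, manual trigger mode looks like this (the image and resource names are assumptions):

```starlark
docker_build('bigapp', './bigapp')  # expensive image build
# Only rebuild and redeploy when the developer triggers it from the Tilt UI.
k8s_resource('bigapp', trigger_mode=TRIGGER_MODE_MANUAL)
```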

Helm Integration

Helm plays a central role in deploying the application and its dependencies. The Tiltfile loads a custom Helm extension, authenticates against a private Artifactory instance, and updates all charts. Tilt then handles the namespace creation and applies the necessary Kubernetes secrets – including registry credentials – so every service can pull images without issues.

Kubernetes Secrets and Namespaces

Secrets and namespaces are managed as part of the same workflow. By handling these concerns declaratively within Tilt, the environment avoids hidden dependencies and undocumented setup steps. Every service starts with the credentials and configuration it needs, in a way that mirrors how these concerns are handled in higher environments.

Helm Deployments

The bulk of the environment is brought up through a series of Helm deployments. The main application is deployed with explicit configuration for resource limits, environment variables and port mappings. Alongside it, a range of supporting services is deployed, including components such as NUI, GridGain, KEDA, MinIO, and Argo Workflows, together with a full observability stack built on Prometheus, Grafana, Loki, and the OpenTelemetry Collector. Each service can be tuned specifically for local development needs while still using the same underlying deployment mechanism as production.

Custom Resource Management

Some components, such as NUI or PubSubPlus, need extra configuration beyond what is practical to express purely through Helm values. For those, the Tiltfile invokes lightweight Python scripts to apply supplementary Kubernetes manifests once the Helm releases are complete. This allows the environment to remain consistent and fully automated, even when individual services have specialised requirements.

In combination, this Tiltfile acts as a blueprint for a fully automated, production-like local environment. It builds images, deploys services, configures observability, and stitches everything together so developers only need to run one command:

$ tilt up

The result is a local Kubernetes environment that is reproducible across the team, fast to iterate on, and straightforward to extend as the application landscape evolves.

Conclusion

A well-designed Tiltfile transforms local Kubernetes development from a slow, manual process into a fast, automated feedback loop. By combining Docker builds, Helm deployments, secrets management, custom automation and a comprehensive observability stack, Tilt becomes a powerful orchestration layer for complex microservice-based systems.

This example demonstrates how many moving parts – registries, clusters, charts, namespaces, and monitoring – can be unified into a single, reproducible developer workflow. Whether you’re adding new services, experimenting with configurations, or onboarding new team members, this approach helps teams maintain consistency while moving quickly. 

For teams building modern cloud-native applications, particularly in regulated environments, adopting Tilt in this structured way can significantly improve the local developer experience without compromising on security, control or alignment with production.

Veselin Hristov, January 2026


Phi helps organisations design secure, production-aligned Kubernetes development workflows that scale across teams and environments. If you’re looking to improve local developer experience without compromising control or governance, we would be happy to help.

Contact us for further information on relevant services: sales@phipartners.com