Tablez — Local Development Guide

How to run the Tablez platform locally using vcluster on an existing k3s/k8s cluster.


Prerequisites

  • .NET 10 SDK
  • Docker
  • kubectl
  • vcluster CLI
  • Flux CLI
  • An existing Kubernetes cluster (k3s, minikube, etc.)
  • GitHub access to tablez-dev org

1. Clone All Repos

mkdir -p ~/repos/tablez && cd ~/repos/tablez
for repo in tablez-contracts tablez-reservation tablez-guest tablez-restaurant \
            tablez-notification tablez-ai tablez-api-gateway tablez-web \
            tablez-migration tablez-gitops tablez-docs; do
  gh repo clone tablez-dev/$repo
done

2. Create a vcluster

vcluster provides an isolated Kubernetes environment inside your existing cluster. Tablez gets its own control plane without interfering with other workloads.

Current Setup

The Tablez vcluster runs on the Dell k3s cluster (dell-stig):

| Resource | Value |
|---|---|
| Cluster | k3s v1.32.5, single node (dell-stig) |
| Access | Tailnet IP 100.95.212.93:6443 |
| vcluster name | tablez |
| Namespace | vcluster-tablez |
| Backing store | Dedicated etcd (deployed as StatefulSet) |
| vcluster version | 0.25.1 |

Create the vcluster

Use a dedicated etcd backing store. The default embedded SQLite (kine) is too slow for Flux controllers — API timeouts cause leader election loss and CrashLoopBackOff.

# Create with dedicated etcd (required for Flux)
cat > /tmp/vcluster-values.yaml <<EOF
controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true
        statefulSet:
          resources:
            requests:
              memory: 256Mi
              cpu: 200m
  statefulSet:
    resources:
      requests:
        memory: 512Mi
        cpu: 500m
      limits:
        memory: 1Gi
EOF

vcluster create tablez --namespace vcluster-tablez \
  --connect=false \
  -f /tmp/vcluster-values.yaml

This creates three pods: tablez-* (vcluster syncer + API server), tablez-etcd-0 (dedicated etcd), and coredns-* (DNS).
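A quick host-side check confirms the control plane came up. This is a sketch to run from the host cluster context (before `vcluster connect` switches it):

```shell
# From the HOST cluster context: expect tablez-* (syncer + API server),
# tablez-etcd-0, and a coredns-* pod, all Running.
pods=$(kubectl get pods -n vcluster-tablez 2>/dev/null) \
  || pods="(host cluster unreachable from this shell)"
echo "$pods"
```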

Connect to the vcluster

# Connect (switches kubectl context, opens port-forward)
vcluster connect tablez --namespace vcluster-tablez

# This terminal must stay open — it runs a port-forward to the vcluster API server
# Use a separate terminal for kubectl commands

Important: vcluster connect switches your kubectl context to the vcluster. To return to the host cluster:

# Switch back to host cluster
kubectl config use-context default

# Or disconnect cleanly
vcluster disconnect

Verify

# From a separate terminal (while vcluster connect is running)
kubectl get nodes    # Should show host node(s)
kubectl get ns       # Should show default, kube-system

3. Install Flux CD

Flux runs inside the vcluster, completely isolated from ArgoCD (or any other GitOps tool) on the host cluster.

Option A: Direct via ClusterIP (recommended)

Access the vcluster API via its ClusterIP service; this avoids port-forward instability.

# Get vcluster ClusterIP and certs
VC_IP=$(kubectl get svc tablez -n vcluster-tablez -o jsonpath='{.spec.clusterIP}')
CLIENT_CERT=$(kubectl get secret -n vcluster-tablez vc-tablez -o jsonpath='{.data.client-certificate}')
CLIENT_KEY=$(kubectl get secret -n vcluster-tablez vc-tablez -o jsonpath='{.data.client-key}')

# Create kubeconfig for the vcluster
cat > /tmp/vc-kube.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://${VC_IP}:443
  name: vcluster
contexts:
- context:
    cluster: vcluster
    user: vcluster-admin
  name: vcluster
current-context: vcluster
users:
- name: vcluster-admin
  user:
    client-certificate-data: $CLIENT_CERT
    client-key-data: $CLIENT_KEY
EOF
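Before installing Flux, a quick sanity check that the generated kubeconfig can reach the vcluster API:

```shell
# Namespaces inside the vcluster should include default and kube-system
check=$(kubectl --kubeconfig=/tmp/vc-kube.yaml get ns 2>/dev/null) \
  || check="vcluster API unreachable (re-check VC_IP and the vc-tablez secret fields)"
echo "$check"
```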

# Install Flux (source + kustomize + helm controllers)
flux install --components=source-controller,kustomize-controller,helm-controller \
  --kubeconfig=/tmp/vc-kube.yaml

# Create GitHub token secret for private repo access
kubectl --kubeconfig=/tmp/vc-kube.yaml create secret generic flux-system \
  -n flux-system \
  --from-literal=username=git \
  --from-literal=password="$GITHUB_TOKEN"

# Create GitRepository source
kubectl --kubeconfig=/tmp/vc-kube.yaml apply -f - <<EOF
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: tablez-gitops
  namespace: flux-system
spec:
  interval: 1m
  ref:
    branch: main
  url: https://github.com/tablez-dev/tablez-gitops
  secretRef:
    name: flux-system
EOF

# Create Kustomizations (infrastructure first, then apps)
kubectl --kubeconfig=/tmp/vc-kube.yaml apply -f - <<EOF
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: tablez-gitops
  path: ./infrastructure/overlays/local
  prune: true
  timeout: 3m
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: tablez-gitops
  path: ./apps/overlays/local
  prune: true
  timeout: 3m
EOF
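After applying, Flux reconciles infrastructure first, then apps (via dependsOn). A sketch that blocks until both are ready; kubectl wait works here because Flux sets a Ready condition on each Kustomization:

```shell
# Wait for both Kustomizations to reconcile, in dependency order
for ks in infrastructure apps; do
  if kubectl --kubeconfig=/tmp/vc-kube.yaml -n flux-system wait \
      --for=condition=Ready --timeout=5m "kustomization/$ks" >/dev/null 2>&1; then
    echo "reconciled: $ks"
  else
    echo "not ready yet: $ks (inspect with flux get kustomizations)"
  fi
done
```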

Option B: Via vcluster connect

vcluster connect tablez --namespace vcluster-tablez
# In a separate terminal:
export GITHUB_TOKEN=<your-token>
flux bootstrap github \
  --owner=tablez-dev \
  --repository=tablez-gitops \
  --branch=main \
  --path=clusters/local \
  --personal=false

Note: flux bootstrap may fail via port-forward if the vcluster uses embedded SQLite (kine). Use Option A with a dedicated etcd-backed vcluster instead.

What Flux deploys

Flux automatically deploys from the gitops repo:

- Infrastructure: PostgreSQL StatefulSet, Valkey Deployment, tablez namespace
- ARC: Controller + 6 runner scale sets (Docker-in-Docker) for CI builds
- Observability: OpenTelemetry Collector, Prometheus, Tempo, Loki, Grafana (observability namespace)
- Apps: All service Deployments + Services (reservation, guest, restaurant, notification, ai, api-gateway)


4. Create Secrets

# PostgreSQL credentials
kubectl create secret generic postgres-credentials \
  --namespace tablez \
  --from-literal=username=postgres \
  --from-literal=password=localdev123

# ghcr.io pull secret (required — packages are private)
# Use a GitHub PAT with read:packages scope
kubectl create secret docker-registry ghcr-credentials \
  --namespace tablez \
  --docker-server=ghcr.io \
  --docker-username=tablez-dev \
  --docker-password="$GITHUB_PAT"
# ARC runner secret (required for self-hosted GitHub Actions runners)
# Use a GitHub PAT with repo scope for tablez-dev org
kubectl create secret generic github-pat \
  --namespace arc-runners \
  --from-literal=github_token="$GITHUB_PAT"

Required secrets (the only manual step when moving to a new cluster):

| Secret | Namespace | Purpose | PAT scope |
|---|---|---|---|
| postgres-credentials | tablez | PostgreSQL auth | N/A (local password) |
| ghcr-credentials | tablez | Pull private container images | read:packages |
| github-pat | arc-runners | Register ARC runners + checkout code | repo |
| cloudflared-token | observability | Cloudflare Tunnel for grafana.invotek.no | N/A (Cloudflare tunnel token) |
| ghcr-auth | flux-system | Image reflector scans ghcr.io for new tags | read:packages (same PAT as github-pat) |
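A sketch that checks every manual secret from the table exists (kubectl context pointed at the vcluster):

```shell
# Verify the manually created secrets, namespace/name pairs from the table above
for entry in tablez/postgres-credentials tablez/ghcr-credentials \
             arc-runners/github-pat observability/cloudflared-token \
             flux-system/ghcr-auth; do
  ns=${entry%/*}; name=${entry#*/}
  kubectl get secret "$name" -n "$ns" >/dev/null 2>&1 \
    && echo "OK      $entry" || echo "MISSING $entry"
done
```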

5. Verify Deployment

# Check Flux reconciliation
flux get kustomizations

# Check pods
kubectl get pods -n tablez

# Check services
kubectl get svc -n tablez

6. Run Services Locally (without k8s)

For rapid development, run individual services directly:

# Start PostgreSQL (Docker)
docker run -d --name tablez-postgres \
  -e POSTGRES_PASSWORD=localdev123 \
  -e POSTGRES_DB=tablez_reservation \
  -p 5432:5432 \
  postgres:17

# Start Valkey (Docker)
docker run -d --name tablez-valkey \
  -p 6379:6379 \
  valkey/valkey:8

# Run reservation service
cd tablez-reservation
dotnet run --project Tablez.Reservation.Api

# Run guest service (separate terminal)
cd tablez-guest
dotnet run --project Tablez.Guest.Api

Default connection strings point to localhost when no config override is set.
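The defaults can be overridden with standard .NET configuration environment variables (double underscore maps to the `:` config separator). The ConnectionStrings__Marten key is the one used in the container example in section 9; the Valkey key name is an assumption — check the service's appsettings:

```shell
# Override defaults explicitly before running a service locally
export ConnectionStrings__Marten="Host=localhost;Database=tablez_reservation;Username=postgres;Password=localdev123"
export ConnectionStrings__Valkey="localhost:6379"   # key name assumed
dotnet run --project Tablez.Reservation.Api
```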


7. Port Forwarding (vcluster)

If services are running in vcluster, forward ports for local access:

kubectl port-forward -n tablez svc/reservation-api 8081:80 &
kubectl port-forward -n tablez svc/postgres 5432:5432 &
kubectl port-forward -n tablez svc/valkey 6379:6379 &

8. CI: Container Image Builds

Each service repo has a GitHub Actions workflow (.github/workflows/docker-build.yml) that builds and pushes to ghcr.io/tablez-dev/tablez-{service} on push to main. Images are tagged with latest and a sortable tag (main-<sha7>-<timestamp>) for Flux image automation.

Self-hosted runners (ARC)

All builds run on self-hosted ARC (Actions Runner Controller) runners on the Dell k3s cluster. Each repo has its own runner scale set with a manually defined Docker-in-Docker sidecar.

| Runner label | Repo | Resources (req/limit) |
|---|---|---|
| arc-linux-tablez-reservation | reservation | 1 CPU/2Gi — 4 CPU/8Gi |
| arc-linux-tablez-guest | guest | 1 CPU/2Gi — 4 CPU/8Gi |
| arc-linux-tablez-restaurant | restaurant | 500m/1Gi — 2 CPU/4Gi |
| arc-linux-tablez-notification | notification | 500m/1Gi — 2 CPU/4Gi |
| arc-linux-tablez-ai | ai | 500m/1Gi — 2 CPU/4Gi |
| arc-linux-tablez-api-gateway | api-gateway | 500m/1Gi — 2 CPU/4Gi |

Runner config lives in tablez-dev/tablez-gitops under infrastructure/base/arc-runners/ (HelmReleases managed by Flux). Runners scale from 0 to 3 per repo. The ARC controller is also Flux-managed (infrastructure/base/arc-system/).

Everything is self-contained in the vcluster — ARC controller, runner scale sets, apps, and infrastructure. To move to a new cluster: install Flux, point at tablez-gitops, create secrets (see section 4), done.

To add a new tablez service: create a HelmRelease in infrastructure/base/arc-runners/, add it to the kustomization, and push to tablez-gitops.

DinD DNS and network: host

ARC runners use Docker-in-Docker (DinD) for container builds. The DinD sidecar is defined manually in each HelmRelease (not via containerMode: dind) so we can pass --dns flags to dockerd.

Why: BuildKit (used by docker/build-push-action) creates its own bridge network inside DinD. This bridge network has a broken DNS resolver in k3s — it cannot reach external hosts like api.nuget.org. Three layers of DNS config were tried:

| Approach | Works? | Why |
|---|---|---|
| Pod dnsPolicy: None + dnsConfig | No | Only affects the pod's /etc/resolv.conf, not BuildKit's bridge network |
| dockerd --dns=8.8.8.8 | No | BuildKit ignores Docker daemon DNS settings (moby/buildkit#734) |
| network: host on build-push-action | Yes | BuildKit uses the DinD container's network (= pod network) directly, bypassing the bridge |

All service workflows use network: host:

- uses: docker/build-push-action@v6
  with:
    network: host    # Required — BuildKit bridge DNS is broken in k3s DinD
    context: .
    file: reservation/Dockerfile
    push: true

Security note: network: host here means BuildKit uses the pod's network namespace (CNI-managed), not the k8s node's host network. Build steps can reach cluster-internal services (CoreDNS, other pods). This is safe because builds only run trusted code (push to main + workflow_dispatch). If external contributors are ever added, this should be revisited.

Image automation (auto-deploy)

Flux image automation closes the CD loop — new images are automatically deployed without manual intervention.

Components (in infrastructure/base/image-automation/):

| Resource | Purpose |
|---|---|
| ImageRepository (×6) | Scan ghcr.io/tablez-dev/* every 5m for new tags |
| ImagePolicy (×6) | Select the tag with the highest timestamp (most recent build) |
| ImageUpdateAutomation (×1) | Commit updated tags to the gitops repo |

Tag format: main-<sha7>-<unix_timestamp> (e.g., main-a1b2c3d-1773128998). Pure SHA tags can't be sorted — the timestamp suffix provides ordering.

How tags are generated (in each service's docker-build.yml):

- name: Generate sortable tag
  id: tag
  run: |
    TIMESTAMP=$(date +%s)
    SHA=$(echo "${{ github.sha }}" | cut -c1-7)
    echo "sortable=main-${SHA}-${TIMESTAMP}" >> "$GITHUB_OUTPUT"
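The ordering property can be demonstrated offline: a numeric sort on the third '-'-separated field (the timestamp) recovers build order regardless of the SHA portion, which is exactly what the numerical ImagePolicy relies on. Tag values below are made up for illustration:

```shell
# Lexical order of SHAs is meaningless; numeric order of the trailing
# unix timestamp reflects build order.
tags="main-9f8e7d6-1773042598
main-a1b2c3d-1773128998
main-0c1d2e3-1773215398"

# Sort numerically on field 3 (split on '-'); the last line is the newest build
newest=$(printf '%s\n' "$tags" | sort -t- -k3,3n | tail -n1)
echo "$newest"
```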

Setter markers in deployment manifests tell Flux which image fields to update:

image: ghcr.io/tablez-dev/tablez-reservation:main-a1b2c3d-1773128998 # {"$imagepolicy": "flux-system:tablez-reservation"}
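For reference, the tag selection in each ImagePolicy presumably pairs a filterTags regex with a numerical policy; a sketch for one service (the exact field values in tablez-gitops may differ):

```yaml
# Sketch: extract the trailing unix timestamp and pick the numerically highest
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: tablez-reservation
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: tablez-reservation
  filterTags:
    pattern: '^main-[a-f0-9]+-(?P<ts>[0-9]+)$'
    extract: '$ts'
  policy:
    numerical:
      order: asc   # highest extracted value wins
```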

Required secrets:

| Secret | Namespace | Purpose |
|---|---|---|
| ghcr-auth | flux-system | Docker registry auth for image scanning (read:packages) |

The ghcr-auth secret uses the same PAT as github-pat in arc-runners.

Commit convention: Flux commits use chore(images): prefix, which does NOT trigger release-please releases (only feat:, fix:, perf: do).
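Once running, the whole loop can be inspected with the flux CLI (against the vcluster context):

```shell
# Shows ImageRepositories, ImagePolicies and ImageUpdateAutomations in one view
status=$(flux get images all 2>/dev/null) \
  || status="(flux CLI unavailable or no cluster context)"
echo "$status"
```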

Multi-repo Docker build (all services)

All services reference tablez-contracts (for Tablez.Observability) via ProjectReference. Every CI workflow checks out both repos side-by-side and uses a multi-context Docker build:

steps:
  - uses: actions/checkout@v4
    with:
      path: reservation              # service code in ./reservation/
  - uses: actions/checkout@v4
    with:
      repository: tablez-dev/tablez-contracts
      token: ${{ secrets.CONTRACTS_TOKEN }}
      path: tablez-contracts         # contracts in ./tablez-contracts/
  - uses: docker/build-push-action@v6
    with:
      context: .                     # root context contains both dirs
      file: reservation/Dockerfile
      push: true
      network: host                  # required — see "DinD DNS" section above
      tags: ${{ steps.meta.outputs.tags }}

The Dockerfile copies both directories into the build context:

COPY tablez-contracts/ tablez-contracts/
COPY reservation/ reservation/
WORKDIR /src/reservation
RUN dotnet restore Tablez.Reservation.Api/Tablez.Reservation.Api.csproj

Required secrets

| Secret | Scope | Purpose |
|---|---|---|
| GITHUB_TOKEN | Automatic | Push images to ghcr.io |
| CONTRACTS_TOKEN | Org-level (tablez-dev, all repos) | Checkout private tablez-contracts repo |

CONTRACTS_TOKEN is set as an org-level secret on tablez-dev with visibility to all repos.

ProjectReference paths

In csproj files, contracts are referenced with ..\..\ (two levels up) to match the Docker build context layout:

<!-- From Tablez.Reservation.Api/Tablez.Reservation.Api.csproj -->
<ProjectReference Include="..\..\tablez-contracts\Tablez.Contracts\Tablez.Contracts.csproj" />

In Docker: from /src/reservation/Tablez.Reservation.Api/, ..\..\ resolves to /src/, so ..\..\tablez-contracts points at /src/tablez-contracts/.

Locally: the same path works when repos are cloned side-by-side under a common parent (e.g., ~/repos/tablez/).


9. Build Images Locally

For testing container builds locally:

cd tablez-reservation
docker build -t tablez-reservation:dev .
docker run -p 8080:8080 \
  -e ConnectionStrings__Marten="Host=host.docker.internal;Database=tablez_reservation;Username=postgres;Password=localdev123" \
  tablez-reservation:dev

Project References

Services reference tablez-contracts as a local project (not NuGet) during development:

<!-- In any service .csproj -->
<ProjectReference Include="../../tablez-contracts/Tablez.Contracts/Tablez.Contracts.csproj" />

This is already configured in tablez-reservation and tablez-guest. When publishing to production, contracts will be distributed as a NuGet package.


10. vCluster Platform Dashboard

A web UI for managing the tablez vcluster, accessible at https://vcluster.invotek.no.

| Resource | Value |
|---|---|
| URL | https://vcluster.invotek.no |
| Auth | Cloudflare Access (Zero Trust) — invotekas@gmail.com only |
| Platform login | admin / set during install |
| Version | vCluster Platform 4.7.1 (Free tier) |
| License email | post@stigjohnny.no |
| Installed on | Host k3s cluster (vcluster-platform namespace) |
| Tunnel | Cloudflare Tunnel (invotek-dell) in invotek namespace |

How it works

Browser → Cloudflare Access (email gate) → Cloudflare Tunnel → cloudflared pod (invotek ns)
  → loft service (vcluster-platform ns) → vCluster Platform dashboard

Setup (one-time, already done)

# 1. Install platform
vcluster platform start

# 2. Import existing vcluster
vcluster platform add vcluster tablez --namespace vcluster-tablez --project default

# 3. Set custom domain and disable built-in tunnel
helm upgrade loft vcluster-platform -n vcluster-platform \
  --repo https://charts.loft.sh --reuse-values \
  --set config.loftHost=vcluster.invotek.no \
  --set tunnel.disabled=true

# 4. Deploy cloudflared connector (see cluster-gitops/apps/vcluster-platform/)
# 5. Cloudflare Access policy is managed via Terraform (tablez-gitops/terraform/access.tf)

GitOps

Cloudflared connector is managed in Stig-Johnny/cluster-gitops (private) → apps/vcluster-platform/. The tunnel token is a manual secret (cloudflared-invotek-token in invotek namespace).


11. Observability (Grafana + OpenTelemetry)

The LGTM stack (Loki, Grafana, Tempo, Prometheus) is deployed automatically by Flux in the observability namespace. All services export telemetry via OTLP to the OpenTelemetry Collector.

Access Grafana

Via browser (recommended): https://grafana.invotek.no (Cloudflare Zero Trust — invotekas@gmail.com only)

Via port-forward (fallback):

kubectl port-forward -n observability svc/grafana 3000:80

# Open http://localhost:3000
# Login: admin / tablez-local

Verify the stack

kubectl get pods -n observability
# Should show: grafana, prometheus, tempo, loki, otel-collector

kubectl get helmreleases -n observability
# All should show "Ready"

Service instrumentation

Each service references Tablez.Observability from tablez-contracts and adds two lines in Program.cs:

builder.Services.AddTablezObservability("Reservation");
builder.Logging.AddTablezLogging();

This configures tracing (ASP.NET Core, HttpClient, SignalR, MediatR, Marten), metrics (.NET runtime, HTTP), and structured log export — all via OTLP to the Collector.

Environment variable override

Services use OTEL_EXPORTER_OTLP_ENDPOINT to locate the Collector. Default is http://otel-collector.observability:4317 (in-cluster). For local development outside k8s:

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
kubectl port-forward -n observability svc/otel-collector 4317:4317

Useful Commands

# Rebuild Marten projections (when schema changes)
# Run against the service directly — Marten handles it on startup

# Check Flux sync status
flux get sources git
flux get kustomizations

# Disconnect from vcluster
vcluster disconnect

# Delete vcluster (clean up)
vcluster delete tablez --namespace vcluster-tablez

Troubleshooting

| Issue | Fix |
|---|---|
| Marten connection refused | Ensure PostgreSQL is running and the connection string matches |
| Flux not reconciling | flux reconcile kustomization apps --with-source |
| Pod CrashLoopBackOff | Check logs: kubectl logs -n tablez deployment/reservation-api |
| vcluster won't connect | Ensure the host cluster is running: kubectl get nodes |
| PVC unbound on vcluster create | Needs a StorageClass with Immediate binding mode (e.g., nfs-csi), or disable persistence |
| kubectl EOF errors | Context may still point at a vcluster port-forward that is no longer running; run kubectl config use-context default |
| TLS certificate error (remote cluster) | If accessing k3s via a Tailnet IP not in the cert SANs, add insecure-skip-tls-verify: true to the kubeconfig and remove certificate-authority-data. Long-term fix: add the IP to the k3s --tls-san flag |
| Flux controllers CrashLoopBackOff | Usually leader-election timeout from slow kine/SQLite; recreate the vcluster with dedicated etcd (backingStore.etcd.deploy.enabled: true) |
| "too many open files" in vcluster | Increase inotify limits on the host: echo 'fs.inotify.max_user_watches=524288' >> /etc/sysctl.d/99-vcluster.conf && sysctl -p /etc/sysctl.d/99-vcluster.conf |
| ImagePullBackOff on service pods | Container images haven't been built yet; push to main to trigger CI, or build locally with docker build and push to ghcr.io |
| CI job stuck on "Waiting for a runner" | ARC runner scale set not synced. Check Flux: flux get helmreleases -n arc-runners; verify runner pods: kubectl get pods -n arc-runners |