Helm Chart Anatomy¶
The agent project includes a Helm chart in chart/ that produces all the
Kubernetes resources needed to run on OpenShift. This page documents every
template, how values.yaml keys map to resource fields, and the patterns the
chart uses to handle rolling updates and optional sidecars.
Chart metadata¶
Chart.yaml identifies the chart to Helm.
```yaml
apiVersion: v2
name: ecosystem-test-agent
description: Helm chart for deploying a BaseAgent to OpenShift
version: 0.6.0
appVersion: 0.6.0
type: application
```
Chart name vs. your agent name
The chart is named ecosystem-test-agent because that is the template's
scaffold default. When you scaffold your own agent with
fips-agents create agent calculus-agent, the chart name matches your
project, and the template helpers (ecosystem-test-agent.fullname, etc.)
are named after your chart instead.
version tracks the chart itself. appVersion tracks the agent image and
appears in the app.kubernetes.io/version label on every resource.
Template helpers (_helpers.tpl)¶
The chart defines five named templates that other templates reference with
include. Understanding these is essential for reading the rest of the chart.
| Template | Output |
|---|---|
| `ecosystem-test-agent.name` | `.Chart.Name`, overridden by `nameOverride`. Truncated to 63 characters. |
| `ecosystem-test-agent.fullname` | `<release>-<name>`, overridden by `fullnameOverride`. Truncated to 63 characters. If the release name already contains the chart name, only the release name is used. |
| `ecosystem-test-agent.labels` | Full label set: chart version, selector labels, `app.kubernetes.io/version`, `app.kubernetes.io/managed-by`. |
| `ecosystem-test-agent.selectorLabels` | Minimal label set for `matchLabels`: `app.kubernetes.io/name` and `app.kubernetes.io/instance`. |
| `ecosystem-test-agent.chart` | `<name>-<version>` string for the `helm.sh/chart` label. |
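Other templates consume these helpers via `include`. The following is a representative sketch (not copied verbatim from the chart) of how a resource template typically uses them:

```yaml
# Sketch of a typical template header built from the chart's helpers
apiVersion: v1
kind: Service
metadata:
  name: {{ include "ecosystem-test-agent.fullname" . }}
  labels:
    {{- include "ecosystem-test-agent.labels" . | nindent 4 }}
spec:
  selector:
    {{- include "ecosystem-test-agent.selectorLabels" . | nindent 4 }}
```

Keeping `selectorLabels` minimal matters: a Deployment's `spec.selector` is immutable, so putting the version label there would break upgrades.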
Name override behavior¶
| `nameOverride` | `fullnameOverride` | Resulting fullname |
|---|---|---|
| `""` (default) | `""` (default) | `<release>-ecosystem-test-agent` |
| `"my-agent"` | `""` | `<release>-my-agent` |
| any | `"custom"` | `custom` |
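For example, to pin resource names regardless of the release name, you could set a `fullnameOverride` (the value here is hypothetical):

```yaml
# values.yaml -- hypothetical override
fullnameOverride: calculus-agent
```

All resources would then be named `calculus-agent` (with suffixes such as `-config` for the ConfigMap), whatever the Helm release is called.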
ConfigMap¶
Template: templates/configmap.yaml
Produces a ConfigMap named <fullname>-config. Every key under
values.config becomes a key-value pair in the ConfigMap's data section.
```yaml
# values.yaml
config:
  MODEL_ENDPOINT: https://vllm.apps.cluster.example.com/v1
  MODEL_NAME: meta-llama/Llama-3.3-70B-Instruct
  MAX_ITERATIONS: "50"
```
Produces:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-ecosystem-test-agent-config
data:
  MODEL_ENDPOINT: "https://vllm.apps.cluster.example.com/v1"
  MODEL_NAME: "meta-llama/Llama-3.3-70B-Instruct"
  MAX_ITERATIONS: "50"
```
All values are quoted in the template ({{ $value | quote }}), so numeric
strings like "50" are preserved correctly.
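The data section is most likely produced by a range loop over `.Values.config`; a sketch consistent with the quoting behavior described above (not the verbatim template):

```yaml
# Sketch of the ConfigMap data loop
data:
  {{- range $key, $value := .Values.config }}
  {{ $key }}: {{ $value | quote }}
  {{- end }}
```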
values.yaml keys¶
| Key | Type | Description |
|---|---|---|
| `config.<NAME>` | string | Injected as env var `<NAME>` into the agent container via `envFrom`. |
Prompts are not in ConfigMaps
Prompts, rules, and skills are baked into the container image, not injected via ConfigMaps. This provides version traceability -- the image SHA pins the exact prompt text. Only runtime-variable values (endpoints, model names, log levels) belong in the ConfigMap.
Deployment¶
Template: templates/deployment.yaml
This is the most complex template. It produces a Deployment with one required container (the agent) and one optional container (the code-execution sandbox).
Pod-level settings¶
```yaml
spec:
  replicas: {{ .Values.replicaCount }}
  # ...
  template:
    metadata:
      annotations:
        checksum/config: {{ include ... | sha256sum }}
    spec:
      securityContext:
        runAsNonRoot: true
```
| values.yaml key | Resource field | Default |
|---|---|---|
| `replicaCount` | `spec.replicas` | `1` |
The pod-level securityContext enforces runAsNonRoot: true, which satisfies
the OpenShift restricted-v2 SCC.
ConfigMap checksum annotation¶
```yaml
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```
This annotation contains a SHA-256 hash of the rendered ConfigMap. When a
ConfigMap value changes (e.g., you update MODEL_NAME in values.yaml), the
hash changes, which changes the pod template, which triggers a rolling update.
Without this annotation, updating only the ConfigMap would leave existing pods running with stale environment variables until they are manually restarted. Helm does not natively restart pods on ConfigMap changes -- this checksum pattern is the standard workaround.
Agent container¶
```yaml
containers:
  - name: agent
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - name: http
        containerPort: {{ .Values.service.port }}
```
| values.yaml key | Resource field | Default |
|---|---|---|
| `image.repository` | image (name portion) | `ecosystem-test-agent` |
| `image.tag` | image (tag portion) | `latest` |
| `image.pullPolicy` | `imagePullPolicy` | `IfNotPresent` |
| `service.port` | `containerPort` | `8080` |
| `resources.requests.cpu` | `resources.requests.cpu` | `100m` |
| `resources.requests.memory` | `resources.requests.memory` | `256Mi` |
| `resources.limits.cpu` | `resources.limits.cpu` | `500m` |
| `resources.limits.memory` | `resources.limits.memory` | `512Mi` |
The container's securityContext drops all capabilities and disallows
privilege escalation.
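The exact block is not reproduced here; assuming it mirrors the sandbox container's settings documented later on this page, it looks roughly like:

```yaml
# Sketch of the agent container's securityContext
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
```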
Environment injection¶
Environment variables reach the agent container through two paths:
- `envFrom` -- All keys from the ConfigMap are injected as env vars.
- `env` -- Additional variables from `values.env` (for Secret references or values outside the `config` section) and, when the sandbox is enabled, the `SANDBOX_URL` variable.
```yaml
# values.yaml -- referencing a Secret
env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: agent-secrets
        key: api-key
```
| values.yaml key | Resource field | Default |
|---|---|---|
| `env` | `spec.containers[agent].env` | `[]` |
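A sketch of how these two injection paths could be wired in the Deployment template (the indentation values and exact structure are assumptions, not the verbatim template):

```yaml
# Sketch -- ConfigMap injection plus optional extra env vars
envFrom:
  - configMapRef:
      name: {{ include "ecosystem-test-agent.fullname" . }}-config
env:
  {{- with .Values.env }}
  {{- toYaml . | nindent 2 }}
  {{- end }}
  {{- if .Values.sandbox.enabled }}
  - name: SANDBOX_URL
    value: "http://localhost:8000"
  {{- end }}
```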
Health probes¶
Probes are disabled by default. When enabled, the agent must expose /healthz
and /readyz endpoints on the service port.
| values.yaml key | Resource field | Default |
|---|---|---|
| `probes.enabled` | Controls presence of `livenessProbe`/`readinessProbe` | `false` |
When enabled, the probe configuration is:
| Probe | Path | Initial delay | Period |
|---|---|---|---|
| Liveness | `/healthz` | 10s | 30s |
| Readiness | `/readyz` | 5s | 10s |
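With `probes.enabled: true`, the rendered container spec would carry probe blocks consistent with the table above, roughly:

```yaml
# Sketch of the rendered probes (port "http" is the named container port)
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /readyz
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
```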
Sandbox sidecar (conditional)¶
When sandbox.enabled is true, a second container is added to the pod. This
container runs the code-execution sandbox, which the agent's code_executor
tool reaches at localhost:8000 (pods share a network namespace).
The entire sidecar block is wrapped in {{- if .Values.sandbox.enabled }}.
When disabled (the default), no sandbox container, volumes, or env vars are
added -- the Deployment produces a single-container pod.
What changes when sandbox.enabled: true¶
- A `sandbox` container is added to `spec.containers`.
- `SANDBOX_URL=http://localhost:8000` is injected into the agent container's `env`.
- A `sandbox-tmp` `emptyDir` volume is added to `spec.volumes` and mounted at `/tmp` in the sandbox container.
- The sandbox container gets its own liveness and readiness probes on port 8000.
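Enabling the sidecar is a values change; for example (the profile choice here is illustrative):

```yaml
# values.yaml
sandbox:
  enabled: true
  profile: data-science
```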
Sandbox container configuration¶
```yaml
- name: sandbox
  image: "{{ .Values.sandbox.image.repository }}:{{ .Values.sandbox.image.tag }}"
  env:
    - name: SANDBOX_PROFILE
      value: {{ .Values.sandbox.profile | quote }}
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop:
        - ALL
```
| values.yaml key | Resource field | Default |
|---|---|---|
| `sandbox.enabled` | Controls presence of the sidecar | `false` |
| `sandbox.profile` | `SANDBOX_PROFILE` env var | `minimal` |
| `sandbox.image.repository` | Sandbox container image name | `code-sandbox` |
| `sandbox.image.tag` | Sandbox container image tag | `latest` |
| `sandbox.image.pullPolicy` | Sandbox `imagePullPolicy` | `IfNotPresent` |
| `sandbox.resources.requests.cpu` | Sandbox CPU request | `100m` |
| `sandbox.resources.requests.memory` | Sandbox memory request | `128Mi` |
| `sandbox.resources.limits.cpu` | Sandbox CPU limit | `500m` |
| `sandbox.resources.limits.memory` | Sandbox memory limit | `256Mi` |
The sandbox container enforces readOnlyRootFilesystem: true. The only
writable path is the emptyDir at /tmp, capped at 10Mi.
Available profiles: minimal, data-science, financial, code-analysis.
The profile controls which imports are allowed and which scan stages run
inside the sandbox.
Seccomp profile (optional)¶
When sandbox.seccomp.enabled is true, a Localhost seccomp profile is
attached to the sandbox container.
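The attachment presumably takes the standard Kubernetes form; the profile path below is hypothetical:

```yaml
# Sketch -- Localhost seccomp profile on the sandbox container
securityContext:
  seccompProfile:
    type: Localhost
    localhostProfile: operator/agents/sandbox-profile.json  # hypothetical SPO path
```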
This profile blocks networking syscalls (socket, connect, bind) and
dangerous operations (ptrace, mount, io_uring) at the kernel level.
Prerequisites:
- Security Profiles Operator (SPO) installed on the cluster (GA since OCP 4.12).
- A custom SCC or SPO ProfileBinding that permits `Localhost` seccomp profiles (the default `restricted-v2` SCC only allows `RuntimeDefault`).
| values.yaml key | Resource field | Default |
|---|---|---|
| `sandbox.seccomp.enabled` | Controls presence of `seccompProfile` on the sandbox container | `false` |
Service¶
Template: templates/service.yaml
Produces a ClusterIP Service that routes traffic to pods matching the selector labels.
```yaml
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
```
| values.yaml key | Resource field | Default |
|---|---|---|
| `service.port` | `spec.ports[0].port` | `8080` |
The targetPort is the named port http defined in the Deployment's
container spec, so they stay in sync automatically.
Route¶
Template: templates/route.yaml
Produces an OpenShift Route. The entire template is wrapped in
{{- if .Values.route.enabled }}, so no Route is created by default.
```yaml
spec:
  to:
    kind: Service
    name: {{ include "ecosystem-test-agent.fullname" . }}
    weight: 100
  port:
    targetPort: http
  tls:
    termination: {{ .Values.route.tls.termination }}
    insecureEdgeTerminationPolicy: {{ .Values.route.tls.insecureEdgeTerminationPolicy }}
```
| values.yaml key | Resource field | Default |
|---|---|---|
| `route.enabled` | Controls whether the Route is created | `false` |
| `route.host` | `spec.host` (omitted if empty) | `""` |
| `route.tls.termination` | `spec.tls.termination` | `edge` |
| `route.tls.insecureEdgeTerminationPolicy` | `spec.tls.insecureEdgeTerminationPolicy` | `Redirect` |
When route.host is empty, OpenShift auto-generates a hostname from the Route
name and the cluster's wildcard domain (e.g.,
release-ecosystem-test-agent.apps.cluster.example.com).
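To expose the agent externally, enable the Route and optionally pin a hostname (values illustrative):

```yaml
# values.yaml
route:
  enabled: true
  host: ""   # leave empty to let OpenShift generate the hostname
```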
FIPS compatibility¶
No FIPS-specific chart configuration is required. The UBI base images ship
FIPS-aware OpenSSL and automatically respect the host kernel's fips=1 mode.
See the comments in values.yaml for validated behavior details.