Kubernetes

Overview

Check out our Kubernetes Chart Repository on GitHub and our published Helm Charts.

Quick-start

helm repo add flagsmith https://flagsmith.github.io/flagsmith-charts/
helm install -n flagsmith --create-namespace flagsmith flagsmith/flagsmith
kubectl -n flagsmith port-forward svc/flagsmith-frontend 8080:8080

Then view http://localhost:8080 in a browser. This will install using default options, in a new namespace flagsmith.
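
To sanity-check the install, you can list the pods (assuming the flagsmith namespace used above):

kubectl -n flagsmith get pods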

Ingress configuration

The above is a quick and simple way of gaining access to Flagsmith, but in many cases you will need to configure ingress to work with an ingress controller.
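
The snippets in the sections below are chart values. Assuming they are saved to a file named values.yaml, one way to apply them is:

helm upgrade --install flagsmith flagsmith/flagsmith -n flagsmith -f values.yaml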

Port forwarding

In a terminal, run:

kubectl -n [flagsmith-namespace] port-forward svc/[flagsmith-release-name]-frontend 8080:8080

Then access http://localhost:8080 in a browser.

In a cluster that has an ingress controller, using the frontend proxy

In this configuration, API requests are proxied by the frontend. This is simpler to configure, but introduces some latency.

Set the following values for flagsmith, with changes as needed to accommodate your ingress controller, and any associated DNS changes.

Eg:

ingress:
  frontend:
    enabled: true
    hosts:
      - host: flagsmith.[MYDOMAIN]
        paths:
          - /

Then, once any out-of-cluster DNS or CDN changes have been applied, access https://flagsmith.[MYDOMAIN] in a browser.
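
If TLS terminates at your ingress controller, the ingress.frontend.tls value (see the configuration table below) accepts standard Kubernetes ingress TLS entries. A sketch, assuming a pre-existing certificate secret named flagsmith-tls (a hypothetical name):

ingress:
  frontend:
    tls:
      - secretName: flagsmith-tls # hypothetical pre-existing TLS secret
        hosts:
          - flagsmith.[MYDOMAIN]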

In a cluster that has an ingress controller, using separate ingresses for frontend and api

Set the following values for flagsmith, with changes as needed to accommodate your ingress controller, and any associated DNS changes. Also, set the API_URL environment variable such that the URL is reachable from a browser accessing the frontend.

Eg:

ingress:
  frontend:
    enabled: true
    hosts:
      - host: flagsmith.[MYDOMAIN]
        paths:
          - /
  api:
    enabled: true
    hosts:
      - host: flagsmith.[MYDOMAIN]
        paths:
          - /api/
          - /health/

frontend:
  extraEnv:
    API_URL: 'https://flagsmith.[MYDOMAIN]/api/v1/'

Then, once any out-of-cluster DNS or CDN changes have been applied, access https://flagsmith.[MYDOMAIN] in a browser.

Minikube ingress

(See https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/ for more details.)

If using minikube, enable ingress with minikube addons enable ingress.

Then set the following values for flagsmith:

ingress:
  frontend:
    enabled: true
    hosts:
      - host: flagsmith.local
        paths:
          - /

and apply. This will create two ingress resources.

Run minikube ip. Add the resulting IP and flagsmith.local to your /etc/hosts, eg:

192.168.99.99 flagsmith.local
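
On Linux or macOS, one way to append that entry is:

echo "$(minikube ip) flagsmith.local" | sudo tee -a /etc/hosts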

Then access http://flagsmith.local in a browser.

Database configuration

By default, the chart creates its own PostgreSQL server within the cluster.

To connect the Flagsmith API to an external PostgreSQL server, set the values under databaseExternal, eg:

postgresql:
  enabled: false # turn off the chart-managed postgres

databaseExternal:
  enabled: true
  # Can specify the full URL
  url: 'postgres://myuser:mypass@myhost:5432/mydbname'
  # Or can specify each part (url takes precedence if set)
  type: postgres
  host: myhost
  port: 5432
  database: mydbname
  username: myuser
  password: mypass
  # Or can specify a pre-existing k8s secret containing the database URL
  urlFromExistingSecret:
    enabled: true
    name: my-precreated-db-config
    key: DB_URL
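
For the urlFromExistingSecret option, the referenced secret must exist before install. A sketch of creating it with kubectl, reusing the names from the example above:

kubectl -n flagsmith create secret generic my-precreated-db-config \
  --from-literal=DB_URL='postgres://myuser:mypass@myhost:5432/mydbname'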

Environment variables

The chart handles most of the required environment variables, but see the API readme for all available configuration options. These can be set using api.extraEnv, eg:

api:
  extraEnv:
    LOG_LEVEL: DEBUG

Resource allocation

By default, no resource limits or requests are set.
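
If you want to set some, api.resources and frontend.resources accept standard Kubernetes resource blocks. A sketch with illustrative numbers only (not tuned recommendations):

api:
  resources:
    requests:
      cpu: 300m # illustrative values only
      memory: 256Mi
    limits:
      memory: 512Mi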

TODO: recommend some defaults

Replicas

By default, 1 replica of each of the frontend and api is used.
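
To run more than one of each, set the replica counts, eg (illustrative counts, pending the TODOs below):

api:
  replicacount: 2
frontend:
  replicacount: 2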

TODO: recommend some defaults.

TODO: consider some autoscaling options.

TODO: create a pod-disruption-budget

InfluxDB

By default, Flagsmith uses InfluxDB to store time series data. Currently this is used to measure:

  • SDK API traffic
  • SDK Flag Evaluations

Setting up InfluxDB is discussed in more detail in the Docs.
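
To point Flagsmith at an InfluxDB instance not managed by this chart, set the influxdbExternal values (see the configuration table below) instead. A sketch with placeholder connection details:

influxdb:
  enabled: false # turn off the chart-managed influxdb

influxdbExternal:
  enabled: true
  url: 'https://influxdb.example.com' # placeholder
  bucket: default
  organization: influxdata
  token: mytoken # placeholder; tokenFromExistingSecret can reference a k8s secret instead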

PgBouncer

By default, Flagsmith connects directly to the database - either in-cluster, or external. You can enable PgBouncer with pgbouncer.enabled: true to have Flagsmith connect to PgBouncer, and PgBouncer connect to the database.
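
Eg:

pgbouncer:
  enabled: true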

All-in-one Docker image

The Docker image at https://hub.docker.com/r/flagsmith/flagsmith/ contains both the API and the frontend. To make use of this, set the following values:

api:
  image:
    repository: flagsmith/flagsmith # or some other repository hosting the combined image
    tag: 2.14 # or some other tag that exists in that repository
  separateApiAndFrontend: false

This switches off the Kubernetes deployment for the frontend. Its ingress and service are retained, but all requests are handled by the API deployment.

Configuration

The following table lists the configurable parameters of the chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| api.image.repository | docker image repository for flagsmith api | flagsmith/flagsmith-api |
| api.image.tag | docker image tag for flagsmith api | appVersion |
| api.image.imagePullPolicy | | IfNotPresent |
| api.image.imagePullSecrets | | [] |
| api.separateApiAndFrontend | Set to false if using flagsmith/flagsmith image for the api | true |
| api.replicacount | number of replicas for the flagsmith api | 1 |
| api.resources | resources per pod for the flagsmith api | {} |
| api.podLabels | additional labels to apply to pods for the flagsmith api | {} |
| api.extraEnv | extra environment variables to set for the flagsmith api | {} |
| api.nodeSelector | | {} |
| api.tolerations | | [] |
| api.affinity | | {} |
| api.podSecurityContext | | {} |
| api.defaultPodSecurityContext.enabled | whether to use the default security context | true |
| api.livenessProbe.failureThreshold | | 5 |
| api.livenessProbe.initialDelaySeconds | | 10 |
| api.livenessProbe.periodSeconds | | 10 |
| api.livenessProbe.successThreshold | | 1 |
| api.livenessProbe.timeoutSeconds | | 2 |
| api.readinessProbe.failureThreshold | | 10 |
| api.readinessProbe.initialDelaySeconds | | 10 |
| api.readinessProbe.periodSeconds | | 10 |
| api.readinessProbe.successThreshold | | 1 |
| api.readinessProbe.timeoutSeconds | | 2 |
| api.dbWaiter.image.repository | | willwill/wait-for-it |
| api.dbWaiter.image.tag | | latest |
| api.dbWaiter.image.imagePullPolicy | | IfNotPresent |
| api.dbWaiter.timeoutSeconds | Time before init container will retry | 30 |
| frontend.enabled | Whether the flagsmith frontend is enabled | true |
| frontend.image.repository | docker image repository for flagsmith frontend | flagsmith/flagsmith-frontend |
| frontend.image.tag | docker image tag for flagsmith frontend | appVersion |
| frontend.image.imagePullPolicy | | IfNotPresent |
| frontend.image.imagePullSecrets | | [] |
| frontend.replicacount | number of replicas for the flagsmith frontend | 1 |
| frontend.resources | resources per pod for the flagsmith frontend | {} |
| frontend.apiProxy.enabled | proxy API requests to the API service within the cluster | true |
| frontend.extraEnv | extra environment variables to set for the flagsmith frontend | {} |
| frontend.nodeSelector | | {} |
| frontend.tolerations | | [] |
| frontend.affinity | | {} |
| frontend.podSecurityContext | | {} |
| frontend.defaultPodSecurityContext.enabled | whether to use the default security context | true |
| frontend.livenessProbe.failureThreshold | | 20 |
| frontend.livenessProbe.initialDelaySeconds | | 20 |
| frontend.livenessProbe.periodSeconds | | 10 |
| frontend.livenessProbe.successThreshold | | 1 |
| frontend.livenessProbe.timeoutSeconds | | 10 |
| frontend.readinessProbe.failureThreshold | | 20 |
| frontend.readinessProbe.initialDelaySeconds | | 20 |
| frontend.readinessProbe.periodSeconds | | 10 |
| frontend.readinessProbe.successThreshold | | 1 |
| frontend.readinessProbe.timeoutSeconds | | 10 |
| postgresql.enabled | if true, creates in-cluster PostgreSQL database | true |
| postgresql.serviceAccount.enabled | creates a serviceaccount for the postgres pod | true |
| nameOverride | | flagsmith-postgres |
| postgresqlDatabase | | flagsmith |
| postgresqlUsername | | postgres |
| postgresqlPassword | | flagsmith |
| databaseExternal.enabled | use an external database. Specify database URL, or all parts. | false |
| databaseExternal.url | See https://github.com/kennethreitz/dj-database-url#url-schema | |
| databaseExternal.type | Note: Only postgres supported by default images. | postgres |
| databaseExternal.port | | 5432 |
| databaseExternal.database | Name of the database within the server | |
| databaseExternal.username | | |
| databaseExternal.password | | |
| databaseExternal.urlFromExistingSecret.enabled | Reference an existing secret containing the database URL | |
| databaseExternal.urlFromExistingSecret.name | Name of referenced secret | |
| databaseExternal.urlFromExistingSecret.key | Key within the referenced secret to use | |
| influxdb.enabled | | true |
| influxdb.nameOverride | | influxdb |
| influxdb.image.repository | docker image repository for influxdb | quay.io/influxdb/influxdb |
| influxdb.image.tag | docker image tag for influxdb | v2.0.2 |
| influxdb.image.imagePullPolicy | | IfNotPresent |
| influxdb.image.imagePullSecrets | | [] |
| influxdb.adminUser.organization | | influxdata |
| influxdb.adminUser.bucket | | default |
| influxdb.adminUser.user | | admin |
| influxdb.adminUser.password | | randomly generated |
| influxdb.adminUser.token | | randomly generated |
| influxdb.persistence.enabled | | false |
| influxdb.resources | resources per pod for the influxdb | {} |
| influxdb.nodeSelector | | {} |
| influxdb.tolerations | | [] |
| influxdb.affinity | | {} |
| influxdbExternal.enabled | Use an InfluxDB not managed by this chart | false |
| influxdbExternal.url | | |
| influxdbExternal.bucket | | |
| influxdbExternal.organization | | |
| influxdbExternal.token | | |
| influxdbExternal.tokenFromExistingSecret.enabled | Use reference to a k8s secret not managed by this chart | false |
| influxdbExternal.tokenFromExistingSecret.name | Referenced secret name | |
| influxdbExternal.tokenFromExistingSecret.key | Key within the referenced secret to use | |
| pgbouncer.enabled | | false |
| pgbouncer.image.repository | | bitnami/pgbouncer |
| pgbouncer.image.tag | | 1.16.0 |
| pgbouncer.image.imagePullPolicy | | IfNotPresent |
| pgbouncer.image.imagePullSecrets | | [] |
| pgbouncer.replicaCount | | 1 |
| pgbouncer.podAnnotations | | {} |
| pgbouncer.resources | | {} |
| pgbouncer.podLabels | | {} |
| pgbouncer.extraEnv | | {} |
| pgbouncer.nodeSelector | | {} |
| pgbouncer.tolerations | | [] |
| pgbouncer.affinity | | {} |
| pgbouncer.podSecurityContext | | {} |
| pgbouncer.securityContext | | {} |
| pgbouncer.defaultSecurityContext.enabled | | true |
| pgbouncer.defaultSecurityContext | | {} |
| pgbouncer.livenessProbe.failureThreshold | | 5 |
| pgbouncer.livenessProbe.initialDelaySeconds | | 5 |
| pgbouncer.livenessProbe.periodSeconds | | 10 |
| pgbouncer.livenessProbe.successThreshold | | 1 |
| pgbouncer.livenessProbe.timeoutSeconds | | 2 |
| pgbouncer.readinessProbe.failureThreshold | | 10 |
| pgbouncer.readinessProbe.initialDelaySeconds | | 1 |
| pgbouncer.readinessProbe.periodSeconds | | 10 |
| pgbouncer.readinessProbe.successThreshold | | 1 |
| pgbouncer.readinessProbe.timeoutSeconds | | 2 |
| service.influxdb.externalPort | | 8080 |
| service.api.type | | ClusterIP |
| service.api.port | | 8000 |
| service.frontend.type | | ClusterIP |
| service.frontend.port | | 8080 |
| ingress.frontend.enabled | | false |
| ingress.frontend.ingressClassName | | |
| ingress.frontend.annotations | | {} |
| ingress.frontend.hosts[].host | | chart-example.local |
| ingress.frontend.hosts[].paths | | [] |
| ingress.frontend.tls | | [] |
| ingress.api.enabled | | false |
| ingress.api.ingressClassName | | |
| ingress.api.annotations | | {} |
| ingress.api.hosts[].host | | chart-example.local |
| ingress.api.hosts[].paths | | [] |
| ingress.api.tls | | [] |

Development and contributing

Requirements

helm version > 3.0.2

To run locally

You can test and run the application locally on macOS using minikube like this:

# Install Docker for Desktop and then:

brew install minikube
minikube start --memory 8192 --cpus 4
helm install flagsmith --debug ./flagsmith
minikube dashboard

Test install Chart

Install Chart without building a package:

helm install flagsmith --debug ./flagsmith

Run template and check that Kubernetes resources are rendered:

helm template flagsmith flagsmith --debug -f flagsmith/values.yaml
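
Helm's built-in linter can also be run against the chart to catch issues before packaging or installing:

helm lint ./flagsmith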

Build chart package

To build chart package run:

helm package ./flagsmith