Soon to be changed - a Helm chart is being made to handle this
The following set of files works with release 20.01 and later for a manual build and containerizing an already configured database. If you use local file authentication, you need to modify the Dockerfile to copy over your .htaccess file accordingly. If you are unsure what that is, I point back to the header at the top of this page - this is pre-release, work in progress. Don't distract the developers from actually completing the work by asking them how to do things that they are trying to automate in the first place.
You should migrate your /pictures and /drawings folders to shared storage, such as NFS, which is used in the example deployment.yaml file. You will then change your paths in the openDCIM Configuration tab to assets/pictures and assets/drawings, and then run the
Deploying to Kubernetes
Periodic (CRON) Jobs
Containers don't run the full set of background daemons, so there is no cron daemon running in the openDCIM container. However, Kubernetes has a built-in CronJob resource that lets you run periodic jobs. Here are some sample manifests, which assume that you store the configuration environment variables in a Secret named 'dcim'.
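One way to create that Secret (a sketch, reusing the OPENDCIM_* variable names and example values from the ConfigMap further down, and assuming the opendcim namespace used by the other manifests):

kubectl create secret generic dcim -n opendcim \
  --from-literal=OPENDCIM_DB_HOST=mysql \
  --from-literal=OPENDCIM_DB_NAME=dcim \
  --from-literal=OPENDCIM_DB_USER=dcim \
  --from-literal=OPENDCIM_DB_PASS=dcim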
poll-temp-sensors.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: poll-temp-sensors
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: opendcim-sensors
            image: opendcim/opendcim:20.01
            args:
            - /usr/bin/php
            - /var/www/html/poll_temperature_sensors.php
            envFrom:
            - secretRef:
                name: dcim
          restartPolicy: Never
poll-pdu-stats.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: poll-pdu-stats
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: opendcim-pdu
            image: opendcim/opendcim:20.01
            args:
            - /usr/bin/php
            - /var/www/html/poll_pdu_stats.php
            envFrom:
            - secretRef:
                name: dcim
          restartPolicy: Never
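To register the schedules, apply both manifests and check that the CronJobs show up; you can also fire a one-off Job from a CronJob to test it without waiting for the next tick. This assumes the same opendcim namespace as the rest of the examples:

kubectl apply -n opendcim -f poll-temp-sensors.yaml -f poll-pdu-stats.yaml
kubectl get cronjobs -n opendcim
# trigger a one-off test run and read its output
kubectl create job poll-temp-test --from=cronjob/poll-temp-sensors -n opendcim
kubectl logs job/poll-temp-test -n opendcim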
configmap.yaml
These are the environment variables that change the behavior of openDCIM. Note that because the Deployment below injects them with envFrom, the values are read when a container starts; after editing the ConfigMap, restart the pods (see below) to pick up the new values.
apiVersion: v1
kind: ConfigMap
metadata:
name: opendcim
namespace: opendcim
data:
OPENDCIM_DB_HOST: mysql
OPENDCIM_DB_NAME: dcim
OPENDCIM_DB_PASS: dcim
OPENDCIM_DB_USER: dcim
OPENDCIM_AUTH_METHOD: "LDAP"
OPENDCIM_DEVMODE: "FALSE"
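Because the values are captured at pod start, a sketch of pushing a ConfigMap change out to the running deployment (the rollout restart subcommand needs kubectl 1.15 or later):

kubectl apply -n opendcim -f configmap.yaml
kubectl rollout restart deployment/opendcim-deployment -n opendcim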
service.yaml
This defines a service so that it can be exposed through a LoadBalancer or, in this example, an ingress rule.
apiVersion: v1
kind: Service
metadata:
name: opendcim-svc
namespace: opendcim
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: opendcim
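As a quick sanity check before adding the ingress, you can forward a local port straight to the Service (this is only a test aid, not part of the deployment):

kubectl apply -f service.yaml
kubectl port-forward svc/opendcim-svc 8080:80 -n opendcim
# openDCIM should now answer on http://localhost:8080/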
ingress.yaml
Definition of the inbound rule for ingress to the service. Replace dcim.YOURDOMAIN.COM with the hostname you are using. This ingress rule assumes that you are running cert-manager for automatic certificate management. Adjust accordingly.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/cluster-issuer: ca-issuer
kubernetes.io/ingress.class: nginx
    nginx.org/ssl-services: opendcim-svc
name: opendcim-ingress
namespace: opendcim
spec:
rules:
- host: dcim.YOURDOMAIN.COM
http:
paths:
- backend:
serviceName: opendcim-svc
servicePort: 80
path: /
tls:
- hosts:
- dcim.YOURDOMAIN.COM
secretName: opendcim-tls
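After applying it, confirm the rule was admitted and that the host answers over TLS (the hostname here is still the placeholder from the manifest):

kubectl apply -f ingress.yaml
kubectl get ingress opendcim-ingress -n opendcim
curl -kI https://dcim.YOURDOMAIN.COM/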
cert.yaml
This only works if you have cert-manager installed and running in your Kubernetes cluster; otherwise, you have to follow the documentation on how to add the opendcim-tls secret the old-fashioned way.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: dcim.YOURDOMAIN.COM
namespace: opendcim
spec:
secretName: opendcim-tls
issuerRef:
name: ca-issuer
kind: ClusterIssuer
commonName: dcim.YOURDOMAIN.COM
dnsNames:
- dcim.YOURDOMAIN.COM
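To verify that cert-manager issued the certificate and populated the opendcim-tls secret referenced by the ingress:

kubectl apply -f cert.yaml
kubectl describe certificate dcim.YOURDOMAIN.COM -n opendcim
kubectl get secret opendcim-tls -n opendcim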
deployment.yaml
This is the main controller. A minimum of 2 replicas is suggested for some level of fault tolerance, but that is only useful if your MySQL/MariaDB database is a fault-tolerant cluster. There are plenty of examples online for how to set up a Galera cluster, including some in Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: opendcim
name: opendcim-deployment
spec:
replicas: 2
revisionHistoryLimit: 5
selector:
matchLabels:
app: opendcim
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: opendcim
spec:
containers:
- envFrom:
- configMapRef:
name: opendcim
image: MY_REPO/opendcim:latest
imagePullPolicy: Always
name: opendcim
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: "2"
memory: 2Gi
requests:
cpu: "1"
memory: 512Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/www/html/assets
name: opendcim-persistent-storage
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
volumes:
- name: opendcim-persistent-storage
nfs:
path: /opendcim-data
server: nfs-server.yourdomain.com
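Finally, apply the deployment and watch the rollout until both replicas are up; the namespace flag again assumes everything was created in opendcim:

kubectl apply -n opendcim -f deployment.yaml
kubectl rollout status deployment/opendcim-deployment -n opendcim
kubectl get pods -l app=opendcim -n opendcim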