This is a work-in-process Page for features that are not yet released
The following set of files works with codebase 18.02 and later for a manual build, containerizing an already configured database. If you use local file authentication, you need to modify the Dockerfile to copy over your .htaccess file accordingly. If you are unsure what that means, refer back to the header at the top of this page - this is pre-release, work-in-process. Don't distract the developers from actually completing work by asking them how to do things that they are trying to automate in the first place.
You should migrate your /pictures and /drawings folders to shared storage, such as NFS, which is used in the example deployment.yaml file. You will then change your paths in the openDCIM Configuration tab to assets/pictures and assets/drawings, and then run the
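As a rough sketch of that migration - assuming your current install keeps the folders under the web root, and using the same NFS export that deployment.yaml mounts further down (the mount point /mnt/opendcim-data is just a placeholder):

# Mount the NFS export that deployment.yaml will also use
mount -t nfs nfs-server.yourdomain.com:/opendcim-data /mnt/opendcim-data
# Copy the existing folders onto the share, preserving ownership and timestamps
mkdir -p /mnt/opendcim-data/pictures /mnt/opendcim-data/drawings
cp -a /var/www/html/pictures/. /mnt/opendcim-data/pictures/
cp -a /var/www/html/drawings/. /mnt/opendcim-data/drawings/

Because the deployment mounts the share at /var/www/html/assets, the pictures and drawings directories at the root of the share line up with the assets/pictures and assets/drawings paths set in the Configuration tab.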
Building your Container
Dockerfile
Modify for your locale.
FROM ubuntu:18.04
RUN apt-get update
COPY tzscript.sh /
RUN /tzscript.sh
RUN apt-get -y install mariadb-client libapache2-mod-webauthldap apache2 php php-mbstring php-snmp php-gd php-mysql php-zip php-gettext locales graphviz && rm -rf /var/lib/apt/lists/* && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 && a2enmod rewrite authnz_ldap && rm /var/www/html/index.html
ENV LANG en_US.utf8
COPY dcim/ /var/www/html/
COPY dcim/db.inc.php-dist /var/www/html/db.inc.php
COPY 000-default.conf /etc/apache2/sites-available/
COPY php.ini /etc/php/7.2/apache2/
RUN mkdir -p /var/www/html/vendor/mpdf/ttfontdata && mkdir -p /var/www/html/assets && chown -R www-data:www-data /var/www/html && chmod 775 /var/www/html/assets /var/www/html/pictures /var/www/html/drawings /var/www/html/vendor/mpdf/ttfontdata
CMD apachectl -D FOREGROUND
tzscript.sh
Modify for your timezone.
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-get install -y tzdata
ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime
dpkg-reconfigure --frontend noninteractive tzdata
php.ini
We'll have a full php.ini file when we tidy all of this up. Here are the few important bits modified from a standard distribution php.ini file.
max_execution_time = 180
max_input_time = 60
memory_limit = 1024M
post_max_size = 16M
file_uploads = On
upload_max_filesize = 16M
000-default.conf
Leave the logfile definitions as-is. The Dockerfile creates symlinks from them to /dev/stdout so that standard container logging includes the apache2 logs (if your Dockerfile does not already do this, see the sketch after the config below).
<VirtualHost *:80>
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
<Directory "/var/www/html">
AllowOverride All
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
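If your Dockerfile does not already include the log symlink step, the usual pattern (borrowed from the official Docker httpd/php images) is a RUN line along these lines - treat it as a sketch, not part of the released files:

RUN ln -sf /dev/stdout /var/log/apache2/access.log && ln -sf /dev/stderr /var/log/apache2/error.log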
Once you have those files, you can run (substitute MY_REPO with your repository information):
$ docker build . -t MY_REPO/opendcim:latest
$ docker push MY_REPO/opendcim:latest
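To smoke-test the image locally before or after pushing, you can run it directly with Docker; the port mapping and container name here are arbitrary examples. Without a reachable database the application itself will complain, but this at least confirms that Apache and PHP start inside the container.

$ docker run --rm -d -p 8080:80 --name opendcim-test MY_REPO/opendcim:latest
$ curl -I http://localhost:8080/
$ docker stop opendcim-test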
Deploying to Kubernetes
configmap.yaml
These are the environment variables that change the behavior of openDCIM. Because the deployment injects them as environment variables (via envFrom), changes to the ConfigMap are only read when a container starts; after editing it, recreate the pods (for example by deleting them so the deployment replaces them) for the new values to take effect in the running containers.
apiVersion: v1
kind: ConfigMap
metadata:
  name: opendcim
  namespace: opendcim
data:
  OPENDCIM_DB_HOST: mysql
  OPENDCIM_DB_NAME: dcim
  OPENDCIM_DB_PASS: dcim
  OPENDCIM_DB_USER: dcim
  OPENDCIM_AUTH_METHOD: "LDAP"
  OPENDCIM_DEBUG: "FALSE"
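None of the files shown here create the opendcim namespace itself, so assuming it does not exist yet, create it first and then apply the ConfigMap:

$ kubectl create namespace opendcim
$ kubectl apply -f configmap.yaml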
service.yaml
This defines a service so that it can be exposed through a LoadBalancer or, in this example, an ingress rule.
apiVersion: v1
kind: Service
metadata:
  name: opendcim-svc
  namespace: opendcim
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: opendcim
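Once this service and the deployment further down are applied, you can sanity-check openDCIM without any ingress in place by port-forwarding to the service; the local port 8080 is an arbitrary choice, and the curl should be run from a second terminal:

$ kubectl apply -f service.yaml
$ kubectl -n opendcim port-forward svc/opendcim-svc 8080:80
$ curl -I http://localhost:8080/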
ingress.yaml
Definition of the inbound rule for ingress to the service. Swap out dcim.YOURDOMAIN.COM with the hostname you are using. This ingress rule assumes that you are running cert-manager for automatic certificate management; adjust accordingly.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
    nginx.org/ssl-services: opendcim-svc
  name: opendcim-ingress
  namespace: opendcim
spec:
  rules:
  - host: dcim.YOURDOMAIN.COM
    http:
      paths:
      - backend:
          serviceName: opendcim-svc
          servicePort: 80
        path: /
  tls:
  - hosts:
    - dcim.YOURDOMAIN.COM
    secretName: opendcim-tls
cert.yaml
This only works if you have cert-manager installed and running in your Kubernetes cluster; otherwise you have to create the opendcim-tls secret the old-fashioned way (see the example after the manifest below).
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: dcim.YOURDOMAIN.COM
  namespace: opendcim
spec:
  secretName: opendcim-tls
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: dcim.YOURDOMAIN.COM
  dnsNames:
  - dcim.YOURDOMAIN.COM
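If you are not running cert-manager, a minimal sketch of creating the opendcim-tls secret by hand instead, assuming you already have a certificate and key on disk (tls.crt and tls.key are placeholder file names):

$ kubectl -n opendcim create secret tls opendcim-tls --cert=tls.crt --key=tls.key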
deployment.yaml
This is the main controller. A minimum of 2 replicas is suggested for some level of fault tolerance, but that is only useful if your MySQL/MariaDB database is itself a fault-tolerant cluster. There are plenty of examples online for how to set up a Galera cluster, including some in Kubernetes.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: opendcim
  name: opendcim-deployment
  namespace: opendcim
spec:
  replicas: 2
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: opendcim
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: opendcim
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: opendcim
        image: MY_REPO/opendcim:latest
        imagePullPolicy: Always
        name: opendcim
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /var/www/html/assets
          name: opendcim-persistent-storage
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
      - name: opendcim-persistent-storage
        nfs:
          path: /opendcim-data
          server: nfs-server.yourdomain.com
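With all of the files in place, the remaining manifests can be applied and the rollout checked in one pass (skip cert.yaml if you created the opendcim-tls secret by hand); the file names here match the headings above:

$ kubectl apply -f service.yaml -f ingress.yaml -f cert.yaml -f deployment.yaml
$ kubectl -n opendcim rollout status deployment/opendcim-deployment
$ kubectl -n opendcim get pods,svc,ingress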