This Cybus Kubernetes guide provides a detailed procedure to help admins adjust the persistent volume content permissions to ensure a smooth upgrade to Connectware 1.5.0 in Kubernetes environments.
Important: Connectware 1.5.0 introduces a significant change to container privileges: containers are now started without root privileges. As a result, files on persistent volumes that were created by a different user than the one now accessing them cannot easily have their permissions changed. This upgrade therefore requires the admin to update the permissions manually.
For this procedure you need:

- kubectl configured with access to the target installation
- connectware in version 1.5.0
- connectware-agent in version 1.1.0
- The values.yaml file specific to your installation

Before you begin the upgrade process, ensure that you have made the necessary preparations and that all relevant stakeholders are involved.
The following steps give you an overview of what you need to do. See below for detailed step-by-step instructions.
- Create the kubernetes-job.yml file as described.
- Set the ownership of the volume content to 1000:0 and the permissions to 770.
- For the PostgreSQL volume, set the ownership to 70:0 and the permissions to 770.

Create a file named create-pvc-yaml.sh containing the following:

#!/usr/bin/env bash
#
# Creates a Kubernetes resource list of volumeMounts and volumes for Connectware (agent) deployments
#
function usage {
  printf "Usage:\n"
  printf "%s [--kubeconfig|-kc <kubeconfig_file>] [--context|-ctx <target_context>] [--namespace|-n <target_namespace>] [--no-external-agents] [--no-connectware]\n" "$0"
  exit 1
}

function argparse {
  while [ $# -gt 0 ]; do
    case "$1" in
      --kubeconfig|-kc)
        # a kube_config file for the Kubernetes cluster access
        export KUBE_CONFIG="${2}"
        shift
        ;;
      --context|-ctx)
        # a KUBE_CONTEXT for the Kubernetes cluster access
        export KUBE_CONTEXT="${2}"
        shift
        ;;
      --namespace|-n)
        # the Kubernetes cluster namespace to operate on
        export NAMESPACE="${2}"
        shift
        ;;
      --no-external-agents)
        export NO_EXTERNAL_AGENTS=true
        ;;
      --no-connectware)
        export NO_CONNECTWARE=true
        ;;
      *)
        printf "ERROR: Parameters invalid\n"
        usage
        ;;
    esac
    shift
  done
}

#
# init
export NO_EXTERNAL_AGENTS=false
export NO_CONNECTWARE=false
KUBECTL_BIN=$(command -v kubectl)
argparse "$@"
shopt -s expand_aliases

# Check for kubectl parameters and construct the ${KUBECTL_CMD} command to use
if [ -n "${KUBE_CONFIG}" ]; then
  KUBECONFIG_FILE_PARAM=" --kubeconfig=${KUBE_CONFIG}"
fi
if [ -n "${KUBE_CONTEXT}" ]; then
  KUBECONFIG_CTX_PARAM=" --context=${KUBE_CONTEXT}"
fi
if [ -n "${NAMESPACE}" ]; then
  NAMESPACE_PARAM=" -n${NAMESPACE}"
fi
KUBECTL_CMD=${KUBECTL_BIN}${KUBECONFIG_FILE_PARAM}${KUBECONFIG_CTX_PARAM}${NAMESPACE_PARAM}

volumes=$(${KUBECTL_CMD} get pvc -o name | sed -e 's|persistentvolumeclaim/||g')
valid_pvcs=""
volume_yaml=""
volumemounts_yaml=""

if [[ "${NO_CONNECTWARE}" == "false" ]]; then
  valid_pvcs=$(cat << EOF
system-control-server-data
certs
brokerdata-*
brokerlog-*
workbench
postgresql-postgresql-*
service-manager
protocol-mapper-*
EOF
)
fi

if [[ "${NO_EXTERNAL_AGENTS}" == "false" ]]; then
  # Add volumes from agent Helm chart
  pvc_volumes=$(${KUBECTL_CMD} get pvc -o name -l connectware.cybus.io/service-group=agent | sed -e 's|persistentvolumeclaim/||g')
  valid_pvcs="${valid_pvcs}
${pvc_volumes}"
fi

# Collect volumeMounts
for pvc in $volumes; do
  for valid_pvc in $valid_pvcs; do
    if [[ "$pvc" =~ $valid_pvc ]]; then
      volumemounts_yaml="${volumemounts_yaml}
        - name: $pvc
          mountPath: /mnt/connectware_$pvc"
      break
    fi
  done
done

# Collect volumes
for pvc in $volumes; do
  for valid_pvc in $valid_pvcs; do
    if [[ "$pvc" =~ $valid_pvc ]]; then
      volume_yaml="${volume_yaml}
      - name: $pvc
        persistentVolumeClaim:
          claimName: $pvc"
      break
    fi
  done
done

# print volumeMounts
echo
echo "Copy this as the \"volumeMounts:\" section:"
echo "######################################"
echo -n "        volumeMounts:"
echo "$volumemounts_yaml"

# print volumes
echo
echo "Copy this as the \"volumes:\" section:"
echo "######################################"
echo -n "      volumes:"
echo "$volume_yaml"
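A detail worth knowing about the script above: the entries in its valid_pvcs list (such as brokerdata-*) are evaluated by the `[[ "$pvc" =~ $valid_pvc ]]` test as bash regular expressions, not shell globs, so brokerdata-* matches any PVC name containing "brokerdata". A minimal sketch of that behavior:

```shell
# In a regex, "-*" means "zero or more hyphens", so the pattern
# effectively matches any name containing "brokerdata"
pvc="brokerdata-broker-0"
pattern="brokerdata-*"
if [[ "$pvc" =~ $pattern ]]; then
  echo "match"
else
  echo "no match"
fi
```

This is why stateful set suffixes such as -broker-0 are picked up without being listed explicitly.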
Ensure that kubectl points to the correct cluster and namespace:

kubectl config use-context <my-cluster>
kubectl config set-context --current --namespace <my-connectware-namespace>
Make the script executable and run it:

chmod u+x create-pvc-yaml.sh
./create-pvc-yaml.sh
Example output:
Copy this as the "volumeMounts:" section:
######################################
        volumeMounts:
        - name: brokerdata-broker-0
          mountPath: /mnt/connectware_brokerdata-broker-0
        - name: brokerdata-control-plane-broker-0
          mountPath: /mnt/connectware_brokerdata-control-plane-broker-0
        - name: brokerlog-broker-0
          mountPath: /mnt/connectware_brokerlog-broker-0
        - name: brokerlog-control-plane-broker-0
          mountPath: /mnt/connectware_brokerlog-control-plane-broker-0
        - name: certs
          mountPath: /mnt/connectware_certs
        - name: postgresql-postgresql-0
          mountPath: /mnt/connectware_postgresql-postgresql-0
        - name: protocol-mapper-agent-001-0
          mountPath: /mnt/connectware_protocol-mapper-agent-001-0
        - name: service-manager
          mountPath: /mnt/connectware_service-manager
        - name: system-control-server-data
          mountPath: /mnt/connectware_system-control-server-data
        - name: workbench
          mountPath: /mnt/connectware_workbench

Copy this as the "volumes:" section:
######################################
      volumes:
      - name: brokerdata-broker-0
        persistentVolumeClaim:
          claimName: brokerdata-broker-0
      - name: brokerdata-control-plane-broker-0
        persistentVolumeClaim:
          claimName: brokerdata-control-plane-broker-0
      - name: brokerlog-broker-0
        persistentVolumeClaim:
          claimName: brokerlog-broker-0
      - name: brokerlog-control-plane-broker-0
        persistentVolumeClaim:
          claimName: brokerlog-control-plane-broker-0
      - name: certs
        persistentVolumeClaim:
          claimName: certs
      - name: postgresql-postgresql-0
        persistentVolumeClaim:
          claimName: postgresql-postgresql-0
      - name: protocol-mapper-agent-001-0
        persistentVolumeClaim:
          claimName: protocol-mapper-agent-001-0
      - name: service-manager
        persistentVolumeClaim:
          claimName: service-manager
      - name: system-control-server-data
        persistentVolumeClaim:
          claimName: system-control-server-data
      - name: workbench
        persistentVolumeClaim:
          claimName: workbench
Create a file named kubernetes-job.yml containing the following:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: connectware-fix-permissions
  labels:
    app: connectware-fix-permissions
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
      - name: cybus-docker-registry
      containers:
      - image: registry.cybus.io/cybus/connectware-fix-permissions:1.5.0
        securityContext:
          runAsUser: 0
        imagePullPolicy: Always
        name: connectware-fix-permissions
        # Insert the volumeMounts section here
        # volumeMounts:
        # - name: brokerdata-broker-0
        #   mountPath: /mnt/connectware_brokerdata-broker-0
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
      # Insert the volumes section here
      # volumes:
      # - name: brokerdata-broker-0
      #   persistentVolumeClaim:
      #     claimName: brokerdata-broker-0
Edit kubernetes-job.yml and integrate the output of the create-pvc-yaml.sh script. Then scale down the Connectware and agent workloads:

kubectl scale sts,deploy -lapp.kubernetes.io/instance=<connectware-installation-name> --replicas 0
kubectl scale sts -lapp.kubernetes.io/instance=<connectware-agent-installation-name> --replicas 0
Apply the job to your cluster:

kubectl apply -f kubernetes-job.yml
Follow the job logs to check the progress:

kubectl logs -f job/connectware-fix-permissions
Example output:
Found directory: connectware_brokerdata-broker-0. Going to change permissions
Found directory: connectware_brokerdata-broker-1. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-1. Going to change permissions
Found directory: connectware_brokerlog-broker-0. Going to change permissions
Found directory: connectware_brokerlog-broker-1. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-1. Going to change permissions
Found directory: connectware_certs. Going to change permissions
Found directory: connectware_postgresql-postgresql-0. Going to change permissions
Postgresql volume identified, using postgresql specific permissions
Found directory: connectware_service-manager. Going to change permissions
Found directory: connectware_system-control-server-data. Going to change permissions
Found directory: connectware_workbench. Going to change permissions
All done. Found 13 volumes.
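The job's internal code is not published, but its effect on each mounted volume can be sketched as follows. The function name and the demo directory are illustrative; the numeric owners (1000:0, or 70:0 for the PostgreSQL volume) and mode 770 are the values stated above. Note that changing ownership to a foreign uid requires root, which is why the job runs with runAsUser: 0:

```python
# Sketch (not the actual job code): recursively apply owner 1000:0 and
# mode 770 (or owner 70:0 for the PostgreSQL volume) under a mount point.
import os
import stat
import tempfile

def fix_permissions(root: str, uid: int = 1000, gid: int = 0, mode: int = 0o770) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
        for path in paths:
            # os.chown(path, uid, gid)  # requires root; commented out for this demo
            os.chmod(path, mode)

# Demo on a throwaway directory
with tempfile.TemporaryDirectory() as d:
    sub = os.path.join(d, "data")
    os.makedirs(sub)
    open(os.path.join(sub, "state.db"), "w").close()
    fix_permissions(d)  # a PostgreSQL volume would use uid=70 instead
    print(oct(stat.S_IMODE(os.stat(sub).st_mode)))  # prints 0o770
```

After this, the non-root Connectware containers (uid 1000, group 0) can read and write the persisted data again.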
Once the job has completed, delete it:

kubectl delete -f kubernetes-job.yml
Delete the existing Connectware services before the upgrade:

kubectl delete svc -l app.kubernetes.io/instance=<connectware-installation-name>
Upgrade Connectware to 1.5.0 using Helm:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version 1.5.0 -f values.yaml
Result: You have successfully upgraded Connectware to 1.5.0.
create-pvc-yaml.sh
This script automatically generates the Kubernetes resource lists, specifically volumeMounts and volumes, for Connectware deployments on Kubernetes. It simplifies the preparation process for upgrading Connectware by identifying the relevant PersistentVolumeClaims and by accepting the optional parameters --kubeconfig, --context, and --namespace to specify the Kubernetes cluster and namespace. The script outputs sections that can be directly copied and pasted into the kubernetes-job.yml file, ensuring that the correct volumes are mounted with the appropriate permissions for the upgrade process.
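For example, based on the parameters the script itself defines (the context and namespace names here are placeholders):

```shell
# Generate the YAML sections for a specific cluster context and namespace,
# skipping the PVCs created by the separate agent Helm chart
./create-pvc-yaml.sh --context my-cluster --namespace connectware --no-external-agents
```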
kubernetes-job.yml
This YAML file defines a Kubernetes job responsible for adjusting the permissions of the volumes identified by the create-pvc-yaml.sh
script. The job is tailored to run with root privileges, enabling it to modify ownership and permissions of files and directories within PVCs that are otherwise inaccessible due to the permission changes introduced in Connectware 1.5.0.
The job’s purpose is to ensure that all persistent volumes used by Connectware become accessible to the new non-root container user, addressing the core upgrade challenge while keeping root containers out of the Connectware deployment itself.
- kubectl installed (Install Tools).
- kubectl configured with the current context pointing to your target cluster (Configure Access to Multiple Clusters).

Add the Cybus connectware-helm repository to your local Helm installation to use the connectware-agent Helm chart to install Connectware agents in Kubernetes:
helm repo add cybus https://repository.cybus.io/repository/connectware-helm
To verify that the Helm chart is available you can execute a Helm search:
helm search repo connectware-agent
NAME                     CHART VERSION   APP VERSION   DESCRIPTION
cybus/connectware-agent  1.0.0           1.1.5         Cybus Connectware standalone agents
As with all Helm charts, the connectware-agent
chart is configured using a YAML file. This file can have any name; however, we will refer to it as the values.yaml file.
Create this file to start configuring your agent installation by using your preferred editor:
vi values.yaml
To quickly install a single agent you only need to add your Connectware license key to your values.yaml file as the Helm value licenseKey
:
licenseKey: <your-connectware-license-key>
You can now use this file to deploy your Connectware agent in a Kubernetes namespace of your choice:
helm upgrade -i connectware-agent cybus/connectware-agent -f values.yaml -n <namespace>
Output
Release "connectware-agent" does not exist. Installing it now.
NAME: connectware-agent
LAST DEPLOYED: Mon Mar 13 14:31:39 2023
NAMESPACE: connectware
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for using the Cybus Connectware agent Helm chart!
For additional information visit https://cybus.io/
Number of agents: 1
--------------------
- agent
If any of these agents are new, please remember to visit Connectware's client registry to set up the connection to Connectware.
Hint: If you have agents stuck in a status other than "Running", you need to delete the stuck pods before a pod with your new configuration will be created.
This will start a single Connectware agent named “agent”, which will connect to a Connectware installation deployed in the same namespace. Unlock the Client Registry in your Connectware admin UI to connect this agent. Refer to Client Registry — Connectware documentation to learn how to use the Client Registry to connect agents.
You can repeat the same command to apply any changes to your values.yaml file configuration in the future.
If you are not deploying the agent in the same Kubernetes namespace, or even inside the same Kubernetes cluster, you need to specify the hostname under which Connectware is reachable for this agent.
In the default configuration, the following network ports on Connectware must be reachable for the agent:
Specify the hostname of Connectware to which the agent connects to by setting the Helm value connectwareHost
inside the protocolMapperAgentDefaults
context of your values.yaml file. For Connectware deployments in a different Kubernetes namespace this is “connectware.<namespace>”.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
To connect to a Connectware that uses the separate control-plane-broker, you need to set the Helm value controlPlaneBrokerEnabled
to true
inside the protocolMapperAgentDefaults
section of your values.yaml file.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
controlPlaneBrokerEnabled: true
Note: This adds TCP/1884 to required network ports.
You can use the agent chart to install multiple Connectware agents. Every agent you configure needs to be named using the Helm value name
in a collection entry inside the context protocolMapperAgents
. This way, the default name “agent” will be replaced by the name you give the agent.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
protocolMapperAgents:
- name: bender-robots
- name: welder-robots
This will deploy two Connectware agents, named “bender-robots” and “welder-robots”, both of which will contact the Client Registry of Connectware inside the Kubernetes namespace “cybus”, as described in Client Registry — Connectware documentation.
This quick start guide describes the steps to install the Cybus Connectware onto a Kubernetes cluster.
Please consult the article Installing Cybus Connectware for the basic requirements to run the software, like having access to the Cybus Portal to acquire a license key.
The following topics are covered by this article:
We assume that you are already familiar with the Cybus Portal and that you have obtained a license key or license file. Also see the prerequisites in the article Installing Cybus Connectware.
This guide does not introduce Kubernetes, Docker, containerization, or the related tooling. We expect the system admin to know their respective Kubernetes environment, which brings, besides well-known standards, a certain amount of specific custom complexity, e.g. the choice of load balancers, the management environment, or storage classes. These choices are up to the customer’s operations team and should not affect the reliability of Cybus Connectware deployed there, provided the requirements are met.
Besides a Kubernetes cluster the following tools and resources are required:
To start with Cybus Connectware on a Kubernetes cluster, use the prepared Helm chart and follow these steps:
helm repo add cybus https://repository.cybus.io/repository/connectware-helm
helm repo update
helm search repo connectware [-l]
Create a file named values.yaml. This file will be used to configure your installation of Connectware. Initially fill this file with this YAML content:

global:
  licensekey: <YOUR-CONNECTWARE-LICENSE-KEY>
  setImmutableLabels: true
  broker:
    clusterSecret: <SOME-RANDOM-SECRET-STRING>
If your cluster does not provide a default StorageClass that supports the ReadWriteOnce and ReadWriteMany access modes, also set the value global.storage.storageClassName to a StorageClass that does, by adding the following inside the global section:

  storage:
    storageClassName: "san-storage" # example value
We recommend extracting the chart’s default values into a file named default-values.yaml and checking if you want to make further adjustments. It is, for example, possible that you need to adjust the resource request/limit values for smaller test clusters. In this case copy and adjust the global.podResources section from default-values.yaml to values.yaml.

helm show values cybus/connectware > default-values.yaml
You can first perform a dry run of the installation to validate your configuration:

helm install <YOURDEPLOYMENTNAME> cybus/connectware -f ./values.yaml --dry-run --debug -n<YOURNAMESPACE> --create-namespace
Example
helm install connectware cybus/connectware -f ./values.yaml --dry-run --debug -ncybus --create-namespace
Then install Connectware:

helm install <YOURDEPLOYMENTNAME> cybus/connectware -f ./values.yaml -n<YOURNAMESPACE> --create-namespace

To apply changed values to an existing installation later, run:

helm upgrade <YOURDEPLOYMENTNAME> cybus/connectware -f ./values.yaml -n<YOURNAMESPACE>
When taking a look at the default-values.yaml file you should check out these important values within the global section:

- licensekey: the license key of the Connectware installation. This needs to be a production license key. This parameter is mandatory unless you set licenseFile.
- licenseFile: used to activate Connectware in offline mode. The content of a license file downloaded from the Cybus Portal has to be set (this is a single line of a base64-encoded JSON object).
- The image section defines the image source and version of the Connectware images.
- The broker section specifies MQTT broker related settings:
  - broker.clusterSecret: the authentication secret for the MQTT broker cluster. Note: The cluster secret for the broker is not a security feature. It is rather a cluster ID so that nodes do not connect to different clusters that might be running on the same network. Make sure that the controlPlaneBroker.clusterSecret is different from the broker.clusterSecret.
  - broker.replicaCount: the number of broker instances.
- The controlPlaneBroker section specifies settings for the optional internal MQTT broker:
  - controlPlaneBroker.clusterSecret: the authentication secret for this broker cluster. The same note as for broker.clusterSecret applies; the two secrets must differ.
  - controlPlaneBroker.replicaCount: the number of broker instances.
  - controlPlaneBroker is optional. To activate it, set controlPlaneBroker.enabled: true. This creates a second broker cluster that handles only internal communications within Connectware.
- The loadBalancer section allows pre-configuration for a specific load balancer.
- The podResources set of values allows you to configure the number of CPU and memory resources per pod. By default some starting point values are set, but depending on the particular use case they need to be tuned in relation to the expected load in the system, or reduced for test setups.
- The protocolMapperAgents section allows you to configure additional protocol-mapper instances in agent mode. See the documentation below for more details.

Helm allows setting values by both specifying a values file (using -f or --values) and the --set flag. When upgrading this chart to newer versions, you should use the same arguments for the Helm upgrade command to avoid conflicting values being set for the chart. This is especially important for the value of global.broker.clusterSecret: if it is not set to the same value used during install or upgrade, the broker nodes will not form the cluster correctly.
For more information about value merging, see the respective Helm documentation.
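For example, if the installation was customized with --set, repeat the identical flag on every upgrade (the release name, namespace, and secret value below are placeholders):

```shell
# Install and upgrade with identical value sources so the merged values stay stable
helm install connectware cybus/connectware -f values.yaml \
  --set global.broker.clusterSecret=my-cluster-secret -n cybus
helm upgrade connectware cybus/connectware -f values.yaml \
  --set global.broker.clusterSecret=my-cluster-secret -n cybus
```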
After following all the steps above Cybus Connectware is now installed. You can access the Admin UI by opening your browser and entering the Kubernetes application URL https://<external-ip>
with the initial login credentials:
Username: admin
Password: admin
To determine this data, the following kubectl command can be used:
kubectl get svc connectware --namespace=<YOURNAMESPACE> -o jsonpath={.status.loadBalancer.ingress}
Should this value be empty, your Kubernetes cluster load balancer might need further configuration, which is beyond the scope of this document. However, you can take a first look at Connectware by port-forwarding to your local machine:
kubectl --namespace=<YOURNAMESPACE> port-forward svc/connectware 10443:443 1883:1883 8883:8883
You can now access the admin UI at: https://localhost:10443/
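To quickly verify reachability while the port-forward is running, you can issue an HTTPS request; the -k flag skips certificate verification, assuming Connectware still uses a self-signed certificate at this point:

```shell
# Prints the HTTP status code if Connectware answers on the forwarded port
curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost:10443/
```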
If you would like to learn more about how to use Connectware, check out our docs at https://docs.cybus.io/ or see more guides here.
The Kubernetes version of Cybus Connectware comes with a Helm Umbrella chart, describing the instrumentation of the Connectware images for deployment in a Kubernetes cluster.
It is publicly available in the Cybus Repository for download or direct use with Helm.
Cybus Connectware expects a regular Kubernetes cluster and was tested for Kubernetes 1.22 or higher.
This cluster needs to be able to provide load-balancer ingress functionality and persistent volumes in ReadWriteOnce
and ReadWriteMany
access modes provided by a default StorageClass unless you specify another StorageClass using the global.storage.storageClassName
Helm value.
For Kubernetes 1.25 and above Connectware needs a privileged namespace or a namespace with PodSecurityAdmission labels for warn
mode. In case of specific boundary conditions and requirements in customer clusters, a system specification should be shared to evaluate them for secure and stable Cybus Connectware operations.
Connectware specifies default limits for CPU and memory in its Helm values that need to be at least fulfilled by the Kubernetes cluster for production use. Variations need to be discussed with Cybus, depending on the specific demands and requirements in the customer environment, e.g., the size of the broker cluster for the expected workload with respect to the available CPU cores and memory.
Smaller resource values are often enough for test or POC environments and can be adjusted using the global.podResources
section of the Helm values.
In order to run Cybus Connectware in Kubernetes clusters, two new RBAC roles are deployed through the Helm chart and will provide Connectware with the following namespace permissions:
| resource(/subresource)/action | permission |
|---|---|
| pods/list | list all containers, get status of all containers |
| pods/get, pods/watch | inspect containers |
| statefulsets/list | list all StatefulSets, get status of all StatefulSets |
| statefulsets/get, statefulsets/watch | inspect StatefulSets |
| resource(/subresource)/action | permission |
|---|---|
| pods/list | list all containers, get status of all containers |
| pods/get, pods/watch | inspect containers |
| pods/log/get, pods/log/watch | inspect containers, get a stream of container logs |
| deployments/create | create Deployments |
| deployments/delete | delete Deployments |
| deployments/update, deployments/patch | restart containers (since we rescale deployments) |
The system administrator needs to be aware of certain characteristics of the Connectware deployment:
- Connectware can be activated in offline mode by providing the license file content (see licenseFile above).
- For MetalLB-based load balancers, the address pool can be selected through global.loadBalancer.addressPoolName or by setting the metallb.universe.tf/address-pool annotation using the global.ingress.service.annotations Helm value.

The default-values.yaml file contains a protocolMapperAgents section representing a list of Connectware agents to deploy. The general configuration for these agents is the same as described in the Connectware documentation.
You can copy this section to your local values.yaml file to easily add agents to your Connectware installation.
The only required property of the list items is name; if only this property is specified, the chart assumes some defaults:

- The agent is deployed under the given name.
- The agent connects to the hostname connectware, which is the DNS name of Connectware in the same namespace.
- storageSize is set to 40 MB by default. The agents use some local storage which needs to be configured based on each use case. If a larger number of services is going to be deployed, this value should be specified and set to bigger values.

You can check out the comments of that section in default-values.yaml
to see further configuration options.
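For instance, to raise the storage size for one agent in your values.yaml (the agent name and the 1Gi value are only illustrative):

```yaml
protocolMapperAgents:
  - name: bender-robots
    storageSize: 1Gi # illustrative value; the chart default is 40 MB
```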
You can find further information in the general Connectware Agent documentation.