This Cybus Kubernetes guide provides a detailed procedure to help admins adjust the persistent volume content permissions to ensure a smooth upgrade to Connectware 1.5.0 in Kubernetes environments.

Important: Connectware 1.5.0 introduces a significant change regarding container privileges: containers are now started without root privileges. As a consequence, files on persistent volumes that were created by a different user (typically root) can no longer have their permissions changed by the containers that access them. You must therefore update these permissions manually as part of the upgrade.


Overview of the Upgrade

Before you begin the upgrade process, ensure that you have made the necessary preparations and that all relevant stakeholders are involved.

The following steps give you an overview of what you need to do. See below for detailed step-by-step instructions.

  1. Create the YAML snippets: Create YAML snippets for each PersistentVolumeClaim used by Connectware. This step is essential for identifying the volumes that need permission adjustments.
  2. Filter volumes: Carefully review the created snippets to ensure that only related volumes are included. This is critical to avoid unnecessary modifications and potential disruptions.
  3. Update the Kubernetes job file: Integrate the snippets into the kubernetes-job.yml file as described.
  4. Shut down Connectware: Shut down all Connectware and agent pods to prevent access conflicts during the permissions update. This ensures that the permission changes are applied without interference.
  5. Run the Kubernetes job: Run the job to adjust permissions on all relevant volumes. This action addresses the core challenge posed by the upgrade and ensures that the new, non-root containers can access the files they need.
  6. Delete Kubernetes services: Remove all services used by Connectware in preparation for the new version. This step is necessary to accommodate updates that are incompatible with the existing service configurations.
  7. Apply Helm upgrade: Proceed with the Helm upgrade to Connectware version 1.5.0. This step completes the upgrade process and implements the new security measures and permissions model.


Upgrading Procedure

1. Creating YAML Snippets for relevant PersistentVolumeClaim names

#!/usr/bin/env bash
#
# Creates a Kubernetes resource list of volumeMounts and volumes for Connectware (agent) deployments
#
function usage {
    printf "Usage:\n"
    printf "%s [--kubeconfig|-kc <kubeconfig_file>] [--context|-ctx <target_context>] [--namespace|-n <target_namespace>] [--no-external-agents] [--no-connectware]\n" "$0"
    exit 1
}

function argparse {

  while [ $# -gt 0 ]; do
    case "$1" in
      --kubeconfig|-kc)
        # a kube_config file for the Kubernetes cluster access
        export KUBE_CONFIG="${2}"
        shift
        ;;
      --context|-ctx)
        # a KUBE_CONTEXT for the Kubernetes cluster access
        export KUBE_CONTEXT="${2}"
        shift
        ;;
      --namespace|-n)
        # the Kubernetes cluster namespace to operate on
        export NAMESPACE="${2}"
        shift
        ;;
      --no-external-agents)
        export NO_EXTERNAL_AGENTS=true
        shift
        ;;
      --no-connectware)
        export NO_CONNECTWARE=true
        shift
        ;;
      *)
        printf "ERROR: Parameters invalidn"
        usage
    esac
    shift
  done
}

#
# init
export NO_EXTERNAL_AGENTS=false
export NO_CONNECTWARE=false
export KUBECTL_BIN=$(command -v kubectl)

argparse "$@"

shopt -s expand_aliases
# Check for kubectl parameters and construct the ${KUBECTL_CMD} command to use
if [ -n "${KUBE_CONFIG}" ]; then
  KUBECONFIG_FILE_PARAM=" --kubeconfig=${KUBE_CONFIG}"
fi
if [ -n "${KUBE_CONTEXT}" ]; then
  KUBECONFIG_CTX_PARAM=" --context=${KUBE_CONTEXT}"
fi
if [ -n "${NAMESPACE}" ]; then
  NAMESPACE_PARAM=" -n${NAMESPACE}"
fi

KUBECTL_CMD=${KUBECTL_BIN}${KUBECONFIG_FILE_PARAM}${KUBECONFIG_CTX_PARAM}${NAMESPACE_PARAM}

volumes=$(${KUBECTL_CMD} get pvc -o name | sed -e 's|persistentvolumeclaim/||g')
valid_pvcs=""
volume_yaml=""
volumemounts_yaml=""

if [[ "${NO_CONNECTWARE}" == "false" ]]; then
  valid_pvcs=$(cat << EOF
system-control-server-data
certs
brokerdata-*
brokerlog-*
workbench
postgresql-postgresql-*
service-manager
protocol-mapper-*
EOF
)
fi

if [[ "${NO_EXTERNAL_AGENTS}" == "false" ]]; then
  # Add volumes from agent Helm chart
  pvc_volumes=$(${KUBECTL_CMD} get pvc -o name -l connectware.cybus.io/service-group=agent | sed -e 's|persistentvolumeclaim/||g')
  valid_pvcs="${valid_pvcs}
  ${pvc_volumes}"
fi

# Collect volumeMounts
for pvc in $volumes; do
  for valid_pvc in $valid_pvcs; do
      if [[ "$pvc" =~ $valid_pvc ]]; then
        volumemounts_yaml="${volumemounts_yaml}
          - name: $pvc
            mountPath: /mnt/connectware_$pvc"
        break
      fi
  done
done

# Collect volumes
for pvc in $volumes; do
  for valid_pvc in $valid_pvcs; do
      if [[ "$pvc" =~ $valid_pvc ]]; then
        volume_yaml="${volume_yaml}
        - name: $pvc
          persistentVolumeClaim:
            claimName: $pvc"
        break
      fi
  done
done


# print volumeMounts
echo
echo "Copy this as the "volumeMounts:" section:"
echo "######################################"
echo -n "        volumeMounts:"
echo "$volumemounts_yaml"

# print volumes
echo
echo "Copy this as the "volumes:" section:"
echo "######################################"
echo -n "      volumes:"
echo "$volume_yaml"
Save this script as create-pvc-yaml.sh. Then set your kubectl context and namespace to the Connectware installation that you want to upgrade:

kubectl config use-context <my-cluster>
kubectl config set-context --current --namespace <my-connectware-namespace>

Make the script executable and run it:

chmod u+x create-pvc-yaml.sh
./create-pvc-yaml.sh

Example output:

Copy this as the "volumeMounts:" section:
######################################
        volumeMounts:
          - name: brokerdata-broker-0
            mountPath: /mnt/connectware_brokerdata-broker-0
          - name: brokerdata-control-plane-broker-0
            mountPath: /mnt/connectware_brokerdata-control-plane-broker-0
          - name: brokerlog-broker-0
            mountPath: /mnt/connectware_brokerlog-broker-0
          - name: brokerlog-control-plane-broker-0
            mountPath: /mnt/connectware_brokerlog-control-plane-broker-0
          - name: certs
            mountPath: /mnt/connectware_certs
          - name: postgresql-postgresql-0
            mountPath: /mnt/connectware_postgresql-postgresql-0
          - name: protocol-mapper-agent-001-0
            mountPath: /mnt/connectware_protocol-mapper-agent-001-0
          - name: service-manager
            mountPath: /mnt/connectware_service-manager
          - name: system-control-server-data
            mountPath: /mnt/connectware_system-control-server-data
          - name: workbench
            mountPath: /mnt/connectware_workbench

Copy this as the "volumes:" section:
######################################
      volumes:
        - name: brokerdata-broker-0
          persistentVolumeClaim:
            claimName: brokerdata-broker-0
        - name: brokerdata-control-plane-broker-0
          persistentVolumeClaim:
            claimName: brokerdata-control-plane-broker-0
        - name: brokerlog-broker-0
          persistentVolumeClaim:
            claimName: brokerlog-broker-0
        - name: brokerlog-control-plane-broker-0
          persistentVolumeClaim:
            claimName: brokerlog-control-plane-broker-0
        - name: certs
          persistentVolumeClaim:
            claimName: certs
        - name: postgresql-postgresql-0
          persistentVolumeClaim:
            claimName: postgresql-postgresql-0
        - name: protocol-mapper-agent-001-0
          persistentVolumeClaim:
            claimName: protocol-mapper-agent-001-0
        - name: service-manager
          persistentVolumeClaim:
            claimName: service-manager
        - name: system-control-server-data
          persistentVolumeClaim:
            claimName: system-control-server-data
        - name: workbench
          persistentVolumeClaim:
            claimName: workbench

2. Updating and Applying the Kubernetes Job

---
apiVersion: batch/v1
kind: Job
metadata:
  name: connectware-fix-permissions
  labels:
    app: connectware-fix-permissions
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
      - name: cybus-docker-registry
      containers:
      - image: registry.cybus.io/cybus/connectware-fix-permissions:1.5.0
        securityContext:
          runAsUser: 0
        imagePullPolicy: Always
        name: connectware-fix-permissions
        # Insert the volumeMounts section here
        # volumeMounts:
        # - name: brokerdata-broker-0
        #   mountPath: /mnt/connectware_brokerdata-broker-0
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
      # Insert the volumes section here
      # volumes:
      # - name: brokerdata-broker-0
      #   persistentVolumeClaim:
      #     claimName: brokerdata-broker-0


Save this file as kubernetes-job.yml and insert the volumeMounts and volumes sections generated in step 1 at the marked positions.

Shut down Connectware by scaling all Connectware and agent workloads down to zero replicas:

kubectl scale sts,deploy -lapp.kubernetes.io/instance=<connectware-installation-name> --replicas 0
kubectl scale sts -lapp.kubernetes.io/instance=<connectware-agent-installation-name> --replicas 0
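
Before running the job, you can verify that no Connectware pods are left running:

kubectl get pods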
Run the job to adjust the permissions on all relevant volumes:

kubectl apply -f kubernetes-job.yml

3. Verifying and Cleaning Up

Follow the job logs to verify that all volumes have been processed:

kubectl logs -f job/connectware-fix-permissions
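
Alternatively, you can block until the job reports completion; this assumes the job name from the kubernetes-job.yml file above:

kubectl wait --for=condition=complete --timeout=300s job/connectware-fix-permissions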

Example output:

Found directory: connectware_brokerdata-broker-0. Going to change permissions
Found directory: connectware_brokerdata-broker-1. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-1. Going to change permissions
Found directory: connectware_brokerlog-broker-0. Going to change permissions
Found directory: connectware_brokerlog-broker-1. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-1. Going to change permissions
Found directory: connectware_certs. Going to change permissions
Found directory: connectware_postgresql-postgresql-0. Going to change permissions
Postgresql volume identified, using postgresql specific permissions
Found directory: connectware_service-manager. Going to change permissions
Found directory: connectware_system-control-server-data. Going to change permissions
Found directory: connectware_workbench. Going to change permissions
All done. Found 13 volumes.
Once the job has completed successfully, delete it:

kubectl delete -f kubernetes-job.yml

Delete all Kubernetes services used by Connectware in preparation for the new version:

kubectl delete svc -l app.kubernetes.io/instance=<connectware-installation-name>

4. Upgrading Connectware

Run the Helm upgrade to Connectware version 1.5.0:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version 1.5.0 -f values.yaml
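
You can watch the pods start up to confirm that the upgrade is rolling out:

kubectl -n <namespace> get pods -w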

Result: You have successfully upgraded Connectware to 1.5.0.

Functionality of Provided Files

create-pvc-yaml.sh

This script automatically generates Kubernetes resource lists, specifically volumeMounts and volumes, for Connectware deployments on Kubernetes, simplifying the preparation process for the upgrade.

The script outputs sections that can be directly copied and pasted into the kubernetes-job.yml file, ensuring that the correct volumes are mounted with the appropriate permissions for the upgrade process.

kubernetes-job.yml

This YAML file defines a Kubernetes job responsible for adjusting the permissions of the volumes identified by the create-pvc-yaml.sh script. The job is tailored to run with root privileges, enabling it to modify ownership and permissions of files and directories within PVCs that are otherwise inaccessible due to the permission changes introduced in Connectware 1.5.0.

The job’s purpose is to ensure that all persistent volumes used by Connectware are accessible by the new, non-root container user, addressing the core upgrade challenge without compromising on security by avoiding the use of root containers in the Connectware deployment itself.

Similar Articles


Customizing the User RDN for LDAP Authentication

If your LDAP directory uses a property other than “cn” as the username, you can specify that property in the Helm value userRdn in the global.authentication.ldap context.

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
      userRdn: SN


Enabling TLS for LDAP Authentication

To use TLS for LDAP, you only need to set a valid ldaps:// URL for the Helm value url in the global.authentication.ldap context. Remember to also adjust the TCP port number. By default, LDAPS uses port 636.

Connectware will verify that the LDAP server presents a valid certificate before using it as an authentication backend. Unless you have a certificate for your LDAP server that is signed by a valid root CA, you will need to provide the CA certificate that signed your LDAP server’s certificate. Alternatively, you can disable certificate validation.
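
A minimal example (the hostname is illustrative):

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636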

Providing the CA Certificate through Helm Values

You can simply provide the CA certificate in the Helm value caChain.cert in the global.authentication.ldap context. Provide the complete certificate chain necessary to validate the LDAP server’s certificate.

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636
      caChain:
        cert: |
           -----BEGIN CERTIFICATE-----
           MIIFpTCCA40CFGFL86145m7JIg2RaKkAVCOV1H71MA0GCSqGSIb3DQEBCwUAMIGN
           [skipped for brevity - include whole certificate]
           SKnBS1Y1Dn2e
           -----END CERTIFICATE-----

As an alternative, you can provide the CA certificate through a manually created Kubernetes ConfigMap.

Providing the CA Certificate through a Kubernetes ConfigMap

To provide the CA certificate necessary to validate the certificate used by your LDAP server, you can manually create a Kubernetes ConfigMap that contains the certificate as a file named ca.crt. You will then provide the name of that ConfigMap in the Helm value caChain.existingConfigMap in the global.authentication.ldap context.

Example

Create the Kubernetes ConfigMap from a file named ca.crt in your current directory:

<code>kubectl -n <namespace> create cm cw-ldap-ca-cert --from-file ca.cr</code>
Code-Sprache: YAML (yaml)

Specify the name of the ConfigMap:

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636
      caChain:
        existingConfigMap: cw-ldap-ca-cert

Disabling Certificate Validation

While we do not recommend skipping certificate validation for production use, it is possible to tell Connectware to accept any certificate the LDAP server presents. To do so, simply set the Helm value caChain.trustAllCertificates in the global.authentication.ldap context to true.

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636
      caChain:
        trustAllCertificates: true


Manual Kubernetes Secret for LDAP Authentication Bind User

If you don’t want to provide the bind user for LDAP authentication through the Helm values bindDn and bindPassword within the global.authentication.ldap context, you can also manually create a Kubernetes secret in Connectware’s namespace through your preferred method of managing secrets in Kubernetes. You will then need to provide the name of this secret in the Helm value existingBindSecret.

This secret needs to contain two keys, bindDn and bindPassword, containing the parameters you did not specify directly as Helm values. If you want to use different keys, you can customize these as shown below.

Example

Create your Kubernetes secret:

kubectl -n <namespace> create secret generic my-ldap-user --from-literal=bindDn="CN=Bind User,CN=Users,DC=company,DC=tld" --from-literal=bindPassword="S3cretPassword"

Specify the name of the Secret:

global:
  authentication:
    ldap:
      enabled: true
      existingBindSecret: my-ldap-user
      searchBase: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389

Customizing Kubernetes Secret Keys

If you want to customize the keys used in the Kubernetes secret, you can do so and specify the keys to use in the Helm values existingBindSecretDnKey and existingBindSecretPasswordKey within the global.authentication.ldap context.

Example

Create your Kubernetes secret:

<code>kubectl -n <namespace> create secret generic custom-ldap-user --from-literal=username="CN=Bind User,CN=Users,DC=company,DC=tld" --from-literal=password="S3cretPassword"</code>
Code-Sprache: YAML (yaml)

Specify the name of the Secret in your values.yaml file:

global:
  authentication:
    ldap:
      enabled: true
      existingBindSecret: custom-ldap-user
      existingBindSecretDnKey: username
      existingBindSecretPasswordKey: password
      searchBase: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389


Configuring LDAP Authentication

When configuring LDAP authentication, you need to match Connectware’s settings to the capabilities of your LDAP server. There are two fundamental decisions to make:

  1. Choosing between “group” and “attribute” mode.
  2. Whether to use a bind user.

Connectware LDAP Modes

Connectware offers two modes for LDAP authentication:

  - Group mode: the LDAP groups a user belongs to are mapped to Connectware roles.
  - Attribute mode: an LDAP attribute of the user directly specifies the Connectware role.

You can read about them in the Connectware documentation. By default, “group” mode is activated.

Using a Bind User

A bind user is common in LDAP setups that use a more complicated directory structure. It is a limited, usually read-only user that you create in your LDAP directory with the permission to search through the LDAP directory tree.

It is used when users don’t share a single LDAP base DN (e.g. are not in the same group). If your users are spread across the directory tree, you will likely want to use a bind user.

Enabling LDAP Authentication

To enable the LDAP feature in Connectware, you need to set the Helm value global.authentication.ldap.enabled to true.

Additionally, you always need to provide these Helm values within the global.authentication.ldap context:

Value  | Example                     | Description
------ | --------------------------- | -----------
bindDn | CN=Users,DC=example,DC=org  | Contains either the LDAP base DN of users logging in, or the DN of a dedicated bind user that is able to search for the user trying to log in within the search base.
url    | ldap://dc.mycompany.tld:389 | URL of the LDAP server in the format schema://hostname:port

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389

If you are using a bind user to search through the directory tree, you must specify the full DN of the bind user as bindDn, and also need to provide these values:

Value        | Example                    | Description
------------ | -------------------------- | -----------
bindPassword | ANc97WCO"!xcC=(            | Contains the password for the bind user as defined in your LDAP server.
searchBase   | CN=Users,DC=company,DC=tld | The base DN shared by all users, from which the search for the user logging in is performed.

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=connectwarebinduser,CN=Users,DC=company,DC=tld
      bindPassword: SuperS3cret!
      url: ldap://my-dc.company.tld:389
      searchBase: CN=Users,DC=company,DC=tld

If you don’t want to provide the bind user and its password through your Helm values, for example because you follow a GitOps approach for your Connectware deployment, you can also provide the bind user through a manually created Kubernetes secret that is specified in existingBindSecret. You can find detailed instructions in this article.

By providing a bindPassword through one of these mechanisms, the nature of bindDn changes from being a single base DN that contains all users that are allowed to log into Connectware, to containing the DN of a single user – the bind user. In this scenario, searchBase takes the role of containing the base DN which all users share, acting as the root from which a search for valid users will be performed.

Configuring Group Mode

To configure Connectware to use LDAP in group mode, you need to specify the LDAP attribute of your users that lists the LDAP groups they are part of. This is done through the Helm value memberAttribute within the global.authentication.ldap context. Additionally, mode must be set to group.

The default value of memberOf is often the correct choice, but you may have to adapt this to your LDAP server.

These LDAP groups are then mapped to Connectware roles using the Connectware UI as described in the Connectware docs.

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
      mode: group
      memberAttribute: memberOf

Configuring Attribute Mode

To configure Connectware to use LDAP in attribute mode, you need to specify the LDAP attribute of your users that specifies the Connectware role associated with the user. This is done through the Helm value rolesAttribute within the global.authentication.ldap context. Additionally, mode must be set to attribute.

The default value of employeeType is often the correct choice, but you may have to adapt this to your LDAP server.

Example

global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
      mode: attribute
      rolesAttribute: employeeType

Further LDAP Topics

Enabling TLS for LDAP

Connectware supports connecting to LDAP servers that offer Transport Layer Security. You can find out how to configure this in this article.

Providing Bind User through an Existing Kubernetes Secret

You can provide the bind user through a manually created Kubernetes secret that is specified in existingBindSecret. You can find detailed instructions in this article.

Customizing the Search Filter

By default, the username trying to log in acts as the search filter, but there may be advanced situations where this is not enough, for example when it matches multiple users. Visit this article to learn how to customize the search filter.

Customizing the User RDN

The user RDN describes which LDAP attribute contains the username. By default this is cn, but if this is not correct for your LDAP setup, you can customize it using the userRdn Helm value. Find out more in this article.


Customizing the Search Filter for LDAP Authentication

There are scenarios where it is useful to extend the default search filter of Connectware, for example when the username alone matches more than one directory entry.

The filter that Connectware uses is (<userRdn>=<username>), where userRdn is defined in your values.yaml and username is the name the user enters during login.

Any extension will result in a filter of the following format:

(&(<userRdn>=<username>)(<your extension>))

Info: You can test the filter by performing a request with ldapsearch in your terminal (this may require additional packages to be installed).

Example:

<code>ldapsearch -L -b "dc=example,dc=org" -D "cn=admin,dc=example,dc=org" -w admin_pass "(&(cn=User 1)(objectclass=iNetOrgPerson))"</code>
Code-Sprache: YAML (yaml)

Example

In the following example, we have two entries with an RDN cn=a.smith.

dc=example,dc=org
  cn=customers
    cn=a.smith
  cn=employees
    cn=a.smith

Both users are named a.smith, but they are different entries. In a case like this you would use cn=employees,dc=example,dc=org as the search base and actually wouldn’t have a problem. But let’s use dc=example,dc=org in order to create a simple example case for the filter extension.

We want to modify the filter in order to search only for entries that have cn=employees in their DN.

The search command to test in the terminal for the employee a.smith looks like this:

<code>ldapsearch -L -b "dc=example,dc=org" -D "cn=admin,dc=example,dc=org" -w admin_pass "(&(cn=a.smith)(cn:dn:=employee))"</code>
Code-Sprache: YAML (yaml)

To modify Connectware, we only add the extension itself (cn:dn:=employees) to the configuration:

global:
  authentication:
    ldap:
      enabled: true
      existingBindSecret: my-ldap-user
      searchBase: CN=Users,DC=company,DC=tld
      searchFilter: cn:dn:=employees
      userRdn: cn
      url: ldap://my-dc.company.tld:389

Important: Be aware that no surrounding brackets are used for the additional expression. Brackets within your expression can be used, e.g. &(objectClass=iNetOrgPerson)(cn:dn:=employees)
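
With the configuration above, a login attempt by the user a.smith results in the effective filter:

(&(cn=a.smith)(cn:dn:=employees))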

In this documentation, we use different variables in the code examples to explain the installation and configuration of Connectware. When you install and configure your Connectware, you can create your own variables.

The following variables are used in this documentation:

Name                                                                  | Variable
--------------------------------------------------------------------- | --------------------
Name of the Connectware installation                                   | <installation-name>
Namespace of the Connectware installation                              | <namespace>
values.yaml file                                                       | <values.yaml>
Version number of the Connectware installation                         | <current-version>
Version number of the Connectware version that you want to upgrade to  | <target-version>
Local name of the Connectware Helm repository                          | <local-repo>

Example

diff <(helm show values <repo-name>/connectware --version <current-version>) <(helm show values <repo-name>/connectware --version <target-version>
Code-Sprache: YAML (yaml)

The values.yaml file is the configuration file for an application that is deployed through Helm. It allows you to configure your Connectware installation, for example to edit deployment parameters, manage resources, and update Connectware to a new version.

In this documentation, we will focus on a basic Kubernetes configuration and commonly used parameters.

Note: We recommend that you store the values.yaml file in a version control system.

Creating a Copy of the Default values.yaml File

A Helm chart contains a default configuration. It is likely that you only need to customize some of the configuration parameters. We recommend that you create a copy of the default values.yaml file named default-values.yaml and a new, empty values.yaml file to customize specific parameters.

Example

helm show values cybus/connectware > default-values.yaml

Creating a values.yaml File

When you have created the default-values.yaml file, you can create the values.yaml file to add your custom configuration parameters.

Example

vi values.yaml

Specifying the License Key

To install Connectware, you need a valid license key.

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value

Specifying the Broker Cluster Secret

You must specify a secret for the broker cluster. The cluster secret value is used to secure your broker cluster, just like a password.

Important: Treat the broker cluster secret with the same level of care as a password.

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
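
One simple way to generate a random cluster secret (any sufficiently long random string works):

openssl rand -base64 24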

Allowing Immutable Labels

For a fresh Connectware installation, we recommend that you set best-practice labels on immutable workload objects like StatefulSet volumes.

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true

Specifying the Broker Cluster Replica Count (Optional)

By default, Connectware uses three nodes for the broker cluster that moves data. You can specify a custom number of broker nodes. For example, increase the broker nodes to handle higher data loads or decrease the broker nodes for a testing environment. 

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
    replicaCount: 5
  setImmutableLabels: true

Activating a Separate Control-Plane Broker (Optional)

By default, Connectware uses the same broker for data payload processing and control-plane communication. You can use a separate control-plane broker. This might be useful for production environments, as it provides higher resilience and better manageability in cases of the data broker becoming slow to respond due to high load.

  1. In the values.yaml file, set the Helm value global.controlPlaneBroker.enabled to true.
  2. Specify a broker cluster secret in the Helm value global.controlPlaneBroker.clusterSecret.

Important: Treat the broker cluster secret with the same level of care as a password.

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  controlPlaneBroker:
    enabled: true
    clusterSecret: ahciaruighai_t2G # example value

Tip: You can activate/deactivate this option within a scheduled maintenance window.

Specifying Which StorageClass Connectware Should Use (Optional)

A Kubernetes cluster can offer several StorageClasses. You can specify which StorageClass Connectware should use.

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  storage:
    storageClassName: gp2 # example value

There are several configuration parameters to control the StorageClass of each volume that Connectware uses.
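
The available parameter names depend on the chart version; one way to discover them is to search the chart’s default values (the grep pattern is illustrative):

helm show values cybus/connectware | grep -i storageclass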

Specifying CPU and Memory Resources (Optional)

By default, Connectware is configured for high-performance systems and according to the guaranteed Quality of Service (QoS) class. However, you can use the Kubernetes resource management values requests and limits to specify the CPU and memory resources that Connectware is allowed to use.

Important: Adjusting CPU and memory resources can impact the performance and availability of Connectware. When you customize the settings for CPU and memory resources, make sure that you monitor the performance and make adjustments if necessary.

Example

global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  podResources:
    distributedProtocolMapper:
      limits:
        cpu: 2000m
        memory: 3000Mi
      requests:
        cpu: 1500m
        memory: 1500Mi

Remote agents are Connectware agents that are not directly integrated into the Connectware installation, but standalone deployments that are managed separately.

A common use case for remote agents is using a target infrastructure that is closer to the shop floor than the Connectware installation, but from a Kubernetes point of view they can also be deployed in the same namespace.

Cybus offers a Helm chart to conveniently deploy remote agents.

Please review the requirements for your Kubernetes cluster in Kubernetes cluster requirements for the connectware-agent Helm chart before proceeding with Installing Connectware agents using the connectware-agent Helm chart or Installing Connectware agents without a license key using the connectware-agent Helm chart.

Make sure that your Kubernetes cluster satisfies these requirements before installing the connectware-agent Helm chart.


You can configure your Connectware agents deployed through the connectware-agent Helm chart by adjusting Helm values in your values.yaml file and re-applying it using a Helm upgrade.

Example:

<code>helm upgrade -i connectware-agent cybus/connectware-agent -f values.yaml -n <namespace></code>
Code-Sprache: YAML (yaml)

If you need help starting out with a values.yaml file, follow the Installing Connectware agents using the connectware-agent Helm chart article.

In our examples we will explain the parameters in the protocolMapperAgents Helm context, but unless otherwise noted they are also available to configure through protocolMapperAgentDefaults as mentioned in Configuration principles for the connectware-agent Helm chart.
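
For illustration, here is a sketch of how defaults and per-agent overrides combine (the agent names and hostname are examples):

protocolMapperAgentDefaults:
  connectwareHost: connectware # assumed Connectware hostname
protocolMapperAgents:
  - name: agent-001 # uses only the defaults
  - name: agent-002
    image:
      pullPolicy: Always # overrides the default for this agent only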

Connectware’s agents are part of a Kubernetes StatefulSet. If any of them are not in the state “running” when you execute helm upgrade, you will need to manually delete the pod afterwards so that an updated pod can be scheduled.
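
For example (the pod name is illustrative):

kubectl -n <namespace> delete pod <agent-pod-name>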

Available Helm Values

Root-Level Helm Values

These values are on the root level of your values.yaml file.

Helm value | Description | Discussed in
---------- | ----------- | ------------
licenseKey | A valid license for Cybus Connectware | Installing Connectware agents using the connectware-agent Helm chart
protocolMapperAgentDefaults | This set of configuration values is applied to all agents, unless they override specific values | Configuration principles for the connectware-agent Helm chart
protocolMapperAgents | A collection of Connectware agents to be deployed. Each collection entry can contain configuration to override the defaults | Configuration principles for the connectware-agent Helm chart
fullnameOverride | Override the full name of this installation, which is used as a name prefix. Use "" to remove prefixing | Controlling the name of Kubernetes objects for the connectware-agent Helm chart
nameOverride | Override the chart name of this installation, which is used as part of the name prefix | Controlling the name of Kubernetes objects for the connectware-agent Helm chart

protocolMapperAgentDefaults Helm Values

These values are within the protocolMapperAgentDefaults section and control the behavior of all deployed agents.

Helm value | Description | Discussed in
---------- | ----------- | ------------
connectwareHost | DNS name under which the Connectware installation is available to the agent | Configuring target Connectware for the connectware-agent Helm chart
controlPlaneBrokerEnabled | Define if the Connectware installation uses the separate control-plane-broker feature | Configuring target Connectware for the connectware-agent Helm chart
image.name | The name of the container image used for the agent | Configuring image name and version for the connectware-agent Helm chart
image.version | Container version or tag used for the agent | Configuring image name and version for the connectware-agent Helm chart
image.registry | Container image registry to be used for the agent. Set to "" to not specify a registry | Using a custom image registry for the connectware-agent Helm chart
image.pullPolicy | Kubernetes imagePullPolicy used for the agent. One of: Always, Never, IfNotPresent | Configuring image pull policy for the connectware-agent Helm chart
image.pullSecrets | A collection of objects containing Kubernetes imagePullSecrets with a name attribute, to be used by the agent | Using a custom image registry for the connectware-agent Helm chart
mTLS.enabled | Define if mTLS (Certificate Authentication) is enabled | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart
mTLS.caChain.cert | The Certificate Authority certificate chain as a literal PEM encoded string | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart
mTLS.caChain.existingConfigMap | An existing Kubernetes ConfigMap containing the Certificate Authority certificate chain in a file named ca-chain.pem | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart
mqtt.tls | Define if TLS (Transport Encryption) is enabled | Configuring target Connectware for the connectware-agent Helm chart
mqtt.controlHost | Override the default host for the control-plane MQTT connection | Configuring target Connectware for the connectware-agent Helm chart
mqtt.dataHost | Override the default host for the data MQTT connection | Configuring target Connectware for the connectware-agent Helm chart
mqtt.controlPort | Override the default port for the control-plane MQTT connection | Configuring target Connectware for the connectware-agent Helm chart
mqtt.dataPort | Override the default port for the data MQTT connection | Configuring target Connectware for the connectware-agent Helm chart
persistence.accessMode | The Kubernetes AccessMode to request for the persistent volume. One of: ReadWriteOnce, ReadWriteMany, ReadWriteOncePod | Configuring agent persistence for the connectware-agent Helm chart
persistence.size | A Kubernetes Quantity to request as size for the persistent volume | Configuring agent persistence for the connectware-agent Helm chart
persistence.storageClassName | The name of the Kubernetes StorageClass to request for the persistent volume | Configuring agent persistence for the connectware-agent Helm chart
podAntiAffinity | Define what type of podAntiAffinity to use for the agent. One of: none, soft, hard | Configuring podAntiAffinity for the connectware-agent Helm chart
podAntiAffinityOptions | Define configuration values specific to podAntiAffinity | Configuring podAntiAffinity for the connectware-agent Helm chart
resources.requests.cpu | Kubernetes Quantity that describes the agent's CPU requests | Configuring compute resources for the connectware-agent Helm chart
resources.requests.memory | Kubernetes Quantity that describes the agent's memory requests | Configuring compute resources for the connectware-agent Helm chart
resources.limits.cpu | Kubernetes Quantity that describes the agent's CPU limits | Configuring compute resources for the connectware-agent Helm chart
resources.limits.memory | Kubernetes Quantity that describes the agent's memory limits | Configuring compute resources for the connectware-agent Helm chart
env | A collection of objects with name and value describing environment variables passed to the agent | Configuring environment variables for the connectware-agent Helm chart
annotations | A set of Kubernetes annotations to be added to all agent resources | Configuring labels and annotations for the connectware-agent Helm chart
labels | A set of Kubernetes labels to be added to all agent resources | Configuring labels and annotations for the connectware-agent Helm chart
podAnnotations | A set of Kubernetes annotations to be added to the agent pod only | Configuring labels and annotations for the connectware-agent Helm chart
podLabels | A set of Kubernetes labels to be added to the agent pod only | Configuring labels and annotations for the connectware-agent Helm chart
nodeSelector | A set of Kubernetes labels a node must have for the agent to be scheduled on it | Assigning agents to Kubernetes nodes for the connectware-agent Helm chart
securityContext | Define the Kubernetes SecurityContext for the agent | Configuring security context for the connectware-agent Helm chart
podSecurityContext | Define the Kubernetes SecurityContext for the agent's pod | Configuring security context for the connectware-agent Helm chart
service.annotations | A set of Kubernetes annotations to be added to the agent's service only | Configuring labels and annotations for the connectware-agent Helm chart
service.labels | A set of Kubernetes labels to be added to the agent's service only | Configuring labels and annotations for the connectware-agent Helm chart

protocolMapperAgents Helm Values

These values are within the protocolMapperAgents section, which is a list of the agents you want to deploy, and are configured per agent entry. See Configuration principles for the connectware-agent Helm chart for details. In addition to the values listed here, all values under protocolMapperAgentDefaults are available per agent.

Helm value | Description | Discussed in
---------- | ----------- | ------------
name | The name of the Connectware agent. If you use mTLS, this must match the certificate's CN/SAN | Installing Connectware agents using the connectware-agent Helm chart
mTLS.keyPair.cert | The mTLS certificate chain as a literal PEM encoded string | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart
mTLS.keyPair.key | The mTLS private key as a literal PEM encoded string | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart
mTLS.existingSecret | An existing Kubernetes Secret containing the mTLS certificate and key in files named tls.crt and tls.key | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart

You can adjust the image pull policy used by your agents to any valid value supported by Kubernetes.

To change the image pull policy for an agent, specify the pull policy you want in the image.pullPolicy value inside the agent's entry in the protocolMapperAgents context of your values.yaml file.

Example

protocolMapperAgents:
  - name: bender-robots
    connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
    image:
      pullPolicy: Always
