How to debug a helm upgrade that failed with the message: spec: Forbidden

Paul Pan
2021-09-08 22:47

Subject 

How to debug a helm upgrade that failed with the message: spec: Forbidden

Affected Versions

N/A

Description

A Kubernetes statefulset has many properties, and most of them are immutable after deployment. When running a helm upgrade, helm generates a new statefulset template. If the new template changes anything other than 'replicas', 'template', and 'updateStrategy' in the spec field, you will see the following message:

Error: UPGRADE FAILED: StatefulSet.apps "artifactory-ha-artifactory-ha-member" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden && StatefulSet.apps "artifactory-ha-artifactory-ha-primary" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

In some cases, the error message names the field that is rejecting the change. However, for a generic error like the one above, you will need to find out which change is triggering the immutability error.

Changes in the statefulset template during a helm upgrade usually come from two sources: a change in values.yaml, or a change in the chart version (which changes the statefulset template inside the chart). Here are some tips for the debug steps:

Debug and Fix

1. Identifying a values.yaml change is usually the first step. You may be passing a new value with the upgrade, or omitting a value this time that was previously set. Use the commands below to get the current effective values and compare.

$ helm get values <release-name> -n <namespace>
$ helm get values <release-name> -n <namespace> -a

      The namespace flag is not needed on helm v2, and -a (--all) gives you all the current computed values, including chart default values.
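The comparison can be scripted. A minimal sketch, assuming hypothetical release/namespace names (my-release, my-namespace) and a hypothetical new values file (my-new-values.yaml); substitute your own:

```shell
# my-release / my-namespace / my-new-values.yaml are placeholders.
# Dump the user-supplied values and the fully computed values:
helm get values my-release -n my-namespace > /tmp/user-values.yaml
helm get values my-release -n my-namespace -a > /tmp/all-values.yaml

# Compare with the values file you intend to pass to the upgrade.
# diff exits non-zero when the files differ, which highlights drift:
diff /tmp/user-values.yaml my-new-values.yaml
```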

2. If we are sure that the change didn't come from values.yaml, we need to check the template itself. It could come from an update to the statefulset template ( a change in chart version, a local modification, or a referenced configmap/secret is usually why this happens ).

  Check whether the chart version changed with the upgrade command ( pass the chart version explicitly in the upgrade if possible ).

  Check whether there are any local modifications ( when the chart is installed from a local path ).

  Check whether any configmaps/secrets have a small age ( which indicates they have been changed recently ), and whether the statefulset could be referencing that configmap or secret.
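One way to spot recently created configmaps/secrets and see whether the statefulset references them; object names below are placeholders:

```shell
# Sort by creation time; the most recently created objects appear last.
# Note: AGE reflects creation, not in-place edits, so also check
# `kubectl describe` on any suspicious object.
kubectl get configmap,secret -n my-namespace \
  --sort-by=.metadata.creationTimestamp

# Look for configmap/secret references in the statefulset spec:
kubectl get statefulset my-sts -n my-namespace -o yaml \
  | grep -nE 'configMap|secretName|configMapKeyRef|secretKeyRef'
```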

  Last, use the --dry-run option to get the statefulset template that the upgrade command would create, and compare it with the current statefulset.

  $ helm upgrade ..... --dry-run
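A concrete sketch of that comparison, with placeholder release/chart/values names. Note that the --dry-run output includes extra headers and notes around the manifests, so the diff is approximate:

```shell
# Manifest currently stored for the release:
helm get manifest my-release -n my-namespace > /tmp/current.yaml

# Manifest the upgrade would apply; nothing is changed on the cluster:
helm upgrade my-release my-repo/my-chart -n my-namespace \
  -f my-values.yaml --dry-run > /tmp/proposed.yaml

# Look for differences in the StatefulSet spec:
diff /tmp/current.yaml /tmp/proposed.yaml
```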

3. Side notes:

  Make sure to check the upgrade notes of each product and search for any message related to "upgrade" in the changelog. For example, you may notice the following in the changelogs:
https://github.com/jfrog/charts/blob/3b75f96ec9df3bc89c4c5e86c8df17e8d0e80aea/stable/distribution/CHANGELOG.md#10281—june-22-2021

If this is an upgrade and you are using the default PostgreSQL (postgresql.enabled=true), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true

  If you didn't pass the postgresql.image.tag value, helm upgrade may pick up a different value during the upgrade. It will then create a new postgresql statefulset template, which results in the same upgrade-forbidden error because you could be changing a forbidden field.
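To recover the currently deployed image tag and volume size so they can be passed back to the upgrade, something like the following works; the statefulset name is hypothetical and depends on your release name:

```shell
# Currently running postgresql image; copy the tag after ':' into
# postgresql.image.tag:
kubectl get statefulset my-release-postgresql -n my-namespace \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Current volume sizes; copy the matching size into
# postgresql.persistence.size:
kubectl get pvc -n my-namespace \
  -o custom-columns=NAME:.metadata.name,SIZE:.spec.resources.requests.storage
```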

  4. Helm client versions or kubernetes versions

  The upgrade-forbidden error could also be a result of the helm client version or the Kubernetes version. In such a case, you will not be able to find the difference in steps 1-3. Check with the customer whether the helm client or Kubernetes version has changed.

  5. Fix

  There are a few ways to work around the issue if we are not able to identify the source of the change.

  You can always delete the statefulset before running the upgrade command. A statefulset is a controller; deleting it has no direct impact on the running workload, as long as you make sure the pods are not also deleted before the upgrade command creates a new statefulset. After the new statefulset is created, you may need to delete the pods so that new pods are generated ( usually k8s recreates the pods automatically ).
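A sketch of that workaround with placeholder names. The --cascade=orphan flag keeps the pods running while only the statefulset object is removed (on kubectl older than v1.20 the equivalent flag is --cascade=false):

```shell
# Remove only the StatefulSet object; its pods keep running:
kubectl delete statefulset my-sts -n my-namespace --cascade=orphan

# Re-run the upgrade; helm recreates the statefulset with the new spec:
helm upgrade my-release my-repo/my-chart -n my-namespace -f my-values.yaml

# If an existing pod doesn't match the new template, delete it so the
# controller recreates it from the new statefulset:
kubectl delete pod my-sts-0 -n my-namespace
```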

  Another, easier option is to use the --force flag with the upgrade, which is somewhat similar to the above. Keep in mind, however, that there is currently a bug in k8s (https://github.com/kubernetes/kubernetes/issues/91459) that will lead to the following error if you use the --force option:

Error: UPGRADE FAILED: failed to replace object: Service "artifactory-ha" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "artifactory-ha-primary" is invalid: spec.clusterIP: Invalid value: "": field is immutable