Overview
The procedure to upgrade Mission Control depends on your installation type. We strongly recommend reading through this page before proceeding with your upgrade.
Mission Control is Moving to Artifactory as a Service
From JFrog Artifactory version 7.27, Mission Control has been integrated directly into Artifactory as a service. You no longer need to install Mission Control to use the features it provides; you only need to enable the service in Artifactory.
The metrics capabilities that were provided by Mission Control will now be provided through JFrog Insight. To learn more about how to install Insight, see Installing Insight.
To learn more about how Mission Control has been integrated into Artifactory and how to migrate to the Mission Control microservice, see Migrating Platform Deployments and License Buckets.
You must install JFrog Insight to use trends and charts after you migrate to the Mission Control microservice. For more information, see Migrating from Mission Control to Insight.
If you wish to continue using Mission Control, review Mission Control Requirements and Supported Platforms for Mission Control before upgrading.
Note
Make sure to use the same upgrade method (RPM, Debian, Docker, etc.) as the one you initially used to install Mission Control.
Upgrading to version 4.x for the first time?
It is recommended that you first review what's new with the latest JFrog Platform. Review the breaking changes, deprecated features and more.
Before You Proceed
JFrog Mission Control 4.x can only be installed as part of the JFrog Platform Deployment installation. Make sure that installation is complete before continuing.
Default Home Directory
The default Mission Control home directory is defined according to the installation type. For additional details see the JFrog Product Directory Structure page.
Note: This guide uses $JFROG_HOME to represent the JFrog root directory containing the deployed product.
Setting Up High Availability on Mission Control
High Availability configuration for Mission Control requires a cluster of 3 or more active/active nodes on the same LAN.
Upgrading from Versions below 3.5.1
Warning
Before you upgrade, ensure that the operating system version that you use is supported. See System Requirements for detailed information on operating system support.
To upgrade from version 3.5 or below, you must first upgrade to version 3.5.1 as described in Upgrading from Versions below 3.5.1, and then continue with the upgrade from version 3.5.1 to 4.x.
Upgrading directly from version 3.5 or below to 4.x is not supported.
Upgrading from Version 3.5.1 to 4.x
Warning
Before you upgrade, ensure that the operating system version that you use is supported. See System Requirements for detailed information on operating system support.
JFrog Mission Control v4.x is only compatible with JFrog Artifactory v7.x. To upgrade, you must first install JFrog Artifactory 7.x. For more information, see Installing Artifactory.
There are several new concepts introduced in Mission Control 4.x, improving the installation and customization process. For more information, see What's New: Self-Hosted.
To upgrade to version 4.x, you first need to unpack the Mission Control installer archive without installing the services, and then export and import your licenses using the migration procedure below.
Note
When using Mission Control versions prior to 4.7.5 with the Export/Import functionality to duplicate or replicate a Mission Control instance, the license buckets must be loaded manually after the import, as they are not included in the export.
Warning
Data other than your licenses, such as your service information and insights, will not be available after the upgrade.
Extract the contents of the compressed archive and go to the extracted folder.
tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
cd jfrog-mc-<version>-<compose|rpm|deb>
.env file included within the Docker-Compose archive
This .env file is used by docker-compose and is updated during installations and upgrades.
Note that some operating systems do not display dot files by default. If you have made changes to the file, remember to back it up before an upgrade.
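A minimal way to take that backup before upgrading (the path is illustrative; use your actual installation directory):

```shell
# Back up the compose .env file with a timestamp suffix before upgrading.
backup_env() {
  src="$1"
  cp "$src" "$src.backup-$(date +%Y%m%d%H%M%S)"
}
# Example: backup_env /opt/jfrog/jfrog-mc-compose/.env
```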
Copy the jfmcDataExport.sh migration script from the <extracted folder>/third-party/postgresql/ directory to the /tmp directory of the machine (or container) hosting your Mission Control v3.5.1 database. For a Linux Archive installation, the script is located at <extract-folder>/app/third-party/postgresql.
Run the following commands on the machine (or container) hosting the Mission Control v3.5.1 database.
Native Postgres Installation
chown postgres:postgres /tmp/jfmcDataExport.sh
cd /tmp/
su postgres -c "POSTGRES_PATH=/opt/PostgreSQL/9.6/bin PGPASSWORD=password bash /tmp/jfmcDataExport.sh --output=/tmp"
Postgres in Docker container
docker exec -it <postgres_container_id> bash
su postgres -c "POSTGRES_PATH=/usr/lib/postgresql/9.6/bin PGPASSWORD=password bash /tmp/jfmcDataExport.sh --output=/tmp"
docker cp <postgres_container_id>:/tmp/jfmcDataExport.tar.gz /tmp/jfmcDataExport.tar.gz
# If the database host is different from the JFrog Mission Control host,
# copy /tmp/jfmcDataExport.tar.gz from the database host to the Mission Control host (e.g. with scp)
command-line options
--host=HOST          database server host (default: "127.0.0.1")
--port=PORT          database server port (default: "5432")
--user=USER          database user name (default: "jfmc")
--database=DATABASE  database name to connect to (default: "mission_control")
--schema=SCHEMA      database schema name to connect to (default: "jfmc_server")
--output=OUTPUT      path to the output directory where jfmcDataExport.tar.gz will be created (default: ".")
Check the output of the above command.
2019-10-28T16:13:18.277Z [shell] [INFO ] [] [jfmcDataExport.sh:425 ] [main] - Exporting license buckets...
2019-10-28T16:13:18.313Z [shell] [INFO ] [] [jfmcDataExport.sh:428 ] [main] - Exporting managed licenses...
2019-10-28T16:13:18.349Z [shell] [INFO ] [] [jfmcDataExport.sh:419 ] [main] - Bundling exported data...
2019-10-28T16:13:18.365Z [shell] [INFO ] [] [jfmcDataExport.sh:421 ] [main] - Mission Control data dumped to: ./jfmcDataExport.tar.gz
Remove the old package. This applies to RPM and Debian installs; for the zip (archive) install, stop the services and proceed.
Note: This step is only needed when you are installing Mission Control 4.x on the same server where the old version was running.
RPM - Remove old packages
# Uninstall Mission Control
yum remove jfmc
rm -fr /var/opt/jfrog/mission-control

# Uninstall PostgreSQL
/opt/PostgreSQL/9.6/uninstall-postgresql
rm -fr /var/opt/postgres

# For systemd systems
rm -fr /lib/systemd/system/postgresql-9.6.service /etc/systemd/system/multi-user.target.wants/postgresql-9.6.service
systemctl daemon-reload
systemctl reset-failed

# For SystemV systems

# Uninstall Elasticsearch
yum remove -y elasticsearch-oss
rm -fr /etc/elasticsearch
rm -fr /usr/share/elasticsearch
Docker Installations - stop and remove the containers
# For docker-compose installations
docker-compose -f ./jfmc-compose.json -p jfmc down

# For docker installations
mission-control stop
docker ps -a --format '{{.Names}}' | grep ^jfmc_* | xargs docker rm -f
Debian - Remove old package
# Uninstall Mission Control
apt-get purge jfmc
rm -fr /var/opt/jfrog/mission-control

# Uninstall PostgreSQL
/opt/PostgreSQL/9.6/uninstall-postgresql
rm -fr /var/opt/postgres
rm -rf /var/spool/mail/postgres
rm -rf /opt/PostgreSQL
rm -rf /tmp/postgresql_installer_*
rm -rf /etc/selinux/targeted/active/modules/100/postgresql

# For systemd systems
rm -fr /lib/systemd/system/postgresql-9.6.service /etc/systemd/system/multi-user.target.wants/postgresql-9.6.service
systemctl daemon-reload
systemctl reset-failed

# For SystemV systems
rm -rf /etc/init.d/postgres-9.6

# Uninstall Elasticsearch
apt-get purge elasticsearch-oss
rm -fr /etc/elasticsearch
rm -fr /usr/share/elasticsearch
Copy the exported data.
mkdir -p $JFROG_HOME/mc/var/bootstrap/mc
cp /tmp/jfmcDataExport.tar.gz $JFROG_HOME/mc/var/bootstrap/mc

# NOTE: The following is needed only for the docker-compose installer
chown -R 1050:1050 $JFROG_HOME/mc/var/bootstrap
Restart Mission Control.
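The restart command depends on your install type; the guide does not spell it out here. A sketch of the options, mirroring the commands used elsewhere in this guide (service name "mc"; run the Compose commands from the extracted compose directory):

```shell
# Return the restart command for a given install type.
mc_restart_cmd() {
  case "$1" in
    systemd) echo "systemctl restart mc" ;;
    systemv) echo "service mc restart" ;;
    compose) echo "docker-compose -p mc down && docker-compose -p mc up -d" ;;
    *)       echo "unknown install type: $1" >&2; return 1 ;;
  esac
}
mc_restart_cmd systemd
```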
Validate that the import was successful. The file should be renamed to jfmcDataExport.tar.gz.done. It will be renamed to jfmcDataExport.tar.gz.failed if the import procedure failed.
Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log
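The validation step above can be scripted; a sketch (the bootstrap path follows the one used earlier in this guide):

```shell
# Report the outcome of the license import by checking which marker file exists.
check_import() {
  dir="$1"
  if [ -f "$dir/jfmcDataExport.tar.gz.done" ]; then
    echo "import succeeded"
  elif [ -f "$dir/jfmcDataExport.tar.gz.failed" ]; then
    echo "import failed"
  else
    echo "import pending or file not found"
  fi
}
# Example: check_import "$JFROG_HOME/mc/var/bootstrap/mc"
```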
Upgrading from Version 4.x to 4.x
Upgrading to Mission Control 4.6.x
Upgrading to Mission Control 4.6.x requires Artifactory 7.11.x. To enable the new metrics and trends, perform the following steps:
Stop Artifactory and Mission Control services.
Upgrade to Artifactory 7.11.x and to Mission Control 4.6.x.
Update the Artifactory System YAML with the Elasticsearch URL, username, and password. For more information, see Enabling Trends.
Stop and start Artifactory for the changes to take effect.
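The Artifactory System YAML addition in step 3 can be sketched as follows. The key names are an assumption based on the shared.elasticsearch section shown later in this guide; consult Enabling Trends for the authoritative structure:

```yaml
shared:
  elasticsearch:
    url: http://<elasticsearch_host>:9200
    username: <username>
    password: <password>
```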
The following upgrade methods are supported:
Interactive Script (Recommended)
Note
When you upgrade a Mission Control high availability cluster, ensure that you trigger the upgrade process on all the nodes simultaneously.
Interactive Script Upgrade (Recommended)
This method supports all install types, including Docker Compose, RPM, and Debian.
Stop the service.
systemd OS
systemctl stop mc
systemv OS
service mc stop
Docker Compose
cd jfrog-mc-<version>-compose
docker-compose -p mc down
Extract the contents of the compressed archive and go to the extracted folder.
Note: Make sure to merge the customizations in your current docker-compose.yaml file into the newly extracted docker-compose.yaml file.
tar -xvf jfrog-mc-<version>-<compose|rpm|deb>.tar.gz
cd jfrog-mc-<version>-<compose|rpm|deb>
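Before merging, you can review what differs between your current compose file and the newly extracted one; a sketch (paths are illustrative):

```shell
# Show a unified diff between the current and newly extracted compose files.
# diff exits non-zero when the files differ, so mask that for scripting.
show_compose_diff() {
  diff -u "$1" "$2" || true
}
# Example: show_compose_diff /opt/jfrog/docker-compose.yaml ./docker-compose.yaml
```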
Note
Copy the contents of the .env file from the previous installation to the newly created .env file in this archive, without copying the versions, as this will affect the upgrade.
Run the installer script.
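Before running the installer, the .env carry-over described in the note above can be sketched as follows. The "version" filter pattern is an assumption; review both files by hand afterwards:

```shell
# Append all non-version settings from the old .env to the new one.
merge_env() {
  old="$1"; new="$2"
  grep -v -i 'version' "$old" >> "$new"
}
# Example: merge_env /opt/jfrog-old/.env ./.env
```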
Note: If needed, the script will prompt you for a series of mandatory inputs, including the jfrogUrl (custom base URL) and the joinKey.
Compose
./config.sh
RPM/DEB
./install.sh
Start and manage the Mission Control service.
systemd OS
systemctl start|stop mc
systemv OS
service mc start|stop
Docker Compose
cd jfrog-mc-<version>-compose
docker-compose -p mc up -d
docker-compose -p mc ps
docker-compose -p mc down
Access Mission Control from your browser at http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module in the UI.
Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log
Manual RPM/Debian Upgrade
Stop the current server.
systemd OS
systemctl stop mc
systemv OS
service mc stop
Extract the contents of the compressed archive and go to the extracted folder.
tar -xvf jfrog-mc-<version>-<rpm|deb>.tar.gz
cd jfrog-mc-<version>-<rpm|deb>
Configure Elasticsearch.
Note
If you are upgrading from Mission Control version 4.5.x or lower, you need to upgrade Elasticsearch. The package is located in the extracted contents at jfrog-mc-<version>-<rpm|deb>/third-party/elasticsearch/elasticsearch-oss-<version>.<rpm|deb>. For upgrade steps, refer to the Elasticsearch documentation.
When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; step (6) can then be skipped.
shared:
  elasticsearch:
    external: true
We recommend installing the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.
The Search Guard package is located in the extracted contents at jfrog-mc-<version>-<rpm|deb>/third-party/elasticsearch/search-guard-<version>.zip.
For installation steps, refer to the Search Guard documentation.
Add an admin user to Search Guard to ensure authenticated communication with Elasticsearch.
The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Then add the username and password to the Shared Configuration as specified in step (5) above.
/etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
# This will output a hashed password (<hashed_password>); make a copy of it
Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.
<username>:
  hash: "<hashed_password>"
  backend_roles:
    - "admin"
  description: "Insight Elastic admin user"
Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.
Enable anonymous access to “_cluster/health” endpoint. This is required to check the health of the Elasticsearch cluster.
Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/:
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: true  # set this to true
Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig:
sg_anonymous:
  backend_roles:
    - sg_anonymous_backendrole
Add this snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig:
sg_anonymous:
  cluster_permissions:
    - cluster:monitor/health
Install Mission Control as a service, as a root user.
rpm
yum -y install ./mc/mc.rpm
Debian
dpkg -i ./mc/mc.deb
Set the Artifactory connection details.
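The guide does not show the file for this step here; the connection details go into the Mission Control system.yaml. A hedged sketch, with key names following the jfrogUrl and joinKey inputs mentioned earlier in this guide (verify against your installation's system.yaml template):

```yaml
shared:
  jfrogUrl: http://<artifactory_host>:8082
  security:
    joinKey: <join_key_from_artifactory>
```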
Start and manage Mission Control.
service mc start|stop
Access Mission Control from your browser at http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module in the UI.
Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log
Linux Archive Upgrade
Stop the current server.
Stop Mission Control
cd $JFROG_HOME/mc/app/bin
./mc.sh stop
Configure Elasticsearch.
Note
If you are upgrading from Mission Control version 4.5.x or lower, you need to upgrade Elasticsearch. The package is located in the extracted contents at mc/app/third-party/elasticsearch/elasticsearch-oss-<version>.tar.gz. For upgrade steps, refer to the Elasticsearch documentation.
When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file; step (6) can then be skipped.
shared:
  elasticsearch:
    external: true
We recommend installing the Search Guard plugin when using the Elasticsearch instance packaged with Mission Control. This helps ensure secure communication with Elasticsearch.
The Search Guard package is located in the extracted contents at mc/app/third-party/elasticsearch/search-guard-<version>.zip.
For installation steps, refer to the Search Guard documentation.
Add an admin user to Search Guard to ensure authenticated communication with Elasticsearch.
The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password. Then add the username and password to the Shared Configuration as specified in step (5) above.
/etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p <clear_text_password>
# This will output a hashed password (<hashed_password>); make a copy of it
Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.
<username>:
  hash: "<hashed_password>"
  backend_roles:
    - "admin"
  description: "Insight Elastic admin user"
Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.
Enable anonymous access to “_cluster/health” endpoint. This is required to check the health of the Elasticsearch cluster.
Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/:
sg_config:
  dynamic:
    http:
      anonymous_auth_enabled: true  # set this to true
Map the anonymous user sg_anonymous to the backend role sg_anonymous_backendrole in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig:
sg_anonymous:
  backend_roles:
    - sg_anonymous_backendrole
Add this snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig:
sg_anonymous:
  cluster_permissions:
    - cluster:monitor/health
Extract the contents of the compressed archive and go to the extracted folder.
Untar
tar -xvf jfrog-mc-<version>-linux.tar.gz
Replace the existing $JFROG_HOME/mc/app with the new app folder.
Upgrade
# Export variables to simplify commands
export JFROG_HOME=/opt/jfrog
export JF_NEW_VERSION=/opt/jfrog/mc-4.x

# Remove the old app
rm -rf $JFROG_HOME/mc/app

# Copy the new app
cp -r $JF_NEW_VERSION/app $JFROG_HOME/mc

# Remove the extracted new version
rm -rf $JF_NEW_VERSION
Run the migration script to remove old service directories.
Run the migration script with the same privileges as your current Mission Control installation. This script removes old service directories and redundant service yaml files in the router, and translates your current configuration to the new configuration format, according to the new file system layout.

# The $JFROG_HOME variable points to the new installation
export JFROG_HOME=<Full path to jfrog directory, for example: /opt/jfrog>
cd $JFROG_HOME/mc/app/bin
./migrate.sh

Check that the migration has completed successfully by reviewing the following files:
- Migration log: $JFROG_HOME/mc/var/log/migration.log
- system.yaml configuration: $JFROG_HOME/mc/var/etc/system.yaml
  This newly created file contains your current custom configuration in the new format.
Manage Mission Control.
$JFROG_HOME/mc/app/bin/mc.sh start|stop
Access Mission Control from your browser at http://<jfrogUrl>/ui/, and go to the Dashboard tab in the Application module in the UI.
Check the Mission Control log.
tail -f $JFROG_HOME/mc/var/log/console.log
Helm Upgrade
Once you have a new chart version, you can update your deployment.
Non-Compatible Upgrades
In cases where a new version is not compatible with the existing deployed version (see the relevant Changelog), you will need to do the following:
Deploy a new version alongside the old version (and set a new release name)
Copy configurations and data from the old deployment to the new one
Note
Downtime is required to perform an upgrade.
Data export is done with a migration script called jfmcDataExport.sh (available under the files directory in the Mission Control chart).
To upgrade:
Verify that you have upgraded Artifactory to v7.x. For more information, see Helm Upgrade.
Update the existing deployed version to the updated version.
helm upgrade mission-control jfrog/mission-control
Stop the old Mission Control pod by scaling its replicas down to 0 (PostgreSQL remains in place).
$ kubectl scale statefulsets <OLD_RELEASE_NAME>-mission-control --replicas=0
Export data from the old PostgreSQL instance in the following way.
Connect to the old PostgreSQL pod (you can get the name by running kubectl get pods).
$ kubectl exec -it <OLD_RELEASE_NAME>-postgresql bash
Copy the jfmcDataExport.sh file and run the following commands.
$ kubectl cp ./jfmcDataExport.sh <OLD_RELEASE_NAME>-postgresql:/tmp/jfmcDataExport.sh
$ chown postgres:postgres /tmp/jfmcDataExport.sh
$ su postgres -c "PGPASSWORD=password bash /tmp/jfmcDataExport.sh --output=/tmp"
# If you are on 2.x charts (the operating system user postgres does not exist),
# run ./jfmcDataExport.sh --output=/tmp and provide the jfmc user password
Copy the exported file to your local system.
$ kubectl cp <OLD_RELEASE_NAME>-postgresql:/tmp/jfmcDataExport.tar.gz ./jfmcDataExport.tar.gz
Install the new Mission Control and copy the exported file.
Run helm install with the new version, using the release name mission-control-new.
Copy the exported tar file to the new Mission Control pod.
$ kubectl cp ./jfmcDataExport.tar.gz <NEW_RELEASE_NAME>-mission-control:/opt/jfrog/mc/var/bootstrap/mc/jfmcDataExport.tar.gz -c mission-control
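The helm install sub-step above can be sketched as follows (release name from the step above; pin the chart version you are upgrading to):

```shell
# Compose the install command for the new release alongside the old one.
NEW_RELEASE=mission-control-new
CHART=jfrog/mission-control
CMD="helm install $NEW_RELEASE $CHART"
echo "$CMD"   # remove the echo indirection to run it for real
```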
Restart the new Mission Control pod.
Validate that the import was successful. The file should be renamed to jfmcDataExport.tar.gz.done (it will be renamed to jfmcDataExport.tar.gz.failed if the import procedure failed).
Run the following command to remove the old Mission Control deployment and Helm release.
helm delete <OLD_RELEASE_NAME>
Access Mission Control from your browser at http://<jfrogUrl>/ui/, then go to the Dashboard tab in the Application module in the UI.
Check the status of your deployed Helm releases.
helm status mission-control
Mission Control should now be available.