Upgrading OpenStack Havana to Icehouse

This article describes how I upgraded (with Raphaël Walter!) from OpenStack Havana to Icehouse.

We will perform the upgrade live, service by service, so that the controller can be upgraded independently from the compute nodes, minimizing service disruption.

For each service, the process is as follows:

  • Take down the service
  • Upgrade the packages
  • Upgrade the database
  • Start up the service

All of the below has been tested on an Ubuntu installation.

Let’s go!

Preparing to upgrade OpenStack

First and foremost, we will make backups!

Backup the configuration files

Save the configuration files on all nodes, as shown here:

for service in keystone glance nova cinder openstack-dashboard
do mkdir $service-havana
done

for service in keystone glance nova cinder openstack-dashboard
do cp -r /etc/$service/* $service-havana/
done
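The two loops above can be folded into one reusable function. A minimal sketch (`backup_configs` is a hypothetical helper, not part of OpenStack) that also skips services whose configuration directory does not exist, which is handy on compute nodes where only nova is installed:

```shell
# Hypothetical helper: copy each service's config tree from $src into
# per-service backup directories under $dest, skipping absent services.
backup_configs() {
  src="$1"; dest="$2"; shift 2
  for service in "$@"; do
    [ -d "$src/$service" ] || continue
    mkdir -p "$dest/$service-havana"
    cp -r "$src/$service/." "$dest/$service-havana/"
  done
}

backup_configs /etc "$HOME/backup-havana" keystone glance nova cinder openstack-dashboard
```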

Backup databases

Backup all databases on the controller:

mysqldump -u root -p --opt --add-drop-database --all-databases > havana-db-backup.sql

Update systems

On all nodes, we ensure that our systems are up to date:

apt-get update
apt-get upgrade

Update repositories

On all nodes, remove the repository for Havana packages and add the repository for Icehouse packages:

apt-add-repository -r cloud-archive:havana
apt-add-repository cloud-archive:icehouse
apt-get update

Keystone

We stop the keystone service:

service keystone stop

We install some dependencies and the new version of Keystone:

apt-get install python-six python-babel keystone

If you want to know why we need to install the python-six and python-babel packages, check the troubleshooting section.

We upgrade the database schema:

keystone-manage db_sync

We can start the Keystone service:

service keystone start

Glance

We stop the glance-api and glance-registry services:

service glance-api stop
service glance-registry stop

We install some dependencies and the new version of Glance:

apt-get install python-iso8601 python-keystoneclient python-stevedore glance glance-api

If your tables are not in utf-8, you need to convert them:

mysql -u root -p
mysql> use glance
mysql> ALTER DATABASE glance DEFAULT CHARACTER SET utf8;
mysql> SET FOREIGN_KEY_CHECKS=0;
mysql> ALTER TABLE images CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci; # Repeat for every table
mysql> SET FOREIGN_KEY_CHECKS=1;
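The conversion has to be repeated for every Glance table (the full list shows up in the db_sync error, see troubleshooting #3). A sketch that prints all the statements so they can be reviewed and pasted into the mysql shell (`gen_utf8_alters` is a hypothetical helper):

```shell
# Hypothetical helper: print one conversion statement per table name given.
gen_utf8_alters() {
  for t in "$@"; do
    printf 'ALTER TABLE %s CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;\n' "$t"
  done
}

# The Glance tables that needed converting in our case:
gen_utf8_alters image_locations image_members image_properties image_tags images migrate_version
```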

We upgrade the database schema:

glance-manage db_sync

We can start the Glance service:

service glance-api start
service glance-registry start

Cinder

We stop the cinder-scheduler, cinder-volume and cinder-api services:

service cinder-scheduler stop
service cinder-volume stop
service cinder-api stop

We install some dependencies and the new version of Cinder:

apt-get install cinder-volume python-cinderclient cinder-scheduler cinder-api

We upgrade the database schema:

cinder-manage db sync

We can start the Cinder services:

service cinder-scheduler start
service cinder-volume start
service cinder-api start

Nova controller

We stop the nova services:

service nova-consoleauth stop
service nova-novncproxy stop
service nova-cert stop
service nova-conductor stop
service nova-scheduler stop
service nova-api stop

We install some dependencies and the new version of Nova:

apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient

To enable the Icehouse controller services to talk to Havana compute services, we need to set the compute=icehouse-compat option.
Find the [upgrade_levels] section in /etc/nova/nova.conf and make sure the compute key is set to icehouse-compat:

[upgrade_levels]
# Set a version cap for messages sent to compute services. If
# you plan to do a live upgrade from havana to icehouse, you
# should set this option to "icehouse-compat" before beginning
# the live upgrade procedure. (string value)
compute=icehouse-compat
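Editing the file by hand works fine; with several controllers this can be scripted. A sketch assuming GNU sed (`set_compute_cap` is a hypothetical helper) that creates the [upgrade_levels] section if it is missing and inserts the cap just below it:

```shell
# Hypothetical helper: ensure [upgrade_levels] exists, then add the cap
# right after the section header (GNU sed append syntax).
set_compute_cap() {
  conf="$1"
  grep -q '^\[upgrade_levels\]' "$conf" || printf '\n[upgrade_levels]\n' >> "$conf"
  sed -i '/^\[upgrade_levels\]/a compute=icehouse-compat' "$conf"
}

# Only touch the real file if it is present and writable on this node.
if [ -w /etc/nova/nova.conf ]; then
  set_compute_cap /etc/nova/nova.conf
fi
```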

The following configuration options are marked as deprecated in this release; see nova.conf.sample for their replacements ([GROUP]/option):

[DEFAULT]/rabbit_durable_queues
[rpc_notifier2]/topics
[DEFAULT]/log_config
[DEFAULT]/logfile
[DEFAULT]/logdir
[DEFAULT]/base_dir_name
[DEFAULT]/instance_type_extra_specs
[DEFAULT]/db_backend
[DEFAULT]/sql_connection
[DATABASE]/sql_connection
[sql]/connection
[DEFAULT]/sql_idle_timeout
[DATABASE]/sql_idle_timeout
[sql]/idle_timeout
[DEFAULT]/sql_min_pool_size
[DATABASE]/sql_min_pool_size
[DEFAULT]/sql_max_pool_size
[DATABASE]/sql_max_pool_size
[DEFAULT]/sql_max_retries
[DATABASE]/sql_max_retries
[DEFAULT]/sql_retry_interval
[DATABASE]/reconnect_interval
[DEFAULT]/sql_max_overflow
[DATABASE]/sqlalchemy_max_overflow
[DEFAULT]/sql_connection_debug
[DEFAULT]/sql_connection_trace
[DATABASE]/sqlalchemy_pool_timeout
[DEFAULT]/memcache_servers
[DEFAULT]/libvirt_type
[DEFAULT]/libvirt_uri
[DEFAULT]/libvirt_inject_password
[DEFAULT]/libvirt_inject_key
[DEFAULT]/libvirt_inject_partition
[DEFAULT]/libvirt_vif_driver
[DEFAULT]/libvirt_volume_drivers
[DEFAULT]/libvirt_disk_prefix
[DEFAULT]/libvirt_wait_soft_reboot_seconds
[DEFAULT]/libvirt_cpu_mode
[DEFAULT]/libvirt_cpu_model
[DEFAULT]/libvirt_snapshots_directory
[DEFAULT]/libvirt_images_type
[DEFAULT]/libvirt_images_volume_group
[DEFAULT]/libvirt_sparse_logical_volumes
[DEFAULT]/libvirt_images_rbd_pool
[DEFAULT]/libvirt_images_rbd_ceph_conf
[DEFAULT]/libvirt_snapshot_compression
[DEFAULT]/libvirt_use_virtio_for_bridges
[DEFAULT]/libvirt_iscsi_use_multipath
[DEFAULT]/libvirt_iser_use_multipath
[DEFAULT]/matchmaker_ringfile
[DEFAULT]/agent_timeout
[DEFAULT]/agent_version_timeout
[DEFAULT]/agent_resetnetwork_timeout
[DEFAULT]/xenapi_agent_path
[DEFAULT]/xenapi_disable_agent
[DEFAULT]/xenapi_use_agent_default
[DEFAULT]/xenapi_login_timeout
[DEFAULT]/xenapi_connection_concurrent
[DEFAULT]/xenapi_connection_url
[DEFAULT]/xenapi_connection_username
[DEFAULT]/xenapi_connection_password
[DEFAULT]/xenapi_vhd_coalesce_poll_interval
[DEFAULT]/xenapi_check_host
[DEFAULT]/xenapi_vhd_coalesce_max_attempts
[DEFAULT]/xenapi_sr_base_path
[DEFAULT]/target_host
[DEFAULT]/target_port
[DEFAULT]/iqn_prefix
[DEFAULT]/xenapi_remap_vbd_dev
[DEFAULT]/xenapi_remap_vbd_dev_prefix
[DEFAULT]/xenapi_torrent_base_url
[DEFAULT]/xenapi_torrent_seed_chance
[DEFAULT]/xenapi_torrent_seed_duration
[DEFAULT]/xenapi_torrent_max_last_accessed
[DEFAULT]/xenapi_torrent_listen_port_start
[DEFAULT]/xenapi_torrent_listen_port_end
[DEFAULT]/xenapi_torrent_download_stall_cutoff
[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host
[DEFAULT]/use_join_force
[DEFAULT]/xenapi_ovs_integration_bridge
[DEFAULT]/cache_images
[DEFAULT]/xenapi_image_compression_level
[DEFAULT]/default_os_type
[DEFAULT]/block_device_creation_timeout
[DEFAULT]/max_kernel_ramdisk_size
[DEFAULT]/sr_matching_filter
[DEFAULT]/xenapi_sparse_copy
[DEFAULT]/xenapi_num_vbd_unplug_retries
[DEFAULT]/xenapi_torrent_images
[DEFAULT]/xenapi_ipxe_network_name
[DEFAULT]/xenapi_ipxe_boot_menu_url
[DEFAULT]/xenapi_ipxe_mkisofs_cmd
[DEFAULT]/xenapi_running_timeout
[DEFAULT]/xenapi_vif_driver
[DEFAULT]/xenapi_image_upload_handler

You need to edit /etc/nova/nova.conf and set the new option names. In my case, I had to replace the logdir option with log_dir:

sed -i -e "s/logdir/log_dir/g" /etc/nova/nova.conf

We upgrade the database schema:

nova-manage db sync

We can start the Nova services:

service nova-consoleauth start
service nova-novncproxy start
service nova-cert start
service nova-conductor start
service nova-scheduler start
service nova-api start

Nova compute

We stop the nova services:

service nova-compute stop
service nova-api-metadata stop
service nova-network stop

We install some dependencies and the new version of Nova:

apt-get install python-six nova-compute-kvm

We upgrade the database schema:

nova-manage db sync

Icehouse brings in libguestfs as a new requirement. Installing the Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended to set libvirt_inject_partition=-2 on Havana compute nodes before starting the package upgrade.
On the Compute nodes, edit /etc/nova/nova.conf and add:

libvirt_inject_partition=-2

As we have seen, many options are marked as deprecated in this release; see nova.conf.sample for their replacements. You need to edit /etc/nova/nova.conf and set the new option names. In my case, I had to replace the libvirt_cpu_mode, libvirt_cpu_model, libvirt_type, libvirt_use_virtio_for_bridges and logdir options:

sed -i -e "s/libvirt_cpu_mode/cpu_mode/g" /etc/nova/nova.conf
sed -i -e "s/libvirt_cpu_model/cpu_model/g" /etc/nova/nova.conf
sed -i -e "s/libvirt_type/virt_type/g" /etc/nova/nova.conf
sed -i -e "s/libvirt_use_virtio_for_bridges/use_virtio_for_bridges/g" /etc/nova/nova.conf
sed -i -e "s/logdir/log_dir/g" /etc/nova/nova.conf
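These substitutions can also be grouped into a single sed invocation. A sketch (GNU sed assumed, `rename_deprecated_opts` is a hypothetical helper) with the longest pattern first, so each option matches its own rule:

```shell
# Hypothetical helper: apply the Havana -> Icehouse option renames in one
# pass over the given file, longest pattern first.
rename_deprecated_opts() {
  sed -i \
    -e 's/libvirt_cpu_model/cpu_model/g' \
    -e 's/libvirt_cpu_mode/cpu_mode/g' \
    -e 's/libvirt_type/virt_type/g' \
    -e 's/libvirt_use_virtio_for_bridges/use_virtio_for_bridges/g' \
    -e 's/logdir/log_dir/g' \
    "$1"
}

# Only touch the real file if it is present and writable on this node.
if [ -w /etc/nova/nova.conf ]; then
  rename_deprecated_opts /etc/nova/nova.conf
fi
```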

We can start the Nova services:

service nova-compute start
service nova-api-metadata start
service nova-network start

We must not forget to unset the compute=icehouse-compat option that we set previously.
Find the [upgrade_levels] section in /etc/nova/nova.conf, comment the option out, and restart the nova controller services so the change takes effect:

[upgrade_levels]
# Set a version cap for messages sent to compute services. If
# you plan to do a live upgrade from havana to icehouse, you
# should set this option to "icehouse-compat" before beginning
# the live upgrade procedure. (string value)
#compute=icehouse-compat
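Commenting the option out can also be scripted across controllers; a sketch (GNU sed assumed, `unset_compute_cap` is a hypothetical helper):

```shell
# Hypothetical helper: comment out the icehouse-compat version cap.
unset_compute_cap() {
  sed -i 's/^compute=icehouse-compat/#compute=icehouse-compat/' "$1"
}

# Only touch the real file if it is present and writable on this node.
if [ -w /etc/nova/nova.conf ]; then
  unset_compute_cap /etc/nova/nova.conf
fi
```

Run it on every controller node once all compute nodes are on Icehouse.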

Horizon

There is nothing special to do to upgrade the Horizon service: just upgrade the package and restart the Apache HTTP server:

apt-get install openstack-dashboard
service apache2 restart


Troubleshooting

#1

Problem:

root@controller:~/backup-havana/keystone-havana# apt-get install keystone
Reading package lists... Done
Building dependency tree
Reading state information... Done
keystone is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 102 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? Y
Setting up keystone (1:2014.1-0ubuntu1~cloud1) ...
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 37, in <module>
    from keystone import cli
  File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 23, in <module>
    from keystone.common import sql
  File "/usr/lib/python2.7/dist-packages/keystone/common/sql/__init__.py", line 17, in <module>
    from keystone.common.sql.core import *
  File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 35, in <module>
    from keystone.openstack.common.db.sqlalchemy import models
  File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/models.py", line 32, in <module>
    class ModelBase(six.Iterator):
AttributeError: 'module' object has no attribute 'Iterator'
dpkg: error processing keystone (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 keystone
E: Sub-process /usr/bin/dpkg returned an error code (1)

Solution:

apt-get install python-six

#2

Problem:

2014-04-29 10:52:37.210 22541 ERROR stevedore.extension [-] Could not load 'rabbit': (Babel 0.9.6 (/usr/lib/python2.7/dist-packages), Requirement.parse('Babel>=1.3'))
2014-04-29 10:52:37.210 22541 ERROR stevedore.extension [-] (Babel 0.9.6 (/usr/lib/python2.7/dist-packages), Requirement.parse('Babel>=1.3'))

Solution:

apt-get install python-babel 

#3

Problem:

root@controller:/etc/glance# glance-manage db_sync
2014-04-29 11:01:16.081 23930 CRITICAL glance [-] ValueError: Tables "image_locations,image_members,image_properties,image_tags,images,migrate_version" have non utf8 collation, please make sure all tables are CHARSET=utf8

Solution:
For each table:

mysql -u root -p
mysql> use glance
mysql> ALTER DATABASE glance DEFAULT CHARACTER SET utf8;
mysql> SET FOREIGN_KEY_CHECKS=0;
mysql> ALTER TABLE images CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

#4

Problem:
In /var/log/glance/registry.log:

2014-04-29 11:00:06.360 23667 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 101, in _load_one_plugin
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension     plugin = ep.load()
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1988, in load
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension     if require: self.require(env, installer)
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2001, in require
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension     working_set.resolve(self.dist.requires(self.extras),env,installer))
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 588, in resolve
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension     raise VersionConflict(dist,req) # XXX put more info here
2014-04-29 11:00:06.360 23667 TRACE stevedore.extension VersionConflict: (python-keystoneclient 0.3.2 (/usr/lib/python2.7/dist-packages), Requirement.parse('python-keystoneclient>=0.7.0'))

Solution:

apt-get install python-keystoneclient

#5

Problem:

2014-04-29 11:38:38.390 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`
2014-04-29 11:38:38.428 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`
2014-04-29 11:38:38.430 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`
2014-04-29 11:38:38.432 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`
2014-04-29 11:38:38.437 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`
2014-04-29 11:38:38.440 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`
2014-04-29 11:38:38.441 26395 WARNING glance.notifier [-] notifier_strategy was deprecated in favor of `notification_driver`

Solution:
In /etc/glance/glance-api.conf:

notification_driver = noop 