This document outlines steps and notes for operators to reference when upgrading Heat from a previous version of OpenStack.
Note
The procedures in this document have only been tested when upgrading between sequential releases.
Read and ensure you understand the release notes for the next release.
Make a backup of your database (an example follows this note).
Upgrades are only supported one series at a time, or within a series.
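For example, a minimal backup sketch, assuming a MySQL or MariaDB backend and a database named heat (the credentials and output path are illustrative):
# Dump the heat database to a dated file before upgrading
mysqldump --single-transaction -u root -p heat > heat-backup-$(date +%F).sql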
Heat already supports "cold upgrades", where the Heat services have to be down during the upgrade. For time-consuming upgrades, it may be unacceptable for the services to be unavailable for a long period of time; in that case, consider the rolling upgrade procedure described below. A cold upgrade itself is quite simple; follow the steps below (a command sketch follows the list):
Stop all heat-api and heat-engine services.
Uninstall old code.
Install new code.
Update configurations.
Run the database sync (the most time-consuming step).
Start all heat-api and heat-engine services.
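A minimal sketch of this sequence, assuming systemd unit names heat-api and heat-engine and a pip-based installation (openstack-heat is the package name on PyPI); unit and package names vary by distribution:
# Stop all Heat services on this node
systemctl stop heat-api heat-engine
# Swap the old code for the new release
pip uninstall -y openstack-heat
pip install openstack-heat==<new_version>
# Update /etc/heat/heat.conf as described in the release notes, then
# sync the database schema (the most time-consuming step)
heat-manage db_sync
# Bring the services back up
systemctl start heat-api heat-engine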
Note
Rolling upgrades are supported since Pike, which means operators can perform a rolling upgrade of Heat services from the Ocata to the Pike release with minimal downtime.
A rolling upgrade provides a better experience for the users and operators of the cloud: individual heat-api and heat-engine services are upgraded one at a time, while the rest of the services remain available, so downtime is minimal. Please see the spec about rolling upgrades for details.
A rolling upgrade has the following prerequisites:
Multiple Heat nodes.
A load balancer or some other type of redirection device in front of the nodes that run heat-api services, used in such a way that a node can be dropped out of rotation; the node continues running the Heat services (heat-api or heat-engine) but no longer has requests routed to it (an illustrative example follows).
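For instance, with HAProxy a node can be drained through the admin socket; the backend name heat_api, server name node1, and socket path below are all illustrative:
# Take the node out of rotation before upgrading it
echo "disable server heat_api/node1" | socat stdio /var/run/haproxy.sock
# Put it back once the upgrade of that node is complete
echo "enable server heat_api/node1" | socat stdio /var/run/haproxy.sock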
The following steps describe the process to upgrade Heat with minimal downtime:
Install the code for the next version of Heat, including all of the Python dependencies, either in a virtual environment or on a separate control plane node.
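For example, a hedged sketch of staging the new code in a virtual environment (the path and the PyPI package name openstack-heat are assumptions; adjust for your installation method):
# Create a dedicated virtual environment for the new release
python3 -m venv /opt/heat-new
/opt/heat-new/bin/pip install openstack-heat==<new_version>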
Using the newly installed heat code, run the following command to sync the database up to the most recent version. These schema change operations should have minimal or no effect on performance, and should not cause any operations to fail.
heat-manage db_sync
At this point, new columns and tables may exist in the database. These DB schema changes are done in a way that both the N and N+1 release can perform operations against the same schema.
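If the new code was staged in a virtual environment as above, the sync can be run from there against the existing configuration (the paths are illustrative):
# Run the schema sync using the newly installed code
/opt/heat-new/bin/heat-manage --config-file /etc/heat/heat.conf db_sync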
Create a new RabbitMQ vhost for the new release and change the transport_url configuration in the heat.conf file to:
transport_url = rabbit://<user>:<password>@<host>:5672/<new_vhost>
for all services being upgraded.
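For example, the vhost can be created with rabbitmqctl (the vhost name heat_new and the user heat are illustrative):
# Create the new vhost and grant the Heat messaging user full access
rabbitmqctl add_vhost heat_new
rabbitmqctl set_permissions -p heat_new heat ".*" ".*" ".*"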
Stop heat-engine gracefully; Heat supports graceful shutdown (see the spec about rolling upgrades). Then start a new heat-engine with the new code (and the corresponding configuration).
Note
Remember to complete Step 4 first; this ensures that the existing engines do not communicate with the new engine.
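A hedged sketch of this step, assuming a systemd unit named heat-engine for the old code and the virtual environment staged earlier (systemd sends SIGTERM on stop, which Heat handles as a graceful shutdown):
# Gracefully stop the old engine
systemctl stop heat-engine
# Start the new engine with the configuration that points at the new
# vhost (in production, run it under your service manager instead)
/opt/heat-new/bin/heat-engine --config-file /etc/heat/heat.conf &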
A heat-api service is then upgraded and started with the new RabbitMQ vhost.
Note
Alternatively, switch the heat-api service to use the new vhost first (but remember not to shut down heat-api), then upgrade it.
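An illustrative sketch of upgrading one API node, combining the load balancer drain from the prerequisites with the new code and vhost (the names and paths are assumptions):
# Drop the node from the load balancer rotation
echo "disable server heat_api/node1" | socat stdio /var/run/haproxy.sock
# Stop the old API and start the new one against the new vhost
systemctl stop heat-api
/opt/heat-new/bin/heat-api --config-file /etc/heat/heat.conf &
# Return the node to rotation
echo "enable server heat_api/node1" | socat stdio /var/run/haproxy.sock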
Repeat the above process until all heat-api and heat-engine services are upgraded.
Note
Make sure that all heat-api services have been upgraded before you start to upgrade the last heat-engine service.
Warning
With the convergence architecture, whenever a resource completes, the engine sends RPC messages to another (or the same) engine to start work on the next resource(s) to be processed. If the last engine is shut down gracefully, it will finish what it is working on, which may post more messages to queues; that is, the graceful shutdown does not wait for queues to drain. This can leave messages unprocessed, and any IN_PROGRESS stacks would get stuck without any forward progress. The operator must therefore be careful when shutting down the last engine: make sure the queues have no unprocessed messages before doing so. The queues can be checked directly with RabbitMQ's management plugin.
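Alternatively, a hedged check with rabbitmqctl (part of the standard RabbitMQ CLI; the old vhost name heat_old is illustrative); every queue should report zero messages before the last engine is stopped:
# List each queue in the old vhost with its pending message count
rabbitmqctl list_queues -p heat_old name messages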
Once all services are upgraded, double-check the database and the services.
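For example, these heat-manage commands print the current database schema version and the status of all registered heat-engine services:
heat-manage db_version
heat-manage service list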