For maintenance and upgrades, the server, or some part of it, may need to be shut down. There are two typical scenarios:
- Server-wide shutdown, where the entire server needs to be restarted.
- Individual application retirement, which does not cause the server to stop running.
There are times when the entire server might need to be shut down and later restarted in a similar state.
To tell the server to shut down in a controlled way, a message must be sent to all LoginApps. This may be in the form of a Watcher message or a USR1 signal. The easiest way to do this is to run the control_cluster.py script with the stop option, or to use WebConsole. For more details, see Control Cluster.
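The signal-based method relies on each process installing a handler for USR1. The following is a minimal, generic sketch of that mechanism, not BigWorld's actual implementation: a POSIX process catches the signal and sets a flag that its main loop would act on to begin a controlled shutdown.

```python
import os
import signal

shutdown_requested = False

def handle_usr1(signum, frame):
    """Record that a controlled shutdown was requested."""
    global shutdown_requested
    shutdown_requested = True

# Install the handler, then simulate an operator sending the signal
# (in practice the signal comes from kill or a control script).
signal.signal(signal.SIGUSR1, handle_usr1)
os.kill(os.getpid(), signal.SIGUSR1)

print(shutdown_requested)  # the main loop would now begin a controlled shutdown
```

Note that SIGUSR1 is POSIX-specific; this sketch will not run on Windows.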
Controlled startup and shutdown only work when the underlying database is MySQL; the XML database does not support this feature.
If the previous run ended with a controlled shutdown, the server automatically restores the state of that run at startup. If the server instead failed unexpectedly and was shut down in an uncontrolled manner, it starts up using the disaster recovery information. Entities that are marked for auto-load are re-loaded at server startup. The auto-load data can be cleared using the ClearAutoLoad tool, which resets the server's persistent state to an initial empty state. For more information about the ClearAutoLoad tool, refer to The ClearAutoLoad tool.
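The startup decision above amounts to a simple branch. This sketch uses illustrative names only; the real server derives the previous run's outcome from its database, not from a boolean flag:

```python
def choose_startup_source(controlled_shutdown: bool) -> str:
    """Pick which persisted state to restore at startup (illustrative only)."""
    if controlled_shutdown:
        # Previous run ended cleanly: resume from its saved state.
        return "previous-run-state"
    # Uncontrolled failure: fall back to disaster recovery information.
    return "disaster-recovery"

print(choose_startup_source(True))   # → previous-run-state
print(choose_startup_source(False))  # → disaster-recovery
```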
The main information that is restored from the database is:
- Spaces and their data
- Game time
- Which auto-loaded entities should be in each space
When using MySQL as the underlying database, this information is stored in the following tables:
| Data | Table name |
| --- | --- |
| Spaces and their data | bigworldSpaces, bigworldSpaceData |
| Game time | bigworldGameTime |
| Online entities | bigworldLogOns |
For details on these tables, see the Server Programming Guide's section MySQL Database Schema → Non-Entity Tables.
For details on the related scripting, see the Server Programming Guide's chapter Controlled Startup and Shutdown.
Individual BaseApps and CellApps can be retired. This can be useful if maintenance is required on a single machine where only BaseApps and CellApps are running.
When a BaseApp retires, its base entities and proxy entities are offloaded to other BaseApps, and connected clients are transparently reconnected to another BaseApp.
It may take some time for the base entities and proxy entities to be offloaded from the retiring BaseApp to other BaseApps. The retiring BaseApp will shut down once these entities have been successfully offloaded and a new backup cycle has completed, ensuring no data loss.
If another BaseApp terminates unexpectedly while the retirement is in progress, any entities that were offloaded to the dead BaseApp will be re-offloaded onto another BaseApp.
For redundancy reasons, it is recommended that retirement only be performed when at least two other BaseApps are running. This ensures that in the event of an unexpected BaseApp termination during retirement, every base entity in the system remains adequately backed up.
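The offload behaviour during retirement can be pictured as redistributing the retiring app's entities across the survivors. This is a conceptual sketch only, not BigWorld's code; the round-robin placement and all names below are illustrative:

```python
from collections import defaultdict
from itertools import cycle

def retire_baseapp(apps: dict, retiring: str) -> dict:
    """Redistribute a retiring app's entities across the remaining apps.

    `apps` maps app name -> list of entity ids. Conceptual sketch only:
    the real server also waits for a backup cycle before shutting down.
    """
    survivors = [name for name in apps if name != retiring]
    if not survivors:
        raise RuntimeError("cannot retire the last BaseApp")
    result = defaultdict(list)
    for name in survivors:
        result[name].extend(apps[name])
    # Offload the retiring app's entities round-robin onto the survivors.
    targets = cycle(survivors)
    for entity in apps[retiring]:
        result[next(targets)].append(entity)
    return dict(result)

apps = {"baseapp01": [1, 2], "baseapp02": [3], "baseapp03": [4, 5, 6]}
print(retire_baseapp(apps, "baseapp03"))
# → {'baseapp01': [1, 2, 4, 6], 'baseapp02': [3, 5]}
```

Every entity survives the retirement; only its hosting BaseApp changes, which is why clients can be reconnected transparently.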
CellApps can also be retired. Each cell administered by the retiring CellApp gradually shrinks in size, disappearing once its area reaches zero. Once there are no cells left, the CellApp shuts itself down.
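The gradual shrink can be illustrated with a toy loop. This is purely conceptual; in the real server, shrinking a cell means handing its area and entities over to neighbouring cells, not decrementing a number:

```python
def retire_cellapp(cell_areas, shrink_step=10.0):
    """Shrink each cell's area step by step until all reach zero.

    Returns the number of steps until the CellApp can shut itself down.
    Toy model: `shrink_step` and the linear schedule are illustrative.
    """
    areas = list(cell_areas)
    steps = 0
    while any(a > 0 for a in areas):
        areas = [max(0.0, a - shrink_step) for a in areas]
        steps += 1
    return steps

print(retire_cellapp([25.0, 40.0]))  # → 4
```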
A BaseApp or a CellApp can be individually retired using WebConsole's ClusterControl module, by selecting Retire App from the action menu of the BaseApp or CellApp to be retired. Refer to the section on WebConsole.
A BaseApp or a CellApp can also be individually retired with the ControlCluster command-line tool's retireproc command (see Server Command-Line Utilities). For example:
$ ./control_cluster.py retireproc cellapp01
$ ./control_cluster.py retireproc baseapp03