MySQL Cluster with 2 Management Nodes
I plan to create a MySQL Cluster with two management nodes. If you read my earlier article on MySQL Cluster, we used only one management node. But what happens if we need another management node, in case the primary one requires maintenance or has to be shut down?

The configuration is different this time because there are two management nodes. I will also be using 10 servers: 2 management nodes, 4 data nodes, and 4 MySQL (SQL) nodes. For the purpose of simulation, I will be using VMware.

For easier administration, I will define a hostname for each server, so I can refer to hostnames in the config. I just need to edit the /etc/hosts file on all 10 servers:

192.168.0.240 ctrl1 ctrl1.inertz.org
192.168.0.231 ctrl2 ctrl2.inertz.org
192.168.0.232 meta1 meta1.inertz.org
192.168.0.233 meta2 meta2.inertz.org
192.168.0.234 meta3 meta3.inertz.org
192.168.0.235 meta4 meta4.inertz.org
192.168.0.236 mysql1 mysql1.inertz.org
192.168.0.237 mysql2 mysql2.inertz.org
192.168.0.238 mysql3 mysql3.inertz.org
192.168.0.239 mysql4 mysql4.inertz.org
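
Rather than editing each server by hand, you can push the same entries everywhere. A minimal sketch, assuming the ten lines above are saved in a file called cluster-hosts.txt (a name I made up here) and root SSH access to each server is available:

# Append the cluster host entries on every other server (run from ctrl1).
for h in 192.168.0.231 192.168.0.232 192.168.0.233 192.168.0.234 \
         192.168.0.235 192.168.0.236 192.168.0.237 192.168.0.238 192.168.0.239; do
    ssh root@$h "cat >> /etc/hosts" < cluster-hosts.txt
done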

With everything running, the cluster looks like this:

[root@ctrl1 ~]# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=3 @192.168.0.232 (mysql-8.0.21 ndb-8.0.21, Nodegroup: 0, *)
id=4 @192.168.0.233 (mysql-8.0.21 ndb-8.0.21, Nodegroup: 0)
id=5 @192.168.0.234 (mysql-8.0.21 ndb-8.0.21, Nodegroup: 1)
id=6 @192.168.0.235 (mysql-8.0.21 ndb-8.0.21, Nodegroup: 1)

[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.240 (mysql-8.0.21 ndb-8.0.21)
id=2 @192.168.0.231 (mysql-8.0.21 ndb-8.0.21)

[mysqld(API)] 4 node(s)
id=7 @192.168.0.236 (mysql-8.0.21 ndb-8.0.21)
id=8 @192.168.0.237 (mysql-8.0.21 ndb-8.0.21)
id=9 @192.168.0.238 (mysql-8.0.21 ndb-8.0.21)
id=10 @192.168.0.239 (mysql-8.0.21 ndb-8.0.21)

The configuration is basically the same as before; you just add the extra nodes. This is /var/lib/mysql-cluster/config.ini on ctrl1:

[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
# Management node 1
NodeId=1
HostName=ctrl1
[ndb_mgmd]
# Management node 2
NodeId=2
HostName=ctrl2

[ndbd default]
NoOfReplicas=2      # Number of replicas
DataMemory=256M     # Memory allocated for data storage
IndexMemory=128M    # Deprecated; in NDB 8.0 index memory is allocated from DataMemory
# Directory for data node files
DataDir=/var/lib/mysql-cluster

[ndbd]
# Data node meta1
HostName=192.168.0.232
[ndbd]
# Data node meta2
HostName=192.168.0.233
[ndbd]
# Data node meta3
HostName=192.168.0.234
[ndbd]
# Data node meta4
HostName=192.168.0.235

[mysqld]
# SQL node mysql1
HostName=192.168.0.236
[mysqld]
# SQL node mysql2
HostName=192.168.0.237
[mysqld]
# SQL node mysql3
HostName=192.168.0.238
[mysqld]
# SQL node mysql4
HostName=192.168.0.239
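
One related change on the data and SQL nodes: their connect string must now list both management nodes, otherwise they can only ever register with one of them. A minimal sketch of the relevant /etc/my.cnf sections (your file layout may differ):

# On each SQL node (mysql1..mysql4)
[mysqld]
ndbcluster
ndb-connectstring=ctrl1,ctrl2   # list BOTH management nodes

# Read by ndbd on the data nodes (meta1..meta4) as well
[mysql_cluster]
ndb-connectstring=ctrl1,ctrl2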

What is different this time is how we start the management nodes.

The startup script for the first management node (ctrl1) is below:

[root@ctrl1 mysql-cluster]# cat /etc/init.d/ndb_mgmd
#!/bin/bash
# chkconfig: 345 99 01
# description: MySQL Cluster management server start/stop script

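# -c sets the connect string, --ndb-nodeid pins this process to node 1,
# and --initial forces ndb_mgmd to re-read config.ini instead of using
# its cached configuration.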
STARTMGM="ndb_mgmd -c ctrl1 --ndb-nodeid=1 --config-file=/var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster --initial"

start() {
        $STARTMGM
}
stop() {
       killall -15 ndb_mgmd
       sleep 1
       killall -9 ndb_mgmd
}
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart|reload)
        stop
        start
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac
[root@ctrl1 mysql-cluster]#
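
Install the script the usual way; the chkconfig header makes it easy to register on RHEL/CentOS-style systems:

chmod +x /etc/init.d/ndb_mgmd
chkconfig --add ndb_mgmd
service ndb_mgmd start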

And for the second management node (ctrl2), we use the script below:

[root@ctrl2 ~]# cat /etc/init.d/ndb_mgmd
#!/bin/bash
# chkconfig: 345 99 01
# description: MySQL Cluster management server start/stop script

STARTMGM="ndb_mgmd -c ctrl2 --ndb-nodeid=2 --configdir=/var/lib/mysql-cluster"

start() {
        $STARTMGM
}
stop() {
       killall -15 ndb_mgmd
       sleep 1
       killall -9 ndb_mgmd
}
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart|reload)
        stop
        start
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac
[root@ctrl2 ~]#
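
To confirm the redundancy actually works, stop the management daemon on ctrl1 and check that the cluster can still be administered through ctrl2:

[root@ctrl1 ~]# service ndb_mgmd stop
[root@ctrl1 ~]# ndb_mgm -c ctrl2 -e show

All data and SQL nodes should still show as connected, now reported by the management server on ctrl2.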
