MySQL Cluster
MySQL cluster topology

MySQL Cluster is a database technology that uses a cluster of MySQL servers to achieve high availability and high throughput while still maintaining very low latency. MySQL Cluster uses a different storage engine, NDBCLUSTER (also known as NDB).

To deploy MySQL Cluster, we test with 5 servers and 3 types of node: the management node, the data node, and the SQL node.

1. Management Node – NDB_MGMD/MGM
-This is the server that manages the other nodes of the cluster. We can monitor and manage the cluster nodes from the management node.

2. Data Nodes – NDBD/NDB
-These are the servers where the data is stored and where synchronization and replication between nodes take place.

3. SQL Nodes – MySQLD/API
-This is the server that applications use as an interface to connect to the database cluster.

For this guide we will use 5 CentOS 7 virtual machines running on VMware; an optional /etc/hosts sketch follows the list below.
1. Management Node
db1.inertz.org 192.168.0.133
2. Data Node
db2.inertz.org 192.168.0.134
db3.inertz.org 192.168.0.135
3. SQL Node
db4.inertz.org 192.168.0.136
db5.inertz.org 192.168.0.137
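
Optionally, the nodes can resolve each other by hostname if these entries are added to /etc/hosts on every server; a minimal sketch using the addresses above:

192.168.0.133   db1.inertz.org db1
192.168.0.134   db2.inertz.org db2
192.168.0.135   db3.inertz.org db3
192.168.0.136   db4.inertz.org db4
192.168.0.137   db5.inertz.org db5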

Management Node

The first server we need to set up is the management node. For this we need to download the MySQL Cluster software first. You can get it here: https://dev.mysql.com/downloads/cluster/. For this test we will use the RPM bundle from the download page. We will download the Red Hat Enterprise Linux 7 build because it is compatible with CentOS 7. The direct link to the download is: https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster-community-8.0.21-1.el7.x86_64.rpm-bundle.tar. It is easiest to download it with wget and extract it directly in the root folder.
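
For example, a minimal sketch assuming the bundle is downloaded and extracted in /root:

cd /root
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster-community-8.0.21-1.el7.x86_64.rpm-bundle.tar
tar -xvf mysql-cluster-community-8.0.21-1.el7.x86_64.rpm-bundle.tar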

The following package needs to be installed, and the conflicting mariadb-libs package removed, first.

yum -y install perl-Data-Dumper
yum -y remove mariadb-libs

After extraction, install the RPM packages using the commands below:

rpm -Uvh mysql-cluster-community-common-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-libs-8.0.21-1.el7.x86_64.rpm
yum install yum-protectbase -y

*Optional – yum-protectbase prevents core RPM packages from being replaced by packages from other repositories.

yum install epel-release -y 

*epel-release – required to satisfy dependencies of the next RPM packages.

yum localinstall  mysql-cluster-community-client-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-server-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-management-server-8.0.21-1.el7.x86_64.rpm

Now we need to configure the MySQL cluster. Create a new directory for the cluster configuration and create the config.ini file inside it.

mkdir -p /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
nano -w config.ini

 

[ndb_mgmd default]
# Directory for MGM node log files
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
#Management Node db1
HostName=192.168.0.133
 
[ndbd default]
NoOfReplicas=2      # Number of replicas
DataMemory=256M     # Memory allocated for data storage
IndexMemory=128M    # Memory allocated for index storage
#Directory for Data Node
DataDir=/var/lib/mysql-cluster
 
[ndbd]
#Data Node db2
HostName=192.168.0.134
 
[ndbd]
#Data Node db3
HostName=192.168.0.135
 
[mysqld]
#SQL Node db4
HostName=192.168.0.136
 
[mysqld]
#SQL Node db5
HostName=192.168.0.137

Save the file.

Then start the management node.

[root@db1 mysql-cluster]# ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-8.0.21 ndb-8.0.21
2020-09-03 17:58:46 [MgmtSrvr] INFO     -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2020-09-03 17:58:46 [MgmtSrvr] INFO     -- Sucessfully created config directory
2020-09-03 17:58:46 [MgmtSrvr] WARNING  -- at line 12: [DB] IndexMemory is deprecated, will use Number bytes on each ndbd(DB) node allocated for storing indexes instead

Check the cluster status.

[root@db1 mysql-cluster]# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.0.134)
id=3 (not connected, accepting connect from 192.168.0.135)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.133  (mysql-8.0.21 ndb-8.0.21)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.0.136)
id=5 (not connected, accepting connect from 192.168.0.137)

We need to create a startup script so the management server starts automatically on boot. Since we set up the server manually, we have to create an init script inside 'init.d'.

nano -w /etc/init.d/ndb_mgmd

 

#!/bin/bash
# chkconfig: 345 99 01
# description: MySQL Cluster management server start/stop script

STARTMGM="ndb_mgmd --config-file=/var/lib/mysql-cluster/config.ini"

start() {
        $STARTMGM
}
stop() {
       killall -15 ndb_mgmd
       sleep 1
       killall -9 ndb_mgmd
}
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart|reload)
        stop
        start
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac
Make the script executable and enable it to auto start during boot.

chmod 755 /etc/init.d/ndb_mgmd
chkconfig ndb_mgmd on
chkconfig --level 345 ndb_mgmd on
systemctl start ndb_mgmd
systemctl status ndb_mgmd

The management node setup is now complete. We can reboot the server and check the cluster status again after the reboot.
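
For example, a quick sketch of the post-reboot check on the management node:

reboot
# after the server is back online:
systemctl status ndb_mgmd
ndb_mgm -e show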

Data Node

For the data nodes, we will install a similar set of RPM packages plus the additional data node package. We will set up two data node servers, db2 and db3.

Install and remove the required packages first, then install the cluster RPM packages, including the data node package.

yum -y install perl-Data-Dumper
yum -y remove mariadb-libs
rpm -Uvh mysql-cluster-community-common-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-libs-8.0.21-1.el7.x86_64.rpm
yum -y install yum-protectbase
yum -y install epel-release
yum localinstall  mysql-cluster-community-client-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-server-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-data-node-8.0.21-1.el7.x86_64.rpm

After that, we need to edit my.cnf so the data node can connect to the management server.

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.133    # IP address of Management Node
 
[mysql_cluster]
ndb-connectstring=192.168.0.133     # IP address of Management Node

Then create the database cluster data folder.

mkdir -p /var/lib/mysql-cluster

Make sure firewall port 1186 is open and SELinux is disabled on all nodes.
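
A minimal sketch for this, assuming firewalld is the active firewall (run on every node):

firewall-cmd --permanent --add-port=1186/tcp
firewall-cmd --reload
# setenforce 0 disables SELinux enforcement until reboot; the sed makes it permanent
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config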

Run the 'ndbd' command on both nodes; you should receive output similar to this:

[root@db2 ~]# ndbd
2020-08-25 16:09:55 [ndbd] INFO     -- Angel connected to '192.168.0.133:1186'
2020-08-25 16:09:55 [ndbd] INFO     -- Angel allocated nodeid: 2
[root@db3 ~]# ndbd
2020-08-25 16:23:09 [ndbd] INFO     -- Angel connected to '192.168.0.133:1186'
2020-08-25 16:23:09 [ndbd] INFO     -- Angel allocated nodeid: 3

Now check the cluster status on the management node; the data nodes should show as connected.

[root@db1 mysql-cluster]# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.134  (mysql-8.0.21 ndb-8.0.21, Nodegroup: 0, *)
id=3    @192.168.0.135  (mysql-8.0.21 ndb-8.0.21, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.133  (mysql-8.0.21 ndb-8.0.21)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.0.136)
id=5 (not connected, accepting connect from 192.168.0.137)


Since the servers were set up manually, we need to create a startup script on both data node servers.

nano -w /etc/init.d/ndbd

 

#!/bin/bash
# chkconfig: 345 99 01
# description: MySQL DATA Node start/stop script

STARTDATA="ndbd"

start() {
        $STARTDATA
}
stop() {
       killall -15 ndbd
       sleep 1
       killall -9 ndbd
}
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart|reload)
        stop
        start
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac

Make the script executable and enable it to auto start during boot.

chmod 755 /etc/init.d/ndbd
chkconfig ndbd on
chkconfig --level 345 ndbd on
systemctl start ndbd
systemctl status ndbd
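
Optionally, we can confirm that the DataMemory and IndexMemory settings from config.ini are in effect by requesting a memory usage report from the management client; a sketch (run on the management node):

ndb_mgm -e "all report memoryusage"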

SQL/API Node

The last type of server is the easiest to install because we do not need to set up a startup script. We will install it on db4 and db5.

The following package needs to be installed, and the conflicting mariadb-libs package removed, first.

yum -y install perl-Data-Dumper
yum -y remove mariadb-libs

Then install the required packages.

rpm -Uvh mysql-cluster-community-common-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-libs-8.0.21-1.el7.x86_64.rpm
yum install yum-protectbase
yum install epel-release
yum localinstall  mysql-cluster-community-client-8.0.21-1.el7.x86_64.rpm
rpm -Uvh mysql-cluster-community-server-8.0.21-1.el7.x86_64.rpm

Edit the my.cnf file on each SQL node.

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.133      # IP address for server management node
default_storage_engine=ndbcluster     # Define default Storage Engine used by MySQL
 
[mysql_cluster]
ndb-connectstring=192.168.0.133       # IP address for server management node

Then reboot both servers.
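
After the reboot, the mysqld service should be running on both SQL nodes. The sketch below is an assumption based on default MySQL 8.0 server behaviour on CentOS 7, where a temporary root password is written to /var/log/mysqld.log on first start; it shows how to confirm the service and retrieve that password:

systemctl start mysqld
systemctl status mysqld
grep 'temporary password' /var/log/mysqld.log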

The MySQL database cluster setup is complete once all nodes have connected to the cluster, which can be confirmed with the following command:

[root@db1 mysql-cluster]# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.134  (mysql-8.0.21 ndb-8.0.21, Nodegroup: 0, *)
id=3    @192.168.0.135  (mysql-8.0.21 ndb-8.0.21, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.133  (mysql-8.0.21 ndb-8.0.21)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.136  (mysql-8.0.21 ndb-8.0.21)
id=5    @192.168.0.137  (mysql-8.0.21 ndb-8.0.21)
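
As a final check, we can verify that data written through one SQL node is visible from the other. Below is a minimal sketch using a hypothetical test database and table, assuming a root password has already been set on both SQL nodes:

# on db4
mysql -u root -p -e "CREATE DATABASE cluster_test;
CREATE TABLE cluster_test.t1 (id INT NOT NULL PRIMARY KEY) ENGINE=NDBCLUSTER;
INSERT INTO cluster_test.t1 VALUES (1);"

# on db5, the same row should be returned
mysql -u root -p -e "SELECT * FROM cluster_test.t1;"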

Ref : https://www.howtoforge.com/tutorial/how-to-install-and-configure-mysql-cluster-on-centos-7/
