Hi,
Today I implemented a two-node MySQL NDB cluster with the help of “HowtoForge”. The HowtoForge tutorial is quite old, though, so I made some modifications and prepared this new tutorial with current versions for the open source community.
This guide shows how to configure a MySQL cluster with two DB (storage) nodes and one management node (which also runs MySQL Proxy). The cluster is made highly available by MySQL Proxy, which checks whether the nodes are still alive and splits the requests up across the nodes of the MySQL cluster.
In this document I use Debian 6.0 for the management node and CentOS 6.0 for the DB nodes. The MySQL version I use in this setup is mysql-5.5.25 (MySQL Cluster 7.2.7).
1 My Servers
I use the following servers that are all in the same network (10.0.0.x in this example):
- sql1.unixsurgeon.com:10.0.0.8 MySQL cluster node 1
- sql2.unixsurgeon.com:10.0.0.9 MySQL cluster node 2
- Mgmt.unixsurgeon.com:10.0.0.7 MySQL cluster management server & MySQL Proxy for HA/failover
In addition to that we need a public IP address: 202.0.X.X. It will be assigned to MySQL Proxy so that applications have a single IP address to access the cluster.
Although we have only two MySQL cluster nodes in this cluster, we still need a third node, the MySQL cluster management server, for mainly one reason: if one of the two MySQL cluster nodes fails and the management server is not running, the data on the two cluster nodes can become inconsistent (“split brain”).
So normally we would need five machines for our setup:
2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5
As the MySQL cluster management server does not use many resources and would otherwise just sit there doing nothing, you can put the first load balancer on the same machine, which saves us one machine, so we end up with four machines.
2 Set Up The MySQL Cluster Management Server
First we have to download MySQL Cluster 7.2.7 and install the cluster management server (ndb_mgmd) and the cluster management client (ndb_mgm – it can be used to monitor what's going on in the cluster). The following steps are carried out on Mgmt.unixsurgeon.com (10.0.0.7):
Mgmt.unixsurgeon.com:
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.7-linux2.6-x86_64.tar.gz
tar xvfz mysql-cluster-gpl-7.2.7-linux2.6-x86_64.tar.gz
cd mysql-cluster-gpl-7.2.7-linux2.6-x86_64
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
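As an optional sanity check (just a quick suggestion of mine), you can confirm the binaries are in place; both should report the 7.2.7 cluster version:
ndb_mgm --version
ndb_mgmd --version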
Next, we must create the cluster configuration file, /var/lib/mysql-cluster/config.ini:
Mgmt.unixsurgeon.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=10.0.0.7
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=10.0.0.8
DataDir=/var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=10.0.0.9
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
Please replace the IP addresses in the file appropriately.
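A note on DataMemory/IndexMemory: 80M/18M is fine for this test setup but small for real data. Later, once the cluster is running, you can check how much of it is actually in use from the management client, for example:
ndb_mgm -e "all report memoryusage"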
Then we start the cluster management server:
Mgmt.unixsurgeon.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
It makes sense to automatically start the management server at system boot time, so we create a very simple init script and the appropriate startup links:
Mgmt.unixsurgeon.com:
echo "ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/" > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
chkconfig --add ndb_mgmd (if you get the error "service ndb_mgmd does not support chkconfig", just add "# chkconfig: 2345 95 20" at the top of the /etc/init.d/ndb_mgmd file and run the command again)
chkconfig ndb_mgmd on
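If you want something slightly more robust than the one-liner above, here is a minimal sketch of what /etc/init.d/ndb_mgmd could look like (my own rough version, not an official init script; it assumes killall is available):
#!/bin/sh
# chkconfig: 2345 95 20
# description: MySQL Cluster management server (ndb_mgmd)
case "$1" in
  start)
    # start the management server with our cluster configuration
    ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
    ;;
  stop)
    # stop the management server
    killall ndb_mgmd
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac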
3 Set Up The MySQL Cluster Nodes (Storage Nodes)
Now we install MySQL on both sql1.unixsurgeon.com and sql2.unixsurgeon.com:
sql1.unixsurgeon.com / sql2.unixsurgeon.com:
groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.7-linux2.6-x86_64.tar.gz
tar xvfz mysql-cluster-gpl-7.2.7-linux2.6-x86_64.tar.gz
ln -s mysql-cluster-gpl-7.2.7-linux2.6-x86_64 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
chkconfig mysql.server on
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin
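Again, an optional quick check (my suggestion) that the cluster binaries ended up in /usr/bin; both should print the mysql-5.5.25 / ndb-7.2.7 version:
mysqld --version
ndbd --version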
Then we create the MySQL configuration file /etc/my.cnf on both nodes:
sql1.unixsurgeon.com / sql2.unixsurgeon.com:
vi /etc/my.cnf
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=10.0.0.7

[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=10.0.0.7
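Optionally (an extra suggestion of mine, not required for this setup), you can make NDB the default engine on both SQL nodes so you don't have to remember ENGINE=NDBCLUSTER for every table; add this line to the [mysqld] section:
default-storage-engine=NDBCLUSTER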
Next we create the data directories and start the MySQL server on both cluster nodes:
sql1.unixsurgeon.com / sql2.unixsurgeon.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start
(Please note: we have to run ndbd --initial only when we start MySQL for the first time, and whenever /var/lib/mysql-cluster/config.ini on Mgmt.unixsurgeon.com changes.)
Now is a good time to set a password for the MySQL root user:
sql1.unixsurgeon.com / sql2.unixsurgeon.com:
mysqladmin -u root password yourrootsqlpassword
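Keep in mind that the MySQL privilege tables are local to each SQL node (they are not clustered by default), so any additional users have to be created on each node separately. As a hedged example (appuser and apppassword are placeholders of mine; adjust the host and privileges to your needs), run on both nodes:
mysql -u root -p -e "GRANT ALL PRIVILEGES ON mysqlclustertest.* TO 'appuser'@'%' IDENTIFIED BY 'apppassword';"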
We want to start the cluster nodes at boot time, so we create an ndbd init script and the appropriate system startup links:
sql1.unixsurgeon.com / sql2.unixsurgeon.com:
echo "ndbd" > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
chkconfig ndbd on
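As with ndb_mgmd, you can replace the one-line ndbd script with a slightly fuller sketch (again just a rough suggestion, reusing the chkconfig header trick from above; the start priority is a guess):
#!/bin/sh
# chkconfig: 2345 96 20
# description: MySQL Cluster data node (ndbd)
case "$1" in
  start)
    # start the data node daemon
    ndbd
    ;;
  stop)
    # stop the data node daemon
    killall ndbd
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac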
4 Test The MySQL Cluster
Our MySQL cluster configuration is already finished, so now it's time to test it. On the cluster management server (Mgmt.unixsurgeon.com), run the cluster management client ndb_mgm to check if the cluster nodes are connected:
Mgmt.unixsurgeon.com:
ndb_mgm
You should see this:
-- NDB Cluster -- Management Client --
ndb_mgm>
Now type show; at the command prompt:
show;
The output should be like this:
-- NDB Cluster -- Management Client --
ndb_mgm> show;
Connected to Management Server at: 10.0.0.7:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.0.0.8  (mysql-5.5.25 ndb-7.2.7, Nodegroup: 0, Master)
id=3    @10.0.0.9  (mysql-5.5.25 ndb-7.2.7, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.0.7  (mysql-5.5.25 ndb-7.2.7)

[mysqld(API)]   2 node(s)
id=4    @10.0.0.8  (mysql-5.5.25 ndb-7.2.7)
id=5    @10.0.0.9  (mysql-5.5.25 ndb-7.2.7)

ndb_mgm>
If you see that your nodes are connected, then everything's OK!
Type
quit;
to leave the ndb_mgm client console.
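If you just want a one-shot status check (handy in scripts or monitoring), the same information is available non-interactively:
ndb_mgm -e "show"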
Now we create a test database with a test table and some data on sql1.unixsurgeon.com:
sql1.unixsurgeon.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable () VALUES (1);
SELECT * FROM testtable;
quit;
(Have a look at the CREATE statement: we must use ENGINE=NDBCLUSTER for all database tables that we want to have clustered! If you use another engine, clustering will not work!)
The result of the SELECT statement should be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
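By the way, if you already have tables that were created with another engine, you can convert them to the cluster engine afterwards; a hedged example (assuming the table fits into the configured DataMemory) is:
ALTER TABLE testtable ENGINE=NDBCLUSTER;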
Now we create the same database on sql2.unixsurgeon.com (yes, we still have to create it, but afterwards testtable and its data should be replicated to sql2.unixsurgeon.com because testtable uses ENGINE=NDBCLUSTER):
sql2.unixsurgeon.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;
The SELECT statement should deliver you the same result as before on sql1.unixsurgeon.com:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)
So the data was replicated from sql1.unixsurgeon.com to sql2.unixsurgeon.com. Now we insert another row into testtable:
sql2.unixsurgeon.com:
INSERT INTO testtable () VALUES (2);
quit;
Now let's go back to sql1.unixsurgeon.com and check if we see the new row there:
sql1.unixsurgeon.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
You should see something like this:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.05 sec)
So both MySQL cluster nodes always have the same data!
Now let's see what happens if we stop node 1 (sql1.unixsurgeon.com). Run
sql1.unixsurgeon.com:
killall ndbd
and check with
ps aux | grep ndbd | grep -iv grep
that all ndbd processes have terminated. If you still see ndbd processes, run another
killall ndbd
until all ndbd processes are gone.
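Instead of killall, you can also stop a data node cleanly from the management server; with the node IDs from the show output above (node 2 is sql1 in my setup), something like this should do it:
ndb_mgm -e "2 stop"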
Now let's check the cluster status on our management server (Mgmt.unixsurgeon.com):
Mgmt.unixsurgeon.com:
ndb_mgm
On the ndb_mgm console, issue
show;
and you should see this:
ndb_mgm> show;
Connected to Management Server at: 10.0.0.7:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 10.0.0.8)
id=3    @10.0.0.9  (mysql-5.5.25 ndb-7.2.7, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.0.7  (mysql-5.5.25 ndb-7.2.7)

[mysqld(API)]   2 node(s)
id=4    @10.0.0.8  (mysql-5.5.25 ndb-7.2.7)
id=5    @10.0.0.9  (mysql-5.5.25 ndb-7.2.7)

ndb_mgm>
You see, sql1.unixsurgeon.com is not connected anymore.
Type
quit;
to leave the ndb_mgm console.
Let's check sql2.unixsurgeon.com:
sql2.unixsurgeon.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
The result of the SELECT query should still be
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.17 sec)
OK, all tests went fine, so let's start our sql1.unixsurgeon.com node again:
sql1.unixsurgeon.com:
ndbd
5 How To Restart The Cluster
Now let's assume you want to restart the MySQL cluster, for example because you have changed /var/lib/mysql-cluster/config.ini on Mgmt.unixsurgeon.com or for some other reason. To do this, you use the ndb_mgm cluster management client on Mgmt.unixsurgeon.com:
Mgmt.unixsurgeon.com:
ndb_mgm
On the ndb_mgm console, you type
shutdown;
You will then see something like this:
ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>
This means that the cluster nodes sql1.unixsurgeon.com and sql2.unixsurgeon.com and also the cluster management server have shut down.
Run
quit;
to leave the ndb_mgm console.
To start the cluster management server, do this on Mgmt.unixsurgeon.com:
Mgmt.unixsurgeon.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
and on sql1.unixsurgeon.com and sql2.unixsurgeon.com
sql1.unixsurgeon.com / sql2.unixsurgeon.com:
ndbd
or, if you have changed /var/lib/mysql-cluster/config.ini on Mgmt.unixsurgeon.com:
ndbd --initial
Afterwards, you can check on Mgmt.unixsurgeon.com if the cluster has restarted:
Mgmt.unixsurgeon.com:
ndb_mgm
On the ndb_mgm console, type
show;
to see the current status of the cluster. It might take a few seconds after a restart until all nodes are reported as connected.
Type
quit;
to leave the ndb_mgm console.
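If you only need to restart a single data node (and not change the global configuration), you don't always need a full cluster shutdown; a rolling restart from the management client keeps the cluster available, for example:
ndb_mgm -e "2 restart"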
6 Failover / Redundancy / Load Balancing
The only thing left is to set up mysql-proxy, which all the clients will use as their MySQL hostname. mysql-proxy is then the 'middleman', completely transparent to both the servers and the clients. Should a server go down, the clients won't notice it. It also does automatic load balancing. If you proceed with this, keep in mind that mysql-proxy becomes a single point of failure in the setup (which somewhat defeats the purpose of the MySQL cluster). In my setup, I chose to install mysql-proxy on the mysql-mgmt machine, on its own virtual host. Since this is virtualized, it is also redundant should something happen. You could also use two physical machines with Linux-HA etc., but that is quite a bit more complex than using a VM (at least if you already have virtualization available).
Repo for mysql-proxy (only needed if you install it on a CentOS machine, where it comes from the EPEL repository):
rpm -ivh http://mirror.cogentco.com/pub/linux/epel/6/i386/epel-release-6-7.noarch.rpm
root@mysql-proxy:~# apt-get install mysql-proxy
root@mysql-proxy:~# mkdir /etc/mysql-proxy
root@mysql-proxy:~# cd /etc/mysql-proxy
root@mysql-proxy:/etc/mysql-proxy# vim mysql-proxy.conf
Add the following to the mysql-proxy.conf file:
[mysql-proxy]
daemon = true
proxy-address = 202.0.X.X:3306
proxy-skip-profiling = true
keepalive = true
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
proxy-backend-addresses = 10.0.0.8:3306,10.0.0.9:3306
proxy-lua-script = /usr/lib/mysql-proxy/lua/proxy/balance.lua

Also add the following to /etc/default/mysql-proxy:

ENABLED="true"
OPTIONS="--defaults-file=/etc/mysql-proxy/mysql-proxy.conf"

Now you can start/stop mysql-proxy with:

/etc/init.d/mysql-proxy start
/etc/init.d/mysql-proxy stop

or you can start the mysql-proxy service manually:
root@mysql-proxy:/etc/mysql-proxy# mysql-proxy --defaults-file=/etc/mysql-proxy/mysql-proxy.conf
Now point your clients to use the hostname of the mysql-proxy server, and you’re good to go!
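A quick way to verify the whole chain from a client (assuming the 202.0.X.X placeholder is replaced with your real proxy IP, and the account you use is allowed to connect from the proxy host on both SQL nodes):
mysql -h 202.0.X.X -P 3306 -u root -p -e "SELECT * FROM mysqlclustertest.testtable;"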