MySQL InnoDB Cluster Deployment on Oracle Cloud

In this post, I will walk you through the creation of a 3-node MySQL InnoDB cluster on Linux.

For this demonstration, I have spun up 3 instances in Oracle Cloud. I will use one of them as the master instance (read/write) and the other two as group replication members, or slaves (read-only).

Make sure all the database servers are in the same private subnet, in our case CIDR block 10.0.0.0/24.


We will use the private IP of each node for node-to-node communication.

This is my /etc/hosts file; please make sure all nodes have these entries.

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.5    worker-node1
10.0.0.7    worker-node2
10.0.0.2    k8s-master

To make sure all instances can communicate with each other, I created all the instances on the same VCN and the same regional subnet, and launched two of the instances in a different availability domain (AD).

To make sure the instances can reach each other via their private IP addresses, properly configure your security list for SSH and ICMP on the intended ports. Although ICMP was open at the VCN level, I still had issues pinging the servers; after I added a subnet-level (/24) ICMP rule to the security list, they became reachable.
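
Once the security list is in place, a quick reachability check from the master node can look like the sketch below (hostnames come from the /etc/hosts entries above; the nc lines are optional and assume the nmap-ncat package is installed):

# Verify ICMP reachability from k8s-master to the other nodes
ping -c 3 worker-node1
ping -c 3 worker-node2

# Once MySQL is up later on, the same idea works for the TCP ports used by the cluster
nc -zv worker-node1 3306     # classic MySQL protocol
nc -zv worker-node1 33061    # group replication communication port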

Now that we have all 3 servers ready, we need to install and configure the following software on all nodes, which is a prerequisite for running InnoDB cluster group replication in MySQL.

The following are the main components of an InnoDB cluster.

  1. MySQL Group Replication – a group of database servers. It replicates the MySQL databases across multiple nodes and provides fault tolerance: when data changes occur on the primary, they are automatically replicated to the secondary nodes.
  2. MySQL Router – MySQL Router is part of InnoDB Cluster and is lightweight middleware that provides transparent routing between your application and the back-end MySQL servers. It is used for a wide variety of use cases, such as providing high availability and scalability by routing database traffic to the appropriate back-end MySQL servers.
  3. MySQL Shell – a configuration tool that can be used to connect to, deploy, and manage the MySQL InnoDB cluster. MySQL Shell contains the AdminAPI, exposed through the dba global variable, which is used to deploy and manage the InnoDB cluster (a quick way to check the AdminAPI from the OS shell is sketched right after this list).
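
As a quick optional sanity check once MySQL Shell is installed (the installation itself is covered below), you can confirm from the OS shell that the AdminAPI is available; this assumes mysqlsh is on the PATH:

mysqlsh --version                # confirm the Shell binary is installed
mysqlsh --js -e 'dba.help()'     # print the AdminAPI operations exposed through the dba global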

Download the following software RPMs from dev.mysql.com.

  1. Installation of MySQL binaries and MySQL Shell on all the nodes

Download the RPM and place it on your destination server using WinSCP or MobaXterm.
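
Alternatively, you can pull the repository RPM straight onto the server with wget; the URL below follows the usual dev.mysql.com/get/ download path, so treat it as an assumption and cross-check it against the download page:

# Download the MySQL 8.0 community release RPM directly on the node
wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm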

[root@k8s-master opc]# ls -ltr
total 28
-rwxrwxr-x 1 root root 26024 Aug 25 23:34 mysql80-community-release-el7-3.noarch.rpm
[root@k8s-master opc]#
[root@k8s-master opc]#

[root@k8s-master opc]# rpm -Uvh mysql80-community-release-el7-3.noarch.rpm
warning: mysql80-community-release-el7-3.noarch.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing…                          ################################# [100%]
Updating / installing…
1:mysql80-community-release-el7-3  ################################# [100%]
Check that the MySQL community repositories are now available on the node.

[root@k8s-master opc]# yum repolist all | grep mysql
 mysql-cluster-7.5-community/x86_64 MySQL Cluster 7.5 Community   disabled
 mysql-cluster-7.5-community-source MySQL Cluster 7.5 Community - disabled
 mysql-cluster-7.6-community/x86_64 MySQL Cluster 7.6 Community   disabled
 mysql-cluster-7.6-community-source MySQL Cluster 7.6 Community - disabled
 mysql-cluster-8.0-community/x86_64 MySQL Cluster 8.0 Community   disabled
 mysql-cluster-8.0-community-source MySQL Cluster 8.0 Community - disabled
 mysql-connectors-community/x86_64  MySQL Connectors Community    enabled:    
 mysql-connectors-community-source  MySQL Connectors Community -  disabled
 mysql-tools-community/x86_64       MySQL Tools Community         enabled:    
 mysql-tools-community-source       MySQL Tools Community - Sourc disabled
 mysql-tools-preview/x86_64         MySQL Tools Preview           disabled
 mysql-tools-preview-source         MySQL Tools Preview - Source  disabled
 mysql55-community/x86_64           MySQL 5.5 Community Server    disabled
 mysql55-community-source           MySQL 5.5 Community Server -  disabled
 mysql56-community/x86_64           MySQL 5.6 Community Server    disabled
 mysql56-community-source           MySQL 5.6 Community Server -  disabled
 mysql57-community/x86_64           MySQL 5.7 Community Server    disabled
 mysql57-community-source           MySQL 5.7 Community Server -  disabled
 mysql80-community/x86_64           MySQL 8.0 Community Server    enabled:    
 mysql80-community-source           MySQL 8.0 Community Server -  disabled
Make sure your /etc/yum.repos.d/mysql-community.repo has enabled=1 for the version you want to install; in my case I was installing MySQL 8 (a command-line alternative to editing the file is sketched after the snippet below).

[mysql80-community]
 name=MySQL 8.0 Community Server
 baseurl=http://repo.mysql.com/yum/mysql-8.0-community/el/7/$basearch/
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
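
If you prefer not to edit the repo file by hand, the same flags can be toggled from the command line; a minimal sketch, assuming the yum-utils package is installed:

sudo yum-config-manager --disable mysql57-community
sudo yum-config-manager --enable  mysql80-community
yum repolist enabled | grep mysql     # confirm only the intended repo is enabled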

Now, first try to install MySQL Shell. It is up to you in which order you install the software; for me, installing MySQL Server first and MySQL Shell next caused some issues, so I installed MySQL Shell first, as below.

[root@k8s-master opc]# sudo yum install mysql-shell
 Loaded plugins: langpacks, ulninfo
 Resolving Dependencies
 --> Running transaction check
 ---> Package mysql-shell.x86_64 0:8.0.21-1.el7 will be installed
 --> Finished Dependency Resolution
 Dependencies Resolved
 =================================================================================================================================================
  Package                          Arch                        Version                           Repository                                  Size
 Installing:
  mysql-shell                      x86_64                      8.0.21-1.el7                      mysql-tools-community                       31 M
 Transaction Summary
 Install  1 Package
 Total download size: 31 M
Installed:
   mysql-shell.x86_64 0:8.0.21-1.el7
 Complete!

Install the MySQL server binaries on all the nodes as below.

[root@k8s-master opc]# sudo yum install mysql-community-server

 Loaded plugins: langpacks, ulninfo
 Resolving Dependencies
 --> Running transaction check
 ---> Package mysql-community-server.x86_64 0:8.0.21-1.el7 will be installed
 --> Processing Dependency: mysql-community-common(x86-64) = 8.0.21-1.el7 for package: mysql-community-server-8.0.21-1.el7.x86_64
 --> Processing Dependency: mysql-community-client(x86-64) >= 8.0.11 for package: mysql-community-server-8.0.21-1.el7.x86_64
 --> Running transaction check
 ---> Package mysql-community-client.x86_64 0:8.0.21-1.el7 will be installed
 --> Processing Dependency: mysql-community-libs(x86-64) >= 8.0.11 for package: mysql-community-client-8.0.21-1.el7.x86_64
 ---> Package mysql-community-common.x86_64 0:8.0.21-1.el7 will be installed
 --> Running transaction check
 ---> Package mysql-community-libs.x86_64 0:8.0.21-1.el7 will be installed
 --> Finished Dependency Resolution
 Dependencies Resolved
 =================================================================================================================================================
  Package                                   Arch                      Version                          Repository                            Size
 Installing:
  mysql-community-server                    x86_64                    8.0.21-1.el7                     mysql80-community                    499 M
 Installing for dependencies:
  mysql-community-client                    x86_64                    8.0.21-1.el7                     mysql80-community                     48 M
  mysql-community-common                    x86_64                    8.0.21-1.el7                     mysql80-community                    617 k
  mysql-community-libs                      x86_64                    8.0.21-1.el7                     mysql80-community                    4.5 M
 Transaction Summary
 Install  1 Package (+3 Dependent packages)
 Total download size: 551 M
 Installed size: 2.5 G
 Is this ok [y/d/N]: y

Dependency Installed:
mysql-community-client.x86_64 0:8.0.21-1.el7    mysql-community-common.x86_64 0:8.0.21-1.el7    mysql-community-libs.x86_64 0:8.0.21-1.el7
Complete!
After installing the MySQL server, start the mysqld service.
[root@k8s-master opc]# sudo service mysqld start
 Redirecting to /bin/systemctl start mysqld.service
 [root@k8s-master opc]#
 [root@k8s-master opc]#
 [root@k8s-master opc]# sudo systemctl start mysqld.service
 [root@k8s-master opc]#
 [root@k8s-master opc]#
 [root@k8s-master opc]#  sudo service mysqld status
 Redirecting to /bin/systemctl status mysqld.service
 ● mysqld.service - MySQL Server
    Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2020-08-25 23:51:43 GMT; 12s ago
      Docs: man:mysqld(8)
            http://dev.mysql.com/doc/refman/en/using-systemd.html
   Process: 11471 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
  Main PID: 11592 (mysqld)
    Status: "Server is operational"
     Tasks: 39
    Memory: 446.5M
    CGroup: /system.slice/mysqld.service
            └─11592 /usr/sbin/mysqld
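
The status output above already shows the unit as enabled; if it is not on your nodes, you can make mysqld start automatically after a reboot:

sudo systemctl enable mysqld.service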





After the successful installation of the MySQL Shell and MySQL server software, reset the root password of the MySQL server. By default, when you install MySQL from the RPM, a temporary root password is generated and written to /var/log/mysqld.log.

[root@worker-node2 opc]# sudo grep 'temporary password' /var/log/mysqld.log
2020-08-25T23:18:11.421668Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: h,4UdijhaEr<

Now, log in with MySQL Shell and change the root password.

[root@worker-node2 ~]# mysqlsh --sql root@localhost

 Please provide the password for 'root@localhost': 
 Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): yes
 MySQL Shell 8.0.21
 Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
 Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
 Other names may be trademarks of their respective owners.
 Type '\help' or '\?' for help; '\quit' to exit.
 Creating a session to 'root@localhost'
 Fetching schema names for autocompletion… Press ^C to stop.
 Your MySQL connection id is 10 (X protocol)
 Server version: 8.0.21 MySQL Community Server - GPL
 No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  SQL >
 MySQL  localhost:33060+ ssl  SQL >
 MySQL  localhost:33060+ ssl  SQL >
 MySQL  localhost:33060+ ssl  SQL > set password='P@ssw0rd2020' ;
 Query OK, 0 rows affected (0.0094 sec)
The MySQL Shell prompt looks like the one shown above.

After changing the root password, we need to create a replication/administration user in the database and grant it the required privileges, so that it can perform replication and cluster administration.

MySQL  localhost:33060+ ssl  SQL > create user 'clusteradmin' identified by 'P@ssw0rd2020' ;
Query OK, 0 rows affected (0.0099 sec)
MySQL  localhost:33060+ ssl  SQL >
MySQL  localhost:33060+ ssl  SQL >
MySQL  localhost:33060+ ssl  SQL > grant all privileges on *.* to 'clusteradmin'@'%' with grant option ;
Query OK, 0 rows affected (0.0081 sec)
MySQL  localhost:33060+ ssl  SQL > reset master ;
Query OK, 0 rows affected (0.0093 sec)
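
As an aside, instead of creating the account and granting ALL PRIVILEGES manually, the AdminAPI can create a scoped cluster administrator for you through the clusterAdmin option of dba.configureInstance(); a minimal sketch, run on each node (mysqlsh will prompt for the root password and for confirmation of any configuration changes):

# Let the AdminAPI create the 'clusteradmin' account with the grants it needs
mysqlsh --js -e "dba.configureInstance('root@localhost:3306', \
    {clusterAdmin: 'clusteradmin', clusterAdminPassword: 'P@ssw0rd2020'})"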

Now that all the nodes have the binaries installed and the user created with privileges, our MySQL databases are ready to be added to the cluster.

We need to check whether our MySQL servers are ready to join the cluster with a simple precheck using the dba.checkInstanceConfiguration() function. This ensures that the instance satisfies the InnoDB Cluster requirements.

[root@k8s-master ~]# mysqlsh --log-level=DEBUG3

When DEBUG3 is set, the MySQL Shell log file contains lines such as Debug: execute_sql( ... ), which show the SQL queries executed as part of each AdminAPI call. The log file generated by MySQL Shell is located at ~/.mysqlsh/mysqlsh.log.
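
You can keep that log open in a second terminal while running the checks below:

tail -f ~/.mysqlsh/mysqlsh.log     # watch the AdminAPI debug trace live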


MySQL Shell 8.0.21
Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
MySQL  JS >
MySQL  JS > dba.verbose=2
 2

(Setting dba.verbose = 2 adds debug output to the verbose output, providing full information about what each call to the AdminAPI executes.)
 

MySQL  JS > dba.checkInstanceConfiguration('clusteradmin@k8s-master')

Please provide the password for 'clusteradmin@k8s-master': 
Save password for 'clusteradmin@k8s-master'? [Y]es/[N]o/Ne[v]er (default No): Y
 Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster…
 This instance reports its own address as k8s-master:3306
 Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
 Checking whether existing tables comply with Group Replication requirements…
 No incompatible tables detected
 Checking instance configuration…
 NOTE: Some configuration options need to be fixed:
 +--------------------------+---------------+----------------+--------------------------------------------------+
 | Variable                 | Current Value | Required Value | Note                                             |
 +--------------------------+---------------+----------------+--------------------------------------------------+
 | enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
 | gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
 | server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
 +--------------------------+---------------+----------------+--------------------------------------------------+
 Some variables need to be changed, but cannot be done dynamically on the server.
 NOTE: Please use the dba.configureInstance() command to repair these issues.
 {
     "config_errors": [
         {
             "action": "server_update+restart",
             "current": "OFF",
             "option": "enforce_gtid_consistency",
             "required": "ON"
         },
         {
             "action": "server_update+restart",
             "current": "OFF",
             "option": "gtid_mode",
             "required": "ON"
         },
         {
             "action": "server_update+restart",
             "current": "1",
             "option": "server_id",
             "required": "<unique ID>"
         }
     ],
     "status": "error"
 }

Some parameter configuration issues exist on the cluster nodes; correct all these prerequisites by running dba.configureInstance() or dba.configureLocalInstance() on all the nodes.

MySQL JS >  dba.configureLocalInstance('clusteradmin@k8s-master') ;

Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster…
This instance reports its own address as k8s-master:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
NOTE: Some configuration options need to be fixed:
 +--------------------------+---------------+----------------+--------------------------------------------------+
 | Variable                 | Current Value | Required Value | Note                                             |
 +--------------------------+---------------+----------------+--------------------------------------------------+
 | enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
 | gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
 | server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
 +--------------------------+---------------+----------------+--------------------------------------------------+
 Some variables need to be changed, but cannot be done dynamically on the server.
 Do you want to perform the required configuration changes? [y/n]: y
 Do you want to restart the instance after configuring it? [y/n]: y
 Configuring instance…
 The instance 'k8s-master:3306' was configured to be used in an InnoDB cluster.
 Restarting MySQL…
 NOTE: MySQL server at k8s-master:3306 was restarted.

Run the precheck again to verify readiness after running the above command.

MySQL  JS > dba.checkInstanceConfiguration('clusteradmin@k8s-master') ;

Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster…
This instance reports its own address as k8s-master:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
 Checking whether existing tables comply with Group Replication requirements…
 No incompatible tables detected
 Checking instance configuration…
 Instance configuration is compatible with InnoDB cluster
 The instance 'k8s-master:3306' is valid to be used in an InnoDB cluster.
 {
     "status": "ok"
 }

Now the status looks good on all the nodes; "status": "ok" tells us that all the nodes are ready to become part of an InnoDB cluster.

Now, check that we can reach the other nodes from our master node. To enable remote connections, make sure the bind-address value in /etc/my.cnf is set to 0.0.0.0 and the firewall is off (if you prefer to keep firewalld running, a port-based alternative is sketched after the status output below).

[root@k8s-master etc]# cat my.cnf
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html
[mysqld]
bind-address=0.0.0.0

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

[root@k8s-master my.cnf.d]# systemctl stop firewalld
 
[root@k8s-master my.cnf.d]# systemctl status firewalld
 ● firewalld.service - firewalld - dynamic firewall daemon
    Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
    Active: inactive (dead) since Wed 2020-08-26 17:13:40 GMT; 4s ago
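
If you would rather keep firewalld running, opening just the ports used in this post should be enough instead of stopping the service; a sketch, assuming the default ports (3306 classic protocol, 33060 X protocol, 33061 group replication):

sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --permanent --add-port=33060/tcp
sudo firewall-cmd --permanent --add-port=33061/tcp
sudo firewall-cmd --reload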

NOTE: Everything done up to this point has to be done on all the nodes.

Run the following commands only from the primary node.

Check the connectivity to the remote servers from the master server.

MySQL k8s-master:33060+ ssl JS > shell.connect('clusteradmin@worker-node1:3306')
Creating a session to 'clusteradmin@worker-node1:3306'
Fetching schema names for autocompletion… Press ^C to stop.
Closing old connection…
Your MySQL connection id is 13
Server version: 8.0.21 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

MySQL worker-node1:3306 ssl JS > \q

So, we can connect to the remote servers from the master.

Create the cluster from the master server.

MySQL  k8s-master:33060+ ssl  JS > cluster =dba.createCluster('ParwezInnoDBClu')

 A new InnoDB cluster will be created on instance 'k8s-master:3306'.
 Validating instance configuration at k8s-master:3306…
 This instance reports its own address as k8s-master:3306
 Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'k8s-master:33061'. Use the localAddress option to override.
Creating InnoDB cluster 'ParwezInnoDBClu' on 'k8s-master:3306'…
Adding Seed Instance…
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

Check the status of the cluster

MySQL k8s-master:33060+ ssl JS > cluster.status()
{
    "clusterName": "ParwezInnoDBClu",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "k8s-master:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "k8s-master:3306": {
                "address": "k8s-master:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.21"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "k8s-master:3306"
}

Now, add the instances to the cluster.

MySQL k8s-master:33060+ ssl JS > cluster.addInstance('clusteradmin@k8s-master') ;
Cluster.addInstance: The instance 'k8s-master:3306' is already part of this InnoDB cluster (RuntimeError)

This is expected, since the master instance was already added while creating the cluster.

Add the second and third instances

MySQL  k8s-master:33060+ ssl  JS > cluster.addInstance('clusteradmin@worker-node1:3306')


NOTE: The target instance 'worker-node1:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
 The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'worker-node1:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

 Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
 NOTE: Group Replication will communicate with other members using 'worker-node1:33061'. Use the localAddress option to override.
 Validating instance configuration at worker-node1:3306…
 This instance reports its own address as worker-node1:3306
 Instance configuration is suitable.
 A new instance will be added to the InnoDB cluster. Depending on the amount of
 data on the cluster this might take from a few seconds to several hours.
 Adding instance to the cluster…
 Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
 Clone based state recovery is now in progress.
 NOTE: A server restart is expected to happen as part of the clone process. If the
 server does not support the RESTART command or does not come back after a
 while, you may need to manually start it back.
 Waiting for clone to finish…
 NOTE: worker-node1:3306 is being cloned from k8s-master:3306
 ** Stage DROP DATA: Completed
 ** Clone Transfer
 FILE COPY  ############################################################  100%  Completed
 PAGE COPY  ############################################################  100%  Completed
 REDO COPY  ############################################################  100%  Completed
 ** Stage RECOVERY: \
 NOTE: worker-node1:3306 is shutting down…
 Waiting for server restart… ready
 worker-node1:3306 has restarted, waiting for clone to finish…
 Clone process has finished: 61.71 MB transferred in about 1 second (~61.71 MB/s) 
 State recovery already finished for 'worker-node1:3306'
 The instance 'worker-node1:3306' was successfully added to the cluster.

Similarly, add the third instance.

MySQL  k8s-master:33060+ ssl  JS > cluster.addInstance('clusteradmin@worker-node2:3306')

Now, check the status of the cluster.

MySQL  k8s-master:33060+ ssl  JS > cluster.status()
 {
     "clusterName": "ParwezInnoDBClu",
     "defaultReplicaSet": {
         "name": "default",
         "primary": "k8s-master:3306",
         "ssl": "REQUIRED",
         "status": "OK_NO_TOLERANCE",
         "statusText": "Cluster is NOT tolerant to any failures.",
         "topology": {
             "k8s-master:3306": {
                 "address": "k8s-master:3306",
                 "mode": "R/W",
                 "readReplicas": {},
                 "replicationLag": null,
                 "role": "HA",
                 "status": "ONLINE",
                 "version": "8.0.21"
             },
             "worker-node1:3306": {
                 "address": "worker-node1:3306",
                 "mode": "R/O",
                 "readReplicas": {},
                 "replicationLag": null,
                 "role": "HA",
                 "status": "ONLINE",
                 "version": "8.0.21"
              },
              "worker-node2:3306": {
                 "address": "worker-node2:3306",
                 "mode": "R/O",
                 "readReplicas": {},
                 "replicationLag": null,
                 "role": "HA",
                 "status": "ONLINE",
                 "version": "8.0.21"
             } 
         },
         "topologyMode": "Single-Primary"
     },
     "groupInformationSourceMember": "k8s-master:3306"
 }

Now our InnoDB cluster is ready; let's check whether it is replicating from the master server to the slave servers. For this I will create a database on the master server, create a table in that database, and populate the table with some values. We will then verify whether the database and table are automatically created on the slave nodes.

Log in to the master server's database instance.

[root@k8s-master ~]# mysql -u root -p
 Enter password:
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 65
 Server version: 8.0.21 MySQL Community Server - GPL

mysql> create database innodbtest ;
Query OK, 1 row affected (0.00 sec)
mysql>  show databases;
 +-------------------------------+
 | Database                      |
 +-------------------------------+
 | information_schema            |
 | innodbtest                    |
 | mysql                         |
 | mysql_innodb_cluster_metadata |

mysql> use innodbtest ;
Database changed

mysql> create table customers (customer_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name TEXT, last_name TEXT);
Query OK, 0 rows affected (0.04 sec)

mysql> insert into customers values (1, 'Diana','Lewis') ;
Query OK, 1 row affected (0.01 sec)

mysql> select * from customers ;
 +-------------+------------+-----------+
| customer_id | first_name | last_name |
+-------------+------------+-----------+
|           1 | Diana      | Lewis     |
+-------------+------------+-----------+
1 row in set (0.00 sec)
mysql> commit ;
Query OK, 0 rows affected (0.00 sec)

Log in to one of the slave servers, say worker-node1, and check.

[root@worker-node1 ~]# mysql -u root -p
 Enter password:
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 45
 Server version: 8.0.21 MySQL Community Server - GPL
 Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

mysql> show databases ;
 +-------------------------------+
 | Database                      |
 +-------------------------------+
 | information_schema            |
 | innodbtest                    |
 | mysql                         |
 | mysql_innodb_cluster_metadata |
 | performance_schema            |
 | sys                           |

So, the innodbtest database was created automatically; now check for the table and its contents.

mysql> use innodbtest ;

mysql>  select * from customers ;
+-------------+------------+-----------+
| customer_id | first_name | last_name |
+-------------+------------+-----------+
|           1 | Diana      | Lewis     |
+-------------+------------+-----------+
 1 row in set (0.00 sec)

BOOM! The table is created and the data is replicated in real time.

Now, check whether you are able to insert data on the slave instance, since it is read-only.

mysql> insert into customers values (2,'parwez','alam') ;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
Yes, it worked as expected.

Now, finally, install and configure MySQL Router for connection failover.
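
The router package itself has not been installed by the earlier steps; installing it from the same MySQL yum repository should look roughly like the sketch below (the package name mysql-router-community is my assumption here, so verify it first with yum search mysql-router):

sudo yum install mysql-router-community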

[root@ip-172-31-33-105 ~]# mysqlrouter --bootstrap clusteradmin@ip-172-31-42-8.ap-south-1.compute.internal:3306 --directory /etc/mysqlrouter/mysql-router --user=root


Please enter MySQL password for clusteradmin:
Bootstrapping MySQL Router instance at '/etc/mysqlrouter/mysql-router'…
Creating account(s) (only those that are needed, if any)
Verifying account (using it to run SQL queries that would be run by Router)
 
 Storing account in keyring
 Adjusting permissions of generated files
 Creating configuration /etc/mysqlrouter/mysql-router/mysqlrouter.conf 
 MySQL Router configured for the InnoDB Cluster 'MOEInnodbDemo'
 After this MySQL Router has been started with the generated configuration
 $ mysqlrouter -c /etc/mysqlrouter/mysql-router/mysqlrouter.conf
 the cluster 'MOEInnodbDemo' can be reached by connecting to:
 MySQL Classic protocol
 Read/Write Connections: localhost:6446
 Read/Only Connections:  localhost:6447 
 MySQL X protocol
 Read/Write Connections: localhost:64460
 Read/Only Connections:  localhost:64470 
 [root@ip-172-31-33-105 mysql-router]# systemctl daemon-reload
 [root@ip-172-31-33-105 mysql-router]# systemctl start mysqlrouter-cluster
 [root@ip-172-31-33-105 mysql-router]#
 [root@ip-172-31-33-105 mysql-router]#
 [root@ip-172-31-33-105 mysql-router]# systemctl status mysqlrouter-cluster
 ● mysqlrouter-cluster.service - MySQL Router
    Loaded: loaded (/etc/systemd/system/mysqlrouter-cluster.service; bad; vendor preset: disabled)
    Active: active (running) since Wed 2020-09-23 08:12:20 UTC; 5s ago
  Main PID: 13311 (mysqlrouter)
    CGroup: /system.slice/mysqlrouter-cluster.service
            └─13311 /usr/bin/mysqlrouter -c /etc/mysqlrouter/mysql-router/mysqlrouter.conf

 Sep 23 08:12:20 ip-172-31-33-105.ap-south-1.compute.internal systemd[1]: [/etc/systemd/system/mysqlrouter-cluster.service:1] Assignment outside of s…oring.
 Sep 23 08:12:20 ip-172-31-33-105.ap-south-1.compute.internal systemd[1]: Started MySQL Router.
 Sep 23 08:12:20 ip-172-31-33-105.ap-south-1.compute.internal systemd[1]: Starting MySQL Router…
 Sep 23 08:12:20 ip-172-31-33-105.ap-south-1.compute.internal bash[13311]: logging facility initialized, switching logging to loggers specified in co…ration
[root@ip-172-31-33-105 mysql-router]# netstat -tulpn | grep mysql
 tcp        0      0 0.0.0.0:64470           0.0.0.0:*               LISTEN      13311/mysqlrouter
 tcp        0      0 0.0.0.0:64460           0.0.0.0:*               LISTEN      13311/mysqlrouter
 tcp        0      0 0.0.0.0:6446            0.0.0.0:*               LISTEN      13311/mysqlrouter
 tcp        0      0 0.0.0.0:6447            0.0.0.0:*               LISTEN      13311/mysqlrouter

Test a read/write connection through the router (port 6446)

[root@ip-172-31-33-105 ~]# mysql -umoetest -hip-172-31-33-105 -P6446 -p
 Enter password:
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 707
 Server version: 8.0.21 MySQL Community Server - GPL
 Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 mysql>
 mysql> select user(), @@hostname, @@read_only, @@super_read_only ;
 +--------------------------+----------------------------------------------+-------------+-------------------+
 | user()                   | @@hostname                                   | @@read_only | @@super_read_only |
 +--------------------------+----------------------------------------------+-------------+-------------------+
 | moetest@ip-172-31-33-105 | ip-172-31-39-103.ap-south-1.compute.internal |           0 |                 0 |
 +--------------------------+----------------------------------------------+-------------+-------------------+
 1 row in set (0.00 sec)
 mysql> select @@port ;
 +--------+
 | @@port |
 +--------+
 |   3306 |
 +--------+
 1 row in set (0.00 sec)
 mysql>

Test a read-only connection through the router (port 6447)

[root@ip-172-31-33-105 ~]# mysql -umoetest -hip-172-31-33-105 -P6447 -p
 Enter password:
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 209
 Server version: 8.0.21 MySQL Community Server - GPL
 Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 mysql>
 mysql> select user(), @@hostname, @@read_only, @@super_read_only ;
 +--------------------------+---------------------------------------------+-------------+-------------------+
 | user()                   | @@hostname                                  | @@read_only | @@super_read_only |
 +--------------------------+---------------------------------------------+-------------+-------------------+
 | moetest@ip-172-31-33-105 | ip-172-31-45-75.ap-south-1.compute.internal |           1 |                 1 |
 +--------------------------+---------------------------------------------+-------------+-------------------+
 1 row in set (0.01 sec)
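
As a final optional check, you can simulate a primary failure and let the router redirect read/write traffic. This is only a sketch using the hostnames and user from the outputs above, and it assumes the two remaining members keep quorum:

# On the current R/W primary (ip-172-31-39-103 in the read/write test above), stop MySQL
sudo systemctl stop mysqld

# From the router host, reconnect through the R/W port 6446; the router should now
# send the session to the newly elected primary
mysql -umoetest -hip-172-31-33-105 -P6446 -p -e "select @@hostname, @@super_read_only;"

# Start mysqld again on the old primary; it should rejoin the group automatically
# (if it does not, cluster.rejoinInstance() can be run from MySQL Shell)
sudo systemctl start mysqld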

