Friday, May 22, 2020

MySQL InnoDB Cluster Implementation

Here I'm going to explain how to implement a MySQL InnoDB cluster on a single VM. After implementation, the cluster will consist of three MySQL instances (ports 3307-3309) running Group Replication, fronted by MySQL Router.


Tested Environment:

Red Hat Enterprise Linux 7
MySQL 5.7

We can follow the steps below to implement this:

  1. Install 3 MySQL instances
  2. Configure the MySQL cluster
  3. Configure MySQL Router

Prerequisites

Install MySQL Server, MySQL Shell, and MySQL Router:
yum install mysql-community-server mysql-shell mysql-router
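If the MySQL community packages are not available in your configured repositories, you may first need to add the MySQL Yum repository. For example (the release RPM version below is an assumption; check dev.mysql.com/downloads/repo/yum for the current one):

yum install https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm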

Install 3 MySQL Instances

Here I'm going to explain how to configure one MySQL instance. You can follow the same steps to configure the other two MySQL instances as well.

Choose a suitable base directory and create the directory structure below in it.
  
data
log
my-3307.cnf
mysql-files
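For example, for the 3307 instance (using the /data1/mysql-cluster base path that appears throughout this post):

mkdir -p /data1/mysql-cluster/3307/{data,log,mysql-files}
touch /data1/mysql-cluster/3307/my-3307.cnf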

Edit the my-3307.cnf file as below:
       
[mysqld]
# Enable the Group Replication plugin
plugin-load = group_replication.so

# General settings - ensure the binary log is enabled, disable all non-transactional storage engines except CSV (used for logs), etc.
port = 3307
log_bin
disabled_storage_engines = MyISAM,BLACKHOLE,FEDERATED,ARCHIVE
ssl
auto_increment_increment = 7
auto_increment_offset = 1
socket = mysqld.sock
loose_mysqlx_socket = mysqlx.sock
basedir = /data1/mysql-cluster/3307
datadir = /data1/mysql-cluster/3307/data
log_error = /data1/mysql-cluster/3307/log/error.log
pid_file = /data1/mysql-cluster/3307/3307.pid
secure_file_priv = /data1/mysql-cluster/3307/mysql-files

# Binary Log and Replication
server_id = 1
binlog_format = ROW
binlog_rows_query_log_events = ON
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
binlog_checksum = NONE
lower_case_table_names = 1
# Group Replication
group_replication = FORCE_PLUS_PERMANENT
group_replication_start_on_boot = OFF
group_replication_local_address = 127.0.0.1:13307
group_replication_group_seeds = 127.0.0.1:13308,127.0.0.1:13309

You can check the MySQL reference manual for detailed parameter definitions. The auto_increment_increment and auto_increment_offset parameters can be ignored here, since we are implementing a single-primary cluster; we would have to adjust those values if we went for a multi-primary cluster. The group_replication_local_address parameter should be the host IP plus port 10000 + <mysql-port> (10000 + 3307 = 13307). Here I have used localhost since I'm implementing the cluster on a single VM. The group_replication_group_seeds parameter lists the other member nodes' IPs and ports.
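Only a handful of values differ between the three instances. For the 3308 instance, for example, the changed lines would look like this (3309 follows the same pattern):

port = 3308
server_id = 2
auto_increment_offset = 2
basedir = /data1/mysql-cluster/3308
datadir = /data1/mysql-cluster/3308/data
log_error = /data1/mysql-cluster/3308/log/error.log
pid_file = /data1/mysql-cluster/3308/3308.pid
secure_file_priv = /data1/mysql-cluster/3308/mysql-files
group_replication_local_address = 127.0.0.1:13308
group_replication_group_seeds = 127.0.0.1:13307,127.0.0.1:13309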

Once the above file is ready, we can initialize and start the MySQL instance.

1.1. Initialize the MySQL data directory
mysqld --initialize-insecure \
--user=root \
--basedir=/data1/mysql-cluster/3307 \
--datadir=/data1/mysql-cluster/3307/data
1.2. Start the MySQL instance in the background
mysqld --defaults-file=/data1/mysql-cluster/3307/my-3307.cnf \
--user=root \
--plugin-dir=/usr/lib64/mysql/plugin &
1.3. First login to MySQL as root without a password
mysql -P 3307 -u root --skip-password --socket=/data1/mysql-cluster/3307/data/mysqld.sock
1.4. Setup Mysql root password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'xxxxxxxx';
1.5. Log in as the root user with the new password
mysql -P 3307 -u root -pxxxxxxxx --socket=/data1/mysql-cluster/3307/data/mysqld.sock
1.6. Create the cluster admin user and grant the necessary permissions.
-- Keep the cluster user creation out of the binary log so it is not replicated
SET sql_log_bin = OFF;
CREATE USER 'clsadmin'@'%' IDENTIFIED BY 'ClsPass123';
GRANT SELECT, RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO 'clsadmin'@'%' WITH GRANT OPTION;
GRANT INSERT, UPDATE, DELETE ON `mysql`.* TO 'clsadmin'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `mysql_innodb_cluster_metadata`.* TO 'clsadmin'@'%' WITH GRANT OPTION;
SET sql_log_bin = ON;
-- Grants on the cluster metadata backup schemas used by MySQL Shell
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'clsadmin'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'clsadmin'@'%' WITH GRANT OPTION;
-- Verify that no transactions have been recorded yet, then clear the GTID history
SELECT @@global.gtid_executed;
RESET MASTER;
Now the MySQL 3307 instance is ready. You can configure the other two MySQL instances by following the above steps and changing the necessary parameters. Once all three are up, the process list should look similar to this:
root      37683      1  5 May20 ?        02:51:46 mysqld --defaults-file=/data1/mysql-cluster/3307/my-3307.cnf --user=root --plugin-dir=/usr/lib64/mysql/plugin
root      51090      1  0 May20 ?        00:11:25 mysqld --defaults-file=/data1/mysql-cluster/3308/my-3308.cnf --user=root --plugin-dir=/usr/lib64/mysql/plugin
root      78216      1  0 May19 ?        00:17:02 mysqld --defaults-file=/data1/mysql-cluster/3309/my-3309.cnf --user=root --plugin-dir=/usr/lib64/mysql/plugin
Configure MySQL InnoDB Cluster

Since we have 3 MySQL instances in place, we can initiate a cluster.

2.1. Log in to MySQL Shell
mysqlsh
2.2. Connect to one of the MySQL instances using the command below
shell.connect('clsadmin@127.0.0.1:3307')
2.3. Check the instance configuration using the command below. This must be run for all 3 nodes.
dba.checkInstanceConfiguration("clsadmin@localhost:3307");
2.4. Configure the instance. Later you can rerun the command above to verify the configuration. This must also be run for all 3 nodes.
dba.configureInstance("clsadmin@localhost:3307");
2.5. Create the InnoDB cluster using the command below
var cluster=dba.createCluster('myCluster');
2.6. Check the cluster status using the commands below
cluster.status()
cluster.options()
2.7. Add instances to the cluster
cluster.addInstance('clsadmin@127.0.0.1:3308')
cluster.addInstance('clsadmin@127.0.0.1:3309')
A sample session after adding both instances:
MySQL  JS > shell.connect('clsadmin@127.0.0.1:3307')
Creating a session to 'clsadmin@127.0.0.1:3307'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 354528
Server version: 5.7.30-log MySQL Community Server (GPL)
No default schema selected; type \use  to set one.

 MySQL  127.0.0.1:3307 ssl  JS > var cluster = dba.getCluster('myCluster')
 MySQL  127.0.0.1:3307 ssl  JS > cluster.status()
{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "WCIGWAPP01:3307", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "WCIGWAPP01:3307": {
                "address": "WCIGWAPP01:3307", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "WCIGWAPP01:3308": {
                "address": "WCIGWAPP01:3308", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "WCIGWAPP01:3309": {
                "address": "WCIGWAPP01:3309", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "WCIGWAPP01:3307"
}
 MySQL  127.0.0.1:3307 ssl  JS > 
Now the MySQL cluster is almost complete, but we still need to configure MySQL Router so that applications can connect to it.

Configure MySQL Router

3.1. Bootstrap MySQL Router against the cluster
mysqlrouter --bootstrap clsadmin@localhost:3307 --user=root --force
3.2. Start MySQL Router in the background
mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf &
3.3. Check connectivity through the Router's read-write port (6446 by default)
mysqlsh --uri clsadmin@localhost:6446 --sql
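The bootstrap also configures a read-only port (6447 by default) that load-balances across the secondary instances; you can test it the same way:

mysqlsh --uri clsadmin@localhost:6447 --sql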
Now we have a working MySQL InnoDB cluster.
Please comment here if you need any clarification.

Saturday, April 11, 2020

WSO2EI JVM Monitoring with ELK


This post shows how to visualize JVM statistics in a Kibana dashboard. WSO2EI runs on the JVM, and an ELK stack should already be in place.
The following statistics will be available in Kibana after this integration:

  1. Thread count
  2. Class count
  3. Heap memory
  4. Non-heap memory

Tested Environment:
wso2ei-6.1.1
jdk1.8.0_172-amd64
nrpe  3.2.1
filebeat 6.8.0
logstash 6.8.0
ElasticSearch 6.8.0
Kibana 6.8.0



Step 1:
Install the check_jvm plugin on the VM as described in the article below:
nrpe plugin for JVM



Step 2:
Make sure that you have installed the NRPE plugin successfully. Execute the commands below and check the output.
Command:
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p heap -w 2008096384 -c 3000218931
Output:
OK - 826M |max=9544663040;commited=4462215168;used=865669640;

Command:
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p non-heap -w 268435456 -c 300870912
Output:
WARNING 267M |max=-1;;; commited=352649216;;;used=279245856;;;

Command:
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p classes -w 25000 -c 30000
Output:
OK - 22816 |classes=22816;;;

Command:
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p threads -w 700 -c 800
Output:
OK - 655 |threads=655;;;
Don't worry about the warning and critical threshold values; at this point we only care about the reported metric values.
Important: you should execute the above commands as the user who runs WSO2EI.


Step 3:
Create a shell script that executes the above commands and writes the output to a log file.
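A minimal sketch of such a script (the log file name and location are assumptions; the thresholds are the ones used in Step 2):

#!/bin/sh
# check_jvm.sh - collect JVM stats into a log file for Filebeat to ship
PLUGIN=/usr/local/nagios/libexec/check_jvm
CLASS=org.wso2.carbon.bootstrap.Bootstrap
LOG=/log/jvm-scripts/jvm-stats.log   # assumed log location

TS=$(date '+%Y-%m-%d %H:%M:%S')
# One timestamped line per metric, so Logstash can parse each record
echo "$TS heap $($PLUGIN -u igwstuser -n $CLASS -p heap -w 2008096384 -c 3000218931)" >> "$LOG"
echo "$TS non-heap $($PLUGIN -u igwstuser -n $CLASS -p non-heap -w 268435456 -c 300870912)" >> "$LOG"
echo "$TS classes $($PLUGIN -u igwstuser -n $CLASS -p classes -w 25000 -c 30000)" >> "$LOG"
echo "$TS threads $($PLUGIN -u igwstuser -n $CLASS -p threads -w 700 -c 800)" >> "$LOG"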

Step 4:
Add a cron job to execute the above shell script every 2 minutes:

*/2 * * * * cd /log/jvm-scripts; sh check_jvm.sh
Data will now be collected in the given location.
Step 5:
Here I'm assuming that the ELK stack is already installed and configured, so I will mention only the configurations related to this topic. Make sure to restart the filebeat and logstash services after adding the new configurations.
Filebeat: reads the log file and pushes it to Logstash; please refer to filebeat.yml in the link below.
Logstash: reads the logs sent by Filebeat, matches them to the correct index after applying grok patterns, and then pushes them to Elasticsearch; please refer to logstash.conf in the link below.
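For orientation, here is a minimal sketch of the pieces involved (the hosts, ports, tag, index name, and grok pattern are assumptions matching the sketch script above, not the exact files behind the links):

filebeat.yml:

filebeat.inputs:
- type: log
  paths:
    - /log/jvm-scripts/jvm-stats.log
  tags: ["jvm"]
output.logstash:
  hosts: ["localhost:5044"]

logstash.conf:

input { beats { port => 5044 } }
filter {
  if "jvm" in [tags] {
    # Split each record into timestamp, metric name, status text and perfdata
    grok { match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{NOTSPACE:metric} %{DATA:status}\|%{GREEDYDATA:perfdata}" } }
  }
}
output {
  if "jvm" in [tags] {
    elasticsearch { hosts => ["localhost:9200"] index => "jvm-%{+YYYY.MM.dd}" }
  }
}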


Step 6:
Go to the Kibana portal → Management → Elasticsearch → Index Management.
Search for "jvm" and you should see that the relevant indices were created in Elasticsearch.



Step 7:
Go to Kibana → Index Patterns → Create index pattern → type "jvm*" and create the index pattern.


Step 8:
Now you can create your own visualizations and dashboard to monitor JVM statistics.

You can download the Kibana visualizations here.
Feel free to comment here if you have any questions.

Monday, February 24, 2020

NRPE Custom Plugin for Apigate ELK


We can use both Nagios and Elastic alerting to generate alarms based on different metrics in the ELK module. Here I'm going to describe how to integrate Nagios (via an NRPE custom plugin) to generate alarms based on API delays.

Here I have used the default Apigate analytics module, and all indices are based on the Apigate product.

Step 1:

Create the check_api_delay.py file in the /usr/local/nagios/libexec/ directory (or your own custom plugin directory):
check_api_delay.py

Step 2:

Go to your Nagios configuration in /usr/local/nagios/etc (or your default location) and define the command and services as below.
       
#define check_api_delay command
define command{
        command_name    check_api_delay
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_api_delay -a $ARG1$ $ARG2$ $ARG3$ $ARG4$ $ARG5$
}

#define services
define service{
        use                     local-service
        host_name               MONITORING_01
        service_description     DelayProfile-Payment
        check_command           check_api_delay!DOB-Payment!amount!5000!100!150
        normal_check_interval   2
}

define service{
        use                     local-service
        host_name               MONITORING_01
        service_description     DelayProfile-QueryProfile-subscribertype
        check_command           check_api_delay!QueryProfile!subscribertype!5000!80!100
        normal_check_interval   2
}

#define service groups
define servicegroup{
        servicegroup_name       IGW-API-Delay-Profile
        alias                   IGW API Delay Profile
        members                 MONITORING_01,DelayProfile-Payment,MONITORING_01,DelayProfile-QueryProfile-subscribertype
}
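On the monitored host, the plugin itself must also be registered in nrpe.cfg so that check_nrpe can invoke it. A sketch (passing arguments through NRPE additionally requires dont_blame_nrpe=1, which has security implications):

command[check_api_delay]=/usr/local/nagios/libexec/check_api_delay.py $ARG1$ $ARG2$ $ARG3$ $ARG4$ $ARG5$

You can then test the check end-to-end from the Nagios server, for example:

/usr/local/nagios/libexec/check_nrpe -H <monitored-host> -c check_api_delay -a DOB-Payment amount 5000 100 150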
       
Step 3:

Restart the NRPE agent and the Nagios service.
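On RHEL 7 with systemd this would typically be (the service names are assumptions and may differ in your setup):

systemctl restart nrpe
systemctl restart nagios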
