Sunday, September 12, 2021

File Sharing using NFS in GKE Cluster

There was a requirement to create a common file-sharing location accessible by specific pods distributed across all 4 worker nodes. A standard PV and PVC is not suitable here, because a PV disk can only be mounted by pods located in the same zone.

There are different methods to overcome this issue; I selected the NFS server method.

It involves a few steps:

1. Create an NFS server deployment and expose it as a service
2. Mount a PVC to the NFS deployment
3. Create a PV and PVC backed by the NFS service
4. Mount that PVC into the required pods

I will explain the above steps in detail below.

01. Create persistent volume claim in GKE 

The manifest below creates a PersistentVolumeClaim that provisions a 5 GB disk; it will back the NFS server.
       
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
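
As a sketch, the claim could be applied and verified with kubectl; the manifest file name nfs-pvc.yaml is an assumption:

# Save the manifest above as nfs-pvc.yaml, then:
kubectl apply -f nfs-pvc.yaml

# The claim should reach the Bound state once GKE provisions the backing disk
kubectl get pvc nfs-pv-provisioning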

02: Create NFS server deployment

This manifest creates the NFS server deployment using the PVC created above.
       
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /resources
            name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning


03: Expose NFS server as service

Now we can expose the NFS server as a service. Note that the service selector must match the pod labels of the NFS deployment (app: nfs-server).
I faced an issue while creating the service using the YAML file, so I exposed it through the GCP console instead.
       
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    app: nfs-server

Once done, check the cluster IP of the NFS service; you will need it to configure the pods' PV/PVC as mentioned below.
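
A quick way to look up that cluster IP (a sketch using the service name from the manifest above):

kubectl get service nfs-service
# or print only the cluster IP
kubectl get service nfs-service -o jsonpath='{.spec.clusterIP}'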

04: Run sample pod with NFS disk mount

Next, we need to create a PersistentVolume and a PersistentVolumeClaim that point to the NFS service.
       
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <cluster IP of NFS service>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi


Here is a sample deployment whose pods mount the NFS-backed volume.
       
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-busybox
  template:
    metadata:
      labels:
        app: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index-`hostname`.html; hostname >> /mnt/index-`hostname`.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

I have scaled the deployment to test the shared disk across zones.
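
For example, scaling could be done with kubectl (a sketch; the replica count is arbitrary):

# Run more replicas so several pods across different nodes
# write to the same NFS-backed volume at the same time
kubectl scale deployment nfs-busybox --replicas=4
kubectl get pods -o wide   # verify the pods are spread across nodes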







The /mnt directory in each pod is successfully mounted from the NFS service.
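
One way to confirm the sharing from the command line (a sketch; pod names will differ in your cluster):

# Pick one of the nfs-busybox pods and list the shared directory;
# it should contain the index-<hostname>.html files written by every replica
POD=$(kubectl get pods -l app=nfs-busybox -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- ls -l /mnt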










Sunday, August 22, 2021

Azure Fundamentals (AZ-900) Exam Notes 02


Microsoft Certified: Azure Fundamentals


  • Azure Virtual Machine
- Emulation of a physical computer
- IaaS
- Scalability
    1. VM scale sets
    2. Azure Batch : pool of compute servers for raw computational power (supercomputer-scale jobs)

 When to use?
        + Testing and development
        + Running applications in the cloud
        + Extending the datacenter to the cloud
        + During DR
  • Azure App Service
- PaaS
- Types of App Service
        1. Web Apps : Hosting web apps; languages: ASP.NET / ASP.NET Core / Java / Python / Ruby / PHP / Node.js
        2. API Apps : REST-based web APIs
        3. Web Jobs : Run a program or script, can be scheduled
        4. Mobile Apps : Build back ends for Android or iOS apps
  • Azure Container Instances
- Deploy and manage containers
  • Azure Kubernetes Service
- Container orchestration service
  • Azure Functions and Logic Apps

                Azure Functions                      Azure Logic Apps
State           Stateful or stateless                Stateful
Development     Coding (imperative)                  Designer (declarative)
Connectivity    Large number of binding types        Large number of connectors
Actions         Code for activity functions          Ready-made actions
Monitoring      Azure Application Insights           Azure portal, Log Analytics
Management      REST API, Visual Studio              Azure portal, REST API, PowerShell, Visual Studio
Purpose         Execute code                         Execute workflows
  • Azure Virtual Desktop
- Desktop and application virtualization service
- Works across devices such as Windows, Linux, Android, Mac and iOS

Saturday, August 21, 2021

Azure Fundamentals (AZ-900) Exam Notes 01


Microsoft Certified: Azure Fundamentals


Why cloud computing?
1. Low cost
2. Efficient management
3. Scalability

Azure Compute Services: VM, VM Scale Sets, Kubernetes Service, Azure Batch, Azure Functions, Azure Container Instances
Networking: Virtual Network, Load Balancer, Application Gateway, VPN Gateway, DNS, CDN, DDoS Protection, Traffic Manager, ExpressRoute, Azure Firewall
Azure Storage: Blob Storage, File Storage, Queue Storage, Table Storage
Azure Databases: Cosmos DB, SQL Database, Database for MySQL, Database for PostgreSQL
Web: Azure App Service
IoT: IoT Central, IoT Hub, IoT Edge
Big Data: Synapse Analytics, HDInsight, Databricks

Azure Account > Subscriptions > Resource Groups > Resources

Public Cloud:
  • No CapEx
  • Quick provisioning/de-provisioning
  • Pay only for what you use

Private Cloud:
  • Complete control over resources and security
  • Hardware must be purchased at the beginning
  • Owner is responsible for hardware maintenance and updates

Hybrid Cloud:
  • Most flexibility
  • Owner can determine where to run applications
  • Owner can meet security, compliance and legal requirements

















Capital Expenditure (CapEx): spending money up front on physical infrastructure
Operational Expenditure (OpEx): spending money on services or products now, as you use them



              IaaS             PaaS                     SaaS
Example       VM               App Service              Office 365
Provides      Servers          IaaS + OS                PaaS + hosted applications
Benefit       More flexible    Focus on development     Pay-as-you-go model
 
Azure Organization Levels:

1. Management Group
2. Subscription (define limits or quotas)
3. Resource Groups (logical containers) : managed through Azure Resource Manager, the management layer that allows you to create, update, and delete resources
4. Resources (VM, Database, etc)









Management Groups: 
  • Up to 10,000 management groups are supported
  • Up to 6 levels of depth are supported
  • Each group has a single parent
  • Each group can have multiple children








Friday, May 22, 2020

Mysql InnoDB Cluster Implementation

Here I'm going to explain how to implement a MySQL InnoDB Cluster on a single VM. After implementation, the cluster architecture will be as follows.


Tested Environment:

Red Hat Enterprise Linux 7
MySQL 5.7

We can follow the steps below to implement this:

  1. Install 3 MySQL instances
  2. Configure the MySQL cluster
  3. Configure MySQL Router
Prerequisites

Install MySQL Server, MySQL Shell and MySQL Router:
 
       
yum install mysql-community-server mysql-shell mysql-router

Install 3 MySQL Instances

Here I'm going to explain how to configure one MySQL instance. You can follow the same steps to configure the remaining two instances as well.

Choose a suitable base directory and create the directory structure below in it.
  
data
log
my-3307.cnf
mysql-files
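
As a sketch, assuming the base directory /data1/mysql-cluster/3307 used in the configuration below, the layout can be created like this:

mkdir -p /data1/mysql-cluster/3307/{data,log,mysql-files}
touch /data1/mysql-cluster/3307/my-3307.cnf
# Repeat for the 3308 and 3309 instances with their own directories and config files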

Edit my-3307.cnf  file as below,
       
[mysqld]
# Enable the Group Replication plugin
plugin-load = group_replication.so

# General settings - ensure the binary log is enabled, disable all non-transactional storage engines except CSV (used for logs), etc.
port = 3307
log_bin
disabled_storage_engines = MyISAM,BLACKHOLE,FEDERATED,ARCHIVE
ssl
auto_increment_increment = 7
auto_increment_offset = 1
socket = mysqld.sock
loose_mysqlx_socket = mysqlx.sock
basedir = /data1/mysql-cluster/3307
datadir = /data1/mysql-cluster/3307/data
log_error = /data1/mysql-cluster/3307/log/error.log
pid_file = /data1/mysql-cluster/3307/3307.pid
secure_file_priv = /data1/mysql-cluster/3307/mysql-files

# Binary Log and Replication
server_id = 1
binlog_format = ROW
binlog_rows_query_log_events = ON
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
binlog_checksum = NONE
lower_case_table_names = 1
# Group Replication
group_replication = FORCE_PLUS_PERMANENT
group_replication_start_on_boot = OFF
group_replication_local_address = 127.0.0.1:13307
group_replication_group_seeds = 127.0.0.1:13308,127.0.0.1:13309

You can check the MySQL reference manual for detailed parameter definitions. The auto_increment_increment and auto_increment_offset parameters can be ignored here, since we are implementing a single-primary cluster; those values would have to be adjusted for a multi-primary cluster. The group_replication_local_address parameter should be the host IP with port 10000 + <mysql-port> [10000 + 3307 = 13307]. Here I have used localhost, since I'm implementing the cluster on a single VM. The group_replication_group_seeds parameter defines the other member nodes' IPs and ports.

Once the above file is ready, we can start the MySQL instance.

1.1. Initialize the MySQL data directory
mysqld --initialize-insecure \
--user=root \
--basedir=/data1/mysql-cluster/3307 \
--datadir=/data1/mysql-cluster/3307/data
1.2. Start the MySQL instance in the background
mysqld --defaults-file=/data1/mysql-cluster/3307/my-3307.cnf \
--user=root \
--plugin-dir=/usr/lib64/mysql/plugin &
1.3. First login as root without a password
mysql -P 3307 -u root --skip-password --socket=/data1/mysql-cluster/3307/data/mysqld.sock
1.4. Set the MySQL root password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'xxxxxxxx';
1.5. Log in as the root user
mysql -P 3307 -u root -pxxxxxxxx --socket=/data1/mysql-cluster/3307/data/mysqld.sock
1.6. Create the cluster user and grant the necessary permissions.
SET sql_log_bin = OFF;
CREATE USER 'clsadmin'@'%' IDENTIFIED BY 'ClsPass123';
GRANT SELECT, RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO 'clsadmin'@'%' WITH GRANT OPTION;
GRANT INSERT, UPDATE, DELETE ON `mysql`.* TO 'clsadmin'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `mysql_innodb_cluster_metadata`.* TO 'clsadmin'@'%' WITH GRANT OPTION;
SET sql_log_bin = ON;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'clsadmin'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'clsadmin'@'%' WITH GRANT OPTION;
SELECT @@global.gtid_executed;
RESET MASTER;
Now the MySQL 3307 instance is ready. You can configure the other two MySQL instances by following the above steps and changing the necessary parameters (port, directories, server_id, and group replication addresses). With all three instances running, the process list looks like this:
root      37683      1  5 May20 ?        02:51:46 mysqld --defaults-file=/data1/mysql-cluster/3307/my-new.cnf --user=root --plugin-dir=/usr/lib64/mysql/plugin
root      51090      1  0 May20 ?        00:11:25 mysqld --defaults-file=/data1/mysql-cluster/3308/my-new.cnf --user=root --plugin-dir=/usr/lib64/mysql/plugin
root      78216      1  0 May19 ?        00:17:02 mysqld --defaults-file=/data1/mysql-cluster/3309/my-new.cnf --user=root --plugin-dir=/usr/lib64/mysql/plugin
Configure the MySQL InnoDB Cluster

Since we have 3 MySQL instances in place, we can initiate the cluster.

2.1. Log in to MySQL Shell
mysqlsh
2.2. Connect to one of the MySQL instances using the command below
shell.connect('clsadmin@127.0.0.1:3307')
2.3. Check the instance configuration using the command below (run it against all 3 nodes)
dba.checkInstanceConfiguration("clsadmin@localhost:3307");
2.4. Configure the instance (again, run it against all 3 nodes); afterwards you can re-run the previous command to verify the configuration
dba.configureInstance("clsadmin@localhost:3307");
2.5. Create the InnoDB cluster using the command below
var cluster=dba.createCluster('myCluster');
2.6. Check the cluster status using the commands below
cluster.status()
cluster.options()
2.7. Add the remaining instances to the cluster
cluster.addInstance('clsadmin@127.0.0.1:3308')
cluster.addInstance('clsadmin@127.0.0.1:3309')
A sample MySQL Shell session checking the cluster status:
MySQL  JS > shell.connect('clsadmin@127.0.0.1:3307')
Creating a session to 'clsadmin@127.0.0.1:3307'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 354528
Server version: 5.7.30-log MySQL Community Server (GPL)
No default schema selected; type \use  to set one.

 MySQL  127.0.0.1:3307 ssl  JS > var cluster = dba.getCluster('myCluster')
 MySQL  127.0.0.1:3307 ssl  JS > cluster.status()
{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "WCIGWAPP01:3307", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "WCIGWAPP01:3307": {
                "address": "WCIGWAPP01:3307", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "WCIGWAPP01:3308": {
                "address": "WCIGWAPP01:3308", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "WCIGWAPP01:3309": {
                "address": "WCIGWAPP01:3309", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "WCIGWAPP01:3307"
}
 MySQL  127.0.0.1:3307 ssl  JS > 
The MySQL cluster is now almost complete, but we still need to configure MySQL Router so that applications can connect to it.

Configure MySQL Router

3.1. Bootstrap MySQL Router against the cluster
mysqlrouter --bootstrap clsadmin@localhost:3307 --user=root --force
3.2. Start MySQL Router in the background
mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf &
3.3. Check connectivity
mysqlsh --uri clsadmin@localhost:6446 --sql
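By default, a bootstrapped MySQL Router listens on port 6446 for read-write (primary) traffic and 6447 for read-only traffic over the classic protocol, so a quick sanity check could look like the sketch below, reusing the cluster user created earlier:

mysql -h 127.0.0.1 -P 6446 -u clsadmin -p -e "SELECT @@port;"   # should be served by the primary (3307)
mysql -h 127.0.0.1 -P 6447 -u clsadmin -p -e "SELECT @@port;"   # should be served by a read-only secondary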
Now we have a working MySQL InnoDB Cluster.
Please comment here if you have any clarifications.

Saturday, April 11, 2020

WSO2EI JVM Monitoring with ELK


This post shows how to visualize WSO2EI JVM statistics on a Kibana dashboard. WSO2EI runs on the JVM, and the ELK stack should already be implemented.
The following statistics will be available in Kibana after this integration:

  1. Thread count
  2. Class count
  3. Heap memory
  4. Non-heap memory

Tested Environment:
wso2ei-6.1.1
jdk1.8.0_172-amd64
nrpe  3.2.1
filebeat 6.8.0
logstash 6.8.0
Elasticsearch 6.8.0
Kibana 6.8.0



Step 1:
Install the check_jvm plugin on the VM as described in the article below:
nrpe plugin for JVM



Step 2:
Make sure that the NRPE plugin has been installed successfully. Execute the commands below and check the output.
Command
       
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p heap -w 2008096384 -c 3000218931
       
Output
       
OK - 826M |max=9544663040;commited=4462215168;used=865669640;
       
Command
       
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p non-heap -w 268435456 -c 300870912
       
Output
       
WARNING 267M |max=-1;;; commited=352649216;;;used=279245856;;;
       
Command
       
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p classes -w 25000 -c 30000
       
Output
       
OK - 22816 |classes=22816;;;
       
Command
       
/usr/local/nagios/libexec/check_jvm -u igwstuser -n org.wso2.carbon.bootstrap.Bootstrap -p threads -w 700 -c 800
       
Output
       
OK - 655 |threads=655;;;
       
Don't worry about the warning and critical threshold values; here we are only interested in the reported values.
Important: execute the above commands as the user who runs WSO2EI.


Step 3:
Write a shell script that executes the above commands and appends their output to a log file.
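
The post does not include the script itself; below is a minimal sketch of what check_jvm.sh could look like, reusing the commands from Step 2. The log file name jvm_stats.log is an assumption.

#!/bin/bash
# Minimal sketch of check_jvm.sh (assumed content, not the original script).
# It runs the check_jvm plugin for each metric and appends a timestamped line
# to a log file that Filebeat can ship to Logstash.

PLUGIN=/usr/local/nagios/libexec/check_jvm
RUN_USER=igwstuser
MAIN_CLASS=org.wso2.carbon.bootstrap.Bootstrap
LOG=/log/jvm-scripts/jvm_stats.log   # assumed log file name

TS=$(date '+%Y-%m-%d %H:%M:%S')
{
  echo "$TS heap $($PLUGIN -u $RUN_USER -n $MAIN_CLASS -p heap -w 2008096384 -c 3000218931)"
  echo "$TS non-heap $($PLUGIN -u $RUN_USER -n $MAIN_CLASS -p non-heap -w 268435456 -c 300870912)"
  echo "$TS classes $($PLUGIN -u $RUN_USER -n $MAIN_CLASS -p classes -w 25000 -c 30000)"
  echo "$TS threads $($PLUGIN -u $RUN_USER -n $MAIN_CLASS -p threads -w 700 -c 800)"
} >> "$LOG"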

Step 4:
Add cronjob to execute the above shell script every 2 minutes.
       
*/2 * * * * cd /log/jvm-scripts; sh check_jvm.sh
       
Now data will be collected on the given location.
Step 5:
Here I'm assuming that the ELK stack is already installed and configured; I will only mention the configuration related to this topic. You can refer to the links below for the Filebeat and Logstash configuration. Make sure to restart the Filebeat and Logstash services after adding the new configuration.
Filebeat: reads the log file and pushes the entries to Logstash (please refer to filebeat.yml in the link below).
Logstash: reads the events sent by Filebeat, matches them to the correct index after applying grok patterns, and then pushes them to Elasticsearch (please refer to logstash.conf in the link below).


Step 6:
Go to the Kibana portal → Management → Elasticsearch → Index Management.
Search for "jvm" and you should see that the relevant indices have been created in Elasticsearch.



Step 7:
Go to Kibana → Index Patterns → Create index pattern → type "jvm*" and create the index pattern.


Step 8:
Now you can create your own visualizations and dashboard to monitor JVM statistics.

You can download the Kibana visualizations here.
Feel free to comment if you have any questions.

Monday, February 24, 2020

NRPE Custom Plugin for Apigate ELK


We can use both Nagios and Elastic alerts to generate alarms based on different metrics in the ELK module. Here I'm going to describe how to integrate Nagios (via an NRPE custom plugin) to generate alarms based on API delays.

Here I have used the default Apigate analytics module, and all indices are based on the Apigate product.

Step 1:

Create the check_api_delay.py file in the /usr/local/nagios/libexec/ directory (or your own custom plugin directory).
check_api_delay.py

Step 2:

Go to the nrpe.cfg file in /usr/local/nagios/etc (or your default location) and define the NRPE command and services as below:
       
#define check_api_delay command
define command{
        command_name    check_api_delay
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_api_delay -a $ARG1$ $ARG2$ $ARG3$ $ARG4$ $ARG5$
}

#define services
define service{
        use                     local-service
        host_name               MONITORING_01
        service_description     DelayProfile-Payment
        check_command           check_api_delay!DOB-Payment!amount!5000!100!150
        normal_check_interval   2
}

define service{
        use                     local-service
        host_name               MONITORING_01
        service_description     DelayProfile-QueryProfile-subscribertype
        check_command           check_api_delay!QueryProfile!subscribertype!5000!80!100
        normal_check_interval   2
}

#define service groups
define servicegroup{
        servicegroup_name       IGW-API-Delay-Profile
        alias                   IGW API Delay Profile
        members                 MONITORING_01,DelayProfile-Payment,MONITORING_01,DelayProfile-QueryProfile-subscribertype
}
       
Step 4:

Restart the NRPE agent and the Nagios service.
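
Before waiting for the scheduler, the check can also be invoked manually from the Nagios server with the same arguments used in the service definition (a sketch; replace <monitored-host> with the actual host address):

/usr/local/nagios/libexec/check_nrpe -H <monitored-host> -c check_api_delay -a DOB-Payment amount 5000 100 150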

Tuesday, November 12, 2019

Shell Script to simulate API requests for Load Tests


You can use the shell script below to simulate a pre-defined load on an API.

       
#!/bin/bash

#########################################################
# This is simple shell script to simulate API load      #
# for given duration                                    #
# Eg: DURATION=60 and TPS=20, This script will trigger  #
# 20 requests per second for 1 minute.                  #
#########################################################


#define variables

set -x # run in debug mode
DURATION=60 # how long should load be applied ? - in seconds
TPS=20 # number of requests per second
end=$((SECONDS+$DURATION))

#start load
while [ $SECONDS -lt $end ];
do
    for ((i=1;i<=$TPS;i++)); do
        curl -X POST  -H 'Accept: application/json' -H 'Authorization: Bearer xxxxxxxxxxxxx' -H 'Content-Type: application/json' -d '{}' --cacert /path/to/cert/cert.crt -o /dev/null -s -w '%{time_starttransfer}\n' >> response-times.log &
    done
    sleep 1
done
wait

#end load
echo "Load test has been completed" 
       
Download the script from here.

The while loop runs the load for the given time period.
The for loop fires the given number of requests per second.
--cacert is optional - it is only needed for HTTPS requests; you can use the -k option instead.
-o /dev/null discards the response body.
-s silences curl's progress output.
-w '%{time_starttransfer}\n' prints the time to first byte, which is appended to response-times.log.
Each request runs in the background so the script can fire the next request immediately.
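
As a follow-up, the collected timings can be summarised with awk (a sketch; it assumes response-times.log contains one time-to-first-byte value per line, as produced by the script above):

# Count, average, min and max response time in seconds
awk '{ sum+=$1; if (min=="" || $1<min) min=$1; if ($1>max) max=$1 }
     END { printf "requests=%d avg=%.3f min=%.3f max=%.3f\n", NR, sum/NR, min, max }' response-times.log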
