

Kubernetes K3S: the simplest high availability cluster between two redundant servers

With the synchronous replication and automatic failover provided by Evidian SafeKit

How does the Evidian SafeKit software simply implement a Kubernetes K3S high availability cluster between two redundant servers?

The solution for Kubernetes K3S

Evidian SafeKit brings high availability to Kubernetes K3S between two redundant servers. This article explains how to quickly implement a Kubernetes cluster on 2 nodes without external NFS storage, without an external configuration database and without specific skills.

Note that SafeKit is a generic product. With the same product, you can implement real-time replication and failover of directories and services, databases, Docker, Podman, full Hyper-V or KVM virtual machines, and cloud applications (see the module list).

This clustering solution is recognized as the simplest to implement by our customers and partners. SafeKit is the ideal solution for running Kubernetes applications on premises on 2 nodes.

We have chosen K3S as the Kubernetes engine because it is a lightweight solution for IoT & Edge computing.

The k3s.safe mirror module implements:

  • 2 active K3S masters/agents running pods
  • replication of the K3S configuration database (MariaDB)
  • replication of persistent volumes (implemented by the NFS client dynamic provisioner storage class nfs-client; see the example after this list)
  • virtual IP address, automatic failover, automatic failback
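
For illustration, here is a minimal sketch (the claim name and size are hypothetical) of how an application pod gets its persistent data onto the replicated NFS share: a PersistentVolumeClaim simply requests the nfs-client storage class installed by the module.

# Hypothetical claim served by the nfs-client dynamic provisioner; the backing
# NFS directory is replicated in real time by SafeKit to the other node.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data              # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi             # hypothetical size
EOF

Any pod mounting this claim keeps its data after a failover, since the underlying NFS export is part of the replicated directories.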

How does it work?

The following table explains how the solution works on 2 nodes. Other nodes with K3S agents (without SafeKit) can be added for horizontal scalability.

Kubernetes K3S components

On the SafeKit PRIM node:

  • K3S (master and agent) is running pods on the primary node.
  • The NFS server is running on the primary node with:
      • a virtual IP/NFS port
      • the exported NFS share
      • the K3S persistent volumes
  • The MariaDB server is running on the primary node with:
      • a virtual IP/MariaDB port
      • the K3S configuration database

On the SafeKit SECOND node:

  • K3S (master and agent) is running pods on the secondary node.
  • The persistent volumes are replicated synchronously and in real time by SafeKit on the secondary node.
  • The configuration database is replicated synchronously and in real time by SafeKit on the secondary node.
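
For reference, K3S natively supports an external SQL datastore, so both masters can simply point to the MariaDB service behind the SafeKit virtual IP. A minimal sketch, where the virtual IP, credentials and database name are placeholders to adapt:

# Hypothetical example: start K3S on each node against the replicated MariaDB
# datastore reachable through the SafeKit virtual IP (placeholders to adapt).
k3s server \
  --datastore-endpoint="mysql://k3s:secret@tcp(10.0.0.100:3306)/kubernetes"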

A simple solution

SafeKit is the simplest high availability solution for running Kubernetes applications on 2 nodes and on premises.

SafeKit Benefits

  • Synchronous real-time replication of persistent volumes: no external NAS/NFS storage for persistent volumes.
  • Only 2 nodes for HA of Kubernetes: no need for 3 nodes as with the etcd database.
  • The same simple product for virtual IP address, replication, failover, failback, administration and maintenance: avoids mixing different technologies for the virtual IP (MetalLB, BGP), HA of persistent volumes and HA of the configuration database.
  • Supports disaster recovery with two remote nodes: avoids replicated NAS storage.

Partners, the success with SafeKit

This platform-agnostic solution is ideal for a partner reselling a critical application who wants to provide an easy-to-deploy redundancy and high availability option to many customers.

With many references won by partners in many countries, SafeKit has proven to be the easiest solution to implement for redundancy and high availability of building management, video management, access control and SCADA software.

Building Management Software (BMS)

Video Management Software (VMS)

Electronic Access Control Software (EACS)

SCADA Software (Industry)

How does the SafeKit mirror cluster work with Kubernetes K3S?

Step 1. File replication at byte level in a mirror cluster

This step corresponds to the following figure. Server 1 (PRIM) runs the Kubernetes K3S components explained in the previous table. Clients are connected to the virtual IP address of the mirror cluster. SafeKit replicates in real time files opened by the Kubernetes K3S components. Only changes made by the components in the files are replicated across the network, thus limiting traffic (byte-level file replication).

File replication at byte level in a Kubernetes K3S mirror cluster

With software data replication at the file level, only the names of the directories to replicate are configured in SafeKit. There are no prerequisites on disk organization for the two servers. Replicated directories may even be located on the system disk. Unlike asynchronous replication, SafeKit implements synchronous replication with no data loss on failure.
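
As an indication only, the replicated directories are declared in the module's userconfig.xml, in the <rfs> section listed in the training outline further down. A minimal sketch, where the directory paths are assumptions to adapt to your installation and the exact attributes may differ from the shipped k3s.safe module:

<!-- Hypothetical excerpt of userconfig.xml: only directory names are declared -->
<rfs>
  <replicated dir="/var/lib/mysql"/>      <!-- MariaDB data (K3S configuration database) -->
  <replicated dir="/data/nfs-export"/>    <!-- NFS export holding the persistent volumes -->
</rfs>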

Step 2. Failover

When Server 1 fails, Server 2 takes over. SafeKit switches the cluster's virtual IP address and restarts the Kubernetes K3S components automatically on Server 2. The components find the files replicated by SafeKit up to date on Server 2, thanks to the synchronous replication between Server 1 and Server 2. The components continue to run on Server 2, locally modifying their files, which are no longer replicated to Server 1.

Failover in a Kubernetes K3S mirror cluster

The failover time is equal to the fault-detection time (set to 30 seconds by default) plus the components' start-up time. Unlike disk replication solutions, there is no delay for remounting the file system and running file-system recovery procedures.
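
To observe a failover from the command line, the module state can be queried on either node with the SafeKit CLI covered in the training below (the module name k3s is an assumption; use the name under which the k3s.safe module was installed):

# Check which node is PRIM and which is SECOND for the module (name assumed)
safekit state -m k3s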

Step 3. Failback and reintegration

Failback involves restarting Server 1 after fixing the problem that caused it to fail. SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted. This reintegration takes place without disturbing the Kubernetes K3S components, which can continue running on Server 2.

Failback in a Kubernetes K3S mirror cluster

If SafeKit was cleanly stopped on Server 1, then at restart only the zones modified inside files are resynchronized, according to modification tracking bitmaps.

If Server 1 crashed (power off), the modification bitmaps are not reliable and are not used. All files bearing a modification timestamp more recent than the last known synchronization point are resynchronized.
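
Reintegration begins when SafeKit is started again on the repaired node; only then does the resynchronization described above run, while the components keep working on Server 2. A minimal sketch with an assumed module name:

# On the repaired Server 1: restart the module; it comes back as SECOND and
# resynchronizes the replicated directories before returning to mirror mode.
safekit start -m k3s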

Step 4. Return to byte-level file replication in the mirror cluster

After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the Kubernetes K3S components running on Server 2 and SafeKit replicating file updates to the secondary Server 1.

Return to normal operation in a Kubernetes K3S mirror cluster

If the administrator wishes the Kubernetes K3S components to run on Server 1, they can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
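
The swap mentioned above is a single CLI call (module name assumed):

# Make Server 1 primary again once both nodes are back in mirror mode
safekit swap -m k3s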

SafeKit free trial + mirror module for Kubernetes K3S + quick installation guide

Typical usage with SafeKit

Why a replication volume of a few terabytes?

Resynchronization time after a failure (step 3)

  • 1 Gb/s network ≈ 3 hours for 1 terabyte.
  • 10 Gb/s network ≈ 1 hour for 1 terabyte, or less depending on disk write performance.
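
As a rough check: 1 Gb/s is about 125 MB/s, so copying 1 TB takes at least 10^12 / (1.25 × 10^8) ≈ 8,000 s, i.e. about 2.2 hours; protocol and disk-write overhead bring this closer to the 3 hours quoted above.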

Alternative

Why a replication < 1,000,000 files?

  • Resynchronization time performance after a failure (step 3).
  • Time to check each file between both nodes.

Alternative

  • Put the many files to replicate in a virtual hard disk / virtual machine.
  • Only the files representing the virtual hard disk / virtual machine will be replicated and resynchronized in this case.

Why a failover ≤ 32 replicated VMs?

  • Each VM runs in an independent mirror module.
  • Maximum of 32 mirror modules running on the same cluster.

Alternative

  • Use an external shared storage and another VM clustering solution.
  • More expensive, more complex.

Why a LAN/VLAN network between remote sites?

Alternative

  • Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
  • Use backup solutions with asynchronous replication for high-latency networks.

SafeKit Modules for Plug&Play Redundancy and High Availability Solutions

Advanced clustering architectures

Several modules can be deployed on the same cluster, so advanced clustering architectures can be implemented.

Evidian SafeKit Webinar

Evidian SafeKit Overview Slides

  • Demonstration
  • Examples of redundancy and high availability solutions
  • Evidian SafeKit sold in many different countries with Milestone
  • 2 solutions: virtual machine cluster or application cluster
  • Distinctive advantages
  • More information on the web site

More slides

SafeKit Customers in all Business Activities

  • Best use cases: the best high availability use cases with SafeKit
  • Video management, access control, building management: high availability of video management, access control and building management software with SafeKit
  • TV broadcasting: Harmonic has deployed more than 80 SafeKit clusters for TV broadcasting
  • Finance: Natixis uses SafeKit as a high availability solution for banking applications
  • Industry: Fives Syleps implements high availability with SafeKit for automated logistics
  • Air traffic control: Copperchase, an air traffic control systems supplier, deploys SafeKit high availability in airports
  • Bank: software vendor Wellington IT deploys SafeKit in banks
  • Transport: the Paris transport company (RATP) chose SafeKit high availability for metro lines
  • Healthcare: Systel deploys SafeKit in emergency call centers
  • Government: ERP high availability and load balancing for the French army (DGA) are implemented with SafeKit

SafeKit High Availability Differentiators against Competition

Evidian SafeKit 8.2

All new features compared to 7.5 are described in the release notes

Packages

One-month license key

Technical documentation

Training

Modules and quick installation

SafeKit 8.2 Training

Introduction

  1. Overview / pptx

    • Demonstration
    • Examples of redundancy and high availability solutions
    • Evidian SafeKit sold in many different countries with Milestone
    • 2 solutions: virtual machine or application cluster
    • Distinctive advantages
    • More information on the web site
  2. Competition / pptx

    • Cluster of virtual machines
    • Mirror cluster
    • Farm cluster

Installation, Console, CLI

  1. Install and setup / pptx
    • Package installation
    • Nodes setup
    • Upgrade
  2. Web console / pptx
    • Configuration of the cluster
    • Configuration of a new module
    • Advanced usage
    • Securing the web console
  3. Command line / pptx
    • Configure the SafeKit cluster
    • Configure a SafeKit module
    • Control and monitor

Advanced configuration

  1. Mirror module / pptx
    • start_prim / stop_prim scripts
    • userconfig.xml
    • Heartbeat (<heartbeat>)
    • Virtual IP address (<vip>)
    • Real-time file replication (<rfs>)
    • How real-time file replication works
    • Mirror's states in action
  2. Farm module / pptx
    • start_both / stop_both scripts
    • userconfig.xml
    • Farm heartbeats (<farm>)
    • Virtual IP address (<vip>)
    • Farm's states in action
  3. Checkers / pptx
    • userconfig.xml
    • errd checker
    • intf and ip checkers
    • custom checker
    • splitbrain checker for a mirror module
    • tcp, ping, module checkers
    • Checkers in action

Troubleshooting

  1. Troubleshooting / pptx
    • Analyze the logs yourself
    • Take snapshots for support
    • Boot / shutdown
    • Web console / Command lines
    • Mirror / Farm / Checkers
    • Running an application without SafeKit

Support

  1. Evidian support / pptx
    • Get permanent license key
    • Register on support.evidian.com
    • Call desk