Maintain MySQL without downtime by using ProxySQL


When running a MySQL database in a production environment, we want to be able to take one of the DB nodes offline for maintenance and put it back online afterwards.

To make this maintenance process transparent to the applications connecting to the MySQL database, we use ProxySQL to route traffic from the applications to the backend MySQL DB nodes.

ProxySQL is a high performance, high availability, protocol aware proxy for MySQL and forks (like Percona Server and MariaDB).

Whenever you would like to pull a MySQL DB node from your cluster for maintenance, you normally need to address the following concerns:

  • How does it affect my in-flight transactions?
  • How does it affect my existing DB connections?
  • How does it affect my new transactions?

Based on the concept from Ratnadeep Debnath’s talk: Scale MySQL beyond limits with ProxySQL, we will manually simulate the process of doing maintenance on one DB node in a cluster behind a ProxySQL server.

Disclaimer: the docker-compose files are forked from Ratnadeep Debnath’s github repo.

You can find the source code on GitHub.

Our lab setup looks like this:

  • proxysql — our container running ProxySQL
  • db1, db2 — our containers running the two backend MySQL databases; note: there is no synchronization between them in our lab.
  • simulated application — a bash shell in our test docker container, simulating an application connecting to MySQL via ProxySQL.
  • proxysql admin — a bash shell in our test docker container, used to manage mysql_query_rules by connecting to the ProxySQL admin interface.
  • direct client to db1, direct client to db2 — bash shells in our test docker container, used to check the data written on each backend DB.
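Before walking through the steps, it helps to see how ProxySQL maps the two backends to hostgroups, since that mapping is what we will flip later. The sketch below is a sanity check against the ProxySQL admin interface; the admin port (6032) and admin/admin credentials are ProxySQL defaults, and the exact hostgroup IDs in your lab may differ (check the forked docker-compose files):

```shell
# Inspect how db1/db2 are registered in ProxySQL.
# 6032 is ProxySQL's default admin port; admin/admin are its default
# admin credentials. Hostgroup IDs are lab-specific.
mysql -h proxysql -P 6032 -u admin -padmin \
  -e "SELECT hostgroup_id, hostname, port, status FROM mysql_servers;"
```

Each backend belongs to a hostgroup, and a query rule routes a user’s traffic to a destination_hostgroup; switching that destination is the whole trick behind this demo.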

Here are the high level steps:

  • Start docker containers and setup the test table
  • Start in-flight transaction
  • Switch destination_hostgroup for our test user
  • Commit the in-flight transaction
  • Shutdown db1
  • Test a new transaction after shutting down db1

Now, let’s get hands-on and manually simulate pulling db1 out of our dummy cluster.

Start docker containers and setup the test table

Start the docker containers by issuing command: docker-compose up -d

This will start the following docker containers:

Creating db_backend_switch_db1_1 … done
Creating db_backend_switch_db2_1 … done
Creating db_backend_switch_proxysql_1 … done
Creating db_backend_switch_nose_1 … done

Now let’s hop onto db_backend_switch_nose_1 by opening a bash shell in it, and create our test DB schema in both db1 and db2.
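A sketch of those two steps is below. The credentials, database name, and table layout are illustrative assumptions, not taken from the repo; adjust them to match the forked docker-compose files:

```shell
# Open a shell in the test container (name taken from the compose output above)
docker exec -it db_backend_switch_nose_1 bash

# Inside the container: create the same test schema on both backends,
# so the demo works no matter which backend ProxySQL routes us to.
for host in db1 db2; do
  mysql -h "$host" -u root -proot -e "
    CREATE DATABASE IF NOT EXISTS sbtest;
    CREATE TABLE IF NOT EXISTS sbtest.t1 (
      id   INT PRIMARY KEY AUTO_INCREMENT,
      note VARCHAR(64)
    );"
done
```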

Start in-flight transaction (simulated application)

Let’s open a bash shell in our test docker container to simulate an application connecting to MySQL via ProxySQL:
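The session looks roughly like this (a transcript sketch, not copied from the repo; 6033 is ProxySQL’s default MySQL traffic port, and the sbtest1 credentials and table are assumptions for this lab):

```shell
# Connect through ProxySQL, not directly to a backend.
mysql -h proxysql -P 6033 -u sbtest1 -psbtest1 sbtest

-- then, at the mysql prompt: open a transaction but do NOT commit yet
BEGIN;
INSERT INTO t1 (note) VALUES ('written before the switch');
-- leave this session open: this is our in-flight transaction
```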

Now we have an in-flight transaction, which by default should be connected to the backend DB: db1.

You can verify that the transaction is not yet committed by directly connecting to db1.
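For example, from the “direct client to db1” shell (credentials are the same illustrative assumptions as above):

```shell
# The row should NOT be visible yet: the transaction is still open
# in the simulated application's session and has not been committed.
mysql -h db1 -u root -proot -e "SELECT note FROM sbtest.t1;"
```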

Switch destination_hostgroup for our test user (proxysql admin)

With an in-flight transaction set up, we are now ready to switch the query rule for our test user sbtest1 from sending queries to db1 to sending queries to db2:
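A sketch of the switch, run from the proxysql admin shell. The hostgroup number and the assumption that the lab’s rule matches on username are mine; check your own mysql_query_rules first. LOAD/SAVE are standard ProxySQL admin commands:

```shell
# Connect to the ProxySQL admin interface (default port 6032, admin/admin).
mysql -h proxysql -P 6032 -u admin -padmin

-- at the admin prompt: point sbtest1's rule at db2's hostgroup
-- (hostgroup 2 for db2 is an assumption for this lab)
UPDATE mysql_query_rules SET destination_hostgroup = 2
 WHERE username = 'sbtest1';
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

Note that only new connections follow the updated rule; a connection with an open transaction stays pinned to its original backend, which is exactly what makes the next step safe.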

Commit the in-flight transaction (simulated application)

Now we can come back to our simulated application and commit our in-flight transaction by issuing commit; Voila! It works. Because the transaction was started before the rule switch, ProxySQL keeps it on its original backend, so you can verify it was successfully committed to db1 by directly connecting to db1.

Shutdown db1

Let’s shut down the db1 docker container by issuing a docker-compose command on the host machine: docker-compose stop db1

Test a new transaction after shutting down db1 (simulated application)

Now let’s use the simulated application to test a new transaction.

You will notice that when you issue begin in the mysql client, you will get an error: ERROR 2013 (HY000): Lost connection to MySQL server during query

⚠️ To avoid this error in your application, validate connections in your connection pool (if you use one) before handing them out; don’t let the pool hand your application a dead connection.
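The idea behind such a validation is simply to run a cheap query before trusting the connection, the same way a pool’s test-on-borrow check works. A shell-level sketch of that check (same assumed lab credentials as above):

```shell
# A cheap validation query, analogous to a pool's test-on-borrow check:
# if it fails, the connection is stale and should be discarded.
mysql -h proxysql -P 6033 -u sbtest1 -psbtest1 -e "SELECT 1" \
  || echo "stale connection: discard it and open a new one"
```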

However, the mysql client automatically reconnects to the MySQL DB via ProxySQL. Now let’s restart the new transaction and give it a try. Voila! It works. You can verify that the transaction was successfully committed to db2 by directly connecting to db2.
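The retried transaction and its verification look roughly like this (same illustrative table and credentials as earlier; the reconnected session now follows the updated rule and lands on db2):

```shell
-- in the reconnected application session, at the mysql prompt:
BEGIN;
INSERT INTO t1 (note) VALUES ('written after the switch');
COMMIT;
```

Then, from the “direct client to db2” shell:

```shell
# The new row should be visible on db2, and only on db2.
mysql -h db2 -u root -proot -e "SELECT note FROM sbtest.t1;"
```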

Here we manually demonstrated how ProxySQL can help you handle an in-flight transaction, a new transaction, and an existing connection when pulling a DB node out of your DB cluster. Of course, we would not want to do all these tedious and error-prone steps in our production environment. So, please look forward to my next blog post about how to handle it automatically.

David Liu — Experienced Java developer. Interested in DevOps, JVM optimization, microservices.