Planet MySQL - https://planet.mysql.com

  • ProxySQL Behavior in the Percona Kubernetes Operator for Percona XtraDB Cluster
    The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, the behavior of ProxySQL is pretty much the same as in a regular non-Kubernetes deployment. So why bother writing a blog about it? Because what happens around ProxySQL in the context of the operator is actually interesting.

    ProxySQL is deployed in its own pod (which can be scaled just as the PXC pods can). Each ProxySQL pod has its own ProxySQL container and a sidecar container. If you are curious, you can find out which node holds the pod by running:

    kubectl describe pod cluster1-proxysql-0 | grep Node:
    Node: ip-192-168-37-111.ec2.internal/192.168.37.111

    Log in to that node and ask for the running containers. You will see something like this:

    [root@ip-192-168-37-111 ~]# docker ps | grep -i proxysql
    d63c55d063c5  percona/percona-xtradb-cluster-operator  "/entrypoint.sh /usr…"  2 hours ago  Up 2 hours  k8s_proxysql-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
    d75002a3847e  percona/percona-xtradb-cluster-operator  "/entrypoint.sh /usr…"  2 hours ago  Up 2 hours  k8s_pxc-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
    e34d551594a8  percona/percona-xtradb-cluster-operator  "/entrypoint.sh /usr…"  2 hours ago  Up 2 hours  k8s_proxysql_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0

    Now, what is the purpose of the sidecar container in this case? It detects whether new PXC nodes (pods) have been added or, on the contrary, PXC pods have been removed (due to a scale down), and configures ProxySQL accordingly.

    Adding and Removing PXC Nodes (Pods)

    Let's see it in action. A regular PXC Kubernetes deployment with 3 PXC pods, like this:

    kubectl get pod
    NAME                  READY   STATUS    RESTARTS   AGE
    cluster1-proxysql-0   3/3     Running   0          106m
    cluster1-proxysql-1   3/3     Running   0          106m
    cluster1-proxysql-2   3/3     Running   0          106m
    cluster1-pxc-0        1/1     Running   0          131m
    cluster1-pxc-1        1/1     Running   0          128m
    cluster1-pxc-2        1/1     Running   0          129m

    will have the following mysql_servers information:

    mysql> select hostgroup_id, hostname, status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------+--------+
    | hostgroup_id | hostname                                          | status | weight |
    +--------------+---------------------------------------------------+--------+--------+
    | 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    +--------------+---------------------------------------------------+--------+--------+
    5 rows in set (0.00 sec)

    What do we have?

    - 3 PXC pods
    - 3 ProxySQL pods
    - The 3 PXC pods (or nodes) registered inside ProxySQL
    - And several host groups

    What are those host groups?

    mysql> select * from runtime_mysql_galera_hostgroups\G
    *************************** 1. row ***************************
           writer_hostgroup: 11
    backup_writer_hostgroup: 12
           reader_hostgroup: 10
          offline_hostgroup: 13
                     active: 1
                max_writers: 1
      writer_is_also_reader: 2
    max_transactions_behind: 100
                    comment: NULL
    1 row in set (0.01 sec)

    ProxySQL is using its native Galera support and has defined a writer hostgroup, a backup writer hostgroup, and a reader hostgroup. Looking back at the server configuration, we have 1 writer and 2 readers, and those same 2 readers also act as backup writers.
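    For reference only, here is a hedged sketch of what an equivalent native Galera configuration would look like in a standalone ProxySQL 2.x admin session, using the hostgroup numbers shown above. The operator's sidecar manages all of this for you, so none of it needs to be typed in a cluster deployed by the operator.

    -- Sketch for a standalone ProxySQL admin session; the operator normally does this itself.
    INSERT INTO mysql_galera_hostgroups
      (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
       active, max_writers, writer_is_also_reader, max_transactions_behind)
    VALUES (11, 12, 10, 13, 1, 1, 2, 100);
    -- The Galera hostgroup definitions belong to the "mysql servers" module:
    LOAD MYSQL SERVERS TO RUNTIME;
    SAVE MYSQL SERVERS TO DISK;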
    And what are the query rules?

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+--------------+---------------------+--------+-----------------------+
    | rule_id | username     | match_digest        | active | destination_hostgroup |
    +---------+--------------+---------------------+--------+-----------------------+
    | 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 2       | clustercheck | ^SELECT             | 1      | 10                    |
    | 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 4       | monitor      | ^SELECT             | 1      | 10                    |
    | 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 6       | root         | ^SELECT             | 1      | 10                    |
    | 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
    +---------+--------------+---------------------+--------+-----------------------+
    8 rows in set (0.00 sec)
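    These rules implement a per-user read/write split: SELECT ... FOR UPDATE is routed to the writer hostgroup (11) and any other SELECT to the reader hostgroup (10). Again purely as a hedged sketch (the operator creates these rules for you), a pair of rules like the root ones above would be defined in a standalone ProxySQL along these lines:

    -- Sketch for a standalone ProxySQL admin session; rule_id values are illustrative.
    INSERT INTO mysql_query_rules (rule_id, active, username, match_digest, destination_hostgroup, apply)
    VALUES (5, 1, 'root', '^SELECT.*FOR UPDATE', 11, 1);
    INSERT INTO mysql_query_rules (rule_id, active, username, match_digest, destination_hostgroup, apply)
    VALUES (6, 1, 'root', '^SELECT', 10, 1);
    LOAD MYSQL QUERY RULES TO RUNTIME;
    SAVE MYSQL QUERY RULES TO DISK;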
    Now, let's scale up the deployment and add 2 more PXC pods:

    kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'

    And let's check the pods:

    kubectl get pods
    NAME                  READY   STATUS    RESTARTS   AGE
    cluster1-proxysql-0   3/3     Running   0          124m
    cluster1-proxysql-1   3/3     Running   0          124m
    cluster1-proxysql-2   3/3     Running   0          124m
    cluster1-pxc-0        1/1     Running   0          149m
    cluster1-pxc-1        1/1     Running   0          146m
    cluster1-pxc-2        1/1     Running   0          147m
    cluster1-pxc-3        1/1     Running   0          2m53s
    cluster1-pxc-4        1/1     Running   0          2m10s

    And now the transition inside ProxySQL:

    mysql> select hostgroup_id, hostname, status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------+--------+
    | hostgroup_id | hostname                                          | status | weight |
    +--------------+---------------------------------------------------+--------+--------+
    | 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    +--------------+---------------------------------------------------+--------+--------+
    5 rows in set (0.00 sec)

    mysql> select hostgroup_id, hostname, status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------------+--------+
    | hostgroup_id | hostname                                          | status       | weight |
    +--------------+---------------------------------------------------+--------------+--------+
    | 11           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 13           | cluster1-pxc-4.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 10           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 12           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE       | 1000   |
    | 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | OFFLINE_HARD | 1000   |
    +--------------+---------------------------------------------------+--------------+--------+
    9 rows in set (0.00 sec)

    mysql> select hostgroup_id, hostname, status, weight from runtime_mysql_servers;
    +--------------+---------------------------------------------------+--------+--------+
    | hostgroup_id | hostname                                          | status | weight |
    +--------------+---------------------------------------------------+--------+--------+
    | 11           | cluster1-pxc-4.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-3.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    | 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
    +--------------+---------------------------------------------------+--------+--------+
    9 rows in set (0.00 sec)

    What happened? The new PXC nodes were added to ProxySQL and are ready to handle traffic. We can also see that the previous writer (master) node, cluster1-pxc-2, is now assigned to the reader hostgroup, and the new writer is cluster1-pxc-4.

    And what about the query rules?

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+--------------+---------------------+--------+-----------------------+
    | rule_id | username     | match_digest        | active | destination_hostgroup |
    +---------+--------------+---------------------+--------+-----------------------+
    | 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 2       | clustercheck | ^SELECT             | 1      | 10                    |
    | 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 4       | monitor      | ^SELECT             | 1      | 10                    |
    | 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 6       | root         | ^SELECT             | 1      | 10                    |
    | 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
    | 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
    +---------+--------------+---------------------+--------+-----------------------+
    8 rows in set (0.00 sec)

    Same as before: query rules are not modified when adding or removing PXC pods. But what happens when rules are modified?

    Adding and Removing ProxySQL Query Rules

    In our operator, we use ProxySQL Native Clustering. What does that mean? It means that if a user makes a configuration change (via the admin port) on one instance, it is automatically distributed to all members.
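    Under the hood, ProxySQL native clustering means each instance knows about its peers and pulls configuration from them when it detects a change. The operator wires this up automatically; purely as a hedged illustration of what that configuration looks like, in a standalone setup the peers would be declared in the proxysql_servers admin table, roughly like this (the hostnames below are placeholders, not the operator's actual service names):

    -- Sketch only; the operator configures clustering for you.
    INSERT INTO proxysql_servers (hostname, port, weight, comment)
    VALUES ('proxysql-peer-1.example.local', 6032, 0, 'peer 1');
    INSERT INTO proxysql_servers (hostname, port, weight, comment)
    VALUES ('proxysql-peer-2.example.local', 6032, 0, 'peer 2');
    LOAD PROXYSQL SERVERS TO RUNTIME;
    SAVE PROXYSQL SERVERS TO DISK;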
    For example, let's add a rule. We use pod 0 for adding the rule:

    kubectl exec -it cluster1-proxysql-0 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password
    Defaulting container name to proxysql.
    Use 'kubectl describe pod/cluster1-proxysql-0 -n pxc' to see all of the containers in this pod.
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 2270
    Server version: 8.0.18 (ProxySQL Admin Module)

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    Empty set (0.00 sec)

    mysql> INSERT INTO mysql_query_rules (active, match_digest, destination_hostgroup, cache_ttl, username) VALUES (1, "^SELECT.*table-dani.*", 10, NULL, "root");
    Query OK, 1 row affected (0.00 sec)

    mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
    Query OK, 0 rows affected (0.00 sec)

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+----------+-----------------------+--------+-----------------------+
    | rule_id | username | match_digest          | active | destination_hostgroup |
    +---------+----------+-----------------------+--------+-----------------------+
    | 13      | root     | ^SELECT.*table-dani.* | 1      | 10                    |
    +---------+----------+-----------------------+--------+-----------------------+
    1 row in set (0.00 sec)

    And the rule is immediately replicated to the other pods, for example, pod 1:

    kubectl exec -it cluster1-proxysql-1 -- mysql -h127.0.0.1 -P6032 -uproxyadmin -padmin_password
    Defaulting container name to proxysql.
    Use 'kubectl describe pod/cluster1-proxysql-1 -n pxc' to see all of the containers in this pod.
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 1977
    Server version: 8.0.18 (ProxySQL Admin Module)

    mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
    +---------+----------+-----------------------+--------+-----------------------+
    | rule_id | username | match_digest          | active | destination_hostgroup |
    +---------+----------+-----------------------+--------+-----------------------+
    | 13      | root     | ^SELECT.*table-dani.* | 1      | 10                    |
    +---------+----------+-----------------------+--------+-----------------------+
    1 row in set (0.00 sec)

    In conclusion:

    - The Percona Kubernetes Operator for Percona XtraDB Cluster is aware of changes in the pods.
    - Modifications are made automatically in order to dispatch traffic to the new pods.
    - Query rules are not modified.
    - ProxySQL clustering is enabled in order to keep all the pods in sync.

  • MySQL Day Virtual Event: 5 Sessions in 1 Day
    MySQL Day Virtual Event

    Join us on July 29th, 2020 (8AM - 1PM PST) for a virtual event about why and how to upgrade to MySQL 8.0. Learn the key reasons you should upgrade to 8.0. Discover the best practices developed by our support team based on their experience working directly with customers. Get tips and techniques from our community evangelists. Find out why the University of California at Irvine chose to upgrade to 8.0, and learn about their process, their experience, and the improvements to their application performance.

    Register for the event and attend the sessions of your choice. Sessions run on the hour, so you can easily plan your agenda around your session interests. Each session will last approximately 30-40 minutes with a 15-minute Q&A, followed by a 5-15 minute break between sessions.

    About the sessions:

    Best Practices Tips | Upgrading to MySQL 8.0
    Upgrading to MySQL 8.0 is easy and straightforward. This webinar will go over the upgrade process, the requirements, and how to use the new MySQL Shell to determine potential problems before the upgrade. You will learn the dos and don'ts. After the webinar, you will know all you need to upgrade smoothly and effectively to MySQL 8.0.

    MySQL 8.0 Through the Eyes of the MySQL Support Team
    Working directly with customers, the support team answers many questions related to upgrading to MySQL 8.0. This session will cover some of the best practices developed by the support team, as well as dos and don'ts to consider before upgrading.

    Customer Insights from UC Irvine
    In this session, you will get a quick introduction to UCI: why they chose MySQL, the system environment, the number of servers, and some details about the architecture. The session will cover the reasons UCI chose to upgrade to 8.0, the upgrade process, and the upgrade experience.

    MySQL 8.0 Indexes, Histograms, and Other Ways to Speed Up Your Queries
    Indexes have been used for years to speed up database queries but are poorly understood and often overutilized. The system overhead of maintaining indexes can become burdensome. Plus, MySQL has several types of indexes, such as multi-valued and functional indexes, that can add great value if used properly. Histograms were introduced in MySQL 8.0 and are great for data with little churn, but they need to be properly established. This webinar will cover indexes, histograms, and some other "secret" techniques to make your MySQL instances really perform.

    Transforming Your Application with MySQL 8.0
    This session will focus on the business benefits of upgrading to MySQL 8.0. There are many new features that make MySQL 8.0 a must-upgrade version: ease of use, increased productivity, new SQL features, security and performance improvements, and the ability to use the same database for both SQL and NoSQL.

    https://www.mysql.com/news-and-events/web-seminars/upgrading-to-mysql-8-0-2020/

  • Running MySQL on selected NUMA Node(s)
    "Running MySQL on selected NUMA node(s)" looks pretty straightforward, but unfortunately it isn't. Recently, I was faced with a situation that demanded running MySQL on 2 (out of 4) NUMA nodes. Naturally, the first thing I tried was to restrict the CPU/core set using numactl --physcpubind, selecting only the CPUs/cores from the said NUMA nodes. MySQL was configured with innodb_numa_interleave=1, so I was expecting it to allocate memory from the said NUMA nodes only (as I had restricted the CPU/core usage).

    Surprise 1: MySQL uses numa_all_nodes_ptr->maskp, which means all the nodes are opted for, even though the CPU task set is limited to 2 NUMA nodes.

    Some searching pointed me to these two pull requests from Daniel Black:

    https://github.com/mysql/mysql-server/pull/104 (5.7)
    https://github.com/mysql/mysql-server/pull/138 (8.0)

    They propose switching to the more logical numa_get_mems_allowed(). As per the documentation, it should return the mask of nodes that are allowed to allocate memory for the said process:

    numa_get_mems_allowed() returns the mask of nodes from which the process is allowed to allocate memory in it's current cpuset context.

    So I decided to apply the patch and proceed.

    Surprise 2: Just applying the patch and relying on the CPU/core set didn't help. So I thought of trying the membind option.

    Surprise 3: Now the command looks like:

    numactl --physcpubind=<cpu-set-from-numa-node-0,1> --membind=0,1

    This time I fully expected memory to be allocated from the said NUMA nodes only, but it still wasn't. Memory was allocated from all 4 nodes. Some more documentation searching suggested that numa_all_nodes_ptr looks at the Mems_allowed field, as mentioned below:

    numa_all_nodes_ptr: The set of nodes to record is derived from /proc/self/status, field "Mems_allowed". The user should not alter this bitmask.

    And, as Alexey Kopytov pointed out in PR#138, numa_all_nodes_ptr and numa_get_mems_allowed read the same mask. This tends to suggest that either numa_get_mems_allowed is broken or the documentation needs to be updated. Just for completeness, I also tried numactl --interleave, but that didn't help either.

    Fact validation: I decided to try this with a simple program (outside MySQL) to validate the said fact.

    #include <iostream>
    #include <numa.h>
    #include <numaif.h>

    using namespace std;

    int main() {
      // Print the mask of all NUMA nodes, then the mask of nodes this process
      // is allowed to allocate memory from; under --membind they should differ.
      cout << *numa_all_nodes_ptr->maskp << endl;
      cout << *numa_get_mems_allowed()->maskp << endl;
    }

    numactl --membind=0-1 ./a.out
    15
    15

    It is pretty clear that both return the same mask value, when numa_get_mems_allowed should return only the memory-allowed nodes.

    Workaround: I desperately needed a solution, so I tried a simple workaround of manually feeding the mask (I will continue to follow up on the numactl behavior with the OS vendor). This approach finally worked, and now I can allocate memory from the selected NUMA nodes only.
    +const unsigned long numa_mask = 0x3;

     struct set_numa_interleave_t {
       set_numa_interleave_t() {
         if (srv_numa_interleave) {
           ib::info(ER_IB_MSG_47) << "Setting NUMA memory policy to"
                                     " MPOL_INTERLEAVE";
    -      if (set_mempolicy(MPOL_INTERLEAVE, numa_all_nodes_ptr->maskp,
    +      if (set_mempolicy(MPOL_INTERLEAVE, &numa_mask,
                             numa_all_nodes_ptr->size) != 0) {
             ib::warn(ER_IB_MSG_48) << "Failed to set NUMA memory"
                                       " policy to MPOL_INTERLEAVE: "

    @@ -1000,7 +1001,7 @@ static buf_chunk_t *buf_chunk_init(
     #ifdef HAVE_LIBNUMA
       if (srv_numa_interleave) {
         int st = mbind(chunk->mem, chunk->mem_size(), MPOL_INTERLEAVE,
    -                   numa_all_nodes_ptr->maskp, numa_all_nodes_ptr->size,
    +                   &numa_mask, numa_all_nodes_ptr->size,
                        MPOL_MF_MOVE);
         if (st != 0) {
           ib::warn(ER_IB_MSG_54) << "Failed to set NUMA memory policy of"

    (Of course, this needs a rebuild from source code and is not an option for binary/package users. Well, there is one: check the following section.)

    But then why didn't you use ... ?

    Naturally, most of you may suggest that this could be avoided by toggling innodb_numa_interleave back to OFF and using membind. Of course that approach works, but it is slightly different: with membind, all allocated memory is bound by the said restriction, whereas innodb_numa_interleave applies only to buffer pool allocation. It may serve a specific purpose but is not directly comparable. Checking the effect of complete interleaving vs. innodb_numa_interleave has been on my to-do list.

    Conclusion

    Balanced distribution across NUMA nodes has multiple aspects, including core selection, memory allocation, thread allocation (spread equally across the selected NUMA nodes), and so on. There are lots of exciting and surprising things to explore. If you have more questions/queries, do let me know; I will try to answer them.
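    Not part of the original investigation, but a quick sanity check that may be worth adding before digging into masks: confirm from the SQL side that the running binary was built with NUMA support and that the interleave option is actually on. A minimal, hedged check could be:

    -- On builds with NUMA support this should return the variable, with value ON
    -- when interleaving is enabled. The variable is not dynamic, so it has to be
    -- set in my.cnf (or on the command line) before the server starts.
    SHOW GLOBAL VARIABLES LIKE 'innodb_numa_interleave';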

  • MySQL Terminology Updates
    It's been 20 years since MySQL Replication was introduced in MySQL 3.23.15 (released in May 2000). Since then, virtually every MySQL database deployment in production has been using replication in order to achieve high availability, disaster recovery, read scale-out, and various other purposes. …

  • Preventing MySQL Error 1040: Too Many Connections
    One of the most common errors encountered in the MySQL world at large is the infamous Error 1040:

    ERROR 1040 (00000): Too many connections

    What this means in practical terms is that a MySQL instance has reached its maximum allowable limit for client connections. Until connections are closed, no new connection will be accepted by the server. I'd like to discuss some practical advice for preventing this situation, or, if you find yourself in it, how to recover.

    Accurately Tune the max_connections Parameter

    This setting defines the maximum number of connections that a MySQL instance will accept. Considerations on "why" you would even want a maximum number of connections are based on the resources available to the server and on application usage patterns. Allowing uncontrolled connections can crash a server, which may be considered "worse" than preventing further connections. max_connections is a value designed to protect your server, not to fix problems related to whatever is hijacking the connections.

    Each connection to the server consumes a fixed amount of overhead, for things like the thread managing the connection and the memory used to manage it, as well as variable resources (for instance, memory used to create an in-memory table). It is important to measure the application's resource patterns and find the point at which exceeding that number of connections becomes dangerous.

    Percona Monitoring and Management (PMM) can help you find these values. Look at the memory usage patterns and threads running, and correlate these with the number of connections. PMM can also show you spikes in connection activity, letting you know how close to the threshold you are coming. Tune accordingly, keeping in mind the resource constraints of the server. (The PMM graph in the original post shows a server with a very steady connection pattern and a lot of room between Max Used and Max Connections.)
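    As a small, hedged companion to the advice above (plain MySQL statements, with an illustrative value only), you can compare the connection high-water mark against the configured limit and, if needed, raise the limit without a restart:

    -- How close has the server come to the limit since it started?
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';

    -- What is the current limit?
    SELECT @@global.max_connections;

    -- Raise the limit at runtime; 500 is purely illustrative, size it to your server.
    SET GLOBAL max_connections = 500;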
    Avoiding Common Scenarios Resulting in Overuse of Connections

    Having worked on the Percona Managed Services team for years, I've had the first-hand opportunity to see where many businesses get into "trouble" from opening too many connections. Conventional wisdom says that it will usually be a bad code push, where an application behaves badly by not closing its open connections or by opening too many of them too quickly for frivolous reasons.

    There are other scenarios I've seen that can cause this even when the application is performing "as expected". Consider an application stack that utilizes a cache. Over time the application has scaled up and grown. Now consider the behavior under load if the cache is completely cleared: the workers in the application might try to repopulate the cache en masse, generating a spike that will overwhelm the server.

    It is important to consider the systems that use the MySQL server and prevent these sorts of edge-case behaviors, or they might lead to problems. If possible, it is a good idea to trap errors in the application, and if you run into "Too many connections", have the application back off and sleep for a bit before retrying, to reduce the pressure on the connection pool.

    Safeguard Yourself From Being Locked Out

    MySQL actually gives you some "breathing" room to avoid being locked out. In versions 5.x the SUPER user has a +1 always-available connection, and in versions 8.x there is a +1 for users with the CONNECTION_ADMIN privilege. However, many systems have lax privilege assignments, and maybe an application user is granted these permissions and consumes this extra emergency connection.

    It is a good idea to audit users and be sure that only true administrators have access to these privileges, so that if a server does consume all of its available connections, an administrator can step in and take action. There are other benefits to being strict on permissions. Remember that the minimum-privilege policy is often a best practice for good reason, and not always just for "security".

    MySQL 8.0.14+ also allows us to specify admin_address and admin_port to provide a completely separate endpoint, bypassing the primary endpoint and establishing a dedicated admin connection. If you're running a lower version but are using Percona Server for MySQL, you have the option of using extra_port and extra_max_connections to achieve another way of connecting.

    If you are able to log in with an admin account, you may be able to kill connections, use pt-kill to kill open connections, adjust timeouts, ban offending accounts, or raise max_connections to free up the server. If you are unable to log in, you may try to adjust the max_connections value on the fly as a last resort. Please see Too many connections? No problem!

    Use a Proxy

    Another way to alleviate connection issues (or to move the issue to a different layer in the stack) is to adopt the use of a proxy server, such as ProxySQL, to handle multiplexing. See Multiplexing (Mux) in ProxySQL: Use Case.

    Limits Per User

    Another variable MySQL can use to determine whether a connection should be allowed is max_user_connections. Setting this value puts a limit on the number of connections for any given user. If you have a small number of application users that can tolerate some limit on their connection usage, you can set this value appropriately to keep the server from reaching its total connection maximum.

    For instance, if we know we have 3 application users and we expect each of them never to exceed 300 connections, we could set max_user_connections to 300. Between the 3 application users, only a total of 900 connections would be allowed. If max_connections was set to 1000, we'd still have 100 open slots.

    Another approach in the same vein, and even more granular, is to limit connections per user account. To achieve this you can create an account like this:

    CREATE USER 'user'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 10;

    It is a good idea to limit connections for tools/applications/monitoring that are newly introduced into your environment, and to make sure they do not "accidentally" consume too many connections.

    Close Unused Connections

    MySQL provides the wait_timeout variable. If you observe connections climbing progressively over time, rather than in a spike (and your application can handle it), you may want to reduce this variable from its default of 28800 seconds to something more reasonable. This essentially asks the server to close sleeping connections.

    These are just a few considerations when dealing with "Too many connections". I hope they help you. You may also consider further reading on the topic in this previous Percona blog post, MySQL Error: Too many connections.
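    To round out the per-user-limit and timeout suggestions above, here is a hedged sketch of the corresponding statements; the account name and the numbers are illustrative only:

    -- Cap every account at 300 concurrent connections by default.
    SET GLOBAL max_user_connections = 300;

    -- Or cap a single (hypothetical) account more tightly.
    ALTER USER 'app_user'@'%' WITH MAX_USER_CONNECTIONS 10;

    -- Close idle connections after 10 minutes instead of the default 28800 seconds.
    SET GLOBAL wait_timeout = 600;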