MySQL Multi-threaded Replication

The general architecture uses 'slave' worker threads on the slave machine to apply replicated transactions to the slave's databases.


The key word here is databases. This design gives no benefit with a single database, because only one slave worker thread applies transactions per database, to keep them in order. So if, for example, you have three worker threads and a single database containing all your tables, two of the threads will sit idle and you will get no advantage from multi-threaded replication.

The issue with this is that a long-running transaction on one database may take longer to apply than younger transactions on another database, which can be applied first. This can lead to a situation where the slave is in a state that the master never was in. Foreign keys may also be an issue, as transactions involving them need to be applied together, or application code will have to cater for this and ensure transactions are applied in the correct order.
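As an illustration of the foreign-key point (the schema and database names here are hypothetical, not from any real application), InnoDB allows a foreign key that references a table in another database, which ties the two schemas' transaction streams together:

```sql
-- Hypothetical example: if app_customers and app_orders are applied by
-- different worker threads, an INSERT into orders must not be applied
-- before the customer row it references exists on the slave.
CREATE TABLE app_orders.orders (
    id          INT PRIMARY KEY,
    customer_id INT NOT NULL,
    FOREIGN KEY (customer_id)
        REFERENCES app_customers.customers (id)
) ENGINE=InnoDB;
```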

Taking advantage of this feature may involve redesigning your instance, as it only helps an instance with multiple databases. It will also make use of multi-core CPUs, which should be an added performance benefit. One client had a single database with around 50 tables; because replication lag was a major concern for them, they opted to redesign the instance and split it into several databases, which helped the situation.
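As a sketch of that kind of split (the database and table names below are made up for illustration, not the client's actual schema), moving tables into their own schemas is mostly a matter of RENAME TABLE, which works across databases on the same instance:

```sql
-- Hypothetical redesign: give write-heavy tables their own databases
-- so each one can be applied by its own slave worker thread.
CREATE DATABASE app_orders;
CREATE DATABASE app_sessions;
RENAME TABLE app.orders   TO app_orders.orders,
             app.sessions TO app_sessions.sessions;
```

The application then needs its queries (or default schema) updated to match, which is the real cost of the redesign.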

Testing
I did a test with multiple databases, each containing one table. I then set up a script per database that inserted ten thousand rows into its table, and ran all 4 scripts together. Immediately, four slave worker threads started processing, and they took approximately the same time to complete. So the benefit is this: if you split the application across many databases, each holding a table (unlikely), the slave will stay as up to date as possible; but for large loads/transactions within a single database there is no benefit.
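The load scripts themselves were trivial. A minimal sketch of the idea (the database names, table name `t1`, and the helper below are illustrative, not the exact scripts I used):

```shell
# gen_inserts DB ROWS: emit a stream of INSERT statements for one
# database, suitable for piping into the mysql client.
gen_inserts() {
    db=$1
    rows=$2
    echo "USE $db;"
    i=1
    while [ "$i" -le "$rows" ]; do
        echo "INSERT INTO t1 (id) VALUES ($i);"
        i=$((i + 1))
    done
}

# Run one load per database in parallel, so each slave worker thread
# gets its own stream of transactions:
# for d in db1 db2 db3 db4; do gen_inserts "$d" 10000 | mysql & done
# wait
```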

Found a great link with more detail:

The setup was a little more complicated. There are a few new parameters:
master_info_repository = TABLE
relay_log_info_repository = TABLE
slave_parallel_workers = <number required>

Build Notes
Build a master and slave and ensure replication is working.  Both the master and the slave contain the following parameters:
master-info-repository  = TABLE
relay-log-info-repository = TABLE
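In my.cnf terms that amounts to something like the following (the worker count of 5 matches the test session below; tune it to your database and core count). Setting slave-parallel-workers in the file makes it persist across restarts, whereas below it is set dynamically instead:

```ini
[mysqld]
master-info-repository    = TABLE
relay-log-info-repository = TABLE
slave-parallel-workers    = 5
```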

[localhost] {root} ((none)) > STOP SLAVE;
Query OK, 0 rows affected (0.01 sec)

[localhost] {root} ((none)) > set global slave_parallel_workers=5;
Query OK, 0 rows affected (0.00 sec)

[localhost] {root} ((none)) > START SLAVE;
Query OK, 0 rows affected, 1 warning (0.02 sec)

[localhost] {root} ((none)) > SHOW WARNINGS;
+-------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+-------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Note  | 1753 | slave_transaction_retries is not supported in multi-threaded slave mode. In the event of a transient failure, the slave will not retry the transaction and will stop. |
+-------+------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

[localhost] {root} ((none)) > show global variables like "%worker%";
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| slave_parallel_workers | 5     |
+------------------------+-------+
1 row in set (0.00 sec)

[localhost] {root} ((none)) > SELECT * FROM mysql.slave_worker_info;
+----+----------------+---------------+-----------------+----------------+---------------------------+--------------------------+----------------------------+---------------------------+------------------+-----------------------+------------------------------------------------------------------+
| Id | Relay_log_name | Relay_log_pos | Master_log_name | Master_log_pos | Checkpoint_relay_log_name | Checkpoint_relay_log_pos | Checkpoint_master_log_name | Checkpoint_master_log_pos | Checkpoint_seqno | Checkpoint_group_size | Checkpoint_group_bitmap                                          |
+----+----------------+---------------+-----------------+----------------+---------------------------+--------------------------+----------------------------+---------------------------+------------------+-----------------------+------------------------------------------------------------------+
|  1 |                |             0 |                 |              0 |                           |                        0 |                            |                         0 |                0 |                    64 |                                                                  |
|  2 |                |             0 |                 |              0 |                           |                        0 |                            |                         0 |                0 |                    64 |                                                                  |
|  3 |                |             0 |                 |              0 |                           |                        0 |                            |                         0 |                0 |                    64 |                                                                  |
|  4 |                |             0 |                 |              0 |                           |                        0 |                            |                         0 |                0 |                    64 |                                                                  |
|  5 |                |             0 |                 |              0 |                           |                        0 |                            |                         0 |                0 |                    64 |                                                                  |
+----+----------------+---------------+-----------------+----------------+---------------------------+--------------------------+----------------------------+---------------------------+------------------+-----------------------+------------------------------------------------------------------+
5 rows in set (0.00 sec)

I then wrote scripts to insert rows into 5 separate tables in 5 separate databases. While this was happening I kept querying mysql.slave_worker_info, and all 5 workers started working, using 5 separate threads.
