
The table has about one hundred million rows; sometimes the I/O bps is about 150 and the IOPS about 4K.

  1. os version: CentOS Linux 7
  2. MySQL version: docker mysql:5.6
server_id=3310
skip-host-cache
skip-name-resolve
max_allowed_packet=20G
innodb_log_file_size=1G
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=5120M
expire-logs-days=7
log_bin=webser
binlog_format=ROW
back_log=1024
slow_query_log
slow_query_log_file=slow-log
tmpdir=/var/log/mysql
sync_binlog=1000
  3. the create table statement
CREATE TABLE `device_record` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `os` tinyint(9) DEFAULT NULL,
  `uid` int(11) DEFAULT '0',
  `idfa` varchar(50) DEFAULT NULL,
  `adv` varchar(8) DEFAULT NULL,
  `oaid` varchar(100) DEFAULT NULL,
  `appId` tinyint(4) DEFAULT NULL,
  `agent` varchar(100) DEFAULT NULL,
  `channel` varchar(20) DEFAULT NULL,
  `callback` varchar(1500) DEFAULT NULL,
  `activeAt` datetime DEFAULT NULL,
  `chargeId` int(11) DEFAULT '0',
  `createAt` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `idfa_record_index_oaid` (`oaid`),
  UNIQUE KEY `index_record_index_agent` (`agent`) USING BTREE,
  UNIQUE KEY `idfa_record_index_idfa_appId` (`idfa`) USING BTREE,
  KEY `index_record_index_uid` (`uid`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1160240883 DEFAULT CHARSET=utf8mb4
  4. insert statement
@Insert(
      "insert into idfa_record (os,idfa,oaid,appId,agent,channel,callback,adv,createAt) "
          + "values(#{os},#{idfa},#{oaid},#{appId},#{agent},#{channel},#{callback},#{adv},now()) on duplicate key "
          + "update createAt=if(uid<=0,now(),createAt),activeAt=if(uid<=0 and channel != #{channel},null,activeAt),channel=if(uid<=0,#{channel},channel),"
          + "adv=if(uid<=0,#{adv},adv),callback=if(uid<=0,#{callback},callback),appId=if(uid<=0,#{appId},appId)")
  • MySQL <> SQL Server - please correct your tags. Commented Apr 24, 2022 at 1:32
  • Could you post A) TEXT of two INSERTs from your Slow Query Log? and B) EXPLAIN INSERT .... so we can see what the OPTIMIZER has done with your insert request. Commented Apr 25, 2022 at 14:32

1 Answer


100M rows, but the auto_increment is already at 1160M? This is quite possible, but...

  • Most importantly, the table is more than halfway to overflowing INT SIGNED (max 2,147,483,647).
  • Are you doing inserts that "burn" ids?
  • Does the existence of 4 Unique keys cause many rows to be skipped?
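One quick way to gauge how many ids have been burned is to compare the largest stored id with the table's counter. A sketch against the question's table (the ALTER at the end is only a hypothetical escape hatch, not something to run blindly on a 100M-row table):

```sql
-- Highest id that actually exists as a row:
SELECT MAX(id) AS max_id FROM device_record;

-- Next id the counter will hand out:
SELECT AUTO_INCREMENT
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = DATABASE()
   AND TABLE_NAME = 'device_record';

-- A large gap between the two means INSERT ... ON DUPLICATE KEY UPDATE
-- (or rolled-back inserts) is consuming ids without adding rows.
-- If the counter is approaching 2147483647, a migration such as
--   ALTER TABLE device_record
--     MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
-- would buy headroom, at the cost of a long table rebuild.
```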

This seems excessive: max_allowed_packet=20G.

  • How much RAM is available?
  • Does swapping occur?
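A memory sanity check can be done from inside MySQL; this sketch just reads back the settings the question's config implies, to compare against the server's physical RAM:

```sql
-- Values in MB, to compare against available RAM on the host:
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb,
       @@max_allowed_packet      / 1024 / 1024 AS max_packet_mb,
       @@max_connections         AS max_conn;

-- Note: max_allowed_packet is capped at 1G, so the configured 20G is
-- silently clamped. Swapping itself is visible on the host (free -m,
-- vmstat) rather than from inside MySQL.
```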

How many rows are inserted per second? What is "bps"? (I am pondering why there are 4K writes. I would expect about 2 IOPS per Unique key per INSERT, but that does not add up to 4K unless you have about 500 Inserts/sec.)

Are the Inserts coming from different clients? (This feeds into "burning" ids, sluggishness, etc.)
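The "burning" above is easy to reproduce in isolation. A hypothetical minimal table (not from the question) shows how an upsert that hits a duplicate still advances the counter under the default innodb_autoinc_lock_mode:

```sql
CREATE TABLE t (
  id INT AUTO_INCREMENT PRIMARY KEY,
  k  VARCHAR(10) UNIQUE,
  n  INT
);

INSERT INTO t (k, n) VALUES ('a', 1)
  ON DUPLICATE KEY UPDATE n = n + 1;  -- new row, gets id 1

INSERT INTO t (k, n) VALUES ('a', 1)
  ON DUPLICATE KEY UPDATE n = n + 1;  -- duplicate: no new row, but id 2 is reserved

INSERT INTO t (k, n) VALUES ('b', 1)
  ON DUPLICATE KEY UPDATE n = n + 1;  -- new row gets id 3, not 2
```

With four unique keys and many clients retrying the same devices, a table can burn ids far faster than it gains rows, which would explain AUTO_INCREMENT=1160M over 100M actual rows.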
