Common MySQL script notes

Keywords: MySQL SQL Database mysqldump

-- select * from information_schema.processlist 
show processlist;        -- view current client connections
show full processlist;   -- view connections with the full statement text
kill id;                 -- kill a connection by its id
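A hedged sketch, based on the information_schema.processlist columns, for spotting connections that have been running the same statement for a long time before deciding which id to kill (the 60-second threshold is only an example):

select id, user, host, db, command, time, state, info
from information_schema.processlist
where command <> 'Sleep' and time > 60   -- skip idle connections; threshold is illustrative
order by time desc;
-- then: kill id;   -- kill the offending connection by its id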
/**
     View connection status
   Connections: the number of connection attempts (successful or not) made to the MySQL server.
   Max_used_connections: the maximum number of connections that have been in use simultaneously since the server started.
   Threads_connected: the number of currently open connections.
 */
show status like '%Connections%';

show variables like '%connect%'; -- max_connections, maximum number of connections

-- Reset the failed-connection counter tracked by max_connect_errors
FLUSH HOSTS;

/**
1. MySQL Query Status
Com_select;Com_update;Com_insert;Com_delete;Com_change_db

2. Query cache size: query_cache_size(variables)
Query Cache Maximum Query Data Set Size: query_cache_limit(variables);
Number of queries inserted into the cache: Qcache_inserts(status)
Query cache hit ratio: Qcache_hits/(Qcache_hits+Qcache_inserts)*100% (status)

3. Index cache hit rate
 Index cache size: key_buffer_size (variables)
Index cache hit ratio: (1 - Key_reads/Key_read_requests)*100% (status)

4. Number of concurrent connections
 Maximum number of permissible connections: max_connections(variables)
Actual maximum number of connections: max_used_connections(status)
Current number of connections: Threads_connected(status)
Number of active connections: Threads_running(status)
Number of cached threads: Threads_cached(status)

5. Traffic statistics
Bytes_received ,Bytes_sent(status)

6. Connection counters
 Total connection attempts: Connections(status)
Threads created to handle connections: Threads_created(status)
(Both are cumulative counters; per-second rates come from the difference between two samples.)

7. Table Locking Statistics
 Number of table locks granted immediately: Table_locks_immediate(status)
Number of table locks that had to wait: Table_locks_waited(status)
*/
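As a quick illustration of how the ratios above are computed, the following sketch reads the relevant status and variable values; the arithmetic itself is done by hand or in the monitoring application:

show global status like 'Qcache%';      -- Qcache_hits, Qcache_inserts, ...
show global status like 'Key_read%';    -- Key_reads, Key_read_requests
show variables like 'query_cache%';     -- query_cache_size, query_cache_limit
show variables like 'key_buffer_size';
-- e.g. query cache hit ratio = Qcache_hits / (Qcache_hits + Qcache_inserts) * 100%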

Use EXPLAIN to analyze a statement for optimization
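For example (the table and column names here are purely illustrative):

explain select * from sens_sensitive where createtime > '2014-01-01 00:00:00';
-- check the type, possible_keys, key and rows columns to see whether an index is used
-- and how many rows the server expects to examine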

Create a user and grant privileges: privileges is the list of operations the user may perform, such as SELECT, INSERT, UPDATE, or ALL for all of them; databasename.tablename names the database and table the grant applies to, or *.* to cover all databases and tables.
GRANT privileges ON databasename.tablename TO 'username'@'host'; 

A user authorized by the above command cannot grant privileges to other users. If the user should also be able to grant privileges, use the following command instead: 
GRANT privileges ON databasename.tablename TO 'username'@'host' WITH GRANT OPTION; 

GRANT ALL ON xz.* TO 'xz'@'192.168.%' WITH GRANT OPTION;
GRANT ALL ON xz.* TO 'xz'@'localhost' WITH GRANT OPTION;

-- Set passwords
SET PASSWORD FOR 'xz'@'192.168.18.%' = PASSWORD("xz"); 
SET PASSWORD FOR 'xz'@'localhost' = PASSWORD("xz"); 

flush privileges;
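To confirm what was granted, list the privileges held by the user:

show grants for 'xz'@'localhost';
show grants for 'xz'@'192.168.%';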

my.cnf
max_connections
 Maximum number of concurrent connections. When the number of concurrent connections reaches this setting, new connections are rejected. If the server proves able to handle more concurrency, raise this value; note that this allows a higher load (CPU/IO/memory) on the server.
Default: 100, reference setting: 900

back_log
 Maximum length of the TCP/IP connection queue. When MySQL receives a large number of connection requests at the same time, it tries to handle them with existing threads; if it cannot, it queues the connections and then starts new threads. This is normally fast, but under heavy concurrency this value needs to be raised, otherwise new connection requests are rejected. During one load test the clients returned many "Lost connection to MySQL" errors because the default back_log was too small. Increasing this value raises CPU load and consumes more memory.
Default: 50, reference setting: 200

skip-name-resolve
 Turn off reverse DNS resolution. By default MySQL performs a reverse DNS lookup for every client connection; it is strongly recommended to turn this off by adding a skip-name-resolve line to my.cnf.

innodb_file_per_table
 If you use InnoDB, it is strongly recommended to enable this setting; otherwise all InnoDB tables share a single tablespace file, and that file never shrinks when table data is deleted, so over time it fills the disk. This is a long-standing MySQL issue: http://bugs.mysql.com/bug.php?id=1341

max_connect_errors
 When a client repeatedly fails to complete a connection to the server (for example it times out after connect_timeout), the server records an error against that client host; once the error count reaches max_connect_errors, the host is blocked from connecting until FLUSH HOSTS is executed. Absolutely a landmine!
Default: 10, reference setting: 1844674407370954751

connect_timeout
 Number of seconds before a connection attempt times out.
Default: 5, reference setting: 15
  
slave_net_timeout
 Used in master-slave replication. When the network between master and slave is interrupted but neither side can detect it (for example because of a firewall or routing problem), the slave waits slave_net_timeout seconds before deciding the network has failed, then reconnects and catches up on the data from the master. The default is 3600 seconds; after an hour it is usually far too late.
Default: 3600, reference setting: 30

wait_timeout
 Number of seconds before an idle connection expires. When a connection has been in SLEEP state for more than wait_timeout seconds, the server closes it. Setting this too high can leave many SLEEP connections holding system resources; setting it too low leads to "MySQL server has gone away" errors.
Default: 28800, reference setting: 288000

key_buffer
 Index (key) cache for MyISAM indexes. If the slow query log shows a large number of slow queries, try raising this value; it costs more memory.

table_cache
 Cache for frequently accessed (open) tables. Increasing this value generally reduces disk IO, but correspondingly uses more memory.
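Putting the reference settings above together, a my.cnf sketch for the [mysqld] section might look like the following; the values are the reference settings quoted above and should be tuned to your own hardware (key_buffer and table_cache are left out because no reference values are given above):

[mysqld]
max_connections       = 900
back_log              = 200
skip-name-resolve
innodb_file_per_table = 1
max_connect_errors    = 1844674407370954751
connect_timeout       = 15
slave_net_timeout     = 30
wait_timeout          = 288000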


Export sql scripts
 mysqldump -u username -p databasename > output_file
mysqldump -u root -p test > c:/a.sql

--no-create-info,-t 
Export data only, without adding the CREATE TABLE statement.

--no-data,-d 
No data is exported, only database table structure is exported.

--opt 
This is a shortcut option, equivalent to specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset at the same time. It lets mysqldump export data quickly and produces a dump that can be re-imported quickly. It is enabled by default and can be disabled with --skip-opt. Note that if mysqldump runs without --quick or --opt, the entire result set is loaded into memory, which can cause problems when exporting large databases.

--quick,-q 
This option is useful when exporting large tables; it forces mysqldump to retrieve rows from the server one at a time instead of buffering the whole result set in memory.

--routines,-R 
Export stored procedures and custom functions.
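A few example invocations combining these options (database and file names follow the earlier examples; --single-transaction is an extra option, not mentioned above, that gives a consistent snapshot for InnoDB tables):

mysqldump -u root -p -d test > c:/test_schema.sql                      # structure only
mysqldump -u root -p -t test > c:/test_data.sql                        # data only
mysqldump -u root -p -R --single-transaction test > c:/test_full.sql   # include routines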



Import sql scripts 
 mysql -u username -p databasename < dump_file
mysql -u root -p test < c:/a.sql
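Alternatively, a dump file can be loaded from inside the mysql client with the source command:

mysql -u root -p
mysql> use test;
mysql> source c:/a.sql;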




1. View only the contents of the first binlog file 
show binlog events;

2. View the contents of the specified binlog file 
show binlog events in 'mysql-bin.000002' from 123 limit 1,2;

3. View the binlog file currently being written 
show master status\G

4. Get a list of binlog files 
show binary logs;


5. Export sql from the binlog
mysqlbinlog binlog.001128 binlog.001129 binlog.001130 --start-datetime='2015-12-09 00:00:00' --stop-datetime='2015-12-18 18:00:00' --database=xz > /backup/xz_log.sql
(alternatively, pipe the mysqlbinlog output into mysql -uroot -p instead of redirecting it to a file)

6. Filter sql
  less xz_log.sql | grep -i -E 'insert|update|delete' -C2 | grep -i dbk_hotcommon
  (or filter directly while exporting)
  mysqlbinlog binlog.001130 --start-datetime='2015-12-09 00:00:00' --stop-datetime='2015-12-18 18:00:00' --database=xz | grep -i -w insert -C2 | grep -i -w dbk_hotcommon > /backup/xz_log_insert.sql

Partition table maintenance script

-- delete a partition
alter table sens_sensitive drop partition pm;
-- Add a partition
alter table sens_sensitive add partition (partition p6 values less than (2,'2014-06-01 00:00:00'));

-- Query partition status
select table_name,partition_name,partition_description from information_schema.partitions where table_schema = 'fwpt';

-- Rebuild partitions: this has the same effect as dropping all records stored in the partitions and re-inserting them; it can be used to clean up partition fragmentation. 
alter table t1 rebuild partition p0, p1;

-- Analyze partition: read and store the key distribution of the partition.
alter table t1 analyze partition p3;

-- Repair partition: repair damaged partitions.  
alter table t1 repair partition p0,p1;

-- Check partition: checks a partition in much the same way as CHECK TABLE is used on a non-partitioned table.  
alter table sens_sensitive check partition p1_201312;

-- Optimize partition: if a large number of rows have been deleted from a partition, or if rows with variable-length columns (VARCHAR, BLOB or TEXT) have been modified heavily, run "alter table ... optimize partition" to reclaim unused space and defragment the partition data file. 
-- Running "optimize partition" on a given partition is equivalent to running check partition, analyze partition and repair partition on that partition.
alter table sens_sensitive optimize partition p1_201312,p1_201401;
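To see which partitions are large enough to be worth rebuilding or optimizing, a sketch like the following (against the fwpt schema used above) lists per-partition row counts and sizes:

select partition_name, partition_description, table_rows, data_length, index_length
from information_schema.partitions
where table_schema = 'fwpt' and table_name = 'sens_sensitive';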

Automated partition extension: stored procedure script

-- delimiter $$ -- change the statement delimiter (the default is ;) as needed
drop procedure if exists create_partition;  -- $$
create procedure create_partition (in databasename varchar(50),in tablename varchar(50))  
l_end:begin 
        declare max_partition_description varchar(255) default 0;  -- holds the partition expression string
        declare max_partition_description_date varchar(255) default 0;  -- holds the date part of the expression
        declare max_partition_description_organid varchar(255) default 0;  -- holds the organization part
        declare p_name varchar(255) default 0; -- partition name
        declare p_description varchar(255) default 0; -- partition expression
        declare partition_name_delete varchar(100); -- partition to delete (its organization no longer exists)
        declare i int default 1; -- loop variable
    declare stop int default 0; -- cursor end flag

        -- Cursor over invalid partitions whose organization no longer exists
        declare cur_delete cursor for (select partition_name from information_schema.partitions where table_schema = databasename and table_name = tablename and not exists(select 1 from sys_department where departmentlevel = 1 and partition_description like concat(departmentid,',%') limit 1 ));
        -- Cursor over the newest partition of each organization
    declare cur cursor for (select p.* from (select max(partition_description) from information_schema.partitions where table_schema = databasename and table_name = tablename and exists ( select 1 from sys_department where departmentlevel = 1 and partition_description like concat(departmentid, ',%') limit 1)group by substr(partition_description,1,position(',' in partition_description)) union select concat(departmentid,',\'1999-09-09 00:00:00\'')  from sys_department where departmentlevel = 1 and not exists(select 1 from information_schema.partitions where partition_description like concat(departmentid, ',%') and  table_schema = databasename and table_name = tablename limit 1)) p );

    declare continue handler for sqlstate '02000' set stop=1; -- not-found handler, ends the loops

        /**   Clean up partitions belonging to organizations that no longer exist **/
    open cur_delete; -- Open cursor
    fetch cur_delete into partition_name_delete; -- The first partition to be deleted
    while stop <> 1 do
                if partition_name_delete is not null then
           set @del_sql=concat('alter table ',tablename,' drop partition ',partition_name_delete); -- build the drop partition statement
           select @del_sql; -- print the drop statement
                     /** Execute deletion**/
           prepare stmt1 from @del_sql;
           execute stmt1;
           deallocate prepare stmt1;
                end if;
        fetch cur_delete into partition_name_delete; -- next partition to delete
    end while;
    close cur_delete;


        /**** Extend partitions ****/
    set stop = 0;
    open cur; -- Open cursor
    fetch cur into max_partition_description; -- fetch the first row
    while stop <> 1 do

                -- organization id part
                set max_partition_description_organid = substr(max_partition_description,1,position("," in max_partition_description) - 1);
                -- date part of the partition expression
                set max_partition_description_date = substr( max_partition_description,(position("," in max_partition_description)+2),19);

                -- Only create new partitions when less than three months of partitions remain.
                if max_partition_description_date > adddate(now(), interval 3 month) then  
                    select concat("organid ",max_partition_description_organid," partition table is enough...") as "***info*****************************";
                else
                    set i = 0;
                    while i <= 6 do -- extend six months of partitions
                        if max_partition_description_date < now() then
                            -- the newest partition is already older than now; start partitioning from the current month
                            set max_partition_description_date = substr(now(),1,10);
                        end if;

                        -- partition name: p + organization id + yyyymm (the year-month is the upper bound; the actual data belongs to the previous month)
                        set p_name = concat('p',max_partition_description_organid,'_',substr(replace(max_partition_description_date, '-', ''),1,6)); 

                        -- advance the date by one month
                        set max_partition_description_date = adddate(max_partition_description_date, interval 1 month);

                        -- build the new partition expression (the upper bound is the first day of the new month)
                        set p_description = concat(max_partition_description_organid,',\'',substr(max_partition_description_date,1,8),'01 00:00:00\'');
                        set @s=concat('alter table ',tablename,' add partition  (partition ',p_name,' values less than (',p_description,'))');  
                        select @s as "***info*****************************";
                        prepare stmt2 from @s;  
                        execute stmt2;  
                        deallocate prepare stmt2;  
                        set i = i + 1 ;  
                    end while;
                end if; 

        fetch cur into max_partition_description; -- fetch the next row
    end while;
    close cur;


end l_end;  -- $$
-- delimiter ;
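Once the procedure is created (with the delimiter lines uncommented as needed), it can be inspected and run once by hand before scheduling it:

show create procedure create_partition\G
call create_partition('fwpt', 'sens_sensitive');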

Automated partition extension: event script

DELIMITER || 
CREATE EVENT auto_set_partitions ON SCHEDULE EVERY 15 DAY DO -- execute once every 15 days

BEGIN
    CALL create_partition ( 'fwpt' , 'sens_sensitive'); -- Sensitive Information Data Table
  CALL create_partition ( 'fwpt' , 'sens_sensitive_wb'); -- Microblog Information Data Table
  CALL create_partition ( 'fwpt' , 'issue_yq_info_doc'); -- Topic related documentation table
  CALL create_partition ( 'fwpt' , 'issue_wb_info_doc'); -- Topic related documentation table
END ||
DELIMITER ;
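The event only fires if the event scheduler is enabled; a quick check (the fwpt schema is assumed to be where the event was created):

set global event_scheduler = on;          -- or set event_scheduler=ON in my.cnf
show variables like 'event_scheduler';
show events from fwpt;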
