Category Archives: MySql

MySQL DBA Interview Questions & Answers

Coming soon.

Please subscribe to this blog under the Follow Blog Via section so you get an email when I post these important MySQL DBA questions and answers, which are crucial for any DBA, whether you are learning or experienced.

In my coming posts I will discuss more on DBA job duties, daily tasks, how to resolve issues quickly, how to identify bottlenecks, upgrading servers, and so on. If you are truly interested in these topics, or want to get hands-on with them, please subscribe.

For more info on MySQL, please click the links under the Database section; I will update them with more info soon.

Thanks, and have a great day.

Commonly Used Daily Linux Commands

1) How to replace a long path string in vi/vim without escaping the slashes. Just use a hash (#) as the substitution delimiter and put the strings in between the hashes; it does the job. Add g after the last delimiter to replace the string everywhere on each line.

%s#/this/folder/replaces/#/with/this/one/#g
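For comparison, the same substitution with the default slash delimiter needs every slash escaped, which is exactly what the hash trick avoids:

%s/\/this\/folder\/replaces\//\/with\/this\/one\//g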

2) Search for a word and delete to the end of the line. For example, in an XML file you sometimes need to clean up thousands of lines matching a word or a <string> spread across the file, and you can't simply fix lines like the example below in MS Word or other editors. In this example we only want to delete from <pcode to the end of each matching line while preserving the trailing "/>". The simple command after the example does the job.

<xml=? >
<scalar variable   <pcode  value=1000  test1 test3 test4> />
<vector variable   <pcode  value=1001  test5 test6 test7> />
<stellar value       <pcode  value=1002  test8 test9 test10> />
</xml>

%s/<pcode.*/\/>/

3) How to find unwanted files when your clean folder has been messed up. I had a situation where junk files had been added to my MySQL data directory, which should contain only the database table files, and I used the command below to list every file that is not a MySQL table file (.ibd, .MYD, .MYI, .frm):

ls -1 | grep -vE '\.(ibd|MYD|MYI|frm)$'
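An equivalent using find, as a small sketch (drop -maxdepth 1 if you also want to recurse into subdirectories):

find . -maxdepth 1 -type f ! -name '*.ibd' ! -name '*.MYD' ! -name '*.MYI' ! -name '*.frm'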

4) Chop a 1GB file out of a 9.5GB file: I had a big 9.5GB MySQL log file for analysis purposes, but my script took too long to read it, so I had no choice but to chop off a 1GB piece and read the data from that instead, which is much faster. I used the command below; bs=100M count=10 copies the first 10 blocks of 100MB, i.e. roughly 1GB.

 dd if=10gbfilename of=1gb_new_filename bs=100M count=10
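If GNU coreutils is available, head can take the same 1GB slice with less typing:

head -c 1G 10gbfilename > 1gb_new_filename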

5) How to extract the first 100 lines of data from a file that has 10,000 lines:

sed -n 1,100p test1.log > outputfile.log

6) How to find which RAID level your Linux software RAID arrays use:

for i in /dev/md*; do printf '%s: %s\n' $i "$(sudo /sbin/mdadm --detail $i 2>/dev/null | grep 'Raid Level')"; done

7) Convert files to Unix and UTF-8 format.

Convert to UTF-8 format:

/usr/bin/iconv -c -f LATIN1 -t UTF8 insert_statements_postgres1.sql > utf8_postgres_inserts.sql

/usr/bin/iconv -c -f LATIN1 -t UTF8 delete_statements_post1.sql > utf8_postgres_deletes.sql

Convert bulk files to UTF-8 format (CSV files). Note that iconv cannot safely write its output over its own input file, so the converted copy gets a new name:

for file in *.csv; do
/usr/bin/iconv -c -f LATIN1 -t UTF8 "$file" -o "${file%.csv}.utf8.csv"
done
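For the Unix (line-ending) half of the conversion, dos2unix converts a file in place, and tr does the same job if dos2unix is not installed (the filenames here are just placeholders):

dos2unix file.txt
tr -d '\r' < dosfile.txt > unixfile.txt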


Replication Enhancements in MySQL 5.5

Recently we upgraded all our databases from MySQL 5.1 to 5.5, and we noticed a couple of good enhancements on the replication side. One of them:

1) The slave ignores DROP statements for tables that do not exist, instead of stopping replication. For example, in a master-master setup environment:

Master1 (MySQL 5.1) - slave of Master2
Master2 (MySQL 5.1) - slave of Master1
Replica1 (MySQL 5.5)
Replica2 (MySQL 5.5)

i) We had two temporary tables created on the master and replicated to all the slaves; the slaves were already upgraded to MySQL 5.5 while both masters were still running MySQL 5.1 (a bad scenario).
ii) I dropped the two temporary tables on Master1 (not Master2) with SET SQL_LOG_BIN=0, so the drop would not replicate.
iii) I ran the same script separately on each replication slave and dropped the temp tables there.
iv) I forgot to set SQL_LOG_BIN=0 on Master2 and executed the script there. Master1's slave thread stopped with an error like "unable to locate temp tables 1 and 2", but the MySQL 5.5 replicas never stopped or logged any errors: they simply ignored the commands because the tables did not exist.
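For reference, this is the pattern for dropping something on one server without writing the statement to the binary log (the table names here are placeholders, not the actual ones from this incident):

SET SESSION sql_log_bin = 0;   -- stop logging this session's statements
DROP TEMPORARY TABLE IF EXISTS tmp_table_1, tmp_table_2;
SET SESSION sql_log_bin = 1;   -- resume binary logging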

This is a great enhancement.



Improving Replication Performance

Have you ever seen your replica not catching up even though you have everything set up properly in your config file? I faced this situation when I set up a new replication slave: the replication lag kept increasing instead of shrinking. I tried many ways to tweak the my.cnf configuration file, increased memory, raised the buffer pool size toward the limit of the machine, increased the additional buffers, and added more CPUs, but none of it helped much with the replication lag. Finally, after reading some other blogs, I found that innodb_flush_log_at_trx_commit = 1 was the reason for the lag.

When the value is 1 (the default), the log buffer is written out to the log file at each transaction commit, and the flush-to-disk operation is performed on the log file.

Solution: set the value to 2. The log buffer is still written out to the log file at each commit, but the flush-to-disk operation is not performed on it; flushing of the log file still takes place once per second. (The trade-off: an operating system crash can lose up to about one second of transactions.)

innodb_flush_log_at_trx_commit = 2
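It is a dynamic variable, so you can also try the change at runtime before making it permanent in my.cnf:

SET GLOBAL innodb_flush_log_at_trx_commit = 2;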

Hope this helps.


Increasing ulimit for Partitioned Tables

Data often grows larger as we keep adding to it, and retrieving data eventually slows down and takes longer to produce results, which is sometimes unacceptable for clients to wait through. One great solution for retrieving data faster, without archiving or purging any data, is table partitioning: the table is broken into small chunks according to a date range or number of records (like 1-100, 101-200, 201-300, and so on), and a lot of files get created due to this partitioning.
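As a rough illustration, here is a minimal sketch of a range-partitioned table (the table and column names are made up for the example):

CREATE TABLE orders (
    id INT NOT NULL,
    order_date DATE NOT NULL
)
PARTITION BY RANGE ( YEAR(order_date) ) (
    PARTITION p2010 VALUES LESS THAN (2011),
    PARTITION p2011 VALUES LESS THAN (2012),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

With innodb_file_per_table enabled, each partition gets its own file on disk, which is where the file-count problem described below comes from.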

By default, the mysql user on the host is limited to only 4096 open files. This has to be increased, otherwise the MySQL database can shut down or become corrupted when it runs out of file handles on the host.

To overcome this situation, you need to increase the open-file limit on the host. Put the code below in a shell script and run it on the host as root; it will raise the limit to the number of files you need, but please make sure not to set the number above the OS limit. The example below allows 30,000 open files for the mysql user when creating table partitions.

**********************************************

#!/bin/bash

cat >/etc/security/limits.d/mysql.conf <<EOF
mysql soft nofile 30000
mysql hard nofile 30000
EOF

**********************************************

To test the above script, log in as user mysql and run the "ulimit -n" command. (The new limit only applies to new login sessions, so restart mysqld afterwards so the running server picks it up.)

Before running the script:

> ulimit -n
4096

After running the script:

> ulimit -n
30000
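To see what open-file limit an already-running mysqld process actually has, check its entry under /proc (a quick check, assuming pidof is available):

cat /proc/$(pidof mysqld)/limits | grep 'open files'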


Posted on January 27, 2012 in Database Administration, Linux, MySQL


New Config File Creator from Percona

The Percona Tools website is a beautiful tool for building a config file according to your server requirements, a real benefit for any company using MySQL that has server performance issues. It may not exactly suit all your required settings, but it surely helps you build a better config file, and you can modify it after it is created.

It is worth a try.


Posted on January 23, 2012 in Database Administration, MySQL, Percona

 

Extracting 1 Week of Data from a Big Log File

It is very difficult to read one week of data out of a big log file (say a 2GB log file): paging through it with the less command in Unix takes 100% CPU, blocks other processes, and takes a long time to reach a small piece of data. To resolve this, search for the line number of the first entry of the first day of the week and the line number of the last day of the week, then use the sed command to extract everything from the starting line to the ending line and copy that data to a new file.

First, find the line number of the starting date of the week (let's say 9 Apr 2010 - 16 Apr 2010):

grep -n 'Time: 100409' /var/log/mysqld/myserver_slow_queries.log | more

Copy or write down the line number; let's say 2399098.

grep -n 'Time: 100416' /var/log/mysqld/myserver_slow_queries.log | more

Copy or write down the last line number: 2483712.

Now run the sed command to grab everything from the start line to the end line into a new log file:

sed -n 2399098,2483712p myserver_slow_queries.log > myserver_slow_queries.log.week

Done. One week of data from the big log file has been copied to the new file.
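The same extraction can also be done in one step using sed's pattern ranges, which prints from the first 'Time: 100409' entry through the first 'Time: 100416' entry without looking up line numbers at all:

sed -n '/Time: 100409/,/Time: 100416/p' myserver_slow_queries.log > myserver_slow_queries.log.week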


Posted on August 23, 2010 in Database Administration, Linux, MySQL

