
Commonly used daily Linux Commands.

1) How to replace a long string full of slashes and other special characters: in vi/vim, the substitute command accepts any delimiter, so just put the strings between hash (#) characters and the slashes in the paths need no escaping. Add g after the closing # to replace every occurrence on each line.

%s#/this/folder/replaces/#/with/this/one/#g
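The same alternate-delimiter trick works outside vi as well; sed accepts it too. A quick sketch with made-up paths:

```shell
# Use '#' as the sed delimiter so the slashes in the paths need no escaping.
echo "/this/folder/replaces/file.txt" | sed 's#/this/folder/replaces/#/with/this/one/#g'
# -> /with/this/one/file.txt
```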

2) Search for a word and delete to the end of the line. For example, in an XML file you sometimes need to trim thousands of lines matching a word or a <string> spread across the file; a plain find-and-replace in MS Word or another editor can't do that. In the example below, the goal is to delete everything from <pcode to the end of each line while preserving the closing "/>". The simple command below does the job.

<xml=? >
<scalar variable   <pcode  value=1000  test1 test3 test4> />
<vector variable   <pcode  value=1001  test5 test6 test7> />
<stellar value       <pcode  value=1002  test8 test9 test10> />
</xml>

%s/<pcode.*/\/>/
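The same edit can be scripted with sed if you would rather not open vi; a sketch against one of the sample lines above:

```shell
# Replace everything from '<pcode' to the end of the line with a bare '/>'.
printf '%s\n' '<scalar variable   <pcode  value=1000  test1 test3 test4> />' \
  | sed 's#<pcode.*#/>#'
# -> <scalar variable   />
```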

3) How to find unwanted files when a clean folder has been messed up. I had a situation where junk files were added to my MySQL data directory, which should contain only the database table files, and I used the command below to find all files excluding the MySQL table files.

grep ./ --exclude=*.{ibd,MYD,MYI,frm} *
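An alternative sketch with find, which selects the stray files by name rather than by content (same extensions as above):

```shell
# List regular files that do NOT match the known MySQL table extensions.
find . -maxdepth 1 -type f \
  ! -name '*.ibd' ! -name '*.MYD' ! -name '*.MYI' ! -name '*.frm'
```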

4) Chop a 1 GB piece out of a 9.5 GB file: I had a 9.5 GB MySQL log file to analyze, but my script took too long to read it, so I had no choice but to chop the file down to 1 GB and read the data from that smaller file, which is much faster to process. I used the command below.

 dd if=10gbfilename of=1gb_new_filename bs=100M count=10
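The same command at toy scale, easy to verify before pointing it at a 9.5 GB file (sample file names are made up):

```shell
# Build a 5 MB sample, then carve off the first 1 MB (10 blocks of 100K).
dd if=/dev/zero of=big.sample bs=1M count=5 2>/dev/null
dd if=big.sample of=small.sample bs=100K count=10 2>/dev/null
wc -c small.sample    # 10 x 100K = 1024000 bytes
```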

5) How to extract 100 lines of data from a file that has 10,000 lines:

sed -n 1,100p test1.log > outputfile.log
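A self-contained check of the same command, using seq to fake the 10,000-line file:

```shell
# Generate 10,000 numbered lines, then keep only the first 100.
seq 10000 > test1.log
sed -n 1,100p test1.log > outputfile.log
wc -l outputfile.log    # reports 100 lines
```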
6) How to find which RAID level your Linux software RAID is using:

for i in /dev/md*; do printf '%s: %s\n' $i "$( sudo /sbin/mdadm --detail $i 2>/dev/null | grep 'Raid Level' )"; done

7) Convert files to Unix and UTF-8 format.

Convert to UTF8 format

/usr/bin/iconv -c -f LATIN1 -t UTF8 insert_statements_postgres1.sql > utf8_postgres_inserts.sql

/usr/bin/iconv -c -f LATIN1 -t UTF8 delete_statements_post1.sql > utf8_postgres_deletes.sql

Convert bulk files to UTF8 format (CSV files):

for file in *.csv; do
/usr/bin/iconv -c -f LATIN1 -t UTF8 "$file" -o "${file%.csv}.utf8.csv"
done

(Note: write the output to a new name; iconv truncates its output file before reading, so converting a file onto itself destroys it.)
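A tiny round trip to confirm the conversion works (sample file names made up; plain iconv assumed on the PATH):

```shell
# Write a Latin-1 'é' (single byte 0xE9), then convert the file to UTF-8.
printf 'caf\351\n' > latin1.sample
iconv -c -f LATIN1 -t UTF8 latin1.sample > utf8.sample
```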

 
 


Extracting 1 week of Data from a Big log File

It is very difficult to read one week of data out of a big log file (e.g. a 2 GB file): paging through it with the less command takes 100% CPU, blocks other processes, and takes a long time to reach a small piece of data. To work around this, search for the line number of the first day of the week and the line number of the last day of the week, then use the sed command to extract everything from the start position to the end position and copy that data into a new file.

First, find the line numbers for the starting date of the week (let's say 9 Apr 2010 – 16 Apr 2010):

grep -n 'Time: 100409' /var/log/mysqld/myserver_slow_queries.log | more

Copy or write down the line number; let's say it is 2399098.

grep -n 'Time: 100416' /var/log/mysqld/myserver_slow_queries.log | more

Copy or write down the last line number: 2483712.

Now run the sed command to grab the data from the start line to the end line into a new log file:

sed -n 2399098,2483712p myserver_slow_queries.log > myserver_slow_queries.log.week

Done. One week of data from the big log file has been copied to the new file.
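The whole recipe can also be wired together in one go; here is a toy-scale sketch (log contents invented for illustration, real paths as above):

```shell
# Build a toy slow-query log, find the week's boundary lines with grep -n,
# then extract the span with sed, with no manual copying of line numbers.
printf '%s\n' '# Time: 100408' 'old query' '# Time: 100409' 'week query 1' \
              '# Time: 100416' 'week query 2' '# Time: 100417' 'later' > toy.log
start=$(grep -n 'Time: 100409' toy.log | head -1 | cut -d: -f1)
end=$(grep -n 'Time: 100416' toy.log | tail -1 | cut -d: -f1)
sed -n "${start},${end}p" toy.log > toy.log.week
```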

 

Posted by on August 23, 2010 in Database Administration, linux, MySql

 
