Big data learning tutorial SD version - Part 1 [shell]

Keywords: Linux Big Data shell

Shell is a command-line interpreter and the scripting language of Linux.

1.1 variables

  1. Common system variables: $HOME $PWD $SHELL $USER
  2. Strict space rules
  • No spaces are allowed around the equals sign in an assignment
  • If a value contains spaces, wrap it in quotes
  • expr operators and operands must be separated by spaces
  • Spaces are required just inside the brackets of a conditional test [ ]
  • A space must follow if, while, and for
  3. The default type in bash is string
  4. export promotes a variable to a global (environment) variable
  5. $0 script name; $1-$9 positional parameters; $# number of parameters
  6. $* all parameters as a single string; inside double quotes, "$*" splices them into one word; unquoted, it can be used directly in a for loop
  7. $@ all parameters as a list; inside double quotes, "$@" keeps each parameter as a separate word (unquoted, $* and $@ behave the same)
  8. $? exit status of the previous command
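A short sketch of these special variables (show_params is a hypothetical helper name; inside a saved script, $0 would print the script's file name):

```shell
#!/bin/bash
# Demo of the special variables above.
show_params() {
    echo "count: $#"        # $# - number of parameters
    echo "all: $*"          # $* - all parameters as one string
    for p in "$@"; do       # "$@" keeps each parameter separate
        echo "param: $p"
    done
}
show_params a "b c"
echo "exit: $?"             # $? - exit status of the previous command
```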

1.2 operators

  1. Arithmetic syntax: $(()), $[], expr
  2. Multiplication operator: inside $[] and $(()) write *; with expr it must be escaped as \*
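A minimal sketch of the three syntaxes side by side (note the spaces and the escaped \* that expr requires):

```shell
#!/bin/bash
# Three equivalent ways to compute 3 * 4.
a=$((3 * 4))        # arithmetic expansion, the preferred form
b=$[3 * 4]          # older bash form, still accepted but deprecated
c=$(expr 3 \* 4)    # expr needs spaces around operators and an escaped *
echo "$a $b $c"
```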

1.3 condition judgment

  1. Numeric comparison: -lt -le -eq
  2. File permission tests: -r -w -x
  3. File type tests: -f -e -d
  4. String comparison: =
  5. Combining conditions: && (and), || (or)
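A short sketch combining the tests above (/tmp/cond_demo.txt is a hypothetical temporary file created just for the -f test):

```shell
#!/bin/bash
x=5
s="abc"
# numeric test combined with a string test via && (and)
if [ "$x" -lt 10 ] && [ "$s" = "abc" ]; then
    echo "both conditions hold"
fi
# file tests: -f regular file, -r readable
touch /tmp/cond_demo.txt
[ -f /tmp/cond_demo.txt ] && [ -r /tmp/cond_demo.txt ] && echo "file exists and is readable"
rm /tmp/cond_demo.txt
```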

1.4 process control

  1. if statement:

    if [ xxx ]; then
        ...
    elif [ xxx ]; then
        ...
    fi
  2. case statement:

    case $var in
        pattern) ... ;;
    esac
  3. for loop:

    for ((xxx; xxx; xxx)); do ... done
    for item in x y z; do ... done
  4. while loop:

    while [ xxx ]; do ... done
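The four constructs above can be combined into one runnable sketch (the values are arbitrary):

```shell
#!/bin/bash
n=3
# if / elif / else chooses a label for n
if [ "$n" -gt 0 ]; then
    sign="positive"
elif [ "$n" -lt 0 ]; then
    sign="negative"
else
    sign="zero"
fi
# case dispatches on the label
case $sign in
    positive) msg="n > 0" ;;
    *)        msg="n <= 0" ;;
esac
# C-style for loop sums 1..n
sum=0
for ((i = 1; i <= n; i++)); do
    sum=$((sum + i))
done
# while loop counts the sum back down to 4
while [ "$sum" -gt 4 ]; do
    sum=$((sum - 1))
done
echo "$msg sum=$sum"
```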

1.5 console input

  1. read xxx

    -p "prompt"   # show a prompt before reading
    -t N          # give up after waiting N seconds
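A small sketch of read with both options; the input is piped in so it runs non-interactively (bash only displays the -p prompt when reading from a terminal):

```shell
#!/bin/bash
# Read a name with a prompt and a 5-second timeout.
echo "Alice" | {
    read -t 5 -p "name: " name
    echo "hello, $name"
}
```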

1.6 function

  1. System functions
  • basename: strips the directory part (and, with a second argument, a suffix) from a path

    basename /home/aaa/example.txt      # out: example.txt
    basename /home/aaa/example.txt .txt # out: example
  • dirname: strips the file name, keeping the directory path

    dirname /home/aaa/example.txt   # out: /home/aaa
  2. Custom functions

    function funname(){
        # Since shell scripts run line by line, a function must be declared before it is called
        # Parameters are accessed as $1, $2, ...; the function is called as: funname $x $y
    }
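A minimal sketch following that pattern, with a hypothetical add function:

```shell
#!/bin/bash
# Declare first, then call; $1 and $2 are the first two arguments.
function add() {
    echo $(($1 + $2))
}
x=3; y=4
result=$(add $x $y)   # call style: funname $x $y
echo "result: $result"
```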

1.7 tools

  1. cut: extract columns from data; the default delimiter is tab ("\t")
  • -f: field number; join multiple fields with "," and ranges with "-"

  • -d: delimiter

    ifconfig eth0 | grep inet | cut -d " " -f 10   # extract the local IP address; the field position varies between versions
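Because the ifconfig output varies, here is a portable demo on an inline /etc/passwd-style line:

```shell
# Extract fields 1 and 3 of a colon-separated line.
echo "root:x:0:0:root:/root:/bin/bash" | cut -d ":" -f 1,3
# prints: root:0
```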
  2. sed: a stream editor; processes one line at a time and does not modify the source file
  • -e: script expression; needed before each command when chaining several commands

    • a append

      "na xxx"   # insert xxx below line n
    • d delete

      "/xxx/d"   # delete the lines containing xxx
    • s substitute

      "s/old/new/g"   # replace old with new (g = every occurrence on the line)
  3. awk: text analysis tool; reads input line by line and splits each line on a delimiter (default: whitespace)
  • -F: specify the delimiter

  • -v: define a variable

    awk -F "x" '/pattern/ {action}' filename   # find lines matching pattern, split them on "x", and run action
    BEGIN {} END {}   # blocks run before the first input line and after the last one, respectively
    -v i=1            # the variable is later referenced as i, not $i
    • Built-in variables
      • FILENAME: current file name
      • NR: current line number
      • NF: number of fields in the current line
    ifconfig eth0 | grep inet | awk -F " " '{print $2}'   # like the cut example, extracts the local IP address; output varies between versions
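A self-contained demo of BEGIN/END and the NR/NF built-ins on an inline stream:

```shell
# BEGIN runs before the first line, END after the last;
# NR is the line number, NF the field count of the current line
printf "a b\nc d e\n" |
    awk 'BEGIN {print "start"} {print NR, NF} END {print "done"}'
```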
  4. sort: sort lines of a file
  • -n: sort numerically by value
  • -r: reverse the order
  • -t: delimiter used to split columns
  • -k: column to sort by

    sort -t 'x' -nk y filename   # split on x, sort numerically by column y
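A concrete instance of that command on inline data:

```shell
# split on ':' and sort numerically by the second column
printf "b:20\na:3\nc:11\n" | sort -t ':' -nk 2
```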

1.8 question bank

  1. Print the line numbers of the empty lines in file.txt

    awk '/^$/ {print NR}' file.txt
  2. Calculate the sum of the second column of file.txt and output [format of each row: name score]

    cat file.txt |awk -F " " '{sum+=$2} END{print sum}'
  3. shell: check whether a file exists

    if [ -f file.txt ]; then
    	echo "exists"
    else
    	echo "not exists"
    fi
  4. Sort a file of numbers

    sort -n file.txt
  5. Find all files under /home whose content contains the keyword "xxx"

    grep -r "xxx" /home |cut -d ":" -f 1

Posted by bumbar on Wed, 01 Dec 2021 13:51:37 -0800