Logstash and filebeat configuration

Keywords: Redis, Nginx, ASCII, Elasticsearch
The mutate plugin modifies data in the event. It supports rename, update, replace, convert, split, gsub, uppercase, lowercase, strip, remove_field, join, merge, and other operations.

1. rename

Renames an existing field.

 

filter {
    mutate {
        rename => { "syslog_host" => "host" }
    }
}

2. update

Updates the content of a field. If the field does not exist, it is not created.

 

filter {
    mutate {
        update => { "sample" => "My new message" }
    }
}

3. replace

Same as update, except that if the field does not exist, it is created.

 

filter {
    mutate {
        replace => { "message" => "%{source_host}: My new message" }
    }
}

4. convert

Converts the data type of a field; valid targets include integer, float, string, and boolean.

 

filter {
    mutate {
        convert => { "request_time" => "float" }
    }
}

5. gsub

gsub performs text substitution via regular expressions; it applies only to string fields.

 

filter {
    mutate {
        gsub => [
            # replace all forward slashes with underscore
            "fieldname", "/", "_",
            # replace backslashes, question marks, hashes, and minuses
            # with a dot "."
            "fieldname2", "[\\?#-]", "."
        ]
    }
}

6. uppercase/lowercase

Converts a field's value to upper or lower case.

 

filter {
    mutate {
        uppercase => [ "fieldname" ]
    }
}
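
The lowercase counterpart is symmetric; a minimal sketch:

filter {
    mutate {
        lowercase => [ "fieldname" ]
    }
}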

7. split

Splits a field into an array on a separator character.

 

filter {
    mutate {
        split => { "message" => "|" }
    }
}

For the input string "123|321|adfd|dfjld*=123", the output looks like this:

 

{
    "message" => [
        [0] "123",
        [1] "321",
        [2] "adfd",
        [3] "dfjld*=123"
    ],
    "@version" => "1",
    "@timestamp" => "2014-08-20T15:58:23.120Z",
    "host" => "raochenlindeMacBook-Air.local"
}

8. strip

Similar to trim: removes only leading and trailing whitespace.

 

filter {
    mutate {
        strip => ["field1", "field2"]
    }
}

9. remove_field

To delete a field:

 

filter {
    mutate {
        remove_field => [ "foo_%{somefield}" ]
    }
}

10. join

Joins the elements of an array field into a single string, using the specified character as the separator.
For example, we can rejoin the split results:

 

filter {
    mutate {
        split => { "message" => "|" }
    }
    mutate {
        join => { "message" => "," }
    }
}

Output:

 

{
    "message" => "123,321,adfd,dfjld*=123",
    "@version" => "1",
    "@timestamp" => "2014-08-20T16:01:33.972Z",
    "host" => "raochenlindeMacBook-Air.local"
}

11. merge

Merges two fields whose values are arrays or hashes (a string field is treated as a one-element array).

 

filter {
    mutate {
        merge => { "dest_field" => "added_field" }
    }
}

Note that an array field cannot be merged with a hash field.
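
As a concrete illustration, a minimal sketch with hypothetical fields dest_field and added_field (merging two string fields yields an array):

filter {
    mutate {
        # hypothetical fields, for illustration only
        add_field => {
            "dest_field" => "a"
            "added_field" => "b"
        }
    }
    mutate {
        merge => { "dest_field" => "added_field" }  # dest_field becomes ["a", "b"]
    }
}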

Note: it is recommended to enclose regular expressions in single quotes, e.g. '^\[?[0-9][0-9]:?[0-9][0-9]|^[[:graph:]]+'.

Regular expression reference (example / description):

Single characters:
x               single character
.               any character
[xyz]           character class
[^xyz]          negated character class
[[:alpha:]]     ASCII character class
[[:^alpha:]]    negated ASCII character class
\d              Perl character class
\D              negated Perl character class
\pN             Unicode character class (one-letter name)
\p{Greek}       Unicode character class
\PN             negated Unicode character class (one-letter name)
\P{Greek}       negated Unicode character class

Composites:
xy              x followed by y
x|y             x or y (prefer x)

Repetitions:
x*              zero or more x, prefer more
x+              one or more x, prefer more
x?              zero or one x, prefer one
x{n,m}          n or n+1 or ... or m x, prefer more
x{n,}           n or more x, prefer more
x{n}            exactly n x
x*?             zero or more x, prefer fewer
x+?             one or more x, prefer fewer
x??             zero or one x, prefer zero
x{n,m}?         n or n+1 or ... or m x, prefer fewer
x{n,}?          n or more x, prefer fewer
x{n}?           exactly n x

Grouping:
(re)            numbered capturing group (submatch)
(?P<name>re)    named & numbered capturing group (submatch)
(?:re)          non-capturing group
(?i)abc         set flags within current group, non-capturing
(?i:re)         set flags during re, non-capturing
(?i)PaTTeRN     case-insensitive (default false)
(?m)multiline   multi-line mode: ^ and $ match begin/end of line in addition to begin/end of text (default false)
(?s)pattern.    let . match \n (default false)
(?U)x*abc       ungreedy: swap meaning of x* and x*?, x+ and x+?, etc. (default false)

Empty strings (anchors):
^               at beginning of text or line (m=true)
$               at end of text (like \z, not \Z) or line (m=true)
\A              at beginning of text
\b              at ASCII word boundary (\w on one side and \W, \A, or \z on the other)
\B              not at ASCII word boundary
\z              at end of text

Escape sequences:
\a              bell (same as \007)
\f              form feed (same as \014)
\t              horizontal tab (same as \011)
\n              newline (same as \012)
\r              carriage return (same as \015)
\v              vertical tab (same as \013)
\*              literal *, for any punctuation character *
\123            octal character code (up to three digits)
\x7F            two-digit hex character code
\x{10FFFF}      hex character code
\Q...\E         literal text ... even if ... has punctuation

ASCII character classes:
[[:alnum:]]     alphanumeric (same as [0-9A-Za-z])
[[:alpha:]]     alphabetic (same as [A-Za-z])
[[:ascii:]]     ASCII (same as [\x00-\x7F])
[[:blank:]]     blank (same as [\t ])
[[:cntrl:]]     control (same as [\x00-\x1F\x7F])
[[:digit:]]     digits (same as [0-9])
[[:graph:]]     graphical (same as [!-~] == [A-Za-z0-9!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~])
[[:lower:]]     lower case (same as [a-z])
[[:print:]]     printable (same as [ -~] == [ [:graph:]])
[[:punct:]]     punctuation (same as [!-/:-@[-`{-~])
[[:space:]]     whitespace (same as [\t\n\v\f\r ])
[[:upper:]]     upper case (same as [A-Z])
[[:word:]]      word characters (same as [0-9A-Za-z_])
[[:xdigit:]]    hex digit (same as [0-9A-Fa-f])

Perl character classes:
\d              digits (same as [0-9])
\D              not digits (same as [^0-9])
\s              whitespace (same as [\t\n\f\r ])
\S              not whitespace (same as [^\t\n\f\r ])
\w              word characters (same as [0-9A-Za-z_])
\W              not word characters (same as [^0-9A-Za-z_])
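
As a quick illustration of these classes inside Logstash, here is a hedged sketch that collapses runs of whitespace in a hypothetical fieldname (mutate's gsub uses Ruby regular expressions, which support the POSIX classes above):

filter {
    mutate {
        # squeeze any run of whitespace down to a single space
        gsub => [ "fieldname", "[[:space:]]+", " " ]
    }
}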

 

 

Architecture 1:
filebeat -> logstash1 -> redis -> logstash2 -> elasticsearch (cluster) -> kibana
// The installation steps are not covered here; you should have no difficulty with them.
// (The software layout across hosts can be adapted as needed.)
192.168.0.230: install filebeat, logstash1, elasticsearch
192.168.0.232: install logstash2, redis, elasticsearch, kibana

// Note: the filebeat config is YAML, so pay close attention to the file's formatting
1. Configure the filebeat file:
[root@localhost filebeat]# cat /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
   # - # each log file entry starts here
   #   paths: # define the path
   #     - /var/www/logs/access.log # absolute path
   #   input_type: log # the log type is log
   #   document_type: api4-nginx-accesslog # must match the name logstash checks as [type]
    -
      paths:
        - /opt/apps/huhu/logs/ase.log
      input_type: log
      document_type: "ase-ase-log"
      encoding: utf-8
      tail_files: true  # start reading new files at the end
      multiline.pattern: '^\[' # a line matching this pattern starts a new event
      multiline.negate: true
      multiline.match: after    # append non-matching lines to the previous event
      #tags: ["ase-ase"]

    -
      paths:   # collect JSON-format logs
        - /var/log/nginx/access.log
      input_type: log
      document_type: "nginx-access-log"
      tail_files: true
      json.keys_under_root: true
      json.overwrite_keys: true

  registry_file: /var/lib/filebeat/registry
output:      # send to logstash on the 230 host
  logstash:
    hosts: ["192.168.0.230:5044"]

shipper:
  logging:
    to_files: true
    files:
      path: /tmp/mybeat
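
Note: the nginx-access-log entry above assumes nginx already writes its access log as JSON; otherwise json.keys_under_root has nothing to decode. A minimal sketch of such a log_format (field names are illustrative; escape=json requires nginx 1.11.8+):

log_format json_log escape=json '{"remote_addr":"$remote_addr",'
                                '"time_local":"$time_local",'
                                '"request":"$request",'
                                '"status":"$status",'
                                '"body_bytes_sent":"$body_bytes_sent"}';
access_log /var/log/nginx/access.log json_log;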

2. Configure logstash1 on the 230 host: beats input -> redis output
[root@web1 conf.d]# pwd
/etc/logstash/conf.d
[root@web1 conf.d]# cat nginx-ase-input.conf 
input {
        beats {
                port => 5044
                codec => "json"
        }
}

output {
        if [type] == "nginx-access-log" {
                redis {                    # write nginx access logs to redis
                        data_type => "list"
                        key => "nginx-accesslog"
                        host => "192.168.0.232"
                        port => "6379"
                        db => "4"
                        password => "123456"
                }
        }
        if [type] == "ase-ase-log" {
                redis {                    # write ase logs to redis
                        data_type => "list"
                        key => "ase-log"
                        host => "192.168.0.232"
                        port => "6379"
                        db => "4"
                        password => "123456"
                }
        }
}
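
Before wiring up logstash2, it is worth confirming that events are actually landing in redis. A hedged check with redis-cli (the db number and keys come from the output section above; LLEN may read 0 once a consumer is draining the lists):

[root@web1 conf.d]# redis-cli -h 192.168.0.232 -p 6379 -a 123456 -n 4 LLEN nginx-accesslog
[root@web1 conf.d]# redis-cli -h 192.168.0.232 -p 6379 -a 123456 -n 4 LLEN ase-log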

3. Configure logstash2 on the 232 server: redis input -> elasticsearch output
[root@localhost conf.d]# pwd
/etc/logstash/conf.d
[root@localhost conf.d]# cat nginx-ase-output.conf 
input {
        redis {
                type => "nginx-access-log"
                data_type => "list"
                key => "nginx-accesslog"
                host => "192.168.0.232"
                port => "6379"
                db => "4"
                password => "123456"
                codec => "json"
        }

        redis {
                type => "ase-ase-log"
                data_type => "list"
                key => "ase-log"
                host => "192.168.0.232"
                port => "6379"
                db => "4"
                password => "123456"
        }
}

output {
        if [type] == "nginx-access-log" {
                elasticsearch {
                        hosts => ["192.168.0.232:9200"]
                        index => "nginx-accesslog-%{+YYYY.MM.dd}"
                }
        }
        if [type] == "ase-ase-log" {
                elasticsearch {
                        hosts => ["192.168.0.232:9200"]
                        index => "ase-log-%{+YYYY.MM.dd}"
                }
        }
}
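
Once logstash2 is running, the daily indices should appear in elasticsearch; a quick check with the standard _cat API:

[root@localhost conf.d]# curl -s '192.168.0.232:9200/_cat/indices?v'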

4. On the 232 host, connect elasticsearch to kibana
// Just add the ES index pattern in kibana.

Architecture 2:
filebeat -> redis -> logstash -> elasticsearch -> kibana  # Drawback: filebeat's redis output is limited, and a way to write to multiple redis outputs has not been found yet.

1. Configure filebeat:
[root@localhost yes_yml]# cat filebeat.yml 
filebeat:
  prospectors:
   # - # each log file entry starts here
   #   paths: # define the path
   #     - /var/www/logs/access.log # absolute path
   #   input_type: log # the log type is log
   #   document_type: api4-nginx-accesslog # must match the name logstash checks as [type]
    -
      paths:
        - /opt/apps/qpq/logs/qpq.log
      input_type: log
      document_type: "qpq-qpq-log"
      encoding: utf-8
      tail_files: true
      multiline.pattern: '^\['
      multiline.negate: true
      multiline.match: after
   #tags: ["qpq-qpq-log"]
  registry_file: /var/lib/filebeat/registry

output:
  redis:
      host: "192.168.0.232"
      port: 6379
      db: 3
      password: "123456"
      timeout: 5
      reconnect_interval: 1
      index: "qpq-qpq-log"   # redis key; must match the key in the logstash redis input

shipper:
  logging:
    to_files: true
    files:
      path: /tmp/mybeat
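
To confirm filebeat is queueing events, a hedged check of the redis list length (db 3 and the key come from the config above):

[root@localhost yes_yml]# redis-cli -h 192.168.0.232 -p 6379 -a 123456 -n 3 LLEN qpq-qpq-log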

2. On the 232 host: redis -> elasticsearch -> kibana
[root@localhost yes_yml]# cat systemlog.conf 
input {
        redis {
                type => "qpq-qpq-log"
                data_type => "list"
                key => "qpq-qpq-log"       # must match the redis key (index) set in filebeat
                host => "192.168.0.232"
                port => "6379"
                db => "3"
                password => "123456"
        }
}
output {
        if [type] == "qpq-qpq-log" {
                elasticsearch {
                        hosts => ["192.168.0.232:9200"]
                        index => "qpq-qpq-log-%{+YYYY.MM.dd}"
                }
        }
}

3. On the 232 host, connect elasticsearch to kibana
// Just add the ES index pattern in kibana.