Brief introduction
Elasticsearch
Elasticsearch is a real-time distributed search and analytics engine that lets you explore your data at a speed and scale never available before. It combines three capabilities: full-text retrieval, structured search, and analytics. It also supports cluster deployment.
Logstash/Filebeat
Logstash is a powerful data-processing tool that handles data transport, format processing, and formatted output, and has a rich plug-in ecosystem; it is commonly used for log processing.
Kibana
Kibana is a free, open-source tool that provides a friendly web interface for analyzing the logs handled by Logstash and Elasticsearch; it helps you aggregate, analyze, and search important log data.
Architecture
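In this setup, Filebeat collects the logs and ships them to Logstash; Logstash parses them and forwards them to Elasticsearch for storage and indexing; Kibana then queries Elasticsearch to search and visualize the data.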
Installation and configuration
Components
- Elasticsearch
- Logstash
- Kibana
- Filebeat
Prerequisites
- Java 8
- The macOS package manager Homebrew (brew)
brew
# Install a package
brew install your-software
# Show information about an installed package
brew info your-software
# Manage services. Rarely needed here: each ELK component ships its own
# startup scripts under bin/ in its installation directory, and they are
# usually started with arguments.
brew services start/stop your-service
Elasticsearch
Install Elasticsearch on macOS
# Install Elasticsearch on macOS
brew install elasticsearch
Elasticsearch installation locations
Installation directory: /usr/local/Cellar/elasticsearch/{elasticsearch-version}/
Log directory: /usr/local/var/log/elasticsearch/
Plug-in directory: /usr/local/var/elasticsearch/plugins/
Configuration directory: /usr/local/etc/elasticsearch/
Startup
brew services start elasticsearch

On first startup the default port is 9200 and the default username is elastic. The default password is unclear to me (everything I found refers to pre-6.0 versions, where it was changeme; I have not confirmed the behavior after 6.0). The default password can be changed by calling the _xpack security API.
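A minimal sketch of such a call, assuming X-Pack security is enabled (this is the 6.x endpoint; the new password 123456 is just an example value):

# Change the elastic user's password via the X-Pack security API
curl -XPOST -u elastic 'http://localhost:9200/_xpack/security/user/elastic/_password?pretty' \
     -H 'Content-Type: application/json' \
     -d '{ "password" : "123456" }'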
Version
elasticsearch --version
Version: 6.6.1, Build: oss/tar/1fd8f69/2019-02-13T17:10:04.160291Z, JVM: 1.8.0_131
Kibana
Install Kibana on macOS
brew install kibana
Installation location
Installation directory: /usr/local/Cellar/kibana/{kibana-version}/
Configuration directory: /usr/local/etc/kibana/
Remarks
Before starting Kibana, edit the configuration file /usr/local/etc/kibana/kibana.yml: uncomment elasticsearch.username and elasticsearch.password, and set them to the username elastic and the password 123456. See the kibana.yml fragment below.
# kibana.yml
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
Startup
brew services start kibana

On first startup the default port is 5601. Open http://localhost:5601 in a browser to reach the Kibana management page. When the dialog asks for a username and password, enter elastic and 123456. Note: the username and password configured in kibana.yml are what Kibana itself uses to access Elasticsearch, while the credentials typed into the web page are what we use to log in to the Kibana management page. Why the two can share a password is not clear to me.
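To sanity-check the credentials independently of Kibana, you can query Elasticsearch directly (assuming the password was set to 123456 as above):

# A 200 response with cluster info means the credentials work
curl -u elastic:123456 'http://localhost:9200/?pretty'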
Version
kibana --version
6.6.1
Logstash
Install Logstash on macOS
brew install logstash
Logstash installation locations
Installation directory: /usr/local/Cellar/logstash/{logstash-version}/
Configuration directory: /usr/local/etc/logstash/
Configuration
vim ./first-pipeline.conf
- Using Filebeat as the input source (the resulting index names are noted after the listing):
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    host => "127.0.0.1"
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
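With Filebeat 6.6.1 as the shipper, the index pattern above expands to daily index names such as filebeat-6.6.1-2019.08.12; this is the pattern to match later when creating the Kibana index pattern.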
- The Logstash input also supports reading from files directly, for example (the multiline codec in this config is illustrated after the listing):
[root@access-1 logstash-7.2.1]# cat logstash_809.conf
input {
  file {
    path => ['/opt/access-server-1.0.5/log/akka-gb809.log']  # Path of the log file to read
    type => "akka-gb809"                                     # A label for this input
    stat_interval => "2"                                     # Poll the file every N seconds; default 1
  }
  file {
    path => ['/opt/access-server-1.0.5/log/akka-gb808.log']
    type => "akka-gb808"
    stat_interval => "2"
  }
  file {
    path => ['/opt/access-server-1.0.5/log/akka.log']
    type => "akka"
    stat_interval => "2"
  }
  file {
    path => ['/opt/access-server-1.0.5/log/all_error.log']
    type => "all_error"
    stat_interval => "2"
    codec => multiline {                                     # Join multi-line log entries
      pattern => "(^\d{2}\:\d{2}\:\d{2}\.\d{3})UTC"          # Pattern marking the start of an entry
      negate => true
      what => "previous"
    }
  }
}
filter {
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}
output {
  if [type] == "akka-gb809" {                                # Match the label set on the input
    elasticsearch {
      hosts => "192.168.108.151:9200"                        # ES node address
      index => "access-1-akka-gb809"                         # Index to create, for display in Kibana
    }
  }
  if [type] == "akka-gb808" {
    elasticsearch {
      hosts => "192.168.108.151:9200"
      index => "access-1-akka-gb808"
    }
  }
  if [type] == "akka" {
    elasticsearch {
      hosts => "192.168.108.151:9200"
      index => "access-1-akka"
    }
  }
  if [type] == "all_error" {
    elasticsearch {
      hosts => "192.168.108.151:9200"
      index => "access-1-all_error"
    }
  }
}
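To see what the multiline codec above does: lines that do not begin with a timestamp matching the pattern are appended to the previous event (negate => true, what => "previous"), which keeps stack traces attached to the log line that produced them. A hypothetical all_error.log fragment:

12:34:56.789UTC ERROR request failed
java.lang.NullPointerException            <- no leading timestamp: appended to the event above
    at com.example.Foo.bar(Foo.java:42)   <- likewise appended
12:34:57.001UTC INFO handling next request  <- matches the pattern: starts a new event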
Startup
logstash -e 'input { stdin { } } output { stdout {} }'
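This starts a pipeline that echoes stdin to stdout and is a quick way to verify the installation. Typing a line produces output roughly like the following (illustrative; the exact fields vary by version):

hello world
{
      "@version" => "1",
    "@timestamp" => 2019-08-12T02:00:00.000Z,
          "host" => "localhost",
       "message" => "hello world"
}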
Alternatively, run with a config file:
logstash -f config/first-pipeline.conf --config.test_and_exit
This command verifies that the configuration file is correct
logstash -f config/first-pipeline.conf --config.reload.automatic
This command starts Logstash and automatically reloads the pipeline whenever first-pipeline.conf changes.
Start in the background
nohup logstash -f config/first-pipeline.conf --config.reload.automatic > /dev/null 2>&1 &
Version
logstash 6.6.1
Filebeat
Installation
# Install Filebeat on macOS
brew install filebeat
Installation locations
Installation directory: /usr/local/Cellar/filebeat/{filebeat-version}/
Configuration directory: /usr/local/etc/filebeat/
Cache directory: /usr/local/var/lib/filebeat/
Configuration
vim /usr/local/etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /apps/intelligent-family-console/intelligentFamilyConsole/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
The key changes: configure the inputs (filebeat.prospectors above; called filebeat.inputs in newer versions) to choose which logs to collect, leave output.elasticsearch commented out, and enable output.logstash so the collected entries are pushed to Logstash.
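Reduced to the parts relevant here (input selection and the Logstash output), the active configuration is equivalent to this minimal sketch (paths are the example's own):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /apps/intelligent-family-console/intelligentFamilyConsole/*.log

#output.elasticsearch:            # disabled: events go to Logstash instead
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]       # must match the beats port in first-pipeline.conf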
Startup
filebeat -e -c ./filebeat6.3.2/filebeat.yml
or
nohup filebeat -e -c ./filebeat6.3.2/filebeat.yml > /dev/null 2>&1 &
Version
filebeat --version
flag --version has been deprecated, use version subcommand
filebeat version 6.2.4 (amd64), libbeat 6.2.4
Kibana walkthrough
Create Index patterns
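In Kibana 6.x this is done under Management > Index Patterns: create a pattern that matches the indices written by Logstash, such as filebeat-* for the daily Filebeat indices above.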
Search interface
The panel on the left lists the fields that can be used as search conditions.
Follow up
Scheduled deletion of old logs
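A sketch of one possible approach (not part of the original setup): a small script, run daily from cron, that deletes the date-stamped index from 30 days ago. The index name pattern and the 30-day retention are assumptions.

#!/bin/bash
# delete-old-indices.sh - remove the daily Filebeat index from 30 days ago.
# Assumes GNU date and daily indices named like filebeat-6.6.1-YYYY.MM.dd.
OLD=$(date -d '30 days ago' +%Y.%m.%d)
curl -XDELETE "localhost:9200/filebeat-6.6.1-${OLD}"

Schedule it with cron, e.g. 0 1 * * * /path/to/delete-old-indices.sh.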
Elasticsearch cluster deployment
Download and decompress
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.1-linux-x86_64.tar.gz
tar -zvxf elasticsearch-7.2.1-linux-x86_64.tar.gz -C /usr/local/elk
Create users and grant permissions
Elasticsearch must be started as a non-root user, so create a user and group on each node:
[root@elk-1 ~]# groupadd elasticsearch
[root@elk-1 ~]# useradd elasticsearch -g elasticsearch
Create data and logs directories on each node:
[root@elk-1 ~]# mkdir -p /data/elasticsearch/{data,logs}
[root@elk-1 ~]# chown -R elasticsearch. /data/elasticsearch/
[root@elk-1 ~]# chown -R elasticsearch. /home/elk/elasticsearch/elasticsearch-7.2.1
Modify the elasticsearch.yml configuration file
- Master node configuration file:
[root@elk-1 config]# grep -Ev "^$|^[#;]" elasticsearch.yml
cluster.name: master-node
node.name: master
node.master: true
node.data: true
http.cors.enabled: true
http.cors.allow-origin: /.*/
path.data: /home/elk/data
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.108.151", "192.168.108.152", "192.168.108.153"]
cluster.initial_master_nodes: ["master", "data-node1", "data-node2"]
- Node 1 configuration file:
[root@elk-2 config]# grep -Ev "^$|^[#;]" elasticsearch.yml
cluster.name: master-node
node.name: data-node1
node.master: true
node.data: true
path.data: /home/elk/data
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.108.151", "192.168.108.152", "192.168.108.153"]
cluster.initial_master_nodes: ["master", "data-node1", "data-node2"]
- Node 2 configuration file:
[root@elk-3 config]# grep -Ev "^$|^[#;]" elasticsearch.yml
cluster.name: master-node
node.name: data-node2
node.master: true
node.data: true
path.data: /home/elk/data
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.108.151", "192.168.108.152", "192.168.108.153"]
cluster.initial_master_nodes: ["master", "data-node1", "data-node2"]
- Adjust the Elasticsearch JVM heap size:
[root@elk-1 config]# grep -Ev "^$|^[#;]" jvm.options
-Xms1g
-Xmx1g
- Start Elasticsearch:
[root@ELK1 elk]# su - elasticsearch
Last login: Mon Aug 12 09:58:23 CST 2019 on pts/1
[elasticsearch@ELK1 ~]$ cd /home/elk/elasticsearch-7.2.1/bin/
[elasticsearch@ELK1 bin]$ ./elasticsearch -d
- Check that ports 9200 (HTTP) and 9300 (transport) are listening:
[root@elk-1 config]# ss -tlunp | grep java
tcp LISTEN 0 128 :::9200 :::* users:(("java",pid=50257,fd=263))
tcp LISTEN 0 128 :::9300 :::* users:(("java",pid=50257,fd=212))
- Basic ES cluster operations (a cluster-membership check follows the listing):
# View cluster health information
curl 'localhost:9200/_cluster/health?pretty'
# View cluster state details
curl 'localhost:9200/_cluster/state?pretty'
# List indices
curl -XGET 'http://localhost:9200/_cat/indices?v'
# Create an index
curl -XPUT 'http://localhost:9200/customer?pretty'
# Query a document in an index
curl -XGET 'http://localhost:9200/customer/external/1?pretty'
# Delete an index
curl -XDELETE 'http://localhost:9200/customer?pretty'
# Delete a specific index by name
curl -XDELETE localhost:9200/nginx-log-2019.08
# Delete multiple indices
curl -XDELETE localhost:9200/system-log-2019.0606,system-log-2019.0607
# Delete all indices
curl -XDELETE localhost:9200/_all
# Wildcards are best avoided when deleting: a mistaken pattern can delete
# every index. To forbid _all and * wildcards, set this in elasticsearch.yml:
action.destructive_requires_name: true
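To confirm that the three nodes joined a single cluster, you can also list them; the output below is illustrative (exact columns vary by version), with the IPs from the configuration above:

curl 'localhost:9200/_cat/nodes?v'
# ip              node.role master name
# 192.168.108.151 mdi       *      master
# 192.168.108.152 mdi       -      data-node1
# 192.168.108.153 mdi       -      data-node2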
Elasticsearch-head plug-in