1. Pull the Logstash image (keep the version consistent with your ES version)
docker pull logstash:7.5.1
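The container in the next step is started by image ID, which you can look up after pulling. A quick check, assuming the pull above succeeded:

docker images logstash
# The IMAGE ID column shows the ID (e.g. 8b94897b4254 in the run command below)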
2. Start the container
docker run --name logstash -d -p 5044:5044 --net esnet 8b94897b4254
The network specified with --net in the command should be the same one that the ES and Kibana containers are attached to.
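The containers can only reach each other by this network if all of them are attached to it. A minimal sketch, assuming the network is called esnet and was not already created for ES/Kibana:

docker network create esnet        # skip if it already exists
docker network inspect esnet       # confirm the elasticsearch, kibana and logstash containers are all listed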
3. Modify the Logstash configuration file (logstash.yml)
# 0.0.0.0: allow access from any IP
http.host: "0.0.0.0"
# Elasticsearch cluster addresses
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.172.131:9200", "http://192.168.172.129:9200", "http://192.168.172.128:9200" ]
# Enable monitoring
xpack.monitoring.enabled: true
# Pipeline configuration file to load at startup; it tells Logstash which files to read and import into ES
path.config: /usr/share/logstash/config/logstash.conf
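The image does not mount this file from the host by default, so one common approach is to copy it out of the container, edit it, and copy it back; a sketch, assuming the container is named logstash as in step 2:

docker cp logstash:/usr/share/logstash/config/logstash.yml .
# edit logstash.yml locally with the settings above
docker cp logstash.yml logstash:/usr/share/logstash/config/logstash.yml
docker restart logstash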
4. logstash.conf configuration
# Sample Logstash configuration for a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    # The port does not have to be set explicitly; 5044 is the default
    port => 5044
  }
}
output {
  elasticsearch {
    # Point this at your ES node; for a cluster you can list all nodes
    hosts => ["http://localhost:9200"]
    # The index name can be customized
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
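Once a Beat (for example Filebeat) is shipping data to port 5044, you can confirm that the index was created in ES. A quick check, assuming one of the ES addresses from step 3:

curl "http://192.168.172.131:9200/_cat/indices?v"
# look for an index matching the pattern above, e.g. filebeat-<version>-<date>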
If you want Logstash to import a file into ES when it starts, the configuration is as follows:
input {
  file {
    path => "/usr/share/logstash/bin/file.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "file"
    document_id => "%{id}"
  }
  stdout {}
}
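The path above is inside the container, so the file has to be placed there first. One way to do that, assuming a file.csv on the host and the container name logstash:

docker cp file.csv logstash:/usr/share/logstash/bin/file.csv
docker restart logstash
# stdout {} prints each imported event to the container log, so progress can be followed with:
docker logs -f logstash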
Note that Elasticsearch 7.x creates indices with one primary shard and one replica by default. If you want multiple shards, create the index in advance and set the shard and replica counts before starting Logstash:
PUT /file
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
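After creating the index and running the import, the settings and the number of imported documents can be verified against ES; a quick check, assuming the localhost address used in the pipeline above:

curl "http://localhost:9200/file/_settings?pretty"
curl "http://localhost:9200/file/_count?pretty"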
For more details, see: https://blog.51cto.com/9844951/2471039