SaltStack data system
SaltStack has two major data systems:
- Grains
- Pillar
SaltStack data system components
Grains
Grains is the SaltStack component that stores information collected when a minion starts.
Grains is one of the most important SaltStack components, because it is used constantly during configuration and deployment. It records static information about a minion: in simple terms, the common attributes of each minion, such as CPU, memory, disk, and network information. All grains of a minion can be viewed with `grains.items`.
Functions of Grains:
- Collect asset information
Grains application scenarios:
- Information query
- Target matching on the command line
- Target matching in the top file
- Target matching in templates
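As a sketch of the template scenario: inside a Jinja-rendered file served by Salt, the grains of the minion doing the rendering can be referenced directly. The file path and message text below are invented for illustration:

```jinja
{# hypothetical /srv/salt/base/files/motd.jinja, rendered per minion #}
Welcome to {{ grains['id'] }} ({{ grains['os'] }} {{ grains['osrelease'] }})
{% if grains['os_family'] == 'RedHat' %}
This host uses the yum/dnf package manager.
{% endif %}
```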
Environment description:
host name | IP address | applications to install
---|---|---
master | 192.168.58.120 | salt-master, salt-minion
node2 | 192.168.58.121 | salt-minion
node3 | 192.168.58.30 | salt-minion
node4 | 192.168.58.40 | salt-minion
Information query examples:
1. List the keys and values of all grains
```
[root@master ~]# salt 'node2' grains.items
node2:
    ----------
    biosreleasedate:            # BIOS date
        07/22/2020
    biosversion:                # BIOS version
        6.00
    cpu_flags:                  # CPU-related attributes
        - fpu
        - vme
        - de
        ......                  # N lines omitted here
    cpu_model:                  # exact CPU model
        Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
    cpuarch:                    # CPU architecture
        x86_64
    cwd:
        /
    disks:
        - sr0
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 114.114.114.114
        ip6_nameservers:
        nameservers:
            - 114.114.114.114
        options:
        search:
        sortlist:
    domain:
    efi:
        False
    efi-secure-boot:
        False
    fqdn:
        node2
    fqdn_ip4:                   # IP address
        - 192.168.58.121
    fqdn_ip6:
        - fe80::cfb9:ea79:ec31:a606
    fqdns:
        - minion
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:                       # host name
        node2
    hwaddr_interfaces:
        ----------
        ens160:
            00:0c:29:e5:44:fe
        lo:
            00:00:00:00:00:00
    id:
        node2
    init:
        systemd
    ip4_gw:
        192.168.58.2
    ip4_interfaces:
        ----------
        ens160:
            - 192.168.58.121
        lo:
            - 127.0.0.1
```
2. Query only the keys of all grains
```
[root@master ~]# salt 'node3' grains.ls
node3:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - cwd
    - disks
    - dns
    - domain
    - efi
    - efi-secure-boot
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - fqdns
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelparams
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - lvm
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - ssds
    - swap_total
    - systemd
    - systempath
    - transactional
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion
```
3. Query the value of a given key, for example listing the IP addresses of all hosts
```
[root@master ~]# salt '*' grains.get fqdn_ip4
node3:
    - 192.168.58.30
node2:
    - 192.168.58.121
master:
    - 192.168.58.120
node4:
    - 192.168.58.40
[root@master ~]# salt '*' grains.get ip4_interfaces
master:
    ----------
    ens160:
        - 192.168.58.120
    lo:
        - 127.0.0.1
node2:
    ----------
    ens160:
        - 192.168.58.121
    lo:
        - 127.0.0.1
node4:
    ----------
    ens160:
        - 192.168.58.40
    lo:
        - 127.0.0.1
node3:
    ----------
    ens160:
        - 192.168.58.30
    lo:
        - 127.0.0.1
[root@master ~]# salt '*' grains.get ip4_interfaces:ens160
node2:
    - 192.168.58.121
master:
    - 192.168.58.120
node4:
    - 192.168.58.40
node3:
    - 192.168.58.30
```
A key's value can itself contain nested keys and values; fetching the top-level key returns the whole structure:
```
[root@master ~]# salt 'master' grains.get ip4_interfaces
master:
    ----------
    ens160:
        - 192.168.58.120
    lo:
        - 127.0.0.1
```
Using a dot (.) as the separator to fetch a nested key does not work. Taking the ens160 key of the interfaces above as an example, the lookup returns nothing:
```
[root@master ~]# salt 'master' grains.get ip4_interfaces.ens160
master:
```
Use a colon (:) as the separator instead. Taking the same example, fetch the value of the ens160 key:
```
[root@master ~]# salt 'master' grains.get ip4_interfaces:ens160
master:
    - 192.168.58.120
```
Target matching examples:
Match minions with Grains:
```
[root@master ~]# salt -G 'os:redhat' cmd.run 'uptime'    # show uptime on minions whose os grain is redhat
node4:
     15:37:25 up  4:43,  2 users,  load average: 0.17, 0.30, 0.32
node3:
     15:37:25 up  4:44,  2 users,  load average: 0.15, 0.25, 0.30
node2:
     15:37:25 up  4:51,  2 users,  load average: 0.21, 0.19, 0.22
master:
     15:37:25 up  4:51,  2 users,  load average: 0.91, 0.52, 0.42
```
Use Grains in the top file:
```
[root@master ~]# vim /srv/salt/base/top.sls
[root@master ~]# cat /srv/salt/base/top.sls
base:
  'os:RedHat':
    - match: grain
    - web.apache.install_a
```
The `- match: grain` line tells Salt that `'os:RedHat'` is a grain match rather than a minion ID glob.
There are two ways to customize Grains:
- In the minion configuration file: search for the `grains` section in /etc/salt/minion
- In a grains file created under /etc/salt, i.e. /etc/salt/grains (the recommended method)
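The first method looks roughly like this; the role and tier values are invented for illustration:

```yaml
# /etc/salt/minion (excerpt): static grains can be declared
# under the 'grains' key; restart salt-minion to apply
grains:
  roles:
    - webserver    # hypothetical value
  tier: test       # hypothetical value
```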
```
[root@master ~]# vim /etc/salt/grains
[root@master ~]# cat /etc/salt/grains
xu-grains: meng
[root@master ~]# systemctl restart salt-minion
[root@master ~]# salt '*' grains.get xu-grains
node4:
node2:
node3:
master:
    meng
```
Customize Grains without restarting:
```
[root@master ~]# vim /etc/salt/grains
meng-grains: xym like Sunny day = jay so xym = jay
[root@master ~]# salt '*' saltutil.sync_grains
node2:
master:
node3:
node4:
[root@master ~]# salt '*' grains.get meng-grains
node3:
node2:
master:
    xym like Sunny day = jay so xym = jay
node4:
```
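Once a custom grain is defined and synced, it can be used for targeting like any built-in grain. A sketch against the grain set earlier (`test.ping` is a standard Salt test function); this assumes a running master/minion setup:

```shell
# match only minions whose custom grain xu-grains equals meng
salt -G 'xu-grains:meng' test.ping
```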
Pillar
Pillar is also one of the most important SaltStack components. It is a data management center, often used together with states in large-scale configuration management. Pillar's main job in SaltStack is to store and define data needed during configuration management, such as software version numbers, user names, and passwords. Like Grains, it is defined and stored in YAML format.
The Master configuration file contains a Pillar settings section that defines the Pillar-related parameters:
```
#pillar_roots:
#  base:
#    - /srv/pillar
```
In the default base environment, Pillar's working directory is /srv/pillar. To define multiple Pillar working directories for different environments, simply modify this part of the configuration file.
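For example, adding a second environment only requires another entry under `pillar_roots`; the dev path below is a hypothetical addition:

```yaml
# /etc/salt/master (excerpt)
pillar_roots:
  base:
    - /srv/pillar/base
  dev:
    - /srv/pillar/dev    # hypothetical extra environment
```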
Pillar features:
- Data can be defined for specified minions
- Only the targeted minion can see the data defined for it
- Configured in the master configuration file
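The per-minion visibility comes from Pillar's own top file: a minion only receives the pillar sls files assigned to its target. A minimal sketch, assuming a web.sls and a hypothetical users.sls under /srv/pillar/base:

```yaml
# /srv/pillar/base/top.sls (sketch)
base:
  'master':
    - web      # only master can see data from web.sls
  'node2':
    - users    # hypothetical users.sls, visible to node2 only
```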
View pillar information:
```
[root@master ~]# salt '*' pillar.items
node3:
    ----------
master:
    ----------
node4:
    ----------
node2:
    ----------
```
By default Pillar contains no data. To expose the master's configuration options as pillar data, uncomment the `pillar_opts` setting in the master configuration file and set it to `True`.
```
[root@master base]# vim /etc/salt/master
# master config file that can then be used on minions.
pillar_opts: True
....
[root@master ~]# systemctl restart salt-master
[root@master ~]# salt 'master' pillar.items
master:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:
            1
        auto_accept:
            False
        azurefs_update_interval:
            60
        cache:
            localfs
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        clean_dynamic_modules:
            True
        cli_summary:
            False
        client_acl_verify:
            True
        cluster_mode:
            False
        con_cache:
            False
        ......
```
Pillar custom data:
Locate `pillar_roots` in the master configuration file to see where pillar data is stored:
```
[root@master ~]# vim /etc/salt/master
......                                     # N lines omitted
#####          Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
pillar_roots:
  base:
    - /srv/pillar/base

[root@master ~]# mkdir -p /srv/pillar/base
[root@master ~]# systemctl restart salt-master
[root@master ~]# vim /srv/pillar/base/web.sls
[root@master ~]# ls /srv/pillar/base
top.sls  web.sls
[root@master ~]# cat /srv/pillar/base/web.sls
{% if grains['os'] == 'RedHat' %}
web: httpd
{% elif grains['os'] == 'CentOS' %}
web: nginx
{% endif %}
[root@master ~]# vim /srv/pillar/base/top.sls
base:          # environment
  'master':    # target
    - web      # reference web.sls or web/init.sls
[root@master ~]# salt '*' pillar.items
master:
    ----------
    web:
        httpd
node3:
    ----------
node2:
    ----------
node4:
    ----------
```
Modify the apache state file under /srv/salt to reference the pillar data:
```
[root@master ~]# cat /srv/salt/base/web/apache/install_a.sls
apache-install:
  pkg.installed:
    - name: {{ pillar['web'] }}

apache-service:
  service.running:
    - name: {{ pillar['web'] }}
    - enable: True
```
View the top.sls file:
```
[root@master ~]# cat /srv/salt/base/top.sls
base:
  master:
    - web.apache.install_a
```
Run the highstate:
```
[root@master ~]# salt 'master' state.highstate
master:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: The following packages were installed/updated: httpd
     Started: 17:04:17.437885
    Duration: 25946.149 ms
     Changes:
              ----------
              apr:
                  ----------
                  new:
                      1.6.3-11.el8
                  old:
              apr-util:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-bdb:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-openssl:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              centos-logos-httpd:
                  ----------
                  new:
                      85.8-1.el8
                  old:
              httpd:
                  ----------
                  new:
                      2.4.37-39.module_el8.4.0+950+0577e6ac.1
                  old:
              httpd-filesystem:
                  ----------
                  new:
                      2.4.37-39.module_el8.4.0+950+0577e6ac.1
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.4.37-39.module_el8.4.0+950+0577e6ac.1
                  old:
              mailcap:
                  ----------
                  new:
                      2.1.48-3.el8
                  old:
              mod_http2:
                  ----------
                  new:
                      1.15.7-3.module_el8.4.0+778+c970deab
                  old:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd has been enabled, and is running
     Started: 17:04:43.422896
    Duration: 1558.777 ms
     Changes:
              ----------
              httpd:
                  True

Summary for master
------------
Succeeded: 2 (changed=2)
Failed:    0
------------
Total states run:     2
Total run time:  27.505 s
```
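One caveat with `{{ pillar['web'] }}`: it raises a render error on minions where the key is undefined (here, everything except master). `salt['pillar.get']` with a default value is the safer lookup; a hypothetical variant of install_a.sls:

```yaml
# falls back to 'httpd' when the 'web' pillar key is missing
apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('web', 'httpd') }}
```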
Differences between Grains and Pillar
 | Storage location | Type | How obtained | Application scenarios
---|---|---|---|---
Grains | minion | static | collected at minion startup; a refresh avoids restarting the minion service | 1. information query 2. target matching on the command line 3. target matching in the top file 4. target matching in templates
Pillar | master | dynamic | specified on the master; takes effect in real time | 1. target matching 2. sensitive data configuration