This article comes from: Everyday Learning IT - Knowledge Base
Cluster machines
IP address | Host |
---|---|
192.168.99.181 | ambari-mirror |
192.168.99.101 | ambari-server |
192.168.99.106 | ambari-agent1 |
192.168.99.107 | ambari-agent2 |
Create the management user hadoop (run on all nodes)
useradd hadoop
Modify the hosts file (run on all nodes)
echo -e '192.168.99.181 ambari-mirror\n192.168.99.101 ambari-server\n192.168.99.106 ambari-agent1\n192.168.99.107 ambari-agent2' >> /etc/hosts
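After appending the entries, each hostname should resolve locally; a quick sanity check (a sketch, hostnames taken from the table above):

```shell
# Check that every cluster hostname now resolves via /etc/hosts
for h in ambari-mirror ambari-server ambari-agent1 ambari-agent2; do
  getent hosts "$h" > /dev/null && echo "$h: ok" || echo "$h: NOT RESOLVED"
done
```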
Grant the hadoop user passwordless sudo (run on all nodes)
Run visudo and add the following line after line 99:
hadoop ALL=(ALL) NOPASSWD: ALL
Configure passwordless SSH login for the hadoop user (run on the ambari-mirror node)
Switch to the hadoop user
su hadoop
Generate a key pair (press Enter through all the prompts)
ssh-keygen
Append the public key to authorized_keys
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Restrict key permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Copy the keys to the other nodes (as root, run on the ambari-mirror node)
sudo scp -r /home/hadoop/.ssh/ root@ambari-server:/home/hadoop/
sudo scp -r /home/hadoop/.ssh/ root@ambari-agent1:/home/hadoop/
sudo scp -r /home/hadoop/.ssh/ root@ambari-agent2:/home/hadoop/
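Note that copying as root leaves /home/hadoop/.ssh on the target nodes owned by root, and sshd will refuse keys the hadoop user cannot read. A hedged fix, assuming root can SSH from ambari-mirror to the other nodes:

```shell
# Restore ownership of the copied .ssh directory on each target node
for node in ambari-server ambari-agent1 ambari-agent2; do
  sudo ssh "root@$node" 'chown -R hadoop:hadoop /home/hadoop/.ssh'
done
```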
NTP service (run on all nodes)
Install the service: sudo yum install ntp
Enable at boot: sudo systemctl enable ntpd
Start the service: sudo systemctl start ntpd
Firewall (run on all nodes)
Stop the firewall: sudo systemctl stop firewalld
Disable SELinux (run on all nodes)
Check whether SELinux is enabled with /usr/sbin/sestatus -v
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Reboot the machine with sudo reboot, then log back in as the hadoop user
Set UMASK (as root, run on all nodes)
echo 'umask 0022' >> /etc/profile && source /etc/profile
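To confirm the setting, the umask builtin prints the current mask; a minimal check (a new login shell picks up /etc/profile):

```shell
# 0022 means new directories default to 755 and new files to 644
bash -lc 'umask'
```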
Install dependency packages (as the hadoop user, run on the ambari-mirror node)
sudo yum install yum-utils createrepo httpd yum-plugin-priorities
Set up pluginconf (as root, run on the ambari-mirror node)
echo 'gpgcheck=0' >> /etc/yum/pluginconf.d/priorities.conf
Download the ambari installation source (as the hadoop user, run on the ambari-mirror node)
sudo wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
sudo wget -nv http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0/hdp.repo -O /etc/yum.repos.d/HDP.repo
Check the availability of the installation sources (as the hadoop user, run on the ambari-mirror node)
yum repolist
Make sure the ambari and HDP repos are listed
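This can also be checked non-interactively; a sketch that greps the repolist for the repo ids (ids taken from the reposync commands used later in this guide):

```shell
# Fail loudly if neither repo id appears in the repolist output
if yum repolist | grep -Eq 'Updates-ambari-2\.2\.2\.0|HDP-2\.4\.2\.0'; then
  echo "repos present"
else
  echo "repos missing" >&2
fi
```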
Create the centos7 directory (as the hadoop user, run on the ambari-mirror node)
sudo mkdir -p /var/www/html/ambari/centos7 && cd /var/www/html/ambari/centos7/
Synchronize the ambari and HDP sources to local (as the hadoop user, run on the ambari-mirror node)
sudo reposync -r Updates-ambari-2.2.2.0
sudo reposync -r HDP-2.4.2.0
sudo reposync -r HDP-UTILS-1.1.0.20
Generate local repository metadata (as the hadoop user, run on the ambari-mirror node)
Create ambari metadata
sudo createrepo /var/www/html/ambari/centos7/Updates-ambari-2.2.2.0/
Spawning worker 0 with 8 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
Create hdp metadata
sudo createrepo /var/www/html/hdp/centos7/2.4.X/HDP-2.4.2.0/
Spawning worker 0 with 181 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
Create hdp util metadata
sudo createrepo /var/www/html/hdp/centos7/2.4.X/HDP-UTILS-1.1.0.20/
Spawning worker 0 with 44 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
Start the httpd service (as the hadoop user, run on the ambari-mirror node)
sudo service httpd start
Check Ambari source availability (as the hadoop user, run on the ambari-mirror node)
Open in a browser:
http://ambari-mirror/ambari/
http://ambari-mirror/hdp/
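Availability can also be checked from the command line; a sketch using curl (expects HTTP 200 from the httpd instance started above):

```shell
# Print the HTTP status code returned for each mirrored repo URL
for url in http://ambari-mirror/ambari/ http://ambari-mirror/hdp/; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  echo "$url -> $code"
done
```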