hive-1.2.1 Installation and Simple Use
Hive only needs to be installed on a single node.
1. Upload the tar package
2. Extract it
tar -zxvf hive-1.2.1.tar.gz -C /apps/
3. Install the MySQL database (switch to the root user). There is no restriction on where it is installed, as long as the node can connect to the Hadoop cluster.
4. Configure Hive (a sketch of this step follows below)
(a) Configure the HIVE_HOME environment variable: vi conf/hive-e ...
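As a rough illustration of step 4 (not the author's exact file, which is cut off above), one common way is to export HIVE_HOME globally; the extraction directory name below is an assumption based on step 2:
# Hedged sketch: add HIVE_HOME to /etc/profile (assumes Hive was extracted to /apps/hive-1.2.1)
cat >> /etc/profile <<'EOF'
export HIVE_HOME=/apps/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin
EOF
source /etc/profile
echo $HIVE_HOME   # quick sanity check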
Posted by gyash on Thu, 26 Mar 2020 21:12:15 -0700
Single-machine Hadoop HDFS installation and setup record
**System configuration**
**Specification:** 1 vCPU | 2 GB | s6.medium.2
**Image:** Ubuntu 18.04 server 64-bit
**User:** create a halo user on Ubuntu
Prerequisite software: (1) a Hadoop installation package (the CDH builds from the Cloudera site are recommended), (2) Java 1.8+, (3) ssh
1. Install Java
First download the Linux version of the JDK, jdk-8u161-linux-x64.tar.gz
Un ...
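A minimal sketch of this step, assuming the tarball is extracted to /usr/local and JAVA_HOME is set in the halo user's ~/.bashrc (the target directory is an assumption, not from the article):
tar -zxvf jdk-8u161-linux-x64.tar.gz -C /usr/local/   # extract the JDK
cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8.0_161
export PATH=$PATH:$JAVA_HOME/bin
EOF
source ~/.bashrc
java -version                                         # verify the JDK is on PATH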
Posted by cbn_noodles on Wed, 18 Mar 2020 19:38:36 -0700
Security settings when building a cluster on Baidu cloud server
After moving the Hadoop cluster from local virtual machines to a Baidu cloud server, I noticed many unknown IP addresses constantly logging in to my server. Locally the firewall was simply turned off, but in a real deployment that is far too unsafe, so I spent two hours setting up the firewall of t ...
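The post is truncated before the actual rules, so the following is only a generic illustration (not the author's setup) of restricting SSH to a trusted address with ufw; the tool choice and the placeholder IP are assumptions:
sudo ufw default deny incoming                              # drop everything not explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp   # 203.0.113.10 is a placeholder address
sudo ufw enable
sudo ufw status verbose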
Posted by wonderman on Sun, 15 Mar 2020 02:23:32 -0700
Layout modes of the Ozone block Chunk file
Article directory
Preface
Original layout of the Ozone DataNode chunk file
Ozone DataNode chunk layouts: file-per-chunk and file-per-block
Actual storage comparison between the old and new Chunk layouts
References
Preface
In Ozone, the file object data is organized in th ...
Posted by giovo on Sat, 14 Mar 2020 22:58:45 -0700
Flume installation deployment and cases
1. Installation address
1) Flume official website address
http://flume.apache.org/
2) Document view address
http://flume.apache.org/FlumeUserGuide.html
3) Download address
http://archive.apache.org/dist/flume/
2. Installation and deployment
1) Upload apache-flume-1.7.0-bin.tar.gz to the /opt/softwa ...
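The upload path is truncated above; as a hedged sketch (directory names and the JDK path are assumptions), unpacking Flume and pointing it at the local JDK might look like this:
tar -zxvf /opt/software/apache-flume-1.7.0-bin.tar.gz -C /opt/module/   # assumed directories
cd /opt/module/apache-flume-1.7.0-bin
cp conf/flume-env.sh.template conf/flume-env.sh
echo 'export JAVA_HOME=/usr/local/jdk1.8.0_161' >> conf/flume-env.sh    # assumed JDK path
bin/flume-ng version                                                    # verify the installation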
Posted by The14thGOD on Fri, 13 Mar 2020 19:40:57 -0700
Deep learning series: CS231n Assignment 1, SVM
A note at the start: I have finally worked through the SVM assignment again by following other people's solutions; reproducing it on my own is now up to me. I will attach links to the write-ups that explained it best to me in the comments section. Thanks to them.
Recently I have been facing the pressure of finding an internship, and the employment pressure is still very high, so what can I d ...
Posted by cornelombaard on Fri, 13 Mar 2020 18:51:52 -0700
Sword Point Data Warehouse: Shell Commands, Part 3
1. Review of the last lesson
2. Basic Linux commands, part 3
2.1 Users and user groups
2.2 Personal environment variables (choosing between .bashrc and .bash_profile), global environment variables (/etc/profile), and aliases (see the sketch after this list)
2.3 Using the history command
2.4 Using the delete command
3. Homework
1. Review of the last ...
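As a quick illustration of items 2.2 to 2.4 (not taken from the course material; the variable names are made up):
echo 'export DW_HOME=$HOME/dw' >> ~/.bashrc                                  # per-user variable, this user only
echo "alias ll='ls -l'" >> ~/.bashrc                                         # per-user alias
source ~/.bashrc
sudo sh -c 'echo "export HADOOP_LOG_DIR=/var/log/hadoop" >> /etc/profile'    # global variable, all users
history | tail -n 20                                                         # list the 20 most recent commands
rm -i old_file.txt                                                           # delete with a confirmation prompt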
Posted by Arsench on Thu, 12 Mar 2020 03:15:32 -0700
MapReduce implements wordcount statistics
The generics of the Mapper class we extend
public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable>
LongWritable -> starting offset of the line
Text -> input text
Text -> output text
LongWritable -> count
Of the four generic types, the first two specify the type of the mapper's input data: the key is the type of the input key, ...
Posted by Okami on Thu, 05 Mar 2020 21:21:55 -0800
Write a simple MR program and run it in the cluster!! (wordcount)
Preface
Implement a handwritten WC program and package it to run on the cluster (a build-and-submit sketch follows the POM snippet below).
Create a Maven project and import the POM
Project directory
Import the POM
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/X ...
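The POM is cut off above. Assuming the build produces a jar named wc-1.0-SNAPSHOT.jar with a driver class com.example.wc.WCDriver (both names are assumptions), building and submitting to the cluster might look like this:
mvn clean package                                                 # build the jar locally
scp target/wc-1.0-SNAPSHOT.jar user@hadoop101:/opt/jobs/wc.jar    # copy it to a cluster node (assumed host and path)
hadoop fs -mkdir -p /wordcount/input
hadoop fs -put words.txt /wordcount/input                         # assumed sample input file
hadoop jar /opt/jobs/wc.jar com.example.wc.WCDriver /wordcount/input /wordcount/output
hadoop fs -cat /wordcount/output/part-r-00000                     # inspect the result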
Posted by RCS on Thu, 05 Mar 2020 04:17:04 -0800
Configuration and startup of a Hadoop pseudo-distributed environment
1. Environment preparation
On a Linux machine, install the Hadoop runtime environment. For the installation procedure, see: Setting up the Hadoop runtime environment
2. Start HDFS and run MapReduce
2.1 Configure the cluster
1. Configure hadoop-env.sh
Get the JDK installation path on the Linux system:
[root@hadoop101 ~]# echo $JAVA_HOME
/op ...
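A hedged sketch of the rest of this step, assuming commands are run from $HADOOP_HOME and core-site.xml/hdfs-site.xml are already configured; the JDK path below is a placeholder, not the value truncated above:
sed -i 's#^export JAVA_HOME=.*#export JAVA_HOME=/usr/local/jdk1.8.0_161#' etc/hadoop/hadoop-env.sh
bin/hdfs namenode -format        # format the NameNode, only on the very first start
sbin/start-dfs.sh                # start NameNode, DataNode and SecondaryNameNode
jps                              # verify the daemons are running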
Posted by spooke2k on Tue, 25 Feb 2020 19:31:27 -0800