HDFS principle and operation
hadoop fs                                  # show usage for all file system commands
hadoop fs -ls /                            # list the contents of the HDFS root directory
hadoop fs -lsr                             # recursive listing (deprecated; use -ls -R)
hadoop fs -mkdir /user/hadoop              # create a directory in HDFS
hadoop fs -put a.txt /user/hadoop/         # upload a local file to HDFS
hadoop fs -get /user/hadoop/a.txt /        # download an HDFS file to the local file system
hadoop fs -cp src dst                      # copy within HDFS
hadoop fs -mv src dst                      # move/rename within HDFS
hadoop fs -cat /user/hadoop/a.txt          # print a file's contents
hadoop fs -rm /user/hadoop/a.txt           # delete a file
hadoop fs -rmr /user/hadoop/a.txt          # recursive delete (deprecated; use -rm -r)
hadoop fs -text /user/hadoop/a.txt         # print a file as text (also handles compressed/sequence files)
hadoop fs -copyFromL ...
Posted by JamieinNH on Sat, 16 Oct 2021 10:26:18 -0700
hadoop learning notes
Introduction to hadoop
Hadoop is a distributed system infrastructure developed by the Apache Foundation. Users can develop distributed programs without knowing the underlying details of the distributed layer, taking full advantage of the cluster's power for high-speed computation and storage. Hadoop implements a Distributed File System, one of w ...
Posted by dineshthakur on Tue, 12 Oct 2021 10:37:51 -0700
Centos6.8 + Hadoop 3.2 + JDK1.8 distributed cluster installation process (real)
When I started learning hadoop, the video course didn't even cover installing a hadoop distributed cluster; it jumped straight into the concepts. After wrestling with it for a few weeks I finally got it working. There are too many small problems, the materials are messy, and every blog says something different, so I had to summarize things and find out the ...
Posted by lordfrikk on Fri, 08 Oct 2021 20:16:11 -0700
Linux custom script integration
1. Cluster distribution file
Application scenario
We often need to copy newly created or modified files to the same directory on every node, and running the scp command over and over for each node is not very convenient.
Basic knowledge
(a) Plain copy with the rsync command:
[root@bigdata801 hadoop-3.3.1]# rsync -av /opt/module/hadoop-3.3.1/ bigdata802:/opt/module/h ...
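A minimal sketch of a distribution script built around the same rsync call (the script name xsync and the host bigdata803 are assumptions; bigdata802 comes from the command above):
#!/bin/bash
# xsync: copy the given files/directories to the same path on every other node
# (the host list below is an assumption; adjust it to your own cluster)
if [ $# -lt 1 ]; then
  echo "usage: xsync <file or directory> ..."
  exit 1
fi
for host in bigdata802 bigdata803; do
  echo "==== $host ===="
  for path in "$@"; do
    if [ -e "$path" ]; then
      pdir=$(cd -P "$(dirname "$path")" && pwd)   # absolute parent directory
      fname=$(basename "$path")
      ssh "$host" "mkdir -p $pdir"                # make sure the directory exists remotely
      rsync -av "$pdir/$fname" "$host:$pdir"      # incremental copy, preserving attributes
    else
      echo "$path does not exist"
    fi
  done
done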
Posted by Dimitri89 on Wed, 06 Oct 2021 16:56:51 -0700
2021 summer study day 25
preface
Time: August 13, 2021
Content:
Composition of HDFS components
Composition of YARN components
MapReduce architecture
HDFS operation commands
Fully distributed installation
1 Composition of HDFS components
[figure: composition of HDFS components; original image unavailable]
Posted by bad_goose on Sat, 02 Oct 2021 15:08:35 -0700
Large Data Platform Real-Time Number Warehouse from 0 to Built - 04 hadoop Installation Test
Summary
This post covers the installation and testing of hadoop.
Install and configure on server110, then synchronize to server111 and server112.
Environment: CentOS 7, JDK 1.8, hadoop-3.2.1
server110 192.168.1.110
server111 192.168.1.111
server112 192.168.1.112
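A small sketch of the /etc/hosts entries implied by the machine list above, so each node can reach the others by hostname:
# /etc/hosts on server110, server111 and server112
192.168.1.110 server110
192.168.1.111 server111
192.168.1.112 server112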
install
# decompress the hadoop tarball
[root@server110 software]# tar -xzvf hadoop ...
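The tarball name is cut off above; a typical continuation, sketched under the assumption that hadoop-3.2.1.tar.gz is unpacked into /opt/module (the target path and profile script name are assumptions, not taken from the original post):
# unpack and put hadoop on the PATH (paths are assumptions)
tar -xzvf hadoop-3.2.1.tar.gz -C /opt/module/
cat >> /etc/profile.d/hadoop_env.sh <<'EOF'
export HADOOP_HOME=/opt/module/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
source /etc/profile.d/hadoop_env.sh
hadoop version    # verify the installation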
Posted by eazyGen on Sat, 02 Oct 2021 10:14:39 -0700
HDFS operation and command introduction
Common HDFS commands
<path> ...      one or more paths in HDFS; if not specified, defaults to /user/<currentUser>
<localsrc> ...  one or more paths on the local file system
<dst>           target path in HDFS
view help
Command: hdfs dfs -help [cmd ...]
Parameters:
cmd... One or more commands to query
Create directory
Command: hd ...
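The command above is cut off in the excerpt; as a brief usage sketch, here is the help command from earlier together with the standard HDFS directory-creation command (the path /user/hadoop/input is a placeholder):
hdfs dfs -help mkdir                     # show help for a single command
hdfs dfs -mkdir -p /user/hadoop/input    # create a directory (and any parents) in HDFS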
Posted by dr.wong on Sat, 02 Oct 2021 09:30:52 -0700
hadoop+zookeeper to set up a highly available cluster
Building a highly available hadoop cluster
Question: what are the problems with the existing cluster? In an HDFS cluster with a single NameNode (NN), if the NN fails, the entire HDFS cluster becomes unavailable (a centralized cluster). The solution is to configure multiple NNs.
But then the question arises again: which of these NNs provides service ...
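With automatic failover, ZooKeeper (through the ZKFC processes) elects one active NN while the others remain on standby. A small sketch of how you would check which NN is currently serving, assuming the HA service ids are nn1 and nn2 (placeholder names):
# query the HA state of each NameNode (service ids nn1/nn2 are placeholders)
hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2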
Posted by saf on Tue, 28 Sep 2021 11:07:59 -0700
Flume file configuration method and maven
Configuration introduction
in general:
source is used to receive data
sinks are used to send data
channel is used to cache data
The following are some related types that will be used later.
1. Source component types (used to receive data sent from somewhere)
Netcat Source: accepts request data from a data client, which is often ...
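As an illustration of the source/channel/sink wiring described above, here is a minimal agent configuration sketch modeled on the standard netcat-to-logger example (the agent name a1, file name example.conf, and port 44444 are placeholder values):
# example.conf - one agent with a netcat source, memory channel and logger sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# netcat source: listens on a TCP port and turns each line into an event
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# logger sink: writes events to the agent log (useful for testing)
a1.sinks.k1.type = logger
# memory channel: buffers events between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# bind source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Such an agent would typically be started with: flume-ng agent --conf conf --conf-file example.conf --name a1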
Posted by burningkamikaze on Tue, 21 Sep 2021 15:54:33 -0700
02 Hadoop cluster construction
Hadoop cluster construction
1, Environment preparation (prepare a template machine)
1.1 template machine configuration - Hadoop 100
The template machine itself is kept unchanged; to make later cloning easier, new nodes are added by cloning it directly.
Virtual machine requirements: 4G memory, 50G hard disk, CentOS7, minimum installation
Here, Hadoop 100 i ...
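A sketch of the usual template-machine preparation on CentOS 7 (the IP address and the choice to disable the firewall follow common practice in such setups and are assumptions, not taken from the truncated text):
# on the template machine: fix the hostname and host mapping (values are assumptions)
hostnamectl set-hostname hadoop100
cat >> /etc/hosts <<'EOF'
192.168.10.100 hadoop100
EOF
# stop and disable the firewall so cluster nodes can reach each other
systemctl stop firewalld
systemctl disable firewalld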
Posted by omarh2005 on Tue, 21 Sep 2021 02:16:00 -0700