Implementing basic HBase operations in Java

Overview: 1. Importing jar packages; 2. Testing; 3. Exception handling. First build HBase, then start the Zookeeper, Hadoop and HBase clusters. 1. Importing jar packages. Prerequisites: 1. CentOS 7; 2. Zookeeper cluster; 3. Hadoop 2.7.3 cluster; 4. HBase 2.0.0 cluster; 5. Eclipse. Build a Java project in Eclipse where you create a new lib fol ...

Posted by torrentmasta on Wed, 23 Jan 2019 19:27:12 -0800
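
The post above walks through the standard HBase Java client. As a rough sketch of the kind of basic operations it covers (the Zookeeper quorum, table name and column family below are hypothetical placeholders, not taken from the article):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseBasics {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Point the client at the cluster's Zookeeper quorum (placeholder hosts)
            conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("test_table"))) {
                // Write one cell: row key, column family, qualifier, value
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value1"));
                table.put(put);
                // Read the cell back and print its value
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(
                        result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"))));
            }
        }
    }

The hbase-client jar (matching the cluster's 2.0.0 version) and its dependencies are what would go into the lib folder the article describes.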

Flume data acquisition preparation

Flume is a highly available, reliable, distributed system provided by Cloudera for collecting, aggregating and transferring massive amounts of log data. Flume supports customizing various data senders in the log system to collect data, and it also provides the ability to do simple processing on the data and write it to various data rec ...

Posted by ron814 on Sat, 19 Jan 2019 07:21:13 -0800
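
For a sense of what this preparation amounts to in practice, a Flume agent is defined by a properties file wiring a source, a channel and a sink together. A minimal hypothetical example (the agent name, file paths and HDFS URL are all placeholders, not from the article):

    # Tail a local log file into HDFS; every name and path here is a placeholder
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # Source: follow an application log
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/app.log
    a1.sources.r1.channels = c1

    # Channel: buffer events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000

    # Sink: write events out to HDFS
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs://namenode:9000/flume/events
    a1.sinks.k1.channel = c1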

Importing data from MySQL into HDFS, HBase and Hive with Sqoop

The first part is a reprinted article that introduces the ways and means of transferring data from MySQL to HDFS, HBase and Hive. The following is an example from my own practical projects of importing MySQL data into HDFS, for readers' reference. 1. Testing the MySQL connection: bin/sqoop list-databases --connect jdbc:mysql://192.16 ...

Posted by varun_146100 on Mon, 07 Jan 2019 12:06:09 -0800
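
For context, list-databases only verifies connectivity; an actual import into HDFS looks roughly like the following (the JDBC URL, credentials, table and target directory below are placeholders, not values from the article):

    bin/sqoop import \
      --connect jdbc:mysql://192.168.1.100:3306/testdb \
      --username root \
      --password 123456 \
      --table users \
      --target-dir /user/hadoop/users \
      --num-mappers 1

Swapping --target-dir for --hbase-table with --column-family, or adding --hive-import, redirects the same import to HBase or Hive respectively.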

Cloudera Manager HBase Thrift interface Go/Python client

Background: a recent requirement was to write a query interface for data stored in HBase on a Hadoop cluster built with CDH. I have always been a firm Pythoner (actually just lazy), but this year, after gradually getting to know and experimenting with Go, I find it very much to my taste. On top of the company's painful operation and maintenance ...

Posted by supergrame on Sun, 23 Dec 2018 03:24:06 -0800

HBase Day 1 (Introduction to HBase, Environment Setup, HBase Shell Commands)

Why use HBase? HBase is known as the Hadoop database. Its design comes from Google's Bigtable paper (a NoSQL database built on GFS). HDFS supports storing massive amounts of data, but it does not support record-level modification and does not support real-time access to that data. Generally, if you want to randomly read and write la ...

Posted by kanth1 on Fri, 21 Dec 2018 13:36:05 -0800
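
To give a flavor of the shell commands the post covers, a typical create/put/get/scan round trip (the table and column-family names are made up):

    create 'student', 'info'            # table with one column family
    put 'student', 'row1', 'info:name', 'Tom'
    get 'student', 'row1'
    scan 'student'
    disable 'student'                   # a table must be disabled before dropping
    drop 'student'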