DBUtils for Database Operations

Summary: DBUtils is a compact, simple, and practical tool for database operations in Java. It encapsulates JDBC, simplifying its use so that you write less code. An introduction to the three core features of DBUtils: QueryRunner, which provides the API for executing SQL statements; ResultSetHan ...
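As a hedged illustration of the QueryRunner/ResultSetHandler pattern the summary describes (the DataSource wiring, table name, and DAO class are assumptions for the sketch, not part of the original article):

import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.MapListHandler;
import org.apache.commons.dbutils.handlers.ScalarHandler;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.List;
import java.util.Map;

public class UserDao {
    private final QueryRunner runner;

    public UserDao(DataSource dataSource) {
        // QueryRunner borrows and closes connections from the DataSource,
        // so there is no manual Connection/Statement/ResultSet handling.
        this.runner = new QueryRunner(dataSource);
    }

    // A ResultSetHandler turns the ResultSet into a Java object;
    // MapListHandler yields one Map per row, keyed by column name.
    public List<Map<String, Object>> findAdults() throws SQLException {
        return runner.query("SELECT id, name, age FROM user WHERE age >= ?",
                new MapListHandler(), 18);
    }

    // ScalarHandler extracts a single value (here the COUNT(*) result)
    public long countUsers() throws SQLException {
        Number n = runner.query("SELECT COUNT(*) FROM user", new ScalarHandler<Number>());
        return n.longValue();
    }

    // Inserts, updates, and deletes all go through update()
    public int rename(long id, String name) throws SQLException {
        return runner.update("UPDATE user SET name = ? WHERE id = ?", name, id);
    }
}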

Posted by Brentley_11 on Sat, 02 Feb 2019 14:12:16 -0800

Sqoop Incremental Import and Export, with Job Operation Examples

Incremental import. Append mode for an incrementing column: # First import [root@node222 ~]# /usr/local/sqoop-1.4.7/bin/sqoop import --connect jdbc:mysql://192.168.0.200:3306/sakila?useSSL=false --table actor --where "actor_id < 50" --username sakila -P --num-mappers 1 --target-dir /tmp/hive/sqoop/actor_all ... 18/10/1 ...
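Sqoop 1.x can also be driven programmatically through its Java entry point. A minimal sketch, assuming sqoop-1.4.7 and the MySQL JDBC driver are on the classpath; the connection details mirror the command above, while the password and --last-value are illustrative placeholders:

import org.apache.sqoop.Sqoop;

public class IncrementalImport {
    public static void main(String[] args) {
        // Same incremental-append import as the CLI example: rows whose
        // actor_id is greater than --last-value are appended to the target dir.
        String[] sqoopArgs = {
            "import",
            "--connect", "jdbc:mysql://192.168.0.200:3306/sakila?useSSL=false",
            "--table", "actor",
            "--username", "sakila",
            "--password", "changeme",          // placeholder; the CLI's -P prompts instead
            "--num-mappers", "1",
            "--target-dir", "/tmp/hive/sqoop/actor_all",
            "--incremental", "append",
            "--check-column", "actor_id",
            "--last-value", "49"               // placeholder: continue after the first import
        };
        // runTool parses the arguments and runs the import, returning an exit code
        System.exit(Sqoop.runTool(sqoopArgs));
    }
}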

Posted by zak on Sat, 02 Feb 2019 12:21:16 -0800

Operating HDFS with Java

After building a high-availability HDFS cluster, you can use Java in Eclipse to operate HDFS and read and write files. High-availability HDFS cluster build steps: https://blog.csdn.net/Chris_MZJ/article/details/83033471 Connecting to HDFS with Eclipse: 1. Place hadoop-eclipse-plugin-2.6.0.rar in the installation directory of ...
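Independent of the Eclipse plugin, the basic read/write flow against an HA cluster looks roughly like this. The "mycluster" nameservice and the file path are placeholders; in practice the nameservice resolution comes from the core-site.xml/hdfs-site.xml on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class HdfsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder nameservice ID for the HA cluster
        conf.set("fs.defaultFS", "hdfs://mycluster");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/tmp/demo.txt");

            // Write a file (second argument: overwrite if it exists)
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }
}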

Posted by mfos on Sat, 02 Feb 2019 09:06:15 -0800

Hive Integration with HBase in Detail

Reproduced from: https://www.cnblogs.com/MOBIN/p/5704001.html 1. Create HBase tables from Hive. Create a Hive table pointing to HBase using an HQL statement: CREATE TABLE hbase_table_1(key int, value string) //Table name hbase_table_1 in Hive STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' //Designated Storage P ...
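The same DDL can be issued from Java through the Hive JDBC driver. A sketch, assuming a HiveServer2 at localhost:10000 (a placeholder) and the classic cf1:val/xyz column mapping used in the referenced article:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveHBaseTable {
    public static void main(String[] args) throws Exception {
        // Requires the hive-jdbc driver on the classpath; URL and credentials are placeholders
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {
            // Hive table hbase_table_1 backed by HBase table xyz: Hive's key column
            // maps to the HBase row key, value maps to column family cf1, qualifier val
            stmt.execute("CREATE TABLE hbase_table_1(key int, value string) "
                    + "STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' "
                    + "WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val') "
                    + "TBLPROPERTIES ('hbase.table.name' = 'xyz')");
        }
    }
}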

Posted by maxpagels on Sat, 02 Feb 2019 02:45:15 -0800

Spark Learning Notes (1) - Introduction to Spark, Cluster Installation

1 Spark Introduction. Spark is a fast, general-purpose, and scalable big data analysis engine. Born in 2009 at AMPLab, University of California, Berkeley, it was open-sourced in 2010, entered the Apache Incubator in June 2013, and became a top-level Apache project in February 2014. At present, the Spark ecosystem has developed into a collecti ...

Posted by All4172 on Sat, 02 Feb 2019 01:21:15 -0800

Building an SVN Server on CentOS

I. Installation and testing: [root@VM_0_10_centos ~]# yum install subversion Check the svn version: [root@VM_0_10_centos ~]# svnserve --version svnserve, version 1.7.14 (r1542130) compiled Apr 11 2018, 02:40:28 Copyright (C) 2013 The Apache Software Foundation. This software consists of contributions made by many people; see th ...

Posted by Jaxolotl on Fri, 01 Feb 2019 22:42:15 -0800

A Look at Storm's AssignmentDistributionService

Preface. This post mainly studies Storm's AssignmentDistributionService. AssignmentDistributionService storm-2.0.0/storm-server/src/main/java/org/apache/storm/nimbus/AssignmentDistributionService.java /** * A service for distributing master assignments to supervisors, this service makes the assignments notification * asynchronous. * * < ...
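The core idea in that javadoc, decoupling assignment notification from the caller with a bounded queue drained by a worker, can be sketched outside of Storm. This is an illustrative pattern only, not Storm's actual implementation:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch in the spirit of AssignmentDistributionService
public class AsyncDistributor {
    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public AsyncDistributor() {
        // The worker drains the queue and pushes assignments to supervisors,
        // so the caller (nimbus) never blocks on a slow or dead supervisor.
        worker.submit(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    queue.take().run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    /** Enqueue a notification for a node; returns immediately. */
    public boolean addAssignmentsForNode(String node, Runnable notify) {
        boolean accepted = queue.offer(notify);
        if (!accepted) {
            // Back-pressure policy: drop and log when the bounded queue is full
            System.err.println("Discarding assignment distribution for node " + node
                    + ": task queue is full");
        }
        return accepted;
    }

    public void close() {
        worker.shutdownNow();
    }
}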

Posted by SirChick on Fri, 01 Feb 2019 22:21:17 -0800

Fully Distributed Cluster (V) HBase-1.2.6.1 Installation Configuration

Environment information: Fully Distributed Cluster (I) Cluster Foundation Environment and zookeeper-3.4.10 Installation and Deployment. Hadoop cluster installation and configuration process: you need to deploy a Hadoop cluster before installing HBase, see Fully Distributed Cluster (II) Hadoop 2.6.5 Installation and Deployment. HBase Cluster Installatio ...
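Once the cluster is up, a quick way to verify it from Java is to list tables through the HBase 1.2 client API. A smoke-test sketch; the ZooKeeper quorum hosts are placeholders for the ensemble built in part (I):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder quorum: the zookeeper-3.4.10 ensemble from part (I)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Listing tables proves the client can reach HMaster via ZooKeeper
            for (TableName t : admin.listTableNames()) {
                System.out.println(t.getNameAsString());
            }
        }
    }
}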

Posted by MFHJoe on Fri, 01 Feb 2019 19:12:15 -0800

Notes on Shiro

A record of problems encountered with Shiro at work. Many of them are in fact explained in the official documentation, but with limited English reading skills we only discovered the explanations after solving the problems ourselves. Shiro annotations such as @RequiresGuest combine as an intersection of permissions. Check order: RequiresRoles, RequiresPermissions, RequiresAu ...
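For example, stacking the annotations means every one of them must pass, in the check order listed above. A small sketch; it assumes Shiro's annotation method interceptors are enabled (e.g. via Spring AOP), and the service class and permission strings are hypothetical:

import org.apache.shiro.authz.annotation.RequiresPermissions;
import org.apache.shiro.authz.annotation.RequiresRoles;

public class ReportService {

    // Intersection semantics: both annotations must pass. Roles are checked
    // first, then permissions, so a caller with the "admin" role but without
    // the "report:read" permission is still rejected.
    @RequiresRoles("admin")
    @RequiresPermissions("report:read")
    public String loadReport(long id) {
        return "report-" + id;
    }
}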

Posted by Chevy on Fri, 01 Feb 2019 08:09:16 -0800

Spark SQL Notes (3): Load and Save Functions and Spark SQL Functions

Load and save functions. Data loading (JSON file, JDBC) and saving (JSON, JDBC). The test code is as follows: package cn.xpleaf.bigdata.spark.scala.sql.p1 import java.util.Properties import org.apache.log4j.{Level, Logger} import org.apache.spark.{SparkConf, SparkContext} import org.apache.spark.sql.{SQLContext, SaveMode} /* ...
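The excerpt's test code is Scala; the same load/save flow with the Spark 1.x SQLContext API looks roughly like this in Java (the file path, JDBC URL, table name, and credentials are placeholders):

import java.util.Properties;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;

public class LoadSaveDemo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("LoadSaveDemo").setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Load: read a JSON file into a DataFrame
        DataFrame people = sqlContext.read().json("file:///tmp/people.json");

        // Save: write the same data to a JDBC table; SaveMode controls
        // what happens when the target table already exists
        Properties props = new Properties();
        props.setProperty("user", "root");
        props.setProperty("password", "root");
        people.write().mode(SaveMode.Overwrite)
              .jdbc("jdbc:mysql://localhost:3306/test", "people", props);

        sc.stop();
    }
}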

Posted by danielrs1 on Fri, 01 Feb 2019 00:09:16 -0800