Learning Manifest, ClassTag, TypeTag in Scala
Introduction to Manifest
Manifest is a feature introduced in Scala 2.8 that allows the compiler to make generic type information available at runtime. On the JVM, a generic type parameter T is "erased" at runtime and the compiler treats T as an Object, so the concrete type of T is not available; in order to get the ...
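As a quick illustration (a minimal sketch, not from the post; in current Scala a ClassTag context bound is the usual successor to Manifest), recovering enough runtime type information to build an Array[T]:

import scala.reflect.ClassTag

// Without the ClassTag, Array(elems: _*) would not compile: T is erased,
// so the runtime could not choose the right underlying array class.
def makeArray[T: ClassTag](elems: T*): Array[T] = Array(elems: _*)

makeArray(1, 2, 3)   // Array[Int] = Array(1, 2, 3)
makeArray("a", "b")  // Array[String] = Array(a, b)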
Posted by SheepWoolie on Sun, 14 Apr 2019 16:21:32 -0700
Development Environment Configuration and Common Software Installation after Installing Ubuntu 16.04
After installing Ubuntu 16.04: 1. Install the commonly used software: Sogou Input Method + the Atom editor + the Chrome browser + the VLC video player + the GIMP image editor + the RecordMyDesktop screen recorder. 2. Configure the development environment: JDK environment configuration + Scala environment configuration + Node.js environment configuration + ...
Posted by jek on Sun, 07 Apr 2019 10:54:30 -0700
Analysis of Kafka Log Compaction
Reading the Kafka documentation recently, I found that Kafka has a Log Compaction feature we had not noticed before, one with high potential practical value.
What is Log Compaction
Every record in Kafka is a Key/Value pair. Records are stored on disk and are not preserved permanently; instead, Kafka deletes the earliest written ...
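To make the idea concrete, here is a minimal sketch (assumed broker address and topic name; the AdminClient API shown comes from newer Kafka client releases than the 0.10-era posts on this page) of creating a topic on which compaction keeps only the latest value per key:

import java.util.Properties
import org.apache.kafka.clients.admin.{AdminClient, NewTopic}
import scala.collection.JavaConverters._

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // assumed broker address

val admin = AdminClient.create(props)
val compacted = new NewTopic("user-profiles", 3, 1.toShort) // hypothetical topic
  .configs(Map("cleanup.policy" -> "compact").asJava)       // enable log compaction
admin.createTopics(List(compacted).asJava).all().get()
admin.close()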
Posted by AmandaF on Sun, 07 Apr 2019 07:12:31 -0700
Scala: Arrays and Maps
1. Arrays
1) Create arrays
Create fixed-length arrays and variable-length arrays
//Create a fixed-length array with the new keyword, specifying the element type Int and length 5; the five slots are initialized to the default value of the element type, e.g. for Int the five default values ar ...
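A minimal sketch of the two array flavours the excerpt describes (not the post's full listing):

import scala.collection.mutable.ArrayBuffer

val fixed = new Array[Int](5)     // fixed length 5; each Int slot defaults to 0
fixed(0) = 42                     // update in place by index

val variable = ArrayBuffer[Int]() // variable-length array
variable += 1                     // append a single element
variable ++= Seq(2, 3)            // append a whole collection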
Posted by helppy on Fri, 29 Mar 2019 23:57:30 -0700
Construction of a Distributed Kafka Cluster
Based on the latest release at the time, kafka_2.11-0.10.1.0, this post describes how to build a distributed Kafka cluster environment. Server list:
172.31.10.1
172.31.10.2
172.31.10.3
1. Download the kafka installation package
Visit the Kafka website at http://kafka.apache.org/.
Click the "Download" button on the left
Select the corres ...
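After unpacking the package on each of the three servers, the key step is giving every broker its own server.properties. A hedged sketch of typical settings for this version (assumed log directory; the post's own configuration is truncated above):

# server.properties on 172.31.10.1 (broker.id and listener differ per host)
broker.id=1
listeners=PLAINTEXT://172.31.10.1:9092
log.dirs=/data/kafka-logs
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181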
Posted by jkkenzie on Wed, 20 Mar 2019 23:15:34 -0700
Akka (42): Http: authentication, authorization and use of raw headers
When we use Akka-http as a database data exchange tool, the data is carried in the entity as a Source[ROW,]. In many cases we may need to transfer additional information besides the data itself, such as how the data should be processed. We can pass such custom messages through the raw headers of Akka-http, which can be achieved through raw-header fil ...
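For flavour, a minimal sketch (assumed header name and route; standard Akka HTTP API, not the author's filter code) of setting a raw header on the client and reading it on the server:

import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.model.headers.RawHeader
import akka.http.scaladsl.server.Directives._

// Client side: attach a custom raw header to the request.
val req = HttpRequest(uri = "/rows").addHeader(RawHeader("X-Action", "upsert"))

// Server side: extract the header value, if present.
val route =
  optionalHeaderValueByName("X-Action") { action =>
    complete(s"requested action: ${action.getOrElse("none")}")
  }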
Posted by lordvader on Fri, 08 Feb 2019 02:33:17 -0800
My first ML algorithm (linear regression with gradient descent)
Code:
object Main extends App {
  // Hypothesis: Hθ(x) = θ0 + θ1 * x
  def Hθ(θ0: Double, θ1: Double, x: Double): Double = θ0 + θ1 * x

  // Cost function: J(θ0, θ1) = (1 / 2m) * Σ (Hθ(xᵢ) - yᵢ)²
  def costFunction(θ0: Double, θ1: Double): Double = {
    val m = data.size
    var squareSum = 0.0
    data.foreach { case (x, y) =>
      squareSum += (Hθ(θ0, θ1, x) - y) * (Hθ(θ0, θ1, x) - y)
    }
    squareSum / (2.0 * m)
  }

  val data = List((3, 4), (2, 1), (4, 3), (0, 1)) //Data sets ...
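  // The excerpt cuts off above; what follows is a minimal sketch of the batch
  // gradient-descent step the title refers to, not the author's truncated code.
  // Update rule (α is the learning rate):
  //   θ0 := θ0 - α/m * Σ (Hθ(xᵢ) - yᵢ)
  //   θ1 := θ1 - α/m * Σ (Hθ(xᵢ) - yᵢ) * xᵢ
  def step(θ0: Double, θ1: Double, α: Double): (Double, Double) = {
    val m  = data.size.toDouble
    val g0 = data.map { case (x, y) => Hθ(θ0, θ1, x) - y }.sum / m
    val g1 = data.map { case (x, y) => (Hθ(θ0, θ1, x) - y) * x }.sum / m
    (θ0 - α * g0, θ1 - α * g1)
  }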
Posted by Wolfsoap on Thu, 07 Feb 2019 14:45:16 -0800
GeoTools: writing shp to SQL, loading shp data into Oracle Spatial
Summary
When using Oracle Spatial it is hard to avoid the problem of importing SHP files. The shp2sdo tool exists, but we are not used to it, so this post describes how to implement the shp-to-SQL conversion with GeoTools.
Result
Implementation code
package com.lzugis.geotools;
import com.lzugis.CommonMethod;
impo ...
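The post's own code is truncated above; as a hedged sketch (assumed file path; the standard GeoTools shapefile API, not the author's CommonMethod helpers), reading the features that would then be serialized into SQL:

import java.io.File
import org.geotools.data.shapefile.ShapefileDataStore

val store = new ShapefileDataStore(new File("/tmp/demo.shp").toURI.toURL)
val it    = store.getFeatureSource.getFeatures.features()
while (it.hasNext) {
  val f = it.next()
  // Each geometry would be turned into an INSERT ... SDO_GEOMETRY(...) statement.
  println(f.getDefaultGeometry)
}
it.close()
store.dispose()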
Posted by NuLL[PL] on Tue, 05 Feb 2019 21:00:16 -0800
Spark Learning Notes (3) - Spark Operators
1 Spark Operators
1.1 Operators fall into two categories
1.1.1 Transformation
A Transformation is executed lazily: it only records metadata (the lineage), and computation actually starts when an Action triggers the job.
1.1.2 Action
An Action triggers the actual computation and returns a result to the driver (or writes it to storage).
1.2 Two Ways to Create RDD
An RDD can be created from a file system supported by HDFS. The ...
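A minimal sketch of the distinction (assuming a SparkContext named sc, as in the REPL examples below, and a hypothetical HDFS path):

val rdd   = sc.textFile("hdfs:///data/input.txt") // create an RDD from HDFS-backed storage
val words = rdd.flatMap(_.split(" "))             // Transformation: lazy, only records lineage
val pairs = words.map((_, 1))                     // Transformation: still nothing computed
val total = pairs.count()                         // Action: triggers the actual job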
Posted by gauravupadhyaya on Thu, 31 Jan 2019 22:39:16 -0800
Introduction to Spark 4 (Advanced RDD Operators 1)
1. mapPartitionsWithIndex
Create an RDD with the number of partitions set to 2
scala> val rdd1 = sc.parallelize(List(1,2,3,4,5,6,7),2)
View the partitions
scala> rdd1.partitions
The output is as follows:
res0: Array[org.apache.spark.Partition] = Array(org.apache.spark.rdd.ParallelCollectionPartition@691, org.apache.spark.rdd.ParallelColle ...
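Building on rdd1 above, a minimal sketch of mapPartitionsWithIndex itself: tag every element with the index of the partition holding it.

scala> val withIndex = rdd1.mapPartitionsWithIndex((idx, it) => it.map(x => s"partition $idx -> $x"))
scala> withIndex.collect().foreach(println)
// e.g. "partition 0 -> 1" ... "partition 1 -> 7" (the exact split depends on the partitioner)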
Posted by r4ck4 on Wed, 30 Jan 2019 19:06:16 -0800