OpenCV-Python uses the OCR handwriting dataset to run SVM | 56
Goal
In this chapter, we will recognize handwritten digits again, but using SVM instead of kNN.
Recognizing handwritten digits
In kNN, we used pixel intensity directly as the feature vector. This time we will use the Histogram of Oriented Gradients (HOG) as the feature vector.
Here, we use the second-order moments to correct the skew of the image ...
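As a concrete illustration, here is a minimal sketch of the moment-based deskew step in the spirit of the OpenCV tutorial this article follows; SZ = 20 assumes the 20x20 digit cells of OpenCV's digits.png sample:

import numpy as np
import cv2 as cv

SZ = 20  # assumed size of each digit cell (20x20 in OpenCV's digits.png)

def deskew(img):
    # Estimate skew from the second-order central moments mu11/mu02,
    # then undo it with an affine warp.
    m = cv.moments(img)
    if abs(m['mu02']) < 1e-2:
        return img.copy()  # almost no variance: nothing to correct
    skew = m['mu11'] / m['mu02']
    M = np.float32([[1, skew, -0.5 * SZ * skew],
                    [0, 1, 0]])
    return cv.warpAffine(img, M, (SZ, SZ),
                         flags=cv.WARP_INVERSE_MAP | cv.INTER_LINEAR)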
Posted by gurjit on Sun, 29 Mar 2020 00:05:02 -0700
HBase-1.2.1 cluster building
1. HBase is a distributed system; it relies on HDFS as its storage layer and on ZooKeeper to monitor the master and slave nodes.
2. Installation preparation is as follows:
1. HBase roles
HMaster (master node: one Active, one Standby)
HRegionServer (data nodes, multiple)
2. Dependencies
HDFS cluster and ZooKeeper cluster [started]
...
Posted by upnxwood16 on Fri, 27 Mar 2020 07:23:00 -0700
hive-1.2.1 Installation and Simple Use
Hive only needs to be installed on one node
1. Upload the tar package
2. Extract it
tar -zxvf hive-1.2.1.tar.gz -C /apps/
3. Install the MySQL database (switch to the root user); there is no restriction on where to install it, as long as the node can connect to the hadoop cluster
4. Configure Hive
(a) Configure the HIVE_HOME environment variable: vi conf/hive-e ...
Posted by gyash on Thu, 26 Mar 2020 21:12:15 -0700
[HQL] Using HQL to compute the daily change in order count, the count of orders without an address, and the count of orders still without an address after N days
The source table (order_tbl) is partitioned by dt, and a full snapshot of all orders is saved into a new partition every day
(partition 20200321)
order_id   address   trade_time   dt
1          Tianjin   20200320     20200321
2          Beijing   20200320     20200321
3          Beijing   20200319     20200321
4          Tianjin   20200319     20200321
5          Beijing   20200321     20200321
6                    20200321     20200321
(partition 20200320) ...
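To make the requirement concrete, here is a minimal pandas sketch (not the article's HQL) that computes, from the table above, the daily order count and the count of orders without an address:

import pandas as pd

# Toy copy of partition dt=20200321 from the table above.
orders = pd.DataFrame({
    "order_id":   [1, 2, 3, 4, 5, 6],
    "address":    ["Tianjin", "Beijing", "Beijing", "Tianjin", "Beijing", None],
    "trade_time": ["20200320", "20200320", "20200319", "20200319", "20200321", "20200321"],
    "dt":         ["20200321"] * 6,
})

# Per trade date: total orders and orders whose address is missing.
daily = orders.groupby("trade_time").agg(
    order_cnt=("order_id", "count"),
    no_address_cnt=("address", lambda s: int(s.isna().sum())),
)
print(daily)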
Posted by wendu on Thu, 26 Mar 2020 07:10:56 -0700
Java series (100): buffered character reading and writing, and the decorator pattern
1. BufferedWriter
1. Testing buffered character reading and writing
package com.bjpowernode.java_learning;
import java.io.*;
public class D100_1_BufferedWriter {
  public static void main(String[] args) throws Exception {
    // Create a character output stream with a buffer
    String address = "C:\\Users\\lenovo1\\Workspaces\\ ...
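The decorator pattern named in the title is the layering idea behind BufferedWriter: it wraps another Writer and adds buffering. Python's io module uses the same layering, so here is a small sketch of the same idea (the file name demo.txt is made up for the example):

import io

raw = io.FileIO("demo.txt", "w")      # raw, unbuffered byte sink
buffered = io.BufferedWriter(raw)     # decorator: adds an in-memory buffer
writer = io.TextIOWrapper(buffered)   # decorator: adds text encoding on top
writer.write("hello, buffered world\n")
writer.close()                        # closing the outer layer closes the chain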
Posted by blckspder on Sat, 21 Mar 2020 08:48:50 -0700
Source code analysis of ArrayList constructor and add method
ArrayList constructor
/**
 * Default initial capacity.
 */
private static final int DEFAULT_CAPACITY = 10;

/** Shared empty array instance used for empty instances. */
private static final Object[] EMPTY_ELEMENTDATA = {};

/** Shared empty array instance used for default-sized empty instances. */
private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {};

transient Object[] elementData; // non-private to simplify nested class access ...
Posted by glansing on Sat, 14 Mar 2020 23:39:52 -0700
Advanced Redis features: billion-scale data filtering with Bloom filters
1. A brief introduction to Bloom filters
Last time, we learned to use HyperLogLog to estimate the size of big data sets. It is very valuable and can meet many statistical needs that tolerate low accuracy. But if we want to know whether a certain value is already in the HyperLogLog structure, it can't help. It only provi ...
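To make the contrast concrete before the Redis details, here is a minimal pure-Python Bloom filter sketch (an illustration of the data structure itself, not the article's Redis module); m and k are arbitrary example parameters:

import hashlib

class BloomFilter:
    # k salted hashes set/test bits in an m-bit array.
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False: definitely absent. True: possibly present (false positives happen).
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))  # True
print(bf.might_contain("user:99"))  # almost certainly False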
Posted by tywin on Fri, 13 Mar 2020 08:32:34 -0700
Record a merge sort optimization
Core idea of merge sort:
It is easy to merge two sorted arrays into one sorted array.
Steps (a minimal sketch follows below):
Split the array into two subarrays at the middle
Repeat step 1 for each of the two subarrays until the length of each subarray is 1
Merge the two split subarrays. At this point, the two subarrays are in order ...
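Here are those steps in a minimal Python sketch (the classic algorithm only; the article's optimization is not reproduced here):

def merge_sort(arr):
    if len(arr) <= 1:              # a subarray of length 1 is already sorted
        return arr
    mid = len(arr) // 2            # step 1: split at the middle
    left = merge_sort(arr[:mid])   # step 2: recurse on each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0        # step 3: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]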
Posted by watski on Thu, 12 Mar 2020 03:59:07 -0700
Python data analysis library NumPy: the ndarray object, indexing, and array transposition
NumPy's ndarray: a multidimensional array object
The ndarray object is an N-dimensional array, a flexible and fast container for big data
You can use an ndarray to perform mathematical operations on a whole block of data at once
ndarray is a general multi-dimensional container of homogeneo ...
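A few lines showing the whole-block operations the excerpt describes:

import numpy as np

a = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # a 2x3 ndarray of homogeneous float64 data

print(a * 10)     # elementwise math on the whole block, no Python loop
print(a + a)      # elementwise addition
print(a[0, 2])    # indexing: row 0, column 2 -> 3.0
print(a.T)        # transpose: shape (3, 2)
print(a.T @ a)    # matrix product using the transpose, shape (3, 3)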
Posted by niall_buckley on Fri, 06 Mar 2020 03:40:09 -0800
Configuration and startup of a Hadoop pseudo-distributed environment
1. Environment preparation
On a Linux machine, install the Hadoop runtime environment. For the installation steps, see: Setting up the Hadoop runtime environment
2. Start HDFS and run MapReduce
2.1. Configure the cluster
1. Configure hadoop-env.sh
Get the JDK installation path on the Linux system:
[root@hadoop101 ~]# echo $JAVA_HOME
/op ...
Posted by spooke2k on Tue, 25 Feb 2020 19:31:27 -0800