Add privilege control for Kubernetes dashboard access users
Article directory
Add privilege control for Kubernetes dashboard access users
1. Requirements
2. Plan
3. Implementation
3.1 Assign dashboard permissions
3.2 Assign kubeapps permissions
3.3 Generate kubeconfig
4. Testing and Verification
1. Requirements
To create applicat ...
Posted by phprocket on Fri, 01 Feb 2019 09:00:15 -0800
Spark SQL Notes (3): Load and Save Functions and Spark SQL Functions
Load and save functions
Data loading (JSON file, JDBC) and saving (JSON, JDBC)
The test code is as follows:
package cn.xpleaf.bigdata.spark.scala.sql.p1
import java.util.Properties
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}
/* ...
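The excerpt cuts off before the body of the test code; below is a minimal self-contained sketch of JSON/JDBC loading and saving with the Spark 1.x SQLContext API. The paths, JDBC URL, table name, and credentials are placeholders, not values from the original post.

import java.util.Properties
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

object LoadSaveSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("LoadSaveSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Load a JSON file into a DataFrame (placeholder path)
    val df = sqlContext.read.json("/tmp/people.json")

    // Save back as JSON, overwriting any existing output
    df.write.mode(SaveMode.Overwrite).json("/tmp/people-out")

    // Save to a JDBC table (URL, table, and credentials are assumptions;
    // the MySQL JDBC driver must be on the classpath)
    val props = new Properties()
    props.put("user", "root")
    props.put("password", "root")
    df.write.mode(SaveMode.Append).jdbc("jdbc:mysql://localhost:3306/test", "people", props)

    sc.stop()
  }
}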
Posted by danielrs1 on Fri, 01 Feb 2019 00:09:16 -0800
Construction of Hadoop Cluster
Article directory
1. Basic information
2. Installation process
1. Switch to the hadoop account and extract hadoop to the target installation directory with the tar -zxvf command:
2. Create the tmpdir directory:
3. Configure the hadoop-env.sh file:
4. Configure the mapred-env.sh file:
5. Configure the core-site.xml file
Configure ...
Posted by mohamdally on Thu, 31 Jan 2019 23:15:16 -0800
Spark Learning Notes (3) - Spark Operator
1 Spark Operator
1.1 Operators are divided into two categories
1.1.1 Transformation
Transformations are executed lazily: they only record metadata (the RDD lineage), and the actual computation starts when an Action triggers the job (see the sketch after this outline).
1.1.2 Action
1.2 Two Ways to Create RDD
An RDD can be created from a file system supported by HDFS. The ...
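A minimal sketch of the Transformation/Action distinction above, assuming local mode (the sample data is illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object LazyEvalSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("LazyEvalSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val rdd = sc.parallelize(1 to 5)
    // Transformation: only lineage metadata is recorded, nothing runs yet
    val doubled = rdd.map(_ * 2)
    // Action: triggers the actual computation and returns results to the driver
    println(doubled.collect().mkString(","))
    sc.stop()
  }
}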
Posted by gauravupadhyaya on Thu, 31 Jan 2019 22:39:16 -0800
Introductory SparkSQL Case 1 (SparkSQL 1.x)
The SparkSQL 1.x and 2.x programming APIs differ somewhat, and both are used in enterprises, so we will learn each of them through cases.
First, a case using Spark SQL 1.x
IDEA+Maven+Scala
1. pom dependencies for importing SparkSQL
In the Spark case from the previous blog post, the following dependencies are added to the pom of th ...
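The excerpt is cut off before the actual pom snippet; as a hedged stand-in, the usual Spark SQL 1.x coordinates expressed in sbt form look like this (version 1.6.3 is an assumption, adjust to your cluster):

// build.sbt sketch; the 1.6.3 version is an assumption
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.3",
  "org.apache.spark" %% "spark-sql"  % "1.6.3"
)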
Posted by fernado1283 on Thu, 31 Jan 2019 21:21:15 -0800
Introductory SparkSQL Case 2 (SparkSQL 1.x)
The main steps in this introductory SparkSQL case are as follows:
1. Create a SparkContext
2. Create an SQLContext
3. Create an RDD
4. Create a class and define its member variables
5. Collate the data and associate it with the class
6. Convert the RDD to a DataFrame (importing implicit conversions)
7. Register the DataFrame as a temporary table
8. Writi ...
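Taken together, the steps above correspond to a minimal sketch like the following, assuming the Spark 1.x API; the inline sample data and names such as Person and t_person are illustrative, not taken from the original post.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Step 4: a class with member variables
case class Person(name: String, age: Int)

object SparkSQLCaseSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkSQLCaseSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)          // step 1: create SparkContext
    val sqlContext = new SQLContext(sc)      // step 2: create SQLContext

    // Step 3: create an RDD (inline data stands in for a real file)
    val lines = sc.parallelize(Seq("tom,21", "jerry,19"))

    // Step 5: collate the data and associate it with the class
    val personRDD = lines.map { line =>
      val fields = line.split(",")
      Person(fields(0), fields(1).toInt)
    }

    import sqlContext.implicits._            // step 6: import implicit conversions
    val personDF = personRDD.toDF()          //         convert the RDD to a DataFrame

    personDF.registerTempTable("t_person")   // step 7: register a temporary table
    // Step 8: write SQL against the temporary table
    sqlContext.sql("SELECT name, age FROM t_person ORDER BY age DESC").show()

    sc.stop()
  }
}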
Posted by pietbez on Thu, 31 Jan 2019 19:57:15 -0800
Deploying docker-based CEPH cluster using ceph-ansible
Install ansible
Download ceph-ansible
Configure ceph-ansible
Start deployment
Destroying Clusters (Caution)
Install ansible
Installing ansible and configuring passwordless SSH login between the nodes is not covered here; please refer to the Official Documents.
Downlo ...
Posted by Valkrin on Thu, 31 Jan 2019 19:12:15 -0800
Docker Learning Notes (6) Stacks
A stack is a set of interrelated services. Generally, all the services of an application are placed in one stack, and the application can be deployed in one step through a .yml file. Of course, more complex applications may be split into multiple stacks. In the preceding note, we deployed the stac ...
Posted by sujithfem on Thu, 31 Jan 2019 18:24:15 -0800
Hive Metastore metadata database initialization failure: java.io.IOException: Schema script failed, errorcode 2
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 259, in <module>
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File ...
Posted by aceconcepts on Thu, 31 Jan 2019 16:00:15 -0800
[python crawler] Crawling a disease database
Database address: http://web.tfrd.org.tw/genehelp/diseaseDatabase.html?selectedIndex=0
The database looks like this:
This time we mainly crawl the disease names. The difficulty is that the data is not visible in the page source; instead, we can find the URL from which the page requests its data through the browser's F12 ...
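A rough sketch of that approach, kept in Scala like the other examples on this page (the endpoint URL and the diseaseName field are hypothetical, since the excerpt does not show the real request URL found via F12):

import scala.io.Source

object DiseaseNameSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical JSON endpoint discovered in the browser's Network tab
    val apiUrl = "http://web.tfrd.org.tw/genehelp/someDataEndpoint"
    val body = Source.fromURL(apiUrl, "UTF-8").mkString
    // Assume each record carries a "diseaseName" field; pull the values out with a regex
    val namePattern = """"diseaseName"\s*:\s*"([^"]+)"""".r
    namePattern.findAllMatchIn(body).foreach(m => println(m.group(1)))
  }
}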
Posted by Patrick on Thu, 31 Jan 2019 15:39:15 -0800