[deep learning] forward propagation with complete Python code
Hello, everyone. Today I'd like to share the derivation of forward propagation in TensorFlow 2.0 deep learning, using the built-in MNIST dataset.
1. Data acquisition
First, we import the required libraries and the dataset. The loaded x and y data are NumPy arrays and need to be converted to the tensor type with tf.co ...
Posted by djw821 on Sun, 05 Dec 2021 01:44:16 -0800
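As a point of reference for the post above, here is a minimal sketch of a TensorFlow 2.x forward pass on the built-in MNIST data. The single hidden layer, its size, and the use of tf.convert_to_tensor are my own assumptions, not the author's exact code.

```python
# Minimal sketch: one-hidden-layer forward pass on MNIST with TensorFlow 2.x.
# Layer sizes and the conversion call are illustrative assumptions.
import tensorflow as tf

(x, y), _ = tf.keras.datasets.mnist.load_data()

# Convert the NumPy arrays to tensors and normalize the pixel values.
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.0
y = tf.convert_to_tensor(y, dtype=tf.int32)

x = tf.reshape(x, [-1, 28 * 28])                      # flatten each image

# Hidden layer (784 -> 256) and output layer (256 -> 10) parameters.
w1 = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))
b1 = tf.Variable(tf.zeros([256]))
w2 = tf.Variable(tf.random.truncated_normal([256, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))

# Forward propagation: affine transform + ReLU, then the output logits.
h1 = tf.nn.relu(x @ w1 + b1)
logits = h1 @ w2 + b2
print(logits.shape)                                    # (60000, 10)
```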
Neural network -- Python implements BP neural network algorithm (Theory + example + program)
1. Multilayer perceptron model based on the BP algorithm
The multilayer perceptron trained with the BP algorithm is the most widely used neural network so far. Among applications of the multilayer perceptron, the single-hidden-layer network shown in Figure 3-15 is the most common. In general, it is customary to call a single-hidden-layer feedforward network a th ...
Posted by Eratimus on Tue, 30 Nov 2021 21:35:36 -0800
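For the single-hidden-layer perceptron discussed above, a compact NumPy sketch of one BP training loop might look like the following. The toy data, sigmoid activation, layer sizes and learning rate are illustrative assumptions rather than the article's program.

```python
# Minimal NumPy sketch of error back-propagation (BP) for a single-hidden-layer
# perceptron. Sizes, learning rate and sigmoid activation are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                  # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)    # toy targets, shape (100, 1)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(1000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: output-layer and hidden-layer error terms
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ delta_out / len(X)
    b2 -= lr * delta_out.mean(0)
    W1 -= lr * X.T @ delta_h / len(X)
    b1 -= lr * delta_h.mean(0)
```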
[TextCNN full version] fast + high accuracy baseline
Preface: Two months ago I wrote up the complete steps of TextCNN (in fewer than 60 lines of code), but did not take engineering deployment or large data volumes (which cannot all be loaded into memory) into account, so today I reworked and optimized it based on an actual case. The operation steps of TextCNN can generally b ...
Posted by FuriousIrishman on Tue, 30 Nov 2021 19:18:57 -0800
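A compressed sketch of the TextCNN architecture the post refers to, assuming a Keras implementation; the vocabulary size, sequence length, kernel sizes and filter counts are placeholders, not the author's 60-line version.

```python
# Sketch of the TextCNN architecture: parallel 1-D convolutions with several
# kernel sizes, global max pooling, then a classifier. Hyperparameters are
# illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers

def build_textcnn(vocab_size=20000, seq_len=100, embed_dim=128, num_classes=2):
    inputs = layers.Input(shape=(seq_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    # One branch per kernel size; each captures n-gram features of a different width.
    pooled = []
    for k in (3, 4, 5):
        c = layers.Conv1D(filters=100, kernel_size=k, activation="relu")(x)
        pooled.append(layers.GlobalMaxPooling1D()(c))
    x = layers.concatenate(pooled)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_textcnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```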
Summary of this week
Contents
Plans completed this week
Thesis Reading 1
Abstract
1. Introduction
2. Pseudo-Label method for deep neural networks
2.1. Deep Neural Networks
2.2. Denoising auto encoder
2.3. Dropout
2.4. Pseudo label
3. Why could Pseudo-Label work?
3.1. Low density separation between classes ...
Posted by washbucket on Sat, 27 Nov 2021 23:45:53 -0800
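For the Pseudo-Label paper in this reading list, a toy PyTorch training step could look roughly as follows; the model, data and alpha schedule are placeholders, and the paper ramps alpha up over epochs rather than keeping it fixed.

```python
# Toy sketch of a Pseudo-Label training step: hard targets from the model's own
# predictions on unlabeled data, weighted by alpha. `model`, the batches and the
# alpha schedule are placeholders, not the paper's code.
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, labeled_batch, unlabeled_x, alpha):
    x, y = labeled_batch
    optimizer.zero_grad()

    # Supervised term on labeled data.
    loss_sup = F.cross_entropy(model(x), y)

    # Pseudo-labels: argmax of the current predictions, treated as true labels.
    with torch.no_grad():
        pseudo_y = model(unlabeled_x).argmax(dim=1)
    loss_unsup = F.cross_entropy(model(unlabeled_x), pseudo_y)

    # Total loss; alpha is gradually increased during training in the paper.
    loss = loss_sup + alpha * loss_unsup
    loss.backward()
    optimizer.step()
    return loss.item()
```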
[Li Hongyi machine learning] Basic Concept (p4) learning notes
Summary of Li Hongyi's machine learning notes (course link)
Review
A more complex model does not necessarily give lower error on the testing data.
Error comes from two places:
bias and variance
f_star is an estimate of f_hat; f_star may not be equal to f_hat, and the distance between them may come from bias or variance.
Estimating the mean value of a variab ...
Posted by -Zeus- on Tue, 23 Nov 2021 23:58:56 -0800
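As a quick, self-made illustration of the estimation point in these notes (the sample mean is an unbiased estimator whose variance shrinks with the sample size):

```python
# Toy simulation (my own example) of the estimator discussion in the notes:
# the sample mean m is unbiased (E[m] = mu), and Var[m] = sigma^2 / N.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N, trials = 5.0, 2.0, 20, 100_000

# Draw many size-N samples and compute the sample mean of each.
means = rng.normal(mu, sigma, size=(trials, N)).mean(axis=1)

print(means.mean())   # close to mu = 5.0 (no bias)
print(means.var())    # close to sigma**2 / N = 0.2 (variance of the estimator)
```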
PyTorch deep learning practice, Lesson 11: convolutional neural network (advanced part), Inception module, handwritten digit recognition
Video link: PyTorch Deep Learning Practice (complete collection) on Bilibili
This time we implement a more complex neural network. Although the model looks complex, in order to reduce code redundancy and improve code reusability, we can define sub-networks with the same structure as a class to improv ...
Posted by elpaisa on Sat, 20 Nov 2021 13:12:27 -0800
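A trimmed sketch of the idea in the post above: wrap the repeated structure in a class. The branch widths below follow a common simplified Inception-style block and are not necessarily the course's exact numbers.

```python
# Sketch of wrapping a repeated structure in a class: a simplified Inception-style
# block with parallel 1x1, 5x5, 3x3 and pooling branches whose outputs are
# concatenated along the channel dimension.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InceptionBlock(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.branch1x1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),
            nn.Conv2d(16, 24, kernel_size=5, padding=2),
        )
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
            nn.Conv2d(24, 24, kernel_size=3, padding=1),
        )
        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        outputs = [
            self.branch1x1(x),
            self.branch5x5(x),
            self.branch3x3(x),
            self.branch_pool(pool),
        ]
        return torch.cat(outputs, dim=1)   # 16 + 24 + 24 + 24 = 88 channels

block = InceptionBlock(in_channels=10)
print(block(torch.randn(1, 10, 28, 28)).shape)   # torch.Size([1, 88, 28, 28])
```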
[project practice] Data analysis of a semiconductor etcher with an RBF neural network in Python
Note: This is a machine learning practical project (with data + code). If you need data + complete code, you can get it directly at the end of the article.
1. Project background
For fault diagnosis of a semiconductor etcher, it is necessary to collect data from the etching process o ...
Posted by Pascal P. on Fri, 19 Nov 2021 20:20:15 -0800
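A minimal sketch of the kind of RBF-network classifier the project describes, with synthetic data standing in for the etching-process measurements; the centres, width parameter and linear readout are my own illustrative choices.

```python
# Minimal RBF-network-style classifier sketch: Gaussian basis functions around
# k-means centres, followed by a linear readout. Synthetic data and
# hyperparameters are placeholders, not the project's dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                  # stand-in for process features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)   # stand-in for normal / fault labels

# Hidden layer: RBF activations around cluster centres.
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
gamma = 0.5
dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-gamma * dist2)                   # shape (n_samples, n_centers)

# Output layer: a linear classifier on the RBF features.
clf = LogisticRegression(max_iter=1000).fit(Phi, y)
print("training accuracy:", clf.score(Phi, y))
```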
Andrew Ng's Week 2 programming assignment
Problem description
Given the training dataset (pictures of cats), let's build a simple neural network to identify cats.
Dataset description
There are 209 pictures in the training set, and the shape of each picture is (64, 64, 3). There are 50 pictures in the test set, and the shape of each picture is (64, 64, 3). classes stores two string data ...
Posted by HaXoRL33T on Fri, 19 Nov 2021 16:36:49 -0800
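The core of this assignment is logistic regression treated as a one-neuron network. Below is a sketch of the propagate-and-update loop with random arrays standing in for the cat dataset; shapes follow the description above.

```python
# Sketch of logistic regression viewed as a one-neuron network. Random arrays
# stand in for the cat dataset; shapes match the description above
# (209 training images of shape (64, 64, 3)).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """One forward/backward pass. X: (n_features, m), Y: (1, m)."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)                               # predictions
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = (X @ (A - Y).T) / m                               # gradient w.r.t. weights
    db = np.sum(A - Y) / m                                 # gradient w.r.t. bias
    return dw, db, cost

rng = np.random.default_rng(0)
train_x = rng.random((209, 64, 64, 3))                     # placeholder images
train_y = rng.integers(0, 2, size=(1, 209))                # placeholder labels

X = train_x.reshape(209, -1).T / 255.0                     # flatten to (12288, 209)
w, b = np.zeros((X.shape[0], 1)), 0.0
for _ in range(100):                                       # plain gradient descent
    dw, db, cost = propagate(w, b, X, Y=train_y)
    w -= 0.005 * dw
    b -= 0.005 * db
```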
ShuffleNet-V1 paper reading and code implementation
Preface
ShuffleNet-V1 is another direction in the development of lightweight convolutional neural networks; it is the lightweight network that followed MobileNet.
1. Paper reading summary
Paper address. Tricks: applying group convolution to the 1×1 convolutions; channel shuffle improves information transmission between channels ...
Posted by tomsasse on Fri, 19 Nov 2021 03:42:54 -0800
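The channel shuffle operation mentioned above can be written in a few lines of PyTorch, following the reshape-transpose-flatten description in the paper; the tensor names and the small demo are mine.

```python
# Channel shuffle as described in the ShuffleNet paper: reshape the channel
# dimension into (groups, channels_per_group), transpose, and flatten back,
# so information can flow between the groups of a group convolution.
import torch

def channel_shuffle(x, groups):
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # interleave the groups
    return x.view(n, c, h, w)                  # flatten back to (N, C, H, W)

x = torch.arange(8, dtype=torch.float32).view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten())  # tensor([0., 4., 1., 5., 2., 6., 3., 7.])
```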
[neural network] weight decay
Weight decay
In the previous section, we observed the overfitting phenomenon, in which the training error of the model is much smaller than its error on the test set. Although enlarging the training dataset may reduce overfitting, it is often expensive to obtain additional training data. This section describes a common method to ...
Posted by kark_1999 on Wed, 17 Nov 2021 06:27:12 -0800
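The method this section introduces, weight decay, adds an L2 penalty of the form lambda/2 * ||w||^2 to the loss. The tiny PyTorch example below is my own illustration of two equivalent ways to apply it, not the section's code.

```python
# Illustration of weight decay (L2 regularization): the penalized objective is
# loss + (lambda / 2) * ||w||^2. The tiny linear model is my own example.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
wd = 1e-3  # the weight-decay coefficient lambda

# Option 1: write the L2 penalty into the loss by hand.
l2 = sum((p ** 2).sum() for p in model.parameters())
penalized_loss = nn.functional.mse_loss(model(x), y) + wd / 2 * l2

# Option 2: equivalently, let the optimizer shrink the weights each step and
# keep the plain loss (do not combine both, or the penalty is applied twice).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=wd)
plain_loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
plain_loss.backward()
optimizer.step()
```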