GFPGAN source code analysis - Part 6
2021SC@SDUSC
Source code: archs\gfpganv1_clean_arch.py
This article mainly analyzes the __init__() method of the class GFPGANv1Clean(nn.Module) in gfpganv1_clean_arch.py.
Contents
class GFPGANv1Clean(nn.Module)
__init__()
(1) Settings for channels
(2) Call torch.nn.Conv2d() to create a convolutional layer
(3) Downsample
(4) u ...
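As a sketch of steps (2) and (3) above, a strided torch.nn.Conv2d is a common way to downsample feature maps; the channel counts here are illustrative assumptions, not GFPGAN's actual settings:

```python
import torch
import torch.nn as nn

# Minimal sketch (channel sizes are assumptions, not GFPGAN's code):
# a stride-2 convolution halves the spatial resolution while
# changing the number of channels.
down = nn.Conv2d(in_channels=64, out_channels=128,
                 kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 64, 32, 32)   # (batch, channels, height, width)
y = down(x)
print(y.shape)                   # torch.Size([1, 128, 16, 16])
```

With kernel 3, stride 2, padding 1, the output size is floor((32 + 2·1 − 3) / 2) + 1 = 16, i.e., half the input resolution.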
Posted by jabapyth on Mon, 06 Dec 2021 11:21:47 -0800
pytorch Basics
1.Autograd (automatic gradient algorithm)
autograd is the core package of PyTorch to implement the automatic gradient algorithm mentioned earlier. Let's start by introducing the variables.
2.Variable
autograd.Variable is a wrapper around Tensor. Once we have computed the final variable (i.e., the loss, etc.), we can call its backwa ...
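A minimal sketch of the backward() call described above; note that in current PyTorch, Variable has been merged into Tensor, so any tensor with requires_grad=True participates in autograd:

```python
import torch

# In modern PyTorch, Variable is merged into Tensor:
# requires_grad=True enables gradient tracking.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()    # loss = 1 + 4 + 9 = 14

loss.backward()          # populates x.grad with d(loss)/dx = 2x
print(x.grad)            # tensor([2., 4., 6.])
```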
Posted by csxpcm on Sun, 05 Dec 2021 15:57:54 -0800
Installation tutorial on CUDA+CUDNN in Windows Environment
Anaconda + PyCharm is recommended.
This article describes how to install the pytorch and tensorflow frameworks
First, you should check which version of graphics card driver you have (taking the 1660S as an example)
1. Open the NVIDIA control panel, which can be opened by right clicking on the desktop or hiding the icon bar in the lower ...
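Once the installation is done, a quick generic way to verify that PyTorch actually sees CUDA (not specific to this tutorial's driver or toolkit versions):

```python
import torch

# CUDA version this PyTorch build was compiled against
# (None for CPU-only builds).
print(torch.version.cuda)

# True only if a CUDA-capable GPU and a matching driver are found.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the 1660S
```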
Posted by poltort on Sun, 05 Dec 2021 01:46:12 -0800
Introduction to PyTorch: building an MLP model for classification tasks
This is the second article in the introduction-to-PyTorch series, which will be updated continuously. This article introduces how to use PyTorch to build a simple MLP (Multi-Layer Perceptron) model for binary and multi-class classification tasks.
Data set introduction
the second classi ...
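A minimal sketch of the kind of MLP the article builds; the layer sizes and feature dimensions here are illustrative assumptions, not the article's exact model:

```python
import torch
import torch.nn as nn

# Illustrative two-hidden-layer MLP for binary classification;
# sizes are assumptions, not the article's actual configuration.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),            # 2 logits -> binary classification
)

x = torch.randn(8, 20)           # a batch of 8 samples, 20 features each
logits = model(x)
print(logits.shape)              # torch.Size([8, 2])
```

For a multi-class task, only the final Linear layer's output size changes to the number of classes.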
Posted by nepeaNMedia on Fri, 03 Dec 2021 14:11:19 -0800
GCANet (gated context aggregation network for image dehazing and deraining)
Haze formation can be represented by the atmospheric scattering model: I(x) = J(x)t(x) + A(1 − t(x)),
where I(x) is the hazy image, J(x) is the haze-free image, A is the global atmospheric light, and t(x) is the medium transmission map, which depends on the unknown scene depth.
The previous defogging methods used r ...
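The atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) can be checked numerically for a single pixel; the values below are purely illustrative:

```python
# Atmospheric scattering model for one pixel (illustrative values):
# I = J * t + A * (1 - t)
J = 0.8   # haze-free scene radiance
A = 1.0   # global atmospheric light
t = 0.5   # medium transmission (decreases with scene depth)

I = J * t + A * (1 - t)
print(I)       # 0.9 -> the observed hazy intensity

# Dehazing inverts the model to recover J from I (given A and t):
J_rec = (I - A * (1 - t)) / t
print(J_rec)   # 0.8
```

The hard part of dehazing is that A and t(x) are unknown and must be estimated from the hazy image itself.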
Posted by carlos1234 on Thu, 02 Dec 2021 20:49:46 -0800
Calculating torch's cross_entropy loss function (with Python code)
1. Call
First, the cross-entropy loss function in torch has the following signature:
torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
It is usually written as:
import torch.nn.functional as F
F.cross_entropy(input, target)
2. Parameter description
Input( tensor ...
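A runnable sketch of the call shown above, with a by-hand check that cross_entropy equals log_softmax followed by negative log-likelihood; the logit values are illustrative:

```python
import math
import torch
import torch.nn.functional as F

# input: raw logits of shape (batch, num_classes);
# target: class indices of shape (batch,).
logits = torch.tensor([[2.0, 0.5, 0.1]])
target = torch.tensor([0])

loss = F.cross_entropy(logits, target)

# cross_entropy = log_softmax + nll_loss; verify by hand for this sample:
denom = sum(math.exp(z) for z in [2.0, 0.5, 0.1])
manual = -math.log(math.exp(2.0) / denom)
print(loss.item(), manual)   # both ≈ 0.3168
```

Note that cross_entropy expects raw logits, not probabilities: it applies log_softmax internally.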
Posted by wilhud on Wed, 01 Dec 2021 00:39:19 -0800
Summary: the overall structure flow chart of T5 in transformers
To better understand the T5 model's structure, the overall structure and flow of the T5 model are given here
t5 overall structure and process
During the forward pass of T5, the key quantities are the values of key_states and value_states
T5LayerSelfAttention in the 6 encoder blocks
Input hidden_states with shape (1, 8, 11, 64); first compute query_states
query_ ...
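A hedged sketch of producing query_states from hidden_states in a T5-style self-attention. The shapes follow the (1, 8, 11, 64) in the text (batch 1, 8 heads, sequence length 11, head dim 64); the projection layer is illustrative, not the transformers library's exact code:

```python
import torch
import torch.nn as nn

batch, n_heads, seq_len, d_head = 1, 8, 11, 64
d_model = n_heads * d_head   # 512

# Illustrative linear projection standing in for T5's q (and k/v) layers;
# T5's attention projections use no bias.
q_proj = nn.Linear(d_model, d_model, bias=False)

hidden_states = torch.randn(batch, seq_len, d_model)

# Project, split d_model into (n_heads, d_head), and move heads forward.
query_states = (
    q_proj(hidden_states)
    .view(batch, seq_len, n_heads, d_head)
    .transpose(1, 2)
)
print(query_states.shape)    # torch.Size([1, 8, 11, 64])
```

key_states and value_states are produced the same way from their own projections, giving matching (batch, heads, seq, d_head) shapes for the attention product.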
Posted by EPJS on Tue, 30 Nov 2021 23:13:03 -0800
Autonomous driving based on Unity
1. Process
Software download and installation, data acquisition, custom dataset, model building, model training, testing, summary, follow-up
Download address of this project: GitHub
1. Software download and installation
1. Download address: https://github.com/udacity/self-driving-car-sim 2. After opening the link, you can choose your own platform to download ...
Posted by Plex on Mon, 29 Nov 2021 07:47:16 -0800
Computer vision - Attention mechanism (with code)
1. Introduction to attention
"Attention" literally means paying attention. Applying this mechanism to computer vision is like being shown a picture of a beautiful or handsome person: which part do we look at first 😏
The earliest attention mechanism was applied to computer vision. The mechanis ...
Posted by anthonyfellows on Mon, 29 Nov 2021 04:51:03 -0800
Interpretation of the PixPro self-supervised learning paper
PixPro is the first method to use pixel-level contrastive learning for feature representation learning. The figure above is the flow chart of the whole algorithm, which is analyzed in detail below. Forward propagation: the input is an image with dimensions (b, c, h, w). Augmentation: crop the same input at random sizes and positions and resize it ...
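A minimal pure-Python sketch of the two-random-crop step described above; the crop logic is illustrative (PixPro's actual augmentation also resizes, flips, and color-jitters the views):

```python
import random

def random_crop(img, min_size=2):
    """Crop a 2D list `img` at a random position with a random size."""
    h, w = len(img), len(img[0])
    ch = random.randint(min_size, h)       # random crop height
    cw = random.randint(min_size, w)       # random crop width
    top = random.randint(0, h - ch)        # random vertical position
    left = random.randint(0, w - cw)       # random horizontal position
    return [row[left:left + cw] for row in img[top:top + ch]]

# Two differently-cropped views of the SAME image, as in PixPro's
# augmentation step; pixels that overlap in the original image
# become positive pairs for the contrastive loss.
image = [[r * 8 + c for c in range(8)] for r in range(8)]
view1 = random_crop(image)
view2 = random_crop(image)
print(len(view1), len(view1[0]))
```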
Posted by Darhazer on Mon, 29 Nov 2021 00:44:25 -0800