[MAE] Implementation of Masked Autoencoders and visualization of pre-training
Masked Autoencoders Are Scalable Vision Learners
MAE proposes a self-supervised training method that can train the model effectively and improve its performance. This project implements the self-supervised pre-training part and visualizes the training process.
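As a rough sketch of the core idea (this is not the project's code; the function name, tensor shapes, and the 75% ratio are assumptions based on the MAE paper), random masking of patch embeddings might look like:

import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch embeddings, MAE-style.

    patches: (batch, num_patches, dim) tensor of patch embeddings.
    Returns the kept patches and a binary mask (1 = masked out).
    """
    batch, num_patches, dim = patches.shape
    num_keep = int(num_patches * (1 - mask_ratio))

    # Random permutation per sample; keep the first num_keep indices
    noise = torch.rand(batch, num_patches)
    ids_shuffle = torch.argsort(noise, dim=1)
    ids_keep = ids_shuffle[:, :num_keep]

    kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    # Binary mask in the original patch order: 0 = kept, 1 = masked
    mask = torch.ones(batch, num_patches)
    mask.scatter_(1, ids_keep, 0)
    return kept, mask

The encoder then sees only the kept 25% of patches, and a lightweight decoder reconstructs the masked ones, which is what makes the pre-training cheap.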
network structure
The structure of MAE is relatively simple. It is c ...
Posted by derzok on Fri, 03 Dec 2021 17:10:04 -0800
log usage
It is inevitable that we encounter bugs during development. How can we find them easily? Python's logging module can record what went wrong, along with relevant context, to help us debug.
log level
Logs are divided into five levels, from low to high (a usage sketch follows the list):
1. DEBUG
2. INFO
3. WARNING
4. ERROR
5. CRITICAL
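A minimal usage sketch (the file name and message format below are illustrative, not from the original post):

import logging

# Configure the root logger; messages below INFO are discarded
logging.basicConfig(
    filename='app.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)

logging.debug('filtered out: below the INFO threshold')
logging.info('program started')
logging.warning('disk space is running low')
logging.error('failed to open the configuration file')
logging.critical('out of memory, shutting down')

Setting level=logging.INFO means DEBUG messages are dropped while the four higher levels are written to the file.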
Posted by svivian on Sat, 27 Nov 2021 20:01:35 -0800
Hands-on data analysis -- model building and evaluation
1. Model construction
1.1 get the modeling data
import pandas as pd

# Read the raw data
train = pd.read_csv('train.csv')
# Read the cleaned data set
data = pd.read_csv('clear_data.csv')
1.2 select an appropriate model
Before selecting a model, we need to know whether the task calls for supervised or unsupervised learning.
Machine learning is mainly divided into ...
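For instance (a hedged sketch, not necessarily the original notebook's code: the 'Survived' label column and the choice of LogisticRegression are assumptions), a supervised model could be fit on the cleaned data like this:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv('clear_data.csv')   # cleaned feature table
train = pd.read_csv('train.csv')       # raw table, assumed to hold the label
# 'Survived' is an assumed label column, for illustration only
X_train, X_test, y_train, y_test = train_test_split(
    data, train['Survived'], test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print('test accuracy:', model.score(X_test, y_test))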
Posted by boon4376 on Thu, 25 Nov 2021 10:27:37 -0800
Learn NLP with Transformer (Chapter 8)
8. Sequence labeling
Task08: sequence labeling. This study follows the Datawhale open source course: https://github.com/datawhalechina/learn-nlp-with-transformers. The content is largely derived from the original text, adjusted to fit my own learning approach.
Personal summary: first, the structure of the sequence labeling task ...
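As a hedged sketch of what a sequence labeling (token classification) pipeline looks like with the transformers library (the checkpoint name is an assumption for illustration):

from transformers import pipeline

# Token classification is the sequence labeling task: one label per token
ner = pipeline(
    'token-classification',
    model='dslim/bert-base-NER',        # assumed checkpoint, for illustration
    aggregation_strategy='simple',      # merge word pieces into entity spans
)
print(ner('Hugging Face is based in New York City'))
# e.g. entity_group 'ORG' for "Hugging Face", 'LOC' for "New York City"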
Posted by cspgsl on Mon, 27 Sep 2021 02:22:23 -0700
20210925_NLP transformer_ Text classification
6, Text classification
source
Datawhale issue 29 -- NLP transformer:
Erenup (more notes), Peking University, principal
Zhang Fan, Datawhale, Tianjin University, Chapter 4
Zhang Xian, Harbin Institute of Technology, Chapter 2
Li luoqiu, Zhejiang University, Chapter 3
Cai Jie, Peking University, Chapter 4
hlzhang, McGill University, Chapter 4
T ...
Posted by kbc1 on Sat, 25 Sep 2021 02:32:28 -0700
Image classification for deep learning -- a detailed explanation of the Vision Transformer (ViT) network
Deep learning image classification (XVIII): a detailed explanation of the Vision Transformer (ViT) network
In the previous section, we covered the self-attention structure in Transformer. In this section, we study the Vision Transformer (ViT) in detail. The learning video is from Bilibili; see also the blog post Detailed explanation of Vision Tran ...
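As a rough sketch of ViT's first step (illustrative code, not the blog's; the class and parameter names are assumptions), an image is cut into fixed-size patches that become token embeddings:

import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into patches and project each to an embedding vector."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A conv with stride == kernel size equals a per-patch linear projection
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                     # (B, embed_dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, embed_dim)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])

The resulting 196 patch tokens are what the standard Transformer encoder then attends over, with a class token and position embeddings added.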
Posted by seaweed on Thu, 09 Sep 2021 21:09:01 -0700
NLP learning -- 22. Transformer code implementation
introduction
Transformer is the basic structure of almost all pre-trained models today. We usually pay more attention to how to fine-tune trained models such as GPT and BERT, but it is equally important to know how to build these powerful models ourselves. Therefore, this article mainly studies how to im ...
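As a minimal sketch of the computation every such implementation builds on, here is textbook scaled dot-product attention (not the article's code; shapes below are illustrative):

import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Positions where mask == 0 are excluded from attention
        scores = scores.masked_fill(mask == 0, float('-inf'))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = k = v = torch.randn(2, 8, 10, 64)  # (batch, heads, seq_len, head_dim)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 8, 10, 64])

Multi-head attention, the feed-forward layers, and residual connections are then stacked around this core to form the full encoder and decoder.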
Posted by MBrody on Thu, 02 Sep 2021 10:33:37 -0700