PyTorch Tutorial: Building Models
These are some notes I took while learning PyTorch; I hope they are helpful to you 😊
torch.nn.Module & torch.nn.Parameter
In this section, we’ll be discussing some of the tools PyTorch makes available for building deep learning networks.
Except for Parameter, the classes we discuss in this section are all subclasses of torch.nn.Module. This is the PyTorch base class meant to encapsulate behaviors specific to PyTorch models and their components.
One important behavior of torch.nn.Module is registering parameters. If a particular Module subclass has learning weights, these weights are expressed as instances of torch.nn.Parameter. The Parameter class is a subclass of torch.Tensor, with the special behavior that when they are assigned as attributes of a Module, they are added to the list of that module's parameters. These parameters may be accessed through the parameters() method on the Module class.
As a simple example, here’s a very simple model with two linear layers and an activation function. We’ll create an instance of it and ask it to report on its parameters:
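A minimal sketch of such a model (the class name TinyModel and the layer sizes are illustrative, not necessarily the tutorial's exact code):

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Two linear layers with a non-linear activation between them
        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        return x

tinymodel = TinyModel()

print('The model:')
print(tinymodel)

print('Model params:')
for param in tinymodel.parameters():
    print(param.shape)
```

Printing the model shows its submodules; iterating over parameters() shows the weight and bias tensors that were registered automatically because the layers were assigned as attributes of the Module.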
- This shows the fundamental structure of a PyTorch model: there is an __init__() method that defines the layers and other components of a model, and a forward() method where the computation gets done. Note that we can print the model, or any of its submodules, to learn about its structure.
Common Layer Types
Linear Layers
The most basic type of neural network layer is a linear or fully connected layer. This is a layer where every input influences every output of the layer to a degree specified by the layer's weights. If a model has m inputs and n outputs, the weights will be an m x n matrix. For example:
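A minimal sketch (the layer sizes here are arbitrary):

```python
import torch

lin = torch.nn.Linear(3, 2)          # 3 inputs, 2 outputs
x = torch.rand(1, 3)
print('Input:', x)

print('Weight and bias parameters:')
for param in lin.parameters():
    print(param)                      # both are Parameters with requires_grad=True

y = lin(x)
print('Output:', y)                   # shape (1, 2): y = x @ W.T + b
```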
- A Parameter has autograd enabled automatically (its requires_grad defaults to True).
- Linear layers are used widely in deep learning models. One of the most common places you'll see them is in classifier models.
Convolutional Layers
- Convolutional layers are built to handle data with a high degree of spatial correlation. They are very commonly used in computer vision, where they detect close groupings of features which they compose into higher-level features. They pop up in other contexts too - for example, in NLP applications, where a word's immediate context (that is, the other words nearby in the sequence) can affect the meaning of a sentence.
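As an illustration, here is a LeNet-style sketch (a classic small CNN; the layer sizes are illustrative and assume a 1x32x32 input):

```python
import torch
import torch.nn.functional as F

class LeNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # 1 input channel (grayscale image), 6 output channels, 5x5 kernel
        self.conv1 = torch.nn.Conv2d(1, 6, 5)
        self.conv2 = torch.nn.Conv2d(6, 16, 5)
        # After two conv+pool stages a 32x32 input becomes 16 maps of 5x5
        self.fc1 = torch.nn.Linear(16 * 5 * 5, 120)
        self.fc2 = torch.nn.Linear(120, 84)
        self.fc3 = torch.nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 10x10 -> 5x5
        x = x.view(x.size(0), -1)                    # flatten to (batch, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = LeNet()
print(net(torch.rand(1, 1, 32, 32)).shape)           # torch.Size([1, 10])
```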
- The first argument to a convolutional layer's constructor is the number of input channels, the second is the number of output features (channels), and the third is the window or kernel size.
The following is a step-by-step look at how a convolutional neural network (CNN) chains a convolutional layer, a ReLU activation, and a max pooling layer, and what each step contributes:
1. Output of the convolutional layer
- Input assumption: suppose the input is a single-channel (grayscale) 32x32 image, passed through the first convolutional layer:
  self.conv1 = torch.nn.Conv2d(1, 6, 5)  # 1 input channel, 6 output channels, 5x5 kernel
- Output size: the spatial size after the convolution is given by
$$
\text{output size} = \left\lfloor \frac{\text{input size} - \text{kernel size} + 2 \times \text{padding}}{\text{stride}} \right\rfloor + 1
$$
By default padding=0 (no padding) and stride=1, so 32 - 5 + 1 = 28.
- Output tensor shape: [batch_size, 6, 28, 28] (6 channels, each with a 28x28 activation map).
2. What the ReLU activation does
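A quick sketch of ReLU applied to a tensor shaped like the conv1 output above (the values are random):

```python
import torch

relu = torch.nn.ReLU()
x = torch.randn(1, 6, 28, 28)        # e.g. the conv1 output from above
y = relu(x)

print(y.shape)                        # torch.Size([1, 6, 28, 28]) - shape is unchanged
print((y >= 0).all())                 # tensor(True) - every negative value was zeroed
```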
- ReLU (Rectified Linear Unit): defined as $\text{ReLU}(x) = \max(0, x)$.
- What it does:
  - Introduces non-linearity, letting the model learn complex non-linear relationships.
  - Encourages sparse activations: negative values are zeroed, positive values pass through unchanged.
  - Mitigates vanishing gradients: unlike Sigmoid/Tanh, ReLU's gradient is constant at 1 over the positive range.
- Output shape: the same as the input, still [batch_size, 6, 28, 28].
3. Details of the max pooling layer
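A quick sketch of the pooling step (the tensor is random; only the shapes follow the example):

```python
import torch

pool = torch.nn.MaxPool2d(2)          # 2x2 window; stride defaults to the window size
x = torch.randn(1, 6, 28, 28)         # e.g. the ReLU output from above
print(pool(x).shape)                  # torch.Size([1, 6, 14, 14])
```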
- Purpose: reduce the spatial dimensions (downsampling), cutting computation and adding some translation invariance.
- How it works:
  - The input activation map is divided into non-overlapping 2x2 regions.
  - The maximum value in each region becomes the output.
  - The stride defaults to the pooling window size (i.e. stride=2), so the output size is halved.
- Worked example:
  - Input shape: [batch_size, 6, 28, 28].
  - Output size: $\left\lfloor \frac{28 - 2}{2} \right\rfloor + 1 = 14$.
  - Output tensor shape: [batch_size, 6, 14, 14].

Why take the maximum?
- It keeps the most salient feature: the maximum is the strongest activation in the region, so important features (edges, textures) are preserved.
- It suppresses noise: non-maximal values are discarded, reducing the influence of noise.
4. The shape changes at a glance
| Operation | Input shape | Output shape | Key role |
|---|---|---|---|
| Convolution (conv1) | [1, 1, 32, 32] | [1, 6, 28, 28] | Extracts local features, increases the channel count |
| ReLU | [1, 6, 28, 28] | [1, 6, 28, 28] | Introduces non-linearity, filters out negative values |
| Max pooling | [1, 6, 28, 28] | [1, 6, 14, 14] | Reduces resolution, improves robustness |
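The shape changes in the table can be checked directly (a sketch; the input is random):

```python
import torch

conv1 = torch.nn.Conv2d(1, 6, 5)
relu = torch.nn.ReLU()
pool = torch.nn.MaxPool2d(2)

x = torch.rand(1, 1, 32, 32)
x = conv1(x); print(x.shape)          # torch.Size([1, 6, 28, 28])
x = relu(x);  print(x.shape)          # torch.Size([1, 6, 28, 28])
x = pool(x);  print(x.shape)          # torch.Size([1, 6, 14, 14])
```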
5. Why are these steps needed?
- Convolutional layer:
  - Extracts spatial features (edges, corners) through local receptive fields.
  - Uses multiple kernels (output channels) to capture different feature patterns.
- ReLU:
  - Overcomes the limits of a purely linear model, letting the network fit complex functions.
- Pooling layer:
  - Reduces the number of parameters, helping prevent overfitting.
  - Makes the model more robust to small translations/deformations of the input ("approximate invariance").
- There are convolutional layers for addressing 1D, 2D, and 3D tensors. There are also many more optional arguments for a conv layer constructor, including stride length (e.g., only scanning every second or every third position in the input), padding (so you can scan out to the edges of the input), and more. See the documentation for more information; a short sketch of these arguments follows.
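A short sketch of the stride and padding arguments (the channel counts are arbitrary):

```python
import torch

x = torch.rand(1, 3, 32, 32)                        # one 3-channel 32x32 image

# stride=2: only scan every second position -> roughly halves the spatial size
conv_strided = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2)
print(conv_strided(x).shape)                        # torch.Size([1, 8, 15, 15])

# padding=1 with a 3x3 kernel: the output keeps the input's spatial size
conv_padded = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
print(conv_padded(x).shape)                         # torch.Size([1, 8, 32, 32])
```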
Recurrent Layers
- Recurrent neural networks (or RNNs) are used for sequential data - anything from time-series measurements from a scientific instrument to natural language sentences to DNA nucleotides. An RNN does this by maintaining a hidden state that acts as a sort of memory for what it has seen in the sequence so far.
- The internal structure of an RNN layer - or its variants, the LSTM (long short-term memory) and GRU (gated recurrent unit) - is moderately complex and beyond the scope of this video, but we'll show you what one looks like in action with an LSTM-based part-of-speech tagger (a type of classifier that tells you if a word is a noun, verb, etc.):
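A sketch of such a tagger, following the pattern used in the Sequence Models and LSTM Networks tutorial (the exact code there may differ):

```python
import torch
import torch.nn.functional as F

class LSTMTagger(torch.nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super().__init__()
        # Maps word indices to dense embedding_dim-dimensional vectors
        self.word_embeddings = torch.nn.Embedding(vocab_size, embedding_dim)
        # The LSTM takes word embeddings as inputs and outputs hidden states of size hidden_dim
        self.lstm = torch.nn.LSTM(embedding_dim, hidden_dim)
        # The linear layer maps from hidden state space to tag space
        self.hidden2tag = torch.nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        return F.log_softmax(tag_space, dim=1)

tagger = LSTMTagger(embedding_dim=6, hidden_dim=6, vocab_size=9, tagset_size=3)
print(tagger(torch.tensor([0, 3, 5, 1])).shape)     # torch.Size([4, 3]) - one row of tag scores per word
```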
The constructor has four arguments:
- vocab_size is the number of words in the input vocabulary. Each word is a one-hot vector (or unit vector) in a vocab_size-dimensional space.
- tagset_size is the number of tags in the output set.
- embedding_dim is the size of the embedding space for the vocabulary. An embedding maps a vocabulary onto a low-dimensional space, where words with similar meanings are close together in the space.
- hidden_dim is the size of the LSTM's memory.
The input will be a sentence with the words represented as indices of one-hot vectors. The embedding layer will then map these down to an embedding_dim-dimensional space. The LSTM takes this sequence of embeddings and iterates over it, fielding an output vector of length hidden_dim. The final linear layer acts as a classifier; applying log_softmax() to the output of the final layer converts the output into a normalized set of estimated probabilities that a given word maps to a given tag.
If you’d like to see this network in action, check out the Sequence Models and LSTM Networks tutorial on pytorch.org.
Transformers
Transformers are multi-purpose networks that have taken over the state of the art in NLP with models like BERT. A discussion of transformer architecture is beyond the scope of this video, but PyTorch has a Transformer class that allows you to define the overall parameters of a transformer model - the number of attention heads, the number of encoder & decoder layers, dropout and activation functions, etc. (You can even build the BERT model from this single class, with the right parameters!) The torch.nn.Transformer class also has classes to encapsulate the individual components (TransformerEncoder, TransformerDecoder) and subcomponents (TransformerEncoderLayer, TransformerDecoderLayer). For details, check out the documentation on transformer classes, and the relevant tutorial on pytorch.org.
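A minimal sketch of instantiating the class (the argument values here are arbitrary, chosen only to show the constructor's knobs):

```python
import torch

model = torch.nn.Transformer(
    d_model=512,             # feature/embedding dimension
    nhead=8,                 # number of attention heads
    num_encoder_layers=6,
    num_decoder_layers=6,
    dropout=0.1,
    activation='relu',
)

src = torch.rand(10, 32, 512)    # (source sequence length, batch, d_model)
tgt = torch.rand(20, 32, 512)    # (target sequence length, batch, d_model)
print(model(src, tgt).shape)     # torch.Size([20, 32, 512])
```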
Other Layers and Functions
Data Manipulation Layers
There are other layer types that perform important functions in models, but don’t participate in the learning process themselves.
Max pooling
Max pooling (and its twin, min pooling) reduce a tensor by combining cells, and assigning the maximum value of the input cells to the output cell. (We saw this earlier in the CNN walkthrough.) For example:
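A sketch along the lines of the original example (the tensor is random, so the printed values will differ):

```python
import torch

my_tensor = torch.rand(1, 6, 6) * 20 + 5
print(my_tensor)

maxpool_layer = torch.nn.MaxPool2d(3)    # 3x3 window -> each output cell covers one quadrant of the 6x6 map
print(maxpool_layer(my_tensor))          # shape (1, 2, 2)
```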
If you look closely at the values above, you’ll see that each of the values in the maxpooled output is the maximum value of each quadrant of the 6x6 input.
Normalization layers
Normalization layers re-center and normalize the output of one layer before feeding it to another. Centering and scaling the intermediate tensors has a number of beneficial effects, such as letting you use higher learning rates without exploding/vanishing gradients.
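A sketch along the lines of the original example (random values again, scaled and shifted so the mean lands near 15):

```python
import torch

my_tensor = torch.rand(1, 4, 4) * 20 + 5   # values roughly in [5, 25]
print(my_tensor)
print(my_tensor.mean())                    # around 15

norm_layer = torch.nn.BatchNorm1d(4)       # normalizes each of the 4 channels
normed_tensor = norm_layer(my_tensor)
print(normed_tensor)
print(normed_tensor.mean())                # very close to zero
```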
Running the cell above, we've added a large scaling factor and offset to an input tensor; you should see the input tensor's mean() somewhere in the neighborhood of 15. After running it through the normalization layer, you can see that the values are smaller, and grouped around zero - in fact, the mean should be very small (on the order of 1e-8).
This is beneficial because many activation functions (discussed below) have their strongest gradients near 0, but sometimes suffer from vanishing or exploding gradients for inputs that drive them far away from zero. Keeping the data centered around the area of steepest gradient will tend to mean faster, better learning and higher feasible learning rates.
Dropout layers
Dropout layers are a tool for encouraging sparse representations in your model - that is, pushing it to do inference with less data.
Dropout layers work by randomly setting parts of the input tensor to zero during training - dropout layers are always turned off for inference. This forces the model to learn against this masked or reduced dataset. For example:
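A sketch of the example (random input, so each run differs):

```python
import torch

my_tensor = torch.rand(1, 4, 4)

dropout = torch.nn.Dropout(p=0.4)
print(dropout(my_tensor))    # roughly 40% of entries zeroed, survivors scaled by 1/(1-p)
print(dropout(my_tensor))    # a different random mask on every call (in training mode)
```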
Above, you can see the effect of dropout on a sample tensor. You can use the optional p argument to set the probability of an individual weight dropping out; if you don't, it defaults to 0.5.
Activation Functions
Activation functions make deep learning possible. A neural network is really a program - with many parameters - that simulates a mathematical function. If all we did was multiply tensors by layer weights repeatedly, we could only simulate linear functions; further, there would be no point to having many layers, as the whole network could be reduced to a single matrix multiplication. Inserting non-linear activation functions between layers is what allows a deep learning model to simulate any function, rather than just linear ones.
torch.nn.Module has objects encapsulating all of the major activation functions including ReLU and its many variants, Tanh, Hardtanh, Sigmoid, and more. It also includes other functions, such as Softmax, that are most useful at the output stage of a model.
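A quick sketch applying a few of them (the input values are arbitrary):

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

print(torch.nn.ReLU()(x))            # negatives clamped to 0
print(torch.nn.Tanh()(x))            # squashed into (-1, 1)
print(torch.nn.Sigmoid()(x))         # squashed into (0, 1)
print(torch.nn.Softmax(dim=0)(x))    # normalized so the outputs sum to 1
```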
Loss Functions
Loss functions tell us how far a model’s prediction is from the correct answer. PyTorch contains a variety of loss functions, including common MSE
(mean squared error = L2 norm), Cross Entropy Loss
and Negative Likelihood Loss
(useful for classifiers), and others.
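A short sketch of three of these losses in use (the predictions and labels are made up):

```python
import torch

# Mean squared error for regression-style outputs
mse = torch.nn.MSELoss()
pred = torch.tensor([2.5, 0.0, 2.0])
target = torch.tensor([3.0, -0.5, 2.0])
print(mse(pred, target))                          # mean of squared differences

# Cross entropy over raw logits for classification
ce = torch.nn.CrossEntropyLoss()
logits = torch.randn(4, 3)                        # 4 samples, 3 classes
labels = torch.tensor([0, 2, 1, 2])               # correct class per sample
print(ce(logits, labels))

# NLLLoss expects log-probabilities, e.g. the output of log_softmax()
nll = torch.nn.NLLLoss()
print(nll(torch.log_softmax(logits, dim=1), labels))   # same value as the line above
```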
Advanced: Replacing Layers
- waiting to be updated😀