bird-of-paradise committed (verified)
Commit d9aa228 · Parent: 1649e1e

First commit

Files changed (1): README.md (+215 -3)
---
license: mit
---

# Implementing Transformer from Scratch: A Step-by-Step Guide

This repository provides a detailed guide and implementation of the Transformer architecture from the ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762) paper. The implementation focuses on understanding each component through clear code, comprehensive testing, and visual aids.

## Table of Contents
1. [Summary and Key Insights](#summary-and-key-insights)
2. [Implementation Details](#implementation-details)
   - [Embedding and Positional Encoding](#embedding-and-positional-encoding)
   - [Transformer Attention](#transformer-attention)
   - [Feed-Forward Network](#feed-forward-network-ffn)
   - [Transformer Decoder](#transformer-decoder)
   - [Encoder-Decoder Stack](#encoder-decoder-stack)
   - [Full Transformer](#full-transformer)
3. [Testing](#testing)
4. [Visualizations](#visualizations)

## Quick Start
View the complete implementation and tutorial in the [Jupyter notebook](Transformer_Implementation_Tutorial.ipynb).

## Summary and Key Insights

### Paper Reference
- ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762) (Vaswani et al., 2017)
- Key sections:
  - 3.1: Encoder and Decoder Stacks
  - 3.2: Attention Mechanism
  - 3.3: Position-wise Feed-Forward Networks
  - 3.4: Embeddings and Softmax
  - 3.5: Positional Encoding
  - 5.4: Regularization (dropout strategy)

### Implementation Strategy
Break the architecture down into manageable pieces and add complexity gradually:

1. Start with foundational components:
   - Embedding + Positional Encoding
   - Single-head self-attention

2. Build up the attention mechanism:
   - Extend to multi-head attention
   - Add cross-attention capability
   - Implement attention masking

3. Construct larger components:
   - Encoder (self-attention + FFN)
   - Decoder (masked self-attention + cross-attention + FFN)

4. Combine into the final architecture:
   - Encoder-Decoder stack
   - Full Transformer with input/output layers

### Development Tips
1. Visualization and Planning:
   - Draw out tensor dimensions on paper
   - Sketch attention patterns and masks
   - Map each component back to the paper's equations
   - This helps catch dimension mismatches early!

2. Dimension Cheat Sheet (see the shape-check sketch after this list):
   - Input tokens: [batch_size, seq_len]
   - Embeddings: [batch_size, seq_len, d_model]
   - Attention matrices: [batch_size, num_heads, seq_len, seq_len]
   - FFN hidden layer: [batch_size, seq_len, d_ff]
   - Output logits: [batch_size, seq_len, vocab_size]

3. Common Pitfalls:
   - Forgetting to scale dot products by √d_k
   - Incorrect mask dimensions or application
   - Missing residual connections
   - Wrong order of layer norm and dropout
   - Tensor dimension mismatches in attention
   - Not handling padding properly

4. Performance Considerations:
   - Memory usage scales quadratically with sequence length
   - Attention computation is O(n²) in sequence length
   - Balance between d_model and num_heads
   - Trade-off between model size and batch size

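To make the cheat sheet concrete, here is a small, self-contained shape check using dummy tensors and standard PyTorch layers. The hyperparameter values and layer objects are illustrative assumptions, not the notebook's actual modules.

```python
import torch
import torch.nn as nn

# Illustrative hyperparameters (assumptions, not the notebook's settings)
batch_size, seq_len, d_model, num_heads, d_ff, vocab_size = 2, 10, 512, 8, 2048, 1000

tokens = torch.randint(0, vocab_size, (batch_size, seq_len))   # [batch_size, seq_len]
emb = nn.Embedding(vocab_size, d_model)(tokens)                # [batch_size, seq_len, d_model]

# Attention weights: one [seq_len, seq_len] matrix per head
attn = torch.softmax(torch.randn(batch_size, num_heads, seq_len, seq_len), dim=-1)

hidden = nn.Linear(d_model, d_ff)(emb)                         # [batch_size, seq_len, d_ff]
logits = nn.Linear(d_model, vocab_size)(emb)                   # [batch_size, seq_len, vocab_size]

assert tokens.shape == (batch_size, seq_len)
assert emb.shape == (batch_size, seq_len, d_model)
assert attn.shape == (batch_size, num_heads, seq_len, seq_len)
assert hidden.shape == (batch_size, seq_len, d_ff)
assert logits.shape == (batch_size, seq_len, vocab_size)
```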
## Implementation Details

### Embedding and Positional Encoding
This implements the input embedding and the sinusoidal positional encoding from Sections 3.4 and 3.5 of the paper. Key points:
- The embedding dimension can differ from the model dimension (using a projection)
- Positional encoding uses sine and cosine functions
- Scale embeddings by √d_model
- Apply dropout to the sum of embeddings and positional encodings

Implementation tips:
- Use `nn.Embedding` for token embeddings
- Store the scaling factor as a float during initialization
- Remember to expand the positional encoding over the batch dimension
- Add an assertion that inputs have dtype torch.long

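A minimal sketch of such a module, assuming a fixed maximum length `max_len` and an embedding dimension equal to d_model (the optional projection mentioned above is omitted); the class and parameter names are illustrative, not the notebook's:

```python
import math
import torch
import torch.nn as nn

class EmbeddingWithPositionalEncoding(nn.Module):
    """Token embedding scaled by sqrt(d_model) plus sinusoidal positional encoding."""

    def __init__(self, vocab_size: int, d_model: int, max_len: int = 5000, dropout: float = 0.1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.scale = math.sqrt(d_model)            # store the scaling factor at init
        self.dropout = nn.Dropout(dropout)

        # Precompute the sinusoidal table: PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # [1, max_len, d_model] so it broadcasts over the batch

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        assert tokens.dtype == torch.long, "expected token indices of dtype torch.long"
        x = self.embedding(tokens) * self.scale      # [batch, seq_len, d_model]
        x = x + self.pe[:, : tokens.size(1)]         # add positional encoding for the first seq_len positions
        return self.dropout(x)
```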
### Transformer Attention
Implements the core attention mechanism from Section 3.2.1. Formula: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V

Key points:
- Supports both self-attention and cross-attention
- Handles different sequence lengths for encoder/decoder
- Scales dot products by 1/√d_k
- Applies attention masking before softmax

Implementation tips:
- Use separate Q, K, V projections
- Handle masking through addition (not masked_fill)
- Remember to reshape for multi-head attention
- Keep track of tensor dimensions at each step

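A rough sketch of the scaled dot-product core with additive masking, as recommended above; the multi-head reshaping and the separate Q/K/V projections are left out, and all names are illustrative:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """q: [..., len_q, d_k], k: [..., len_k, d_k], v: [..., len_k, d_v].
    mask (optional): additive mask broadcastable to [..., len_q, len_k],
    with 0 for allowed positions and -inf for blocked ones."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # [..., len_q, len_k]
    if mask is not None:
        scores = scores + mask                           # masking by addition, not masked_fill
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights
```

Cross-attention falls out of the same function: the queries come from the decoder while the keys and values come from the encoder output, so len_q and len_k may differ.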
### Feed-Forward Network (FFN)
Implements the position-wise feed-forward network from Section 3.3: FFN(x) = max(0, xW₁ + b₁)W₂ + b₂

Key points:
- Two linear transformations with ReLU in between
- Inner layer dimension (d_ff) is typically 2048
- Applied identically to each position

Implementation tips:
- Use nn.Linear for the transformations
- Remember to include bias terms
- Position-wise means the same transformation for each position
- Dimension flow: d_model → d_ff → d_model

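A minimal sketch of the FFN as described (class and parameter names are assumptions):

```python
import torch
import torch.nn as nn

class PositionwiseFeedForward(nn.Module):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied independently at every position."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)   # d_model -> d_ff (bias included by default)
        self.linear2 = nn.Linear(d_ff, d_model)   # d_ff -> d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch_size, seq_len, d_model]; nn.Linear acts on the last dimension,
        # so the same weights are applied at every position.
        return self.linear2(torch.relu(self.linear1(x)))
```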
### Transformer Decoder
Implements the decoder layer from Section 3.1, with three sub-layers:
- Masked multi-head self-attention
- Multi-head cross-attention over the encoder output
- Position-wise feed-forward network

Key points:
- Self-attention uses causal masking
- Cross-attention allows attending to all encoder outputs
- Each sub-layer is followed by a residual connection and layer normalization

Key implementation detail for causal masking:
- Create the causal mask using an upper-triangular matrix:
```python
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1)
mask = mask.masked_fill(mask == 1, float('-inf'))
```

This creates a pattern where position i can only attend to positions ≤ i. Using -inf ensures zero attention to future positions after the softmax. Visualization of the mask for seq_len=5:\
[[0, -inf, -inf, -inf, -inf],\
[0, 0, -inf, -inf, -inf],\
[0, 0, 0, -inf, -inf],\
[0, 0, 0, 0, -inf],\
[0, 0, 0, 0, 0]]

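The mask built this way is then applied by addition to the raw attention scores, matching the "masking through addition" tip from the attention section. A brief illustrative snippet, assuming `torch` is imported and `seq_len` and `mask` come from the code above:

```python
# Raw attention logits, shape [batch_size, num_heads, seq_len, seq_len] (illustrative values)
scores = torch.randn(2, 8, seq_len, seq_len)
masked_scores = scores + mask                 # broadcasts over the batch and head dimensions
weights = torch.softmax(masked_scores, dim=-1)
# Each row now assigns zero probability to future positions (the -inf entries).
```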
Implementation tips:
- Order of operations matters (masking before softmax)
- Each attention layer has its own projections
- Remember to pass the encoder outputs for cross-attention
- Be careful with mask dimensions in self-attention and cross-attention

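A condensed sketch of how the three sub-layers are wired together, with the residual-then-normalize ordering used in the original paper. For brevity it uses PyTorch's built-in nn.MultiheadAttention; the notebook's own attention module (with its separate Q/K/V projections) would take its place, and all names are illustrative:

```python
import torch
import torch.nn as nn

class DecoderLayer(nn.Module):
    """Masked self-attention -> cross-attention over the encoder output -> FFN,
    each followed by a residual connection and layer normalization."""

    def __init__(self, d_model: int = 512, num_heads: int = 8, d_ff: int = 2048, dropout: float = 0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, dropout=dropout, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_out, causal_mask=None):
        # 1) Masked self-attention over the decoder input
        attn_out, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + self.dropout(attn_out))
        # 2) Cross-attention: queries from the decoder, keys/values from the encoder output
        attn_out, _ = self.cross_attn(x, enc_out, enc_out)
        x = self.norm2(x + self.dropout(attn_out))
        # 3) Position-wise feed-forward network
        return self.norm3(x + self.dropout(self.ffn(x)))
```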
### Encoder-Decoder Stack
Implements the full stack of encoder and decoder layers from Section 3.1.

Key points:
- Multiple encoder and decoder layers (typically 6 of each)
- The final encoder output feeds into every decoder layer's cross-attention
- Residual connections are maintained throughout the stack

Implementation tips:
- Use nn.ModuleList for the layer stacks
- Share the encoder output across decoder layers
- Maintain consistent masking throughout
- Handle padding masks separately from causal masks

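A compact sketch of the stack, assuming encoder- and decoder-layer modules with the interfaces used above; the factory arguments and class name are illustrative, not the notebook's:

```python
import torch.nn as nn

class EncoderDecoderStack(nn.Module):
    """Runs the source through N encoder layers, then feeds the final encoder
    output into the cross-attention of every decoder layer."""

    def __init__(self, encoder_layer_fn, decoder_layer_fn, num_layers: int = 6):
        super().__init__()
        # nn.ModuleList (not a plain Python list) registers each layer's parameters
        self.encoder_layers = nn.ModuleList([encoder_layer_fn() for _ in range(num_layers)])
        self.decoder_layers = nn.ModuleList([decoder_layer_fn() for _ in range(num_layers)])

    def forward(self, src, tgt, causal_mask=None):
        memory = src
        for layer in self.encoder_layers:
            memory = layer(memory)                 # self-attention + FFN
        x = tgt
        for layer in self.decoder_layers:
            x = layer(x, memory, causal_mask)      # same encoder output for every decoder layer
        return x
```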
### Full Transformer
Combines all components into the complete architecture:
- Input embeddings for source and target
- Positional encoding
- Encoder-decoder stack
- Final linear and softmax layer

Key points:
- Handles different vocabulary sizes for source and target
- Shifts decoder inputs for teacher forcing
- Projects outputs to the target vocabulary size
- Applies log softmax for training stability

Implementation tips:
- Handle start tokens for the decoder input
- Maintain separate embeddings for source and target
- Remember to scale the embeddings
- Consider sharing embedding weights with the output layer

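The "shift" for teacher forcing simply prepends a start token to the target and drops its last token, so at step i the decoder sees tokens 0…i−1 and is trained to predict token i. A minimal sketch, where `bos_id` and the function name are assumptions:

```python
import torch

def shift_right(tgt: torch.Tensor, bos_id: int) -> torch.Tensor:
    """tgt: [batch_size, tgt_len] gold target tokens.
    Returns a decoder input of the same shape: <bos> followed by tgt[:, :-1]."""
    bos = torch.full((tgt.size(0), 1), bos_id, dtype=tgt.dtype, device=tgt.device)
    return torch.cat([bos, tgt[:, :-1]], dim=1)

# Example: tgt = [[5, 6, 7]] with bos_id=1 gives decoder input [[1, 5, 6]],
# and the loss compares the decoder outputs against the original [[5, 6, 7]].
```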
## Testing
Our implementation includes comprehensive tests for each component:

- Shape preservation through layers
- Masking effectiveness
- Attention pattern verification
- Forward/backward pass validation
- Parameter and gradient checks

See the notebook for detailed test implementations and results.

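As an illustration of the kind of check involved, here is a minimal shape-preservation and backward-pass test for the hypothetical DecoderLayer sketched earlier in this README (not the notebook's test suite):

```python
import torch

def test_decoder_layer_shapes_and_gradients():
    layer = DecoderLayer(d_model=32, num_heads=4, d_ff=64)   # hypothetical module from the sketch above
    x = torch.randn(2, 7, 32, requires_grad=True)            # decoder input  [batch, tgt_len, d_model]
    memory = torch.randn(2, 9, 32)                           # encoder output [batch, src_len, d_model]
    out = layer(x, memory)
    assert out.shape == x.shape                              # shape is preserved through the layer
    out.sum().backward()                                     # backward pass runs
    assert x.grad is not None and torch.isfinite(x.grad).all()
```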
## Visualizations
The implementation includes visualizations of:

- Attention patterns
- Positional encodings
- Masking effects
- Layer connectivity

These visualizations help in understanding the inner workings of the Transformer and in verifying that the implementation is correct.

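For example, the sinusoidal positional encoding can be inspected with a quick heatmap; a sketch assuming matplotlib is available (the notebook's own plots may differ):

```python
import math
import torch
import matplotlib.pyplot as plt

d_model, max_len = 128, 100
position = torch.arange(max_len).unsqueeze(1).float()
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe = torch.zeros(max_len, d_model)
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)

plt.imshow(pe, aspect="auto", cmap="viridis")   # rows: positions, columns: encoding dimensions
plt.xlabel("encoding dimension")
plt.ylabel("position")
plt.title("Sinusoidal positional encoding")
plt.colorbar()
plt.show()
```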
For detailed code and interactive examples, please refer to the complete implementation notebook.