    <link rel="stylesheet" href="../../css/style.css">
    <link rel="icon" href="../../images/img0.png" type="image/png">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.1/css/all.min.css" integrity="sha512-DTOQO9RWCH3ppGqcWaEA1BIZOC6xxalwEsw9c2QQeAIftl+Vegovlnee1c9QX4TctnWMn13TZye+giMm8e2LwA==" crossorigin="anonymous" referrerpolicy="no-referrer" />
+   <link rel="stylesheet" href="../../css/prism.css">
    <script src="../../js/main.js" defer></script>
-
-   <!-- <link rel="stylesheet" href="../../css/prism-line-numbers.css"> -->
-   <!-- <link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.24.1/themes/prism.css" rel="stylesheet" /> -->
-   <link rel="stylesheet" href="../../css/prism.css">
-   <!-- <link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/11.4.0/styles/default.min.css"> -->
+   <script src="../../js/prism.js" defer></script>
</head>
<body>
    <header class="header">
@@ -73,14 +70,261 @@ <h3>Tensors</h3>
                Tensors are multi-dimensional arrays, similar to matrices in <a href="https://numpy.org/">numpy</a>, but with additional attributes that enable parallel computing to accelerate calculations.
            </p>

-            <div class="code-box2 left-aligned">
+            <p class="left-aligned">
+                First, let's import the necessary libraries:
+            </p>
+
+            <div class="left-aligned">
                <pre class="line-numbers"><code class="language-python">import torch
+import numpy as np</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                Tensors can be created in various ways. For example, we can create one directly from a list or a nested list:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">data = [[1, 2], [3, 4]]
+data_tensor = torch.tensor(data)</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                From a numpy array:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">data_array = np.array(data)
+data_array_tensor = torch.from_numpy(data_array)</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                From another tensor:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">ones_tensor = torch.ones_like(data_tensor)
+random_tensor = torch.rand_like(data_tensor, dtype=torch.float)
+
+print(f"Ones tensor: \n{ones_tensor}")
+print(f"Random tensor: \n{random_tensor}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">Ones tensor:
+tensor([[1, 1],
+        [1, 1]])
+
+Random tensor:
+tensor([[0.2302, 0.7488],
+        [0.0755, 0.3460]])</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                With random or constant values:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">shape = (2, 4)
+random_normal_tensor = torch.randn(shape)
+new_ones_tensor = ones_tensor.new_ones(shape, dtype=torch.double)
+empty_tensor = torch.empty(shape)
+
+print(f"Random normal tensor: \n{random_normal_tensor}")
+print(f"New ones tensor: \n{new_ones_tensor}")
+print(f"Empty tensor: \n{empty_tensor}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">Random normal tensor:
+tensor([[ 0.5631, -0.0305, -1.2209, -0.8312],
+        [ 0.6690, -0.6183,  0.3573, -0.4407]])
+
+New ones tensor:
+tensor([[1., 1., 1., 1.],
+        [1., 1., 1., 1.]], dtype=torch.float64)
+
+Empty tensor:
+tensor([[0., 0., 0., 0.],
+        [0., 0., 0., 0.]])</code></pre>
+            </div>
+
+            <h3>Attributes</h3>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">tensor = torch.rand(4, 7)
+
+print(f"Shape of tensor: {tensor.shape}")
+print(f"Size of tensor: {tensor.size()}")
+print(f"Datatype of tensor: {tensor.dtype}")
+print(f"Type of tensor: {type(tensor)}")
+print(f"Device tensor is stored on: {tensor.device}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">Shape of tensor: torch.Size([4, 7])
+Size of tensor: torch.Size([4, 7])
+Datatype of tensor: torch.float32
+Type of tensor: <class 'torch.Tensor'>
+Device tensor is stored on: cpu</code></pre>
+            </div>
+
+            <h3>Operations</h3>
+
+            <p>
+                There are over 100 tensor operations, including transposing, indexing, slicing, mathematical operations, and linear algebra. They are all documented <a href="https://pytorch.org/docs/stable/torch.html">here</a>.
+            </p>
+
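+            <p class="left-aligned">
+                For instance, transposing and matrix multiplication, two of the operations mentioned above (the variable names here are just illustrative):
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python"># Transpose a 2 x 3 matrix into a 3 x 2 matrix
+m = torch.randn(2, 3)
+m_t = m.T
+
+# Matrix multiplication: (2 x 3) @ (3 x 2) -> (2 x 2)
+product = m @ m_t  # equivalent to torch.matmul(m, m_t)
+print(f"Transposed shape: {m_t.shape}")
+print(f"Product shape: {product.shape}")</code></pre>
+            </div>
+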
+            <p class="left-aligned">
+                Adding two tensors element-wise:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">x = torch.randn(3, 3)
+y = torch.ones(3, 3)
+
+print(f"x: \n{x}")
+print(f"y: \n{y}")
+print(f"1. x + y: \n{x + y}")
+print(f"2. x + y: \n{torch.add(x, y)}")
+print(f"3. x + y: \n{x.add(y)}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">x:
+tensor([[ 0.4901,  0.3201, -0.1917],
+        [ 0.2385,  1.0622,  1.7395],
+        [ 1.4905,  0.3360, -0.0343]])
+
+y:
+tensor([[1., 1., 1.],
+        [1., 1., 1.],
+        [1., 1., 1.]])
+
+1. x + y:
+tensor([[1.4901, 1.3201, 0.8083],
+        [1.2385, 2.0622, 2.7395],
+        [2.4905, 1.3360, 0.9657]])
+
+2. x + y:
+tensor([[1.4901, 1.3201, 0.8083],
+        [1.2385, 2.0622, 2.7395],
+        [2.4905, 1.3360, 0.9657]])
+
+3. x + y:
+tensor([[1.4901, 1.3201, 0.8083],
+        [1.2385, 2.0622, 2.7395],
+        [2.4905, 1.3360, 0.9657]])</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                In-place operations (their names end with an underscore, like <strong>add_</strong>) modify their operand directly:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">x = torch.randn(3, 3)
+y = torch.ones(3, 3)
+
+print(f"x: \n{x}")
+print(f"In-place addition: \n{x.add_(y)}")
+print(f"x: \n{x}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">x:
+tensor([[-1.6163, -1.6534, -1.0660],
+        [ 2.2851, -0.5562,  0.0684],
+        [ 1.5171, -0.8063,  1.4790]])

-x_ones = torch.ones_like(x_data)
-print(f"Ones Tensor: \n {x_ones} \n")
-
-x_rand = torch.rand_like(x_data, dtype=torch.float)
-print(f"Random Tensor: \n {x_rand} \n")</code></pre>
+In-place addition:
+tensor([[-0.6163, -0.6534, -0.0660],
+        [ 3.2851,  0.4438,  1.0684],
+        [ 2.5171,  0.1937,  2.4790]])
+
+x:
+tensor([[-0.6163, -0.6534, -0.0660],
+        [ 3.2851,  0.4438,  1.0684],
+        [ 2.5171,  0.1937,  2.4790]])</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                Tensors can be indexed and sliced like numpy arrays or lists:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">x = torch.ones(4, 4)
+print(f"2nd column of x: \n{x[:, 1]}")
+
+x[:, 1] = 0
+print(f"x: \n{x}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">2nd column of x:
+tensor([1., 1., 1., 1.])
+
+x:
+tensor([[1., 0., 1., 1.],
+        [1., 0., 1., 1.],
+        [1., 0., 1., 1.],
+        [1., 0., 1., 1.]])</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                To join two tensors, use <strong>torch.cat</strong> or <strong>torch.stack</strong>:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">print(f"Concatenating 2 tensors along the 2nd dimension: \n{torch.cat([x, x], dim=1)}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">Concatenating 2 tensors along the 2nd dimension:
+tensor([[1., 0., 1., 1., 1., 0., 1., 1.],
+        [1., 0., 1., 1., 1., 0., 1., 1.],
+        [1., 0., 1., 1., 1., 0., 1., 1.],
+        [1., 0., 1., 1., 1., 0., 1., 1.]])</code></pre>
+            </div>
+
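+            <p class="left-aligned">
+                Unlike <strong>torch.cat</strong>, which joins tensors along an existing dimension, <strong>torch.stack</strong> joins them along a new one. A minimal sketch of the difference, reusing the x defined above:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python"># cat keeps the number of dimensions: (8, 4) here
+print(torch.cat([x, x], dim=0).shape)
+
+# stack adds a new dimension: (2, 4, 4) here
+print(torch.stack([x, x], dim=0).shape)</code></pre>
+            </div>
+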
+            <p class="left-aligned">
+                Tensors can also be moved to a GPU, whose massively parallel hardware makes many operations much faster than on a CPU:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python"># Move the tensor to the GPU if CUDA is available
+if torch.cuda.is_available():
+    x = x.to('cuda')</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                Note that macOS does not natively support NVIDIA GPUs with CUDA. Recent Apple Silicon devices instead use Metal for GPU acceleration, which PyTorch supports (in recent versions) through the MPS backend:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">tensor = torch.randn(3, 3)
+
+print(f"CUDA is available: {torch.cuda.is_available()}")
+print(f"MPS is available: {torch.backends.mps.is_available()}")
+
+if torch.backends.mps.is_available():
+    mps_device = torch.device('mps')
+    tensor = tensor.to(mps_device)
+    print(f"MPS device tensor: \n{tensor}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">CUDA is available: False
+MPS is available: True
+
+MPS device tensor:
+tensor([[ 1.9530,  0.2365,  0.0942],
+        [ 2.0012,  0.1181,  1.2998],
+        [-0.6547, -0.0198,  0.4644]], device='mps:0')</code></pre>
            </div>

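+            <p class="left-aligned">
+                A common pattern is to pick the best available device once and move tensors to it, so the same code runs on CUDA, MPS, or CPU. This is a sketch of that pattern, not the only way to do it:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python"># Pick the best available device
+if torch.cuda.is_available():
+    device = torch.device('cuda')
+elif torch.backends.mps.is_available():
+    device = torch.device('mps')
+else:
+    device = torch.device('cpu')
+
+tensor = torch.randn(3, 3).to(device)
+print(f"Tensor is on: {tensor.device}")</code></pre>
+            </div>
+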
            <h3>Link with NumPy arrays</h3>
@@ -89,10 +333,67 @@ <h3>Link with NumPy arrays</h3>
                Tensors and <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html">NumPy's ndarrays</a> are closely related: on CPU they can share the same underlying memory, which lets users move data between them without creating new copies.
            </p>

+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">a = torch.ones(5)
+print(f"a: \n{a}")
+print(f"a is {type(a)}")
+
+b = a.numpy()
+print(f"b is {type(b)}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">a:
+tensor([1., 1., 1., 1., 1.])
+a is <class 'torch.Tensor'>
+b is <class 'numpy.ndarray'></code></pre>
+            </div>
+
+            <p>
+                In fact, the torch tensor and the numpy array share the same underlying memory on CPU: any in-place operation on a will change b in the same way, and vice versa, as the next example shows.
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">a.add_(1)
+print(f"a: \n{a}")
+print(f"b: \n{b}")</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">a:
+tensor([2., 2., 2., 2., 2.])
+
+b:
+[2. 2. 2. 2. 2.]</code></pre>
+            </div>
+
+            <p class="left-aligned">
+                To convert a numpy array to a tensor:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">a = np.ones(5)
+b = torch.from_numpy(a)
+
+print(f"a: \n{a}")
+print(f"b: \n{b}")
+print(type(a))
+print(type(b))</code></pre>
+            </div>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python">a:
+[1. 1. 1. 1. 1.]
+b:
+tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
+<class 'numpy.ndarray'>
+<class 'torch.Tensor'></code></pre>
+            </div>
+
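+            <p class="left-aligned">
+                The sharing also works in this direction: an in-place change to the numpy array is reflected in the tensor. A minimal sketch:
+            </p>
+
+            <div class="left-aligned">
+                <pre class="line-numbers"><code class="language-python"># In-place change to the numpy array also changes the tensor
+np.add(a, 1, out=a)
+print(f"a: \n{a}")  # [2. 2. 2. 2. 2.]
+print(f"b: \n{b}")  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)</code></pre>
+            </div>
+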
            <hr>

            <div class="blog-page-end-nav-container">
-                <a href="ArtificialNeuralNets.html" class="blog-page-end-nav-item blog-page-end-nav-item-left">
+                <a href="ComputerVision.html" class="blog-page-end-nav-item blog-page-end-nav-item-left">
                    <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
                        <path stroke-linecap="round" stroke-linejoin="round" d="M15.75 19.5L8.25 12l7.5-7.5" />
                    </svg>
@@ -104,7 +405,7 @@ <h3>Link with NumPy arrays</h3>
                <div class="blog-page-end-divider"></div>

-                <a href="next-article.html" class="blog-page-end-nav-item blog-page-end-nav-item-right">
+                <a href="ArtificialNeuralNets.html" class="blog-page-end-nav-item blog-page-end-nav-item-right">
                    <div>
                        <span class="blog-page-end-nav-title">Next Article</span>
                        <span class="blog-page-end-nav-desc">Artificial Neural Networks</span>
@@ -137,29 +438,5 @@ <h3>Link with NumPy arrays</h3>
        </div>
    </div>
</footer>
-
-<script src="../../js/prism.js"></script>
-<!-- <script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.24.1/prism.min.js"></script> -->
-<!-- <script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/11.4.0/highlight.min.js"></script> -->
-<!-- <script>hljs.highlightAll();</script> -->
-<!-- <script>
-    // Get the code block
-    var codeBlock = document.querySelector('.code-box2');
-
-    // Get the text content, trim leading spaces or newlines
-    codeBlock.innerHTML = codeBlock.innerHTML.trim();
-</script> -->
-<!-- <script>
-    var pre = document.querySelector('pre');
-
-    // insert a span in front of the first letter (the span will automatically close)
-    pre.innerHTML = pre.textContent.replace(/(\w)/, '<span>$1');
-
-    // get the new span's left offset:
-    var left = pre.querySelector('span').getClientRects()[0].left;
-
-    // move the code to the left, taking into account the body's margin:
-    pre.style.marginLeft = (-left + pre.getClientRects()[0].left) + 'px';
-</script> -->
</body>
</html>