train_log.txt
nichenji@b2240-05:~/building-autoencoders-in-Pytorch$ python main.py
============== Encoder ==============
Sequential(
(0): Conv2d (3, 32, kernel_size=(4, 4), stride=(4, 4))
(1): ReLU()
(2): Conv2d (32, 48, kernel_size=(3, 3), stride=(3, 3), padding=(2, 2))
(3): ReLU()
)
============== Decoder ==============
Sequential(
(0): ConvTranspose2d (48, 32, kernel_size=(3, 3), stride=(3, 3), padding=(2, 2))
(1): ReLU()
(2): ConvTranspose2d (32, 3, kernel_size=(4, 4), stride=(4, 4))
(3): Sigmoid()
)
Model moved to GPU in order to speed up training.
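
[Note: a minimal sketch, not the repository's actual main.py, of PyTorch modules that would produce the printout above; variable names and the CUDA check are assumptions. The comments trace a 3x32x32 input down to a 48x4x4 code and back.]

import torch
import torch.nn as nn

# Sketch only: structure inferred from the printed Sequential modules.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=4, stride=4),              # 3x32x32 -> 32x8x8
    nn.ReLU(),
    nn.Conv2d(32, 48, kernel_size=3, stride=3, padding=2),  # 32x8x8 -> 48x4x4
    nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(48, 32, kernel_size=3, stride=3, padding=2),  # 48x4x4 -> 32x8x8
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=4),              # 32x8x8 -> 3x32x32
    nn.Sigmoid(),  # pixel values in [0, 1]
)
if torch.cuda.is_available():
    encoder, decoder = encoder.cuda(), decoder.cuda()  # matches the GPU message above
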
Files already downloaded and verified
Files already downloaded and verified
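
[Note: the duplicated "Files already downloaded and verified" message is torchvision's, printed once for the training set and once for the test set; given the 32x32-compatible architecture above, the dataset is presumably CIFAR-10. Below is a sketch of the loading step, assuming a batch size of 25 (50,000 training images / 25 = 2,000 mini-batches, matching the single "[epoch, 2000]" print per epoch further down); paths and worker count are assumptions.]

import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.ToTensor()  # scales pixels to [0, 1], matching the Sigmoid output
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=25,
                                          shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
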
[1, 2000] loss: 0.692
[2, 2000] loss: 0.691
[3, 2000] loss: 0.691
[4, 2000] loss: 0.691
[5, 2000] loss: 0.688
[6, 2000] loss: 0.610
[7, 2000] loss: 0.592
[8, 2000] loss: 0.589
[9, 2000] loss: 0.589
[10, 2000] loss: 0.588
[11, 2000] loss: 0.587
[12, 2000] loss: 0.587
[13, 2000] loss: 0.587
[14, 2000] loss: 0.586
[15, 2000] loss: 0.586
[16, 2000] loss: 0.585
[17, 2000] loss: 0.585
[18, 2000] loss: 0.585
[19, 2000] loss: 0.583
[20, 2000] loss: 0.582
[21, 2000] loss: 0.580
[22, 2000] loss: 0.578
[23, 2000] loss: 0.575
[24, 2000] loss: 0.573
[25, 2000] loss: 0.572
[26, 2000] loss: 0.570
[27, 2000] loss: 0.569
[28, 2000] loss: 0.568
[29, 2000] loss: 0.567
[30, 2000] loss: 0.566
[31, 2000] loss: 0.565
[32, 2000] loss: 0.565
[33, 2000] loss: 0.564
[34, 2000] loss: 0.564
[35, 2000] loss: 0.564
[36, 2000] loss: 0.563
[37, 2000] loss: 0.563
[38, 2000] loss: 0.564
[39, 2000] loss: 0.563
[40, 2000] loss: 0.563
[41, 2000] loss: 0.563
[42, 2000] loss: 0.563
[43, 2000] loss: 0.562
[44, 2000] loss: 0.563
[45, 2000] loss: 0.563
[46, 2000] loss: 0.563
[47, 2000] loss: 0.562
[48, 2000] loss: 0.563
[49, 2000] loss: 0.562
[50, 2000] loss: 0.562
[51, 2000] loss: 0.562
[52, 2000] loss: 0.563
[53, 2000] loss: 0.562
[54, 2000] loss: 0.562
[55, 2000] loss: 0.562
[56, 2000] loss: 0.561
[57, 2000] loss: 0.561
[58, 2000] loss: 0.562
[59, 2000] loss: 0.562
[60, 2000] loss: 0.562
[61, 2000] loss: 0.561
[62, 2000] loss: 0.562
[63, 2000] loss: 0.562
[64, 2000] loss: 0.562
[65, 2000] loss: 0.562
[66, 2000] loss: 0.562
[67, 2000] loss: 0.562
[68, 2000] loss: 0.562
[69, 2000] loss: 0.561
[70, 2000] loss: 0.561
[71, 2000] loss: 0.562
[72, 2000] loss: 0.561
[73, 2000] loss: 0.561
[74, 2000] loss: 0.561
[75, 2000] loss: 0.561
[76, 2000] loss: 0.561
[77, 2000] loss: 0.561
[78, 2000] loss: 0.561
[79, 2000] loss: 0.561
[80, 2000] loss: 0.561
[81, 2000] loss: 0.561
[82, 2000] loss: 0.561
[83, 2000] loss: 0.561
[84, 2000] loss: 0.561
[85, 2000] loss: 0.561
[86, 2000] loss: 0.561
[87, 2000] loss: 0.561
[88, 2000] loss: 0.561
[89, 2000] loss: 0.560
[90, 2000] loss: 0.560
[91, 2000] loss: 0.560
[92, 2000] loss: 0.560
[93, 2000] loss: 0.561
[94, 2000] loss: 0.560
[95, 2000] loss: 0.560
[96, 2000] loss: 0.560
[97, 2000] loss: 0.560
[98, 2000] loss: 0.560
[99, 2000] loss: 0.560
[100, 2000] loss: 0.560
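
[Note: each line above is one print per epoch at mini-batch 2,000. The loss falls from about 0.69 (roughly ln 2, which is consistent with binary cross-entropy on an untrained Sigmoid output) to about 0.56, plateauing from around epoch 50 onward. Below is a sketch of a loop that would emit this format; the BCE criterion and the SGD hyperparameters are assumptions.]

import torch.nn as nn
import torch.optim as optim

criterion = nn.BCELoss()  # assumption: initial loss near ln 2 ~= 0.693 fits BCE
optimizer = optim.SGD(list(encoder.parameters()) + list(decoder.parameters()),
                      lr=0.001, momentum=0.9)  # hyperparameters are assumptions

for epoch in range(100):
    running_loss = 0.0
    for i, (inputs, _) in enumerate(trainloader):  # labels unused by an autoencoder
        inputs = inputs.cuda()
        optimizer.zero_grad()
        outputs = decoder(encoder(inputs))
        loss = criterion(outputs, inputs)          # reconstruct the input itself
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if (i + 1) % 2000 == 0:                    # once per epoch at batch_size=25
            print('[%d, %d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
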
Finished Training
Saving Model...
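
[Note: a sketch of the final save step; the state_dict approach and file names are assumptions, since the log does not show where the model is written. Either module can later be restored with load_state_dict() on a freshly constructed Sequential of the same shape.]

print('Saving Model...')
torch.save(encoder.state_dict(), 'encoder.pth')  # hypothetical file names
torch.save(decoder.state_dict(), 'decoder.pth')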