<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>2019 Reading List</title>
<url>/posts/353b91f9/</url>
<content><![CDATA[<p>In the new year, I want to keep up the habit of reading and writing: finish one book each month and write one set of reading notes (the "three ones" plan, for short).</p>
<span id="more"></span>
<p><img src="https://i.loli.net/2019/01/02/5c2cce1e92717.jpg" alt="booklist.jpg"></p>
<p>The 12 books I plan to read this year:</p>
<ol>
<li>The I Ching (易经)</li>
<li>The Republic (理想国)</li>
<li>Mathematical Principles of Natural Philosophy (自然哲学的数学原理)</li>
<li>Dream of the Red Chamber (红楼梦)</li>
<li>Walden (瓦尔登湖)</li>
<li>Science and Method (科学与方法)</li>
<li>A History of Science (科学史)</li>
<li>The Little Prince (小王子)</li>
<li>A History of Western Philosophy (西方哲学史)</li>
<li>A Short History of Chinese Philosophy (中国哲学简史)</li>
<li>One Two Three... Infinity (从一到无穷大)</li>
<li>The Path of Beauty (美的历程)</li>
</ol>
]]></content>
<categories>
<category>Essays</category>
</categories>
<tags>
<tag>Books</tag>
<tag>Reading</tag>
</tags>
</entry>
<entry>
<title>Add Two Numbers</title>
<url>/posts/64c8f80f/</url>
<content><![CDATA[<h5 id="Problem-Description"><a href="#Problem-Description" class="headerlink" title="Problem Description"></a>Problem Description</h5><p>You are given two non-empty linked lists representing two non-negative integers. The digits are stored in reverse order, and each node contains a single digit. Add the two numbers and return the sum as a linked list.</p>
<p>You may assume the two numbers do not contain any leading zero, except the number 0 itself.</p>
<span id="more"></span>
<h5 id="Example"><a href="#Example" class="headerlink" title="Example"></a>Example</h5><p>Input: (2 -> 4 -> 3) + (5 -> 6 -> 4)<br>Output: 7 -> 0 -> 8<br>Explanation: 342 + 465 = 807.</p>
<h5 id="Approach"><a href="#Approach" class="headerlink" title="Approach"></a>Approach</h5><p>There are two ways to solve this problem. The first converts each input list to a number, adds them, and converts the string of the sum back into a linked list. The second operates on the linked lists directly, digit by digit with a carry. On the whole, the first method is easier to understand and ran faster for me (116 ms vs. 288 ms), so it is the one I recommend.</p>
<h5 id="Implementation-Python3"><a href="#Implementation-Python3" class="headerlink" title="Implementation (Python 3)"></a>Implementation (Python 3)</h5><p>Method 1:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><code class="hljs python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">Solution</span>:<br> <span class="hljs-keyword">def</span> <span class="hljs-title function_">addTwoNumbers</span>(<span class="hljs-params">self, l1, l2</span>):<br> twoSum = <span class="hljs-built_in">str</span>(<span class="hljs-built_in">int</span>(self.getNumber(l1)) + <span class="hljs-built_in">int</span>(self.getNumber(l2)))<br> <span class="hljs-keyword">return</span> self.constructList(twoSum)<br> <br> <span class="hljs-keyword">def</span> <span class="hljs-title function_">getNumber</span>(<span class="hljs-params">self, head</span>):<br> current, res = head, []<br> <span class="hljs-keyword">while</span> current:<br> res.append(<span class="hljs-built_in">str</span>(current.val))<br> current = current.<span class="hljs-built_in">next</span><br> <span class="hljs-keyword">return</span> <span class="hljs-string">''</span>.join(res[::-<span class="hljs-number">1</span>]) <span class="hljs-comment"># reverse res</span><br><br> <span class="hljs-keyword">def</span> <span class="hljs-title function_">constructList</span>(<span class="hljs-params">self, num</span>):<br> head = ListNode(<span class="hljs-built_in">int</span>(num[-<span class="hljs-number">1</span>])) <span class="hljs-comment"># get the last element</span><br> current = head<br> <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(num) - <span class="hljs-number">2</span>, -<span class="hljs-number">1</span>, -<span class="hljs-number">1</span>): <span class="hljs-comment"># range(start, stop, step)</span><br> node = ListNode(<span class="hljs-built_in">int</span>(num[i]))<br> current.<span class="hljs-built_in">next</span> = node<br> current = current.<span class="hljs-built_in">next</span><br> <span 
class="hljs-keyword">return</span> head<br></code></pre></td></tr></table></figure>
<p>Method 2:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><code class="hljs python"><span class="hljs-comment"># Definition for singly-linked list.</span><br><span class="hljs-comment"># class ListNode(object):</span><br><span class="hljs-comment"># def __init__(self, x):</span><br><span class="hljs-comment"># self.val = x</span><br><span class="hljs-comment"># self.next = None</span><br><br><span class="hljs-keyword">class</span> <span class="hljs-title class_">Solution</span>(<span class="hljs-title class_ inherited__">object</span>):<br> <span class="hljs-keyword">def</span> <span class="hljs-title function_">addTwoNumbers</span>(<span class="hljs-params">self, l1, l2</span>):<br> <span class="hljs-string">"""</span><br><span class="hljs-string"> :type l1: ListNode</span><br><span class="hljs-string"> :type l2: ListNode</span><br><span class="hljs-string"> :rtype: ListNode</span><br><span class="hljs-string"> """</span><br> <br> <span class="hljs-comment"># Carry over place holder.</span><br> carry = <span class="hljs-number">0</span><br> <br> <span class="hljs-comment"># Create a dummy head, tail node to keep track of the start and end of the list</span><br> dummy_head = tail = ListNode(<span class="hljs-number">0</span>)<br> <br> <span class="hljs-comment"># While there are elements in either list we add them</span><br> <span class="hljs-keyword">while</span> l1 <span class="hljs-keyword">or</span> l2:<br> <br> <span class="hljs-comment"># Get the two element values, if there is not a node use 0</span><br> num_one = l1.val <span class="hljs-keyword">if</span> l1 <span class="hljs-keyword">else</span> <span class="hljs-number">0</span><br> num_two = l2.val <span class="hljs-keyword">if</span> l2 <span class="hljs-keyword">else</span> <span class="hljs-number">0</span><br> <br> <span class="hljs-comment"># Get the new total with two numbers plus the carry over</span><br> new_sum = num_one + num_two + carry<br> <br> <span class="hljs-comment"># Check to 
see if the number will create a carry</span><br> <span class="hljs-keyword">if</span> new_sum > <span class="hljs-number">9</span>:<br> <span class="hljs-comment"># Set the tail node equal to the new node.</span><br> tail.<span class="hljs-built_in">next</span> = ListNode(new_sum - <span class="hljs-number">10</span>)<br> <span class="hljs-comment"># Set a carry because the number was larger than 10</span><br> carry = <span class="hljs-number">1</span><br> <span class="hljs-keyword">else</span>:<br> tail.<span class="hljs-built_in">next</span> = ListNode(new_sum)<br> carry = <span class="hljs-number">0</span><br> <br> <span class="hljs-comment"># Set the tail node to the new tail.</span><br> tail = tail.<span class="hljs-built_in">next</span> <br> <br> <span class="hljs-comment"># Set the current nodes to the next number</span><br> <span class="hljs-keyword">if</span> l1:<br> l1 = l1.<span class="hljs-built_in">next</span><br> <span class="hljs-keyword">if</span> l2:<br> l2 = l2.<span class="hljs-built_in">next</span><br> <br> <span class="hljs-comment"># Once the adding has been completed if there is still a carry append a new node</span><br> <span class="hljs-keyword">if</span> carry:<br> tail.<span class="hljs-built_in">next</span> = ListNode(carry)<br> <br> <span class="hljs-comment"># Return the first number set, dummy head was just a place holder. The tail would point to the last node and not the first number in the linked list.</span><br> <span class="hljs-keyword">return</span> dummy_head.<span class="hljs-built_in">next</span><br></code></pre></td></tr></table></figure>
<h5 id="References"><a href="#References" class="headerlink" title="References"></a>References</h5><ol>
<li><a href="https://stackoverflow.com/questions/31633635/what-is-the-meaning-of-inta-1-in-python">https://stackoverflow.com/questions/31633635/what-is-the-meaning-of-inta-1-in-python</a> What is the meaning of “int(a[::-1])” in Python?</li>
<li><a href="https://stackoverflow.com/questions/930397/getting-the-last-element-of-a-list-in-python">https://stackoverflow.com/questions/930397/getting-the-last-element-of-a-list-in-python</a> Getting the last element of a list in Python</li>
<li><a href="https://www.pythoncentral.io/pythons-range-function-explained/">https://www.pythoncentral.io/pythons-range-function-explained/</a> Python’s range() Function Explained</li>
<li><a href="https://leetcode.com/problems/add-two-numbers/">https://leetcode.com/problems/add-two-numbers/</a> Add Two Numbers</li>
</ol>
]]></content>
<categories>
<category>Algorithms</category>
</categories>
<tags>
<tag>Algorithms</tag>
<tag>python</tag>
</tags>
</entry>
<entry>
<title>[Paper Submission 2021] Submission Deadlines for 2021 Machine Learning and Data Mining Conferences</title>
<url>/posts/82aa9d8e/</url>
<content><![CDATA[<p>Submission deadlines for machine-learning and data-mining conferences in 2021, organized by month.</p>
<span id="more"></span>
<h3 id="2021"><a href="#2021" class="headerlink" title="2021"></a>2021</h3><h4 id="2021-01"><a href="#2021-01" class="headerlink" title="2021-01"></a>2021-01</h4><ol>
<li>IJCNN 2021 (The International Joint Conference on Neural Networks), online, deadline: 2021-01-15 (extended to 2021-02-10) <a href="https://www.ijcnn.org/">https://www.ijcnn.org/</a></li>
<li>IJCAI 2021 (International Joint Conference on Artificial Intelligence), Montreal, Canada, deadline: 2021-01-20 (abstract deadline: 2021-01-13) <a href="https://ijcai-21.org/">https://ijcai-21.org/</a></li>
</ol>
<h4 id="2021-02"><a href="#2021-02" class="headerlink" title="2021-02"></a>2021-02</h4><ol>
<li>KDD 2021 (ACM SIGKDD Conference on Knowledge Discovery and Data Mining), Singapore, deadline: 2021-02-08 <a href="https://www.kdd.org/kdd2021/">https://www.kdd.org/kdd2021/</a></li>
</ol>
<h4 id="2021-04"><a href="#2021-04" class="headerlink" title="2021-04"></a>2021-04</h4><ol>
<li>CICAI 2021 (CAAI International Conference on Artificial Intelligence 2021), Hangzhou, China, deadline: 2021-04-18 <a href="https://cicai.caai.cn/">https://cicai.caai.cn/</a></li>
</ol>
<h4 id="2021-05"><a href="#2021-05" class="headerlink" title="2021-05"></a>2021-05</h4><ol>
<li>CCKS 2021 (China Conference on Knowledge Graph and Semantic Computing), Guangzhou, China, deadline: 2021-05-10 <a href="http://sigkg.cn/ccks2021/">http://sigkg.cn/ccks2021/</a></li>
<li>CIKM 2021 (Conference on Information and Knowledge Management), online, deadline: 2021-05-26 (abstract deadline: 2021-05-19) <a href="https://www.cikm2021.org/">https://www.cikm2021.org/</a></li>
</ol>
<h4 id="2021-06"><a href="#2021-06" class="headerlink" title="2021-06"></a>2021-06</h4><ol>
<li>ACM SIGSPATIAL 2021 (ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems), Beijing, China, deadline: 2021-06-10 (abstract deadline: 2021-05-27) <a href="https://sigspatial2021.sigspatial.org/">https://sigspatial2021.sigspatial.org/</a></li>
<li>ICDM 2021 (International Conference on Data Mining), Auckland, New Zealand, deadline: 2021-06-11 <a href="https://icdm2021.auckland.ac.nz/">https://icdm2021.auckland.ac.nz/</a></li>
<li>ISKE 2021 (International Conference on Intelligent Systems and Knowledge Engineering), Chengdu, China, deadline: 2021-06-30 (extended to 2021-07-31) <a href="http://iske2021.org/">http://iske2021.org/</a></li>
</ol>
<h4 id="2021-08"><a href="#2021-08" class="headerlink" title="2021-08"></a>2021-08</h4><ol>
<li>WSDM 2022 (Web Search and Data Mining), Arizona, USA, deadline: 2021-08-09 <a href="http://www.wsdm-conference.org/2022/">http://www.wsdm-conference.org/2022/</a></li>
</ol>
<h4 id="2021-09"><a href="#2021-09" class="headerlink" title="2021-09"></a>2021-09</h4><ol>
<li>AAAI 2022 (The Thirty-Sixth AAAI Conference on Artificial Intelligence), Vancouver, BC, Canada, deadline: 2021-09-08 (abstract deadline: 2021-08-30) <a href="https://aaai.org/Conferences/AAAI-22/">https://aaai.org/Conferences/AAAI-22/</a></li>
</ol>
<h4 id="2021-10"><a href="#2021-10" class="headerlink" title="2021-10"></a>2021-10</h4><ol>
<li>WWW 2022 (International World Wide Web Conference), Lyon, France, deadline: 2021-10-21 (abstract deadline: 2021-10-14) <a href="https://www2022.thewebconf.org/">https://www2022.thewebconf.org/</a></li>
<li>PAKDD 2022 (The 26th Pacific-Asia Conference on Knowledge Discovery and Data Mining), Chengdu, China, deadline: 2021-10-31 <a href="http://pakdd.net/">http://pakdd.net/</a></li>
</ol>
<h3 id="2022"><a href="#2022" class="headerlink" title="2022"></a>2022</h3><h4 id="2022-01"><a href="#2022-01" class="headerlink" title="2022-01"></a>2022-01</h4><ol>
<li>ICDM 2022 (IEEE International Conference on Data Mining), New York, USA, deadline: 2022-01-15 <a href="https://www.data-mining-forum.de/icdm2022.php">https://www.data-mining-forum.de/icdm2022.php</a></li>
</ol>
<h3 id="References"><a href="#References" class="headerlink" title="References"></a>References</h3><ol>
<li><a href="http://www.wikicfp.com/cfp/series?t=c&i=A">http://www.wikicfp.com/cfp/series?t=c&i=A</a></li>
</ol>
]]></content>
<categories>
<category>Paper Reading</category>
</categories>
<tags>
<tag>Machine Learning</tag>
<tag>Data Mining</tag>
<tag>Paper Submission</tag>
</tags>
</entry>
<entry>
<title>A Short Note on National Day 2022</title>
<url>/posts/9fb794da/</url>
<content><![CDATA[<p>While you are young: exercise, think, see the world, and grow.</p>
<span id="more"></span>
<p>Under the pandemic, keeping a good physical and mental state matters enormously: for handling pressure from every direction, for the changes that can happen anywhere at any time, and simply for living well. Today is the first day of the National Day holiday. Because of the pandemic I cannot travel, so I am staying in Chengdu, in the place where I usually live, reading, studying, taking walks, thinking, and chatting online.</p>
<p>I want to take this chance to write down some experience and lessons from my exercise routine, in the hope that they help anyone who would like to adjust their physical and mental state and feel a bit more comfortable in body and mind.</p>
<p>Since the pandemic began in 2020, almost three years have slipped by in a flash, yet the gloom and frustration it brought have lingered and are slow to disperse. I hope the day comes when we can chat happily face to face, greet each other warmly, and embrace.</p>
<p>From launching the "exercise while you are young" challenge on 2021-12-10 through 2022-09-29, I have kept the habit of exercising and logging it, and have now exercised for 130 days in total. I started out running on the track, then around campus, and later switched to equipment-free workouts indoors. My body has changed noticeably: my arms are firmer, my legs stronger, my energy better. In short: if conditions outdoors are suitable, exercise outdoors; if indoors suits better, exercise indoors. What matters most is to get moving, and to stay safe. I also wrote a Zhihu article about running (<a href="https://zhuanlan.zhih u.com/p/438670862">空气质量知多少,早上跑步好不好?</a>) for anyone interested, and for equipment-free training I recommend this book (<a href="https://book.douban.com/subject/11608712/">无器械健身</a>).</p>
<p>Sometimes, when I feel restless or bored, I turn to books, movies, and documentaries. Since the start of the year I have read about 11 books and watched 23 movies and 5 documentaries. When the body cannot travel far, letting the mind and heart voyage instead is a rare experience of its own: journeys across time and space, over rivers and mountains. If you do not know what to read, here is a list to pick from (<a href="https://xiepeng21.cn/2019-01-02/2019%E5%B9%B4%E4%B9%A6%E5%8D%95.html">2019 Reading List</a>). For classic films, the Douban Top 250 (<a href="https://movie.douban.com/top250">豆瓣电影Top250</a>) is full of high-quality movies worth watching. For documentaries, I compiled a list earlier (<a href="https://www.zhihu.com/question/534427413/answer/2505056045">documentary recommendations</a>); pick whatever interests you.</p>
<p>When I sleep poorly, I listen to light or sleep-aid music and do meditation practice (<a href="https://www.17mingxiang.com/">Now冥想</a>): returning to the present moment and emptying the mind helps a lot with easing stress and sleeping well.</p>
<p>On this first day of October, these are my scattered notes. Three months of the year remain; 2022 is already three-quarters gone. May we all live with peace of mind, work steadily, and, one small thing at a time, cultivate ourselves, stay healthy in body and mind, and grow toward the sun.</p>
<hr>
<p>Suggestions and questions are welcome; let us exchange ideas, learn, and grow together.</p>
<p>"问渠那得清如许?为有源头活水来" ("How does the canal stay so clear? Living water keeps flowing from its source.") ヾ(◍°∇°◍)ノ゙</p>
<p>– I will be waiting for you at the half-acre square pond ^_^</p>
]]></content>
<categories>
<category>Essays</category>
</categories>
<tags>
<tag>Reading</tag>
<tag>Exercise while you are young</tag>
<tag>National Day</tag>
<tag>Movies</tag>
<tag>Documentaries</tag>
</tags>
</entry>
<entry>
<title>[Paper Submission 2022] Submission Deadlines for Top 2022 AI and Data Mining Conferences</title>
<url>/posts/7c9718ae/</url>
<content><![CDATA[<p>The first two months of 2022 slipped away between Spring Festival, revising papers, and editing a grant proposal. It is Saturday, so I am taking the time to compile the submission deadlines for the top artificial-intelligence and data-mining conferences of 2022.</p>
<span id="more"></span>
<table>
<thead>
<tr>
<th><strong>Year</strong></th>
<th><strong>Conference</strong></th>
<th><strong>Link</strong></th>
<th><strong>Deadline</strong></th>
</tr>
</thead>
<tbody><tr>
<td>2022</td>
<td>WSDM 2022</td>
<td><a href="http://www.wsdm-conference.org/2022/">http://www.wsdm-conference.org/2022/</a></td>
<td>2021-08-13</td>
</tr>
<tr>
<td></td>
<td>AAAI 2022</td>
<td><a href="https://aaai.org/Conferences/AAAI-22/aaai22call/">https://aaai.org/Conferences/AAAI-22/aaai22call/</a></td>
<td>2021-09-08</td>
</tr>
<tr>
<td></td>
<td>ICLR 2022</td>
<td><a href="https://iclr.cc/Conferences/2022/CallForPapers">https://iclr.cc/Conferences/2022/CallForPapers</a></td>
<td>2021-10-05</td>
</tr>
<tr>
<td></td>
<td>SDM 2022</td>
<td><a href="https://www.siam.org/conferences/cm/conference/sdm22">https://www.siam.org/conferences/cm/conference/sdm22</a></td>
<td>2021-10-12</td>
</tr>
<tr>
<td></td>
<td>WWW 2022</td>
<td><a href="https://www2022.thewebconf.org/">https://www2022.thewebconf.org/</a></td>
<td>2021-10-21</td>
</tr>
<tr>
<td></td>
<td>IJCAI 2022</td>
<td><a href="https://ijcai-22.org/calls-papers/">https://ijcai-22.org/calls-papers/</a></td>
<td>2022-01-14</td>
</tr>
<tr>
<td></td>
<td>ICML 2022</td>
<td><a href="https://icml.cc/Conferences/2022/CallForPapers">https://icml.cc/Conferences/2022/CallForPapers</a></td>
<td>2022-02-03</td>
</tr>
<tr>
<td></td>
<td>KDD 2022</td>
<td><a href="https://kdd.org/kdd2022/cfpResearch.html">https://kdd.org/kdd2022/cfpResearch.html</a></td>
<td>2022-02-10</td>
</tr>
<tr>
<td></td>
<td>CIKM 2022</td>
<td><a href="https://www.cikm2022.org/calls">https://www.cikm2022.org/calls</a></td>
<td>2022-05-16</td>
</tr>
<tr>
<td></td>
<td>NeurIPS 2022</td>
<td><a href="https://neurips.cc/Conferences/2022/CallForPapers">https://neurips.cc/Conferences/2022/CallForPapers</a></td>
<td>2022-05-19</td>
</tr>
<tr>
<td></td>
<td>ICDM 2022</td>
<td><a href="https://icdm22.cse.usf.edu/calls/Papers.html">https://icdm22.cse.usf.edu/calls/Papers.html</a></td>
<td>2022-06-10</td>
</tr>
<tr>
<td></td>
<td>SIGSPATIAL 2022</td>
<td><a href="https://sigspatial2022.sigspatial.org/">https://sigspatial2022.sigspatial.org/</a></td>
<td>2022-06-10</td>
</tr>
</tbody></table>
<p><strong>More:</strong> If you want to find more conferences information, you can visit the website (<a href="http://www.wikicfp.com/cfp/series?t=c&i=A">http://www.wikicfp.com/cfp/series?t=c&i=A</a>).</p>
]]></content>
<categories>
<category>Paper Reading</category>
</categories>
<tags>
<tag>Data Mining</tag>
<tag>Paper Submission</tag>
<tag>Artificial Intelligence</tag>
</tags>
</entry>
<entry>
<title>How to Compute the Image Size After a CNN Convolution</title>
<url>/posts/fb776320/</url>
<content><![CDATA[<p>Given an image like the one below: how does a 32×32 input image produce 28×28 feature maps after one convolution?</p>
<span id="more"></span>
<p><img src="https://i.loli.net/2018/12/23/5c1fac8bbfa94.png" alt="Screenshot_2018-09-29-15-31-03-61.png"></p>
<p>Let the image before convolution be n×n, the filter f×f, the padding p, and the stride s. Then the image after convolution is m×m, where:<br>m = (n + 2p - f)/s + 1 (if the quotient is not an integer, round down).</p>
<p>(32 + 0 - 5)/1 + 1 = 28, so the input image above yields 28×28 feature maps after one convolution.</p>
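<p>As a quick sanity check, the formula above can be wrapped in a few lines of Python (the function name conv_output_size is just for illustration):</p>

```python
def conv_output_size(n, f, p=0, s=1):
    """Side length of a square conv output: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# The example from the post: 32x32 input, 5x5 filter, no padding, stride 1.
print(conv_output_size(32, 5))       # 28
# With padding 2, a 5x5 filter preserves the spatial size ("same" convolution).
print(conv_output_size(32, 5, p=2))  # 32
```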
]]></content>
<categories>
<category>Machine Learning</category>
</categories>
<tags>
<tag>Deep Learning</tag>
<tag>CNN</tag>
</tags>
</entry>
<entry>
<title>Common Hexo Commands</title>
<url>/posts/24caea6b/</url>
<content><![CDATA[<p>Hexo has 13 basic commands, but when publishing posts only the following few come up regularly:</p>
<span id="more"></span>
<h2 id="Common-commands"><a href="#Common-commands" class="headerlink" title="Common commands (in the order used when publishing a post)"></a>Common commands (in the order used when publishing a post)</h2><figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">hexo new "hello"<br></code></pre></td></tr></table></figure>
<p>Creates a new post titled "hello". The post can be found under \Hexo\source\_posts. Can be abbreviated to hexo n.</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">hexo g<br></code></pre></td></tr></table></figure>
<p>Generates the site's static files into \Hexo\public, where you can inspect them. Short for hexo generate. If you minify the blog with gulp, the combined command "hexo g && gulp" compresses the generated files and speeds up page loads.</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">hexo s<br></code></pre></td></tr></table></figure>
<p>Starts a local server to preview the site; the default address is <a href="http://localhost:4000/">http://localhost:4000/</a>. Short for hexo server. While previewing you can edit post content or theme code and simply refresh the page after saving; changes to the root _config.yml, however, only take effect after rerunning hexo s.</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">hexo d<br></code></pre></td></tr></table></figure>
<p>Deploys the local blog files to the configured GitHub repository; after this step the blog is reachable from the public internet. Short for hexo deploy.</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">hexo cl<br></code></pre></td></tr></table></figure>
<p>Removes the cache file db.json and the generated static files in public. Typically used when the site behaves oddly. Short for hexo clean.</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">hexo new page "mypage"<br></code></pre></td></tr></table></figure>
<p>Creates a new page titled "mypage", served by default at the blog root under /mypage/. Pages do not appear in the home-page post list or the archive, and do not support categories or tags.</p>
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="http://moxfive.xyz/2015/12/21/common-hexo-commands/#comments">http://moxfive.xyz/2015/12/21/common-hexo-commands/#comments</a></li>
<li><a href="https://leaferx.online/2017/06/16/use-gulp-to-minimize/">https://leaferx.online/2017/06/16/use-gulp-to-minimize/</a></li>
</ol>
]]></content>
<categories>
<category>Tips and Tricks</category>
</categories>
<tags>
<tag>hexo commands</tag>
</tags>
</entry>
<entry>
<title>OneDrive Stuck Spinning at Sign-in</title>
<url>/posts/7340ecaf/</url>
<content><![CDATA[<p>This morning OneDrive suddenly got stuck on a spinner at sign-in and would not log in. I tried several fixes and looked through some online resources; here is a summary.</p>
<span id="more"></span>
<ol>
<li>Switch to a different network, e.g. a phone hotspot. Some users report that sign-in failed on China Telecom, which happened to be my network.</li>
<li>Adjust the network settings. Under "Change adapter options", open the current connection's Properties and untick "Internet Protocol Version 6". Then press Win+R, type cmd to open a command window, and run "ipconfig /flushdns".</li>
<li>Turn off any proxy or VPN software.</li>
</ol>
<p>After going through these three steps, OneDrive signed in normally for me. If you cannot sign in to OneDrive and none of the above works, or you have found another fix, feel free to leave a comment.</p>
]]></content>
<categories>
<category>Tips and Tricks</category>
</categories>
<tags>
<tag>OneDrive</tag>
</tags>
</entry>
<entry>
<title>L1 and L2 Regularization</title>
<url>/posts/a02b6105/</url>
<content><![CDATA[<h5 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h5><p>An earlier post covered <a href="http://xiepeng21.cn/%E3%80%90%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E3%80%91%E9%98%B2%E6%AD%A2%E8%BF%87%E6%8B%9F%E5%90%88%E7%9A%84%E6%96%B9%E6%B3%95.html">common ways to prevent overfitting</a>; this one focuses on L1 and L2 regularization and compares how they differ.</p>
<span id="more"></span>
<p>The purpose of using L1 or L2 regularization is to mitigate overfitting in machine learning.</p>
<p>Why do they help? The added regularization term shrinks the values in the weight matrix, on the assumption that a neural network with a smaller weight matrix yields a simpler model, which in turn reduces overfitting to some extent.</p>
<p>L1 and L2 use different regularization terms.</p>
<p>In L2, λ is the regularization parameter, a hyperparameter that can be tuned for better results. L2 regularization is also called weight decay, because it decays the weights toward zero (without making them exactly zero).</p>
<p><img src="https://i.loli.net/2018/11/07/5be2dc708e5c7.png" alt="1.png"></p>
<p>In L1, we penalize the absolute values of the weights, and here the weights can shrink all the way to zero. L1 is therefore useful when we want to compress the model; in most other cases, L2 is preferred.</p>
<p><img src="https://i.loli.net/2018/11/07/5be2ddd4db9af.png" alt="2.png"></p>
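<p>To make the two penalty terms concrete, here is a minimal pure-Python sketch (the helper name penalties is my own, not from the referenced article): for weights w, the L1 term is λ·Σ|w| and the L2 term is λ·Σw².</p>

```python
def penalties(w, lam=0.01):
    """Return the (L1, L2) regularization terms for a list of weights."""
    l1 = lam * sum(abs(x) for x in w)  # absolute values: can drive weights to exactly 0
    l2 = lam * sum(x * x for x in w)   # squares: shrinks weights toward (not to) 0
    return l1, l2

w = [0.5, -0.25, 0.0, 1.0]
l1, l2 = penalties(w, lam=0.1)
print(round(l1, 6))  # 0.175
print(round(l2, 6))  # 0.13125
# The regularized loss is then data_loss + l1 (or data_loss + l2).
```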
<h5 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h5><ol>
<li><a href="http://xiepeng21.cn/%E3%80%90%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E3%80%91%E9%98%B2%E6%AD%A2%E8%BF%87%E6%8B%9F%E5%90%88%E7%9A%84%E6%96%B9%E6%B3%95.html">http://xiepeng21.cn/%E3%80%90%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E3%80%91%E9%98%B2%E6%AD%A2%E8%BF%87%E6%8B%9F%E5%90%88%E7%9A%84%E6%96%B9%E6%B3%95.html</a> [Machine Learning] Methods to prevent overfitting</li>
<li><a href="https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/">https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/</a> An Overview of Regularization Techniques in Deep Learning (with Python code)</li>
</ol>
]]></content>
<categories>
<category>Machine Learning</category>
</categories>
<tags>
<tag>Machine Learning</tag>
<tag>Regularization</tag>
</tags>
</entry>
<entry>
<title>Longest Substring Without Repeating Characters</title>
<url>/posts/8e8877f3/</url>
<content><![CDATA[<h5 id="Problem-Description"><a href="#Problem-Description" class="headerlink" title="Problem Description"></a>Problem Description</h5><p>Given a string, find the length of the longest substring without repeating characters.</p>
<span id="more"></span>
<h5 id="Examples"><a href="#Examples" class="headerlink" title="Examples"></a>Examples</h5><ul>
<li>Example 1:</li>
</ul>
<p>Input: “abcabcbb”<br>Output: 3<br>Explanation: The answer is “abc”, with the length of 3. </p>
<ul>
<li>Example 2:</li>
</ul>
<p>Input: “bbbbb”<br>Output: 1<br>Explanation: The answer is “b”, with the length of 1.</p>
<ul>
<li>Example 3:</li>
</ul>
<p>Input: “pwwkew”<br>Output: 3<br>Explanation: The answer is “wke”, with the length of 3.<br> Note that the answer must be a substring, “pwke” is a subsequence and not a substring. </p>
<h5 id="Approach"><a href="#Approach" class="headerlink" title="Approach"></a>Approach</h5><p>The problem reduces to finding the longest substring of distinct characters and returning its length. The first method uses enumerate and a dictionary; it feels a little convoluted, and I have not fully worked through it. The second method uses the string split() method, which is clever but hard to think of. Of the two, the second runs slightly faster (84 ms vs. 88 ms), so it is the one I recommend; readers who are interested can study the first.</p>
<p>Method 1:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><code class="hljs python">class Solution(object):<br>    def lengthOfLongestSubstring(self, s):<br>        dic, res, start = {}, 0, 0<br>        for i, ch in enumerate(s):<br>            # when the char is already in the dictionary<br>            if ch in dic:<br>                # check the length from the start index to the current index<br>                res = max(res, i - start)<br>                # move the start index past the previous occurrence<br>                start = max(start, dic[ch] + 1)<br>            # add/update the char's latest index<br>            dic[ch] = i<br>        # the answer ends either mid-string or at the end of the string<br>        return max(res, len(s) - start)<br></code></pre></td></tr></table></figure>
<p>Method 2:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><code class="hljs python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">Solution</span>:<br> <span class="hljs-keyword">def</span> <span class="hljs-title function_">lengthOfLongestSubstring</span>(<span class="hljs-params">self, s</span>):<br> <span class="hljs-string">"""</span><br><span class="hljs-string"> :type s: str</span><br><span class="hljs-string"> :rtype: int</span><br><span class="hljs-string"> """</span><br> maxnum,num,ss =<span class="hljs-number">0</span>,<span class="hljs-number">0</span>,<span class="hljs-string">''</span><br> <span class="hljs-keyword">for</span> each <span class="hljs-keyword">in</span> s:<br> <span class="hljs-keyword">if</span> each <span class="hljs-keyword">in</span> ss:<br> ss = ss.split(each)[-<span class="hljs-number">1</span>]+each <span class="hljs-comment"># ‘’+each</span><br> num =<span class="hljs-built_in">len</span>(ss)<br> <span class="hljs-keyword">else</span>:<br> num += <span class="hljs-number">1</span><br> ss += each<br> <span class="hljs-keyword">if</span> num>=maxnum:<br> maxnum = num<br> <span class="hljs-keyword">return</span> maxnum<br></code></pre></td></tr></table></figure>
<h5 id="References"><a href="#References" class="headerlink" title="References"></a>References</h5><ol>
<li><a href="https://www.w3schools.com/python/ref_string_split.asp">https://www.w3schools.com/python/ref_string_split.asp</a> Python String split() Method</li>
<li><a href="https://leetcode.com/problems/longest-substring-without-repeating-characters/">https://leetcode.com/problems/longest-substring-without-repeating-characters/</a> longest-substring-without-repeating-characters</li>
</ol>
]]></content>
<categories>
<category>Algorithms</category>
</categories>
<tags>
<tag>Algorithms</tag>
<tag>python</tag>
</tags>
</entry>
<entry>
<title>Two Sum</title>
<url>/posts/e83060be/</url>
<content><![CDATA[<h5 id="Problem-Description"><a href="#Problem-Description" class="headerlink" title="Problem Description"></a>Problem Description</h5><p>Given an array of integers, return indices of the two numbers such that they add up to a specific target.<br>You may assume that each input has exactly one solution, and you may not use the same element twice.</p>
<span id="more"></span>
<h5 id="示例"><a href="#示例" class="headerlink" title="示例"></a>Example</h5><p>Given nums = [2, 7, 11, 15], target = 9,<br>Because nums[0] + nums[1] = 2 + 7 = 9,<br>return [0, 1].</p>
<h5 id="算法思路"><a href="#算法思路" class="headerlink" title="算法思路"></a>Approach</h5><p>The obvious approach uses two nested loops and compares the sum of every pair of elements against the target, but that is O(n^2) and exceeds the time limit. A better approach uses a dictionary together with enumerate: the dictionary stores each value already visited along with its index, while enumerate walks the list once, giving O(n) time. The second method is therefore recommended.</p>
<h5 id="代码实现(python3)"><a href="#代码实现(python3)" class="headerlink" title="代码实现(python3)"></a>Implementation (Python 3)</h5><p>Method 1:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><code class="hljs python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">Solution</span>:<br>    <span class="hljs-keyword">def</span> <span class="hljs-title function_">twoSum</span>(<span class="hljs-params">self, nums, target</span>):<br>        <span class="hljs-string">"""</span><br><span class="hljs-string">        :type nums: List[int]</span><br><span class="hljs-string">        :type target: int</span><br><span class="hljs-string">        :rtype: List[int]</span><br><span class="hljs-string">        """</span><br>        l = <span class="hljs-built_in">len</span>(nums)<br>        <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(l):<br>            <span class="hljs-keyword">for</span> j <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(i + <span class="hljs-number">1</span>, l):<br>                <span class="hljs-keyword">if</span> nums[i] + nums[j] == target:<br>                    <span class="hljs-keyword">return</span> [i, j]<br>        <span class="hljs-keyword">return</span> []<br></code></pre></td></tr></table></figure>
<p>Method 2:</p>
<figure class="highlight python"><table><tr><td class="code"><pre><code class="hljs python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">Solution</span>:<br>    <span class="hljs-keyword">def</span> <span class="hljs-title function_">twoSum</span>(<span class="hljs-params">self, nums, target</span>):<br>        <span class="hljs-string">"""</span><br><span class="hljs-string">        :type nums: List[int]</span><br><span class="hljs-string">        :type target: int</span><br><span class="hljs-string">        :rtype: List[int]</span><br><span class="hljs-string">        """</span><br>        collected = {}<br>        <span class="hljs-keyword">for</span> i, x <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(nums):<br>            diff = target - x<br>            <span class="hljs-keyword">if</span> diff <span class="hljs-keyword">in</span> collected:<br>                <span class="hljs-keyword">return</span> [collected[diff], i]<br>            collected[x] = i<br></code></pre></td></tr></table></figure>
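<p>The dictionary method can be checked against the example from the problem statement; this standalone sketch (the snake_case function name is ours) mirrors Method 2:</p>

```python
def two_sum(nums, target):
    # value -> index of the elements seen so far; single pass, O(n)
    collected = {}
    for i, x in enumerate(nums):
        diff = target - x
        if diff in collected:
            return [collected[diff], i]
        collected[x] = i
    return []

print(two_sum([2, 7, 11, 15], 9))  # nums[0] + nums[1] == 2 + 7 == 9 -> [0, 1]
```

<p>Because dictionary lookups are expected O(1), remembering every visited value costs almost nothing compared with the O(n^2) pairwise scan.</p>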
<h5 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>References</h5><ol>
<li><a href="https://www.tutorialspoint.com/python/python_dictionary.htm">https://www.tutorialspoint.com/python/python_dictionary.htm</a> Python - Dictionary</li>
<li><a href="https://www.geeksforgeeks.org/enumerate-in-python/">https://www.geeksforgeeks.org/enumerate-in-python/</a> Enumerate() in Python</li>
<li><a href="https://leetcode.com/problems/two-sum/">https://leetcode.com/problems/two-sum/</a> two-sum</li>
</ol>
]]></content>
<categories>
<category>算法</category>
</categories>
<tags>
<tag>算法</tag>
<tag>python</tag>
</tags>
</entry>
<entry>
<title>Building a Hexo Blog and Syncing It Across Multiple Computers</title>
<url>/posts/53375297/</url>
<content><![CDATA[<p>I previously wrote a tutorial on building a blog with Hexo + GitHub. The overall workflow is: set up the blog locally, deploy it to GitHub, and configure a custom domain for access. For details, see my earlier post (<a href="https://www.jianshu.com/p/058d4054bc3f">Hexo personal blog setup tutorial</a>). Recently I ran into the situation of drafting a post on my lab computer but wanting to publish it from my dorm computer, which raises the problem of syncing the blog across multiple machines. The solution is recorded below.</p>
<span id="more"></span>
<h5 id="方案一"><a href="#方案一" class="headerlink" title="方案一"></a>Option 1</h5><p>Back up the hexo folder with a cloud drive such as Baidu Cloud or OneDrive. This is quick and convenient, but it only serves as a backup: it is awkward for publishing from multiple machines, because after every change you must wait for the cloud drive to finish syncing before you can carry on with the remaining publishing steps.</p>
<h5 id="方案二"><a href="#方案二" class="headerlink" title="方案二"></a>Option 2</h5><p>Keep the Hexo source files on coding.net and the generated static pages on GitHub. This approach supports real multi-computer synchronization.</p>
<h6 id="实验室电脑"><a href="#实验室电脑" class="headerlink" title="实验室电脑"></a>Lab computer</h6><ol>
<li>Register a coding.net account (<a href="https://coding.net/login">sign up</a>) and create a new private project hexo_blog to hold the blog source files.</li>
<li>Add the remote repository and push the source to hexo_blog.<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">git remote add origin https://git.coding.net/xiepeng21/hexo_blog.git<br>git add .<br>git commit -m <span class="hljs-string">"backup_v1"</span><br>git push -u origin master<br></code></pre></td></tr></table></figure></li>
</ol>
<h6 id="寝室电脑"><a href="#寝室电脑" class="headerlink" title="寝室电脑"></a>Dorm computer</h6><ol>
<li><p>Install Node.js, Git, and Hexo; the details are not repeated here and can be found in the tutorial linked at the top (<a href="https://www.jianshu.com/p/058d4054bc3f">Hexo personal blog setup tutorial</a>).</p>
</li>
<li><p>Clone the remote hexo_blog repository to the local machine</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">git clone https://git.coding.net/xiepeng21/hexo_blog.git<br></code></pre></td></tr></table></figure>
</li>
<li><p>Install the blog's dependencies (read from the package.json configuration)</p>
<figure class="highlight shell"><table><tr><td class="code"><pre><code class="hljs shell">npm install<br></code></pre></td></tr></table></figure>
</li>
<li><p>You can now publish blog posts from this computer as well.</p>
</li>
</ol>
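<p>Once both machines are set up, the day-to-day routine is the same on either of them; the commands below are a sketch (the commit message and branch name are placeholders, adjust them to your repository):</p>

```shell
# before writing, fetch the latest blog source
git pull origin master

# ... write or edit posts under source/_posts ...

# rebuild the static site and publish it to GitHub
hexo clean && hexo generate
hexo deploy

# back up the updated source to coding.net for the other machine
git add .
git commit -m "update posts"
git push origin master
```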
<p>Note: if you run into other problems, work through the blog-setup workflow in the tutorial to track down a fix (<a href="https://www.jianshu.com/p/058d4054bc3f">Hexo personal blog setup tutorial</a>).</p>
<h5 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>References</h5><ol>
<li><a href="http://chenfengkg.cn/personal-blog-build/">http://chenfengkg.cn/personal-blog-build/</a> Blog series part 1: building the blog</li>
<li><a href="https://godweiyang.com/2018/04/13/hexo-blog/">https://godweiyang.com/2018/04/13/hexo-blog/</a> A very detailed beginner's guide to building a Hexo + GitHub blog</li>
<li><a href="https://www.zybuluo.com/mdeditor">https://www.zybuluo.com/mdeditor</a> Zybuluo (online Markdown editor)</li>
</ol>
]]></content>
<categories>
<category>经验技巧</category>
</categories>
<tags>
<tag>博客</tag>
<tag>Hexo</tag>
</tags>
</entry>
<entry>
<title>[AAAI 2022] Spatio-Temporal Data Mining Papers</title>
<url>/posts/ab9afbeb/</url>
<content><![CDATA[<p>This post goes through the accepted papers of AAAI 2022, selects those related to spatio-temporal data mining, and groups them by task.</p>
<span id="more"></span>
<p>AAAI 2022 accepted-paper list:<br><a href="https://aaai.org/Conferences/AAAI-22/wp-content/uploads/2021/12/AAAI-22_Accepted_Paper_List_Main_Technical_Track.pdf">https://aaai.org/Conferences/AAAI-22/wp-content/uploads/2021/12/AAAI-22_Accepted_Paper_List_Main_Technical_Track.pdf</a></p>
<p>The full texts have not all been released yet; if you need one, search Google Scholar, as some of the papers are already on arXiv.</p>
<h3 id="Traffic-forecasting"><a href="#Traffic-forecasting" class="headerlink" title="Traffic forecasting"></a>Traffic forecasting</h3><ol>
<li><strong>STDEN: Towards Physics-Guided Neural Networks for Traffic Flow Prediction</strong>. Jiahao Ji, Jingyuan Wang, Zhe Jiang, Jiawei Jiang, Hu Zhang</li>
<li><strong>Graph Neural Controlled Differential Equations for Traffic Forecasting</strong>. Jeongwhan Choi, Hwangyong Choi, Jeehyun Hwang, Noseong Park</li>
</ol>
<h3 id="Traffic-assignment"><a href="#Traffic-assignment" class="headerlink" title="Traffic assignment"></a>Traffic assignment</h3><ol>
<li><strong>Machine-Learned Prediction Equilibrium for Dynamic Traffic Assignment</strong>. Lukas Graf, Tobias Harks, Kostas Kollias, Michael Markl</li>
</ol>
<h3 id="Trajectory-prediction"><a href="#Trajectory-prediction" class="headerlink" title="Trajectory prediction"></a>Trajectory prediction</h3><ol>
<li><strong>Social Interpretable Tree for Pedestrian Trajectory Prediction</strong>. Liushuai Shi, Le Wang, Chengjiang Long, Sanping Zhou, Fang Zheng, Nanning Zheng, Gang Hua</li>
<li><strong>Complementary Attention Gated Network for Pedestrian Trajectory Prediction</strong>. Jinghai Duan, Le Wang, Chengjiang Long, Sanping Zhou, Fang Zheng, Liushuai Shi, Gang Hua</li>
</ol>
<h3 id="Meteorological-forecasting"><a href="#Meteorological-forecasting" class="headerlink" title="Meteorological forecasting"></a>Meteorological forecasting</h3><ol>
<li><strong>Conditional Local Convolution for Spatio-Temporal Meteorological Forecasting</strong> Haitao Lin, Zhangyang Gao, Yongjie Xu, Lirong Wu, Ling Li, Stan Z. Li</li>
<li><strong>Learning and Dynamical Models for Sub-Seasonal Climate Forecasting: Comparison and Collaboration</strong>. Sijie He, Xinyan Li, Laurie Trenary, Benjamin A. Cash, Timothy DelSole, Arindam Banerjee</li>
</ol>
<h3 id="Driver-request-assignment"><a href="#Driver-request-assignment" class="headerlink" title="Driver-request assignment"></a>Driver-request assignment</h3><ol>
<li><strong>Real-Time Driver-Request Assignment in Ridesourcing</strong>. Hao Wang, Xiaohui Bei</li>
</ol>
<h3 id="Time-series-anomaly-detection"><a href="#Time-series-anomaly-detection" class="headerlink" title="Time-series anomaly detection"></a>Time-series anomaly detection</h3><ol>
<li><strong>Towards a Rigorous Evaluation of Time-Series Anomaly Detection</strong>. Siwon Kim, Kukjin Choi, Hyun-Soo Choi, Byunghan Lee, Sungroh Yoon</li>
</ol>
<h3 id="Spatiotemporal-Graph"><a href="#Spatiotemporal-Graph" class="headerlink" title="Spatiotemporal Graph"></a>Spatiotemporal Graph</h3><ol>
<li><strong>Disentangled Spatiotemporal Graph Generative Model</strong>. Yuanqi Du, Xiaojie Guo, Hengning Cao, Yanfang Ye, Zhao Liang</li>
</ol>
<h3 id="Urban-crime-prediction"><a href="#Urban-crime-prediction" class="headerlink" title="Urban crime prediction"></a>Urban crime prediction</h3><ol>
<li><strong>Multi-Type Urban Crime Prediction</strong>. Xiangyu Zhao, Wenqi Fan, Hui Liu, Jiliang Tang</li>
<li><strong>HAGEN: Homophily-Aware Graph Convolutional Recurrent Network for Crime Forecasting</strong>. Chenyu Wang, Zongyu Lin, Xiaochen Yang, Jiao Sun, Mingxuan Yue, Cyrus Shahabi</li>
</ol>
<h3 id="Time-series-forecasting"><a href="#Time-series-forecasting" class="headerlink" title="Time series forecasting"></a>Time series forecasting</h3><ol>
<li><strong>CATN: Cross Attentive Tree-Aware Network for Multivariate Time Series Forecasting</strong>. Hui He, Qi Zhang, Simeng Bai, Kun Yi, Zhendong Niu</li>
<li><strong>Reinforcement Learning based Dynamic Model Combination for Time Series Forecasting</strong>. Yuwei Fu, Di Wu, Benoit Boulet</li>
</ol>
<h3 id="Spatio-temporal-patterns-modeling"><a href="#Spatio-temporal-patterns-modeling" class="headerlink" title="Spatio-temporal patterns modeling"></a>Spatio-temporal patterns modeling</h3><ol>
<li><strong>SPATE-GAN: Improved Generative Modeling of Dynamic Spatio-Temporal Patterns with an Autoregressive Embedding Loss</strong>. Konstantin Klemmer, Tianlin Xu, Beatrice Acciaio, Daniel B. Neill</li>
</ol>
<h3 id="Time-series-representation"><a href="#Time-series-representation" class="headerlink" title="Time series representation"></a>Time series representation</h3><ol>
<li><strong>TS2Vec: Towards Universal Representation of Time Series</strong>. Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, Bixiong Xu</li>
</ol>
<h3 id="Extreme-events-modeling"><a href="#Extreme-events-modeling" class="headerlink" title="Extreme events modeling"></a>Extreme events modeling</h3><ol>
<li><strong>DeepGPD: A Deep Learning Approach for Modeling Geospatio-Temporal Extreme Events</strong>. Tyler Wilson, Pang-Ning Tan, Lifeng Luo</li>
</ol>
<h3 id="Multimodal-mobility-nowcasting"><a href="#Multimodal-mobility-nowcasting" class="headerlink" title="Multimodal mobility nowcasting"></a>Multimodal mobility nowcasting</h3><ol>
<li><strong>Event-Aware Multimodal Mobility Nowcasting</strong>. Zhaonan Wang, Renhe Jiang, Hao Xue, Flora D. Salim, Xuan Song, Ryosuke Shibasaki</li>
</ol>
<h3 id="Time-series-analysis-and-embedding"><a href="#Time-series-analysis-and-embedding" class="headerlink" title="Time series analysis and embedding"></a>Time series analysis and embedding</h3><ol>
<li><strong>I-SEA: Importance Sampling and Expected Alignment-Based Deep Distance Metric Learning for Time Series Analysis and Embedding</strong>. Sirisha Rambhatla, Zhengping Che, Yan Liu</li>
</ol>
<h3 id="Time-series-generation"><a href="#Time-series-generation" class="headerlink" title="Time series generation"></a>Time series generation</h3><ol>
<li><strong>Conditional Loss and Deep Euler Scheme for Time Series Generation</strong>. Carl Remlinger, Joseph Mikael, Romuald Elie</li>
</ol>
<h3 id="Air-pollution-monitoring"><a href="#Air-pollution-monitoring" class="headerlink" title="Air pollution monitoring"></a>Air pollution monitoring</h3><ol>
<li><strong>Bayesian Optimisation for Active Monitoring of Air Pollution</strong>. Sigrid Passano Hellan, Christopher G. Lucas, Nigel H. Goddard</li>
</ol>
<h3 id="Air-quality-inference"><a href="#Air-quality-inference" class="headerlink" title="Air quality inference"></a>Air quality inference</h3><ol>
<li><strong>Accurate and Scalable Gaussian Processes for Fine-Grained Air Quality Inference</strong>. Zeel B Patel, Palak Purohit, Harsh Patel, Shivam Sahni, Nipun Batra</li>
</ol>
<h3 id="Epidemic-forecasting"><a href="#Epidemic-forecasting" class="headerlink" title="Epidemic forecasting"></a>Epidemic forecasting</h3><ol>
<li><strong>CausalGNN: Causal-Based Graph Neural Networks for Spatio-Temporal Epidemic Forecasting</strong>. Lijing Wang, Aniruddha Adiga, Jiangzhuo Chen, Adam Sadilek, Srinivasan Venkatramanan, Madhav Marathe</li>
</ol>
]]></content>
<categories>
<category>论文阅读</category>
</categories>
<tags>
<tag>时空数据挖掘</tag>
<tag>AAAI 2022</tag>
</tags>
</entry>
<entry>
<title>[ICLR 2022] Spatio-Temporal Data Mining Papers</title>
<url>/posts/38a6ca75/</url>
<content><![CDATA[<p>This post goes through the accepted papers of ICLR 2022, selects those related to spatio-temporal data mining, and groups them by task.</p>
<span id="more"></span>
<p>Full ICLR 2022 accepted-paper list: <a href="https://openreview.net/group?id=ICLR.cc/2022/Conference#oral-submissions">https://openreview.net/group?id=ICLR.cc/2022/Conference#oral-submissions</a></p>
<h3 id="Time-series-signal-analysis"><a href="#Time-series-signal-analysis" class="headerlink" title="Time series signal analysis"></a>Time series signal analysis</h3><ol>
<li><a href="https://openreview.net/forum?id=U4uFaLyg7PV">T-WaveNet: A Tree-Structured Wavelet Neural Network for Time Series Signal Analysis</a>. Minhao LIU, Ailing Zeng, Qiuxia LAI, Ruiyuan Gao, Min Li, Jing Qin, Qiang Xu</li>
</ol>
<h3 id="Time-series-forecasting"><a href="#Time-series-forecasting" class="headerlink" title="Time series forecasting"></a>Time series forecasting</h3><ol>
<li><a href="https://openreview.net/forum?id=JpNH4CW_zl">Multivariate Time Series Forecasting with Latent Graph Inference</a>. Victor Garcia Satorras, Syama Sundar Rangapuram, Tim Januschowski</li>
<li><a href="https://openreview.net/forum?id=wv6g8fWLX2q">TAMP-S2GCNets: Coupling Time-Aware Multipersistence Knowledge Representation with Spatio-Supra Graph Convolutional Networks for Time-Series Forecasting</a>. Yuzhou Chen, Ignacio Segovia-Dominguez, Baris Coskunuzer, Yulia Gel</li>
<li><a href="https://openreview.net/forum?id=0EXmFzUn5I">Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting</a>. Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, Schahram Dustdar</li>
<li><a href="https://openreview.net/forum?id=AJAR-JgNw__">DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting</a>. Wei Fan, Shun Zheng, Xiaohan Yi, Wei Cao, Yanjie Fu, Jiang Bian, Tie-Yan Liu</li>
<li><a href="https://openreview.net/forum?id=PilZY3omXV2">CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting</a>. Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, Steven Hoi</li>
<li><a href="https://openreview.net/forum?id=cGDAkQo1C0p">Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift</a>. Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, Jaegul Choo</li>
</ol>
<h3 id="Time-series-classification"><a href="#Time-series-classification" class="headerlink" title="Time series classification"></a>Time series classification</h3><ol>
<li><a href="https://openreview.net/forum?id=PDYs7Z2XFGv">Omni-Scale CNNs: a simple and effective kernel size configuration for time series classification</a>. Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang</li>
</ol>
<h3 id="Time-series-anomaly-detection"><a href="#Time-series-anomaly-detection" class="headerlink" title="Time series anomaly detection"></a>Time series anomaly detection</h3><ol>
<li><a href="https://openreview.net/forum?id=45L_dgP48Vd">Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series</a>. Enyan Dai, Jie Chen</li>
<li><a href="https://openreview.net/forum?id=LzQQ89U1qm_">Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy</a>. Jiehui Xu, Haixu Wu, Jianmin Wang, Mingsheng Long</li>
</ol>
<h3 id="Time-series-imputation"><a href="#Time-series-imputation" class="headerlink" title="Time series imputation"></a>Time series imputation</h3><ol>
<li><a href="https://openreview.net/forum?id=kOu3-S3wJ7">Filling the G_ap_s: Multivariate Time Series Imputation by Graph Neural Networks</a>. Andrea Cini, Ivan Marisca, Cesare Alippi</li>
</ol>
<h3 id="Spatial-Temporal-Representation-Learning"><a href="#Spatial-Temporal-Representation-Learning" class="headerlink" title="Spatial-Temporal Representation Learning"></a>Spatial-Temporal Representation Learning</h3><ol>
<li><a href="https://openreview.net/forum?id=nBU_u6DLvoK">UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning</a>. Kunchang Li, Yali Wang, Gao Peng, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao</li>
<li><a href="https://openreview.net/forum?id=Jh9VxCkrEZn">Spatiotemporal Representation Learning on Time Series with Dynamic Graph ODEs</a>. Ming Jin, Yuan-Fang Li, Yu Zheng, Bin Yang, Shirui Pan</li>
</ol>
<h3 id="Spatial-temporal-GNN"><a href="#Spatial-temporal-GNN" class="headerlink" title="Spatial-temporal GNN"></a>Spatial-temporal GNN</h3><ol>
<li><a href="https://openreview.net/forum?id=XJiajt89Omg">Space-Time Graph Neural Networks</a>. Samar Hadou, Charilaos I Kanatsoulis, Alejandro Ribeiro</li>
</ol>
<h3 id="Traffic-forecasting"><a href="#Traffic-forecasting" class="headerlink" title="Traffic forecasting"></a>Traffic forecasting</h3><ol>
<li><a href="https://openreview.net/forum?id=wwDg3bbYBIq">Learning to Remember Patterns: Pattern Matching Memory Networks for Traffic Forecasting</a>. Hyunwook Lee, Seungmin Jin, Hyeshin Chu, Hongkyu Lim, Sungahn Ko</li>
</ol>
<h3 id="Trajectory-prediction"><a href="#Trajectory-prediction" class="headerlink" title="Trajectory prediction"></a>Trajectory prediction</h3><ol>
<li><a href="https://openreview.net/forum?id=POxF-LEqnF">You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction</a>. Osama Makansi, Julius Von Kügelgen, Francesco Locatello, Peter Vincent Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf</li>
</ol>
]]></content>
<categories>
<category>论文阅读</category>
</categories>
<tags>
<tag>时空数据挖掘</tag>
<tag>ICLR 2022</tag>
</tags>
</entry>
<entry>
<title>[ICLR 2024] Spatio-Temporal Data Mining Papers</title>
<url>/posts/3d1ac43b/</url>
<content><![CDATA[<p>This post goes through the accepted papers of ICLR 2024, selects those related to spatio-temporal data mining, and groups them by task; 50 papers in total.</p>
<span id="more"></span>
<h1 id="Air-Quality-Prediction-1-空气质量预测"><a href="#Air-Quality-Prediction-1-空气质量预测" class="headerlink" title="Air Quality Prediction (1) 空气质量预测"></a>Air Quality Prediction (1) 空气质量预测</h1><ol>
<li><strong>AirPhyNet: Harnessing Physics-Guided Neural Networks for Air Quality Prediction</strong>. <a href="https://openreview.net/forum?id=JW3jTjaaAB">https://openreview.net/forum?id=JW3jTjaaAB</a></li>
</ol>
<h1 id="Climate-Forecasting-1-气候预测"><a href="#Climate-Forecasting-1-气候预测" class="headerlink" title="Climate Forecasting (1) 气候预测"></a>Climate Forecasting (1) 气候预测</h1><ol>
<li><strong>ClimODE: Climate Forecasting With Physics-informed Neural ODEs</strong>. <a href="https://openreview.net/forum?id=xuY33XhEGR">https://openreview.net/forum?id=xuY33XhEGR</a></li>
</ol>
<h1 id="Dynamic-Graph-1-动态图"><a href="#Dynamic-Graph-1-动态图" class="headerlink" title="Dynamic Graph (1) 动态图"></a>Dynamic Graph (1) 动态图</h1><ol>
<li><strong>Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks</strong>. <a href="https://openreview.net/forum?id=AJBkfwXh3u">https://openreview.net/forum?id=AJBkfwXh3u</a></li>
</ol>
<h1 id="Geospatial-Knowledge-Extraction-1-地理空间知识提取"><a href="#Geospatial-Knowledge-Extraction-1-地理空间知识提取" class="headerlink" title="Geospatial Knowledge Extraction (1) 地理空间知识提取"></a>Geospatial Knowledge Extraction (1) 地理空间知识提取</h1><ol>
<li><strong>GeoLLM: Extracting Geospatial Knowledge from Large Language Models</strong>. <a href="https://openreview.net/forum?id=TqL2xBwXP3">https://openreview.net/forum?id=TqL2xBwXP3</a></li>
</ol>
<h1 id="Long-term-Time-Series-Forecasting-2-长时间序列预测"><a href="#Long-term-Time-Series-Forecasting-2-长时间序列预测" class="headerlink" title="Long-term Time Series Forecasting (2) 长时间序列预测"></a>Long-term Time Series Forecasting (2) 长时间序列预测</h1><ol>
<li><strong>Periodicity Decoupling Framework for Long-term Series Forecasting</strong>. <a href="https://openreview.net/forum?id=dp27P5HBBt">https://openreview.net/forum?id=dp27P5HBBt</a></li>
<li><strong>Self-Supervised Contrastive Forecasting</strong>. <a href="https://openreview.net/forum?id=nBCuRzjqK7">https://openreview.net/forum?id=nBCuRzjqK7</a></li>
</ol>
<h1 id="Spatio-Temporal-Graph-Transfer-Learning-1-时空图迁移学习"><a href="#Spatio-Temporal-Graph-Transfer-Learning-1-时空图迁移学习" class="headerlink" title="Spatio-Temporal Graph Transfer Learning (1) 时空图迁移学习"></a>Spatio-Temporal Graph Transfer Learning (1) 时空图迁移学习</h1><ol>
<li><strong>A Generative Pre-Training Framework for Spatio-Temporal Graph Transfer Learning</strong>. <a href="https://openreview.net/forum?id=QyFm3D3Tzi">https://openreview.net/forum?id=QyFm3D3Tzi</a></li>
</ol>
<h1 id="Spatio-Temporal-Causal-inference-1-时空因果推断"><a href="#Spatio-Temporal-Causal-inference-1-时空因果推断" class="headerlink" title="Spatio-Temporal Causal inference (1) 时空因果推断"></a>Spatio-Temporal Causal inference (1) 时空因果推断</h1><ol>
<li><strong>NuwaDynamics: Discovering and Updating in Causal Spatio-Temporal Modeling</strong>. <a href="https://openreview.net/forum?id=sLdVl0q68X">https://openreview.net/forum?id=sLdVl0q68X</a></li>
</ol>
<h1 id="Time-Series-Forecasting-20-时间序列预测"><a href="#Time-Series-Forecasting-20-时间序列预测" class="headerlink" title="Time Series Forecasting (20) 时间序列预测"></a>Time Series Forecasting (20) 时间序列预测</h1><ol>
<li><strong>Biased Temporal Convolution Graph Network for Time Series Forecasting with Missing Values</strong>. <a href="https://openreview.net/forum?id=O9nZCwdGcG">https://openreview.net/forum?id=O9nZCwdGcG</a></li>
<li><strong>CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=MJksrOhurE">https://openreview.net/forum?id=MJksrOhurE</a></li>
<li><strong>Copula Conformal prediction for multi-step time series prediction</strong>. <a href="https://openreview.net/forum?id=ojIJZDNIBj">https://openreview.net/forum?id=ojIJZDNIBj</a></li>
<li><strong>DAM: A Foundation Model for Forecasting</strong>. <a href="https://openreview.net/forum?id=4NhMhElWqP">https://openreview.net/forum?id=4NhMhElWqP</a></li>
<li><strong>iTransformer: Inverted Transformers Are Effective for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=JePfAI8fah">https://openreview.net/forum?id=JePfAI8fah</a></li>
<li><strong>Interpretable Sparse System Identification: Beyond Recent Deep Learning Techniques on Time-Series Prediction</strong>. <a href="https://openreview.net/forum?id=aFWUY3E7ws">https://openreview.net/forum?id=aFWUY3E7ws</a></li>
<li><strong>Multi-Resolution Diffusion Models for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=mmjnr0G8ZY">https://openreview.net/forum?id=mmjnr0G8ZY</a></li>
<li><strong>MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process</strong>. <a href="https://openreview.net/forum?id=CZiY6OLktd">https://openreview.net/forum?id=CZiY6OLktd</a></li>
<li><strong>Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=lJkOCMP2aW">https://openreview.net/forum?id=lJkOCMP2aW</a></li>
<li><strong>Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators</strong>. <a href="https://openreview.net/forum?id=JiTVtCUOpS">https://openreview.net/forum?id=JiTVtCUOpS</a></li>
<li><strong>RobustTSF: Towards Theory and Design of Robust Time Series Forecasting with Anomalies</strong>. <a href="https://openreview.net/forum?id=ltZ9ianMth">https://openreview.net/forum?id=ltZ9ianMth</a></li>
<li><strong>SocioDojo: Building Lifelong Analytical Agents with Real-world Text and Time Series</strong>. <a href="https://openreview.net/forum?id=s9z0HzWJJp">https://openreview.net/forum?id=s9z0HzWJJp</a></li>
<li><strong>STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction</strong>. <a href="https://openreview.net/forum?id=6iwg437CZs">https://openreview.net/forum?id=6iwg437CZs</a></li>
<li><strong>Transformer-Modulated Diffusion Models for Probabilistic Multivariate Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=qae04YACHs">https://openreview.net/forum?id=qae04YACHs</a></li>
<li><strong>Time-LLM: Time Series Forecasting by Reprogramming Large Language Models</strong>. <a href="https://openreview.net/forum?id=Unb5CVPtae">https://openreview.net/forum?id=Unb5CVPtae</a></li>
<li><strong>TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=YH5w12OUuU">https://openreview.net/forum?id=YH5w12OUuU</a></li>
<li><strong>TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series</strong>. <a href="https://openreview.net/forum?id=xtOydkE1Ku">https://openreview.net/forum?id=xtOydkE1Ku</a></li>
<li><strong>Towards Transparent Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=TYXtXLYHpR">https://openreview.net/forum?id=TYXtXLYHpR</a></li>
<li><strong>TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=7oLshfEIC2">https://openreview.net/forum?id=7oLshfEIC2</a></li>
<li><strong>VQ-TR: Vector Quantized Attention for Time Series Forecasting</strong>. <a href="https://openreview.net/forum?id=IxpTsFS7mh">https://openreview.net/forum?id=IxpTsFS7mh</a></li>
</ol>
<h1 id="Temporal-Graphs-1-时序图"><a href="#Temporal-Graphs-1-时序图" class="headerlink" title="Temporal Graphs (1) 时序图"></a>Temporal Graphs (1) 时序图</h1><ol>
<li><strong>Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs</strong>. <a href="https://openreview.net/forum?id=uvFhCUPjtI">https://openreview.net/forum?id=uvFhCUPjtI</a></li>
</ol>
<h1 id="Time-Series-Imputation-1-时间序列填补"><a href="#Time-Series-Imputation-1-时间序列填补" class="headerlink" title="Time Series Imputation (1) 时间序列填补"></a>Time Series Imputation (1) 时间序列填补</h1><ol>
<li><strong>Conditional Information Bottleneck Approach for Time Series Imputation</strong>. <a href="https://openreview.net/forum?id=K1mcPiDdOJ">https://openreview.net/forum?id=K1mcPiDdOJ</a></li>
</ol>
<h1 id="Time-Series-Causal-Discovery-1-时间序列因果发现"><a href="#Time-Series-Causal-Discovery-1-时间序列因果发现" class="headerlink" title="Time Series Causal Discovery (1) 时间序列因果发现"></a>Time Series Causal Discovery (1) 时间序列因果发现</h1><ol>
<li><strong>CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery</strong>. <a href="https://openreview.net/forum?id=iad1yyyGme">https://openreview.net/forum?id=iad1yyyGme</a></li>
</ol>
<h1 id="Time-Series-Generation-2-时间序列生成"><a href="#Time-Series-Generation-2-时间序列生成" class="headerlink" title="Time Series Generation (2) 时间序列生成"></a>Time Series Generation (2) 时间序列生成</h1><ol>
<li><strong>Diffusion-TS: Interpretable Diffusion for General Time Series Generation</strong>. <a href="https://openreview.net/forum?id=4h1apFjO99">https://openreview.net/forum?id=4h1apFjO99</a></li>
<li><strong>Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs</strong>. <a href="https://openreview.net/forum?id=eY7sLb0dVF">https://openreview.net/forum?id=eY7sLb0dVF</a></li>
</ol>
<h1 id="Time-Series-Representations-2-时间序列表示"><a href="#Time-Series-Representations-2-时间序列表示" class="headerlink" title="Time Series Representations (2) 时间序列表示"></a>Time Series Representations (2) 时间序列表示</h1><ol>
<li><strong>Disentangling Time Series Representations via Contrastive based $l$− Variational Inference</strong>. <a href="https://openreview.net/forum?id=iI7hZSczxE">https://openreview.net/forum?id=iI7hZSczxE</a> </li>
<li><strong>T-Rep: Representation Learning for Time Series using Time-Embeddings</strong>. <a href="https://openreview.net/forum?id=3y2TfP966N">https://openreview.net/forum?id=3y2TfP966N</a></li>
</ol>
<h1 id="Time-Series-Analysis-2-时间序列分析"><a href="#Time-Series-Analysis-2-时间序列分析" class="headerlink" title="Time Series Analysis (2) 时间序列分析"></a>Time Series Analysis (2) 时间序列分析</h1><ol>
<li><strong>FITS: Modeling Time Series with 10$k$ Parameters</strong>. <a href="https://openreview.net/forum?id=bWcnvZ3qMb">https://openreview.net/forum?id=bWcnvZ3qMb</a></li>
<li><strong>ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis</strong>. <a href="https://openreview.net/forum?id=vpJMJerXHU">https://openreview.net/forum?id=vpJMJerXHU</a></li>
</ol>
<h1 id="Time-Series-Explanation-1-时间序列解释"><a href="#Time-Series-Explanation-1-时间序列解释" class="headerlink" title="Time Series Explanation (1) 时间序列解释"></a>Time Series Explanation (1) 时间序列解释</h1><ol>
<li><strong>Explaining Time Series via Contrastive and Locally Sparse Perturbations</strong>. <a href="https://openreview.net/forum?id=qDdSRaOiyb">https://openreview.net/forum?id=qDdSRaOiyb</a></li>
</ol>
<h1 id="Time-Series-Pattern-Recognition-1-时间序列模式识别"><a href="#Time-Series-Pattern-Recognition-1-时间序列模式识别" class="headerlink" title="Time Series Pattern Recognition (1) 时间序列模式识别"></a>Time Series Pattern Recognition (1) 时间序列模式识别</h1><ol>
<li><strong>Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns</strong>. <a href="https://openreview.net/forum?id=CdjnzWsQax">https://openreview.net/forum?id=CdjnzWsQax</a></li>
</ol>
<h1 id="Time-Series-Embedding-3-时间序列嵌入"><a href="#Time-Series-Embedding-3-时间序列嵌入" class="headerlink" title="Time Series Embedding (3) 时间序列嵌入"></a>Time Series Embedding (3) 时间序列嵌入</h1><ol>
<li><strong>GAFormer: Enhancing Timeseries Transformers Through Group-Aware Embeddings</strong>. <a href="https://openreview.net/forum?id=c56TWtYp0W">https://openreview.net/forum?id=c56TWtYp0W</a></li>
<li><strong>Learning to Embed Time Series Patches Independently</strong>. <a href="https://openreview.net/forum?id=WS7GuBDFa2">https://openreview.net/forum?id=WS7GuBDFa2</a></li>
<li><strong>TEST: Text Prototype Aligned Embedding to Activate LLM’s Ability for Time Series</strong>. <a href="https://openreview.net/forum?id=Tuh4nZVb0g">https://openreview.net/forum?id=Tuh4nZVb0g</a></li>
</ol>
<h1 id="Time-Series-Classification-2-时间序列分类"><a href="#Time-Series-Classification-2-时间序列分类" class="headerlink" title="Time Series Classification (2) 时间序列分类"></a>Time Series Classification (2) 时间序列分类</h1><ol>
<li><strong>Inherently Interpretable Time Series Classification via Multiple Instance Learning</strong>. <a href="https://openreview.net/forum?id=xriGRsoAza">https://openreview.net/forum?id=xriGRsoAza</a></li>
<li><strong>Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data</strong>. <a href="https://openreview.net/forum?id=4VIgNuQ1pY">https://openreview.net/forum?id=4VIgNuQ1pY</a></li>
</ol>
<h1 id="Time-Series-Alignment-1-时间序列对齐"><a href="#Time-Series-Alignment-1-时间序列对齐" class="headerlink" title="Time Series Alignment (1) 时间序列对齐"></a>Time Series Alignment (1) 时间序列对齐</h1><ol>
<li><strong>Leveraging Generative Models for Unsupervised Alignment of Neural Time Series Data</strong>. <a href="https://openreview.net/forum?id=9zhHVyLY4K">https://openreview.net/forum?id=9zhHVyLY4K</a></li>
</ol>
<h1 id="Time-Series-Contrastive-Learning-4-时间序列对比学习"><a href="#Time-Series-Contrastive-Learning-4-时间序列对比学习" class="headerlink" title="Time Series Contrastive Learning (4) 时间序列对比学习"></a>Time Series Contrastive Learning (4) 时间序列对比学习</h1><ol>
<li><strong>Parametric Augmentation for Time Series Contrastive Learning</strong>. <a href="https://openreview.net/forum?id=EIPLdFy3vp">https://openreview.net/forum?id=EIPLdFy3vp</a></li>
<li><strong>Retrieval-Based Reconstruction For Time-series Contrastive Learning</strong>. <a href="https://openreview.net/forum?id=3zQo5oUvia">https://openreview.net/forum?id=3zQo5oUvia</a></li>
<li><strong>Soft Contrastive Learning for Time Series</strong>. <a href="https://openreview.net/forum?id=pAsQSWlDUf">https://openreview.net/forum?id=pAsQSWlDUf</a></li>
<li><strong>Towards Enhancing Time Series Contrastive Learning: A Dynamic Bad Pair Mining Approach</strong>. <a href="https://openreview.net/forum?id=K2c04ulKXn">https://openreview.net/forum?id=K2c04ulKXn</a></li>
</ol>
<h1 id="Traffic-Predictoin-1-交通预测"><a href="#Traffic-Predictoin-1-交通预测" class="headerlink" title="Traffic Prediction (1) 交通预测"></a>Traffic Prediction (1) 交通预测</h1><ol>
<li><strong>TESTAM: A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts</strong>. <a href="https://openreview.net/forum?id=N0nTk5BSvO">https://openreview.net/forum?id=N0nTk5BSvO</a></li>
</ol>
<h1 id="更多"><a href="#更多" class="headerlink" title="More"></a>More</h1><ol>
<li>Spatio-temporal data mining paper repository: <a href="https://github.com/xiepeng21/research_spatio-temporal-data-mining">Research of Spatio-Temporal Data Mining</a></li>
<li>[KDD 2023] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/649761763">半亩方塘:【KDD 2023】时空数据挖掘论文</a></li>
<li>[ICLR 2023] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/629231015">半亩方塘:【ICLR 2023】时空数据挖掘论文</a></li>
<li>[NeurIPS 2023] Spatio-temporal data mining papers: <a href="https://xiepeng21.cn/posts/3727adf0/">【NeurIPS 2023】时空数据挖掘论文</a></li>
<li>[ICLR 2022] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/478362107">半亩方塘:【ICLR 2022】时空数据挖掘论文</a></li>
<li>[ICML 2022] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/548907678">半亩方塘:【ICML 2022】时空数据挖掘论文</a></li>
<li>[IJCAI 2022] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/547781911">半亩方塘:【IJCAI 2022】时空数据挖掘论文</a></li>
<li>[KDD 2022] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/547262551">半亩方塘:【KDD 2022】时空数据挖掘论文</a></li>
<li>[WSDM 2022] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/478363400">半亩方塘:【WSDM 2022】时空数据挖掘论文</a></li>
<li>[AAAI 2022] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/478357750">半亩方塘:【AAAI 2022】时空数据挖掘论文</a></li>
<li>Graph neural network papers for spatio-temporal data mining from the past three years (2018-2021): <a href="https://zhuanlan.zhihu.com/p/420738345">半亩方塘:【图神经网络】近3年用于时空数据挖掘的图神经网络论文(2018-2021)</a></li>
<li>[KDD 2021] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/423342733">半亩方塘:【KDD 2021 】时空数据挖掘论文</a></li>
<li>[IJCAI 2021] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/423323595">半亩方塘:【IJCAI 2021 】时空数据挖掘论文</a></li>
<li>[WWW 2021] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/440291388">半亩方塘:【WWW 2021 】时空数据挖掘论文</a></li>
<li>[ACM SIGSPATIAL 2021] Spatio-temporal data mining papers: <a href="https://zhuanlan.zhihu.com/p/442419220">半亩方塘:【ACM SIGSPATIAL 2021】时空数据挖掘论文</a></li>
<li>[Spatio-temporal data mining] Opening the window for a look: <a href="https://zhuanlan.zhihu.com/p/573936494">半亩方塘:【时空数据挖掘】推开窗,看一看</a></li>
<li>[Spatio-temporal data mining] Building a hierarchical taxonomy: <a href="https://zhuanlan.zhihu.com/p/435895312">半亩方塘:【时空数据挖掘】 - 层次体系构建</a></li>
<li>[Spatio-temporal data mining] Tasks: <a href="https://zhuanlan.zhihu.com/p/426898203">半亩方塘:【时空数据挖掘】- 任务</a></li>
<li>[Spatio-temporal data mining] Methods: <a href="https://zhuanlan.zhihu.com/p/435890647">半亩方塘:【时空数据挖掘】- 方法</a></li>
<li>[Spatio-temporal data mining] Data: <a href="https://zhuanlan.zhihu.com/p/435882797">半亩方塘:【时空数据挖掘】- 数据</a></li>
</ol>
]]></content>
<categories>
<category>论文阅读</category>
</categories>
<tags>
<tag>时空数据挖掘</tag>
<tag>ICLR 2024</tag>
</tags>
</entry>
<entry>
<title>【WSDM 2022】时空数据挖掘论文</title>
<url>/posts/e5ed55f3/</url>
<content><![CDATA[<p>I went through the accepted papers of WSDM 2022, picked out those related to spatio-temporal data mining, and grouped them by task.</p>
<span id="more"></span>
<p>Full list of WSDM 2022 accepted papers: <a href="https://www.wsdm-conference.org/2022/accepted-papers/">https://www.wsdm-conference.org/2022/accepted-papers/</a></p>
<h3 id="Human-mobility-prediction"><a href="#Human-mobility-prediction" class="headerlink" title="Human mobility prediction"></a>Human mobility prediction</h3><p><a href="https://dl.acm.org/doi/10.1145/3488560.3498438">RLMob: Deep Reinforcement Learning for Successive Mobility Prediction</a>. Ziyan Luo, Congcong Miao</p>
<h3 id="Precipitation-forecasting"><a href="#Precipitation-forecasting" class="headerlink" title="Precipitation forecasting"></a>Precipitation forecasting</h3><p><a href="https://dl.acm.org/doi/10.1145/3488560.3498448">A New Class of Polynomial Activation Functions of Deep Learning for Precipitation Forecasting</a>. Jiachuan Wang, Lei Chen, Charles Wang Wai Ng</p>
<h3 id="Route-recommendation"><a href="#Route-recommendation" class="headerlink" title="Route recommendation"></a>Route recommendation</h3><p><a href="https://dl.acm.org/doi/10.1145/3488560.3498512">Personalized Long-distance Fuel-efficient Route Recommendation Through Historical Trajectories Mining</a>. Zhan Wang, Zhaohui Peng, Senzhang Wang, Qiao Song</p>
<h3 id="Passenger-demand-prediction"><a href="#Passenger-demand-prediction" class="headerlink" title="Passenger demand prediction"></a>Passenger demand prediction</h3><p><a href="https://dl.acm.org/doi/10.1145/3488560.3498394">CMT-Net: A Mutual Transition Aware Framework for Taxicab Pick-ups and Drop-offs Co-Prediction</a>. Yudong Zhang, Binwu Wang, Ziyang Shan, Zhengyang Zhou, Yang Wang</p>
<h3 id="Urban-flow-prediction"><a href="#Urban-flow-prediction" class="headerlink" title="Urban flow prediction"></a>Urban flow prediction</h3><p><a href="https://dl.acm.org/doi/10.1145/3488560.3498444">ST-GSP: Spatial-Temporal Global Semantic Representation Learning for Urban Flow Prediction</a>. Liang Zhao, Min Gao, Zongwei Wang</p>
]]></content>
<categories>
<category>论文阅读</category>
</categories>
<tags>
<tag>时空数据挖掘</tag>
<tag>WSDM 2022</tag>
</tags>
</entry>
<entry>
<title>【NeurIPS 2023】时空数据挖掘论文</title>
<url>/posts/3727adf0/</url>
<content><![CDATA[<p>I went through the accepted papers of NeurIPS 2023 and picked out those related to spatio-temporal data mining, listed below.</p>
<span id="more"></span>
<p>Full list of NeurIPS 2023 accepted papers: <a href="https://openreview.net/group?id=NeurIPS.cc/2023/Conference">https://openreview.net/group?id=NeurIPS.cc/2023/Conference</a></p>
<ul>
<li>Automatic Integration for Spatiotemporal Neural Point Processes. Zihao Zhou, Rose Yu. <a href="https://neurips.cc/virtual/2023/poster/72359">paper</a></li>
<li>Adaptive Normalization for Non-stationary Time Series Forecasting: A Temporal Slice Perspective. Zhiding Liu, Mingyue Cheng, Zhi Li, Zhenya Huang, Qi Liu, Yanhu Xie, Enhong Chen. <a href="https://neurips.cc/virtual/2023/poster/72816">paper</a></li>
<li>BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis. Zelin Ni, Hang Yu, Shizhan Liu, Jianguo Li , Weiyao Lin. <a href="https://neurips.cc/virtual/2023/poster/69976">paper</a></li>
<li>BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series. Andrea Nascetti, Ritu Yadav, Kirill Brodt, Qixun Qu, Hongwei Fan, Yuri Shendryk, Isha Shah, Christine Chung. <a href="https://neurips.cc/virtual/2023/poster/73499">paper</a></li>
<li>Creating High-Fidelity Synthetic GPS Trajectory Dataset for Urban Mobility Analysis. Yuanshao Zhu, Yongchao Ye, Ying Wu, Xiangyu Zhao, James Yu. <a href="https://neurips.cc/virtual/2023/poster/73469">paper</a></li>
<li>Contrast Everything: Multi-Granularity Representation Learning for Medical Time-Series. Yihe Wang, Yu Han, Haishuai Wang, Xiang Zhang. <a href="https://neurips.cc/virtual/2023/poster/70272">paper</a></li>
<li>ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling. Yuqi Chen, Kan Ren, Yansen Wang, Yuchen Fang, Weiwei Sun, Dongsheng Li. <a href="https://neurips.cc/virtual/2023/poster/71304">paper</a></li>
<li>Causal Discovery from Subsampled Time Series with Proxy Variables. Mingzhou Liu, Xinwei Sun, Lingjing Hu, Yizhou Wang. <a href="https://neurips.cc/virtual/2023/poster/70936">paper</a></li>
<li>Causal Discovery in Semi-Stationary Time Series. Shanyun Gao, Raghavendra Addanki, Tong Yu, Ryan Rossi, Murat Kocaoglu. <a href="https://neurips.cc/virtual/2023/poster/71016">paper</a></li>
<li>CrossGNN: Confronting Noisy Multivariate Time Series Via Cross Interaction Refinement. Qihe Huang, Lei Shen, Ruixin Zhang, Shouhong Ding, Binwu Wang, Zhengyang Zhou, Yang Wang. <a href="https://neurips.cc/virtual/2023/poster/70010">paper</a></li>
<li>Conformal Prediction for Time Series with Modern Hopfield Networks. Andreas Auer, Martin Gauch, Daniel Klotz, Sepp Hochreiter. <a href="https://neurips.cc/virtual/2023/poster/72007">paper</a></li>
<li>Conformal Scorecasting: Anticipatory Uncertainty Quantification for Distribution Shift in Time Series. Anastasios Angelopoulos, Ryan Tibshirani, Emmanuel Candes. <a href="https://neurips.cc/virtual/2023/poster/69896">paper</a></li>
<li>DiffTraj: Generating GPS Trajectory with Diffusion Probabilistic Model. Yuanshao Zhu, Yongchao Ye, Shiyao Zhang, Xiangyu Zhao, James Yu. <a href="https://neurips.cc/virtual/2023/poster/69935">paper</a></li>
<li>DYffusion: A Dynamics-informed Diffusion Model for Spatiotemporal Forecasting. Salva Rühling Cachay, Bo Zhao, Hailey James, Rose Yu. <a href="https://neurips.cc/virtual/2023/poster/71410">paper</a></li>
<li>Drift doesn’t Matter: Dynamic Decomposition with Diffusion Reconstruction for Unstable Multivariate Time Series Anomaly Detection. Chengsen Wang, Qi Qi, Jingyu Wang, Haifeng Sun, Xingyu Wang, Zirui Zhuang, Jianxin Liao. <a href="https://neurips.cc/virtual/2023/poster/71195">paper</a><a href="https://github.com/ForestsKing/D3R">code</a></li>
<li>Deciphering Spatio-Temporal Graph Forecasting: A Causal Lens and Treatment. Yutong Xia, Yuxuan Liang, Haomin Wen, Xu Liu, Kun Wang, Zhengyang Zhou, Roger Zimmermann. <a href="https://neurips.cc/virtual/2023/poster/73036">paper</a></li>
<li>Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics. Liming Wu, Zhichao Hou, Jirui Yuan, Yu Rong, Wenbing Huang. <a href="https://neurips.cc/virtual/2023/poster/72921">paper</a></li>
<li>Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency. Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik. <a href="https://neurips.cc/virtual/2023/poster/69958">paper</a></li>
<li>Equivariant Neural Simulators for Stochastic Spatiotemporal Dynamics. Koen Minartz, Yoeri Poels, Simon Koop, Vlado Menkovski. <a href="https://neurips.cc/virtual/2023/poster/72442">paper</a></li>
<li>Fine-Grained Spatio-Temporal Particulate Matter Dataset From Delhi For ML based Modeling. Sachin Chauhan, Zeel Bharatkumar Patel, Sayan Ranu, Rijurekha Sen, Nipun Batra. <a href="https://neurips.cc/virtual/2023/poster/73476">paper</a></li>
<li>Frequency-domain MLPs are More Effective Learners in Time Series Forecasting. Kun Yi, Qi Zhang, Wei Fan, Hui He, Pengyang Wang, Shoujin Wang, Ning An, Defu Lian, Longbing Cao, Zhendong Niu. <a href="https://neurips.cc/virtual/2023/poster/70726">paper</a></li>
<li>FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space. Shengzhong Liu, Tomoyoshi Kimura, Dongxin Liu, Ruijie Wang, Jinyang Li, Suhas Diggavi, Mani Srivastava, Tarek Abdelzaher. <a href="https://neurips.cc/virtual/2023/poster/70617">paper</a></li>
<li>FourierGNN: Rethinking Multivariate Time Series Forecasting from a Pure Graph Perspective. Kun Yi, Qi Zhang, Wei Fan, Hui He, Liang Hu, Pengyang Wang, Ning An, Longbing Cao, Zhendong Niu. <a href="https://neurips.cc/virtual/2023/poster/71159">paper</a></li>
<li>Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning. Berken Utku Demirel, Christian Holz. <a href="https://neurips.cc/virtual/2023/poster/71014">paper</a><a href="https://github.com/eth-siplab/Finding_Order_in_Chaos">code</a></li>
<li>Generative Pre-Training of Spatio-Temporal Graph Neural Networks. Zhonghang Li, Lianghao Xia, Yong Xu, Chao Huang. <a href="https://neurips.cc/virtual/2023/poster/70508">paper</a></li>
<li>Integration-free Training for Spatio-temporal Multimodal Covariate Deep Kernel Point Processes. Yixuan Zhang, Quyu Kong, Feng Zhou. <a href="https://neurips.cc/virtual/2023/poster/71268">paper</a></li>
<li>Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors. Yong Liu, Chenyu Li, Jianmin Wang, Mingsheng Long. <a href="https://neurips.cc/virtual/2023/poster/72562">paper</a><a href="https://github.com/thuml/Koopa">code</a></li>
<li>LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting. Xu Liu, Yutong Xia, Yuxuan Liang, Junfeng Hu, Yiwei Wang, Lei Bai, Chao Huang, Zhenguang Liu, Bryan Hooi, Roger Zimmermann. <a href="https://neurips.cc/virtual/2023/poster/73480">paper</a><a href="https://github.com/liuxu77/LargeST">code</a></li>
<li>Large Language Models Are Zero Shot Time Series Forecasters. Marc Finzi, Nate Gruver, Shikai Qiu, Andrew Wilson. <a href="https://neurips.cc/virtual/2023/poster/70543">paper</a></li>
<li>MEMTO: Memory-guided Transformer for Multivariate Time Series Anomaly Detection. Junho Song, Keonwoo Kim, Jeonglyul Oh, Sungzoon Cho. <a href="https://neurips.cc/virtual/2023/poster/71519">paper</a><a href="https://github.com/gunny97/MEMTO">code</a></li>
<li>Nominality Score Conditioned Time Series Anomaly Detection by Point/Sequential Reconstruction. Chih-Yu Lai, Fan-Keng Sun, Zhengqi Gao, Jeffrey H Lang, Duane Boning. <a href="https://neurips.cc/virtual/2023/poster/70582">paper</a><a href="https://github.com/andrewlai61616/NPSR">code</a></li>
<li>OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning. Cheng Tan, Siyuan Li, Zhangyang Gao, Wenfei Guan, Zedong Wang, Zicheng Liu, Lirong Wu, Stan Z. Li. <a href="https://neurips.cc/virtual/2023/poster/73674">paper</a><a href="https://github.com/chengtan9907/OpenSTL">code</a></li>
<li>OneNet: Enhancing Time Series Forecasting Models under Concept Drift by Online Ensembling. Yifan Zhang, Qingsong Wen, Xue Wang, Weiqi Chen, Liang Sun, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan. <a href="https://neurips.cc/virtual/2023/poster/71725">paper</a><a href="https://github.com/yfzhang114/OneNet">code</a></li>
<li>One Fits All: Power General Time Series Analysis by Pretrained LM. Tian Zhou, Peisong Niu, xue wang, Liang Sun, Rong Jin. <a href="https://neurips.cc/virtual/2023/poster/70856">paper</a></li>
<li>On the Constrained Time-Series Generation Problem. Andrea Coletta, Sriram Gopalakrishnan, Daniel Borrajo, Svitlana Vyetrenko. <a href="https://neurips.cc/virtual/2023/poster/72006">paper</a></li>
<li>Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting. Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, Yuyang (Bernie) Wang. <a href="https://neurips.cc/virtual/2023/poster/70377">paper</a></li>
<li>SpatialRank: Urban Event Ranking with NDCG Optimization on Spatiotemporal Data. Bang An, Xun Zhou, Yongjian Zhong, Tianbao Yang. <a href="https://neurips.cc/virtual/2023/poster/70628">paper</a></li>
<li>Sparse Graph Learning from Spatiotemporal Time Series. Andrea Cini, Daniele Zambon, Cesare Alippi. <a href="https://neurips.cc/virtual/2023/poster/73924">paper</a></li>
<li>Sparse Deep Learning for Time Series Data: Theory and Applications. Mingxuan Zhang, Yan Sun, Faming Liang. <a href="https://neurips.cc/virtual/2023/poster/72629">paper</a></li>
<li>Scale-teaching: Robust Multi-scale Training for Time Series Classification with Noisy Labels. Zhen Liu, ma peitian, Dongliang Chen, Wenbin Pei, Qianli Ma. <a href="https://neurips.cc/virtual/2023/poster/72608">paper</a><a href="https://github.com/qianlima-lab/Scale-teaching">code</a></li>
<li>SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling. Jiaxiang Dong, Haixu Wu, Haoran Zhang, Li Zhang, Jianmin Wang, Mingsheng Long. <a href="https://neurips.cc/virtual/2023/poster/70829">paper</a><a href="https://github.com/thuml/SimMTM">code</a></li>
<li>Taming Local Effects in Graph-based Spatiotemporal Forecasting. Andrea Cini, Ivan Marisca, Daniele Zambon, Cesare Alippi. <a href="https://neurips.cc/virtual/2023/poster/70034">paper</a></li>
<li>Time Series Kernels based on Nonlinear Vector AutoRegressive Delay Embeddings. Giovanni De Felice, John Goulermas, Vladimir Gusev. <a href="https://neurips.cc/virtual/2023/poster/71521">paper</a></li>
<li>Time Series as Images: Vision Transformer for Irregularly Sampled Time Series. Zekun Li, Shiyang Li, Xifeng Yan. <a href="https://neurips.cc/virtual/2023/poster/71219">paper</a></li>
<li>UUKG: Unified Urban Knowledge Graph Dataset for Urban Spatiotemporal Prediction. Yansong Ning, Hao Liu, Hao Wang, Zhenyu Zeng, Hui Xiong. <a href="https://neurips.cc/virtual/2023/poster/73440">paper</a><a href="https://github.com/usail-hkust/UUKG">code</a></li>
<li>What if We Enrich day-ahead Solar Irradiance Time Series Forecasting with Spatio-Temporal Context? Oussama Boussif, Ghait Boukachab, Dan Assouline, Stefano Massaroli, Tianle Yuan, Loubna Benabbou, Yoshua Bengio. <a href="https://neurips.cc/virtual/2023/poster/70031">paper</a><a href="https://github.com/gitbooo/CrossViVit">code</a></li>
<li>WITRAN: Water-wave Information Transmission and Recurrent Acceleration Network for Long-range Time Series Forecasting. Yuxin Jia, Youfang Lin, Xinyan Hao, Yan Lin, Shengnan Guo, Huaiyu Wan. <a href="https://neurips.cc/virtual/2023/poster/69972">paper</a><a href="https://github.com/Water2sea/WITRAN">code</a></li>
<li>WildfireSpreadTS: A dataset of multi-modal time series for wildfire spread prediction. Sebastian Gerard, Yu Zhao, Josephine Sullivan. <a href="https://neurips.cc/virtual/2023/poster/73593">paper</a></li>
</ul>
<h1 id="发现更多"><a href="#发现更多" class="headerlink" title="Discover More"></a>Discover More</h1><ol>
<li><a href="https://www.zhihu.com/column/c_1437144001987792896">时空数据挖掘</a> </li>
<li><a href="https://github.com/xiepeng21/research_spatio-temporal-data-mining">时空数据挖掘论文库</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/573936494">半亩方塘:【时空数据挖掘】推开窗,看一看</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/435895312">半亩方塘:【时空数据挖掘】 - 层次体系构建</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/426898203">半亩方塘:【时空数据挖掘】- 任务</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/435890647">半亩方塘:【时空数据挖掘】- 方法</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/435882797">半亩方塘:【时空数据挖掘】- 数据</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/649761763">【KDD 2023】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/629231015">【ICLR 2023】 时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/478362107">半亩方塘:【ICLR 2022】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/548907678">半亩方塘:【ICML 2022】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/547781911">半亩方塘:【IJCAI 2022】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/547262551">半亩方塘:【KDD 2022】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/478363400">半亩方塘:【WSDM 2022】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/478357750">半亩方塘:【AAAI 2022】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/420738345">半亩方塘:【图神经网络】近3年用于时空数据挖掘的图神经网络论文(2018-2021)</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/423342733">半亩方塘:【KDD 2021 】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/423323595">半亩方塘:【IJCAI 2021 】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/440291388">半亩方塘:【WWW 2021 】时空数据挖掘论文</a></li>
<li><a href="https://zhuanlan.zhihu.com/p/442419220">半亩方塘:【ACM SIGSPATIAL 2021】时空数据挖掘论文</a></li>
</ol>
<h1 id="参考资料"><a href="#参考资料" class="headerlink" title="References"></a>References</h1><ol>
<li>Summary of NeurIPS 2023 time series papers <a href="https://zhuanlan.zhihu.com/p/659088918">https://zhuanlan.zhihu.com/p/659088918</a></li>
<li>Summary of NeurIPS 2023 spatio-temporal data papers <a href="https://zhuanlan.zhihu.com/p/659050798">https://zhuanlan.zhihu.com/p/659050798</a></li>
<li>Summaries of spatio-temporal papers at top venues <a href="https://www.zhihu.com/column/c_1683852314232885248">https://www.zhihu.com/column/c_1683852314232885248</a></li>
</ol>
]]></content>
<categories>
<category>论文阅读</category>
</categories>
<tags>
<tag>时空数据挖掘</tag>
<tag>NeurIPS 2023</tag>
</tags>
</entry>
<entry>
<title>Ubuntu 16.04将cuda 10.1降到cuda10.0</title>
<url>/posts/8517e2d2/</url>
<content><![CDATA[<p>Recently, a program I needed to run required CUDA 10.0, but the machine had CUDA 10.1 installed, so I downgraded it.</p>
<span id="more"></span>
<h5 id="卸载cuda-10-1"><a href="#卸载cuda-10-1" class="headerlink" title="Uninstall CUDA 10.1"></a>Uninstall CUDA 10.1</h5><figure class="highlight awk"><table><tr><td class="code"><pre><code class="hljs awk">sudo <span class="hljs-regexp">/usr/</span>local<span class="hljs-regexp">/cuda-10.1/</span>bin/uninstall_cuda_10.<span class="hljs-number">1</span>.pl<br>sudo apt --purge remove nvidia*<br></code></pre></td></tr></table></figure>
<h5 id="下载cuda和cudnn"><a href="#下载cuda和cudnn" class="headerlink" title="Download CUDA and cuDNN"></a>Download CUDA and cuDNN</h5><p>Download CUDA 10.0 and cuDNN 7.3.1 from the official NVIDIA site; the two versions must match.<br>cuDNN/CUDA compatibility list:<br><a href="https://developer.nvidia.com/rdp/cudnn-archive#a-collapse731-10">https://developer.nvidia.com/rdp/cudnn-archive#a-collapse731-10</a></p>
<p>CUDA 10.0 download (choose the runfile installer, which is more convenient):<br><a href="https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=runfilelocal">https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=runfilelocal</a></p>
<p>cuDNN 7.3.1 download:<br><a href="https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v7.3.1/prod/10.0_2018927/cudnn-10.0-linux-x64-v7.3.1.20">https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v7.3.1/prod/10.0_2018927/cudnn-10.0-linux-x64-v7.3.1.20</a></p>
<h5 id="安装cuda"><a href="#安装cuda" class="headerlink" title="Install CUDA"></a>Install CUDA</h5><p><strong>1. Disable the nouveau driver</strong></p>
<figure class="highlight 1c"><table><tr><td class="code"><pre><code class="hljs 1c">lsmod <span class="hljs-string">| grep nouveau</span><br></code></pre></td></tr></table></figure>
<p>If the command prints nothing, nouveau is already disabled. If it prints anything, nouveau is still active, and you need to do the following,</p>
<p>1) Create the file blacklist-nouveau.conf under /etc/modprobe.d,</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><code class="hljs bash"><span class="hljs-built_in">cd</span> /etc/modprobe.d<br>sudo <span class="hljs-built_in">touch</span> blacklist-nouveau.conf<br></code></pre></td></tr></table></figure>
<p>2) Put the following content into blacklist-nouveau.conf,</p>
<figure class="highlight apache"><table><tr><td class="code"><pre><code class="hljs apache"><span class="hljs-attribute">blacklist</span> nouveau<br><span class="hljs-attribute">options</span> nouveau modeset=<span class="hljs-number">0</span><br></code></pre></td></tr></table></figure>
<p>After saving and exiting, run the following command,</p>
<figure class="highlight ebnf"><table><tr><td class="code"><pre><code class="hljs ebnf"><span class="hljs-attribute">sudo update-initramfs -u</span><br></code></pre></td></tr></table></figure>
<p>3) Run <code>lsmod | grep nouveau</code> again to check that nouveau has been disabled successfully.</p>
<p><strong>2. Switch to text mode</strong><br>Press Ctrl+Alt+F1 to enter text mode, then run the following command to stop the graphical interface,</p>
<figure class="highlight arduino"><table><tr><td class="code"><pre><code class="hljs arduino">sudo service lightdm stop<br></code></pre></td></tr></table></figure>
<p>Then change into the directory containing the CUDA installer and run:</p>
<figure class="highlight apache"><table><tr><td class="code"><pre><code class="hljs apache"><span class="hljs-attribute">sudo</span> sh cuda_10.<span class="hljs-number">0</span>.<span class="hljs-number">130</span>_410.<span class="hljs-number">48</span>_linux.run<br></code></pre></td></tr></table></figure>
<p>The installer first shows the license agreement; press the space bar until it reaches 100%, then type accept.</p>
<p>During installation you will be asked the following: NVIDIA Accelerated Graphics Driver: answer n;<br>OpenGL (may not always appear): answer n;<br>everything else: answer y;<br>for the installation path, accept the default by pressing Enter.</p>
<p>Next, run <code>sudo service lightdm start</code> to restart the graphical interface, press Alt+Ctrl+F7 to return to the graphical login screen, and log in with your account. Alternatively, run <code>sudo reboot</code> to reboot back into the GUI.</p>
<p><strong>3. Configure the environment</strong><br>Open the /etc/profile file with,</p>
<figure class="highlight awk"><table><tr><td class="code"><pre><code class="hljs awk">vim <span class="hljs-regexp">/etc/</span>profile<br></code></pre></td></tr></table></figure>
<p>Add the following lines to the file, prepending the CUDA directories to the existing values so that the other entries on the search paths are preserved,</p>
<figure class="highlight awk"><table><tr><td class="code"><pre><code class="hljs awk">export PATH=/usr/local/cuda-10.0/bin:$PATH<br>export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH<br></code></pre></td></tr></table></figure>
<p>Then save and exit the file. Next, run the following command to make the configuration take effect,</p>
<figure class="highlight gradle"><table><tr><td class="code"><pre><code class="hljs gradle"><span class="hljs-keyword">source</span> <span class="hljs-regexp">/etc/</span>profile<br></code></pre></td></tr></table></figure>
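<p>A note on the <code>export</code> lines in /etc/profile, with a minimal sketch: prepend the CUDA directories to the existing values (e.g. <code>export PATH=/usr/local/cuda-10.0/bin:$PATH</code>) rather than assigning the bare path, since a plain assignment discards every other entry and breaks ordinary commands. The values below are hypothetical stand-ins for a real <code>PATH</code>:</p>

```shell
# Hypothetical starting value standing in for the real $PATH
OLD_PATH="/usr/bin:/bin"
# Prepending keeps the existing entries ...
GOOD="/usr/local/cuda-10.0/bin:$OLD_PATH"
# ... while a plain assignment drops them
BAD="/usr/local/cuda-10.0/bin"
echo "$GOOD"
echo "$BAD"
```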
<p><strong>4. Verify the CUDA installation</strong><br>Check the NVIDIA driver version with:</p>
<figure class="highlight awk"><table><tr><td class="code"><pre><code class="hljs awk">cat <span class="hljs-regexp">/proc/</span>driver<span class="hljs-regexp">/nvidia/</span>version<br></code></pre></td></tr></table></figure>
<p>My version is 410.48.<br>Check the CUDA Toolkit version with:</p>
<figure class="highlight ebnf"><table><tr><td class="code"><pre><code class="hljs ebnf"><span class="hljs-attribute">nvcc -V</span><br></code></pre></td></tr></table></figure>
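<p>If a script needs the bare release number (for example, to compare against a required CUDA version), it can be cut out of the <code>nvcc -V</code> output. The sample text below is a hypothetical copy of that output rather than one captured live:</p>

```shell
# Hypothetical last line of `nvcc -V` output
NVCC_OUT="Cuda compilation tools, release 10.0, V10.0.130"
# Keep only the number that follows the word "release"
VERSION=$(echo "$NVCC_OUT" | grep -o 'release [0-9.]*' | awk '{print $2}')
echo "$VERSION"
```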
<p>Mine reports V10.0.130.<br>At this point CUDA is installed successfully; next comes cuDNN.</p>
<h5 id="安装cudnn"><a href="#安装cudnn" class="headerlink" title="Install cuDNN"></a>Install cuDNN</h5><p><strong>1. Extract the cuDNN archive</strong></p>
<figure class="highlight apache"><table><tr><td class="code"><pre><code class="hljs apache"><span class="hljs-attribute">tar</span> -xvf cudnn-<span class="hljs-number">10</span>.<span class="hljs-number">0</span>-linux-x64-v<span class="hljs-number">7.3.1.20</span>.tgz<br></code></pre></td></tr></table></figure>
<p><strong>2. Install the cuDNN files</strong><br>From the extracted directory, run the following commands in order,</p>
<figure class="highlight awk"><table><tr><td class="code"><pre><code class="hljs awk">sudo cp cuda<span class="hljs-regexp">/include/</span>cudnn.h <span class="hljs-regexp">/usr/</span>local<span class="hljs-regexp">/cuda/i</span>nclude/<br>sudo cp cuda<span class="hljs-regexp">/lib64/</span>libcudnn* <span class="hljs-regexp">/usr/</span>local<span class="hljs-regexp">/cuda/</span>lib64/<br>sudo chmod a+r <span class="hljs-regexp">/usr/</span>local<span class="hljs-regexp">/cuda/i</span>nclude/cudnn.h<br>sudo chmod a+r <span class="hljs-regexp">/usr/</span>local<span class="hljs-regexp">/cuda/</span>lib64/libcudnn*<br></code></pre></td></tr></table></figure>
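<p>One caveat about the copy step, assuming GNU coreutils: the <code>cuda/lib64</code> directory of the cuDNN archive typically contains symbolic links (such as <code>libcudnn.so</code> pointing at the versioned library), and a plain <code>cp</code> follows each link and duplicates the file. Adding <code>-P</code> copies the links as links; the sketch below demonstrates this on dummy files (<code>libdummy*</code> are made-up stand-ins):</p>

```shell
# Build a throwaway source tree with a file and a symlink to it
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo data > "$tmp/src/libdummy.so.7"        # stand-in for the real library
ln -s libdummy.so.7 "$tmp/src/libdummy.so"  # stand-in for the version symlink
# -P (--no-dereference) copies the symlink as a symlink
cp -P "$tmp/src/"libdummy* "$tmp/dst/"
ls -l "$tmp/dst"
```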
<p><strong>3. Check the cuDNN version</strong><br>Query the version with,</p>
<figure class="highlight gradle"><table><tr><td class="code"><pre><code class="hljs gradle">cat <span class="hljs-regexp">/usr/</span>local<span class="hljs-regexp">/cuda/i</span>nclude/cudnn.h | <span class="hljs-keyword">grep</span> CUDNN_MAJOR -A <span class="hljs-number">2</span><br></code></pre></td></tr></table></figure>
<p>If the corresponding CUDNN_MAJOR lines are printed, the installation succeeded.</p>
<h5 id="检查cuda和cudnn是否都安装成功"><a href="#检查cuda和cudnn是否都安装成功" class="headerlink" title="Verify that both CUDA and cuDNN are installed"></a>Verify that both CUDA and cuDNN are installed</h5><p>Run the following commands,</p>
<figure class="highlight ebnf"><table><tr><td class="code"><pre><code class="hljs ebnf"><span class="hljs-attribute">modprobe nvidia</span><br><span class="hljs-attribute">nvidia-smi</span><br></code></pre></td></tr></table></figure>
<p>If the list of GPU information appears, everything is installed successfully.</p>
<h5 id="常见错误"><a href="#常见错误" class="headerlink" title="Common errors"></a>Common errors</h5><ol>
<li>If the system can no longer boot into Ubuntu, run the following command to purge the installed driver,<figure class="highlight routeros"><table><tr><td class="code"><pre><code class="hljs routeros">sudo apt-<span class="hljs-built_in">get</span> purge nvidia*<br></code></pre></td></tr></table></figure>
then run <code>sudo reboot</code>. If the problem persists, reinstall CUDA following the steps above.</li>
</ol>
<h5 id="参考资料"><a href="#参考资料" class="headerlink" title="References"></a>References</h5><ol>
<li><a href="https://www.pythonf.cn/read/110763">https://www.pythonf.cn/read/110763</a></li>
<li><a href="https://www.jianshu.com/p/89f1ab962d75">https://www.jianshu.com/p/89f1ab962d75</a></li>
<li><a href="https://blog.csdn.net/weixin_42279044/article/details/83181686">https://blog.csdn.net/weixin_42279044/article/details/83181686</a></li>
<li><a href="https://blog.csdn.net/lihe4151021/article/details/90237681">https://blog.csdn.net/lihe4151021/article/details/90237681</a></li>
<li><a href="https://blog.csdn.net/qq_45049586/article/details/104582468">https://blog.csdn.net/qq_45049586/article/details/104582468</a></li>
<li><a href="https://blog.csdn.net/qq_41915226/article/details/103051602">https://blog.csdn.net/qq_41915226/article/details/103051602</a></li>
</ol>
]]></content>
<categories>
<category>经验技巧</category>
</categories>
<tags>
<tag>CUDA</tag>
<tag>GPU</tag>
</tags>
</entry>
<entry>
<title>【书山有路-10】好情绪养成手册</title>
<url>/posts/5bf6f313/</url>
<content><![CDATA[<p>Let’s read <a href="https://book.douban.com/subject/36407267/">《好情绪养成手册》</a> together.</p>
<span id="more"></span>
<p><img src="/../images/hqxycsc.jpg" alt="好情绪养成手册"></p>
<p>How are good emotions cultivated? Emotions, rest, exercise, sleep, and diet together build the edifice of health. Let’s walk into this book 📖 together.</p>
<p>This is a handbook of self-acceptance and emotional healing that rewards repeated reading. As with <a href="https://xiepeng21.cn/posts/70e570a9/">《少有人走的路》</a>, the author, Sophie Mort (UK), writes with real sincerity; her psychological knowledge and experience are rich, her advice is highly practical, and the writing is careful throughout. Recommended.</p>
<p>Emotions are like a river that never stops flowing: sometimes surging waves, sometimes a thin, steady stream. They cannot be suppressed, only channeled. I recommend this book; love yourself a little more, and sleep a little more soundly.</p>
<p>Highlights:</p>
<ul>
<li><p>What makes a tree grow faster is not its owner bossing it around, but the owner’s attentive care. Your growth is the same. ————Dr. Jenn Hardy</p>
</li>
<li><p>Our inner critic often belittles our worth and points out every mistake we have made, yet it never offers us useful advice, nor any way to improve or grow.</p>
</li>
</ul>
<p>Feel free to share in the comments how you understand emotions and what reading this book brought you. You can also recommend interesting, rewarding books, films, or music 📚 you have encountered recently. Let’s grow together!</p>
<hr>
<p>Suggestions and questions are always welcome; let’s communicate, learn, and grow together.</p>
<p>“问渠那得清如许?为有源头活水来” ヾ(◍°∇°◍)ノ゙</p>
<p>——— 我在半亩方塘等你 ^_^</p>
]]></content>
<categories>
<category>随笔感悟</category>
</categories>
<tags>
<tag>书山有路</tag>
<tag>读书笔记</tag>
<tag>情绪</tag>
<tag>心理学</tag>
</tags>
</entry>
<entry>
<title>【书山有路-13】少有人走的路8:寻找石头</title>
<url>/posts/559bb2b0/</url>
<content><![CDATA[<p>Let’s read <a href="https://book.douban.com/subject/35232776/">《少有人走的路8: 寻找石头》</a> together. This is the eighth and final volume of the 《少有人走的路》 (The Road Less Traveled) series. It records the author’s final reflections on money, marriage, children, health, and death. There are no standard answers: it is an endless journey of exploration, and the answers each person arrives at are unlikely to be the same.</p>
<span id="more"></span>
<p><img src="/../images/xzst.jpg" alt="少有人走的路8: 寻找石头"></p>
<p>Together with <a href="https://xiepeng21.cn/posts/70e570a9/">《少有人走的路:心智成熟的旅程》</a>, <a href="https://xiepeng21.cn/posts/117e144d/">《少有人走的路2:勇敢地面对谎言》</a>, <a href="https://xiepeng21.cn/posts/ca5ef243/">《少有人走的路3:与心灵对话》</a>, <a href="https://xiepeng21.cn/posts/2cb15dfa/">《少有人走的路4:在焦虑的年代获得精神的成长》</a>, <a href="https://xiepeng21.cn/posts/7a6e1cf9/">《少有人走的路5:不一样的鼓声》</a>, <a href="https://xiepeng21.cn/posts/fd28cad8/">《少有人走的路6:真诚是生命的药》</a>, and <a href="https://xiepeng21.cn/posts/76208d8e/">《少有人走的路7:靠窗的床》</a>, this book completes the 《少有人走的路》 (The Road Less Traveled) series.</p>
<p>Below are some excerpts and reflections from the book that moved me and left a deep impression. You are also welcome to leave the passages that touched you, along with your own thoughts, in the comments.</p>
<p>Setting out on life’s journey in search of stones: not just curious little pebbles, but also stepping stones that keep one from slipping. Being a lover of stones myself, searching for them is one way of letting my curiosity stretch out into the wide world.</p>
<p>Stones are silent, yet in themselves they already speak of everything in the world.</p>
<p>Carl Jung traced the root of human evil to the “refusal to face the shadow.” Here the “shadow” is defined as the part of ourselves containing the traits we would rather not have. We try to hide this part not only from others but also from ourselves, forever sweeping it under the rug of consciousness like dirt. The “refusal” Jung speaks of is an active resistance, far stronger than the passive resistance we usually put up against criticism.</p>
<p>Although I am no longer absolutely sure about many things, it is precisely this uncertainty and confusion that have made me more at ease, and even something I have begun to enjoy.</p>
<hr>
<p>The year is drawing to a close, and 2025 is almost here.</p>
<p>Over the past year I read a number of books; here is a record of them. I hope the new year, too, will be spent in the company of books and steady growth.</p>
<ol>
<li>《<a href="https://book.douban.com/subject/35196138/">穿越小径分岔的花园</a>》: belonging too strictly to any single discipline limits one’s intellectual independence.</li>
<li>《<a href="https://book.douban.com/subject/36579580/">每个人都有自己的职场优势</a>》: finding your strengths is not about becoming a better self, but about becoming yourself, better.</li>
<li>《<a href="https://book.douban.com/subject/36491854/">意识机器</a>》: can machines be conscious? What is the relationship between consciousness and intelligence? This book lifts the veil on these questions.</li>
<li>《<a href="https://book.douban.com/subject/26769136/">为什么长大?</a>》: this book answers questions that children ask and most adults assume have long been settled, such as how to find meaning and how to create a life of one’s own.</li>
<li>《<a href="https://book.douban.com/subject/34887257/">活法</a>》: fate and karma are interwoven; hold on to kindness, and act in the world with an altruistic heart.</li>
<li>《<a href="https://book.douban.com/subject/23780225/">能断</a>》: “All appearances are illusory.”</li>
<li>《<a href="https://book.douban.com/subject/36407267/">好情绪养成手册</a>》: a handbook of self-acceptance and emotional healing that rewards repeated reading.</li>
<li>《<a href="https://book.douban.com/subject/27084650/">人生很短,做一个有趣的人</a>》: this was my bedside book; a little each night, and before I knew it, I had finished.</li>
<li>《<a href="https://book.douban.com/subject/20424526/">邓小平时代</a>》: both an architect and a doer.</li>
<li>《<a href="https://book.douban.com/subject/30175059/">李光耀观天下</a>》: a pragmatist of great vision and perspective, devoted to building and development.</li>
<li>《<a href="https://book.douban.com/subject/35899532/">有解:高效解决问题的关键7步</a>》: a book on solving problems at both the strategic and the tactical level, with strong practical guidance.</li>
<li>《<a href="https://book.douban.com/subject/24257486/">局外人</a>》: when the environment is unfit for a normal person to live in, normality itself becomes abnormality. Value your own feelings and emotions, trust your own judgment, and bravely live as yourself: you are your own spokesperson.</li>
<li>《<a href="https://book.douban.com/subject/36357804/">为什么伟大不能被计划</a>》: when the stepping stones to a goal can be identified, planning is wise; in unknown territory, innovation without preset goals is what matters.</li>
<li>《<a href="https://book.douban.com/subject/30297006/">我是我认识的最聪明的人</a>》: my first autobiography of a biophysicist. Ivar Giaever is an interesting man, and his eventful life makes for a good story.</li>
<li>《<a href="https://book.douban.com/subject/36336313/">深度关系</a>》: from building trust to mutual achievement; may deep relationships help us understand ourselves better and grow together with others.</li>
</ol>
<hr>
<p>Suggestions and questions are always welcome; let's communicate, learn, and grow together.</p>
<p>"Ask how the channel stays so clear? Because living water flows from its source." ヾ(◍°∇°◍)ノ゙</p>
<p>——— I'll be waiting for you at the half-acre pond ^_^</p>
]]></content>
<categories>
<category>随笔感悟</category>
</categories>
<tags>
<tag>书山有路</tag>
<tag>读书笔记</tag>
<tag>少有人走的路</tag>
</tags>
</entry>
<entry>
<title>【书山有路-15】The Art of Thinking</title>
<url>/posts/1ad8d428/</url>
<content><![CDATA[<p>Let's read <a href="https://book.douban.com/subject/20480879/">The Art of Thinking</a> together. The book approaches its subject from four angles: <em>understanding thinking</em>, <em>creativity</em>, <em>critical thinking</em>, and <em>communicating ideas</em>. It is a systematic guide to methods of thinking: it teaches us how to think, frees us from dependence on other people's ideas, and helps us form our own thoughts and sound judgments.</p>
<span id="more"></span>
<p><img src="/../images/skdys.jpg" alt="The Art of Thinking"></p>
<h2 id="思考与决策的重要性"><a href="#思考与决策的重要性" class="headerlink" title="思考与决策的重要性"></a>The Importance of Thinking and Decision-Making</h2><p>How we analyze and solve <strong>problems</strong>, and how we make better <strong>decisions</strong>, is closely tied to how we <strong>think</strong>.</p>
<p>Innovative thinking arises in three stages: <strong>expressing and articulating problems and viewpoints</strong>, <strong>investigating the problem</strong>, and <strong>generating ideas</strong>.</p>
<h2 id="高效能思考者的特点"><a href="#高效能思考者的特点" class="headerlink" title="高效能思考者的特点"></a>Characteristics of Highly Effective Thinkers</h2><p>Highly effective thinkers do not merely <strong>judge</strong> others' efforts while taking no <strong>action</strong> themselves; they solve problems, make decisions, and take a stand in controversies.</p>
<p>The philosopher <strong>Schopenhauer</strong> said: every person takes the limits of his own field of vision for the limits of the world.</p>
<h2 id="解决问题的知识和能力"><a href="#解决问题的知识和能力" class="headerlink" title="解决问题的知识和能力"></a>Knowledge and Skills for Solving Problems</h2><p>Successfully analyzing controversies and solving problems requires <strong>factual knowledge</strong>: familiarity with the historical background of the problem and an understanding of the principles and concepts involved.</p>
<p>To become an excellent <strong>problem solver</strong>, you need not only <strong>factual knowledge</strong> but also <strong>rigorous thinking</strong>.</p>
<h2 id="组织中的挑战与机遇"><a href="#组织中的挑战与机遇" class="headerlink" title="组织中的挑战与机遇"></a>Challenges and Opportunities in Organizations</h2><p>Even well-run organizations can see their fortunes shift rapidly and be forced to <strong>streamline operations</strong> and <strong>cut staff</strong>.</p>
<p>Meeting challenges and seizing the opportunities of the global economy requires <strong>creative</strong> and <strong>critical thinking</strong> skills.</p>
<h2 id="大脑功能与思考"><a href="#大脑功能与思考" class="headerlink" title="大脑功能与思考"></a>Brain Function and Thinking</h2><p>The brain's <strong>right hemisphere</strong> chiefly handles nonverbal, symbolic, and intuitive responses, while the <strong>left hemisphere</strong> handles language use, logical reasoning, analysis, and the execution of sequential tasks.</p>
<p>As Roger W. Sperry noted in his Nobel Prize lecture: under normal conditions the two hemispheres work closely together as <strong>one integrated whole</strong>, rather than one hemisphere working while the other sits idle.</p>
<h2 id="创造性思维的优势"><a href="#创造性思维的优势" class="headerlink" title="创造性思维的优势"></a>The Advantages of Creative Thinking</h2><p>Good thinkers produce more and better ideas than poor thinkers. They skillfully apply a variety of creative techniques, which makes it easier for them to <strong>discover problems</strong>. They test their first impressions, make important distinctions, and draw conclusions on the basis of <strong>evidence</strong> rather than relying on <strong>intuition</strong> alone.</p>
<h2 id="个性化需求与专注力"><a href="#个性化需求与专注力" class="headerlink" title="个性化需求与专注力"></a>Individual Needs and Concentration</h2><p>No two people's <strong>needs</strong> are exactly alike; what works for one person may not work for another.</p>
<p>Staying fully focused means that when our mind wanders, we can turn our <strong>attention</strong> back to our <strong>purpose in thinking</strong> and to the <strong>problem</strong> at hand.</p>
<hr>
<p>Suggestions and questions are always welcome; let's communicate, learn, and grow together.</p>
<p>"Ask how the channel stays so clear? Because living water flows from its source." ヾ(◍°∇°◍)ノ゙</p>
<p>——— I'll be waiting for you at the half-acre pond ^_^</p>
]]></content>
<categories>
<category>随笔感悟</category>
</categories>
<tags>
<tag>书山有路</tag>
<tag>读书笔记</tag>
<tag>思考的艺术</tag>
</tags>
</entry>
<entry>
<title>【书山有路-11】The Road Less Traveled 6: Sincerity Is the Medicine of Life</title>
<url>/posts/fd28cad8/</url>
<content><![CDATA[<p>Let's read <a href="https://book.douban.com/subject/30428396/">The Road Less Traveled 6: Sincerity Is the Medicine of Life</a> together. The sixth book in The Road Less Traveled series, it focuses on what sincerity does for our relationships, touching on family education, marriage, careers, and more. This is also a fitting occasion to share a book I recently finished, <a href="https://book.douban.com/subject/36336313/">Deep Relationships</a>, which offers many case studies and shows how to maintain good relationships of different kinds, deepen mutual understanding, and repair damaged ones; a very practical book, well worth reading.</p>
<span id="more"></span>