(ANML) Learning to Continually Learn #6

sarrrrry opened this issue Jan 30, 2021 · 0 comments

In a nutshell

Meta-learning for continual learning. In addition to a standard prediction network, a neuromodulatory network is meta-trained (together they form A Neuromodulated Meta-Learning algorithm, ANML). The prediction network's activations are masked (gated) element-wise by the neuromodulatory network's output, which lies in the [0, 1] range, and this yields selective plasticity automatically.
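
A minimal PyTorch sketch of the gating idea, only to make the mechanism concrete: the MLP layers and sizes below are placeholders (the paper uses small conv nets), and the OML-style inner/outer meta-training loop is omitted.

```python
import torch
import torch.nn as nn

class GatedPredictionNet(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=256, n_classes=10):
        super().__init__()
        # Prediction network feature extractor (a plain MLP stands in here).
        self.features = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Neuromodulatory network that produces the gate from the same input.
        self.neuromod = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, hidden_dim))
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        h = self.features(x)                     # prediction-network features
        gate = torch.sigmoid(self.neuromod(x))   # gating values in (0, 1)
        return self.classifier(h * gate)         # selectively masked features -> logits

model = GatedPredictionNet()
logits = model(torch.randn(8, 784))  # -> tensor of shape (8, 10)
```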

Paper link

https://arxiv.org/pdf/2002.09571.pdf

Authors / Affiliations

Shawn Beaulieu (1), Lapo Frati (1), Thomas Miconi (2), Joel Lehman (2), Kenneth O. Stanley (2), Jeff Clune (2,3), Nick Cheney (1)

(1) University of Vermont, USA, email: {shawn.beaulieu, lapo.frati, ncheney}@uvm.edu
(2) Uber AI Labs, USA, email: {tmiconi, joel.lehman, kstanley}@uber.com
(3) OpenAI, USA, email: [email protected] (current affiliation; work done at Uber AI Labs)
* Co-senior authors

Submission date (yyyy/MM/dd)

2020/02/21

Overview

Novelty / Differences

Method

(figure)

Results

(figure)

Comments

OML is cited both for the loss function and as the experimental baseline, so OML is the next paper to read.
