[DL Reading Group] Efficient Reinforcement Learning via Relabeling Based on Hindsight Experience Replay


March 06, 2020

Slide Summary

2020/03/06
Deep Learning JP:
http://deeplearning.jp/seminar-2/


Text on Each Page
1.

Efficient Reinforcement Learning via Relabeling Based on Hindsight Experience Replay
2020.03.06 Presenter: Tatsuya Matsushima @__tmats__, Matsuo Lab

2.

About This Presentation

Reinforcement learning with sparse rewards:
• Long sequences are needed before any reward is obtained (long horizon)
• The cost of exploring policies is large, making the problem hard
• In the extreme case, the environment yields only a binary reward (success/failure) at the very end of an episode
• Example: manipulation with a robot arm, ...

Commonly used approaches:
• Rewrite the collected data and attach new labels
• Hindsight Experience Replay (HER): relabeling the goals in the training data
• Using demonstrations without reward labels (imitation learning)

3.

About This Presentation

Recently released papers using these ideas:

Using HER:
1) Generalized Hindsight for Reinforcement Learning
• https://arxiv.org/abs/2002.11708, https://sites.google.com/view/generalized-hindsight
• 2020/2/26, authors include Pieter Abbeel
2) Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
• https://arxiv.org/abs/2002.11089
• 2020/2/25, authors include Sergey Levine

Using demos (omitted today):
3) Learning Latent Plans from Play (CoRL2019)
• https://arxiv.org/abs/1903.01973, https://learning-from-play.github.io/
• CoRL2019, authors include Sergey Levine
4) Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning (CoRL2019)
• https://arxiv.org/abs/1910.11956, https://relay-policy-learning.github.io/
• CoRL2019, authors include Sergey Levine

4.

(Same content as page 3.)

5.

Hindsight Experience Replay (HER)

Hindsight Experience Replay
• https://arxiv.org/abs/1707.01495
• Experience replay that uses hindsight in goal-conditioned reinforcement learning
• When the task is not achieved, a goal for which the actions actually taken would have been meaningful is set after the fact, and the episode is included in training
• Example: besides the original goal, treat the final state of each episode as if it had been the goal
• DL reading group slides (Nakamura-san): https://www.slideshare.net/DeepLearningJP2016/dlhindsight-experience-replay
• Pieter Abbeel's NIPS 2017 talk: https://www.youtube.com/watch?v=TyOooJC_bLY

6.

Hindsight Experience Replay (HER)
• Use policy and Q-function models conditioned on the goal
• In HER, training uses data whose goal and reward have been rewritten (relabeled)
Source: Pieter Abbeel's NIPS 2017 talk slides

7.

Hindsight Experience Replay (HER)
• Not only the original episodes but also transitions whose goal and reward have been relabeled are added to the replay buffer
• Many choices are possible for the goal-selection strategy 𝕊 (discussed later)
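To make the relabeling concrete, here is a minimal sketch (not the paper's code) of the "final-state" strategy: each transition is stored twice, once as collected and once with the episode's final achieved state substituted as the goal and the reward recomputed. The transition layout and `compute_reward` are illustrative assumptions.

```python
import numpy as np

def compute_reward(achieved, goal, eps=0.05):
    """Sparse binary reward: success iff the achieved state lies within
    eps of the goal (an illustrative stand-in for the env's reward)."""
    return 0.0 if np.linalg.norm(achieved - goal) < eps else -1.0

def her_relabel_final(episode, replay_buffer):
    """Append the original transitions plus copies relabeled with the
    episode's final achieved state as the goal (HER 'final' strategy)."""
    hindsight_goal = episode[-1]["achieved"]  # the state we actually reached
    for t in episode:
        replay_buffer.append(t)  # original transition, original goal
        relabeled = dict(t, goal=hindsight_goal,
                         reward=compute_reward(t["achieved"], hindsight_goal))
        replay_buffer.append(relabeled)  # hindsight transition
```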

8.

Hindsight Experience Replay (HER) — figure from the DL reading group slides (Nakamura-san): https://www.slideshare.net/DeepLearningJP2016/dlhindsight-experience-replay

9.

Hindsight Experience Replay (HER) — figure from the DL reading group slides (Nakamura-san): https://www.slideshare.net/DeepLearningJP2016/dlhindsight-experience-replay

10.

Hindsight Experience Replay (HER) — figure from the DL reading group slides (Nakamura-san): https://www.slideshare.net/DeepLearningJP2016/dlhindsight-experience-replay

11.

Hindsight Experience Replay (HER) — figure from the DL reading group slides (Nakamura-san): https://www.slideshare.net/DeepLearningJP2016/dlhindsight-experience-replay

12.

The Recent Idea Behind the Papers Introduced Here

HER was a method that relabels goals represented as states. So that the idea can also be used for problems where the goal is not defined as a state, work has started to appear that uses inverse reinforcement learning (IRL) to relabel rewards in hindsight.
• As a result, the approach becomes applicable to multi-task RL

1) Generalized Hindsight for Reinforcement Learning
• https://arxiv.org/abs/2002.11708 (2020/2/26)
2) Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
• https://arxiv.org/abs/2002.11089 (2020/2/25)

Papers in the same direction appeared one day apart from the same university (UC Berkeley) (coincidence?)
• The latter seems (to me) to propose the broader framework

13.

① Generalized Hindsight for Reinforcement Learning

Generalized Hindsight for Reinforcement Learning
• Alexander C. Li, Lerrel Pinto, Pieter Abbeel
• Submitted on 26 Feb 2020
• arXiv: https://arxiv.org/abs/2002.11708
• website: https://sites.google.com/view/generalized-hindsight
• Proposes Generalized Hindsight, which uses IRL to extend HER so that it can be applied to multi-task RL

14.

① Generalized Hindsight for Reinforcement Learning

Limitations of HER
• HER is a method for goal-conditioned RL
• The goal must be expressed as a state, and this setting is not general

Goal of this paper
• In multi-task RL settings with shared dynamics but different reward functions, exploit the HER-style idea to improve sample efficiency
• MDPs whose reward function is written r(· | z), with task distribution 𝒯 and task variable z ∼ 𝒯

15.

① Generalized Hindsight for Reinforcement Learning

Hindsight Relabeling
• Let 𝕊 denote the strategy for choosing the hindsight task variable
• Proposes Approximate IRL relabeling (AIR) and Advantage relabeling
• As is often the case in RL, the reward function itself is (presumably) treated as known
• That is, r is available for arbitrary (s, a, z)
• Setting r(s, a | z = g) = 𝟙[d(s, g) < ε] recovers HER
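To see the reduction, the goal-indicator reward can be written out directly; a minimal sketch, where the distance function `d` and the threshold `eps` are illustrative assumptions rather than the paper's code:

```python
import numpy as np

def indicator_reward(s, a, z, eps=0.05):
    """r(s, a | z = g) = 1[d(s, g) < eps]: when the task variable z is
    just a goal state, Generalized Hindsight collapses to HER."""
    return float(np.linalg.norm(s - z) < eps)  # d(s, g) taken as Euclidean distance
```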

16.

① Generalized Hindsight for Reinforcement Learning

Relabeling method 1: Approximate IRL relabeling (AIR), 𝕊_IRL
• Sample K tasks {v_j}_{j=1}^K from the task distribution 𝒯. Within the set 𝒟 of N trajectories that contains each trajectory τ, return the m task variables for which the percentile P̂(τ, v_j) of that trajectory's return R(τ, v_j) is highest
• For large K, several tasks may end up with the same percentile, so in practice the advantage Â(τ, z) = R(τ | z) − V^π(s_0, z) is used instead of the return
• This counts as IRL because IRL can, in its simplest form, be seen as the problem of finding an r* that satisfies 𝔼[∑_{t=0}^{T−1} γ^t r*(s_t) | π_E] ≥ 𝔼[∑_{t=0}^{T−1} γ^t r*(s_t) | π] for all π
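A minimal sketch of AIR as described above, assuming a `task_dist.sample()` interface and a `compute_return(traj, task)` helper (both illustrative, not the paper's code):

```python
import numpy as np

def air_relabel(traj, all_trajs, task_dist, compute_return, K=100, m=5):
    """Approximate IRL relabeling (AIR), sketched: return the m candidate
    tasks under which `traj` ranks highest, by return percentile, among
    the N trajectories in `all_trajs`."""
    candidates = [task_dist.sample() for _ in range(K)]
    percentiles = []
    for v in candidates:
        returns = np.array([compute_return(tau, v) for tau in all_trajs])
        percentiles.append(np.mean(returns <= compute_return(traj, v)))
    best = np.argsort(percentiles)[-m:]  # indices of the m highest percentiles
    return [candidates[i] for i in best]
```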

17.

① Generalized Hindsight for Reinforcement Learning

Relabeling method 2: Advantage relabeling, 𝕊_A
• AIR is computationally expensive (𝒪(NT))
• Instead, return the m task variables with the largest advantage Â(τ, z) = R(τ | z) − V^π(s_0, z)
• Empirically this worked well
• With SAC, V^π(s, z) = min(Q_1(s, π(s | z), z), Q_2(s, π(s | z), z))
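Advantage relabeling drops the percentile computation and only needs one value-function evaluation per candidate task. A sketch under the same illustrative assumptions as above, with the SAC-style value from the slide:

```python
import numpy as np

def sac_value(s, z, policy, q1, q2):
    """V^pi(s, z) = min(Q1(s, pi(s|z), z), Q2(s, pi(s|z), z)), as in SAC."""
    a = policy(s, z)
    return min(q1(s, a, z), q2(s, a, z))

def advantage_relabel(traj, task_dist, compute_return, value_fn, K=100, m=5):
    """Return the m candidate tasks maximizing A(traj, z) = R(traj|z) - V(s0, z)."""
    s0 = traj[0]["state"]
    candidates = [task_dist.sample() for _ in range(K)]
    adv = np.array([compute_return(traj, z) - value_fn(s0, z) for z in candidates])
    return [candidates[i] for i in np.argsort(adv)[-m:]]
```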

18.

① Generalized Hindsight for Reinforcement Learning

Experiments
• In (a) and (b), the reward heatmap differs across tasks
• (c) combines a reward based on the position of the hand's fingertip with energy and safety rewards; the rewarded position and the weight of each term differ across tasks
• (d) uses rewards for speed, heading, height, and energy consumption, with different weights per task
• (e) varies the reward with the correctness of the direction of travel

19.

① Generalized Hindsight for Reinforcement Learning

Results
• AIR and Advantage relabeling outperformed the other baselines in both sample efficiency and final return
• The IU baseline selects tasks at random

20.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
• Benjamin Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov
• Submitted on 25 Feb 2020
• arXiv: https://arxiv.org/abs/2002.11089
• Proposes Hindsight Inference for Policy Improvement (HIPI), which uses IRL to extend HER so that it can be applied to multi-task RL
• The motivation is the same as ①
• Combines MaxEnt RL and MaxEnt IRL; this is where it differs from ①
• (It even uses the same figure... though this paper was released first)

21.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Notation from here on
• Multi-task RL: the reward function r_ψ(s, a) depends on the task variable ψ ∈ Ψ
• This is the same thing that ① denoted z ∼ 𝒯
• Task prior p(ψ)
• Likelihood of a trajectory under policy q: q(τ) = p_1(s_1) ∏_t p(s_{t+1} | s_t, a_t) q(a_t | s_t)

22.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Background: MaxEnt RL (single task)
• Through learning, bring the policy's trajectory distribution q(τ) close to p(τ) ≜ (1/Z) p_1(s_1) ∏_t p(s_{t+1} | s_t, a_t) e^{r(s_t, a_t)}
• Viewed as minimizing the reverse KL between q(τ) and p(τ), the objective becomes the maximization of the entropy-regularized sum of rewards: −D_KL(q ∥ p) = 𝔼_q[∑_t (r_t − log q(a_t | s_t))] − log Z
• The partition function does not depend on the policy, so RL algorithms can ignore it
• Example: Soft Actor-Critic (SAC)
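For intuition, the entropy-regularized objective can be estimated from sampled trajectories; a minimal Monte-Carlo sketch with an assumed `log_pi(a, s)` accessor (the constant log Z is dropped since it does not depend on the policy):

```python
def maxent_objective(trajs, log_pi):
    """Monte-Carlo estimate of E_q[sum_t (r_t - log q(a_t | s_t))],
    the quantity MaxEnt RL maximizes."""
    per_traj = [
        sum(t["reward"] - log_pi(t["action"], t["state"]) for t in tau)
        for tau in trajs
    ]
    return sum(per_traj) / len(per_traj)
```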

23.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Background: MaxEnt IRL
• For task ψ, the likelihood of trajectory τ is taken to be p(τ | ψ) = (1/Z(ψ)) p_1(s_1) ∏_t p(s_{t+1} | s_t, a_t) e^{r_ψ(s_t, a_t)}
• where the partition function is Z(ψ) ≜ ∫ p_1(s_1) ∏_t p(s_{t+1} | s_t, a_t) e^{r_ψ(s_t, a_t)} dτ
• By Bayes' theorem, the task posterior is p(ψ | τ) = p(τ | ψ) p(ψ) / p(τ) ∝ p(ψ) e^{∑_t r_ψ(s_t, a_t) − log Z(ψ)}
• This is hard to compute (an integral over all states and actions), but via MaxEnt RL, log Z(ψ) = max_{q(τ|ψ)} 𝔼_{q(τ|ψ)}[∑_t (r_ψ(s_t, a_t) − log q(a_t | s_t, ψ))]
• Single-task IRL is a framework for recovering the reward function; here the problem is instead posed as recovering ψ
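For a finite task set, this posterior is a softmax over per-task logits. A sketch under that assumption, where `soft_value(psi)` stands in for the MaxEnt-RL estimate of log Z(ψ) (all names illustrative):

```python
import numpy as np

def task_posterior(traj, tasks, log_prior, reward_fn, soft_value):
    """p(psi | tau) ∝ p(psi) exp(sum_t r_psi(s_t, a_t) - log Z(psi)),
    normalized over a discrete task set."""
    logits = np.array([
        log_prior(psi)
        + sum(reward_fn(t["state"], t["action"], psi) for t in traj)
        - soft_value(psi)  # log Z(psi), estimated via the MaxEnt RL identity
        for psi in tasks
    ])
    logits -= logits.max()  # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```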

24.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

MaxEnt RL (multi-task)
• Through learning, bring q(τ, ψ) close to p(τ, ψ) ≜ (1/Z) p_1(s_1) ∏_t p(s_{t+1} | s_t, a_t) e^{r_ψ(s_t, a_t)}
• Factorizing q(τ, ψ) = q(τ | ψ) p(ψ), the task-conditioned policy q(τ | ψ) is obtained by maximizing 𝔼_{ψ∼q(ψ), τ∼q(τ|ψ)}[∑_t (r_ψ(s_t, a_t) − log q(a_t | s_t, ψ))]

25.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Hindsight relabeling via MaxEnt IRL
• Factorizing the multi-task MaxEnt RL policy instead as q(τ, ψ) = q(ψ | τ) q(τ) and considering the relabeling distribution q(ψ | τ), it is obtained by maximizing the same objective
• Solving this gives q(ψ | τ) ∝ p(ψ) e^{∑_t r_ψ(s_t, a_t) − log Z(ψ)}
• This is the same as computing the task posterior with MaxEnt IRL
• Considering a single-step transition instead of the whole trajectory gives q(ψ | s_t, a_t) ∝ p(ψ) e^{Q̃^q(s_t, a_t) − log Z(ψ)}
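The single-transition form suggests a cheap per-transition relabeling rule: score each candidate task with a soft Q-function and sample from the resulting softmax. A sketch, where `soft_q` and `soft_value` are assumed estimates (the task argument to `soft_q` is an assumption about how Q̃ would be parameterized):

```python
import numpy as np

def relabel_transition(s, a, tasks, log_prior, soft_q, soft_value, rng=np.random):
    """Sample psi ~ q(psi | s_t, a_t) ∝ p(psi) exp(Q~(s_t, a_t) - log Z(psi))."""
    logits = np.array([log_prior(p) + soft_q(s, a, p) - soft_value(p) for p in tasks])
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return tasks[rng.choice(len(tasks), p=probs)]
```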

26.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Hindsight Relabeling
• The paper's main claim is that hindsight relabeling is inverse RL
• If the reward function is chosen as in the slide (the equations appear as images and are not in the extracted text), the relabeling distribution reduces to relabeling the goal state exactly as HER does
• This is the same argument as in paper ①

27.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Using hindsight relabeling
• Proposes a method that relabels and then runs RL (HIPI-RL) and one that relabels and then runs behavioral cloning (HIPI-BC)
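A high-level sketch of how the two variants might consume relabeled data; this is an illustration of the split described on the slide, not the authors' code, and every name here is assumed:

```python
def hipi_step(batch, relabel, rl_update=None, bc_update=None):
    """One training step, sketched. `relabel(s, a)` samples a task from
    the inverse-RL relabeling distribution q(psi | s, a)."""
    relabeled = [dict(t, task=relabel(t["state"], t["action"])) for t in batch]
    if rl_update is not None:  # HIPI-RL: off-policy RL on relabeled transitions
        rl_update(relabeled)
    if bc_update is not None:  # HIPI-BC: maximize log pi(a | s, task) on them
        bc_update(relabeled)
```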

28.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Experiments
• As in ①'s experiments, settings where the reward changes with the task variable ψ
• ψ specifies, e.g., the target direction of travel, target coordinates, or target speed
• Some of the tasks are specified by a goal

29.

② Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Results
• On tasks specified by a goal (the setting where a comparison with HER is possible), using IRL improves sample efficiency
• Both HIPI-RL and HIPI-BC improve performance over random relabeling

30.

Summary and Impressions

Summary
• By using IRL, HER can be extended to multi-task learning where rewards are specified by something other than a goal state

Impressions
• Learning efficiently from data across multiple tasks seems like a realistic direction; on the other hand,
• I am not sure how realistic it is for data from multiple tasks to keep arriving online
• Taking this toward offline data might be plausible?
• Are problems where the reward function is known but not goal-conditioned really that common?
• The reward function still needs to be engineered (if it had to be annotated subjectively by humans, that would be hard)
• Perhaps a separate reward-inference model could be built, but...