
Reinforcement Learning (11)

Concurrent Training paper review (Concurrent Training of a Control Policy and a State Estimator for Dynamic and Robust Legged Locomotion) A simple end-to-end locomotion learning framework that concurrently trains a control policy and a state estimator. Introduction: conventional quadruped locomotion controllers take a carefully precomputed state estimate as input. However, such a conventional state estimato.. 2024. 8. 1.
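A minimal sketch of the framework's core idea, with hypothetical module names and dimensions: a state estimator is regressed against privileged ground truth from the simulator while, in the same update, an RL policy consumes the estimator's output. The RL term below is a placeholder; the paper trains the policy with an actor-critic method.

```python
# Hedged sketch: concurrent supervised estimator + RL policy update.
# OBS_DIM/EST_DIM/ACT_DIM and both networks are illustrative, not the paper's.
import torch
import torch.nn as nn

OBS_DIM, EST_DIM, ACT_DIM = 48, 3, 12  # hypothetical dimensions

estimator = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ELU(), nn.Linear(128, EST_DIM))
policy = nn.Sequential(nn.Linear(OBS_DIM + EST_DIM, 128), nn.ELU(), nn.Linear(128, ACT_DIM))
opt = torch.optim.Adam(list(estimator.parameters()) + list(policy.parameters()), lr=3e-4)

obs = torch.randn(256, OBS_DIM)         # proprioceptive observations (fake batch)
true_state = torch.randn(256, EST_DIM)  # privileged simulator ground truth

est = estimator(obs)                            # estimated state
action = policy(torch.cat([obs, est], dim=-1))  # policy sees obs + estimate
est_loss = nn.functional.mse_loss(est, true_state)  # supervised estimator loss
rl_loss = action.pow(2).mean()                      # stand-in for the actor-critic loss
opt.zero_grad()
(est_loss + rl_loss).backward()  # one concurrent gradient step for both modules
opt.step()
```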
GenLoco paper review (GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots) GenLoco GitHub: https://github.com/HybridRobotics/GenLoco Introduction: as quadruped robots see growing use in research and industry, the need for locomotion controllers that work across many different robots has grown with it. Conventional locomotion controllers are robot-specific models, and while reinforcement-learning-based controllers are in theory applicable to any robot, in practice the idiosyncrasies of their reward formulations make them work properly on only a single robot. In this paper, similar morph.. 2024. 7. 24.
Stanford CS234 Lecture 10 Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 10 Continuing our discussion of Updating Parameters Given the Gradient. Local Approximation: we couldn't compute the equation above because we had no idea what $\tilde{\pi}$ is. So, as an approximation, we replace that term with the previous policy: we take policy $\pi^i$, run it out, collect $D$ trajectories, and use them to obtain the distribution .. 2022. 8. 12.
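A hedged sketch of that local approximation: since we cannot evaluate the objective under the new policy's own state distribution, we pool per-step data from the $D$ trajectories sampled with the previous policy $\pi^i$ and estimate a surrogate with importance weights $\pi^\theta(a|s)/\pi^i(a|s)$. All names and the fake data are illustrative.

```python
# Monte Carlo estimate of the surrogate L(theta) from trajectories
# sampled under the previous policy pi_i.
import numpy as np

def surrogate_objective(advantages, logp_new, logp_old):
    """advantages: per-step advantage estimates pooled over D trajectories;
    logp_new/logp_old: log pi_theta(a|s) and log pi_i(a|s) at those steps."""
    ratio = np.exp(logp_new - logp_old)  # importance weight pi_theta / pi_i
    return np.mean(ratio * advantages)

# fake per-step data standing in for D sampled trajectories
adv = np.random.randn(1000)
logp_old = -1.0 + 0.1 * np.random.randn(1000)
logp_new = logp_old + 0.01 * np.random.randn(1000)  # nearby policy
print(surrogate_objective(adv, logp_new, logp_old))
```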
Stanford CS234 Lecture 9 Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 9 Continuing the discussion of gradient descent: recall that our goal is to converge as fast as possible to a local optimum. We want each policy update to be a monotonic improvement → guaranteed to converge (empirically) → we simply don't want to get fired... Recall that last time we expressed the gradient of the value function as below; this term is unbiased but .. 2022. 8. 11.
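For concreteness, a minimal sketch of the likelihood-ratio (score-function) gradient the lecture writes down: it is unbiased but typically high-variance, which is what motivates baselines and the trust-region machinery discussed next. The discrete-action setup and shapes here are made up for illustration.

```python
# REINFORCE-style loss whose gradient is the unbiased estimator
# E[ sum_t grad log pi_theta(a_t|s_t) * G_t ].
import torch

def reinforce_loss(logits, actions, returns):
    """logits: (T, A) policy outputs; actions: (T,) taken actions;
    returns: (T,) return-to-go G_t. Minimizing this ascends the objective."""
    logp = torch.log_softmax(logits, dim=-1)
    logp_a = logp[torch.arange(len(actions)), actions]  # log pi(a_t|s_t)
    return -(logp_a * returns).mean()

logits = torch.randn(50, 4, requires_grad=True)  # one fake trajectory, 4 actions
actions = torch.randint(0, 4, (50,))
returns = torch.randn(50)                        # stand-in for G_t
reinforce_loss(logits, actions, returns).backward()
```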
Stanford CS234 Lecture 8 Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 8 Policy-Based Reinforcement Learning. Recall the last lecture, where we learned to find the state value ($V$) or state-action value ($Q$) under a parameter $w$ (or $\theta$) and used it to build a (hopefully) optimal policy ($\pi$). Today we take the policy $\pi^\theta$ itself as the parameterized object: $$ \pi^\theta(s,a)=P[a|s;\theta] $$ our goal is to find a policy .. 2022. 8. 11.
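One concrete instance of $\pi^\theta(s,a)=P[a|s;\theta]$, as a sketch: a softmax policy over discrete actions with a linear score in state features. The feature map and dimensions are hypothetical, not from the lecture.

```python
# Softmax policy: theta parameterizes a distribution over actions per state.
import numpy as np

def softmax_policy(theta, s_features):
    """theta: (A, d) weights; s_features: (d,) state features.
    Returns P[a | s; theta] for every action a."""
    scores = theta @ s_features
    scores -= scores.max()       # numerical stability before exponentiating
    p = np.exp(scores)
    return p / p.sum()

theta = np.random.randn(4, 8)    # 4 actions, 8 features (hypothetical)
probs = softmax_policy(theta, np.random.randn(8))
a = np.random.choice(4, p=probs)  # sample an action from pi^theta
```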
Stanford CS234 Lecture 7 Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 7 Imitation Learning: there are occasions where rewards are sparse in time or each iteration is super expensive → autonomous-driving kind of stuff. So we summon an expert to demonstrate trajectories. Our problem setup: we will talk about the three methods below and their goals... Behavioral Cloning: learn directly from the teacher's policy. In.. 2022. 8. 9.
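Behavioral cloning reduces to plain supervised learning: fit a classifier from states to the expert's actions on demonstration pairs $(s, a^*)$. The dataset and network below are stand-ins for illustration, not the lecture's.

```python
# Behavioral cloning as supervised learning on expert (state, action) pairs.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

expert_s = torch.randn(512, 8)          # demonstrated states (fake data)
expert_a = torch.randint(0, 4, (512,))  # expert's chosen actions

for _ in range(100):  # ordinary classification loop, nothing RL-specific
    loss = nn.functional.cross_entropy(policy(expert_s), expert_a)
    opt.zero_grad()
    loss.backward()
    opt.step()
```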