DOI: 10.1145/1661412.1618516

Robust task-based control policies for physics-based characters

Published: 01 December 2009

Abstract

We present a method for precomputing robust task-based control policies for physically simulated characters. This allows for characters that can demonstrate skill and purpose in completing a given task, such as walking to a target location, while physically interacting with the environment in significant ways. As input, the method assumes an abstract action vocabulary consisting of balance-aware, step-based controllers. A novel constrained state exploration phase is first used to define a character dynamics model as well as a finite volume of character states over which the control policy will be defined. An optimized control policy is then computed using reinforcement learning. The final policy spans the cross-product of the character state and task state, and is more robust than the controllers from which it is constructed. We demonstrate real-time results for six locomotion-based tasks on three highly varied bipedal characters. We further provide a game-scenario demonstration.
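
To make the precomputation concrete: the pipeline described above amounts to sampling a finite set of character-and-task states, building a table of where each (state, action) pair lands after one step, and running value iteration over that table. The sketch below (Python/NumPy) illustrates this idea only; it is not the authors' implementation, and simulate_step, reward, GAMMA, and N_SWEEPS are hypothetical stand-ins for the paper's dynamics model, task reward, and tuning parameters.

    import numpy as np

    GAMMA = 0.9     # discount factor (assumed value)
    N_SWEEPS = 100  # number of value-iteration sweeps (assumed value)

    def nearest(states, s):
        """Index of the sampled state closest to s: a simple nearest-neighbour
        approximation over the finite volume of explored states."""
        return int(np.argmin(np.linalg.norm(states - s, axis=1)))

    def precompute_policy(states, actions, simulate_step, reward):
        """states: (N, D) array of sampled character-and-task states
        actions: list of abstract step-based controllers (the action vocabulary)
        simulate_step(s, a) -> state reached at the next foot-strike
        reward(s, a, s_next) -> scalar task reward (e.g. progress toward a target)
        Returns state values V and a greedy policy (one action index per state)."""
        n = len(states)
        # Tabular dynamics model: where each (state, action) pair lands,
        # snapped back onto the nearest sampled state.
        nxt = np.array([[nearest(states, simulate_step(s, a)) for a in actions]
                        for s in states])
        rew = np.array([[reward(states[i], actions[j], states[nxt[i, j]])
                         for j in range(len(actions))]
                        for i in range(n)])
        V = np.zeros(n)
        for _ in range(N_SWEEPS):
            Q = rew + GAMMA * V[nxt]   # (N, |A|) action values
            V = Q.max(axis=1)
        return V, Q.argmax(axis=1)

At run time, such a policy would be used by observing the character state at each foot-strike, looking up the nearest sampled state, and executing the step-based controller selected for it.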

Supplementary Material

Supplemental material. (170-coros.zip)
The supplementary material contains two demos, runnable under Windows. To run a demo, launch the appropriate .bat file. "No shaders" versions are included to maintain compatibility on a wider range of machines.
1. Bird Mania: birdmania.bat (with shaders), birdmania_no_shaders.bat (no shaders)
2. Bird Knockdown: birdknockdown.bat (with shaders), birdknockdown_no_shaders.bat (no shaders)

    Information

    Published In

    SIGGRAPH Asia '09: ACM SIGGRAPH Asia 2009 papers
    December 2009
    669 pages
    ISBN:9781605588582
    DOI:10.1145/1661412
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 01 December 2009

    Author Tags

    1. animation
    2. simulation of skilled movement

    Qualifiers

    • Research-article

    Conference

    SA09: SIGGRAPH Asia 2009
    December 16-19, 2009
    Yokohama, Japan

    Acceptance Rates

    SIGGRAPH Asia '09 paper acceptance rate: 70 of 275 submissions (25%)
    Overall acceptance rate: 178 of 869 submissions (20%)
