
Combining vision and tactile sensation for video prediction

Mandil et al., 2023

Document ID
15354611078560013851
Author
Mandil W
Ghalamzan-E A
Publication year
2023
Publication venue
arXiv preprint arXiv:2304.11193

Snippet

In this paper, we explore the impact of adding tactile sensation to video prediction models for physical robot interactions. Predicting the impact of robotic actions on the environment is a fundamental challenge in robotics. Current methods leverage visual and robot action data to …
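To make the snippet's idea concrete, the sketch below shows a minimal action-conditioned video prediction model that fuses visual frames with a tactile signal. It is an illustration of the general approach described in the snippet, not the architecture from Mandil et al.; the module layout, tensor shapes (64x64 frames, 48-dimensional tactile vectors, 7-dimensional actions), and hyperparameters are assumptions chosen only to show how the three modalities can be encoded, fused, and rolled forward with a recurrent core.

```python
# Illustrative sketch only: fuses image, tactile, and action features and
# predicts future frames with a recurrent core. All shapes are assumptions.
import torch
import torch.nn as nn


class VisuoTactilePredictor(nn.Module):
    def __init__(self, tactile_dim=48, action_dim=7, hidden_dim=128):
        super().__init__()
        # Encode each 64x64 RGB frame into a compact feature vector.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, hidden_dim),
        )
        # Tactile readings and robot actions are low-dimensional vectors.
        self.tactile_encoder = nn.Linear(tactile_dim, hidden_dim)
        self.action_encoder = nn.Linear(action_dim, hidden_dim)
        # Recurrent core integrates the fused features over time.
        self.rnn = nn.LSTM(3 * hidden_dim, hidden_dim, batch_first=True)
        # Decode each hidden state back to a 64x64 RGB frame.
        self.image_decoder = nn.Sequential(
            nn.Linear(hidden_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, frames, tactile, actions):
        # frames:  (B, T, 3, 64, 64)   past video frames
        # tactile: (B, T, tactile_dim) synchronized tactile readings
        # actions: (B, T, action_dim)  planned robot actions
        b, t = frames.shape[:2]
        img_feat = self.image_encoder(frames.flatten(0, 1)).view(b, t, -1)
        tac_feat = self.tactile_encoder(tactile)
        act_feat = self.action_encoder(actions)
        fused = torch.cat([img_feat, tac_feat, act_feat], dim=-1)
        hidden, _ = self.rnn(fused)
        # Predict one future frame per input time step.
        return self.image_decoder(hidden.flatten(0, 1)).view(b, t, 3, 64, 64)


if __name__ == "__main__":
    model = VisuoTactilePredictor()
    frames = torch.rand(2, 5, 3, 64, 64)
    tactile = torch.rand(2, 5, 48)
    actions = torch.rand(2, 5, 7)
    print(model(frames, tactile, actions).shape)  # torch.Size([2, 5, 3, 64, 64])
```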

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 Computer-aided design
    • G06F17/5009 Computer-aided design using simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62 Methods or arrangements for recognition using electronic means
    • G06K9/6217 Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 Subject matter not provided for in other groups of this subclass
    • G06N99/005 Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/36 Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K9/46 Extraction of features or characteristics of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computer systems utilising knowledge based models

Similar Documents

Publication / Title
Hua et al. Learning for a robot: Deep reinforcement learning, imitation learning, transfer learning
Kang et al. Real-time fruit recognition and grasping estimation for robotic apple harvesting
US20210390653A1 (en) Learning robotic tasks using one or more neural networks
Sharma et al. Third-person visual imitation learning via decoupled hierarchical controller
Kormushev et al. Reinforcement learning in robotics: Applications and real-world challenges
Shi et al. RoboCraft: Learning to see, simulate, and shape elasto-plastic objects in 3D with graph networks
Kondratenko et al. Machine learning techniques for increasing efficiency of the robot’s sensor and control information processing
Vemuri et al. Enhancing Human-Robot Collaboration in Industry 4.0 with AI-driven HRI
CN111300431B (en) Cross-scene-oriented robot vision simulation learning method and system
Fleer et al. Learning efficient haptic shape exploration with a rigid tactile sensor array
Chen et al. Sign language gesture recognition and classification based on event camera with spiking neural networks
Wang et al. Research on door opening operation of mobile robotic arm based on reinforcement learning
Riedel et al. Hand gesture recognition of methods-time measurement-1 motions in manual assembly tasks using graph convolutional networks
Vianello et al. Latent ergonomics maps: Real-time visualization of estimated ergonomics of human movements
Fan et al. A multi-granularity scene segmentation network for human-robot collaboration environment perception
Shaw et al. Learning dexterity from human hand motion in internet videos
Yang et al. Explicit-to-implicit robot imitation learning by exploring visual content change
Ramachandruni et al. Attentive task-net: Self supervised task-attention network for imitation learning using video demonstration
Mandil et al. Combining vision and tactile sensation for video prediction
Mavsar et al. Simulation-aided handover prediction from video using recurrent image-to-motion networks
Wulfmeier On machine learning and structure for mobile robots
Liao et al. Human hand motion prediction in disassembly operations
Luo et al. Transformer-based vision-language alignment for robot navigation and question answering
Choudhary et al. Spatial and temporal features unified self-supervised representation learning networks
Li et al. Group Sparse Regression‐Based Learning Model for Real‐Time Depth‐Based Human Action Prediction