WO2021044751A1 - Information processing device, information processing method, and information processing program - Google Patents
Information processing device, information processing method, and information processing program
- Publication number
- WO2021044751A1 (PCT/JP2020/028134)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- adjacent
- information processing
- robot device
- target object
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
Definitions
- This disclosure relates to an information processing device, an information processing method, and an information processing program.
- Technology related to autonomous robots that autonomously tidy up articles (objects) and change the placement positions of objects according to the situation is known. For example, an object to be operated is recognized by an electronic tag or the like, and an operation on the object is executed (Patent Document 1).
- In the conventional technique, however, the placement position is corrected only when another article already exists at the changed placement position, and the case where an object that has an adjacent object in the state before the change is selected as the operation target is not considered. Therefore, in the above-mentioned conventional technique, an object that is difficult to operate at that time, such as an object whose adjacent object needs to be operated first, may be selected as the operation target.
- the present disclosure proposes an information processing device, an information processing method, and an information processing program that enable appropriate operation on an object even when an adjacent object exists.
- The information processing device according to the present disclosure includes an acquisition unit that acquires image information obtained by imaging a target object, which is a candidate object to be operated, and an adjacent object, which is an object adjacent to the target object,
- a prediction unit that predicts a change in the arrangement state of the adjacent object caused by an operation on the target object based on the image information, and
- an execution unit that executes a process of manipulating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
- FIG. 1 is a diagram showing an example of information processing according to the first embodiment of the present disclosure.
- The information processing according to the first embodiment of the present disclosure is realized by the robot device 100 shown in FIG. 1.
- the robot device 100 is an information processing device that executes information processing according to the first embodiment.
- The robot device 100 is an autonomous robot that has a moving unit 15 with a function for moving its position, and can move to a desired position. Further, the robot device 100 has two operation units (manipulators), a first operation unit 16a and a second operation unit 16b. In the following, when the first operation unit 16a and the second operation unit 16b are described without distinction, they may be described as the "operation unit 16".
- the number of operation units 16 included in the robot device 100 is not limited to two, and may be one or three or more. Details of this point will be described later.
- the robot device 100 is an information processing device that executes a process of manipulating an object based on image information (also simply referred to as an "image") detected (imaged) by an image sensor 141 (see FIG. 3).
- The robot device 100 selects a target object as a candidate for the operation target from the objects in the image, predicts a change in the arrangement state of an adjacent object caused by the operation on the target object, and, when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition, executes a process of manipulating the adjacent object.
- an object in contact with the target object will be described as an example of an object adjacent to the target object, but the adjacent object is not limited to an object in contact with the target object.
- The adjacent object may be an object located within a predetermined range from the target object.
- the adjacent object may be an object located within the range affected by the removal of the target object.
- For example, when the target object is a magnetic object, the adjacent object may be an object located within a range affected by the magnetism of the target object.
- The operation on the target object is not limited to removing the target object; it is a concept that includes various operations that can affect the adjacent object, such as changing the position of the target object and changing the posture of the target object.
- FIG. 1 shows processing starting from a state ST1 in which three objects OB1, OB2, and OB3, which are books, are stacked.
- the robot device 100 takes an image of the state ST1 by the image sensor 141, and acquires an image (hereinafter, may be referred to as “image IM1”) showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked.
- the robot device 100 identifies that the image IM1 includes the objects OB1, OB2, and OB3 by analyzing the image IM1 by a technique such as image analysis.
- the robot device 100 selects an object (target object) that is a candidate for the operation target (step S1). For example, the robot device 100 randomly selects a target object from a group of objects OB1, OB2, and OB3.
- the robot device 100 selects an object OB2 (hereinafter, also referred to as “target object OB2”) as a target object from a group of objects OB1, OB2, and OB3.
- The robot device 100 selects, as the operation target, an object whose weight is estimated to be less than a predetermined threshold value; this point will be described with reference to FIG. 2.
- the robot device 100 recognizes a physical contact state between an object (adjacent object) around the target object OB2 and the target object OB2. By analyzing the image IM1, the robot device 100 recognizes that the target object OB2 is in contact with the object OB1 and the object OB3.
- the robot device 100 removes the target object (step S2).
- the robot device 100 removes the target object OB2.
- the robot device 100 removes the target object OB2 from the image IM1 showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked.
- the robot device 100 performs processing on the state ST2 in which only the target object OB2 is removed from the object group of the objects OB1, OB2, and OB3.
- the robot device 100 predicts a change in the arrangement state of adjacent objects when the target object is removed (step S3).
- the robot device 100 predicts changes in the posture and position of adjacent objects when the target object is removed.
- the robot device 100 predicts the posture and position of an adjacent object by appropriately using various techniques related to physics simulation. For example, the robot device 100 estimates the center of gravity from the shape data (W [width], D [depth], H [height]) of each object based on the image, and detects the direction of gravity. Then, the robot device 100 predicts the posture and position of the adjacent object by the built-in physical model simulator.
- the robot device 100 may perform prediction by any information or method as long as the posture and position of the adjacent object when the target object is removed can be predicted.
- For example, the robot device 100 predicts the posture change amount (posture change prediction value) and the position change amount (position change prediction value) of an adjacent object when the target object is removed.
- the robot device 100 predicts a change in the arrangement state of the objects OB1 and OB3 when the target object OB2 is removed.
- the robot device 100 predicts changes in the posture and position of the object OB1 and changes in the posture and position of the object OB3 when the target object OB2 is removed.
- The robot device 100 predicts that the position and posture of the object OB1 will not change due to the removal of the target object OB2. Since the object OB1 is in a state of supporting the target object OB2, the robot device 100 predicts that the position and posture of the object OB1 will not be changed by removing the target object OB2. For example, the robot device 100 predicts that the posture change predicted value and the position change predicted value of the object OB1 when the target object OB2 is removed are 0.
- The robot device 100 predicts that the position and posture of the object OB3 will change due to the removal of the target object OB2. Since the object OB3 is in a state of being supported by the target object OB2, the robot device 100 predicts that the position and posture of the object OB3 will change due to the removal of the target object OB2. Further, the robot device 100 predicts the amount of change in the position and posture of the object OB3 due to the removal of the target object OB2. For example, the robot device 100 predicts the posture change predicted value and the position change predicted value of the object OB3 when the target object OB2 is removed. In this way, when the target object OB2 is removed from the image IM1, the robot device 100 predicts that the object OB3, which is an adjacent object, has a change in posture or position as shown in the state ST3.
- Then, the robot device 100 determines whether the movement of the surrounding object (adjacent object) is equal to or greater than the threshold value (step S4).
- the robot device 100 determines whether the change in the posture or position of the adjacent object is equal to or greater than the threshold value.
- For example, the robot device 100 uses a posture-related threshold value (posture threshold value) and a position-related threshold value (position threshold value) stored in the storage unit 12 (see FIG. 4) to determine whether the change in the posture or position of an adjacent object is equal to or greater than the threshold value.
- the robot device 100 executes a process related to the operation based on the determination result (step S5).
- For example, the robot device 100 compares the predicted value of the posture change amount of the adjacent object (posture change predicted value) caused by the removal of the target object with the posture threshold value, and executes the process of manipulating the adjacent object when the posture change predicted value is equal to or greater than the posture threshold value. Further, the robot device 100 compares the predicted value of the position change amount of the adjacent object (position change predicted value) caused by the removal of the target object with the position threshold value, and executes the process of manipulating the adjacent object when the position change predicted value is equal to or greater than the position threshold value.
- The robot device 100 may execute the process of manipulating the adjacent object when either the posture change or the position change becomes equal to or greater than the threshold value as described above, or may execute the process of manipulating the adjacent object when both the posture change and the position change become equal to or greater than the threshold values. Further, the robot device 100 may compare a change amount obtained by combining the posture change and the position change (combined change amount) with a predetermined threshold value (such as the threshold value TH2 in FIG. 4), and execute the process of manipulating the adjacent object when the combined change amount is equal to or greater than the predetermined threshold value.
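- As a minimal sketch of the determination in steps S4 and S5, the threshold comparison described above can be written as follows; the function name and the concrete threshold values are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of the threshold determination in steps S4-S5.
# The posture/position thresholds and the combined threshold (TH2, "PVL" in
# FIG. 4) follow the description above; the numbers below are assumptions.

def should_operate_adjacent(posture_change_pred: float,
                            position_change_pred: float,
                            posture_threshold: float,
                            position_threshold: float,
                            combined_threshold: float) -> bool:
    """Return True if the adjacent object should be operated first."""
    # Either change exceeding its own threshold triggers the operation.
    if posture_change_pred >= posture_threshold:
        return True
    if position_change_pred >= position_threshold:
        return True
    # Alternatively, a combined change amount can be compared with a single
    # threshold such as TH2 ("PVL") in FIG. 4.
    combined_change = posture_change_pred + position_change_pred
    return combined_change >= combined_threshold

# Example: OB1 is predicted not to move (changes 0), OB3 is predicted to move.
print(should_operate_adjacent(0.0, 0.0, 0.1, 0.01, 0.1))    # False -> OB1
print(should_operate_adjacent(0.5, 0.05, 0.1, 0.01, 0.1))   # True  -> OB3
```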
- Since the posture change predicted value of the object OB1 caused by the removal of the target object OB2 is less than the posture threshold value and the position change predicted value of the object OB1 is less than the position threshold value, the robot device 100 does not execute the operation on the object OB1.
- the robot device 100 executes a process of operating the object OB3.
- For example, the robot device 100 causes the first operation unit 16a to execute the operation on the target object OB2, and causes the second operation unit 16b to execute the operation on the object OB3. Specifically, the robot device 100 causes the first operation unit 16a to execute the operation on the target object OB2, and causes the second operation unit 16b to execute an operation of supporting the object OB3, which is an adjacent object.
- The robot device 100 causes the second operation unit 16b to execute a process of suppressing the change in the arrangement state of the adjacent object OB3 caused by the movement of the target object.
- The robot device 100 causes the first operation unit 16a to execute a process of moving the target object OB2.
- The robot device 100 may drive the second operation unit 16b to perform an operation of placing the object OB3 in a stable position after an operation such as moving the target object OB2.
- The robot device 100 may execute an operation of holding the target object OB2 with the first operation unit 16a and an operation of holding the object OB3 with the second operation unit 16b, and carry the object OB3 together with the target object OB2 to a desired position by the moving unit 15.
- The robot device 100 may cause the first operation unit 16a to move the target object OB2 and cause the second operation unit 16b to move the object OB3, which is an adjacent object.
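- The cooperative use of the two operation units described above can be sketched roughly as follows; the robot object and its grasp/support/move_to/place_stable/release methods are hypothetical placeholders, not an actual interface of the robot device 100.

```python
# Hypothetical sketch of the cooperative operation: the first operation unit
# moves the target object OB2 while the second operation unit supports the
# adjacent object OB3. All method names are assumed for illustration.

def operate_with_support(robot, target_obj, adjacent_obj, goal_pose):
    # Second operation unit 16b first supports the adjacent object so that
    # its arrangement state does not change.
    robot.second_operation_unit.support(adjacent_obj)
    # First operation unit 16a grasps and moves the target object.
    robot.first_operation_unit.grasp(target_obj)
    robot.first_operation_unit.move_to(goal_pose)
    # After the target object has been moved, the adjacent object can be
    # placed in a stable position and released.
    robot.second_operation_unit.place_stable(adjacent_obj)
    robot.second_operation_unit.release(adjacent_obj)
```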
- the robot device 100 executes a process of manipulating the target object and the adjacent object based on the change in the arrangement state of the adjacent object adjacent to the target object when the target object is removed. In this way, the robot device 100 can enable an appropriate operation on an object even when an adjacent object exists.
- FIG. 2 is a diagram showing an example of determination of a manipulable object according to the first embodiment.
- the robot device 100 determines an operable object based on the image.
- First, the robot device 100 detects (captures) an image including a group of a plurality of unknown objects (unknown object group SG1) with the image sensor 141 (step S11).
- The robot device 100 recognizes the plurality of unknown objects (unknown object group SG1) based on the image detected by the image sensor 141.
- the robot device 100 recognizes the unknown object group SG1 including the bookshelf and a plurality of stored books.
- the robot device 100 first segments each object from the accumulated image information of a plurality of unknown objects.
- The robot device 100 segments the unknown object group SG1 included in the image detected by the image sensor 141 by appropriately using various techniques related to image segmentation.
- The robot device 100 segments the unknown object group SG1 into a book group SG11 including a plurality of books such as the objects OB11 to OB17, a book group SG12 including a plurality of books, a book group SG13 including a plurality of books, an object OB10 which is a bookshelf, and the like.
- The robot device 100 also segments the book group SG11 into each of the objects OB11 to OB17, and similarly segments the book group SG12 and the book group SG13 into individual books.
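- As one simple stand-in for this segmentation step, a contour-based OpenCV sketch is shown below; the disclosure only states that various image segmentation techniques may be used appropriately, so this particular approach is an illustrative assumption.

```python
# Simplified stand-in for the segmentation of step S11: extract candidate
# object regions from the captured image as bounding boxes.
import cv2

def segment_objects(image_path: str):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Binarize and find connected regions as rough object candidates.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each bounding box corresponds to one segmented object (e.g. OB10-OB17).
    return [cv2.boundingRect(c) for c in contours]
```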
- the robot device 100 classifies the unknown object group SG1 (step S12).
- For example, the robot device 100 uses a threshold value such as the "carrying weight of the manipulator" (for example, the value "Wlood" of the threshold value TH1 in FIG. 4) to classify the unknown object group SG1 into either the object group G0, which cannot be moved by an external force within the threshold value, or the object group G1, which can be moved by an external force within the threshold value.
- For example, the robot device 100 compares the weight of each object included in the unknown object group SG1 with "Wlood", and classifies an object whose weight exceeds "Wlood" into the object group G0 as an inoperable object. Further, the robot device 100 compares the weight of each object included in the unknown object group SG1 with "Wlood", and classifies an object whose weight is "Wlood" or less into the object group G1 as an operable object.
- The robot device 100 estimates the weight of each object from the image data.
- the robot device 100 recognizes each object of the unknown object group SG1 included in the image detected by the image sensor 141 by appropriately using various techniques related to object recognition such as general object recognition.
- the robot device 100 estimates the shape data (W [width], D [depth], H [height]) of the object extracted from the image. Further, the robot device 100 estimates the material of the object and estimates the density ⁇ of the object. Then, the robot device 100 calculates the estimated weight of the object by using the estimated shape data of the object (W [width], D [depth], H [height]) and the density ⁇ of the object. For example, the robot device 100 calculates the estimated weight "Wp" by multiplying the width, depth, height and density of the object. The robot device 100 calculates the estimated weight "Wp” by the following formula (1).
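- As described above, formula (1) is the product of the estimated dimensions and the estimated density of the object: Wp = W × D × H × ρ … (1)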
- The robot device 100 holds the average density of the environment as data in advance, and estimates the weight using the average density. For example, the robot device 100 estimates the weight of the object using the value "VL1" of the environmental average density DS1 stored in the density information storage unit 122 (see FIG. 5) instead of the estimated density "ρ" of the object.
- Then, the robot device 100 compares the calculated estimated weight "Wp" of each object with the threshold value "Wlood" to determine whether or not each object can be operated, and classifies each object into either the operable object group G1 or the inoperable object group G0.
- the robot device 100 estimates the weight of each book such as the objects OB11 to OB17 included in the book group SG11 to SG13, and determines that the estimated weight is equal to or less than the threshold value “Wlood”. As a result, the robot device 100 classifies each book such as the objects OB11 to OB17 included in the book group SG11 to SG13 into the operable object group G1.
- the robot device 100 estimates the weight of the object OB10 which is a bookshelf, and determines that the estimated weight is larger than the threshold value "Wlood". As a result, the robot device 100 classifies the object OB10, which is a bookshelf, into the inoperable object group G0. Then, the robot device 100 selects a target object to be operated from the objects belonging to the object group G1.
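- The weight estimation of formula (1) and the classification into the object groups G1 and G0 can be sketched as follows; W_LOAD stands for the threshold written as "Wlood" above, and all concrete dimensions, densities, and the threshold value are assumptions for illustration.

```python
# Sketch of the weight estimation and operability classification.
# Shape data (W, D, H) and density rho are assumed to have been estimated
# from the image; the numeric values below are made up for illustration.

W_LOAD = 3000.0  # [g] assumed carrying weight of the manipulator ("Wlood")

def estimated_weight(width_cm, depth_cm, height_cm, density_g_cm3):
    # Formula (1): Wp = W x D x H x rho
    return width_cm * depth_cm * height_cm * density_g_cm3

def classify(objects):
    """objects: {name: (W, D, H, rho)} -> (operable G1, inoperable G0)."""
    group_g1, group_g0 = [], []
    for name, (w, d, h, rho) in objects.items():
        wp = estimated_weight(w, d, h, rho)
        (group_g1 if wp <= W_LOAD else group_g0).append(name)
    return group_g1, group_g0

# A book-sized object falls into G1, a bookshelf-sized object into G0.
books_and_shelf = {
    "OB11": (15.0, 2.0, 21.0, 0.8),    # book:  ~504 g
    "OB10": (80.0, 30.0, 180.0, 0.6),  # shelf: ~259,200 g
}
print(classify(books_and_shelf))  # (['OB11'], ['OB10'])
```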
- The robot device 100 targets only the objects in the object group G1 for operation, but when there are few objects belonging to the object group G1, the robot device 100 may request support from another robot or perform cooperative work with a plurality of manipulators.
- The robot device 100 may also select an object belonging to the object group G0 as the operation target, request support from another robot, or perform cooperative work with a plurality of manipulators. For example, when none of the objects belonging to the object group G1 can be operated due to the change condition of the adjacent objects, the robot device 100 may select an object belonging to the object group G0 as the operation target, request support from other robots, or perform cooperative work with a plurality of manipulators.
- The robot device 100 may autonomously select the operation target object, or may select the operation target object according to an instruction from a human such as the administrator of the robot device 100.
- The robot device 100 may acquire information indicating whether or not an object can be operated from the outside. Further, the robot device 100 may store in advance, as knowledge in the storage unit 12, operability information indicating whether or not each object can be operated, and determine whether or not an operation is possible based on the operability information stored in the storage unit 12.
- The robot device 100 predicts the movement of the object B in advance. First, the robot device 100 analyzes in advance how other objects (such as the object B) in contact with the object A to be operated behave when the object A is removed, and predicts the movement and posture of the adjacent object (object B).
- For example, the robot device 100 estimates the center of gravity from the shape data (W [width], D [depth], H [height]) of each object, detects the direction of gravity, and predicts the posture and position of the object B with the built-in physical model simulator. When the predicted movement of the adjacent object (object B) converges within the threshold value, the object A is determined to be operable.
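- One possible form of the "built-in physical model simulator" prediction is sketched below using PyBullet; the simulator choice, mass, dimensions, and pose are assumptions for illustration, and in practice the floor and the remaining recognized objects (excluding the removed target) would also be added to the scene.

```python
# Sketch of the prediction step with a physics simulator (PyBullet assumed).
import pybullet as p

def predict_adjacent_change(adjacent_half_extents, adjacent_position, steps=240):
    """Simulate the scene after the target object is removed and return how
    far the adjacent object (object B) moves."""
    p.connect(p.DIRECT)               # headless simulation
    p.setGravity(0, 0, -9.8)          # detected gravity direction
    # Build the adjacent object from its estimated W/D/H (as half extents).
    shape = p.createCollisionShape(p.GEOM_BOX, halfExtents=adjacent_half_extents)
    body = p.createMultiBody(baseMass=0.5, baseCollisionShapeIndex=shape,
                             basePosition=adjacent_position)
    # In practice, the floor and the other recognized objects (except the
    # removed target object) would also be added here.
    start_pos, _ = p.getBasePositionAndOrientation(body)
    for _ in range(steps):            # let the scene settle
        p.stepSimulation()
    end_pos, _ = p.getBasePositionAndOrientation(body)
    p.disconnect()
    # Position change amount used in the threshold determination (step S4).
    return sum((a - b) ** 2 for a, b in zip(end_pos, start_pos)) ** 0.5
```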
- The robot device 100 may operate the target object while supporting and fixing, with another manipulator (operation unit 16), an adjacent object that moves greatly so that it does not move.
- The robot device 100 is not limited to simply fixing an adjacent object; the roles of object operation may be divided between the operation units 16 through cooperative work by two or more manipulators (operation units 16), such as supporting, temporarily gripping, and handing over an object. Further, the robot device 100 may divide the roles of object operation among robot devices by performing cooperative work with other robot devices, such as supporting one another, temporarily gripping, and handing over objects.
- When the robot device 100 is provided with a camera (image sensor) in the operation unit 16 (the end effector unit of the manipulator), the posture and position of an object can be predicted with reference to the image from the end effector unit. Therefore, the robot device 100 can avoid occlusion and make more accurate predictions and determinations of posture and position.
- As described above, the robot device 100 is a movable robot that includes an operation unit 16 (arm, hand), a moving unit 15 (vehicle, etc.), and a recognition unit (the image sensor 141 as a visual sensor, the force sensor 142 as a tactile sensor, etc.), and operates an unknown object by the above-mentioned flow.
- The robot device 100 determines, using image information, whether or not the manipulator can operate each object in a group of objects in contact with one another, and distinguishes between operable objects and inoperable objects. Further, the robot device 100 extracts an operation target object from the group of objects in contact with one another, excludes the operation target object from the image, and predicts the posture and position of the objects in contact with the operation target object after the operation target object is removed. Then, when the change in the posture or position of an object in contact with the operation target object is equal to or greater than the threshold value, the robot device 100 executes an operation on the object in contact with the operation target object or changes the operation target. Further, the robot device 100 enables the operation on the operation target object when the change in the posture or position of the objects in contact with the operation target object is less than the threshold value.
- The robot device 100 controls the gripping of the operation unit 16 so that the posture of the object in contact with the operation target object does not change.
- the robot device 100 includes a plurality of operation units 16, one operation unit 16 executes an operation on an operation target object, and another operation unit 16 executes an operation on an object in contact with the operation target object.
- The robot device 100 may select the operation target object after bringing the robot hand (operation unit 16) into contact with the surface of the target object with a force within the threshold value and confirming whether or not the object has a moving portion. The details of this point will be described later.
- the robot device 100 can stably operate the target object from the stacked objects such as OB1 to OB3 without changing the position and orientation of the peripheral objects.
- the robot device 100 can stably operate a specific object from a plurality of stacked unknown object groups without changing the position and orientation of peripheral objects.
- the robot device 100 can stably operate a specific object from among objects whose physical characteristics such as mass and friction coefficient are unknown without changing the position and orientation of peripheral objects.
- The robot device 100 can move in a space such as a room, can autonomously tidy up the room, and can smoothly execute operations even when information about the space or the objects is not known.
- the robot device 100 recognizes the object from the image, considers the influence on the adjacent object, and operates the object. Therefore, the robot device 100 can operate on any object including an unknown object, and can operate in consideration of the influence on other objects.
- the robot device 100 can autonomously move and operate an object with unknown physical parameters in an unstructured unknown environment. As a result, the robot device 100 does not require a database or the like and can be operated in various environments. In addition, the labor and cost of creating a database are not required.
- Since the robot device 100 predicts the posture and position of surrounding objects before the operation, it is possible to reduce the risk of objects falling or being damaged by the operation. Further, since the number of objects and environments that the robot device 100 can operate on without detailed instructions from humans increases, the autonomy of the robot device 100 can be enhanced and productivity can be improved.
- FIG. 3 is a diagram showing a configuration example of the robot device 100 according to the first embodiment.
- As shown in FIG. 3, the robot device 100 includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, a first operation unit 16a, and a second operation unit 16b.
- the communication unit 11 is realized by, for example, a NIC (Network Interface Card), a communication circuit, or the like.
- the communication unit 11 is connected to the network N (Internet, etc.) by wire or wirelessly, and transmits / receives information to / from other devices via the network N.
- the storage unit 12 is realized by, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory (Flash Memory), or a storage device such as a hard disk or an optical disk.
- the storage unit 12 has a threshold information storage unit 121 and a density information storage unit 122.
- The storage unit 12 is not limited to the threshold information storage unit 121 and the density information storage unit 122, and may store various types of information.
- the storage unit 12 may store various information related to the operation unit 16. For example, the storage unit 12 may store information indicating the number of operation units 16 and the installation position of the operation unit 16. For example, the storage unit 12 may store various types of information used for identifying (estimating) an object.
- the threshold information storage unit 121 stores various information related to the threshold value.
- the threshold information storage unit 121 stores various information related to the threshold used for various determinations.
- FIG. 4 is a diagram showing an example of the threshold information storage unit according to the first embodiment.
- the threshold information storage unit 121 shown in FIG. 4 includes items such as "threshold ID”, “target”, “use”, and "threshold”.
- Threshold ID indicates identification information for identifying the threshold value.
- "Target" indicates the target for which the threshold value is used.
- "Use" indicates the use of the threshold value.
- the “threshold value” indicates a specific value of the threshold value identified by the corresponding threshold ID.
- the threshold value (threshold value TH1) identified by the threshold value ID “TH1” indicates that the target is “weight”.
- the threshold value TH1 indicates that the threshold value is used for determining the weight of the object.
- the use of the threshold value TH1 is for determining a portable object, and indicates that the object is used for determining whether or not the object can be operated by the robot device 100.
- The value of the threshold value TH1 is "Wlood". In the example of FIG. 4, although it is indicated by an abstract code such as "Wlood", the value of the threshold value TH1 is a specific numerical value.
- the threshold value (threshold value TH2) identified by the threshold value ID "TH2" indicates that the target is the "positional posture".
- the threshold value TH2 indicates that the threshold value is used for determining the position and orientation of the object.
- the use of the threshold value TH2 is to change an adjacent object, and it is shown that the threshold value TH2 is used for determining a change in the arrangement state of the adjacent object due to an operation on the target object.
- the value of the threshold value TH2 indicates that it is "PVL”. In the example of FIG. 4, although it is indicated by an abstract reference numeral such as “PVL”, the value of the threshold value TH2 is a specific numerical value.
- the threshold information storage unit 121 is not limited to the above, and may store various information depending on the purpose.
- the threshold information storage unit 121 may store a threshold value related to posture (posture threshold value) and a threshold value related to position (position threshold value).
- the robot device 100 compares the predicted value (posture change predicted value) of the posture change amount of the adjacent object caused by the operation on the target object with the posture threshold, and the posture change predicted value is equal to or more than the posture threshold. , Executes the process of manipulating adjacent objects.
- the robot device 100 compares the predicted value (position change predicted value) of the position change amount of the adjacent object caused by the operation with respect to the target object and the position threshold value, and when the position change predicted value is equal to or more than the position threshold value, Executes the process of manipulating adjacent objects.
- the density information storage unit 122 stores various information related to the density.
- the density information storage unit 122 stores various information related to the density used for estimating the weight of the object.
- FIG. 5 is a diagram showing an example of the density information storage unit according to the first embodiment.
- the density information storage unit 122 shown in FIG. 5 includes items such as "density ID”, “density name”, “use”, and "density”.
- Density ID indicates identification information for identifying the density.
- "Density name" indicates the target to which the density applies.
- "Density" indicates a specific value of the density identified by the corresponding density ID.
- the density (density DS1) identified by the density ID "DS1" indicates that the target is the "environmental average”.
- The density DS1 is the average density of objects in the entire environment, and indicates the density applied to each object in the environment. Further, the value of the density DS1 is "VL1". In the example of FIG. 5, it is indicated by an abstract code such as "VL1", but the value of the density DS1 is a specific numerical value such as "3 (g/cm³)" or "4 (g/cm³)".
- the density information storage unit 122 is not limited to the above, and may store various information depending on the purpose.
- the density information storage unit 122 may store information indicating the density of each object.
- the density information storage unit 122 may store the average density of each object in association with each other.
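- A minimal sketch of how the threshold information storage unit 121 and the density information storage unit 122 could hold the entries of FIG. 4 and FIG. 5 is shown below; the dictionary layout is an assumption, and the symbolic values "Wlood", "PVL", and "VL1" stand for concrete numbers in an actual device.

```python
# Illustrative layout of the threshold and density storage units.
threshold_info = {
    "TH1": {"target": "weight",
            "use": "determination of portable (operable) objects",
            "value": "Wlood"},
    "TH2": {"target": "position and posture",
            "use": "determination of arrangement-state change of adjacent objects",
            "value": "PVL"},
}

density_info = {
    "DS1": {"name": "environmental average",
            "use": "weight estimation when the object density is unknown",
            "value": "VL1"},  # e.g. a value such as 3 or 4 g/cm^3
}
```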
- The control unit 13 is realized by, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like executing a program stored inside the robot device 100 (for example, an information processing program according to the present disclosure) with a RAM (Random Access Memory) or the like as a work area. Further, the control unit 13 may be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
- The control unit 13 includes an acquisition unit 131, an analysis unit 132, a classification unit 133, a selection unit 134, a prediction unit 135, a determination unit 136, a planning unit 137, and an execution unit 138, and realizes or executes the functions and actions of the information processing described below.
- the internal configuration of the control unit 13 is not limited to the configuration shown in FIG. 3, and may be another configuration as long as it is a configuration for performing information processing described later.
- the acquisition unit 131 acquires various information.
- the acquisition unit 131 acquires various information from an external information processing device.
- the acquisition unit 131 acquires various information from the storage unit 12.
- the acquisition unit 131 acquires various types of information from the threshold information storage unit 121 and the density information storage unit 122.
- the acquisition unit 131 acquires information from the analysis unit 132, the classification unit 133, the selection unit 134, the prediction unit 135, the determination unit 136, and the planning unit 137.
- the acquisition unit 131 stores the acquired information in the storage unit 12.
- the acquisition unit 131 acquires the sensor information detected by the sensor unit 14.
- the acquisition unit 131 acquires the sensor information (image information) detected by the image sensor 141.
- the acquisition unit 131 acquires the image information (image) captured by the image sensor 141.
- the acquisition unit 131 acquires sensor information (contact information) detected by the force sensor 142.
- the acquisition unit 131 acquires image information obtained by capturing an image of a target object, which is a candidate object for operation, and an adjacent object, which is an object adjacent to the target object.
- the acquisition unit 131 acquires image information obtained by capturing an image of the target object and an adjacent object in contact with the target object.
- the acquisition unit 131 acquires image information obtained by capturing images of the stacked target objects and adjacent objects.
- the acquisition unit 131 acquires image information obtained by capturing an image of the target object and an adjacent object located within a range affected by the operation on the target object.
- the acquisition unit 131 acquires an image (image IM1) showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked.
- the analysis unit 132 analyzes various information.
- the analysis unit 132 functions as a physical analysis unit that performs physical analysis.
- the analysis unit 132 analyzes various kinds of information by using the information regarding the physical properties.
- the analysis unit 132 analyzes the image information.
- The analysis unit 132 analyzes various information from the image information based on information from an external information processing device and information stored in the storage unit 12.
- the analysis unit 132 identifies various types of information from the image information.
- the analysis unit 132 extracts various information from the image information.
- the analysis unit 132 performs recognition based on the analysis result.
- the analysis unit 132 recognizes various information based on the analysis result.
- the analysis unit 132 performs analysis processing related to the image.
- the analysis unit 132 performs various processes related to image processing.
- the analysis unit 132 processes the image information (image) acquired by the acquisition unit 131.
- the analysis unit 132 processes the image information (image) captured by the image sensor 141.
- the analysis unit 132 processes the image by appropriately using a technique related to image processing.
- the analysis unit 132 executes a process of removing the target object in the image.
- the analysis unit 132 executes a process of removing the target object in the image by appropriately using a technique related to image processing.
- the analysis unit 132 executes a process of removing the target object OB2 in the image obtained by capturing the situation ST1 in which the objects OB1 to OB3 are adjacent to each other.
- the analysis unit 132 executes a process of removing the target object OB2 from the image of the situation ST1 in which the object OB2 is in contact with the object OB1 and the object OB3.
- the analysis unit 132 identifies that the image IM1 includes the objects OB1, OB2, and OB3 by analyzing the image IM1 by a technique such as image analysis.
- the analysis unit 132 recognizes the physical contact state between the object (adjacent object) around the target object OB2 and the target object OB2.
- the robot device 100 recognizes that the target object OB2 is in contact with the object OB1 and the object OB3.
- the classification unit 133 performs various classifications.
- the classification unit 133 classifies various types of information.
- the classification unit 133 performs the classification process based on the information acquired by the acquisition unit 131.
- the classification unit 133 classifies the information acquired by the acquisition unit 131.
- the classification unit 133 performs the classification process based on the information stored in the storage unit 12.
- the classification unit 133 makes various estimations.
- the classification unit 133 estimates various types of information.
- the classification unit 133 estimates the weight of the object.
- the classification unit 133 performs various classifications based on the information acquired by the acquisition unit 131.
- the classification unit 133 performs various classifications using various sensor information detected by the sensor unit 14.
- the classification unit 133 performs various classifications using the sensor information detected by the image sensor 141.
- the classification unit 133 performs various classifications using the sensor information detected by the force sensor 142.
- the classification unit 133 estimates the weight of the object included in the image information.
- the classification unit 133 estimates the weight of the object included in the image information based on the image of the object included in the image information and the density information.
- the classification unit 133 estimates the weight of the object included in the image information based on the size of the object included in the image information and the density information.
- the classification unit 133 estimates the size of the object included in the image information, and estimates the weight of the object included in the image information using the estimated size and the density information.
- the classification unit 133 classifies the object group included in the image information into an operable object and an inoperable object.
- the classification unit 133 classifies the object into either a manipulable object or an inoperable object by comparing the estimated weight of the object with the threshold value.
- the classification unit 133 compares the weight of each object included in the unknown object group SG1 with the "Wlood”, and classifies the object whose weight exceeds the "Wlood” as an inoperable object into the object group G0.
- the classification unit 133 compares the weight of each object included in the unknown object group SG1 with the "Wlood”, and classifies the object having a weight of "Wlood” or less as a manipulable object into the object group G1.
- the classification unit 133 classifies each book such as the objects OB11 to OB17 included in the book groups SG11 to SG13 into the operable object group G1.
- the classification unit 133 classifies the object OB10, which is a bookshelf, into the inoperable object group G0.
- the selection unit 134 selects various information.
- the selection unit 134 extracts various information.
- the selection unit 134 specifies various types of information.
- the selection unit 134 selects various information based on the information acquired from the external information processing device.
- the selection unit 134 selects various information based on the information stored in the storage unit 12.
- the selection unit 134 makes various selections based on the information acquired by the acquisition unit 131.
- the selection unit 134 makes various selections based on the information classified by the classification unit 133.
- the selection unit 134 makes various selections using various sensor information detected by the sensor unit 14.
- the selection unit 134 makes various selections using the sensor information detected by the image sensor 141.
- the selection unit 134 makes various selections using the sensor information detected by the force sensor 142.
- the selection unit 134 selects an operable object from the object group as the target object based on the classification result by the classification unit 133.
- the selection unit 134 randomly selects a target object from the object group of the objects OB1, OB2, and OB3.
- the selection unit 134 selects the object OB2 as the target object from the object group of the objects OB1, OB2, and OB3.
- the prediction unit 135 predicts various types of information.
- the prediction unit 135 predicts various types of information based on the information acquired from the external information processing device.
- the prediction unit 135 predicts various types of information based on the information stored in the storage unit 12.
- the prediction unit 135 predicts various information based on the result of the analysis process by the analysis unit 132.
- the prediction unit 135 makes various predictions based on the information acquired by the acquisition unit 131.
- the prediction unit 135 makes various predictions using various sensor information detected by the sensor unit 14.
- the prediction unit 135 makes various predictions using the sensor information detected by the image sensor 141.
- the prediction unit 135 makes various predictions using the sensor information detected by the force sensor 142.
- the prediction unit 135 predicts a change in the arrangement state of an adjacent object caused by an operation on the target object based on the image information acquired by the acquisition unit 131. For example, the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the removal of the target object based on the image information obtained by capturing the image of the target object and the adjacent object. For example, the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the change in the position of the target object based on the image information obtained by capturing the image of the target object and the adjacent object.
- the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by a change in the posture of the target object based on the image information obtained by capturing the image of the target object and the adjacent object.
- the prediction unit 135 predicts a change in the posture of an adjacent object caused by an operation on the target object.
- the prediction unit 135 predicts a change in the position of an adjacent object caused by an operation on the target object.
- The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the operation on the target object based on image information obtained by capturing an image of the target object and an adjacent object in contact with the target object.
- The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the operation on the target object based on image information obtained by capturing an image of the stacked target object and adjacent object.
- The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the operation on the target object based on image information obtained by capturing an image of the target object and an adjacent object located within the range affected by the operation on the target object.
- the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the operation on the target object selected by the selection unit 134.
- For example, the prediction unit 135 predicts the posture change amount (posture change prediction value) and the position change amount (position change prediction value) of the adjacent object when the target object is removed.
- the prediction unit 135 predicts a change in the arrangement state of the objects OB1 and OB3 when the target object OB2 is removed.
- the prediction unit 135 predicts changes in the posture and position of the object OB1 and changes in the posture and position of the object OB3 when the target object OB2 is removed.
- For example, the prediction unit 135 predicts that the posture change prediction value and the position change prediction value of the object OB1 when the target object OB2 is removed are 0.
- the prediction unit 135 predicts the amount of change in the position and posture of the object OB3 due to the removal of the target object OB2.
- For example, the prediction unit 135 predicts the posture change prediction value and the position change prediction value of the object OB3 when the target object OB2 is removed.
- The determination unit 136 determines various types of information.
- the determination unit 136 specifies various types of information.
- the determination unit 136 determines various types of information based on the information acquired from the external information processing device.
- the determination unit 136 determines various types of information based on the information stored in the storage unit 12.
- the determination unit 136 makes various determinations based on the information acquired by the acquisition unit 131.
- the determination unit 136 makes various determinations using various sensor information detected by the sensor unit 14.
- the determination unit 136 makes various determinations using the sensor information detected by the image sensor 141.
- the determination unit 136 makes various determinations using the sensor information detected by the force sensor 142.
- the determination unit 136 determines various information based on the result of the analysis process by the analysis unit 132.
- the determination unit 136 determines various information based on the result of the prediction process by the prediction unit 135.
- the determination unit 136 determines whether or not the object has a portion that moves independently of the object, based on the result of contact with the object by the operation unit 16.
- the determination unit 136 determines whether the change in the posture or position of the adjacent object is equal to or greater than the threshold value.
- the determination unit 136 determines whether or not the change in the attitude or position of the adjacent object is equal to or greater than the threshold value by using the threshold value (posture threshold value) regarding the posture and the threshold value (position threshold value) regarding the position stored in the storage unit 12.
- the determination unit 136 estimates the weight of each book such as the objects OB11 to OB17 included in the book group SG11 to SG13, and determines that the estimated weight is equal to or less than the threshold value "Wlood”.
- the determination unit 136 estimates the weight of the object OB10, which is a bookshelf, and determines that the estimated weight is larger than the threshold value “Wlood”.
- The planning unit 137 makes various plans.
- the planning unit 137 generates various information regarding the action plan.
- the planning unit 137 makes various plans based on the information acquired by the acquisition unit 131.
- the planning unit 137 makes various plans based on the prediction result by the prediction unit 135.
- the planning unit 137 makes various plans based on the determination result by the determination unit 136.
- the planning unit 137 makes an action plan by using various techniques related to the action plan.
- Execution unit 138 executes various processes.
- the execution unit 138 executes various processes based on information from an external information processing device.
- the execution unit 138 executes various processes based on the information stored in the storage unit 12.
- the execution unit 138 executes various processes based on the information stored in the threshold information storage unit 121 and the density information storage unit 122.
- the execution unit 138 executes various processes based on the information acquired by the acquisition unit 131.
- the execution unit 138 functions as an operation control unit that controls the operation of the operation unit 16.
- Execution unit 138 executes various processes based on the prediction result by the prediction unit 135.
- the execution unit 138 executes various processes based on the determination result by the determination unit 136.
- the execution unit 138 executes various processes based on the action plan by the planning unit 137.
- the execution unit 138 controls the moving unit 15 to execute the action corresponding to the action plan based on the information of the action plan generated by the planning unit 137.
- the execution unit 138 executes the movement process of the robot device 100 according to the action plan under the control of the movement unit 15 based on the information of the action plan.
- the execution unit 138 controls the operation unit 16 based on the information of the action plan generated by the planning unit 137 to execute the action corresponding to the action plan.
- the execution unit 138 executes the operation processing of the object by the robot device 100 according to the action plan under the control of the operation unit 16 based on the information of the action plan.
- the execution unit 138 executes a process of operating the object OB3.
- the execution unit 138 executes the operation of the target object OB2 by the first operation unit 16a, and executes the operation of the object OB3 by the second operation unit 16b.
- the execution unit 138 executes the operation of the target object OB2 by the first operation unit 16a, and executes the operation of supporting the object OB3 which is an adjacent object by the second operation unit 16b.
- the sensor unit 14 detects predetermined information.
- The sensor unit 14 includes an image sensor 141 as an imaging means for capturing an image, and a force sensor 142.
- the image sensor 141 detects image information and functions as vision for the robot device 100.
- the image sensor 141 is provided on the head of the robot device 100.
- the image sensor 141 captures image information.
- For example, the image sensor 141 detects (captures) an image including a group of unknown objects (unknown object group SG1).
- the force sensor 142 detects the force and functions as a tactile sense of the robot device 100.
- the force sensor 142 is provided at the tip end portion (holding portion) of the operation portion 16.
- the force sensor 142 detects the contact of the operation unit 16 with an object.
- the sensor unit 14 is not limited to the image sensor 141 and the force sensor 142, and may have various sensors.
- the sensor unit 14 may have a proximity sensor.
- For example, the sensor unit 14 may have a range finder such as a LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) sensor, a ToF (Time of Flight) sensor, or a stereo camera.
- the sensor unit 14 may have a sensor (position sensor) that detects the position information of the robot device 100 such as a GPS (Global Positioning System) sensor.
- the sensor unit 14 is not limited to the above, and may have various sensors.
- the sensor unit 14 may have various sensors such as an acceleration sensor and a gyro sensor. Further, the sensors that detect the above-mentioned various information in the sensor unit 14 may be common sensors, or may be realized by different sensors.
- the moving unit 15 has a function of driving the physical configuration of the robot device 100.
- the moving unit 15 has a function for moving the position of the robot device 100.
- the moving unit 15 is, for example, an actuator.
- the moving unit 15 may have any configuration as long as the robot device 100 can realize a desired operation.
- the moving unit 15 may have any configuration as long as the position of the robot device 100 can be moved.
- the moving unit 15 drives the caterpillars and tires.
- the moving unit 15 moves the robot device 100 and changes the position of the robot device 100 by driving the moving mechanism of the robot device 100 in response to an instruction from the execution unit 138.
- the robot device 100 has two operation units 16 of a first operation unit 16a and a second operation unit 16b.
- the operation unit 16 is a unit corresponding to a human “hand (arm)” and realizes a function for the robot device 100 to act on another object.
- the robot device 100 has a first operation unit 16a and a second operation unit 16b as two hands.
- the operation unit 16 is driven according to the processing by the execution unit 138.
- the operation unit 16 is a manipulator that operates an object.
- the operating unit 16 may be a manipulator having an arm and an end effector.
- the operation unit 16 operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
- At least one of the plurality of operation units 16 operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
- the operation unit 16 has a holding unit that holds an object such as an end effector or a robot hand, and a driving unit that drives the holding unit such as an actuator.
- the holding unit of the operation unit 16 may be of any method as long as a desired function can be realized, such as a gripper, a multi-finger hand, a jamming hand, a suction hand, and a soft hand.
- the holding portion of the operating portion 16 may be realized by any configuration as long as it can hold the object, may be a gripping portion that grips the object, or is a suction portion that sucks and holds the object. There may be.
- the holding unit of the operation unit 16 may be provided with a force sensor, an image sensor, a proximity sensor, or the like so that information on the position and force of the target object can be acquired.
- a force sensor 142 is provided in the holding portion of the operation unit 16, and information on the force due to the contact of the operation unit 16 with the target object can be acquired.
- the first operation unit 16a and the second operation unit 16b are provided on both side portions of the body portion (base portion) of the robot device 100, respectively.
- the first operation unit 16a extends from the left side portion of the robot device 100 and functions as the left hand of the robot device 100.
- the second operation unit 16b extends from the right side portion of the robot device 100 and functions as the right hand of the robot device 100.
- the operation units 16 may be provided at various positions depending on the number of the operation units 16 and the shape of the robot device 100.
- FIGS. 6 and 7 are flowcharts showing the information processing procedure according to the first embodiment.
- FIG. 6 is a flowchart showing an outline of the information processing procedure by the robot device 100.
- FIG. 7 is a flowchart showing details of the information processing procedure by the robot device 100.
- the robot device 100 acquires image information obtained by capturing an image of the target object and an adjacent object adjacent to the target object (step S101). For example, the robot device 100 acquires image information obtained by capturing images of a plurality of objects from the image sensor 141.
- the robot device 100 predicts a change in the arrangement state of an adjacent object caused by an operation on the target object based on the image information (step S102). For example, the robot device 100 predicts a change in the arrangement state of an adjacent object caused by the removal of the object selected as the target object among the plurality of objects in the image information.
- the robot device 100 executes a process of operating the adjacent object when the change in the arrangement state of the adjacent object satisfies a predetermined condition (step S103). For example, the robot device 100 executes a process of manipulating an adjacent object when the position or posture of the adjacent object changes by a predetermined threshold value or more due to the removal of the target object.
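- As a reference, the outline of steps S101 to S103 can be expressed as a small decision routine, as in the Python sketch below. The function names, the predictor interface, and the threshold value are hypothetical placeholders introduced only for illustration and are not defined in the present disclosure.

```python
# Illustrative sketch of steps S101-S103 (all names are hypothetical).
POSE_THRESHOLD = 0.1  # assumed threshold on the predicted change of an adjacent object

def handle_target(sensors, predictor, manipulator, target, neighbors):
    image = sensors.capture_image()                                       # S101: image of target and adjacent objects
    for neighbor in neighbors:
        change = predictor.predict_displacement(image, target, neighbor)  # S102: predicted arrangement change
        if change >= POSE_THRESHOLD:                                      # S103: predetermined condition satisfied
            manipulator.operate(neighbor)                                 # operate the adjacent object first
    manipulator.operate(target)                                           # then operate the target object
```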
- the robot device 100 randomly selects an operation target (step S201).
- the robot device 100 selects, as an operation target, an object whose weight is estimated to be less than a predetermined threshold value among a plurality of objects included in the image.
- the robot device 100 selects the object OB2 from the objects OB1 to OB3 as the operation target object (target object).
- the robot device 100 may end the process when there is no selectable object.
- the robot device 100 recognizes the physical contact state with the object around the target object (step S202).
- the robot device 100 recognizes a physical contact state with an object around the target object by analyzing the image. In the example of FIG. 1, the robot device 100 recognizes that the target object OB2 is in contact with the object OB1 and the object OB3.
- the robot device 100 determines whether or not there is an object in physical contact with the surroundings (step S203).
- the robot device 100 determines whether or not there is an adjacent object in physical contact around the target object. In the example of FIG. 1, the robot device 100 determines whether or not there is an adjacent object in physical contact around the target object OB2.
- When the robot device 100 determines that there is no object in physical contact with the surroundings (step S203: No), the robot device 100 executes the operation of the target object (step S208).
- the robot device 100 controls the first operation unit 16a and the second operation unit 16b, and executes an operation of changing the position and orientation of the target object.
- When the robot device 100 determines that there is an object in physical contact in the vicinity (step S203: Yes), the robot device 100 predicts the posture of the peripheral object (adjacent object) in physical contact when the target object is removed (step S204).
- the robot device 100 predicts a change in the posture or position of the peripheral object in physical contact when the target object is removed.
- the robot device 100 determines whether the movement of the surrounding object is equal to or higher than the threshold value (step S205).
- the robot device 100 determines whether the change in the posture or position of the adjacent object is equal to or greater than the threshold value.
- the robot device 100 determines whether the change in the posture or the position of the objects OB1 and OB3, which are adjacent objects of the target object OB2, is equal to or more than the threshold value.
- When the movement of the surrounding object is less than the threshold value (step S205: No), the robot device 100 executes the operation of the target object (step S208).
- When the movement of the surrounding object is equal to or higher than the threshold value (step S205: Yes), the robot device 100 determines whether or not there is another manipulator that can be operated (step S206).
- the robot device 100 determines whether or not there is an operable manipulator (operation unit 16) in addition to the manipulator (operation unit 16) that operates the target object.
- the robot device 100 determines whether or not there are other operation units 16 having a number of peripheral objects whose movement is equal to or higher than the threshold value, in addition to the operation unit 16 that operates the target object. For example, when the robot device 100 has two peripheral objects whose movements are equal to or higher than the threshold value, the robot device 100 determines whether or not there are two operable operation units 16 in addition to the operation unit 16 that operates the target object. For example, when the robot device 100 has one peripheral object whose movement is equal to or higher than the threshold value, the robot device 100 determines whether or not there is one operable operating unit 16 in addition to the operating unit 16 that operates the target object. For example, the robot device 100 determines whether or not there is another operation unit 16 (for example, the second operation unit 16b) in addition to the operation unit 16 (for example, the first operation unit 16a) that operates the target object.
- When the robot device 100 determines in step S206 that there is no other manipulator that can be operated (step S206: No), the robot device 100 returns to step S201 and repeats the process.
- When the robot device 100 determines that there is another manipulator that can be operated (step S206: Yes), the robot device 100 supports the peripheral object with the other manipulator (step S207).
- When the robot device 100 has another operable operation unit 16 in addition to the operation unit 16 that operates the target object, the robot device 100 executes an operation on the adjacent object, such as supporting the peripheral object (adjacent object) with the other operation unit 16.
- the robot device 100 may perform not only the operation of supporting the adjacent object but also the operation of changing the position and the posture of the adjacent object as the operation for the adjacent object.
- For example, one operation unit 16 may move the target object, and another operation unit 16 may move the adjacent object.
- the robot device 100 executes the operation of the target object (step S208).
- For example, the first operation unit 16a may execute the operation of the target object, and the second operation unit 16b may execute the operation of the adjacent object.
- the robot device 100 may execute the operation of the target object OB2 by the first operation unit 16a and the operation of supporting the adjacent object OB3 by the second operation unit 16b.
- the robot device 100 can perform operations on the target object and surrounding objects according to the number of operation units 16.
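- For reference, the detailed procedure of steps S201 to S208 can be sketched as follows. This is a minimal illustration assuming hypothetical helpers (recognize_contacts, predict_motion, support, operate, and a motion threshold); it is not the actual control implementation of the robot device 100.

```python
MOTION_THRESHOLD = 0.1  # assumed threshold on the predicted motion of a peripheral object

def manipulate_object_group(objects, perception, manipulators):
    """Sketch of steps S201-S208: operate objects while supporting moving neighbors with free manipulators."""
    pending = list(objects)
    while pending:
        progress = False
        for target in list(pending):                               # S201: select an operation target
            neighbors = perception.recognize_contacts(target)      # S202/S203: objects in physical contact
            moving = [n for n in neighbors                          # S204/S205: neighbors predicted to move
                      if perception.predict_motion(target, n) >= MOTION_THRESHOLD]
            main, *others = manipulators
            if len(moving) > len(others):                           # S206: not enough spare manipulators
                continue                                            # reselect another target (back to S201)
            for arm, neighbor in zip(others, moving):               # S207: support the peripheral objects
                arm.support(neighbor)
            main.operate(target)                                    # S208: operate the target object
            pending.remove(target)
            progress = True
        if not progress:                                            # no selectable object remains; end the process
            break
```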
- FIG. 8 is a diagram showing an example of a conceptual diagram of a robot configuration.
- the configuration group FCB1 shown in FIG. 8 includes a sensor processing unit, an object / environment determination unit, a task planning unit, an operation planning unit, a control unit, and the like.
- the sensor processing unit corresponds to, for example, the sensor unit 14 and the acquisition unit 131 in FIG. 4, and detects various types of information such as visual sense, force sense, tactile sense (vibration), proximity sense, and temperature.
- the object / environment determination unit corresponds to, for example, the analysis unit 132 to the determination unit 136 in FIG. 4, and executes various processes such as estimation and determination.
- the object / environment determination unit executes various processes such as determination of an adjacent object, weight estimation of the adjacent object, estimation of the center of gravity of the adjacent object, motion analysis of the adjacent object, motion prediction of the adjacent object, and stability determination of the adjacent object.
- the motion planning unit corresponds to the planning unit 137 in FIG. 4, and performs gripping planning (end effector), movement route planning (moving body), and arm trajectory planning (manipulator).
- the motion planning unit performs gripping planning by the holding unit (end effector) of the operating unit 16, movement route planning by the moving unit 15, and arm trajectory planning by the operating unit 16 (manipulator).
- the control unit corresponds to, for example, the execution unit 138 in FIG. 4 and performs actuator control and sensor control.
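- As a rough illustration of how the configuration group FCB1 could be wired together, the following sketch chains the sensing, judgment, planning, and control stages. All class and method names here are hypothetical placeholders rather than elements defined in FIG. 8.

```python
class Pipeline:
    """Hypothetical wiring of the FCB1 stages: sensing -> judgment -> planning -> control."""

    def __init__(self, sensor_proc, judge, task_planner, motion_planner, controller):
        self.sensor_proc = sensor_proc        # visual, force, tactile, proximity, temperature sensing
        self.judge = judge                    # adjacency, weight / center-of-gravity, motion, stability judgment
        self.task_planner = task_planner      # which object to operate, and in what order
        self.motion_planner = motion_planner  # grip planning, movement route planning, arm trajectory planning
        self.controller = controller          # actuator control and sensor control

    def step(self):
        observation = self.sensor_proc.sense()
        scene = self.judge.evaluate(observation)
        task = self.task_planner.plan(scene)
        motion = self.motion_planner.plan(task)
        self.controller.execute(motion)
```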
- FIG. 9 is a diagram showing an example of processing of Nth-order object operation. Specifically, FIG. 9 is a diagram showing an example of processing of secondary object manipulation. The same points as in FIG. 1 will be omitted as appropriate.
- FIG. 9 shows, as an example, a case where the teapot (object OB21) is operated so that the lid (object PT1) of the teapot does not fall off.
- FIG. 9 shows an example in which the object OB21 is operated so that the object PT1 (subordinate) which is the lid of the teapot does not fall from the object OB21 (main object) which is the teapot.
- Note that N is an arbitrary number of 2 or more, and is "2" in the case of FIG. 9.
- It may not be possible to judge whether a stable operation can be performed only by object extraction, center-of-gravity detection, and physics model simulation based on visual information from the image sensor 141 or the like. For example, when pouring tea by tilting the teapot (object OB21) while holding down its lid (object PT1), the lid of the teapot is only supported in the direction of gravity by the body of the teapot, and may move in any direction with a weak force.
- the robot device 100 does not have prior knowledge that the main body of the teapot and the lid of the teapot move independently. That is, the robot device 100 does not have the knowledge that the teapot (object OB21) and the lid (object PT1) can be separated.
- the robot device 100 confirms whether or not the target object has a moving portion by bringing the robot hand into contact with the surface of the target object recognized by the image sensor 141 with a force within the threshold value.
- the robot device 100 brings the operation unit 16 into contact with the object OB21 and the object PT1 (step S21).
- the robot device 100 brings the second operation unit 16b into contact with the object OB21 (object PT1) with a force within the threshold value.
- the robot device 100 detects the force caused by the contact of the second operation unit 16b with the object OB21 (object PT1) by the force sensor 142, and controls the strength with which the second operation unit 16b contacts the object OB21 (object PT1) based on the information regarding the detected force.
- the robot device 100 extracts and segments the difference from the image before the movement, and recognizes the object.
- When the object PT1 moves due to the contact of the second operation unit 16b with the object OB21 (object PT1), the robot device 100 extracts and segments the difference from the image before the movement, and recognizes the teapot body (object OB21) and the lid (object PT1).
- the robot device 100 may recognize the shape of a moving object (object PT1) by using a distance measuring sensor such as ToF mounted on the holding unit (end effector) of the operating unit 16.
- the robot device 100 determines whether or not the object has a portion that moves independently of the object based on the result of contact with the object by the operation unit 16 (step S22).
- For example, the robot device 100 determines that the object PT1 moves independently of the object OB21 in response to the contact of the second operation unit 16b with the object OB21 (object PT1).
- the robot device 100 operates an object according to the determination result (step S23).
- When the robot device 100 determines that the object has a portion that moves independently of the object, the robot device 100 executes a process of operating the object according to the number of operation units 16.
- the robot device 100 determines that the object OB21 has a portion (object PT1) that moves independently of the object OB21, and executes a process of operating the object according to the number of operation units 16. Since the robot device 100 has two operation units 16, the second operation unit 16b holds the object PT1, which is the lid, and the first operation unit 16a operates the object OB21, which is the teapot, to perform the action of pouring tea into another object (a cup).
- In this way, the robot device 100 determines whether the object has a portion that moves independently of the object, and can execute an appropriate operation according to the object by operating the object according to the result of the determination.
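- The probing procedure of steps S21 to S23 might look like the following sketch. The force limit, the difference-image segmentation call, and the two-arm assignment are hypothetical illustrations of the behavior described above, not the concrete control code of the robot device 100.

```python
CONTACT_FORCE_LIMIT = 1.0  # assumed upper limit on the probing contact force

def probe_and_operate(perception, arms, obj):
    """Sketch of steps S21-S23: probe for an independently moving portion, then operate accordingly."""
    probe_arm, work_arm = (arms[1], arms[0]) if len(arms) >= 2 else (arms[0], arms[0])
    before = perception.capture_image()
    probe_arm.touch(obj, max_force=CONTACT_FORCE_LIMIT)         # S21: contact with a force within the threshold
    after = perception.capture_image()
    moving_part = perception.segment_difference(before, after)  # extract and segment the part that moved
    if moving_part is not None and len(arms) >= 2:              # S22: an independently moving portion exists
        probe_arm.hold(moving_part)   # e.g. the second operation unit holds the lid
        work_arm.operate(obj)         # e.g. the first operation unit tilts the teapot body
    else:
        work_arm.operate(obj)         # S23: otherwise operate the object as a single body
```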
- FIG. 10 is a diagram showing a configuration example of the robot device according to the second embodiment of the present disclosure.
- the robot device 100A includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, and an operation unit 16.
- the operation unit 16 is provided at the base of the robot device 100A.
- the operation unit 16 is provided so as to extend from a base portion connected to the moving unit 15 of the robot device 100A.
- the operation unit 16 may be provided at a different position depending on the shape of the robot device 100A.
- FIG. 11 is a diagram showing an example of information processing according to the second embodiment.
- the information processing according to the second embodiment is realized by the robot device 100A shown in FIG. 10. A case where processing is performed on a group of objects in which a plurality of objects are stacked will be described as an example with reference to FIG. 11. In FIG. 11, the same points as in FIG. 1 will be omitted as appropriate. Since the processes of the states ST1 to ST3 and steps S1 to S4 shown in FIG. 11 are the same as those of FIG. 1, the description thereof will be omitted.
- the robot device 100A determines that the posture change predicted value of the object OB3 generated by the removal of the target object OB2 is equal to or more than the posture threshold value. Further, in the example of FIG. 11, the robot device 100A has only one operation unit 16. Therefore, the robot device 100A determines that the operation of the target object OB2 is impossible, and selects another object as a candidate for the operation target (step S31).
- the robot device 100A selects the object OB3 (hereinafter, also referred to as "target object OB3") as the target object from the object group of the remaining objects OB1 and OB3 other than the object OB2. That is, the robot device 100A executes the process for the object OB3, which is an adjacent object of the target object OB2.
- the robot device 100A removes the target object (step S2).
- the robot device 100A removes the target object OB3.
- the robot device 100A removes the target object OB3 from the image IM1 showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked.
- the robot device 100A performs processing on the state ST32 in which only the target object OB3 is removed from the object group of the objects OB1, OB2, and OB3.
- the robot device 100A predicts a change in the arrangement state of adjacent objects when the target object is removed (step S33).
- the robot device 100A predicts changes in the posture and position of adjacent objects when the target object is removed.
- the robot device 100A may process, as an adjacent object, not only an object that is in direct contact with the target object (contact object) but also an object that is in contact with the contact object, that is, an object that is in chain contact with the target object. As shown in the state ST1 of FIG. 11, only the object OB2 is in direct contact with the target object OB3, but the object OB1 is in contact with the object OB2 and is thus in chain contact with the target object OB3.
- Therefore, the robot device 100A processes the object OB1 as an adjacent object. In this way, the robot device 100A predicts a change in the arrangement state of the objects OB1 and OB2 when the target object OB3 is removed. The robot device 100A predicts changes in the posture and position of the object OB1 and changes in the posture and position of the object OB2 when the target object OB3 is removed.
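- One way to collect such chain-contact neighbors is to take the transitive closure of the pairwise contact relation, as in the sketch below. The in_contact predicate and the function name are hypothetical and are not defined in the present disclosure.

```python
def chain_contact_neighbors(target, objects, in_contact):
    """Collect objects in direct or chain contact with the target (in_contact is a hypothetical predicate)."""
    neighbors, frontier = set(), [target]
    while frontier:
        current = frontier.pop()
        for other in objects:
            if other is target or other is current or other in neighbors:
                continue
            if in_contact(current, other):
                neighbors.add(other)
                frontier.append(other)
    return neighbors
```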
- the robot device 100A predicts that the position and posture of the object OB2 will not change due to the removal of the target object OB3. Since the object OB2 is in a state of supporting the target object OB3, the robot device 100A predicts that the position and posture of the object OB2 will not be changed by removing the target object OB3. For example, the robot device 100A predicts that the posture change predicted value and the position change predicted value of the object OB2 when the target object OB3 is removed are 0.
- Similarly, the robot device 100A predicts that the position and posture of the object OB1 will not be changed by removing the target object OB3. Since the object OB1 is in a state of supporting the target object OB3 and the object OB2, the robot device 100A predicts that the position and posture of the object OB1 will not be changed by removing the target object OB3. For example, the robot device 100A predicts that the posture change predicted value and the position change predicted value of the object OB1 when the target object OB3 is removed are 0.
- the robot device 100A determines whether the movement of the peripheral objects (adjacent objects) of the target object is equal to or higher than the threshold value (step S4).
- the robot device 100A determines that the change in posture and position of both the object OB1 and the object OB2, which are adjacent objects of the target object OB3, is less than the threshold value.
- the robot device 100A executes a process related to the operation based on the determination result (step S5).
- the robot device 100A determines that the target object OB3 can be operated because the change in posture and position of both the object OB1 and the object OB2, which are adjacent objects of the target object OB3, is less than the threshold value.
- the operation unit 16 executes the operation of the target object OB3.
- the robot device 100A executes an operation such as moving the target object OB3.
- the robot device 100A completes the execution of all the operations of the objects OB1 to OB3 by executing the remaining operations in the order of the object OB2 and the object OB1.
- As described above, the robot device 100A executes a process of manipulating the target object and the adjacent object based on the change in the arrangement state of the adjacent object adjacent to the target object when the target object is removed. In this way, the robot device 100A can enable an appropriate operation on an object even when an adjacent object exists.
- FIG. 12 is a diagram showing a configuration example of the robot device according to the third embodiment of the present disclosure.
- the robot device 100B includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, a first operation unit 16a, a second operation unit 16b, and a third operation unit 16c.
- the robot device 100B has three operation units 16: a first operation unit 16a, a second operation unit 16b, and a third operation unit 16c.
- the first operation unit 16a, the second operation unit 16b, and the third operation unit 16c are provided on both side portions of the body portion (base portion) of the robot device 100B.
- the first operation unit 16a extends from the left side portion of the robot device 100B and functions as the left hand of the robot device 100B.
- the second operation unit 16b and the third operation unit 16c extend from the right side portion of the robot device 100B and function as the right hand of the robot device 100B.
- the operation units 16 may be provided at various positions depending on the number of the operation units 16 and the shape of the robot device 100B.
- the third operation unit 16c may be provided in the central portion of the base portion.
- the determination unit 136 of the robot device 100B determines that the objects OB1 to OB3 shown in the state ST1 in FIG. 1 can be operated even when the object OB1 is first selected as the target object.
- the determination unit 136 of the robot device 100B may also process, as an adjacent object, the object OB3 that is in contact with the object OB2 that is in contact with the object OB1.
- the robot device 100B executes a process of operating the objects OB2 and OB3 because the predicted posture change values of the objects OB2 and OB3 caused by the removal of the target object OB1 are equal to or higher than the posture threshold value.
- For example, the first operation unit 16a executes the operation of the target object OB1, the second operation unit 16b executes the operation of the object OB2, and the third operation unit 16c executes the operation of the object OB3.
- For example, the first operation unit 16a executes the operation of the target object OB1, the second operation unit 16b executes the operation of supporting the adjacent object OB2, and the third operation unit 16c executes the operation of supporting the adjacent object OB3.
- Further, the robot device 100B may drive the second operation unit 16b after an operation such as moving the target object OB1 to execute an operation of arranging the object OB2 at a stable position, and may drive the third operation unit 16c to execute an operation of arranging the object OB3 at a stable position.
- Further, the first operation unit 16a may execute an operation of holding the target object OB1, the second operation unit 16b may execute an operation of holding the object OB2, and the third operation unit 16c may execute an operation of holding the object OB3.
- In this case, the robot device 100B may carry the objects OB2 and OB3 together with the object OB1 to a desired position by the moving unit 15 while the operation units 16 hold the objects OB1 to OB3.
- As described above, the robot device 100B executes a process of manipulating the target object and the adjacent object based on the change in the arrangement state of the adjacent object adjacent to the target object when the target object is removed. In this way, the robot device 100B can enable an appropriate operation on an object even when an adjacent object exists.
- FIG. 13 is a diagram showing an example of information processing according to the third embodiment.
- the information processing according to the third embodiment is realized by the robot device 100B shown in FIG. 12. A case where a process of carrying a tray on which a plurality of objects are placed is performed will be described as an example with reference to FIG. 13. The same points as in the above examples will be omitted as appropriate.
- FIG. 13 shows a case where the robot device 100B carries the object OB40, which is a tray on which the objects OB41 to OB46, which are a plurality of dishes (tableware), are placed.
- When the robot device 100B places the food (objects OB41 to OB46) on the tray (object OB40) and serves the food, it is necessary to be careful not to spill the drink in a cup on the tray.
- In this case, the relationship is as follows: operation unit 16 (robot arm) → tray → cup → drink, and when viewed from the operation unit 16, the drink is a tertiary object.
- food and drink may spill due to vibration and external contact.
- the robot device 100B may label the pieces of tableware that are easily moved by an external force applied by the operation unit 16 (manipulator), in order of ease of movement.
- the robot device 100B may apply an external force from the operation unit 16 (manipulator) to the tableware, measure the movement of each object OB41 to OB46, and label the objects OB41 to OB46 in the order in which they are easy to move. Then, the robot device 100B may hold an easily movable object by the remaining operation units 16 other than the operation unit 16 necessary for holding the object OB40 which is a tray.
- In the example of FIG. 13, the objects are labeled as being easy to move in the order of the objects OB46, OB42, OB45, OB41, OB43, and OB44. In this way, the robot device 100B labels the objects OB46 and OB42, whose contents are liquid, as being easy to move.
- the robot device 100B holds the object OB40, which is a tray, by the second operation unit 16b, holds the object OB46, whose contents (liquid) are likely to spill, by the remaining first operation unit 16a, and holds the object OB42, whose contents (liquid) are likely to spill, by the third operation unit 16c. In this way, the robot device 100B can stably perform the serving task by holding down the unstable tableware with an operation unit 16 (manipulator) different from the operation unit 16 (manipulator) holding the tray.
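- The labeling and arm assignment described for the serving task could be sketched as follows. Applying a small probing force, measuring the resulting movement, and assigning the spare arms to the most easily moved items are hypothetical illustrations of the behavior above; the helper names are placeholders.

```python
def assign_arms_for_tray(tray, items, manipulators, measure_motion):
    """Sketch: hold the tray with one arm and the most easily moved tableware with the spare arms."""
    ranked = sorted(items, key=measure_motion, reverse=True)  # label tableware in order of ease of movement
    tray_arm, *spare_arms = manipulators
    tray_arm.hold(tray)                                       # e.g. the second operation unit 16b holds the tray
    for arm, item in zip(spare_arms, ranked):                 # remaining arms hold the unstable tableware
        arm.hold(item)
    return ranked
```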
- FIG. 14 is a diagram showing a configuration example of an information processing system according to a modified example of the present disclosure.
- FIG. 15 is a diagram showing a configuration example of an information processing device according to a modified example of the present disclosure.
- the information processing system 1 includes a robot device 10 and an information processing device 100C.
- the robot device 10 and the information processing device 100C are connected to each other via a network N so as to be communicable by wire or wirelessly.
- the information processing system 1 shown in FIG. 14 may include a plurality of robot devices 10 and a plurality of information processing devices 100C.
- the information processing device 100C may communicate with the robot device 10 via the network N and give an instruction to control the robot device 10 based on the information collected by the robot device 10 and various sensors.
- the robot device 10 transmits sensor information detected by sensors such as an image sensor and a force sensor to the information processing device 100C.
- the robot device 10 transmits image information obtained by capturing an image of a group of objects by an image sensor to the information processing device 100C.
- the information processing apparatus 100C acquires image information including a group of objects.
- the robot device 10 may be any device as long as information can be transmitted and received to and from the information processing device 100C, and may be various robots such as an autonomous mobile robot.
- the information processing device 100C is an information processing device that transmits information (control information) for controlling the robot device 10, such as an action plan, to the robot device 10. For example, the information processing device 100C generates control information for controlling the robot device 10, such as an action plan of the robot device 10, based on the information stored in the storage unit 12C and the information acquired from the robot device 10. The information processing device 100C transmits the generated control information to the robot device 10. The robot device 10 that has received the control information from the information processing device 100C controls the moving unit 15 based on the control information to move, or controls the operation unit 16 based on the control information to operate an object.
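- In this modified example, the division of roles between the robot device 10 and the information processing device 100C amounts to a simple request/response exchange over the network N. The following sketch only illustrates that exchange from the information processing device 100C side; the message format and method names are assumptions, not part of the present disclosure.

```python
def control_service_loop(connection, planner):
    """Sketch of the information processing device 100C side: receive sensor information, return control information."""
    while True:
        sensor_data = connection.receive()        # image / force sensor information from the robot device 10
        control_info = planner.plan(sensor_data)  # e.g. an action plan including adjacent-object handling
        connection.send(control_info)             # transmitted to the robot device 10 via the network N
```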
- the information processing device 100C includes a communication unit 11C, a storage unit 12C, and a control unit 13C.
- the communication unit 11C is connected to the network N (Internet or the like) by wire or wirelessly, and transmits / receives information to / from the robot device 10 via the network N.
- the storage unit 12C is realized by, for example, a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk.
- the storage unit 12C stores the same information as the storage unit 12.
- the storage unit 12C has a threshold information storage unit 121 and a density information storage unit 122.
- the storage unit 12C stores information for controlling the movement of the robot device 10, various information received from the robot device 10, and various information to be transmitted to the robot device 10.
- the control unit 13C is realized by, for example, a CPU, an MPU, or the like executing a program stored inside the information processing device 100C (for example, an information processing program according to the present disclosure) using a RAM or the like as a work area. Further, the control unit 13C may be realized by an integrated circuit such as an ASIC or FPGA.
- the control unit 13C includes an acquisition unit 131, an analysis unit 132, a classification unit 133, a selection unit 134, a prediction unit 135, a determination unit 136, a planning unit 137, and a transmission unit 138C.
- the transmission unit 138C transmits various information to an external information processing device.
- the transmission unit 138C transmits various information to the robot device 10.
- the transmission unit 138C provides the information stored in the storage unit 12.
- the transmission unit 138C transmits the information stored in the storage unit 12.
- the transmission unit 138C provides various information based on the information from the robot device 10.
- the transmission unit 138C provides various information based on the information stored in the storage unit 12.
- the transmission unit 138C transmits the control information to the robot device 10.
- the transmission unit 138C transmits the action plan created by the action planning unit to the robot device 10.
- the transmission unit 138C executes a process of transmitting control information to the robot device 10 in order to cause the robot device 10 to operate an adjacent object.
- the transmission unit 138C functions as an execution unit that executes a process of manipulating an adjacent object by executing a process of transmitting control information to the robot device 10.
- the information processing device 100C does not have a sensor unit, a moving unit, an operation unit, or the like, and does not have to have a configuration for realizing a function as a robot device.
- the information processing device 100C may have an input unit (for example, a keyboard or a mouse) that receives various operations from an administrator or the like who manages the information processing device 100C, and a display unit (for example, a liquid crystal display) for displaying various information.
- Each component of each device shown in the figures is a functional concept, and does not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of each device is not limited to the one shown in the figures, and all or part of each device can be functionally or physically distributed and integrated in arbitrary units according to various loads and usage conditions.
- As described above, the information processing device according to the present disclosure includes a prediction unit (the prediction unit 135 in the embodiment) and an execution unit (the execution unit 138 in the embodiment).
- the prediction unit predicts a change in the arrangement state of an adjacent object caused by an operation on a target object, based on image information obtained by imaging the target object, which is a candidate object to be operated, and the adjacent object, which is an object adjacent to the target object.
- the execution unit executes a process of manipulating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
- As described above, when the target object, which is a candidate object to be operated, has an adjacent object, the information processing apparatus according to the present disclosure predicts a change in the arrangement state of the adjacent object caused by an operation on the target object, based on image information obtained by imaging the target object and the adjacent object. For example, the information processing apparatus predicts a change in the arrangement state of the adjacent object caused by removing the target object, changing the position of the target object, or changing the posture of the target object. Then, when the change in the arrangement state of the adjacent object satisfies a predetermined condition, the information processing apparatus executes a process of operating the adjacent object, and thus can enable an appropriate operation on an object even when an adjacent object exists.
- Further, the execution unit executes the process of operating the adjacent object when the amount of change in the arrangement state of the adjacent object is equal to or greater than a threshold value.
- In this way, the information processing apparatus can enable an appropriate operation on an object even when an adjacent object exists by executing the process of operating the adjacent object.
- Further, the prediction unit predicts a change in the posture of the adjacent object caused by the operation on the target object.
- the execution unit executes the process of operating the adjacent object when the change in the posture of the adjacent object satisfies the condition regarding the posture change.
- In this way, the information processing apparatus predicts the change in the posture of the adjacent object, and executes the process of operating the adjacent object when the change in the posture of the adjacent object satisfies the condition regarding the posture change, so that an appropriate operation on an object can be enabled even when an adjacent object exists.
- the prediction unit predicts changes in the position of adjacent objects caused by operations on the target object.
- the execution unit executes a process of manipulating the adjacent object when the change in the position of the adjacent object satisfies the condition regarding the position change.
- In this way, the information processing apparatus predicts the change in the position of the adjacent object, and executes the process of operating the adjacent object when the change in the position of the adjacent object satisfies the condition regarding the position change, so that an appropriate operation on an object can be enabled even when an adjacent object exists.
- the prediction unit predicts changes in the arrangement state of the adjacent object caused by the operation on the target object based on the image information obtained by capturing the image information of the target object and the adjacent object in contact with the target object.
- In this way, when the change in the arrangement state of the adjacent object in contact with the target object satisfies a predetermined condition, the information processing apparatus executes the process of operating the adjacent object, so that an appropriate operation can be enabled even when an object in contact with the target object exists.
- Further, the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object, based on image information obtained by imaging the target object and the adjacent object that are stacked. In this way, when the change in the arrangement state of the adjacent object stacked with the target object satisfies a predetermined condition, the information processing apparatus executes the process of operating the adjacent object, so that an appropriate operation can be enabled even when stacked objects exist.
- Further, the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object, based on image information obtained by imaging the target object and an adjacent object located within the range affected by the operation on the target object. In this way, when the change in the arrangement state of the adjacent object located within the range affected by the operation on the target object satisfies a predetermined condition, the information processing apparatus executes the process of operating the adjacent object, so that an appropriate operation on an object can be enabled even when an adjacent object exists.
- the information processing device includes an operation unit (operation unit 16 in the embodiment).
- the operation unit is driven according to the processing by the execution unit.
- In this way, the information processing apparatus can perform an appropriate operation on an object even when an adjacent object exists, by the operation unit driven according to the processing by the execution unit.
- the operation unit operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
- In this way, the information processing apparatus can perform an appropriate operation on an object even when an adjacent object exists, because the operation unit operates the adjacent object.
- the information processing device includes a plurality of operation units (in the embodiment, the first operation unit 16a, the second operation unit 16b, and the third operation unit 16c).
- the plurality of operation units are driven according to the processing by the execution unit.
- the information processing apparatus can perform an appropriate operation on the object even when there are adjacent objects by the plurality of operation units driven by the processing by the execution unit.
- At least one of the plurality of operation units operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
- In this way, the information processing apparatus can perform an appropriate operation on an object even when an adjacent object exists, because at least one of the plurality of operation units operates the adjacent object.
- Further, when the change in the arrangement state of the adjacent object satisfies a predetermined condition, the execution unit executes a process of causing one of the plurality of operation units to operate the target object and a process of causing another of the plurality of operation units to operate the adjacent object.
- In this way, the information processing apparatus can enable an appropriate operation on an object even when an adjacent object exists, by causing one operation unit of the plurality of operation units to operate the target object and causing another operation unit to operate the adjacent object.
- Further, the execution unit executes a process of causing the one operation unit to move the target object.
- In this way, the information processing apparatus can perform an appropriate operation on an object even when an adjacent object exists, by executing the process of moving the target object.
- Further, the execution unit executes a process of causing the other operation unit to suppress the change in the arrangement state of the adjacent object due to the movement of the target object.
- In this way, the information processing apparatus can perform an appropriate operation on an object even when an adjacent object exists, by executing the process of suppressing the change in the arrangement state of the adjacent object due to the movement of the target object.
- the execution unit executes a process of causing another operation unit to support an adjacent object.
- In this way, the information processing apparatus can perform an appropriate operation on an object even when an adjacent object exists, by executing the process of supporting the adjacent object.
- Further, the execution unit executes a process of causing the other operation unit to move the adjacent object.
- the information processing apparatus can perform an appropriate operation on the object even when the adjacent object exists by executing the process of moving the adjacent object.
- the information processing device includes a force sensor (force sensor 142 in the embodiment).
- the force sensor detects the contact of the operation unit with an object.
- the execution unit executes a process of bringing the operation unit into contact with an object based on the sensor information detected by the force sensor.
- the information processing apparatus can acquire information on the state of the object by detecting the contact with the object by the operation unit, so that it is possible to perform an appropriate operation according to the object.
- the information processing device includes a determination unit (determination unit 136 in the embodiment).
- the determination unit determines whether or not the object has a portion that moves independently of the object based on the result of contact with the object by the operation unit.
- When the determination unit determines that the object has such a portion, the execution unit executes a process of operating the object according to the number of operation units.
- the information processing device determines whether or not the object has a portion that moves independently of the object based on the result of contact with the object by the operation unit, and corresponds to the determination result and the number of operation units. By executing the process of operating the object, it is possible to perform an appropriate operation according to the state of the object and the number of operation units.
- FIG. 16 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of information processing devices such as robot devices 100, 100A, 100B and information processing device 100C.
- the computer 1000 includes a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600. Each part of the computer 1000 is connected by a bus 1050.
- the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
- the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program that depends on the hardware of the computer 1000, and the like.
- the HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100 and data used by the program.
- the HDD 1400 is a recording medium for recording an information processing program according to the present disclosure, which is an example of program data 1450.
- the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
- the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
- the input / output interface 1600 is an interface for connecting the input / output device 1650 and the computer 1000.
- the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600. Further, the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
- the media is, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, or a semiconductor memory.
- For example, when the computer 1000 functions as the robot device 100 according to the embodiment, the CPU 1100 of the computer 1000 realizes the functions of the control unit 13 and the like by executing the information processing program loaded on the RAM 1200. Further, the HDD 1400 stores the information processing program according to the present disclosure and the data in the storage unit 12. The CPU 1100 reads the program data 1450 from the HDD 1400 and executes it.
- the present technology can also have the following configurations.
- (1) An information processing device comprising: a prediction unit that predicts, based on image information obtained by imaging a target object which is a candidate object to be operated and an adjacent object which is an object adjacent to the target object, a change in the arrangement state of the adjacent object caused by an operation on the target object; and an execution unit that executes a process of operating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
- (2) The information processing apparatus according to (1), wherein the execution unit executes the process of operating the adjacent object when the amount of change in the arrangement state of the adjacent object is equal to or greater than a threshold value.
- (3) The information processing apparatus according to (1) or (2), wherein the prediction unit predicts a change in the posture of the adjacent object caused by the operation on the target object, and the execution unit executes the process of operating the adjacent object when the change in the posture of the adjacent object satisfies a condition regarding the posture change.
- (4) The information processing apparatus according to any one of (1) to (3), wherein the prediction unit predicts a change in the position of the adjacent object caused by the operation on the target object, and the execution unit executes the process of operating the adjacent object when the change in the position of the adjacent object satisfies a condition regarding the position change.
- (5) The information processing apparatus according to any one of (1) to (4), wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on image information obtained by imaging the target object and an adjacent object in contact with the target object.
- (6) The information processing apparatus according to any one of (1) to (5), wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on image information obtained by imaging the target object and the adjacent object that are stacked.
- (7) The information processing apparatus according to any one of (1) to (6), wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on image information obtained by imaging the target object and an adjacent object located within a range affected by the operation on the target object.
- (8) The information processing apparatus according to any one of (1) to (7), further comprising: an image sensor that captures the image information; and an acquisition unit that acquires the image information captured by the image sensor.
- (9) The information processing apparatus according to any one of (1) to (8), further comprising an operation unit that is driven according to processing by the execution unit.
- (10) The information processing apparatus according to (9), wherein the operation unit is a manipulator that operates an object.
- (11) The information processing apparatus according to (9) or (10), wherein the operation unit operates the adjacent object when the change in the arrangement state of the adjacent object satisfies a predetermined condition.
- (12) The information processing apparatus according to any one of (1) to (8), further comprising a plurality of operation units driven according to processing by the execution unit.
- (13) The information processing apparatus according to (12), wherein each of the plurality of operation units is a manipulator that operates an object.
- (14) The information processing apparatus according to (12) or (13), wherein at least one of the plurality of operation units operates the adjacent object when the change in the arrangement state of the adjacent object satisfies a predetermined condition.
- (15) The information processing apparatus according to any one of (12) to (14), wherein, when the change in the arrangement state of the adjacent object satisfies a predetermined condition, the execution unit executes a process of causing one of the plurality of operation units to operate the target object and a process of causing another of the plurality of operation units to operate the adjacent object.
- (16) The information processing apparatus according to (15), wherein the execution unit executes a process of causing the one operation unit to move the target object.
- (17) The information processing apparatus according to (16), wherein the execution unit executes a process of causing the other operation unit to suppress the change in the arrangement state of the adjacent object due to the movement of the target object.
- (18) The information processing apparatus according to (16) or (17), wherein the execution unit executes a process of causing the other operation unit to support the adjacent object.
- (19) The information processing apparatus according to (16) or (17), wherein the execution unit executes a process of causing the other operation unit to move the adjacent object.
- (20) The information processing apparatus according to any one of (9) to (19), further comprising a force sensor that detects contact of the operation unit with an object, wherein the execution unit executes a process of bringing the operation unit into contact with the object based on sensor information detected by the force sensor.
- (21) The information processing apparatus according to (20), further comprising a determination unit that determines whether or not the object has a portion that moves independently of the object based on a result of the contact of the operation unit with the object, wherein the execution unit executes a process of operating the object according to the number of operation units when the determination unit determines that the object has the portion.
- (22) The information processing apparatus according to any one of (1) to (21), further comprising: a classification unit that classifies a group of objects included in the image information into operable objects and inoperable objects; and a selection unit that selects an operable object from the object group as the target object based on a classification result by the classification unit, wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object selected by the selection unit.
- (23) An information processing method comprising: predicting, based on image information obtained by imaging a target object which is a candidate object to be operated and an adjacent object which is an object adjacent to the target object, a change in the arrangement state of the adjacent object caused by an operation on the target object; and executing control to execute a process of operating the adjacent object when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The information processing device according to the present disclosure is provided with: a prediction unit that predicts, on the basis of image information obtained by photographing a target object which is a candidate for an operation target and an adjacent object which is adjacent to the target object, a change related to an arrangement state of the adjacent object caused by an operation with respect to the target object; and an execution unit that executes a process for operating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
Description
This disclosure relates to an information processing device, an information processing method, and an information processing program.
Technology related to autonomous robots that autonomously clean up articles (objects) and change the placement position of objects according to the situation is known. For example, by recognizing an object to be operated by an electronic tag or the like, an operation on the object is executed (Patent Document 1).
According to the prior art, when changing the arrangement position of an object, the installation position when another article exists at the specified installation position is corrected.
However, in the above-mentioned prior art, the installation position is only corrected when another article exists in the changed installation position, and the object in which the adjacent object exists in the state before the change is operated as a target. The case is not considered. Therefore, in the above-mentioned conventional technique, there is a possibility that an object that is difficult to operate at that time, such as an object that needs to operate an adjacent object first, may be an operation target.
Therefore, the present disclosure proposes an information processing device, an information processing method, and an information processing program that enable appropriate operation on an object even when an adjacent object exists.
In order to solve the above problems, in one form of the information processing apparatus according to the present disclosure, an object object which is a candidate object to be operated and an adjacent object which is an object adjacent to the object object are imaged. When the prediction unit that predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on the image information and the change in the arrangement state of the adjacent object predicted by the prediction unit satisfy a predetermined condition. , An execution unit that executes a process of manipulating the adjacent object.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. The information processing apparatus, information processing method, and information processing program according to the present application are not limited by this embodiment. Further, in each of the following embodiments, duplicate description will be omitted by assigning the same reference numerals to the same parts.
The present disclosure will be described according to the order of items shown below.
1. First Embodiment
1-1. Outline of information processing according to the first embodiment of the present disclosure
1-1-1. Processing example
1-1-2. Manipulable object judgment
1-1-3. Manipulation of object group
1-1-4. Outline and effects of robot devices
1-2. Configuration of the robot device according to the first embodiment
1-3. Information processing procedure according to the first embodiment
1-3-1. Flowchart showing the outline of the information processing procedure
1-3-2. Flowchart showing details of the information processing procedure
1-4. Conceptual diagram of the configuration of the information processing device
1-5. Processing example of Nth-order object operation
2. Second Embodiment
2-1. Configuration of the robot device according to the second embodiment of the present disclosure
2-2. Outline of information processing according to the second embodiment
3. Third Embodiment
3-1. Configuration of the robot device according to the third embodiment of the present disclosure
3-2. Outline of information processing according to the third embodiment
4. Other Embodiments
4-1. Other configuration examples
4-2. Others
5. Effect of this disclosure
6. Hardware configuration
[1. First Embodiment]

[1-1. Outline of information processing according to the first embodiment of the present disclosure]

FIG. 1 is a diagram showing an example of information processing according to the first embodiment of the present disclosure. The information processing according to the first embodiment of the present disclosure is realized by the robot device 100 shown in FIG. 1.
The robot device 100 is an information processing device that executes information processing according to the first embodiment. The robot device 100 is an autonomous robot that has a moving unit 15 having a function for moving a position and can move to a desired position. Further, the robot device 100 has two operation units (manipulators), a first operation unit 16a and a second operation unit 16b. In the following, when the first operation unit 16a and the second operation unit 16b are described without distinction, they may be described as "operation unit 16". The number of operation units 16 included in the robot device 100 is not limited to two, and may be one or three or more. Details of this point will be described later.
The robot device 100 is an information processing device that executes a process of manipulating an object based on image information (also simply referred to as an "image") detected (captured) by an image sensor 141 (see FIG. 3). The robot device 100 selects a target object that is a candidate for an operation target from the objects in the image, predicts a change in the arrangement state of an adjacent object caused by an operation on the target object, and executes a process of manipulating the adjacent object when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition. In the example of FIG. 1, an object in contact with the target object is described as an example of an adjacent object; however, an adjacent object is not limited to an object in contact with the target object and may be an object located within a predetermined range from the target object. For example, the adjacent object may be an object located within the range affected by the removal of the target object. For example, when the target object is a magnetic object, the adjacent object may be an object located within the range affected by the magnetism of the target object. Further, in the example of FIG. 1, the removal of the target object is described as an example of an operation on the target object, and the change in the arrangement state of the adjacent objects caused by the removal of the target object is predicted. Note that the operation on the target object is not limited to removing the target object; it is a concept that includes various operations that can affect adjacent objects, such as changing the position of the target object or changing the posture of the target object.
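As an illustration only, the broadened notion of an "adjacent object" described above (an object in contact with the target object, or an object within a predetermined range such as the range affected by its removal) can be expressed as a small predicate. The sketch below is not part of the disclosed configuration: the axis-aligned box object model, the gap computation, and the influence_range parameter are assumptions introduced for the example.

    # Sketch of the broadened "adjacent object" test: in contact with the target
    # object, or within a predetermined influence range of it.  The box model and
    # the range value are illustrative assumptions.
    import math

    def box_gap(center_a, size_a, center_b, size_b):
        """Smallest gap between two axis-aligned boxes (0.0 if they touch or overlap)."""
        gaps = []
        for ca, sa, cb, sb in zip(center_a, size_a, center_b, size_b):
            gaps.append(max(0.0, abs(ca - cb) - (sa + sb) / 2.0))
        return math.sqrt(sum(g * g for g in gaps))

    def is_adjacent(target, other, influence_range=0.05):
        """True if 'other' touches the target or lies within the influence range [m]."""
        gap = box_gap(target["center"], target["size"], other["center"], other["size"])
        return gap <= influence_range

A contact-only definition corresponds to influence_range = 0; a non-zero range covers cases such as objects that would be affected by the removal of the target object.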
[1-1-1. Processing example]

From here, a case where processing is performed on an object group in which a plurality of objects are stacked will be described as an example with reference to FIG. 1. FIG. 1 shows processing for a state ST1 in which three objects OB1, OB2, and OB3, which are books, are stacked. The robot device 100 captures the state ST1 with the image sensor 141 and acquires an image (hereinafter sometimes referred to as "image IM1") showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked. By analyzing the image IM1 with a technique such as image analysis, the robot device 100 identifies that the image IM1 includes the objects OB1, OB2, and OB3.
First, the robot device 100 selects an object (target object) that is a candidate for the operation target (step S1). For example, the robot device 100 randomly selects a target object from the group of objects OB1, OB2, and OB3. In the example of FIG. 1, the robot device 100 selects the object OB2 (hereinafter also referred to as "target object OB2") as the target object from the group of objects OB1, OB2, and OB3. Note that the robot device 100 selects, as an operation target, an object whose weight is estimated to be less than a predetermined threshold value; this point will be described with reference to FIG. 2. The robot device 100 recognizes the physical contact state between the target object OB2 and the objects (adjacent objects) around it. By analyzing the image IM1, the robot device 100 recognizes that the target object OB2 is in contact with the object OB1 and the object OB3.

Then, the robot device 100 removes the target object (step S2). In the example of FIG. 1, the robot device 100 removes the target object OB2 from the image IM1 showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked. By removing the target object OB2 from the image IM1, the robot device 100 performs processing on the state ST2 in which only the target object OB2 has been removed from the group of objects OB1, OB2, and OB3.

The robot device 100 predicts a change in the arrangement state of the adjacent objects when the target object is removed (step S3). The robot device 100 predicts changes in the posture and position of the adjacent objects when the target object is removed, appropriately using various techniques related to physics simulation. For example, the robot device 100 estimates the center of gravity of each object from its shape data (W [width], D [depth], H [height]) based on the image, detects the direction of gravity, and then predicts the posture and position of the adjacent objects with a built-in physical model simulator. The above is only an example; the robot device 100 may perform the prediction with any information or method as long as the posture and position of the adjacent objects after removal of the target object can be predicted.
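Purely as a reading aid, the prediction step can be pictured as follows. The sketch below is a deliberately simplified, self-contained stand-in for the physical model simulator mentioned above: objects are approximated as axis-aligned boxes whose size and center were estimated from the image, and an adjacent object is predicted to change its arrangement only when the object that supports it from below is removed. The class, function, and parameter names are assumptions made for the example.

    # Simplified stand-in for the prediction of step S3.  A real implementation
    # would use a physics simulator; here an object is predicted to move only if
    # the removed target was its sole support from below.
    from dataclasses import dataclass

    @dataclass
    class Box:
        name: str
        center: tuple   # (x, y, z) estimated from the image
        size: tuple     # (W, D, H) estimated from the image

        @property
        def top(self):
            return self.center[2] + self.size[2] / 2.0

        @property
        def bottom(self):
            return self.center[2] - self.size[2] / 2.0

    def overlaps_horizontally(a: Box, b: Box) -> bool:
        return (abs(a.center[0] - b.center[0]) < (a.size[0] + b.size[0]) / 2.0 and
                abs(a.center[1] - b.center[1]) < (a.size[1] + b.size[1]) / 2.0)

    def rests_on(upper: Box, lower: Box, tol: float = 0.005) -> bool:
        return abs(upper.bottom - lower.top) < tol and overlaps_horizontally(upper, lower)

    def predict_changes_after_removal(objects, target_name):
        """Return a coarse predicted position change for each remaining object."""
        target = next(o for o in objects if o.name == target_name)
        remaining = [o for o in objects if o.name != target_name]
        changes = {}
        for o in remaining:
            only_support_is_target = (rests_on(o, target) and
                                      not any(rests_on(o, s) for s in remaining if s is not o))
            # Predicted drop height if the sole support disappears, otherwise 0.0.
            changes[o.name] = (o.bottom - target.bottom) if only_support_is_target else 0.0
        return changes

For the stack of FIG. 1, such a check reports a non-zero predicted change only for the object resting on the target (OB3) and a change of 0 for the object below it (OB1), which matches the qualitative prediction described next.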
For example, the robot device 100 predicts the amount of posture change (posture change predicted value) and the amount of position change (position change predicted value) of an adjacent object when the target object is removed. In the example of FIG. 1, the robot device 100 predicts the change in the arrangement state of the objects OB1 and OB3 when the target object OB2 is removed, that is, the changes in the posture and position of the object OB1 and the changes in the posture and position of the object OB3.

Since the object OB1 is located below the target object OB2 and is in a state of supporting the target object OB2, the robot device 100 predicts that the position and posture of the object OB1 will not change when the target object OB2 is removed. For example, the robot device 100 predicts that the posture change predicted value and the position change predicted value of the object OB1 when the target object OB2 is removed are 0.

On the other hand, since the object OB3 is located on the target object OB2 and is in a state of being supported by the target object OB2, the robot device 100 predicts that the position and posture of the object OB3 will change when the target object OB2 is removed, and also predicts the amount of that change. For example, the robot device 100 predicts the posture change predicted value and the position change predicted value of the object OB3 when the target object OB2 is removed. In this way, when the target object OB2 is removed from the image IM1, the robot device 100 predicts that the adjacent object OB3 will change in posture and position, as shown in the state ST3.

Then, the robot device 100 determines whether the predicted movement of the adjacent (surrounding) objects is equal to or greater than a threshold value (step S4). That is, the robot device 100 determines whether the change in the posture or position of an adjacent object is equal to or greater than the threshold value. For example, the robot device 100 makes this determination using a threshold value related to posture (posture threshold value) and a threshold value related to position (position threshold value) stored in the storage unit 12 (see FIG. 4).

The robot device 100 executes processing related to the operation based on the determination result (step S5). The robot device 100 compares the predicted value of the amount of posture change of an adjacent object caused by the removal of the target object (posture change predicted value) with the posture threshold value, and executes a process of manipulating the adjacent object when the posture change predicted value is equal to or greater than the posture threshold value. Similarly, the robot device 100 compares the predicted value of the amount of position change of the adjacent object caused by the removal of the target object (position change predicted value) with the position threshold value, and executes a process of manipulating the adjacent object when the position change predicted value is equal to or greater than the position threshold value. Note that the robot device 100 may execute the process of manipulating the adjacent object when either the posture change or the position change is equal to or greater than its threshold value as described above, or only when both the posture change and the position change are equal to or greater than their threshold values. Further, the robot device 100 may compare a combined change amount obtained by combining the posture change and the position change with a predetermined threshold value (such as the threshold value TH2 in FIG. 4), and execute the process of manipulating the adjacent object when the combined change amount is equal to or greater than the predetermined threshold value.
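As one hedged example, the judgment of steps S4 and S5 can be summarized as a small decision function. The three policies below mirror the variants left open in the description (either value exceeding its threshold, both values exceeding, or a combined change amount compared with a single threshold such as TH2); the function name, the policy names, and the combination rule are assumptions made for the example.

    # Sketch of the threshold judgment in steps S4/S5.
    def needs_adjacent_manipulation(posture_change, position_change,
                                    posture_threshold, position_threshold,
                                    combined_threshold=None, policy="either"):
        if policy == "either":
            return (posture_change >= posture_threshold or
                    position_change >= position_threshold)
        if policy == "both":
            return (posture_change >= posture_threshold and
                    position_change >= position_threshold)
        if policy == "combined":
            # One possible combination rule; the description does not fix the formula.
            return (posture_change + position_change) >= combined_threshold
        raise ValueError("unknown policy: " + policy)

    # Example corresponding to FIG. 1 (the numeric values are made up):
    print(needs_adjacent_manipulation(0.0, 0.0, 0.3, 0.02))   # OB1 -> False
    print(needs_adjacent_manipulation(0.8, 0.05, 0.3, 0.02))  # OB3 -> True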
In the example of FIG. 1, the robot device 100 does not execute a process of manipulating the object OB1, because the posture change predicted value of the object OB1 caused by the removal of the target object OB2 is less than the posture threshold value and its position change predicted value is less than the position threshold value.

Also in the example of FIG. 1, the robot device 100 executes a process of manipulating the object OB3, because the posture change predicted value of the object OB3 caused by the removal of the target object OB2 is equal to or greater than the posture threshold value. For example, the robot device 100 operates the target object OB2 with the first operation unit 16a and operates the object OB3 with the second operation unit 16b. For example, the robot device 100 executes the operation on the target object OB2 with the first operation unit 16a while executing an operation of supporting the adjacent object OB3 with the second operation unit 16b. In this way, the robot device 100 causes the second operation unit 16b to execute a process of suppressing a change in the arrangement state of the adjacent object OB3 due to the movement of the target object, and causes the first operation unit 16a to execute a process of moving the target object OB2. In this case, after an operation such as moving the target object OB2, the robot device 100 may drive the second operation unit 16b and execute an operation of placing the object OB3 in a stable position.

Further, the robot device 100 may execute an operation of holding the target object OB2 with the first operation unit 16a and an operation of holding the object OB3 with the second operation unit 16b, and carry the object OB3 together with the object OB2 to a desired position by means of the moving unit 15. That is, the robot device 100 may move the target object OB2 with the first operation unit 16a and move the adjacent object OB3 with the second operation unit 16b.

As described above, the robot device 100 executes a process of manipulating the target object and its adjacent objects based on the change in the arrangement state of the adjacent objects that would occur when the target object is removed. In this way, the robot device 100 enables an appropriate operation on an object even when an adjacent object exists.
[1-1-2. Manipulable object determination]

From here, an outline of the determination of manipulable objects will be described with reference to FIG. 2. FIG. 2 is a diagram showing an example of the determination of a manipulable object according to the first embodiment. The robot device 100 determines manipulable objects based on the image.
The robot device 100 detects (captures) an image including an unknown object group (unknown object group SG1) with the image sensor 141 (step S11). Based on the image detected by the image sensor 141, the robot device 100 recognizes the unknown object group SG1. In the example of FIG. 2, the robot device 100 recognizes the unknown object group SG1 including a bookshelf and a plurality of stored books.

The robot device 100 first segments the image information of the stacked unknown object group into individual objects, appropriately using various techniques related to image segmentation. In the example of FIG. 2, the robot device 100 segments the unknown object group SG1 into a book group SG11 including a plurality of books such as the objects OB11 to OB17, a book group SG12 including a plurality of books, a book group SG13 including a plurality of books, the object OB10 which is a bookshelf, and so on. Note that the robot device 100 segments the book group SG11 into each of the objects OB11 to OB17, and similarly segments the book groups SG12 and SG13 into individual books.

Then, the robot device 100 classifies the unknown object group SG1 (step S12). Using a threshold value such as the payload capacity of the manipulator (for example, the value "Wload" of the threshold value TH1 in FIG. 4), the robot device 100 classifies (sorts) the objects in the unknown object group SG1 into either an object group G0 that cannot be moved with an external force within the threshold value or an object group G1 that can be moved with an external force within the threshold value.

The robot device 100 compares the weight of each object included in the unknown object group SG1 with "Wload", and classifies an object whose weight exceeds "Wload" into the object group G0 as an inoperable object. The robot device 100 also classifies an object whose weight is equal to or less than "Wload" into the object group G1 as a manipulable object.
Here, since the mass of each object included in the unknown object group SG1 is unknown information, the robot device 100 estimates it from the image data. The robot device 100 recognizes each object of the unknown object group SG1 included in the image detected by the image sensor 141, appropriately using various techniques related to object recognition such as general object recognition. The robot device 100 estimates the shape data (W [width], D [depth], H [height]) of an object extracted from the image, estimates the material of the object, and estimates the density ρ of the object. The robot device 100 then calculates the estimated weight of the object using the estimated shape data (W [width], D [depth], H [height]) and the density ρ. For example, the robot device 100 calculates the estimated weight "Wp" by multiplying the width, depth, height, and density of the object, as in the following formula (1).
Wp = ρWDH ... (1)
If the density cannot be estimated, the robot device 100 holds the average density of the environment as data in advance and estimates the weight using that average density. For example, the robot device 100 uses the value "VL1" of the environmental average density DS1 stored in the density information storage unit 122 (see FIG. 5) instead of the estimated density ρ of the object to estimate the weight of the object.

Then, the robot device 100 compares the calculated estimated weight "Wp" of each object with the threshold value "Wload", determines whether each object can be manipulated, and classifies each object into either the manipulable object group G1 or the inoperable object group G0.
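As an illustrative sketch only, formula (1) and the sorting into the object groups G0 and G1 can be written compactly as follows. W_LOAD and AVERAGE_DENSITY stand in for the threshold value TH1 ("Wload") and the environmental average density DS1 ("VL1"); their numeric values here are placeholders, not values disclosed in the embodiment.

    # Sketch of the weight estimation (formula (1)) and the G0/G1 classification.
    W_LOAD = 3.0             # [kg] assumed manipulator payload (stands in for "Wload")
    AVERAGE_DENSITY = 500.0  # [kg/m^3] assumed environmental average density ("VL1")

    def estimated_weight(width, depth, height, density=None):
        """Formula (1): Wp = rho * W * D * H, with a fallback to the average density."""
        rho = density if density is not None else AVERAGE_DENSITY
        return rho * width * depth * height

    def classify(objects):
        """Split objects into G1 (manipulable) and G0 (inoperable) by estimated weight."""
        g1, g0 = [], []
        for name, (w, d, h, rho) in objects.items():
            (g1 if estimated_weight(w, d, h, rho) <= W_LOAD else g0).append(name)
        return g1, g0

    books = {"OB11": (0.15, 0.03, 0.21, 700.0),   # a book: well below W_LOAD
             "OB10": (0.80, 0.30, 1.80, None)}    # a bookshelf: far above W_LOAD
    print(classify(books))  # (['OB11'], ['OB10'])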
In the example of FIG. 2, the robot device 100 estimates the weight of each book, such as the objects OB11 to OB17 included in the book groups SG11 to SG13, and determines that the estimated weight is equal to or less than the threshold value "Wload". As a result, the robot device 100 classifies each book, such as the objects OB11 to OB17 included in the book groups SG11 to SG13, into the manipulable object group G1.

Further, the robot device 100 estimates the weight of the object OB10, which is a bookshelf, and determines that the estimated weight is greater than the threshold value "Wload". As a result, the robot device 100 classifies the object OB10, which is a bookshelf, into the inoperable object group G0. The robot device 100 then selects the target object to be operated from among the objects belonging to the object group G1.

Note that the robot device 100 basically takes only the objects in the object group G1 as operation targets; however, when there are few objects belonging to the object group G1, it may request support from another robot or perform cooperative work with a plurality of manipulators. When there are few objects belonging to the object group G1, the robot device 100 may select an object belonging to the object group G0 as an operation target and request support from another robot or perform cooperative work with a plurality of manipulators.

For example, when the number of objects belonging to the object group G1 is less than a predetermined reference value, the robot device 100 may select an object belonging to the object group G0 as an operation target, request support from another robot, or perform cooperative work with a plurality of manipulators. Likewise, when none of the objects belonging to the object group G1 can be operated because of the condition on the change of adjacent objects, the robot device 100 may select an object belonging to the object group G0 as an operation target, request support from another robot, or perform cooperative work with a plurality of manipulators.

Further, as described above, the robot device 100 may select the operation target object autonomously, or may select the operation target object in accordance with an instruction from a person such as the administrator of the robot device 100. For example, when there is a person who can judge, remotely or on the spot, whether an operation is possible, the robot device 100 may acquire information indicating whether the object can be operated from the outside. In addition, the robot device 100 may store, in the storage unit 12 in advance as knowledge, operability information indicating whether each object can be operated, and determine operability based on the operability information stored in the storage unit 12.
[1-1-3. Manipulation of an object group]

Next, an outline of a method of manipulating a group of objects that physically interact with each other will be described. The details of the flow for manipulating a group of objects in physical contact with each other will be described with reference to FIG. 7. The robot device 100 performs motion analysis from the image data and manipulates the group of objects that physically interact with each other.
For example, suppose there is an object B that is in contact with an object A while having its own weight supported by the object A. If the object A is manipulated, the position and posture of the object B, which had kept its posture while being supported by the object A, will collapse, and the object B may fall over, drop, be damaged, or affect the surrounding environment. To suppress the occurrence of such events, the robot device 100 predicts the motion of the object B in advance. First, the robot device 100 analyzes in advance how the other objects in contact with the operation target object A (such as the object B) will behave when the object A is removed, and predicts the movement and posture of the adjacent object (object B). At this time, the robot device 100 estimates the center of gravity of each object from its shape data (W [width], D [depth], H [height]), detects the direction of gravity, and predicts the posture and position of the object B with the built-in physical model simulator. When the predicted movement of the adjacent object (object B) converges within the threshold value, the object A is determined to be operable.

Note that if object operations are selected only according to whether the predicted movement of adjacent objects is within the threshold value, there may be cases in which no operation is possible. In such a case, the robot device 100 may operate the target object while supporting an adjacent object with large predicted movement with another manipulator (operation unit 16) and fixing it so that it does not move. The robot device 100 may not only fix objects in this way but also divide the roles of object manipulation among the operation units 16 by having two or more manipulators (operation units 16) perform cooperative work such as supporting each other, temporarily grasping, and handing over. Further, the robot device 100 may divide the roles of object manipulation among robot devices by cooperating with other robot devices in such cooperative work as supporting each other, temporarily grasping, and handing over.

Further, when the robot device 100 is provided with a camera (image sensor) in the operation unit 16 (the end effector portion of the manipulator), the posture and position of an object can be predicted with reference to the image from the end effector portion. Therefore, the robot device 100 can avoid occlusion and can predict and judge postures and positions more accurately.
[1-1-4. Outline and effects of the robot device]

As described above, the robot device 100 is a movable robot including the operation units 16 (arms and hands), the moving unit 15 (a carriage or the like), and a recognition unit (the image sensor 141 as a visual sensor, the force-tactile sensor 142, and the like), and manipulates unknown objects according to the flow described above.
Specifically, for a group of objects in contact with each other, the robot device 100 uses image information to determine whether each object can be operated by the manipulator, and distinguishes between manipulable objects and inoperable objects. The robot device 100 also extracts an operation target object from the group of objects in contact with each other, removes the operation target object from the image, and predicts the posture and position, after removal of the operation target object, of each object in contact with it. Then, when the change in the posture or position of an object in contact with the operation target object is equal to or greater than the threshold value, the robot device 100 executes an operation on the object in contact with the operation target object or changes the operation target. On the other hand, when the change in the posture or position of the objects in contact with the operation target object is less than the threshold value, the robot device 100 regards the operation target object as operable.

When a plurality of manipulators (operation units 16) are provided, the robot device 100 performs grasp control with an operation unit 16 so that the posture of an object in contact with the operation target object does not change. When the robot device 100 includes a plurality of operation units 16, one operation unit 16 executes the operation on the operation target object and another operation unit 16 executes the operation on the object in contact with the operation target object.

Further, the robot device 100 may select the operation target object after bringing the robot hand (operation unit 16) into contact with the surface of the target object with a force within the threshold value and confirming whether any part of the object moves. The details of this point will be described later.

As described above, the robot device 100 can stably manipulate a target object from an object group, such as the stacked objects OB1 to OB3, without changing the positions and postures of the surrounding objects. The robot device 100 can stably manipulate a specific object from a group of stacked unknown objects without changing the positions and postures of the surrounding objects. For example, the robot device 100 can stably manipulate a specific object from among objects whose physical characteristics, such as mass and friction coefficient, are unknown, without changing the positions and postures of the surrounding objects.

In this way, the robot device 100 can move through a space such as a room, can tidy up the room autonomously, and can smoothly execute operations even when information on the space or the objects is not known in advance.
Conventionally, an operation target object is recognized or identified with, for example, a contactless tag, and detailed information such as the shape, type, position, and posture of the object is obtained by referring to a database prepared in advance; processing on the object is then executed. In this case, the robot operates and moves the object using information such as the parameters of the target object stored in the database.

In such a case, however, it is difficult to operate on an object whose physical parameters are unknown, and it is not always possible to make every object, including unknown objects, an operation target. Since an autonomous robot is required to operate autonomously in all situations and environments, it is desirable that the robot can operate appropriately even when it is placed in an unknown environment.

Therefore, the robot device 100 recognizes objects from images and performs operations on an object while considering the influence on adjacent objects. As a result, the robot device 100 can make any object, including unknown objects, an operation target, and can operate while also taking the influence on other objects into account.

As described above, the robot device 100 can move autonomously and manipulate objects with unknown physical parameters in an unstructured, unknown environment. As a result, the robot device 100 does not require a database or the like and can operate in various environments, and the labor and cost of creating such a database become unnecessary.

Further, since the robot device 100 predicts the postures and positions of the surrounding objects before an operation, the risk of objects falling or being damaged as a result of the operation can be reduced. In addition, since the number of objects and environments that the robot device 100 can handle without detailed human instructions increases, the autonomy of the robot device 100 is enhanced and productivity can be improved.
[1-2. Configuration of the robot device according to the first embodiment]

Next, the configuration of the robot device 100, which is an example of an information processing device that executes the information processing according to the first embodiment, will be described. FIG. 3 is a diagram showing a configuration example of the robot device 100 according to the first embodiment.
As shown in FIG. 3, the robot device 100 includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, a first operation unit 16a, and a second operation unit 16b.

The communication unit 11 is realized by, for example, a NIC (Network Interface Card), a communication circuit, or the like. The communication unit 11 is connected to a network N (the Internet or the like) by wire or wirelessly, and transmits and receives information to and from other devices via the network N.

The storage unit 12 is realized by, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. The storage unit 12 has a threshold information storage unit 121 and a density information storage unit 122. The storage unit 12 is not limited to the threshold information storage unit 121 and the density information storage unit 122 and stores various kinds of information. The storage unit 12 may store various kinds of information related to the operation units 16, for example, information indicating the number of operation units 16 and their installation positions. The storage unit 12 may also store various kinds of information used for identifying (estimating) an object.
The threshold information storage unit 121 according to the first embodiment stores various kinds of information related to the threshold values used for various determinations. FIG. 4 is a diagram showing an example of the threshold information storage unit according to the first embodiment. The threshold information storage unit 121 shown in FIG. 4 includes items such as "threshold ID", "target", "use", and "threshold".

"Threshold ID" indicates identification information for identifying a threshold value. "Target" indicates what the threshold value is applied to. "Use" indicates the purpose of the threshold value. "Threshold" indicates the specific value of the threshold value identified by the corresponding threshold ID.

In the example of FIG. 4, the threshold value identified by the threshold ID "TH1" (threshold value TH1) has "weight" as its target, indicating that it is a threshold value used for determinations on the weight of an object. The use of the threshold value TH1 is portable object determination, indicating that it is used to determine whether an object can be operated by the robot device 100. The value of the threshold value TH1 is "Wload". Although it is shown in FIG. 4 by an abstract symbol such as "Wload", the value of the threshold value TH1 is a specific numerical value.

The threshold value identified by the threshold ID "TH2" (threshold value TH2) has "position and posture" as its target, indicating that it is a threshold value used for determinations on the position and posture of an object. The use of the threshold value TH2 is adjacent object change, indicating that it is used to determine a change in the arrangement state of an adjacent object caused by an operation on the target object. The value of the threshold value TH2 is "PVL". Although it is shown in FIG. 4 by an abstract symbol such as "PVL", the value of the threshold value TH2 is a specific numerical value.

The threshold information storage unit 121 is not limited to the above and may store various kinds of information depending on the purpose. For example, the threshold information storage unit 121 may store a threshold value related to posture (posture threshold value) and a threshold value related to position (position threshold value). In this case, the robot device 100 compares the predicted value of the amount of posture change of an adjacent object caused by an operation on the target object (posture change predicted value) with the posture threshold value, and executes a process of manipulating the adjacent object when the posture change predicted value is equal to or greater than the posture threshold value. The robot device 100 also compares the predicted value of the amount of position change of the adjacent object caused by the operation on the target object (position change predicted value) with the position threshold value, and executes a process of manipulating the adjacent object when the position change predicted value is equal to or greater than the position threshold value.
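To make the relationship between FIG. 4 and the determination described above concrete, the threshold information storage unit 121 can be pictured as a simple lookup table, as sketched below. The field names follow the items "threshold ID", "target", "use", and "threshold" described above; the numeric values are placeholders, not the disclosed values of "Wload" or "PVL".

    # Sketch of the threshold information storage unit 121 as a lookup table.
    THRESHOLDS = {
        "TH1": {"target": "weight",           "use": "portable object determination", "value": 3.0},   # "Wload"
        "TH2": {"target": "position/posture", "use": "adjacent object change",        "value": 0.05},  # "PVL"
    }

    def threshold(threshold_id):
        return THRESHOLDS[threshold_id]["value"]

    # Example: compare a combined change amount of an adjacent object against TH2.
    combined_change = 0.12
    print(combined_change >= threshold("TH2"))  # True -> manipulate the adjacent object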
The density information storage unit 122 according to the first embodiment stores various kinds of information related to the density used for estimating the weight of an object. FIG. 5 is a diagram showing an example of the density information storage unit according to the first embodiment. The density information storage unit 122 shown in FIG. 5 includes items such as "density ID", "density name", "use", and "density".

"Density ID" indicates identification information for identifying a density. "Target" indicates what the density applies to. "Value" indicates the specific value of the density identified by the corresponding density ID.

In the example of FIG. 5, the density identified by the density ID "DS1" (density DS1) has "environmental average" as its target. The density DS1 is the average density of objects in the environment and is the density applied to each object in the environment. The value of the density DS1 is "VL1". Although it is shown by an abstract symbol such as "VL1", the value of the density DS1 is a specific numerical value such as "3 (g/cm3)" or "4 (g/cm3)".

The density information storage unit 122 is not limited to the above and may store various kinds of information depending on the purpose. For example, the density information storage unit 122 may store information indicating the density of each object, that is, store the average density of each object in association with that object.
Returning to FIG. 3, the description is continued. The control unit 13 is realized by, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like executing a program stored inside the robot device 100 (for example, the information processing program according to the present disclosure) using a RAM (Random Access Memory) or the like as a work area. The control unit 13 may also be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).

As shown in FIG. 3, the control unit 13 includes an acquisition unit 131, an analysis unit 132, a classification unit 133, a selection unit 134, a prediction unit 135, a determination unit 136, a planning unit 137, and an execution unit 138, and realizes or executes the functions and actions of the information processing described below. The internal configuration of the control unit 13 is not limited to the configuration shown in FIG. 3 and may be another configuration as long as it performs the information processing described later.
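As a structural overview only, the units listed above can be pictured as stages of one processing pipeline. The class below names its members after the units of the control unit 13 shown in FIG. 3; the method names and the order of the calls are assumptions made for the example and are not part of the disclosed configuration.

    # Illustrative pipeline sketch for the control unit 13.
    class ControlUnit:
        def __init__(self, acquisition, analysis, classification, selection,
                     prediction, determination, planning, execution):
            self.acquisition = acquisition        # acquisition unit 131
            self.analysis = analysis              # analysis unit 132
            self.classification = classification  # classification unit 133
            self.selection = selection            # selection unit 134
            self.prediction = prediction          # prediction unit 135
            self.determination = determination    # determination unit 136
            self.planning = planning              # planning unit 137
            self.execution = execution            # execution unit 138

        def step(self):
            image = self.acquisition.get_image()
            objects = self.analysis.segment(image)
            operable, _ = self.classification.split_by_weight(objects)
            target = self.selection.pick(operable)
            changes = self.prediction.after_removal(objects, target)
            if self.determination.exceeds_threshold(changes):
                plan = self.planning.plan_with_adjacent_support(target, changes)
            else:
                plan = self.planning.plan_single_object(target)
            self.execution.run(plan)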
The acquisition unit 131 acquires various kinds of information. The acquisition unit 131 acquires various kinds of information from an external information processing device and from the storage unit 12, including the threshold information storage unit 121 and the density information storage unit 122. The acquisition unit 131 also acquires information from the analysis unit 132, the classification unit 133, the selection unit 134, the prediction unit 135, the determination unit 136, and the planning unit 137, and stores the acquired information in the storage unit 12.

The acquisition unit 131 acquires the sensor information detected by the sensor unit 14. The acquisition unit 131 acquires the sensor information (image information) detected by the image sensor 141, that is, the image information (images) captured by the image sensor 141. The acquisition unit 131 also acquires the sensor information (contact information) detected by the force sensor 142.

The acquisition unit 131 acquires image information in which a target object, which is a candidate for an operation target, and an adjacent object, which is an object adjacent to the target object, are imaged. The acquisition unit 131 acquires image information in which the target object and an adjacent object in contact with the target object are imaged, image information in which the stacked target object and adjacent object are imaged, and image information in which the target object and an adjacent object located within a range affected by an operation on the target object are imaged.

In the example of FIG. 1, the acquisition unit 131 acquires an image (image IM1) showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked.
解析部132は、各種情報を解析する。解析部132は、物理的な解析を行う物理解析部として機能する。解析部132は、物理的な性質に関する情報を用いて、各種情報を解析する。解析部132は、画像情報を解析する。解析部132は、外部の情報処理装置からの情報や記憶部120に記憶された情報に基づいて、画像情報から各種情報を解析する。解析部132は、画像情報から各種情報を特定する。解析部132は、画像情報から各種情報を抽出する。解析部132は、解析結果に基づく認識を行う。解析部132は、解析結果に基づいて、種々の情報を認識する。
The analysis unit 132 analyzes various information. The analysis unit 132 functions as a physical analysis unit that performs physical analysis. The analysis unit 132 analyzes various kinds of information by using the information regarding the physical properties. The analysis unit 132 analyzes the image information. The analysis unit 132 analyzes various information from the image information based on the information from the external information processing device and the information stored in the storage unit 120. The analysis unit 132 identifies various types of information from the image information. The analysis unit 132 extracts various information from the image information. The analysis unit 132 performs recognition based on the analysis result. The analysis unit 132 recognizes various information based on the analysis result.
解析部132は、画像に関する解析処理を行う。解析部132は、画像処理に関する各種処理を行う。解析部132は、取得部131により取得された画像情報(画像)に対して処理を行う。解析部132は、画像センサ141により撮像された画像情報(画像)に対して処理を行う。解析部132は、画像処理に関する技術を適宜用いて、画像に対する処理を行う。
The analysis unit 132 performs analysis processing related to the image. The analysis unit 132 performs various processes related to image processing. The analysis unit 132 processes the image information (image) acquired by the acquisition unit 131. The analysis unit 132 processes the image information (image) captured by the image sensor 141. The analysis unit 132 processes the image by appropriately using a technique related to image processing.
解析部132は、画像中の対象物体を除去する処理を実行する。解析部132は、画像処理に関する技術を適宜用いて、画像中の対象物体を除去する処理を実行する。図1の例では、解析部132は、物体OB1~OB3が隣接した状況ST1を撮像した画像中の対象物体OB2を除去する処理を実行する。解析部132は、物体OB2が物体OB1及び物体OB3と接触した状況ST1を撮像した画像から、対象物体OB2を除去する処理を実行する。
The analysis unit 132 executes a process of removing the target object in the image. The analysis unit 132 executes a process of removing the target object in the image by appropriately using a technique related to image processing. In the example of FIG. 1, the analysis unit 132 executes a process of removing the target object OB2 in the image obtained by capturing the situation ST1 in which the objects OB1 to OB3 are adjacent to each other. The analysis unit 132 executes a process of removing the target object OB2 from the image of the situation ST1 in which the object OB2 is in contact with the object OB1 and the object OB3.
図1の例では解析部132は、画像解析等の技術により、画像IM1を解析することにより、画像IM1に物体OB1、OB2、OB3が含まれることを特定する。解析部132は、対象物体OB2の周囲にある物体(隣接物体)と、対象物体OB2との物理接触状態を認識する。ロボット装置100は、画像IM1を解析することにより、対象物体OB2が、物体OB1及び物体OB3と接触していると認識する。
In the example of FIG. 1, the analysis unit 132 identifies that the image IM1 includes the objects OB1, OB2, and OB3 by analyzing the image IM1 by a technique such as image analysis. The analysis unit 132 recognizes the physical contact state between the object (adjacent object) around the target object OB2 and the target object OB2. By analyzing the image IM1, the robot device 100 recognizes that the target object OB2 is in contact with the object OB1 and the object OB3.
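As a concrete illustration of the contact-state recognition described above, the following is a minimal sketch of how adjacency could be detected from per-object segmentation masks. The mask-dilation overlap heuristic and all function names are assumptions made for illustration; the document does not prescribe a specific algorithm.

```python
import numpy as np

def dilate(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Grow a boolean mask by one pixel per iteration (4-neighborhood).
    Edge wrap-around from np.roll is ignored here for brevity."""
    out = mask.copy()
    for _ in range(iterations):
        out = (out
               | np.roll(out, 1, axis=0) | np.roll(out, -1, axis=0)
               | np.roll(out, 1, axis=1) | np.roll(out, -1, axis=1))
    return out

def in_contact(mask_a: np.ndarray, mask_b: np.ndarray, margin_px: int = 2) -> bool:
    """Treat two segmented objects as physically adjacent if their masks
    touch within a small pixel margin."""
    return bool(np.any(dilate(mask_a, margin_px) & mask_b))

def adjacent_objects(target_id, masks):
    """Return the ids of objects whose masks touch the target object's mask."""
    target_mask = masks[target_id]
    return [oid for oid, m in masks.items()
            if oid != target_id and in_contact(target_mask, m)]
```

Under this sketch, the check would report OB1 and OB3 as adjacent to the target OB2 in the situation of FIG. 1, provided each object has been segmented into its own mask.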
The classification unit 133 performs various classifications. The classification unit 133 classifies various types of information. The classification unit 133 performs the classification process based on the information acquired by the acquisition unit 131. The classification unit 133 classifies the information acquired by the acquisition unit 131. The classification unit 133 performs the classification process based on the information stored in the storage unit 12. The classification unit 133 makes various estimations. The classification unit 133 estimates various types of information. The classification unit 133 estimates the weight of the object.
The classification unit 133 performs various classifications based on the information acquired by the acquisition unit 131. The classification unit 133 performs various classifications using various sensor information detected by the sensor unit 14. The classification unit 133 performs various classifications using the sensor information detected by the image sensor 141. The classification unit 133 performs various classifications using the sensor information detected by the force sensor 142.
The classification unit 133 estimates the weight of the object included in the image information. The classification unit 133 estimates the weight of the object included in the image information based on the image of the object included in the image information and the density information. The classification unit 133 estimates the weight of the object included in the image information based on the size of the object included in the image information and the density information. The classification unit 133 estimates the size of the object included in the image information, and estimates the weight of the object included in the image information using the estimated size and the density information.
The classification unit 133 classifies the object group included in the image information into an operable object and an inoperable object. The classification unit 133 classifies the object into either a manipulable object or an inoperable object by comparing the estimated weight of the object with the threshold value.
The classification unit 133 compares the weight of each object included in the unknown object group SG1 with "Wload", and classifies an object whose weight exceeds "Wload" into the object group G0 as an inoperable object. The classification unit 133 compares the weight of each object included in the unknown object group SG1 with "Wload", and classifies an object whose weight is equal to or less than "Wload" into the object group G1 as an operable object. The classification unit 133 classifies each book, such as the objects OB11 to OB17 included in the book groups SG11 to SG13, into the operable object group G1. The classification unit 133 classifies the object OB10, which is a bookshelf, into the inoperable object group G0.
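A minimal sketch of this weight-based classification is shown below, assuming that a category and an approximate volume are already available for each detected object and that the density information storage unit holds a per-category density. The numeric values, class names, and field names are illustrative assumptions, not part of the document.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: str
    category: str         # e.g. "book", "bookshelf" (from image recognition)
    est_volume_m3: float  # size estimated from the image / depth data

# Per-category density, as the density information storage unit might hold it
# (values are illustrative).
DENSITY_KG_PER_M3 = {"book": 700.0, "bookshelf": 600.0}

W_LOAD_KG = 3.0  # payload threshold "Wload"; the actual value is device-specific

def estimate_weight(obj: DetectedObject) -> float:
    """Weight is approximated as estimated volume times category density."""
    return obj.est_volume_m3 * DENSITY_KG_PER_M3.get(obj.category, 1000.0)

def classify(objects):
    """Split the object group into operable (G1) and inoperable (G0) objects."""
    operable, inoperable = [], []
    for obj in objects:
        (operable if estimate_weight(obj) <= W_LOAD_KG else inoperable).append(obj)
    return operable, inoperable
```

Under this sketch, the books OB11 to OB17 would fall into the operable group G1 and the bookshelf OB10 into the inoperable group G0, mirroring the example above.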
The selection unit 134 selects various information. The selection unit 134 extracts various information. The selection unit 134 specifies various types of information. The selection unit 134 selects various information based on the information acquired from the external information processing device. The selection unit 134 selects various information based on the information stored in the storage unit 12.
The selection unit 134 makes various selections based on the information acquired by the acquisition unit 131. The selection unit 134 makes various selections based on the information classified by the classification unit 133. The selection unit 134 makes various selections using various sensor information detected by the sensor unit 14. The selection unit 134 makes various selections using the sensor information detected by the image sensor 141. The selection unit 134 makes various selections using the sensor information detected by the force sensor 142.
The selection unit 134 selects an operable object from the object group as the target object based on the classification result by the classification unit 133.
In the example of FIG. 1, the selection unit 134 randomly selects a target object from the object group of the objects OB1, OB2, and OB3. The selection unit 134 selects the object OB2 as the target object from the object group of the objects OB1, OB2, and OB3.
The prediction unit 135 predicts various types of information. The prediction unit 135 predicts various types of information based on the information acquired from the external information processing device. The prediction unit 135 predicts various types of information based on the information stored in the storage unit 12. The prediction unit 135 predicts various information based on the result of the analysis process by the analysis unit 132.
The prediction unit 135 makes various predictions based on the information acquired by the acquisition unit 131. The prediction unit 135 makes various predictions using various sensor information detected by the sensor unit 14. The prediction unit 135 makes various predictions using the sensor information detected by the image sensor 141. The prediction unit 135 makes various predictions using the sensor information detected by the force sensor 142.
The prediction unit 135 predicts a change in the arrangement state of an adjacent object caused by an operation on the target object based on the image information acquired by the acquisition unit 131. For example, the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the removal of the target object based on the image information obtained by capturing the image of the target object and the adjacent object. For example, the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by the change in the position of the target object based on the image information obtained by capturing the image of the target object and the adjacent object. For example, the prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by a change in the posture of the target object based on the image information obtained by capturing the image of the target object and the adjacent object. The prediction unit 135 predicts a change in the posture of an adjacent object caused by an operation on the target object. The prediction unit 135 predicts a change in the position of an adjacent object caused by an operation on the target object.
The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by an operation on the target object based on image information in which the target object and an adjacent object in contact with the target object are captured. The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by an operation on the target object based on image information in which the stacked target object and adjacent object are captured. The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by an operation on the target object based on image information in which the target object and an adjacent object located within the range affected by the operation on the target object are captured. The prediction unit 135 predicts a change in the arrangement state of the adjacent object caused by an operation on the target object selected by the selection unit 134.
In the example of FIG. 1, the prediction unit 135 predicts the amount of change in posture (posture change predicted value) and the amount of change in position (position change predicted value) of the adjacent objects when the target object is removed. The prediction unit 135 predicts a change in the arrangement state of the objects OB1 and OB3 when the target object OB2 is removed. The prediction unit 135 predicts changes in the posture and position of the object OB1 and changes in the posture and position of the object OB3 when the target object OB2 is removed.
The prediction unit 135 predicts that the posture change predicted value and the position change predicted value of the object OB1 when the target object OB2 is removed are 0. The prediction unit 135 predicts the amount of change in the position and posture of the object OB3 due to the removal of the target object OB2. The prediction unit 135 predicts the posture change predicted value and the position change predicted value of the object OB1 when the target object OB2 is removed.
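The prediction itself could be carried out, for example, by virtually removing the target object in a rigid-body simulation and measuring how much each adjacent object settles. The sketch below assumes a generic simulator interface (remove, step, pose); the interface, the return format, and the step count are illustrative and not specified by the document.

```python
from copy import deepcopy

def predict_neighbor_motion(sim, scene, target_id, neighbor_ids, settle_steps=240):
    """Predict how far each adjacent object would move if the target were removed.

    `sim` is an assumed simulator interface exposing:
        sim.pose(scene, obj_id)  -> (position_xyz, orientation_rpy)
        sim.remove(scene, obj_id)
        sim.step(scene)
    Returns {neighbor_id: (position_delta, orientation_delta)}.
    """
    world = deepcopy(scene)                      # simulate on a copy of the scene
    before = {oid: sim.pose(world, oid) for oid in neighbor_ids}
    sim.remove(world, target_id)                 # virtually remove the target object
    for _ in range(settle_steps):                # let the remaining objects settle
        sim.step(world)
    deltas = {}
    for oid in neighbor_ids:
        (p0, r0), (p1, r1) = before[oid], sim.pose(world, oid)
        position_delta = sum((a - b) ** 2 for a, b in zip(p1, p0)) ** 0.5
        orientation_delta = max(abs(a - b) for a, b in zip(r1, r0))
        deltas[oid] = (position_delta, orientation_delta)
    return deltas
```

For the FIG. 1 example, such a simulation would be expected to return a near-zero delta for the object OB1 and a larger delta for the object OB3 when the target object OB2 is removed.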
The determination unit 136 determines various types of information. The determination unit 136 decides various types of information. The determination unit 136 specifies various types of information. The determination unit 136 determines various types of information based on the information acquired from the external information processing device. The determination unit 136 determines various types of information based on the information stored in the storage unit 12.
The determination unit 136 makes various determinations based on the information acquired by the acquisition unit 131. The determination unit 136 makes various determinations using various sensor information detected by the sensor unit 14. The determination unit 136 makes various determinations using the sensor information detected by the image sensor 141. The determination unit 136 makes various determinations using the sensor information detected by the force sensor 142. The determination unit 136 determines various information based on the result of the analysis process by the analysis unit 132. The determination unit 136 determines various information based on the result of the prediction process by the prediction unit 135.
The determination unit 136 determines whether or not the object has a portion that moves independently of the object, based on the result of contact with the object by the operation unit 16.
In the example of FIG. 1, the determination unit 136 determines whether the change in the posture or position of the adjacent object is equal to or greater than the threshold value. The determination unit 136 determines whether the change in the posture or position of the adjacent object is equal to or greater than the threshold value by using the threshold value regarding the posture (posture threshold value) and the threshold value regarding the position (position threshold value) stored in the storage unit 12.
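The threshold comparison itself reduces to a simple predicate. The following sketch uses illustrative threshold values, since the document only states that the posture and position thresholds are read from the storage unit.

```python
POSTURE_THRESHOLD_RAD = 0.05  # illustrative; read from the threshold information storage unit
POSITION_THRESHOLD_M = 0.01   # illustrative

def motion_exceeds_threshold(position_delta: float, posture_delta: float) -> bool:
    """True if the predicted motion of an adjacent object exceeds either threshold."""
    return (position_delta >= POSITION_THRESHOLD_M
            or posture_delta >= POSTURE_THRESHOLD_RAD)
```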
In the example of FIG. 2, the determination unit 136 estimates the weight of each book such as the objects OB11 to OB17 included in the book groups SG11 to SG13, and determines that the estimated weight is equal to or less than the threshold value "Wload". The determination unit 136 estimates the weight of the object OB10, which is a bookshelf, and determines that the estimated weight is larger than the threshold value "Wload".
The planning unit 137 makes various plans. The planning unit 137 generates various information regarding the action plan. The planning unit 137 makes various plans based on the information acquired by the acquisition unit 131. The planning unit 137 makes various plans based on the prediction result by the prediction unit 135. The planning unit 137 makes various plans based on the determination result by the determination unit 136. The planning unit 137 performs action planning by using various techniques related to action planning.
The execution unit 138 executes various processes. The execution unit 138 executes various processes based on information from an external information processing device. The execution unit 138 executes various processes based on the information stored in the storage unit 12. The execution unit 138 executes various processes based on the information stored in the threshold information storage unit 121 and the density information storage unit 122. The execution unit 138 executes various processes based on the information acquired by the acquisition unit 131. The execution unit 138 functions as an operation control unit that controls the operation of the operation unit 16.
The execution unit 138 executes various processes based on the prediction result by the prediction unit 135. The execution unit 138 executes various processes based on the determination result by the determination unit 136. The execution unit 138 executes various processes based on the action plan by the planning unit 137.
The execution unit 138 controls the moving unit 15 based on the information of the action plan generated by the planning unit 137 to execute the action corresponding to the action plan. The execution unit 138 executes the movement process of the robot device 100 along the action plan by controlling the moving unit 15 based on the information of the action plan.
The execution unit 138 controls the operation unit 16 based on the information of the action plan generated by the planning unit 137 to execute the action corresponding to the action plan. The execution unit 138 executes the object operation process by the robot device 100 along the action plan by controlling the operation unit 16 based on the information of the action plan.
In the example of FIG. 1, since the posture change predicted value of the object OB3 caused by the removal of the target object OB2 is equal to or more than the posture threshold value, the execution unit 138 executes a process of operating the object OB3. The execution unit 138 executes the operation of the target object OB2 by the first operation unit 16a, and executes the operation of the object OB3 by the second operation unit 16b. The execution unit 138 executes the operation of the target object OB2 by the first operation unit 16a, and executes the operation of supporting the object OB3 which is an adjacent object by the second operation unit 16b.
The sensor unit 14 detects predetermined information. The sensor unit 14 includes an image sensor 141, which serves as an imaging means for capturing images, and a force sensor 142.
The image sensor 141 detects image information and functions as vision for the robot device 100. For example, the image sensor 141 is provided on the head of the robot device 100. The image sensor 141 captures image information. In the example of FIG. 1, the image sensor 141 detects (images) an image including an unknown object operation group (unknown object group SG1).
The force sensor 142 detects force and functions as the tactile sense of the robot device 100. For example, the force sensor 142 is provided at the tip (holding unit) of the operation unit 16. The force sensor 142 detects the contact of the operation unit 16 with an object.
Further, the sensor unit 14 is not limited to the image sensor 141 and the force sensor 142, and may have various sensors. The sensor unit 14 may have a proximity sensor. The sensor unit 14 may have a distance measuring sensor such as LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), a ToF (Time of Flight) sensor, or a stereo camera. The sensor unit 14 may have a sensor (position sensor) that detects the position information of the robot device 100, such as a GPS (Global Positioning System) sensor. The sensor unit 14 is not limited to the above, and may have various sensors. The sensor unit 14 may have various sensors such as an acceleration sensor and a gyro sensor. Further, the sensors that detect the above-mentioned various types of information in the sensor unit 14 may be a common sensor, or may be realized by different sensors.
The moving unit 15 has a function of driving the physical configuration of the robot device 100. The moving unit 15 has a function for moving the position of the robot device 100. The moving unit 15 is, for example, an actuator. The moving unit 15 may have any configuration as long as the robot device 100 can realize a desired operation. The moving unit 15 may have any configuration as long as the position of the robot device 100 can be moved. When the robot device 100 has a moving mechanism such as caterpillars and tires, the moving unit 15 drives the caterpillars and tires. For example, the moving unit 15 moves the robot device 100 and changes the position of the robot device 100 by driving the moving mechanism of the robot device 100 in response to an instruction from the execution unit 138.
The robot device 100 according to the first embodiment has two operation units 16 of a first operation unit 16a and a second operation unit 16b. The operation unit 16 is a unit corresponding to a human “hand (arm)” and realizes a function for the robot device 100 to act on another object. The robot device 100 has a first operation unit 16a and a second operation unit 16b as two hands.
The operation unit 16 is driven according to the processing by the execution unit 138. The operation unit 16 is a manipulator that operates an object. For example, the operating unit 16 may be a manipulator having an arm and an end effector. The operation unit 16 operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition. At least one of the plurality of operation units 16 operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
The operation unit 16 has a holding unit that holds an object, such as an end effector or a robot hand, and a driving unit that drives the holding unit, such as an actuator. The holding unit of the operation unit 16 may be of any type as long as the desired function can be realized, such as a gripper, a multi-finger hand, a jamming hand, a suction hand, or a soft hand. The holding unit of the operation unit 16 may be realized by any configuration as long as it can hold an object, and may be a gripping unit that grips the object or a suction unit that sucks and holds the object.
Further, the holding unit of the operation unit 16 may be provided with a force sensor, an image sensor, a proximity sensor, or the like so that information on the position and force of the target object can be acquired. For example, in the robot device 100, a force sensor 142 is provided in the holding portion of the operation unit 16, and information on the force due to the contact of the operation unit 16 with the target object can be acquired.
The first operation unit 16a and the second operation unit 16b are provided on both side portions of the body portion (base portion) of the robot device 100, respectively. The first operation unit 16a extends from the left side portion of the robot device 100 and functions as the left hand of the robot device 100. Further, the second operation unit 16b extends from the right side portion of the robot device 100 and functions as the right hand of the robot device 100. The operation units 16 may be provided at various positions depending on the number of the operation units 16 and the shape of the robot device 100.
[1-3. Information processing procedure according to the first embodiment]
Next, the procedure of information processing according to the first embodiment will be described with reference to FIGS. 6 and 7. FIGS. 6 and 7 are flowcharts showing the information processing procedure according to the first embodiment. FIG. 6 is a flowchart showing an outline of the information processing procedure by the robot device 100. FIG. 7 is a flowchart showing the details of the information processing procedure by the robot device 100.
[1-3-1. Flowchart showing the outline of the information processing procedure]
First, an outline of the flow of information processing according to the first embodiment will be described with reference to FIG. 6.
As shown in FIG. 6, the robot device 100 acquires image information obtained by capturing an image of the target object and an adjacent object adjacent to the target object (step S101). For example, the robot device 100 acquires image information obtained by capturing images of a plurality of objects from the image sensor 141.
The robot device 100 predicts a change in the arrangement state of an adjacent object caused by an operation on the target object based on the image information (step S102). For example, the robot device 100 predicts a change in the arrangement state of an adjacent object caused by the removal of the object selected as the target object among the plurality of objects in the image information.
Then, the robot device 100 executes a process of operating the adjacent object when the change in the arrangement state of the adjacent object satisfies a predetermined condition (step S103). For example, the robot device 100 executes a process of manipulating an adjacent object when the position or posture of the adjacent object changes by a predetermined threshold value or more due to the removal of the target object.
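The three steps of FIG. 6 can be read as the following skeleton, where each callable stands in for one of the processing units described above; all names are placeholders and not part of the document.

```python
def information_processing_outline(acquire_image, predict_changes,
                                   satisfies_condition, operate_adjacent):
    """Skeleton of steps S101 to S103; the four callables are assumed stubs."""
    image = acquire_image()                    # S101: image of target and adjacent objects
    changes = predict_changes(image)           # S102: {adjacent_object: predicted change}
    for neighbor, change in changes.items():   # S103: operate neighbors that would move too much
        if satisfies_condition(change):
            operate_adjacent(neighbor)
```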
[1-3-2. Flowchart showing the details of the information processing procedure]
Next, the details of the flow of information processing according to the first embodiment will be described with reference to FIG. 7. In the example of FIG. 7, it is assumed that the robot device 100 has already acquired image information obtained by capturing images of a plurality of objects.
As shown in FIG. 7, the robot device 100 randomly selects an operation target (step S201). The robot device 100 selects, as an operation target, an object whose weight is estimated to be less than a predetermined threshold value among a plurality of objects included in the image. In the example of FIG. 1, the robot device 100 selects the object OB2 from the objects OB1 to OB3 as the operation target object (target object). The robot device 100 may end the process when there is no selectable object.
Then, the robot device 100 recognizes the physical contact state with the object around the target object (step S202). The robot device 100 recognizes a physical contact state with an object around the target object by analyzing the image. In the example of FIG. 1, the robot device 100 recognizes that the target object OB2 is in contact with the object OB1 and the object OB3.
Then, the robot device 100 determines whether or not there is an object in physical contact with the surroundings (step S203). The robot device 100 determines whether or not there is an adjacent object in physical contact around the target object. In the example of FIG. 1, the robot device 100 determines whether or not there is an adjacent object in physical contact around the target object OB2.
Then, when the robot device 100 determines that there is no object in physical contact with the surroundings (step S203: No), the robot device 100 executes the operation of the target object (step S208). For example, the robot device 100 controls the first operation unit 16a and the second operation unit 16b, and executes an operation of changing the position and orientation of the target object.
On the other hand, when the robot device 100 determines that there is an object in physical contact in the vicinity (step S203: Yes), the robot device 100 predicts the posture of the peripheral object (adjacent object) in physical contact when the target object is removed (step S204). When there is an object in physical contact in the vicinity, the robot device 100 predicts the change in the posture and position of the peripheral object in physical contact that would result from removing the target object.
Then, the robot device 100 determines whether the movement of the surrounding object is equal to or higher than the threshold value (step S205). The robot device 100 determines whether the change in the posture or position of the adjacent object is equal to or greater than the threshold value. In the example of FIG. 1, the robot device 100 determines whether the change in the posture or the position of the objects OB1 and OB3, which are adjacent objects of the target object OB2, is equal to or more than the threshold value.
Then, when the robot device 100 determines that the movement of the surrounding object is not equal to or greater than the threshold value (step S205: No), the robot device 100 executes the operation of the target object (step S208).
On the other hand, when the robot device 100 determines that the movement of the surrounding object is equal to or higher than the threshold value (step S205: Yes), the robot device 100 determines whether or not there is another manipulator that can be operated (step S206). The robot device 100 determines whether or not there is an operable manipulator (operation unit 16) in addition to the manipulator (operation unit 16) that operates the target object.
The robot device 100 determines whether, in addition to the operation unit 16 that operates the target object, there are as many other operation units 16 as the number of peripheral objects whose motion is equal to or greater than the threshold value. For example, when there are two peripheral objects whose motion is equal to or greater than the threshold value, the robot device 100 determines whether there are two operable operation units 16 in addition to the operation unit 16 that operates the target object. For example, when there is one peripheral object whose motion is equal to or greater than the threshold value, the robot device 100 determines whether there is one operable operation unit 16 in addition to the operation unit 16 that operates the target object. For example, the robot device 100 determines whether there is another operation unit 16 (for example, the second operation unit 16b) in addition to the operation unit 16 (for example, the first operation unit 16a) that operates the target object.
When the robot device 100 determines that there is no other manipulator that can be operated (step S206: No), the robot device 100 returns to step S201 and repeats the process.
When the robot device 100 determines that there is another manipulator that can be operated (step S206: Yes), the robot device 100 supports the peripheral object with the other manipulator (step S207). When the robot device 100 has another operable operation unit 16 in addition to the operation unit 16 that operates the target object, the robot device 100 executes an operation on the adjacent object, such as supporting the peripheral object (adjacent object) with the other operation unit 16. The operation on the adjacent object is not limited to supporting the adjacent object; the robot device 100 may execute an operation that changes the position or posture of the adjacent object. The robot device 100 may move the target object with one operation unit 16 and also move the adjacent object with another operation unit 16.
Then, the robot device 100 executes the operation of the target object (step S208). For example, in the robot device 100, the first operation unit 16a may execute the operation of the target object, and the second operation unit 16b may execute the operation of the adjacent object. In the example of FIG. 1, the robot device 100 may execute the operation of the target object OB2 by the first operation unit 16a and the operation of supporting the adjacent object OB3 by the second operation unit 16b.
In this way, the robot device 100 can perform operations on the target object and surrounding objects according to the number of operation units 16.
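Putting the detailed flowchart together, one pass of steps S201 to S208 could look like the sketch below. The `robot` object, its methods, and the object representation (a dict with a "contacts" list) are assumptions made for illustration; they do not correspond to a specific API in the document.

```python
import random

def handle_object_group(robot, objects, predict_motion, exceeds_threshold):
    """One pass through the decision flow of FIG. 7 (steps S201 to S208).

    Assumed interfaces (illustrative, not from the document):
      robot.free_manipulators() -> list of idle operation units (at least one)
      robot.support(obj, arm), robot.operate(obj, arm)
      objects: list of dicts, each with an optional "contacts" list
      predict_motion(target, neighbor) -> (position_delta, posture_delta)
      exceeds_threshold(position_delta, posture_delta) -> bool
    """
    candidates = list(objects)
    while candidates:
        target = random.choice(candidates)                  # S201: pick a candidate
        neighbors = target.get("contacts", [])               # S202: physical contact state
        arms = robot.free_manipulators()
        if not neighbors:                                     # S203: No
            robot.operate(target, arms[0])                    # S208
            return True
        moving = [n for n in neighbors                        # S204/S205: predicted motion
                  if exceeds_threshold(*predict_motion(target, n))]
        if not moving:                                        # S205: No
            robot.operate(target, arms[0])                    # S208
            return True
        if len(arms) >= len(moving) + 1:                      # S206: Yes
            for neighbor, arm in zip(moving, arms[1:]):       # S207: support neighbors
                robot.support(neighbor, arm)
            robot.operate(target, arms[0])                    # S208
            return True
        candidates.remove(target)                             # S206: No -> back to S201
    return False                                              # no selectable object remains
```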
[1-4. Conceptual diagram of the configuration of the information processing device]
Here, each functional configuration of the robot device 100 is shown conceptually with reference to FIG. 8. FIG. 8 is a diagram showing an example of a conceptual diagram of the configuration of the robot. The configuration group FCB1 shown in FIG. 8 includes a sensor processing unit, an object/environment determination unit, a task planning unit, a motion planning unit, a control unit, and the like.
The sensor processing unit corresponds to, for example, the sensor unit 14 and the acquisition unit 131 in FIG. 4, and detects various types of information such as vision, force, tactile sense (vibration), proximity, and temperature. The object/environment determination unit corresponds to, for example, the analysis unit 132 to the determination unit 136 in FIG. 4, and executes various processes such as estimation and determination. The object/environment determination unit executes various processes such as determination of an adjacent object, weight estimation of the adjacent object, estimation of the center of gravity of the adjacent object, motion analysis of the adjacent object, motion prediction of the adjacent object, and stability determination of the adjacent object.
The motion planning unit, for example, corresponds to the planning unit 137 in FIG. 4, and performs gripping planning (end effector), movement route planning (moving body), and arm trajectory planning (manipulator). The motion planning unit performs gripping planning by the holding unit (end effector) of the operating unit 16, movement route planning by the moving unit 15, and arm trajectory planning by the operating unit 16 (manipulator). The control unit corresponds to, for example, the execution unit 138 in FIG. 4 and performs actuator control and sensor control.
[1-5. Processing example of Nth-order object operation]
The robot device 100 is assumed not only to operate a single object but also to operate a plurality of stacked objects or a plurality of objects in which an accessory accompanies a main object. The processing by the robot device 100 in such a case will be described with reference to FIG. 9. FIG. 9 is a diagram showing an example of the processing of an Nth-order object operation. Specifically, FIG. 9 is a diagram showing an example of the processing of a secondary object operation. The description of the same points as in FIG. 1 will be omitted as appropriate.
For example, when another object B is stacked on an object A in the direction of gravity and the object A is operated while paying attention to the posture of the object B and its contact state with the object A, this is defined as a secondary object operation. FIG. 9 shows, as an example, a case where the teapot (object OB21) is operated so that the lid of the teapot (object PT1) does not fall off. In this way, FIG. 9 shows an example in which the object OB21 is operated so that the object PT1 (accessory), which is the lid of the teapot, does not fall from the object OB21 (main object), which is the teapot.
Here, in the case of such an Nth-order object operation (N is an arbitrary number of 2 or more, and is "2" in the case of FIG. 9), it may not be possible to judge whether the operation can be performed stably only by object extraction, center-of-gravity detection, and physical model simulation based on visual information from the image sensor 141 or the like. For example, when tea is poured by tilting the teapot while holding down the lid (object PT1) of the teapot (object OB21), the lid of the teapot is only supported in the direction of gravity by the teapot body, and may move in any direction under a weak force. Therefore, to detect the operational stability of a group of unknown objects that may move due to a posture change or a weak external force, a process is added in which, before the operation, the manipulator applies a weak external force to the target object to check whether the target has a part that moves.
In FIG. 9, it is assumed that the robot device 100 does not have the knowledge that the teapot body and the teapot lid move separately. That is, the robot device 100 does not have the knowledge that the teapot (object OB21) and the lid (object PT1) are separable. The robot device 100 brings the robot hand into contact with the surface of the target object recognized by the image sensor 141, with a force within the threshold value, and confirms whether the target object has a moving portion.
The robot device 100 brings the operation unit 16 into contact with the object OB21 and the object PT1 (step S21). The robot device 100 brings the second operation unit 16b into contact with the object OB21 (object PT1) with a force within the threshold value. The robot device 100 detects the force related to the contact of the second operation unit 16b with the object OB21 (object PT1) by the force sensor 142, and controls the strength with which the second operation unit 16b is brought into contact with the object OB21 (object PT1) based on the information on the detected force. When the object moves due to the contact, the robot device 100 extracts and segments the difference from the image before the movement, and recognizes the object. When the object PT1 moves due to the contact of the second operation unit 16b with the object OB21 (object PT1), the robot device 100 extracts and segments the difference from the image before the movement, and recognizes the teapot body (object OB21) and the lid (object PT1). The robot device 100 may also recognize the shape of the moving object (object PT1) by using a distance measuring sensor such as a ToF sensor mounted on the holding unit (end effector) of the operation unit 16.
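The gentle-contact probing and difference-image segmentation described here could be sketched as follows. The `robot.touch` and `camera.capture` interfaces are assumptions, and the single-threshold pixel difference is a deliberately simple stand-in for the recognition actually used.

```python
import numpy as np

CONTACT_FORCE_LIMIT_N = 2.0  # illustrative probe-force threshold

def probe_for_moving_part(robot, camera, contact_point, diff_threshold=25):
    """Touch the target gently and look for a part that moved on its own.

    robot.touch(point, max_force) and camera.capture() (returning a grayscale
    numpy array) are assumed interfaces. Returns a boolean mask of pixels that
    changed, or None if nothing moved.
    """
    before = camera.capture().astype(np.int16)
    robot.touch(contact_point, max_force=CONTACT_FORCE_LIMIT_N)  # force kept under the limit
    after = camera.capture().astype(np.int16)
    moved = np.abs(after - before) > diff_threshold
    if not moved.any():
        return None   # no independently moving part detected
    return moved      # rough segmentation of the part that moved (e.g. the lid)
```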
The robot device 100 determines whether or not the object has a portion that moves independently of the object, based on the result of contact with the object by the operation unit 16 (step S22). In the example of FIG. 9, since the object PT1 moved independently of the object OB21 in response to the contact of the second operation unit 16b with the object OB21 (object PT1), the robot device 100 determines that the object OB21 has the object PT1 on it.
The robot device 100 operates the object according to the determination result (step S23). When the robot device 100 determines that the object has a portion that moves independently of the object, the robot device 100 executes a process of operating the object according to the number of operation units 16. In the example of FIG. 9, the robot device 100 determines that the object OB21 has a portion (object PT1) that moves independently of the object OB21, and executes a process of operating the object according to the number of operation units 16. Since the robot device 100 has two operation units 16, it holds the object PT1, which is the lid, with the second operation unit 16b and operates the object OB21, which is the teapot, with the first operation unit 16a, thereby executing the action of pouring tea into a teacup (cup).
In this way, the robot device 100 determines whether the object has a portion that moves independently of the object and operates the object according to the result of the determination, so that an appropriate operation according to the object can be executed.
[2. Second Embodiment]
[2-1. Configuration of the robot device according to the second embodiment of the present disclosure]
In the first embodiment described above, the case where the robot device 100 has a plurality of (two) operation units 16 is shown, but the robot device may have a single (one) operation unit 16. In the second embodiment, the case where the robot device 100A has only one operation unit 16 will be described as an example. The description of the same points as the robot device 100 according to the first embodiment will be omitted as appropriate.
First, the configuration of the robot device 100A, which is an example of the information processing device that executes the information processing according to the second embodiment, will be described. FIG. 10 is a diagram showing a configuration example of the robot device according to the second embodiment of the present disclosure.
As shown in FIG. 10, the robot device 100A includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, and an operation unit 16.
In the example of FIG. 11, the operation unit 16 is provided at the base of the robot device 100a. The operation unit 16 is provided so as to extend from a base portion that connects the moving unit 15 of the robot device 100a. The operation unit 16 may be provided at a different position depending on the shape of the robot device 100A.
[2-2. Outline of information processing according to the second embodiment]
Next, the outline of the information processing according to the second embodiment will be described with reference to FIG. 11. FIG. 11 is a diagram showing an example of information processing according to the second embodiment. The information processing according to the second embodiment is realized by the robot device 100A shown in FIG. 11. A case where processing is performed on a group of objects in which a plurality of objects are stacked will be described as an example with reference to FIG. 11. The description of the same points as in FIG. 1 will be omitted as appropriate. Since the states ST1 to ST3 and the processes of steps S1 to S4 shown in FIG. 11 are the same as those in FIG. 1, the description thereof will be omitted.
By the process of step S4, the robot device 100a determines that the posture change predicted value of the object OB3 generated by the removal of the target object OB2 is equal to or more than the posture threshold value. Further, in the example of FIG. 11, the robot device 100a has only one operation unit 16. Therefore, the robot device 100a determines that the operation of the target object OB2 is impossible, and selects a target object as a candidate for another operation target (step S31). The robot device 100a selects an object OB3 (hereinafter, also referred to as “target object OB3”) as a target object from the object group of the remaining objects OB1 and OB3 other than the object OB2. That is, the robot device 100a executes the process for the object OB3 which is an adjacent object of the target object OB2.
Then, the robot device 100a removes the target object (step S2). In the example of FIG. 11, the robot device 100a removes the target object OB3. The robot device 100a removes the target object OB3 from the image IM1 showing the state ST1 in which the objects OB1, OB2, and OB3 are stacked. By removing the target object OB3 from the image IM1, the robot device 100a performs processing on the state ST32 in which only the target object OB3 is removed from the object group of the objects OB1, OB2, and OB3.
The robot device 100a predicts a change in the arrangement state of the adjacent objects when the target object is removed (step S33). The robot device 100a predicts changes in the posture and position of the adjacent objects when the target object is removed. The robot device 100a may treat as an adjacent object not only an object in direct contact with the target object (a contact object) but also an object in contact with that contact object, that is, an object in chained contact with the target object. As shown in the state ST1 of FIG. 11, only the object OB2 is in direct contact with the target object OB3, but the object OB1 is in contact with the object OB2 and is thus in chained contact with the target object OB3. Therefore, the robot device 100a processes the object OB1 as an adjacent object. In this way, the robot device 100a predicts a change in the arrangement state of the objects OB1 and OB2 when the target object OB3 is removed. The robot device 100a predicts changes in the posture and position of the object OB1 and changes in the posture and position of the object OB2 when the target object OB3 is removed.
Since the object OB2 is located below the target object OB3, the robot device 100a predicts that the position and posture of the object OB2 will not change due to the removal of the target object OB3. Since the object OB2 is in a state of supporting the target object OB3, the robot device 100a predicts that the position and posture of the object OB2 will not change due to the removal of the target object OB3. For example, the robot device 100a predicts that the posture change predicted value and the position change predicted value of the object OB2 when the target object OB3 is removed are 0.
Further, since the object OB1 is located below the target object OB3 and the object OB2, the robot device 100a predicts that the position and posture of the object OB1 will not change due to the removal of the target object OB3. Since the object OB1 is in a state of supporting the target object OB3 and the object OB2, the robot device 100a predicts that the position and posture of the object OB1 will not change due to the removal of the target object OB3. For example, the robot device 100a predicts that the posture change predicted value and the position change predicted value of the object OB1 when the target object OB3 is removed are 0.
Then, the robot device 100a determines whether the movement of the surrounding objects, including the adjacent objects, is equal to or greater than the threshold values (step S4). The robot device 100a determines that the changes in posture and position of both the object OB1 and the object OB2, which are adjacent objects of the target object OB3, are less than the threshold values.
The robot device 100a executes a process related to the operation based on the determination result (step S5). In the example of FIG. 11, since the changes in posture and position of both the object OB1 and the object OB2, which are adjacent objects of the target object OB3, are less than the threshold values, the robot device 100a determines that the target object OB3 can be operated and executes the operation of the target object OB3 with the operation unit 16. In this case, the robot device 100a executes an operation such as moving the target object OB3.
After that, the robot device 100a completes the execution of all the operations of the objects OB1 to OB3 by executing the operations in the order of the object OB2 and the object OB1.
As described above, the robot device 100a executes a process of manipulating the target object and the adjacent object based on the change in the arrangement state of the adjacent object adjacent to the target object when the target object is removed. In this way, the robot device 100a can enable an appropriate operation on an object even when an adjacent object exists.
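To summarize how steps S31, S2, S33, S4, and S5 fit together, the following is a minimal Python sketch of the single-arm selection loop described above. It is an illustration rather than the disclosed implementation: the prediction callback, the threshold values, and the toy example are assumptions standing in for the prediction unit and the threshold comparison.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical prediction interface: given the remaining object group and a
# candidate target, return the predicted (posture_change, position_change)
# of every other object when the target is virtually removed from the image.
PredictFn = Callable[[List[str], str], Dict[str, Tuple[float, float]]]


def plan_single_arm_order(objects: List[str],
                          predict_changes: PredictFn,
                          posture_threshold: float = 10.0,   # assumed unit: degrees
                          position_threshold: float = 0.05,  # assumed unit: meters
                          ) -> List[str]:
    """Greedy operation order for a robot with a single operation unit.

    At each step, pick a candidate target whose removal is predicted to keep
    every adjacent object's posture/position change below the thresholds
    (steps S31 -> S2 -> S33 -> S4), operate it (step S5), and repeat.
    """
    remaining = list(objects)
    order: List[str] = []
    while remaining:
        for target in remaining:
            changes = predict_changes(remaining, target)           # step S33
            operable = all(pose < posture_threshold and pos < position_threshold
                           for pose, pos in changes.values())      # step S4
            if operable:
                order.append(target)                                # step S5
                remaining.remove(target)
                break
        else:
            raise RuntimeError("no target is operable with a single operation unit")
    return order


# Toy example mimicking FIG. 11: only the topmost object of the stack can be
# removed without disturbing its neighbours, so the order becomes OB3, OB2, OB1.
def toy_predict(remaining: List[str], target: str) -> Dict[str, Tuple[float, float]]:
    top = max(remaining, key=lambda name: int(name[2:]))
    safe = target == top
    return {o: (0.0, 0.0) if safe else (45.0, 0.2) for o in remaining if o != target}


print(plan_single_arm_order(["OB1", "OB2", "OB3"], toy_predict))  # ['OB3', 'OB2', 'OB1']
```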
[3. Third Embodiment]
[3-1. Configuration of robot device according to the third embodiment of the present disclosure]
In the first and second embodiments, the cases where the robot devices 100 and 100a have one or two operation units 16 have been described, but the robot device may have three or more operation units 16. In the third embodiment, a case where the robot device 100B has three operation units 16 will be described as an example. Description of the same points as the robot device 100 according to the first embodiment and the robot device 100a according to the second embodiment will be omitted as appropriate.
First, the configuration of the robot device 100B, which is an example of the information processing device that executes the information processing according to the third embodiment, will be described. FIG. 12 is a diagram showing a configuration example of the robot device according to the third embodiment of the present disclosure.
As shown in FIG. 12, the robot device 100B includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, a first operation unit 16a, a second operation unit 16b, and a third operation unit 16c.
The robot device 100B according to the third embodiment has three operation units 16: the first operation unit 16a, the second operation unit 16b, and the third operation unit 16c. The first operation unit 16a, the second operation unit 16b, and the third operation unit 16c are provided on both side portions of the body portion (base portion) of the robot device 100B. The first operation unit 16a extends from the left side portion of the robot device 100B and functions as the left hand of the robot device 100B. The second operation unit 16b and the third operation unit 16c extend from the right side portion of the robot device 100B and function as the right hand of the robot device 100B. The operation units 16 may be provided at various positions depending on their number and the shape of the robot device 100B. For example, the third operation unit 16c may be provided in the central portion of the base portion.
When operating the objects OB1 to OB3 as shown in the state ST1 in FIG. 1, the determination unit 136 of the robot device 100B determines that the operation is possible even when the object OB1 is selected first as the target object. The determination unit 136 of the robot device 100B may also process, as an adjacent object, the object OB3 that is in contact with the object OB2, which is in contact with the object OB1.
For example, the robot device 100B executes a process of operating the objects OB2 and OB3 because the predicted posture change values of the objects OB2 and OB3 caused by the removal of the target object OB1 are equal to or greater than the posture threshold value. For example, the robot device 100B executes the operation of the target object OB1 with the first operation unit 16a, executes the operation of the object OB2 with the second operation unit 16b, and executes the operation of the object OB3 with the third operation unit 16c. For example, the robot device 100B executes the operation of the target object OB1 with the first operation unit 16a, executes an operation of supporting the adjacent object OB2 with the second operation unit 16b, and executes an operation of supporting the adjacent object OB3 with the third operation unit 16c. In this case, after an operation such as moving the target object OB1, the robot device 100B may drive the second operation unit 16b to execute an operation of placing the object OB2 at a stable position and drive the third operation unit 16c to execute an operation of placing the object OB3 at a stable position.
Alternatively, the robot device 100B may execute an operation of holding the target object OB1 with the first operation unit 16a, an operation of holding the object OB2 with the second operation unit 16b, and an operation of holding the object OB3 with the third operation unit 16c. Then, the robot device 100B may carry the objects OB2 and OB3 together with the object OB1 to a desired position by the moving unit 15 while holding the objects OB1 to OB3 with the operation units 16.
As described above, the robot device 100B executes a process of operating the target object and the adjacent objects based on the change in the arrangement state of the adjacent objects adjacent to the target object when the target object is removed. In this way, the robot device 100B can enable an appropriate operation on an object even when an adjacent object exists.
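As a rough illustration of how the three operation units could be assigned in this example, the following Python sketch gives one operation unit the target object and uses the remaining units for the adjacent objects whose predicted posture change is at or above the threshold. The function name, threshold value, and unit labels are assumptions for illustration, not the actual control logic of the robot device 100B.

```python
from typing import Dict, List


def assign_operation_units(target: str,
                           predicted_changes: Dict[str, float],
                           units: List[str],
                           posture_threshold: float = 10.0,  # assumed unit: degrees
                           ) -> Dict[str, str]:
    """Assign the first operation unit to the target object and the remaining
    units to the adjacent objects whose predicted posture change is at or
    above the threshold, i.e. the objects that need to be supported."""
    assignment = {units[0]: target}
    unstable = [obj for obj, change
                in sorted(predicted_changes.items(), key=lambda kv: -kv[1])
                if change >= posture_threshold]
    for unit, obj in zip(units[1:], unstable):
        assignment[unit] = obj  # support (or later re-place) this adjacent object
    return assignment


# FIG. 1 example: removing OB1 is predicted to topple OB2 and OB3.
print(assign_operation_units(target="OB1",
                             predicted_changes={"OB2": 60.0, "OB3": 45.0},
                             units=["16a", "16b", "16c"]))
# {'16a': 'OB1', '16b': 'OB2', '16c': 'OB3'}
```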
[3-2. Outline of information processing according to the third embodiment]
Next, the outline of the information processing according to the third embodiment will be described with reference to FIG. 13. FIG. 13 is a diagram showing an example of information processing according to the third embodiment. The information processing according to the third embodiment is realized by the robot device 100B shown in FIG. 12. A case where a process of carrying a tray on which a plurality of objects are placed is performed will be described as an example with reference to FIG. 13. Description of the same points as in the above examples will be omitted as appropriate.
The example of FIG. 13 shows a case where the robot device 100B manipulates the object OB40, which is a tray on which the objects OB41 to OB46, which are a plurality of dishes (tableware), are placed. When the robot device 100B places the dishes (objects OB41 to OB46) on the tray (object OB40) and serves them in this way, it is necessary to be careful not to spill the drink in the cup on the tray. In this case, the relationship is operation unit 16 (robot arm) → tray → cup → drink, and, viewed from the operation unit 16, the drink is a tertiary object. Also, when carrying the dishes on the tray, the food and drink may spill due to vibration or external contact.
For example, when serving dishes on a tray, the tableware may be labeled in advance in order of how easily it moves. The robot device 100B may label in advance, in order of ease of movement, the tableware that moves easily under an external force applied by the operation unit 16 (manipulator). For example, the robot device 100B may apply an external force to the tableware with the operation unit 16 (manipulator), measure the movement of each of the objects OB41 to OB46, and label the objects OB41 to OB46 in order of ease of movement. Then, the robot device 100B may hold the easily movable objects with the remaining operation units 16 other than the operation unit 16 necessary for holding the object OB40, which is the tray. The robot device 100B labels the objects as easy to move in the order of the objects OB46, OB42, OB45, OB41, OB43, and OB44. In this way, the robot device 100B labels the objects OB46 and OB42, whose contents are liquid, as easy to move.
In the example of FIG. 13, the robot device 100B holds the object OB40, which is the tray, with the second operation unit 16b, holds the object OB46, whose contents (liquid) are likely to spill, with the remaining first operation unit 16a, and holds the object OB42, whose contents (liquid) are likely to spill, with the third operation unit 16c. In this way, the robot device 100B can stably perform the serving task by holding down the unstable tableware with operation units 16 (manipulators) other than the operation unit 16 (manipulator) holding the tray.
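A minimal sketch of this serving example, under the assumption that the motion measured for each piece of tableware under the probing force is available as a single number, might look like the following Python code. The function names and motion values are hypothetical; only the resulting order and the arm assignment follow the example of FIG. 13.

```python
from typing import Dict, List


def rank_by_mobility(measured_motion: Dict[str, float]) -> List[str]:
    """Order the tableware by how much it moved when a probing force was
    applied in advance by a manipulator (largest measured motion first)."""
    return sorted(measured_motion, key=measured_motion.get, reverse=True)


def plan_tray_grasp(tray: str,
                    measured_motion: Dict[str, float],
                    units: List[str]) -> Dict[str, str]:
    """Hold the tray with one operation unit and press down the most easily
    moved items with the remaining operation units."""
    ranking = rank_by_mobility(measured_motion)
    plan = {units[0]: tray}
    for unit, item in zip(units[1:], ranking):
        plan[unit] = item
    return plan


# FIG. 13 example: the cups containing liquid (OB46, OB42) move most easily.
motion = {"OB41": 0.3, "OB42": 0.8, "OB43": 0.2,
          "OB44": 0.1, "OB45": 0.4, "OB46": 0.9}
print(plan_tray_grasp("OB40", motion, ["16b", "16a", "16c"]))
# {'16b': 'OB40', '16a': 'OB46', '16c': 'OB42'}
```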
[4. Other embodiments]
The processing according to each of the above-described embodiments may be carried out in various other forms (modifications) besides the above embodiments.
[4-1. Other configuration examples]
For example, in the above-described examples, the information processing devices that perform the information processing are the robot devices 100, 100A, and 100B, but the information processing device and the robot device may be separate bodies. This point will be described with reference to FIGS. 14 and 15. FIG. 14 is a diagram showing a configuration example of an information processing system according to a modification of the present disclosure. FIG. 15 is a diagram showing a configuration example of an information processing device according to a modification of the present disclosure.
As shown in FIG. 14, the information processing system 1 includes a robot device 10 and an information processing device 100C. The robot device 10 and the information processing device 100C are connected to each other via a network N so as to be communicable by wire or wirelessly. The information processing system 1 shown in FIG. 14 may include a plurality of robot devices 10 and a plurality of information processing devices 100C. In this case, the information processing device 100C may communicate with the robot device 10 via the network N and give an instruction to control the robot device 10 based on the information collected by the robot device 10 and various sensors.
The robot device 10 transmits sensor information detected by sensors such as an image sensor and a force sensor to the information processing device 100C. The robot device 10 transmits image information obtained by capturing an image of a group of objects by an image sensor to the information processing device 100C. As a result, the information processing apparatus 100C acquires image information including a group of objects. The robot device 10 may be any device as long as information can be transmitted and received to and from the information processing device 100C, and may be various robots such as an autonomous mobile robot.
The information processing device 100C is an information processing device that transmits information (control information) for controlling the robot device 10, such as an action plan, to the robot device 10. For example, the information processing device 100C generates control information for controlling the robot device 10, such as an action plan of the robot device 10, based on the information stored in the storage unit 12C and the information acquired from the robot device 10. The information processing device 100C transmits the generated control information to the robot device 10. The robot device 10 that has received the control information from the information processing device 100C controls the moving unit 15 based on the control information to move, or controls the operation unit 16 based on the control information to operate an object.
As shown in FIG. 15, the information processing device 100C includes a communication unit 11C, a storage unit 12C, and a control unit 13C. The communication unit 11C is connected to the network N (Internet or the like) by wire or wirelessly, and transmits / receives information to / from the robot device 10 via the network N.
The storage unit 12C is realized by, for example, a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk. The storage unit 12C stores the same information as the storage unit 12. The storage unit 12C has a threshold information storage unit 121 and a density information storage unit 122. The storage unit 12C stores information for controlling the movement of the robot device 10, various information received from the robot device 10, and various information to be transmitted to the robot device 10.
The control unit 13C is realized by, for example, a CPU, an MPU, or the like executing a program stored inside the information processing device 100C (for example, an information processing program according to the present disclosure) using a RAM or the like as a work area. Further, the control unit 13C may be realized by an integrated circuit such as an ASIC or FPGA. The control unit 13C includes an acquisition unit 131, an analysis unit 132, a classification unit 133, a selection unit 134, a prediction unit 135, a determination unit 136, a planning unit 137, and a transmission unit 138C.
The transmission unit 138C transmits various information to an external information processing device. For example, the transmission unit 138C transmits various information to the robot device 10. The transmission unit 138C provides and transmits the information stored in the storage unit 12.
The transmission unit 138C provides various information based on the information from the robot device 10. The transmission unit 138C provides various information based on the information stored in the storage unit 12.
The transmission unit 138C transmits the control information to the robot device 10. The transmission unit 138C transmits the action plan created by the action planning unit to the robot device 10. The transmission unit 138C executes a process of transmitting control information to the robot device 10 in order to cause the robot device 10 to operate an adjacent object. In this way, the transmission unit 138C functions as an execution unit that executes a process of manipulating an adjacent object by executing a process of transmitting control information to the robot device 10.
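As a rough sketch of this split configuration, assuming hypothetical message types for the exchange over the network N, the flow might look like the following Python code. It illustrates the data flow only, not the actual interface of the robot device 10 or the information processing device 100C.

```python
from dataclasses import dataclass, field
from typing import Dict, List


# Hypothetical message types exchanged over the network N; the actual
# transport and payload formats are not specified in this disclosure.
@dataclass
class SensorMessage:
    image: bytes                                        # image information from the robot device 10
    force_readings: Dict[str, float] = field(default_factory=dict)


@dataclass
class ControlMessage:
    action_plan: List[str]                              # action plan created on the planning side


class InformationProcessingDeviceStub:
    """Minimal stand-in for the information processing device 100C: it receives
    sensor information from the robot device 10, runs prediction and planning,
    and returns control information. The fixed plan below is a placeholder for
    the output of the prediction unit 135 and the planning unit 137."""

    def handle(self, msg: SensorMessage) -> ControlMessage:
        plan = ["support the adjacent object with another operation unit",
                "move the target object"]
        return ControlMessage(action_plan=plan)


# The robot device 10 would send a SensorMessage and then drive its moving
# unit 15 and operation units 16 according to the returned ControlMessage.
device = InformationProcessingDeviceStub()
control = device.handle(SensorMessage(image=b""))
print(control.action_plan)
```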
As described above, the information processing device 100C does not have to include a sensor unit, a moving unit, an operation unit, or the like, and does not have to have a configuration for realizing the functions of a robot device. The information processing device 100C may have an input unit (for example, a keyboard and a mouse) that receives various operations from an administrator or the like who manages the information processing device 100C, and a display unit (for example, a liquid crystal display) for displaying various information.
[4-2. Others]
Of the processes described in each of the above embodiments, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can also be performed automatically by a known method. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above document and drawings can be arbitrarily changed unless otherwise specified. For example, the various information shown in each figure is not limited to the illustrated information.
Further, each component of each device shown in the figures is a functional concept and does not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of each device is not limited to the illustrated form, and all or part of each device can be functionally or physically distributed and integrated in arbitrary units according to various loads and usage conditions.
Further, each of the above-described embodiments and modifications can be appropriately combined as long as the processing contents do not contradict each other.
Further, the effects described in the present specification are merely examples and are not limiting, and other effects may be obtained.
[5. Effect of this disclosure]
As described above, the information processing device according to the present disclosure (the robot devices 100, 100A, and 100B and the information processing device 100C in the embodiments) includes a prediction unit (the prediction unit 135 in the embodiments) and an execution unit (the execution unit 138 in the embodiments). The prediction unit predicts a change regarding the arrangement state of an adjacent object, which is an object adjacent to a target object that is a candidate for an operation target, caused by an operation on the target object, based on image information in which the target object and the adjacent object are imaged. The execution unit executes a process of operating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
In this way, when an adjacent object exists next to the target object, which is a candidate for the operation target, the information processing device according to the present disclosure predicts a change regarding the arrangement state of the adjacent object caused by an operation on the target object, based on image information in which the target object and the adjacent object are imaged. For example, based on the image information in which the target object and the adjacent object are imaged, the information processing device predicts a change in the arrangement state of the adjacent object caused by removal of the target object, a change in the position of the target object, or a change in the posture of the target object. Then, when the change in the arrangement state of the adjacent object satisfies a predetermined condition, the information processing device executes a process of operating the adjacent object, which makes an appropriate operation on the object possible even when an adjacent object exists.
Further, the execution unit executes the process of operating the adjacent object when the amount of change regarding the arrangement state of the adjacent object is equal to or greater than a threshold value. In this way, when the amount of change in the arrangement state of the adjacent object is equal to or greater than the threshold value, the information processing device executes the process of operating the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the prediction unit predicts a change in the posture of the adjacent object caused by the operation on the target object. The execution unit executes the process of operating the adjacent object when the change in the posture of the adjacent object satisfies a condition regarding the posture change. In this way, the information processing device predicts the change in the posture of the adjacent object and, when the change satisfies the condition regarding the posture change, executes the process of operating the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the prediction unit predicts a change in the position of the adjacent object caused by the operation on the target object. The execution unit executes the process of operating the adjacent object when the change in the position of the adjacent object satisfies a condition regarding the position change. In this way, the information processing device predicts the change in the position of the adjacent object and, when the change satisfies the condition regarding the position change, executes the process of operating the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object, based on the image information in which the target object and the adjacent object in contact with the target object are imaged. In this way, when the change in the arrangement state of the adjacent object in contact with the target object satisfies a predetermined condition, the information processing device executes the process of operating the adjacent object, so that an appropriate operation is possible even when an object in contact with the target object exists.
Further, the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object, based on the image information in which the stacked target object and adjacent object are imaged. In this way, when the change in the arrangement state of the adjacent object stacked with the target object satisfies a predetermined condition, the information processing device executes the process of operating the adjacent object, so that an appropriate operation is possible even when stacked objects exist.
Further, the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object, based on the image information in which the target object and the adjacent object located within the range affected by the operation on the target object are imaged. In this way, when the change in the arrangement state of the adjacent object located within the range affected by the operation on the target object satisfies a predetermined condition, the information processing device executes the process of operating the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the information processing device includes an operation unit (the operation unit 16 in the embodiments). The operation unit is driven according to the processing by the execution unit. As a result, the information processing device can perform an appropriate operation on an object, even when an adjacent object exists, with the operation unit driven according to the processing by the execution unit.
Further, the operation unit operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition. In this way, when the change regarding the arrangement state of the adjacent object satisfies the predetermined condition, the information processing device operates the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the information processing device includes a plurality of operation units (in the embodiment, the first operation unit 16a, the second operation unit 16b, and the third operation unit 16c). The plurality of operation units are driven according to the processing by the execution unit. As a result, the information processing apparatus can perform an appropriate operation on the object even when there are adjacent objects by the plurality of operation units driven by the processing by the execution unit.
Further, at least one of the plurality of operation units operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition. In this way, when the change regarding the arrangement state of the adjacent object satisfies the predetermined condition, the information processing device operates the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition, the execution unit executes a process of causing one operation unit of the plurality of operation units to operate the target object and executes a process of causing another operation unit of the plurality of operation units to operate the adjacent object. In this way, when the change regarding the arrangement state of the adjacent object satisfies the predetermined condition, the information processing device causes one operation unit to operate the target object and another operation unit to operate the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the execution unit executes a process of causing the one operation unit to move the target object. In this way, the information processing device executes the process of moving the target object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the execution unit executes a process of causing the other operation unit to suppress a change in the arrangement state of the adjacent object due to the movement of the target object. In this way, the information processing device executes the process of suppressing the change in the arrangement state of the adjacent object due to the movement of the target object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the execution unit executes a process of causing the other operation unit to support the adjacent object. In this way, the information processing device executes the process of supporting the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the execution unit executes a process of causing the other operation unit to move the adjacent object. In this way, the information processing device executes the process of moving the adjacent object, so that an appropriate operation on the object is possible even when an adjacent object exists.
Further, the information processing device includes a force sensor (force sensor 142 in the embodiment). The force sensor detects the contact of the operation unit with an object. The execution unit executes a process of bringing the operation unit into contact with an object based on the sensor information detected by the force sensor. As a result, the information processing apparatus can acquire information on the state of the object by detecting the contact with the object by the operation unit, so that it is possible to perform an appropriate operation according to the object.
Further, the information processing device includes a determination unit (the determination unit 136 in the embodiments). The determination unit determines, based on the result of the contact of the operation unit with an object, whether the object has a portion that moves independently of the object. When the determination unit determines that the object has such a portion, the execution unit executes a process of operating the object according to the number of operation units. As a result, the information processing device determines, based on the result of the contact of the operation unit with the object, whether the object has a portion that moves independently of the object, and executes a process of operating the object according to the determination result and the number of operation units, so that an appropriate operation according to the state of the object and the number of operation units can be performed.
[6. Hardware configuration]
The information devices such as the robot devices 100, 100A, and 100B and the information processing device 100C according to the above-described embodiments are realized by, for example, a computer 1000 having the configuration shown in FIG. 16. FIG. 16 is a hardware configuration diagram showing an example of the computer 1000 that realizes the functions of information processing devices such as the robot devices 100, 100A, and 100B and the information processing device 100C. Hereinafter, the robot device 100 according to the first embodiment will be described as an example. The computer 1000 includes a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. Each part of the computer 1000 is connected by a bus 1050.
The CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
The ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program that depends on the hardware of the computer 1000, and the like.
The HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100 and data used by the program. Specifically, the HDD 1400 is a recording medium for recording an information processing program according to the present disclosure, which is an example of program data 1450.
The communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet). For example, the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
The input/output interface 1600 is an interface for connecting an input/output device 1650 and the computer 1000. For example, the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input/output interface 1600. The CPU 1100 also transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 1600. Further, the input/output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media). The media is, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, or a semiconductor memory. For example, when the computer 1000 functions as the robot device 100 according to the embodiment, the CPU 1100 of the computer 1000 realizes the functions of the control unit 13 and the like by executing the information processing program loaded on the RAM 1200. The HDD 1400 also stores the information processing program according to the present disclosure and the data in the storage unit 12. The CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, these programs may be acquired from another device via the external network 1550.
The present technology can also have the following configurations.
(1)
An information processing device comprising:
a prediction unit that predicts a change regarding an arrangement state of an adjacent object, the adjacent object being an object adjacent to a target object that is a candidate for an operation target, the change being caused by an operation on the target object, based on image information in which the target object and the adjacent object are imaged; and
an execution unit that executes a process of operating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
(2)
The information processing device according to (1), wherein
the execution unit executes the process of operating the adjacent object when an amount of change regarding the arrangement state of the adjacent object is equal to or greater than a threshold value.
(3)
The information processing device according to (1) or (2), wherein
the prediction unit predicts a change in a posture of the adjacent object caused by the operation on the target object, and
the execution unit executes the process of operating the adjacent object when the change in the posture of the adjacent object satisfies a condition regarding the posture change.
(4)
The information processing device according to any one of (1) to (3), wherein
the prediction unit predicts a change in a position of the adjacent object caused by the operation on the target object, and
the execution unit executes the process of operating the adjacent object when the change in the position of the adjacent object satisfies a condition regarding the position change.
(5)
The information processing device according to any one of (1) to (4), wherein
the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object, based on the image information in which the target object and the adjacent object in contact with the target object are imaged.
(6)
The information processing device according to any one of (1) to (5), wherein
the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object, based on the image information in which the stacked target object and adjacent object are imaged.
(7)
The information processing device according to any one of (1) to (6), wherein
the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object, based on the image information in which the target object and the adjacent object located within a range affected by the operation on the target object are imaged.
(8)
The information processing device according to any one of (1) to (7), further comprising:
an image sensor that captures the image information; and
an acquisition unit that acquires the image information captured by the image sensor.
(9)
The information processing device according to any one of (1) to (8), further comprising
an operation unit that is driven according to processing by the execution unit.
(10)
The information processing device according to (9), wherein
the operation unit is a manipulator that operates an object.
(11)
The information processing device according to (9) or (10), wherein
the operation unit operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
(12)
The information processing device according to any one of (1) to (8), further comprising
a plurality of operation units that are driven according to processing by the execution unit.
(13)
The information processing device according to (12), wherein
each of the plurality of operation units is a manipulator that operates an object.
(14)
The information processing device according to (12) or (13), wherein
at least one of the plurality of operation units operates the adjacent object when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition.
(15)
The information processing device according to any one of (12) to (14), wherein,
when the change regarding the arrangement state of the adjacent object satisfies a predetermined condition, the execution unit executes a process of causing one operation unit of the plurality of operation units to operate the target object and executes a process of causing another operation unit of the plurality of operation units to operate the adjacent object.
(16)
The information processing device according to (15), wherein
the execution unit executes a process of causing the one operation unit to move the target object.
(17)
The information processing device according to (16), wherein
the execution unit executes a process of causing the other operation unit to suppress a change in the arrangement state of the adjacent object due to the movement of the target object.
(18)
The information processing device according to (16) or (17), wherein
the execution unit executes a process of causing the other operation unit to support the adjacent object.
(19)
The information processing device according to (16) or (17), wherein
the execution unit executes a process of causing the other operation unit to move the adjacent object.
(20)
The information processing device according to any one of (9) to (19), further comprising
a force sensor that detects contact of the operation unit with an object, wherein
the execution unit executes a process of bringing the operation unit into contact with the object based on sensor information detected by the force sensor.
(21)
The information processing device according to (20), further comprising
a determination unit that determines, based on a result of the contact of the operation unit with the object, whether the object has a portion that moves independently of the object, wherein
the execution unit executes a process of operating the object according to the number of operation units when the determination unit determines that the object has the portion.
(22)
The information processing device according to any one of (1) to (21), further comprising:
a classification unit that classifies a group of objects included in the image information into operable objects and inoperable objects; and
a selection unit that selects an operable object from the object group as the target object based on a classification result of the classification unit, wherein
the prediction unit predicts the change regarding the arrangement state of the adjacent object caused by the operation on the target object selected by the selection unit.
(23)
An information processing method that executes control to:
predict a change regarding an arrangement state of an adjacent object, the adjacent object being an object adjacent to a target object that is a candidate for an operation target, the change being caused by an operation on the target object, based on image information in which the target object and the adjacent object are imaged; and
execute a process of operating the adjacent object when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition.
(24)
An information processing program that causes control to be executed to:
predict a change regarding an arrangement state of an adjacent object, the adjacent object being an object adjacent to a target object that is a candidate for an operation target, the change being caused by an operation on the target object, based on image information in which the target object and the adjacent object are imaged; and
execute a process of operating the adjacent object when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition.
100, 100A, 100B Robot device
100C Information processing device
11, 11C Communication unit
12, 12C Storage unit
121 Threshold information storage unit
122 Density information storage unit
13, 13C Control unit
131 Acquisition unit
132 Analysis unit
133 Classification unit
134 Selection unit
135 Prediction unit
136 Determination unit
137 Planning unit
138 Execution unit
138C Transmission unit
14 Sensor unit
141 Image sensor
142 Force sensor
15 Moving unit
16 Operation unit
16a First operation unit
16b Second operation unit
16c Third operation unit
Claims (20)
1. An information processing device comprising:
a prediction unit that predicts, based on image information in which a target object that is a candidate object to be operated and an adjacent object that is an object adjacent to the target object are imaged, a change in an arrangement state of the adjacent object caused by an operation on the target object; and
an execution unit that executes a process of operating the adjacent object when the change in the arrangement state of the adjacent object predicted by the prediction unit satisfies a predetermined condition.
2. The information processing device according to claim 1, wherein the execution unit executes the process of operating the adjacent object when an amount of change in the arrangement state of the adjacent object is equal to or greater than a threshold value.
3. The information processing device according to claim 1, wherein the prediction unit predicts a change in a posture of the adjacent object caused by the operation on the target object, and the execution unit executes the process of operating the adjacent object when the change in the posture of the adjacent object satisfies a condition regarding a posture change.
4. The information processing device according to claim 1, wherein the prediction unit predicts a change in a position of the adjacent object caused by the operation on the target object, and the execution unit executes the process of operating the adjacent object when the change in the position of the adjacent object satisfies a condition regarding a position change.
5. The information processing device according to claim 1, wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on the image information in which the target object and the adjacent object in contact with the target object are imaged.
6. The information processing device according to claim 1, wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on the image information in which the target object and the adjacent object, which are stacked, are imaged.
7. The information processing device according to claim 1, wherein the prediction unit predicts the change in the arrangement state of the adjacent object caused by the operation on the target object based on the image information in which the target object and the adjacent object located within a range affected by the operation on the target object are imaged.
8. The information processing device according to claim 1, further comprising an operation unit that is driven in accordance with processing by the execution unit.
9. The information processing device according to claim 8, wherein the operation unit operates the adjacent object when the change in the arrangement state of the adjacent object satisfies a predetermined condition.
10. The information processing device according to claim 1, further comprising a plurality of operation units that are driven in accordance with processing by the execution unit.
11. The information processing device according to claim 10, wherein at least one of the plurality of operation units operates the adjacent object when the change in the arrangement state of the adjacent object satisfies a predetermined condition.
12. The information processing device according to claim 10, wherein, when the change in the arrangement state of the adjacent object satisfies a predetermined condition, the execution unit executes a process of causing one operation unit of the plurality of operation units to operate the target object and a process of causing another operation unit of the plurality of operation units to operate the adjacent object.
13. The information processing device according to claim 12, wherein the execution unit executes a process of causing the one operation unit to move the target object.
14. The information processing device according to claim 13, wherein the execution unit executes a process of causing the other operation unit to suppress a change in the arrangement state of the adjacent object due to the movement of the target object.
15. The information processing device according to claim 13, wherein the execution unit executes a process of causing the other operation unit to support the adjacent object.
16. The information processing device according to claim 13, wherein the execution unit executes a process of causing the other operation unit to move the adjacent object.
17. The information processing device according to claim 8, further comprising a force sensor that detects contact of the operation unit with an object, wherein the execution unit executes a process of bringing the operation unit into contact with the object based on sensor information detected by the force sensor.
18. The information processing device according to claim 17, further comprising a determination unit that determines, based on a result of contact of the operation unit with an object, whether the object has a portion that moves independently of the object, wherein, when the determination unit determines that the object has the portion, the execution unit executes a process of operating the object according to the number of operation units.
19. An information processing method comprising executing control to: predict, based on image information in which a target object that is a candidate object to be operated and an adjacent object that is an object adjacent to the target object are imaged, a change in an arrangement state of the adjacent object caused by an operation on the target object; and execute a process of operating the adjacent object when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition.
20. An information processing program for causing control to be executed to: predict, based on image information in which a target object that is a candidate object to be operated and an adjacent object that is an object adjacent to the target object are imaged, a change in an arrangement state of the adjacent object caused by an operation on the target object; and execute a process of operating the adjacent object when the predicted change in the arrangement state of the adjacent object satisfies a predetermined condition.
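As with the sketch following the configurations, the next illustration is not part of the claimed subject matter; it only suggests how the force-sensor-based contact step and the check for an independently moving portion recited in claims 17 and 18 might be organized. The sensor interface, threshold values, and probe strategy below are assumptions introduced for illustration.

```python
from typing import Callable

CONTACT_FORCE_N = 2.0         # assumed force level treated as "in contact"
INDEPENDENT_MOTION_M = 0.005  # assumed displacement that counts as an independently moving portion


def touch_until_contact(read_force: Callable[[], float], advance: Callable[[float], None],
                        step_m: float = 0.001, max_steps: int = 50) -> bool:
    """Advance the operation unit in small steps until the force sensor reports contact."""
    for _ in range(max_steps):
        if read_force() >= CONTACT_FORCE_N:
            return True
        advance(step_m)
    return False


def has_independent_part(probe_displacement: Callable[[], float]) -> bool:
    """Determination unit sketch: after contact, a portion that moves while the object
    body stays put is treated as moving independently of the object."""
    return probe_displacement() >= INDEPENDENT_MOTION_M


# Example with stubbed sensors: contact is reported immediately, and the probed portion
# moves 8 mm while the body stays still, so an operation plan matched to the number of
# available operation units would follow.
print(touch_until_contact(read_force=lambda: 3.0, advance=lambda d: None))  # True
print(has_independent_part(probe_displacement=lambda: 0.008))               # True
```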
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-159316 | 2019-09-02 | ||
JP2019159316 | 2019-09-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021044751A1 true WO2021044751A1 (en) | 2021-03-11 |
Family
ID=74852108
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/028134 WO2021044751A1 (en) | 2019-09-02 | 2020-07-20 | Information processing device, information processing method, and information processing program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021044751A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013119121A (en) * | 2011-12-06 | 2013-06-17 | Ihi Corp | Device and method for taking out object |
JP2016196077A (en) * | 2015-04-06 | 2016-11-24 | キヤノン株式会社 | Information processor, information processing method, and program |
JP2018144152A (en) * | 2017-03-03 | 2018-09-20 | 株式会社キーエンス | Robot simulation device, robot simulation method, robot simulation program, computer-readable recording medium and recording device |
JP2019516568A (en) * | 2016-05-20 | 2019-06-20 | グーグル エルエルシー | Method and apparatus for machine learning related to predicting movement of an object in a robot's environment based on parameters relating to future robot movement in the environment based on an image capturing the object |
- 2020-07-20 WO PCT/JP2020/028134 patent/WO2021044751A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013119121A (en) * | 2011-12-06 | 2013-06-17 | Ihi Corp | Device and method for taking out object |
JP2016196077A (en) * | 2015-04-06 | 2016-11-24 | キヤノン株式会社 | Information processor, information processing method, and program |
JP2019516568A (en) * | 2016-05-20 | 2019-06-20 | グーグル エルエルシー | Method and apparatus for machine learning related to predicting movement of an object in a robot's environment based on parameters relating to future robot movement in the environment based on an image capturing the object |
JP2018144152A (en) * | 2017-03-03 | 2018-09-20 | 株式会社キーエンス | Robot simulation device, robot simulation method, robot simulation program, computer-readable recording medium and recording device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7069110B2 (en) | Assessment of robot grip | |
JP7466150B2 (en) | Robotic system with automatic package registration mechanism and method of operation thereof | |
US11597085B2 (en) | Locating and attaching interchangeable tools in-situ | |
CN111571593B (en) | Control device, transport device, recording medium, and control method | |
US20210187735A1 (en) | Positioning a Robot Sensor for Object Classification | |
US9873199B2 (en) | Robotic grasping of items in inventory system | |
US10919151B1 (en) | Robotic device control optimization using spring lattice deformation model | |
EP3347171B1 (en) | Using sensor-based observations of agents in an environment to estimate the pose of an object in the environment and to estimate an uncertainty measure for the pose | |
US10702986B2 (en) | Order picking method and mechanism | |
EP3284563A2 (en) | Picking system | |
US20190291282A1 (en) | Optimization-based spring lattice deformation model for soft materials | |
US20220203547A1 (en) | System and method for improving automated robotic picking via pick planning and interventional assistance | |
US10507584B2 (en) | Fixture manipulation systems and methods | |
JP6948033B1 (en) | Method and calculation system for performing grip area detection | |
CN110597251B (en) | Method and device for controlling intelligent mobile equipment | |
CN116600945A (en) | Pixel-level prediction for grab generation | |
US10933526B2 (en) | Method and robotic system for manipulating instruments | |
WO2021044751A1 (en) | Information processing device, information processing method, and information processing program | |
Lo et al. | Developing a collaborative robotic dishwasher cell system for restaurants | |
WO2021044953A1 (en) | Information processing system and information processing method | |
WO2022050122A1 (en) | Information processing device, robot, and information processing method | |
Mamaev et al. | A concept for a HRC workspace using proximity sensors | |
US20240308082A1 (en) | Apparatus and method for controlling robotic manipulators | |
US20230316557A1 (en) | Retail computer vision system for sensory impaired | |
WO2020075589A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20860145 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20860145 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |