CN107765855A - A method and system for controlling robot motion based on gesture recognition - Google Patents
A method and system for controlling robot motion based on gesture recognition
- Publication number
- CN107765855A (application CN201711009892.1A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- user
- robot
- image
- control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a method and system for controlling robot motion based on gesture recognition. The method includes: the user makes a gesture, and an image or video of the user's gesture is acquired; the image is processed to obtain a gesture feature image representative of the user's gesture, or the video is processed to obtain the user's gesture intention; through gesture recognition, the gesture feature image or gesture intention is matched against predefined gestures; after a successful match, the control instruction corresponding to the matched predefined gesture is obtained and transmitted to the controlled robot. The invention has the advantage that, using this new mode of human-computer interaction, the user only needs to make simple gestures within the camera's view to control the robot, which simplifies interaction and reduces its difficulty.
Description
Technical field
The present invention relates to the field of robot control technology, and more particularly to a method and system for controlling robot motion based on gesture recognition.
Background art
With the development of science and technology, smart devices are widely used in daily life and work, providing people with diversified entertainment and reducing their workload. However, some smart devices are large, their systems are complex, and their operation is cumbersome, which greatly increases the difficulty for ordinary users to learn to use them. Being able to simply send an instruction to an intelligent robot, which then performs the corresponding action, thereby simplifying the interaction process and reducing its difficulty, has therefore become a practical problem in urgent need of a solution.
With the development of human-computer interaction technology, interaction is gradually becoming easier to operate: traditional input devices such as the mouse and keyboard are no longer required, output is no longer limited to the traditional screen display, and input/output modes are becoming diversified. In the current big-data era, input is no longer point-to-point data entry; speech, posture, environmental conditions and other inputs are added to improve interaction efficiency. Interaction is also becoming more intelligent: instead of slow data entry through devices such as keyboards, mice and writing tablets, interactive information is obtained directly and quickly from the user's body posture, voice and other forms of expression, reducing the difficulty of interaction. The operation of interaction is also becoming more humanized: the traditional, machine-centered approach required the user to adapt to the machine's interaction mode, which increased the difficulty of interaction, whereas today's interaction is user-centered, conducted in a way that suits the user, making full use of human vision, touch, voice and their cooperation to achieve humanized, free and efficient input and output, greatly improving the user experience. The main application fields of gesture recognition technology include robotics, digital products, sign language recognition, remote control and virtual reality; at present, robot control is also a field receiving much attention.
In summary, the new interaction mode is human-centered: the machine serves as a multi-channel, multi-mode, multimedia perception receiver; information is conveyed through gestures, voice, body posture, facial expression and other means, and the computer recognizes and analyzes the user's intention and then makes a suitable response. There is already considerable research in this area, including gesture recognition, face recognition and human-body tracking. Gesture recognition based on computer vision acquires gesture motion images through devices such as vision cameras, performs digital image processing and analysis to recognize the gesture, and thus achieves the purpose of human-computer interaction. The new interaction mode is more convenient, intelligent and efficient, and conforms to the development direction of human-computer interaction technology.
Summary of the invention
The object of the present invention is to provide a method and system for controlling robot motion based on gesture recognition, using a more convenient, intelligent and efficient mode of human-computer interaction, whereby the user can make gestures to control the robot's motion, improving the efficiency and convenience of interaction between the user and the robot.
To achieve the above object, the present invention provides a method for controlling robot motion based on gesture recognition, comprising:
Step 1: the user makes a gesture, and an image or video of the user's gesture is acquired;
Step 2: the image is processed to obtain the gesture feature image of the user's static gesture; or the video is processed to obtain the gesture intention of the user's dynamic gesture;
Step 3: gesture recognition is performed, matching the gesture feature image or gesture intention against predefined gestures; after a successful match, the control instruction corresponding to the matched predefined gesture is obtained and transmitted to the controlled robot.
Preferably, the method also includes establishing a gesture template library, specifically by setting the predefined gestures, each predefined gesture corresponding to a control instruction; the gesture template library includes a static gesture template library and a dynamic gesture template library.
Preferably, the gestures include static gestures and dynamic gestures. A static gesture is a particular shape or posture made by temporarily motionless fingers, the palm, or the palm together with the arm; a dynamic gesture is a time-varying gesture formed by a series of continuous static gestures over a period of time.
Preferably, the step of processing the image is specifically: the gesture area of the user's static gesture is separated from the static gesture image through filtering, morphological processing, color-space conversion and skin-color segmentation, and a contour description of the gesture area is produced, yielding the gesture contour image of the user's static gesture.
Preferably, processing the video of the user's gesture includes CamShift tracking to obtain the dynamic gesture intention: when the user's gesture appears within the range of the vision camera, the gesture is captured and located; the position of the palm in the next frame is then captured, and the moving direction of the palm is judged from the position difference, thereby obtaining the dynamic gesture intention.
The present invention also provides a system for controlling robot motion based on gesture recognition, comprising:
a vision camera, for acquiring the image or video of the gesture made by the user;
an image processing unit, for processing the image to obtain the gesture feature image of the user's static gesture, or processing the video to obtain the gesture intention of the user's dynamic gesture;
a control computing unit, for matching the gesture feature image or gesture intention against predefined gestures and, after a successful match, obtaining the control instruction corresponding to the matched predefined gesture and transmitting it to the controlled robot.
Further, the image processing unit and the control computing unit are integrated into one control device.
Preferably, the control device is a desktop computer or a server.
Further, the vision camera, the image processing unit and the control computing unit are integrated into one control device.
Preferably, the control device is a notebook computer, a tablet computer or a smartphone.
Compared with the prior art, the beneficial effects of the present invention are: the user only needs to make simple gestures within the camera's view to control the robot, which simplifies interaction and reduces its difficulty. Meanwhile, the high integration and portability of mobile smart devices make the hardware required by the invention more integrated and portable, helping the user to carry it conveniently, expanding the scope of application of the invention, and giving it broad application prospects.
Brief description of the drawings
Fig. 1 is a flow chart of one embodiment of the method and system of the present invention for controlling a robot by gesture recognition.
Fig. 2 shows the static gestures used in this embodiment.
Fig. 3 is the robot control flow chart of this embodiment.
Fig. 4 is the robot heading parameter diagram of this embodiment.
Fig. 5 is the system schematic of this embodiment.
Fig. 6 is example 1 of controlling robot motion with static gestures in this embodiment.
Fig. 7 is example 2 of controlling robot motion with static gestures in this embodiment.
Fig. 8 is example 1 of controlling robot motion with dynamic gestures in this embodiment.
Fig. 9 is example 2 of controlling robot motion with dynamic gestures in this embodiment.
Reference numerals: 1 - vision camera, 2 - image processing unit, 3 - control computing unit.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments; this should not, however, be interpreted as limiting the scope of the above subject matter of the invention to the following embodiments.
Referring to Fig. 1, the method for controlling a robot based on gesture recognition disclosed in one embodiment of the invention is described in detail for the typical application scenario of controlling the movement of a mobile robot.
Step S100: acquiring the user's gesture
The user interacts with the system by making a predefined gesture within the view of the vision camera, which acquires the gesture image or video. The system can intelligently detect whether the vision camera is switched on: if it is on, the gesture image or video is acquired directly; if not, the vision camera is switched on first and then used to acquire the gesture image or video. The working mode of the vision camera can also be controlled by the user's choice of static or dynamic gesture control: when static gesture control is selected, a static gesture image of the user is acquired through the vision camera; when dynamic gesture control is selected, a dynamic gesture video of the user is acquired through the vision camera.
In a preferred embodiment of the invention, the system can intelligently identify whether static or dynamic gesture control is being performed: when the gesture the user makes in the camera's view does not move within a short time (for example, 2 seconds), it is regarded as a static gesture and an image is captured; when the gesture does move within that short time (for example, 2 seconds), it is regarded as a dynamic gesture and a video is captured.
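The static-versus-dynamic discrimination described above can be sketched as follows. This is a minimal illustration that assumes palm positions are already available from the camera; the 15-pixel threshold is an invented example value, as the patent only specifies the observation window (about 2 seconds):

```python
import math

# Illustrative value only -- the 15 px threshold is an assumption,
# not a figure taken from the patent.
MOVE_THRESHOLD_PX = 15

def classify_gesture(track):
    """track: list of (x, y) palm positions sampled over the short
    observation window (e.g. 2 seconds). Returns 'static' when the hand
    barely moves in that window, otherwise 'dynamic'."""
    x0, y0 = track[0]
    max_disp = max(math.hypot(x - x0, y - y0) for x, y in track)
    return 'static' if max_disp < MOVE_THRESHOLD_PX else 'dynamic'
```

A 'static' result would trigger image capture, a 'dynamic' result video capture.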
Step S200: processing the user's gesture image or video
The user gesture result is obtained through the image processing unit.
When the user's gesture is a static gesture, image processing is performed to obtain the gesture feature image of the static gesture. Specifically, the gesture area of the user's static gesture is separated from the static gesture image through filtering, morphological processing, color-space conversion and skin-color segmentation, and a contour description of the gesture area is produced, yielding the gesture contour image of the user's static gesture.
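The skin-color segmentation step can be illustrated with a minimal NumPy sketch. The YCrCb thresholds below are commonly used values, not ones specified by the patent, and the bounding box is a crude stand-in for the morphological clean-up and contour-description steps:

```python
import numpy as np

# Commonly used YCrCb skin-color ranges (Cr in [133, 173], Cb in [77, 127]);
# these bounds are illustrative assumptions, not values from the patent.
CR_MIN, CR_MAX = 133, 173
CB_MIN, CB_MAX = 77, 127

def skin_mask(ycrcb):
    """Boolean mask of skin-colored pixels in an HxWx3 YCrCb image."""
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return (cr >= CR_MIN) & (cr <= CR_MAX) & (cb >= CB_MIN) & (cb <= CB_MAX)

def gesture_bbox(mask):
    """Bounding box (top, left, bottom, right) of the skin region -- a crude
    stand-in for the contour description of the gesture area."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

In practice the color-space conversion (e.g. BGR to YCrCb) and contour extraction would be done with an image library such as OpenCV.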
When the user's gesture is a dynamic gesture, the gesture intention of the dynamic gesture is obtained through CamShift tracking: when the user's gesture appears within the range of the vision camera, the gesture is captured and located; the position of the palm in the next frame is then captured, and the moving direction of the palm is judged from the position difference, thereby obtaining the dynamic gesture intention.
CamShift tracking is merely the preferred gesture tracking method of this embodiment. At present there are three main hand tracking methods: Kalman filter tracking, optical flow tracking, and CamShift tracking. Any of these can fulfill the dynamic-gesture-intention function of the image processing unit in step S200 of this embodiment; however, CamShift tracking can still track the user's gesture under disturbing factors and performs well in experiments, so it is selected as the preferred gesture tracking method of the present invention.
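The position-difference judgment that follows the tracking step might look like the sketch below. The coordinate convention (x grows right, y grows down, as in image coordinates) and the 10-pixel dead zone are assumptions for illustration; the tracker (e.g. CamShift) is assumed to supply the palm position in consecutive frames:

```python
def gesture_intent(prev, curr, min_shift=10):
    """Judge the palm's moving direction from the position difference
    between two frames; prev and curr are (x, y) palm positions."""
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    if abs(dx) < min_shift and abs(dy) < min_shift:
        return None  # no clear movement between the two frames yet
    if abs(dx) >= abs(dy):
        return 'Right!' if dx > 0 else 'Left!'
    # y grows downward in image coordinates, so a negative dy is "up".
    return 'Back!' if dy > 0 else 'Ahead!'
```

The returned strings mirror the recognition results of the dynamic gesture template library described below in the specification.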
Step S300: gesture recognition
Through the control computing unit, the gesture feature image or gesture intention is matched against the predefined gestures; after a successful match, the control instruction corresponding to the matched predefined gesture is obtained and transmitted to the robot.
In a preferred embodiment of the invention, the gesture recognition result can also be shown on a display screen.
The method of this embodiment also includes establishing a gesture template library, specifically by setting the predefined gestures, each predefined gesture corresponding to a control instruction; the gesture template library includes a static gesture template library and a dynamic gesture template library.
Referring to Table 1 and Fig. 2, in the static gesture template library of this embodiment: when the gesture is "one finger", the recognition result is "1" and the robot is controlled to move forward a certain distance; when the gesture is "two fingers", the recognition result is "2" and the robot moves a certain distance to the right; when the gesture is "three fingers", the recognition result is "3" and the robot moves a certain distance to the left; when the gesture is "four fingers", the recognition result is "4" and the robot moves backward a certain distance; when the gesture is "palm", the recognition result is "5" and the robot stops its current action, stays in place, and waits for the next instruction.
In a preferred embodiment of the invention, in order to reduce the error rate and improve accuracy, multiple similar recordings of each gesture (for example, 100 or more) can be enrolled as templates.
Table 1
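Since the body of Table 1 is not reproduced here, the static mapping described in the text can be summarized as a simple dispatch table; the instruction strings are paraphrases for illustration, not the patent's exact wording:

```python
# Static gesture template library mapping, paraphrased from the text above.
STATIC_COMMANDS = {
    '1': 'move forward a certain distance',          # one finger
    '2': 'move right a certain distance',            # two fingers
    '3': 'move left a certain distance',             # three fingers
    '4': 'move backward a certain distance',         # four fingers
    '5': 'stop and wait for the next instruction',   # palm
}

def instruction_for(recognition_result):
    """Look up the control instruction for a recognition result."""
    return STATIC_COMMANDS.get(recognition_result)
```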
Referring to Table 2, in the dynamic gesture template library of this embodiment: when the hand translates to the left, the recognition result is "Left!" and the robot moves a certain distance to the left; when the hand translates to the right, the recognition result is "Right!" and the robot moves a certain distance to the right; when the hand translates upward, the recognition result is "Ahead!" and the robot moves forward a certain distance; when the hand translates downward, the recognition result is "Back!" and the robot moves backward a certain distance; when the hand makes a fist, the recognition result is "Stop!" and the robot stops its current action, stays in place, and waits for the next instruction.
Table 2
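Matching a gesture against a template library that holds many enrolled samples per gesture (as suggested above) could be sketched as a nearest-sample search. The feature vectors and Euclidean distance are illustrative assumptions; the patent does not fix a particular feature or distance measure:

```python
import math

def nearest_template(features, template_library):
    """template_library: {label: [feature_vector, ...]} with multiple
    enrolled samples per gesture. Returns the label of the enrolled
    sample closest to `features` (Euclidean distance)."""
    best_label, best_dist = None, math.inf
    for label, samples in template_library.items():
        for sample in samples:
            d = math.dist(features, sample)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label
```

Enrolling 100 or more samples per gesture, as suggested above, simply adds more entries per label; in a real system a confidence threshold on `best_dist` would likely reject poor matches.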
The controlled robot obtains and executes the control command. Referring to Figs. 3 and 4, the motion control of the robot of this embodiment is realized as follows: with the robot's starting point as the origin, due north as the positive Y-axis direction, and the positive X-axis direction obtained by rotating the Y-axis 90° clockwise, a global coordinate system is established. After the system recognizes a gesture, information is transmitted to the robot controller; in the transmitted information, the reference coordinate is the position of the target relative to the robot, and θ is the angle formed between the target's relative coordinate and the positive X-axis of the global coordinate system. The controller then controls the robot's motion; after finally reaching the target location, if no new instruction is received, the robot stops moving. The initial velocity of the robot is preset; thereafter, the speed of the two wheels can be controlled by adjusting their rotation speeds.
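The angle θ described above, between the target's relative position and the positive X-axis of the global frame (Y due north, X east), can be computed as follows; degrees are an illustrative choice of unit:

```python
import math

def theta(dx, dy):
    """Angle between the target's position relative to the robot and the
    global +X axis. dx: offset along +X (east), dy: offset along +Y (north).
    Returned in degrees, counterclockwise positive."""
    return math.degrees(math.atan2(dy, dx))
```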
The robot motion is implemented in the following steps:
Step 1: the angle α between the robot's forward direction and due north, fed back by the electronic compass before the motion, and the angle Φ between the robot's target point and due north are calculated; the difference e(t) between α and Φ is computed, and if it is not 0 the robot rotates in place until the difference is 0, so as to determine the direction of advance.
Step 2: sampling is performed during the advance to judge whether the difference e(t) between α and Φ exceeds an error threshold; the rotation speeds of the wheels on both sides of the robot are then adjusted by PID, so that a deviating robot returns to the correct course, until the target point is reached.
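The two-step course control above, wrapping the compass error e(t) = α − Φ and trimming the wheel speeds by PID, might be sketched like this; the gains, base speed and sign convention are illustrative assumptions, not values from the patent:

```python
def heading_error(alpha, phi):
    """e(t) = alpha - phi, wrapped into (-180, 180] degrees."""
    e = (alpha - phi) % 360.0
    return e - 360.0 if e > 180.0 else e

class HeadingPID:
    """Minimal PID correction of the heading error for a differential drive."""
    def __init__(self, kp=2.0, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_e = None

    def wheel_speeds(self, e, base_speed=100.0, dt=0.1):
        """Return (left, right) wheel speeds that correct error e."""
        self.integral += e * dt
        de = 0.0 if self.prev_e is None else (e - self.prev_e) / dt
        self.prev_e = e
        u = self.kp * e + self.ki * self.integral + self.kd * de
        # Which wheel speeds up depends on the drive's sign convention;
        # here a positive error slows the left wheel and speeds the right.
        return base_speed - u, base_speed + u
```

Step 1 corresponds to rotating in place until `heading_error` is near zero; step 2 calls `wheel_speeds` on each sample during the advance.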
Fig. 5 is a structural diagram of a system for controlling robot motion based on gesture recognition provided by one embodiment of the invention. The system includes: a vision camera 1, an image processing unit 2, and a control computing unit 3.
The vision camera 1 is used to acquire the image or video of the user's gesture and can be a monocular, binocular or trinocular camera. The present invention can complete the acquisition of the user's gesture image or video using only a monocular camera, which reduces the hardware cost of the system.
The image processing unit 2 is used to process the image or video: when the user's gesture is a static gesture, the gesture feature image of the static gesture is obtained by image processing; when the user's gesture is a dynamic gesture, the gesture intention of the dynamic gesture is obtained by CamShift tracking.
The control computing unit 3 is used to match the gesture feature image or gesture intention against the predefined gestures and, after a successful match, to obtain the control instruction corresponding to the matched predefined gesture and transmit it to the controlled robot through a communication module. Specifically, the control computing unit 3 can be provided with a storage device; the gesture recognition result is obtained by executing program instructions stored in advance in the storage device (for example, an application program for gesture control of the robot), and the template library of predefined gestures is also stored in the storage device of the control computing unit 3.
The process of controlling robot motion through gestures is illustrated more clearly below with examples from one embodiment of the invention.
Referring to Fig. 6, when the user makes the static gesture "one finger", the system captures the user's gesture image through the vision camera 1, the image processing unit 2 preprocesses the gesture image, and the control computing unit 3 matches the processed gesture feature image against the predefined gestures. After a successful match, the recognition result "1" is obtained, together with the corresponding control instruction "move forward", which is transmitted through the communication module to the robot's controller; the controller then makes the robot move forward.
Referring to Fig. 7, when the user makes the static gesture "two fingers", the system captures the user's gesture image through the vision camera 1, the image processing unit 2 preprocesses the gesture image, and the control computing unit 3 matches the processed gesture feature image against the predefined gestures. After a successful match, the recognition result "2" is obtained, together with the corresponding control instruction "move right", which is transmitted through the communication module to the robot's controller; the controller then makes the robot walk to the right.
Referring to Fig. 8, the user makes the gesture "translate to the left"; the system captures the user's gesture through the vision camera 1, the CamShift tracking of the image processing unit 2 determines the user's gesture intention, and the control computing unit 3 matches it against the predefined gestures. After a successful match, the recognition result "Left!" is obtained, together with the corresponding control instruction "move left", which is transmitted through the communication module to the robot's controller; the controller then makes the robot walk to the left.
Referring to Fig. 9, the user makes the gesture "translate downward"; the system captures the user's gesture through the vision camera 1, the CamShift tracking of the image processing unit 2 determines the user's gesture intention, and the control computing unit 3 matches it against the predefined gestures. After a successful match, the recognition result "Back!" is obtained, together with the corresponding control instruction "move backward", which is transmitted through the communication module to the robot's controller; the controller then makes the robot move backward.
It should be appreciated that "an embodiment" or "a preferred embodiment" mentioned throughout the specification means that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present invention; therefore, "in one embodiment" or "in a preferred embodiment" appearing in various places throughout the specification does not necessarily refer to the same embodiment. Furthermore, these particular features, structures or characteristics can be combined in any suitable manner in one or more embodiments. Moreover, in the various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that the disclosed method and system can be realized in other ways. The apparatus embodiments described above are only schematic; for example, the image processing unit and the control computing unit can be combined into one functional unit, or the image processing unit can be divided into a static gesture processing unit and a dynamic gesture processing unit; the connections between these units can be direct connections through communication interfaces, or indirect couplings through wireless communication. The predefined gestures can also correspond to other control instructions; for example, when the user controls a television set by gestures, "one finger" can correspond to the instruction "power on/off", "two fingers" to "increase volume", "three fingers" to "decrease volume", "four fingers" to "previous channel", and "five fingers" to "next channel".
Moreover, the method and system of one embodiment of the invention are realized in the form of hardware plus software. Accordingly, in a preferred embodiment of the invention, a computer device (for example, a personal computer, tablet computer or smartphone) can be used, with an application program running on the smart device (the application program carrying a human-computer interaction interface) completing the functions of the vision camera, the image processing unit and the control computing unit. The vision camera can be the built-in camera of a notebook computer, tablet computer or smartphone, or an external camera of a desktop computer or server; the image processing unit and control computing unit can be a personal computer, server, tablet computer or smartphone, together with the application program running on such a device. In summary, the method and system of the present invention possess a high degree of integration and portability, and the present invention has broad application prospects.
"One embodiment of the invention" and "a preferred embodiment of the invention" described above are only preferred implementations of the present invention and are not intended to limit the invention. It should be noted that any modification, equivalent replacement or improvement made within the principles of the art shall be regarded as falling within the protection scope of the present invention.
Claims (10)
- 1. A method for controlling robot motion based on gesture recognition, characterized by comprising: Step 1: the user makes a gesture, and an image or video of the user's gesture is acquired; Step 2: the image is processed to obtain the gesture feature image of the user's static gesture, or the video is processed to obtain the gesture intention of the user's dynamic gesture; Step 3: gesture recognition is performed, matching the gesture feature image or gesture intention against predefined gestures; after a successful match, the control instruction corresponding to the matched predefined gesture is obtained and transmitted to the controlled robot.
- 2. The method according to claim 1, characterized by also comprising establishing a gesture template library, specifically by setting the predefined gestures, each predefined gesture corresponding to a control instruction; the gesture template library comprises a static gesture template library and a dynamic gesture template library.
- 3. The method according to claim 1, characterized in that the gestures comprise static gestures and dynamic gestures; a static gesture comprises a particular shape or posture made by temporarily motionless fingers, the palm, or the palm together with the arm; a dynamic gesture comprises a time-varying gesture formed by a series of continuous static gestures over a period of time.
- 4. The method according to claim 1, characterized in that the step of processing the image specifically comprises: separating the gesture region of the user's static gesture from the static gesture image by filtering, morphological processing, color-space conversion, and skin-color segmentation, and performing contour description on the gesture region to obtain a gesture contour image of the user's static gesture.
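The claim-4 chain can be sketched in pure Python on a toy image. This is a minimal stand-in for what a real pipeline would do with a vision library: the Cr thresholds, the bounding-box "contour", and the tiny test frame are all illustrative assumptions, not values from the patent.

```python
# Pure-Python sketch of the claim-4 processing chain on a toy RGB
# "image" (nested lists): colour-space conversion plus skin-colour
# thresholding separates the gesture region, a 3x3 morphological
# erosion filters speckle noise, and a bounding box stands in for
# the contour description. Thresholds and image are illustrative.

def skin_mask(img):
    """BT.601 RGB -> Cr conversion; skin tones fall roughly in
    Cr between 133 and 173."""
    mask = []
    for row in img:
        mask.append([])
        for r, g, b in row:
            cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
            mask[-1].append(133 <= cr <= 173)
    return mask

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    neighbourhood is skin, which removes isolated noise pixels."""
    h, w = len(mask), len(mask[0])
    return [[0 < i < h - 1 and 0 < j < w - 1 and
             all(mask[i + di][j + dj]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1))
             for j in range(w)] for i in range(h)]

def bounding_contour(mask):
    """Bounding box (top, left, bottom, right) of the gesture region,
    a crude stand-in for a full contour description."""
    pts = [(i, j) for i, row in enumerate(mask)
           for j, on in enumerate(row) if on]
    if not pts:
        return None
    ys, xs = zip(*pts)
    return min(ys), min(xs), max(ys), max(xs)

# Usage: a 5x5 frame with a 3x3 skin-coloured patch in the middle.
SKIN, BG = (200, 120, 100), (40, 40, 40)
img = [[SKIN if 1 <= i <= 3 and 1 <= j <= 3 else BG
        for j in range(5)] for i in range(5)]
```

In practice each stage would be one library call (e.g. color conversion, erosion, and contour finding in OpenCV); the sketch only makes the order of operations concrete.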
- 5. The method according to claim 1, characterized in that the step of processing the user gesture video comprises Camshift tracking to obtain the dynamic gesture intention: when the user's gesture appears within the field of view of the vision camera, the gesture is captured and located; the palm position is then captured again in the next frame, and the movement direction of the palm is judged from the positional difference, thereby obtaining the gesture intention of the dynamic gesture.
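The direction judgment in claim 5 reduces to differencing the palm position between consecutive frames. In the sketch below the Camshift tracker itself is replaced by precomputed palm centroids, since the claim only specifies that the movement direction is judged from the positional difference; the dead-zone threshold is an illustrative assumption.

```python
# Sketch of the claim-5 direction judgment: palm positions located in
# successive frames (supplied here directly, in place of an actual
# Camshift tracker) are differenced, and the dominant displacement
# axis gives the movement direction behind the gesture intention.

def movement_direction(prev_pos, next_pos, dead_zone=2.0):
    """Classify palm movement between two frames from the position
    difference (image coords: x grows right, y grows down)."""
    dx = next_pos[0] - prev_pos[0]
    dy = next_pos[1] - prev_pos[1]
    if max(abs(dx), abs(dy)) < dead_zone:
        return "still"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def gesture_intention(track):
    """Collapse a track of per-frame palm positions into the sequence
    of movement directions between consecutive frames."""
    return [movement_direction(a, b) for a, b in zip(track, track[1:])]

# Usage: a palm drifting steadily rightward reads as a right swipe.
track = [(10, 50), (18, 51), (27, 52)]
```

The dead zone suppresses jitter from a stationary hand so that only deliberate movement is classified as a direction.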
- 6. A system for controlling robot motion based on gesture recognition, characterized by comprising: a vision camera for acquiring an image or video of the gesture made by the user; an image processing unit for processing the image to obtain a gesture feature image of the user's static gesture, or for processing the video to obtain the gesture intention of the user's dynamic gesture; and a control computing unit for matching the gesture feature image or the gesture intention against predefined gestures and, upon a successful match, obtaining the control instruction corresponding to the matched predefined gesture and transmitting it to the controlled robot.
- 7. The system according to claim 6, characterized in that the image processing unit and the control computing unit are integrated into one control device.
- 8. The system according to claim 7, characterized in that the control device is a desktop computer or a server.
- 9. The system according to claim 6, characterized in that the vision camera, the image processing unit, and the control computing unit are integrated into one control device.
- 10. The system according to claim 9, characterized in that the control device is a notebook computer, a tablet computer, or a smartphone.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711009892.1A CN107765855A (en) | 2017-10-25 | 2017-10-25 | Method and system for controlling robot motion based on gesture recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711009892.1A CN107765855A (en) | 2017-10-25 | 2017-10-25 | Method and system for controlling robot motion based on gesture recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107765855A true CN107765855A (en) | 2018-03-06 |
Family
ID=61270716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711009892.1A Pending CN107765855A (en) | 2017-10-25 | 2017-10-25 | Method and system for controlling robot motion based on gesture recognition
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107765855A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103390168A (en) * | 2013-07-18 | 2013-11-13 | 重庆邮电大学 | Intelligent wheelchair dynamic gesture recognition method based on Kinect depth information |
CN103903011A (en) * | 2014-04-02 | 2014-07-02 | 重庆邮电大学 | Intelligent wheelchair gesture recognition control method based on image depth information |
CN106934333A (en) * | 2015-12-31 | 2017-07-07 | 芋头科技(杭州)有限公司 | A kind of gesture identification method and system |
CN105787471A (en) * | 2016-03-25 | 2016-07-20 | 南京邮电大学 | Gesture identification method applied to control of mobile service robot for elder and disabled |
CN106005086A (en) * | 2016-06-02 | 2016-10-12 | 北京航空航天大学 | Leg-wheel composite robot based on Xtion equipment and gesture control method thereof |
CN106681508A (en) * | 2016-12-29 | 2017-05-17 | 杭州电子科技大学 | System for remote robot control based on gestures and implementation method for same |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615055A (en) * | 2018-04-19 | 2018-10-02 | 咪咕动漫有限公司 | A kind of similarity calculating method, device and computer readable storage medium |
CN108615055B (en) * | 2018-04-19 | 2021-04-27 | 咪咕动漫有限公司 | Similarity calculation method and device and computer readable storage medium |
CN108762250A (en) * | 2018-04-27 | 2018-11-06 | 深圳市商汤科技有限公司 | The control method and device of equipment, equipment, computer program and storage medium |
CN108568820A (en) * | 2018-04-27 | 2018-09-25 | 深圳市商汤科技有限公司 | Robot control method and device, electronic equipment and storage medium |
CN109327760A (en) * | 2018-08-13 | 2019-02-12 | 北京中科睿芯科技有限公司 | A kind of intelligent sound and its control method for playing back |
CN109543652B (en) * | 2018-12-06 | 2020-04-17 | 北京奥康达体育产业股份有限公司 | Intelligent skiing trainer, training result display method thereof and cloud server |
CN109543652A (en) * | 2018-12-06 | 2019-03-29 | 北京奥康达体育产业股份有限公司 | A kind of wisdom ski training device and its training result display methods, Cloud Server |
CN109634415A (en) * | 2018-12-11 | 2019-04-16 | 哈尔滨拓博科技有限公司 | It is a kind of for controlling the gesture identification control method of analog quantity |
CN109634415B (en) * | 2018-12-11 | 2019-10-18 | 哈尔滨拓博科技有限公司 | It is a kind of for controlling the gesture identification control method of analog quantity |
CN109857778A (en) * | 2019-01-09 | 2019-06-07 | 公牛集团股份有限公司 | It wears the clothes proposal recommending method, system and device |
US20220083049A1 (en) * | 2019-01-22 | 2022-03-17 | Honda Motor Co., Ltd. | Accompanying mobile body |
CN109828576A (en) * | 2019-02-22 | 2019-05-31 | 北京京东尚科信息技术有限公司 | Gestural control method, device, equipment and medium for unmanned dispensing machine people |
CN110228065A (en) * | 2019-04-29 | 2019-09-13 | 北京云迹科技有限公司 | Motion planning and robot control method and device |
CN110347243A (en) * | 2019-05-30 | 2019-10-18 | 深圳乐行天下科技有限公司 | A kind of working method and robot of robot |
CN110465937A (en) * | 2019-06-27 | 2019-11-19 | 平安科技(深圳)有限公司 | Synchronous method, image processing method, man-machine interaction method and relevant device |
CN110434853B (en) * | 2019-08-05 | 2021-05-14 | 北京云迹科技有限公司 | Robot control method, device and storage medium |
CN110434853A (en) * | 2019-08-05 | 2019-11-12 | 北京云迹科技有限公司 | A kind of robot control method, device and storage medium |
CN111080537A (en) * | 2019-11-25 | 2020-04-28 | 厦门大学 | Intelligent control method, medium, equipment and system for underwater robot |
CN111080537B (en) * | 2019-11-25 | 2023-09-12 | 厦门大学 | Intelligent control method, medium, equipment and system for underwater robot |
CN113038149A (en) * | 2019-12-09 | 2021-06-25 | 上海幻电信息科技有限公司 | Live video interaction method and device and computer equipment |
US11778263B2 (en) | 2019-12-09 | 2023-10-03 | Shanghai Hode Information Technology Co., Ltd. | Live streaming video interaction method and apparatus, and computer device |
CN111158457A (en) * | 2019-12-31 | 2020-05-15 | 苏州莱孚斯特电子科技有限公司 | Vehicle-mounted HUD (head Up display) human-computer interaction system based on gesture recognition |
CN111290577A (en) * | 2020-01-22 | 2020-06-16 | 北京明略软件系统有限公司 | Non-contact input method and device |
CN111290577B (en) * | 2020-01-22 | 2024-03-22 | 北京明略软件系统有限公司 | Non-contact input method and device |
CN111489117A (en) * | 2020-03-11 | 2020-08-04 | 北京联合大学 | Article distribution method and system based on visual computing interaction |
CN113171472A (en) * | 2020-05-26 | 2021-07-27 | 中科王府(北京)科技有限公司 | Disinfection robot |
CN112053505A (en) * | 2020-08-21 | 2020-12-08 | 杭州小电科技股份有限公司 | Mobile power supply leasing method, device and system, electronic device and storage medium |
CN112053505B (en) * | 2020-08-21 | 2022-07-01 | 杭州小电科技股份有限公司 | Mobile power supply leasing method, device and system, electronic device and storage medium |
CN112224304A (en) * | 2020-10-28 | 2021-01-15 | 北京理工大学 | Wheel step composite mobile platform and gesture and voice control method thereof |
CN113183133A (en) * | 2021-04-28 | 2021-07-30 | 华南理工大学 | Gesture interaction method, system, device and medium for multi-degree-of-freedom robot |
CN113183133B (en) * | 2021-04-28 | 2024-02-09 | 华南理工大学 | Gesture interaction method, system, device and medium for multi-degree-of-freedom robot |
CN113510707A (en) * | 2021-07-23 | 2021-10-19 | 上海擎朗智能科技有限公司 | Robot control method and device, electronic equipment and storage medium |
CN114648811A (en) * | 2022-03-31 | 2022-06-21 | 华视伟业(深圳)科技有限公司 | Man-machine interaction method and system based on gesture recognition |
CN114648811B (en) * | 2022-03-31 | 2024-11-01 | 华视伟业(深圳)科技有限公司 | Man-machine interaction method and system based on gesture recognition |
WO2024212553A1 (en) * | 2023-04-12 | 2024-10-17 | 深圳先进技术研究院 | Robot remote control method and system |
CN117301059A (en) * | 2023-10-12 | 2023-12-29 | 河海大学 | Teleoperation system, teleoperation method and storage medium for mobile robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107765855A (en) | Method and system for controlling robot motion based on gesture recognition | |
CN104410883B (en) | Mobile wearable contactless interactive system and method | |
Garg et al. | Vision based hand gesture recognition | |
US20180186452A1 (en) | Unmanned Aerial Vehicle Interactive Apparatus and Method Based on Deep Learning Posture Estimation | |
CN108453742B (en) | Kinect-based robot man-machine interaction system and method | |
CN109800676B (en) | Gesture recognition method and system based on depth information | |
CN108983636B (en) | Man-machine intelligent symbiotic platform system | |
CN106598226A (en) | UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning | |
US20140267004A1 (en) | User Adjustable Gesture Space | |
CN105528082A (en) | Three-dimensional space and hand gesture recognition tracing interactive method, device and system | |
CN104407694A (en) | Man-machine interaction method and device combining human face and gesture control | |
CN107357428A (en) | Man-machine interaction method and device based on gesture identification, system | |
CN102830798A (en) | Mark-free hand tracking method of single-arm robot based on Kinect | |
CN108052901B (en) | Binocular-based gesture recognition intelligent unmanned aerial vehicle remote control method | |
CN109839827B (en) | Gesture recognition intelligent household control system based on full-space position information | |
CN110807391A (en) | Human body posture instruction identification method for human-unmanned aerial vehicle interaction based on vision | |
US20180260031A1 (en) | Method for controlling distribution of multiple sub-screens and device using the same | |
Raja et al. | Voice Assistant and Gesture Controlled Virtual Mouse using Deep Learning Technique | |
CN211293894U (en) | Hand-written interaction device in air | |
WO2021203368A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
CN116476074A (en) | Remote mechanical arm operation system based on mixed reality technology and man-machine interaction method | |
Kim et al. | A gesture based camera controlling method in the 3D virtual space | |
CN114296543B (en) | Intelligent interaction system for fingertip force detection and gesture recognition and intelligent finger ring | |
Hu et al. | Augmented pointing gesture estimation for human-robot interaction | |
Choondal et al. | Design and implementation of a natural user interface using hand gesture recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180306 |
|
RJ01 | Rejection of invention patent application after publication |