CN112256135B - Device control method and apparatus, device, and storage medium

Info

Publication number
CN112256135B
Authority
CN
China
Prior art keywords
gesture, recognition model, information, target, control instruction
Legal status: Active
Application number
CN202011195754.9A
Other languages
Chinese (zh)
Other versions
CN112256135A
Inventor
邵帅
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011195754.9A
Publication of CN112256135A
Application granted
Publication of CN112256135B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G08: SIGNALLING
    • G08C: TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 17/00: Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C 17/02: Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link


Abstract

The embodiment of the application discloses a device control method and apparatus, a device, and a storage medium, where the method includes: receiving a gesture detection instruction sent by a second device; obtaining first gesture information by detecting a first gesture; and sending the first gesture information to the second device, where the first gesture information is used to obtain a target gesture recognition model capable of recognizing the first gesture as a first control instruction.

Description

Device control method and apparatus, device, and storage medium
Technical Field
The present application relates to human-computer interaction technology, and in particular to a device control method and apparatus, a device, and a storage medium.
Background
Human interaction with a machine includes contact interaction and non-contact interaction. Contact interaction modes include keyboard input, mouse input, touch-screen input, force input, switch touches, and the like, all of which are based on contact with the electronic device. Non-contact interaction allows control of the electronic device without touching it, by means of visible-light imaging technology, infrared light sensors, sensors applying laser technology, and the like.
Gesture recognition is a form of non-contact interaction: the user only needs to perform a gesture action in a limited free space, and the electronic device can capture the gesture action and convert it into an instruction it can execute. Gesture information varies between individuals, for example through physiological differences in palm and finger size and habitual differences in the position and speed of gesture actions. Therefore, when a small electronic device such as a headset performs gesture recognition through a gesture recognition model, a model with fixed parameters cannot accommodate the individual differences of different users.
Disclosure of Invention
The embodiments of the application provide a device control method and apparatus, a device, and a storage medium, which can meet the gesture recognition requirements of different users with individual differences.
The technical scheme of the embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a device control method applied to a first device, the method comprising:
receiving a gesture detection instruction sent by a second device;
obtaining first gesture information by detecting a first gesture;
sending the first gesture information to the second device;
where the first gesture information is used to obtain a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
In a second aspect, an embodiment of the present application provides a device control method applied to a second device, the method comprising:
generating a gesture detection instruction and sending the gesture detection instruction to a first device;
receiving first gesture information sent by the first device;
obtaining a target gesture recognition model through the first gesture information, where the target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
In a third aspect, an embodiment of the present application provides a device control apparatus applied to a first device, the apparatus comprising:
a first receiving module, configured to receive a gesture detection instruction sent by a second device;
a detection module, configured to obtain first gesture information by detecting a first gesture action;
a first sending module, configured to send the first gesture information to the second device;
where the first gesture information is used to obtain a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
In a fourth aspect, an embodiment of the present application provides a device control apparatus applied to a second device, the apparatus comprising:
a second sending module, configured to generate a gesture detection instruction and send the gesture detection instruction to a first device;
a second receiving module, configured to receive first gesture information sent by the first device;
an obtaining module, configured to obtain a target gesture recognition model through the first gesture information, where the target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
In a fifth aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above device control method when executing the computer program.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described device control method.
The device control method provided by the embodiment of the application proceeds as follows: the second device generates a gesture detection instruction and sends it to the first device; the first device starts detecting the first gesture based on the received instruction, obtains first gesture information, and sends it to the second device; the second device obtains, through the first gesture information, a target gesture recognition model capable of recognizing the first gesture as a first control instruction. In this way, the gesture operation performed by the user is detected by the first device to obtain the first gesture information, and the second device obtains from it a target gesture recognition model that can recognize the current gesture, so that the gesture recognition model accommodates the individual differences of different users and the accuracy of gesture recognition is improved.
Drawings
FIG. 1 is a schematic diagram of an alternative architecture of an information handling system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative circuit configuration of an information handling system provided by an embodiment of the present application;
FIG. 3 is a schematic flow chart of an alternative method for controlling a device according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an alternative method for controlling a device according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an alternative method for controlling a device according to an embodiment of the present application;
FIG. 6 is an alternative interface schematic of a gesture guidance interface provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart of an alternative method for controlling a device according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of an alternative method for controlling a device according to an embodiment of the present application;
FIG. 9 is a schematic view of an alternative scenario of a device control method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an alternative configuration of a first device provided by an embodiment of the present application;
FIG. 11 is a schematic illustration of an alternative page of a gesture guidance interface provided by an embodiment of the present application;
FIG. 12 is a schematic flow chart of an alternative method of controlling a device according to an embodiment of the present application;
FIG. 13 is a schematic view showing an alternative construction of a device control apparatus according to an embodiment of the present application;
FIG. 14 is a schematic view showing an alternative construction of a device control apparatus according to an embodiment of the present application;
FIG. 15 is an alternative structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
The embodiments of the application may be provided as a device control method and apparatus, a device, and a storage medium. In practical applications, the device control method may be implemented by a device control apparatus, and each functional entity in the apparatus may be implemented cooperatively by the hardware resources of a computer device (such as an earphone, a wearable device, or a terminal device), for example computing resources such as a processor and communication resources (such as support for optical-cable and cellular communications).
Of course, the embodiments of the present application are not limited to being provided as a method and hardware; they may be implemented in various manners, for example as a storage medium storing instructions for executing the device control method provided by the embodiments of the present application.
The device control method provided by the embodiment of the application can be applied to the information processing system shown in fig. 1 or fig. 2. As shown in fig. 1 or fig. 2, the information processing system includes a first device 10 and a second device 20, where the first device 10 may be a small device with low computing power, such as a headset, a car key, or a wearable device, and the second device 20 may be an electronic device with a display screen and relatively high computing power, such as a mobile terminal, an AR device, or a notebook computer. In the embodiment of the application, the first device has no display screen and the second device has one.
The first device and the second device may be connected via Bluetooth, Wi-Fi, mobile data, or the like. The second device generates a gesture detection instruction and sends it to the first device. Triggered by the received instruction, the first device detects the user's first gesture, obtains first gesture information from the detected gesture, and sends it to the second device. From the received first gesture information, the second device obtains a target gesture recognition model capable of recognizing the first gesture as a first control instruction, so that the first device can execute the operation corresponding to the first control instruction whenever it subsequently detects the first gesture.
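To make the interaction concrete, the following is a minimal, self-contained Python sketch of this message flow; all class, field, and method names (GestureDetectInstruction, obtain_target_model, and so on) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class GestureDetectInstruction:
    unit_ids: tuple = ("first_unit",)  # earphone unit(s) whose sensor to activate

@dataclass
class FirstDevice:
    sensor_on: bool = False
    model: object = None

    def receive_instruction(self, instruction: GestureDetectInstruction) -> None:
        self.sensor_on = True  # enter the gesture detection state

    def detect_first_gesture(self) -> dict:
        # Stand-in for real sensor sampling (radar / laser / ultrasound).
        return {"gesture": "swipe_right", "samples": [0.1, 0.4, 0.9]}

class SecondDevice:
    def make_instruction(self) -> GestureDetectInstruction:
        return GestureDetectInstruction()

    def obtain_target_model(self, gesture_info: dict) -> dict:
        # Stand-in for updating the reference model's parameters with
        # the received first gesture information.
        return {"recognizes": gesture_info["gesture"], "maps_to": "volume_up"}

phone, earbud = SecondDevice(), FirstDevice()
earbud.receive_instruction(phone.make_instruction())
info = earbud.detect_first_gesture()
earbud.model = phone.obtain_target_model(info)  # target model pushed back
```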
In one example, a first device sends first gesture information to a second device, the second device obtains a target gesture recognition model using the first gesture information, and sends the target gesture recognition model to the first device.
In an example, as shown in fig. 2, the information processing system further includes: a third device 30, wherein the third device may be a server or a server cluster formed by a plurality of servers; the second device communicates with the third device via a network 40.
In the information processing system shown in fig. 2, a first device sends first gesture information to a second device, the second device sends the first gesture information to a third device, the third device obtains a target gesture recognition model by using the first gesture information, and sends the target gesture recognition model to the second device, and the second device forwards the received target gesture recognition model to the first device.
In combination with the information processing system, the embodiment provides a device control method, which can meet gesture recognition requirements of different users with individual differences.
Embodiments of a device control method, apparatus, device, and storage medium according to embodiments of the present application are described below with reference to a schematic diagram of an information processing system shown in fig. 1 or fig. 2.
This embodiment provides a device control method applied to a first device. fig. 3 is a schematic implementation flow diagram of the device control method according to an embodiment of the present application; as shown in fig. 3, the method may include the following steps:
S301, receiving a gesture detection instruction sent by the second device.
The gesture detection instruction is sent by the second device while it displays gesture guidance information. In an example, the gesture guidance information is a gesture guidance interface whose display content includes an image of the first gesture and a control identifier of the first control instruction.
A connection is established between the first device and the second device, and communication of data is enabled based on the established connection.
When the second device displays gesture guidance information on its display screen and generates a gesture detection instruction, it sends the instruction to the first device over the connection between them, and the first device receives it. The gesture guidance information may be one of: a gesture guidance interface, gesture guidance speech, and the like.
Upon receiving the gesture detection instruction, the first device enters a gesture detection state.
In the embodiment of the application, the first device is provided with a gesture detection sensor; when the gesture detection instruction is received, the first device turns on the gesture detection sensor based on the instruction, so that the user's gesture operation is detected through the sensor. The gesture detection sensor may be a radar, a laser detection module, an ultrasonic detection module, or any other sensor capable of detecting gestures.
In the embodiment of the present application, the first device may be an earphone unit; a headset includes two earphone units, referred to here as the first earphone unit and the second earphone unit. In one case, a gesture detection sensor is provided only in the first earphone unit; in another case, gesture detection sensors are provided in both units. The first earphone unit receives the gesture detection instruction and turns on the gesture detection sensor as instructed. If only the first earphone unit has a sensor, that sensor is turned on. If both units have sensors, the instruction may turn on only the sensor in the first earphone unit, only the sensor in the second earphone unit, or both.
In practical applications, when gesture detection sensors are provided in both earphone units, the gesture guidance information may include multiple pieces of guidance information: first guidance information for the first earphone unit and second guidance information for the second earphone unit.
When the gesture guidance information currently output by the second device is the first guidance information, the gesture detection sensor in the first earphone unit is turned on; when it is the second guidance information, the gesture detection sensor in the second earphone unit is turned on.
In practical applications, when the second device outputs multiple pieces of gesture guidance information, it sends only one gesture detection instruction to the first device; the instruction carries the identifier of the earphone unit whose gesture detection sensor is to be turned on, indicating whether to activate one or both of the first and second earphone units, as the sketch below illustrates.
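A minimal sketch of dispatching such an instruction, assuming the instruction is a dict whose hypothetical "unit_ids" field names the earphone unit(s) to activate:

```python
class GestureSensor:
    def __init__(self):
        self.on = False

    def turn_on(self):
        self.on = True  # sensor now detects gesture operations

def handle_detect_instruction(instruction: dict, sensors: dict) -> None:
    # Turn on the sensor of each unit named in the single instruction:
    # only the first unit, only the second unit, or both.
    for unit_id in instruction.get("unit_ids", ()):
        sensors[unit_id].turn_on()

sensors = {"first_unit": GestureSensor(), "second_unit": GestureSensor()}
handle_detect_instruction({"unit_ids": ["first_unit"]}, sensors)
```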
S302, acquiring first gesture information by detecting the first gesture.
When the first device detects a gesture operation performed by the user through the gesture detection sensor, it examines the action of the gesture operation and determines whether the detected action is the first gesture. If the detected gesture operation is not the first gesture, the detected gesture information is treated as invalid gesture information.
In the embodiment of the present application, the implementation of S302 includes: detecting the first gesture through one of the following detection forms: millimeter wave, laser, or ultrasound; and converting the first gesture into the first gesture information according to the detection form.
Different gesture detection sensors in the first device use different detection forms for the first gesture. When the gesture detection sensor is a radar, the detection form is millimeter waves; when it is a laser module, the detection form is laser; when it is an ultrasonic module, the detection form is ultrasound.
After the first device detects the first gesture through the gesture detection sensor, the data of the gesture detection sensor is converted into gesture information which can be recognized and processed by the first device, namely, first gesture information.
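A sketch of this conversion step, assuming the raw sensor readings arrive as a list of floats; the per-form transforms are placeholders for the real sensor-specific signal processing:

```python
def to_gesture_info(raw_samples: list, detection_form: str) -> dict:
    if detection_form == "millimeter_wave":   # radar sensor
        features = [abs(s) for s in raw_samples]
    elif detection_form == "laser":           # laser detection module
        features = [s * 0.5 for s in raw_samples]
    elif detection_form == "ultrasonic":      # ultrasonic detection module
        features = [s * s for s in raw_samples]
    else:
        raise ValueError(f"unknown detection form: {detection_form}")
    # First gesture information the first device can recognize and process.
    return {"form": detection_form, "features": features}

info = to_gesture_info([-0.2, 0.7, 1.1], "millimeter_wave")
```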
In the embodiment of the present application, a specific algorithm for determining whether the detected gesture operation is the first gesture operation is not limited.
S303, the first gesture information is sent to the second device.
The first gesture information is used for obtaining a target gesture recognition model, and the target gesture recognition model can recognize the first gesture as a first control instruction.
After obtaining the first gesture information, the first device sends it to the second device, so that the second device obtains a target gesture recognition model through the first gesture information.
The first device sends the first gesture information to the second device over their connection, and the second device either updates the parameters of a reference gesture recognition model with the first gesture information or directly forwards the information to the third device. Having obtained the target gesture recognition model, either by updating the reference model's parameters itself or by receiving the model from the third device, the second device then pushes the target gesture recognition model to the first device.
In the embodiment of the present application, the reference gesture recognition model may be a model that cannot recognize the first gesture at all, or a model that can recognize the first gesture as the first control instruction but only with a probability lower than a set probability threshold.
In the embodiment of the present application, as shown in fig. 4, after S303, the following steps are further implemented:
S304, receiving the target gesture recognition model sent by the second device.
In the embodiment of the application, the first device may receive the complete target gesture recognition model sent by the second device or, if the first device already holds the reference gesture recognition model, receive only model update parameters, i.e., the parameters of the target gesture recognition model that changed relative to the reference gesture recognition model.
In the embodiment of the application, the target gesture recognition model can recognize at least one gesture action. In one example, one gesture recognition model recognizes one gesture action, and different gesture models recognize different gesture actions; for instance, gesture recognition model A recognizes gesture action 1 as control instruction 1, gesture recognition model B recognizes gesture action 2 as control instruction 2, and gesture recognition model C recognizes gesture action 3 as control instruction 3. In another example, one gesture recognition model recognizes a plurality of gesture actions; for instance, a single model recognizes gesture action 1 as control instruction 1, gesture action 2 as control instruction 2, and gesture action 3 as control instruction 3.
In the case where one gesture recognition model recognizes one gesture action, after the first device receives the target gesture recognition model, another gesture action may be detected to obtain a further target gesture recognition model capable of recognizing it. For example: the first device detects gesture action 1 and, once target gesture recognition model A has been obtained from the gesture information of gesture action 1, detects gesture action 2 to obtain target gesture recognition model B.
In the case where one gesture recognition model recognizes a plurality of gesture actions, after gesture information of one gesture action is detected, the first device continues detecting the other gesture actions, so that a target gesture recognition model capable of recognizing all of them is obtained. For example: the first device detects gesture action 1 and gesture action 2, and obtains the target gesture recognition model based on the gesture information of both. The two layouts are contrasted in the sketch below.
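A minimal sketch contrasting the two layouts; gesture and instruction names are assumptions for illustration:

```python
# Layout A: one model per gesture action.
models = {
    "gesture_1": lambda info: "control_instruction_1",
    "gesture_2": lambda info: "control_instruction_2",
}

# Layout B: one model recognizing several gesture actions.
def multi_gesture_model(info: dict):
    mapping = {"gesture_1": "control_instruction_1",
               "gesture_2": "control_instruction_2",
               "gesture_3": "control_instruction_3"}
    return mapping.get(info["gesture"])  # None if the gesture is unknown
```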
The device control method provided by this embodiment thus comprises: receiving a gesture detection instruction sent by the second device; obtaining first gesture information by detecting the first gesture; and sending the first gesture information to the second device, where the first gesture information is used to obtain a target gesture recognition model capable of recognizing the first gesture as a first control instruction. In this way, the first device starts detecting the user's gesture operation under the control of the second device, obtains first gesture information, and sends it to the second device, so that the gesture information of operations actually performed by the user serves as learning data for the target gesture recognition model. The model can therefore accommodate the individual differences of different users, and the accuracy of gesture recognition is improved.
In some embodiments, the first gesture information is used to obtain the target gesture recognition model if the first gesture information satisfies a correction condition.
Here, when acquiring the first gesture information, the first device determines whether it satisfies a correction condition and, if so, sends the first gesture information to the second device so that the target gesture recognition model can be obtained from it.
When the first device determines that the first gesture information satisfies the correction condition, it sends the information to the second device; when the correction condition is not satisfied, the first device continues detecting the first gesture.
In the embodiment of the application, when the first gesture information satisfies the correction condition, the first device sends the first gesture information together with a first-gesture detection completion instruction to the second device, indicating that the correction condition is currently satisfied, so that the second device can output a gesture guidance interface whose display content is another gesture action.
In an embodiment of the present application, the correction condition includes at least one of the following conditions:
Condition one: the number of executions of the first gesture corresponding to the first gesture information is greater than a count threshold;
Condition two: the target gesture recognition model does not exist in the first device.
The correction condition may include condition one, condition two, or a combination of both.
The count threshold in condition one can be set according to actual requirements, for example 3 or 5.
For condition two, the first device containing no target gesture recognition model means that it has no model capable of accurately recognizing the current user's first gesture as the first control instruction: the reference gesture recognition model in the first device either cannot recognize the first gesture at all or cannot accurately recognize the current user's first gesture as the first control instruction. In the embodiment of the application, the latter includes the following cases:
Case one: the reference gesture recognition model can recognize other users' first gesture as the first control instruction but cannot recognize the current user's first gesture as the first control instruction;
Case two: the reference gesture recognition model can recognize the current user's first gesture as the first control instruction, but the probability of doing so is smaller than a probability threshold.
In the embodiment of the application, the reference gesture recognition model may also mistakenly recognize the current user's first gesture as a second control instruction, different from the first; for example, the first control instruction is turning up the volume while the second control instruction is powering on.
In the embodiment of the application, when the correction condition includes condition one, the first device sends the second device the first gesture information obtained from the user performing the same gesture action multiple times, and the second device learns the target gesture recognition model from these repeated inputs. This increases the amount of learning data, reduces recognition errors caused by slight variations in the user's specific gestures, promotes optimization of the gesture recognition model, and improves the accuracy of gesture recognition.
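A sketch of the correction-condition check; the threshold value and the way the two conditions are combined are assumptions (the text allows condition one, condition two, or a combination of both):

```python
COUNT_THRESHOLD = 3  # e.g., 3 or 5, set according to actual requirements

def satisfies_correction_condition(execution_count: int,
                                   has_target_model: bool) -> bool:
    condition_one = execution_count > COUNT_THRESHOLD  # enough repetitions
    condition_two = not has_target_model               # no usable model yet
    return condition_one or condition_two
```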
In some embodiments, after S304, the method further includes: obtaining second gesture information by detecting the first gesture; and taking the second gesture information as the input of the target gesture recognition model to obtain the first control instruction output by the model, so as to execute the operation corresponding to the first control instruction.
When the first device, having obtained the target gesture recognition model, detects the first gesture again, it obtains second gesture information and inputs it into the target gesture recognition model; the model recognizes the first gesture, and its output is the first control instruction.
After obtaining the first control instruction output by the target gesture recognition model, the first device executes the corresponding operation, achieving accurate control of the first device by non-contact gesture operations.
In an example, when the first device is an earphone: if the first control instruction is a shutdown instruction, the first device is controlled to shut down; if the first control instruction is a volume adjustment instruction, the volume adjustment instruction is sent to the target device whose volume is to be adjusted.
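A sketch of this recognition-and-execution step; the model interface and instruction names are illustrative assumptions:

```python
class Earphone:
    def power_off(self):
        print("powering off")

    def send_to_audio_source(self, instruction: str):
        print(f"forwarding {instruction} to the device whose volume is adjusted")

def on_gesture(second_gesture_info: dict, target_model, earphone: Earphone):
    # Second gesture information is the model input; the model output
    # is the first control instruction.
    instruction = target_model(second_gesture_info)
    if instruction == "power_off":
        earphone.power_off()
    elif instruction == "volume_up":
        earphone.send_to_audio_source("volume_up")

on_gesture({"gesture": "swipe_up"}, lambda info: "volume_up", Earphone())
```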
This embodiment provides a device control method applied to a second device. fig. 5 is a schematic implementation flow diagram of the device control method according to an embodiment of the present application; as shown in fig. 5, the method may include the following steps:
S501, generating a gesture detection instruction and sending the gesture detection instruction to the first device.
The second device generates a gesture detection instruction and sends it to the first device, instructing the first device to turn on the gesture detection sensor and detect the user's first gesture. The instruction may carry an identifier of the target device unit, instructing the first device to turn on the gesture detection sensor in that unit. In one example, the instruction carries the identifier of the first device unit, indicating that the sensor in the first device unit is to be turned on. In another example, the instruction carries the identifiers of both the first and second device units, indicating that the sensors in both units are to be turned on.
The first device obtains first gesture information by detecting a first gesture action in a state that the gesture detection sensor is turned on, and sends the first gesture information to the second device.
S502, receiving first gesture information sent by the first device.
The second device receives the first gesture information sent by the first device, so that the parameters of a reference gesture recognition model can be updated with the first gesture information to obtain a target gesture recognition model capable of recognizing the first gesture as the first control instruction.
S503, obtaining a target gesture recognition model through the first gesture information.
The target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
In the embodiment of the present application, the second device obtains the target gesture recognition model from the first gesture information in one of the following ways:
Mode one: the second device updates the parameters of the reference gesture recognition model with the first gesture information to obtain the target gesture recognition model.
Mode two: the second device sends the received first gesture information to the third device, the third device obtains the target gesture recognition model from the first gesture information, and the second device receives the target gesture recognition model sent by the third device.
Taking mode one as an example, the implementation of S503 includes the following step:
And updating parameters of a reference gesture recognition model through the first gesture information and the first control instruction to obtain the target gesture recognition model.
Taking the second mode as an example of the mode of obtaining the target gesture recognition model, the implementation of S503 includes the following steps:
Transmitting the first gesture information and the first control instruction to a third device; and receiving the target gesture recognition model sent by the third device.
Here, the second device either updates the parameters of the reference gesture recognition model itself with the first gesture information or directly forwards the information to the third device. Having obtained the target gesture recognition model through its own learning or received it from the third device, the second device then pushes the model to the first device, as the sketch below illustrates.
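A sketch of the two obtaining modes on the second device; update_locally and the third device's interface are stand-ins for the behavior described above:

```python
def update_locally(reference_model: dict, gesture_info: dict,
                   control_instruction: str) -> dict:
    target = dict(reference_model)
    # Toy "parameter update": associate the user's gesture features
    # with the control instruction to be recognized.
    target[control_instruction] = gesture_info["features"]
    return target

def obtain_target_model(gesture_info: dict, control_instruction: str,
                        reference_model: dict, third_device=None) -> dict:
    if third_device is None:
        # Mode one: update the reference model on the second device.
        return update_locally(reference_model, gesture_info, control_instruction)
    # Mode two: forward the data to the third device and receive the result.
    third_device.send(gesture_info, control_instruction)
    return third_device.receive_target_model()
```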
In the embodiment of the application, the second device sends the generated gesture detection instruction to the first device to instruct it to start detecting the current user's gesture action; the second device then receives the first gesture information sent by the first device and obtains through it a target gesture recognition model capable of recognizing the first gesture as the first control instruction. In this way, gesture information obtained by the first device from operations actually performed by the user serves as learning data for the target gesture recognition model, so the model accommodates the individual differences of different users and the accuracy of gesture recognition is improved.
In some embodiments, the first gesture information satisfies a correction condition.
In an example, the first device determines whether the first gesture information satisfies a correction condition before transmitting the first gesture information, and transmits the first gesture information to the second device in a case where the first gesture information satisfies the correction condition.
In the embodiment of the application, when the first device determines that the first gesture information meets the correction condition, a first gesture detection completion instruction can be sent to the second device, and the second device determines that the current first gesture information meets the correction condition based on the first gesture detection completion instruction.
In an example, the second device determines, after receiving the first gesture information, whether the first gesture information satisfies a correction condition; correspondingly, under the condition that the first gesture information meets the correction condition, a target gesture recognition model is obtained through the first gesture information.
In practical applications, either both the first device and the second device or just one of them may check whether the first gesture information satisfies the correction condition.
In an embodiment of the present application, the correction condition includes at least one of the following conditions:
Condition one: the number of executions of the first gesture corresponding to the first gesture information is greater than a count threshold;
Condition two: the target gesture recognition model does not exist in the first device.
The correction condition may include condition one, condition two, or a combination of both.
In the embodiment of the application, when the correction condition includes condition one, the user is guided through the gesture guidance interface to input the same gesture action multiple times, and the target gesture recognition model is learned from the repeated inputs. This increases the amount of learning data, reduces recognition errors caused by slight variations in the user's specific gestures, promotes optimization of the gesture recognition model, and improves the accuracy of gesture recognition.
In the embodiment of the present application, after S503, the method further includes: the target gesture recognition model is sent to the first device.
The second device may send the complete target gesture recognition model to the first device or, if the first device already holds the reference gesture recognition model, send only model update parameters, i.e., the parameters of the target gesture recognition model that changed relative to the reference gesture recognition model.
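A sketch of choosing between the two payloads; representing model parameters as a flat dict is an assumption for illustration:

```python
def make_update_payload(target_params: dict, reference_params: dict = None) -> dict:
    if reference_params is None:
        # First device holds no reference model: send the complete model.
        return {"type": "full_model", "params": target_params}
    # Otherwise send only the parameters that changed.
    delta = {name: value for name, value in target_params.items()
             if reference_params.get(name) != value}
    return {"type": "model_update", "params": delta}

payload = make_update_payload({"w1": 0.3, "w2": 0.8}, {"w1": 0.3, "w2": 0.5})
# -> {"type": "model_update", "params": {"w2": 0.8}}
```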
In some embodiments, prior to S501, further comprising: a gesture guidance interface is shown.
The display content of the gesture guidance interface comprises: an image of the first gesture and a control identification of the first control instruction.
The second device has a display screen through which it can output the gesture guidance interface. The display screen may be a Light-Emitting Diode (LED) display screen, a Liquid Crystal Display (LCD), or the like; the embodiment of the present application does not limit the display type.
An application or browser capable of presenting a gesture guidance interface may be installed in the second device, and the gesture guidance interface may be displayed based on the installed application or browser.
The gesture guidance interface may be a dynamic page or a static page. When the gesture guiding interface is a dynamic page, the image of the first gesture action is a dynamic image; when the gesture guiding interface is a static page, the image of the first gesture action is a static image.
In an example, as shown in fig. 6, the display content of the gesture guidance interface 601 includes: a hand 602 and a movement track 603 of the hand, where the movement track 603 indicates a gesture action. The gesture guidance interface 601 also includes a first device identifier 604 indicating the spatial relationship between the gesture action of the hand 602 and the first device. The control identifier of the first control instruction is not shown in the gesture guidance interface 601; in practical applications, it may be presented as text, an icon, or the like.
In the case where the first device includes a plurality of device units, the gesture guidance interface where the display content includes a first gesture action may include a page for each device unit. In an example, the first device is a headset, and the headset comprises two headset units: the first earphone unit and the second earphone unit, the gesture guiding page includes: a first page for a first earpiece unit and a second page for a second earpiece unit. At this time, the second device prompts the user to perform gesture operation on the corresponding earphone unit through the prompt content in the gesture guiding interface.
In the embodiment of the present application, the display content of the gesture guidance interface further includes an input count and a valid-input count, where the input count is the number of times the user needs to perform the first gesture and the valid-input count is the number of valid gesture actions the user has already performed. Each time the first device detects a valid first gesture, it sends a notification instruction to the second device, instructing it to update the valid-input count in the gesture guidance interface.
When the valid-input count reaches the input count, the second device determines that guidance for the first gesture action is complete and may switch the displayed page from the gesture guidance interface of the first gesture action to another page. If the second device needs to continue outputting a gesture guidance interface, it outputs one whose display content is an image of a second gesture, different from the first gesture; a sketch of this loop follows.
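A minimal sketch of advancing the guidance interface; the field names are assumptions:

```python
def on_valid_gesture_notification(ui_state: dict) -> None:
    ui_state["valid_inputs"] += 1  # one more valid first gesture detected
    if ui_state["valid_inputs"] >= ui_state["required_inputs"]:
        # Guidance for the first gesture is complete: switch to the page
        # showing the image of a different, second gesture.
        ui_state["current_gesture"] = ui_state["next_gesture"]
        ui_state["valid_inputs"] = 0

state = {"current_gesture": "first", "next_gesture": "second",
         "valid_inputs": 1, "required_inputs": 2}
on_valid_gesture_notification(state)  # completes the first gesture's guidance
```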
In some embodiments, the second device further performs the following steps: displaying a first control instruction receiving interface; and determining the first control instruction based on a selection operation or an input operation on that interface.
In the embodiment of the application, the first control instruction may be set by the second device itself or set based on the user's input or selection operation. In the latter case, the second device outputs a first control instruction receiving interface through the display screen, and the interface may include at least one candidate control instruction.
When the at least one candidate control instruction includes the first control instruction the user expects to be recognized, the user can perform a selection operation on the interface, and the second device takes the selected candidate control instruction as the first control instruction. When it does not, the user can perform an input operation on the interface, and the second device takes the control instruction entered by the user as the first control instruction.
The embodiment of the application further provides a device control method applied to an information processing system comprising a first device and a second device. fig. 7 is a schematic implementation flow diagram of the device control method according to an embodiment of the present application; as shown in fig. 7, the method may include the following steps:
S701, the second device generates a gesture detection instruction and sends the gesture detection instruction to the first device.
S702, the first device receives a gesture detection instruction sent by the second device.
S703, the first device acquires first gesture information by detecting the first gesture.
S704, the first device sends the first gesture information to the second device.
And S705, the second equipment obtains a target gesture recognition model through the first gesture information.
The target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
The ways of obtaining the target gesture recognition model in S705 include:
Mode one: the second device updates the parameters of a reference gesture recognition model with the first gesture information to obtain the target gesture recognition model.
Mode two: the second device forwards the first gesture information received from the first device to the third device, the third device obtains the target gesture recognition model from the first gesture information, and the second device receives the target gesture recognition model sent by the third device.
As shown in fig. 8, before S701, the method further includes: s706, the second device displays a gesture guiding interface.
The display content of the gesture guidance interface comprises: an image of the first gesture and a control identification of the first control instruction.
In the device control method shown in fig. 7, the implementation of the first device may refer to the description in the device control method shown in fig. 3, and the implementation of the second device may refer to the description in the device control method shown in fig. 5, which is not repeated here.
The device control method provided by this embodiment proceeds as follows: the second device generates a gesture detection instruction and sends it to the first device; the first device detects the first gesture based on the received instruction, obtains first gesture information, and sends it to the second device; and the second device obtains, through the first gesture information, a target gesture recognition model capable of recognizing the first gesture as a first control instruction. In this way, the first device detects the gesture operations actually performed by the user, and the second device uses the resulting gesture information as learning data to obtain a target gesture recognition model that recognizes the current user's first gesture as the first control instruction, so the model accommodates the individual differences of different users and the accuracy of gesture recognition is improved.
The device control method provided by the embodiment of the application is further described below by taking the first device as a wireless earphone as an example.
The device control method provided by the embodiment of the application can be used in the scenario shown in fig. 9: the user performs a gesture 902 in the space around the worn wireless earphone 901, with the gesture following track 903 or track 904; the wireless earphone 901 recognizes the user's gesture as a corresponding instruction and executes the corresponding operation based on the recognized instruction. Operations of the wireless earphone controlled by gestures may include: powering on and off, volume control, and playback-order adjustment.
In practical applications, the volume is controlled by gestures of moving away from or toward the earphone, the playback order is controlled by gestures of swiping forward and backward, and the earphone is powered on or off by a special gesture (such as a finger-snap gesture).
In some embodiments, the variety of wireless headphones includes in-ear headphones as shown in fig. 9, as well as earmuff headphones.
In some embodiments, for a True Wireless Stereo (TWS) headset, the spatial gesture recognition device may be provided on one earphone unit or on both earphone units.
As shown in fig. 10, the hardware structure of the earphone unit of the wireless earphone includes: an antenna 1001, a processor 1002, a memory unit 1003, a wireless signal transmitter/receiver 1004, a gesture detection sensor 1005, a speaker and microphone 1006, and a power supply control device 1007. Wherein,
The number of antennas 1001 may be one or more. A single antenna or multiple antennas may serve a particular communication module (e.g., Bluetooth). Multiple communication modules may share one antenna, each use its own antenna, or mix shared and dedicated antennas.
The processor 1002 may be a microcontroller unit (MCU) or another processor chip containing computing functions. The processor chip may integrate a machine learning accelerator for accelerating machine-learning model computation; the accelerator may be an Application-Specific Integrated Circuit (ASIC) such as a Neural-network Processing Unit (NPU).
The memory unit 1003 may be one or a combination of Random Access Memory (RAM), Read-Only Memory (ROM), and Electrically Erasable Programmable Read-Only Memory (EEPROM).
The storage unit 1003 stores audio decoding software and a machine learning model for gesture recognition and instruction conversion, i.e., the gesture control model. The gesture control model is a machine-learning inference model used to convert sampled gesture electronic signals into computer-executable commands. The storage unit 1003 may also store software for wireless signal transmission and reception, earphone control software, and other software programs implementing earphone functions.
The wireless signal transmitter/receiver 1004 may be a communication module based on the Bluetooth standard or a communication module using backscattering technology. In practical applications, the wireless signal transmitter/receiver 1004 may also integrate special function modules in addition to the basic communication module, such as an Ultra-Wideband (UWB) module for positioning electronic devices, a Near-Field Communication (NFC) module for identifying electronic devices, or a Wireless Fidelity (Wi-Fi) module for bulk data exchange.
The gesture detection sensor 1005 is configured to convert a gesture signal into a gesture electronic signal. It may be an electromagnetic-wave-based sensor module, such as a 10-100 GHz radar system, or a module based on the ultrasonic principle.
In some implementations, the gesture detection sensor may employ a module based on laser technology.
In the device control method provided by the embodiment of the application, gesture input is performed in a device guiding mode.
As shown in fig. 11, with the wireless earphone connected to the mobile phone, the phone presents gesture input guidance information to the user through an embedded application (App) to guide the user's gesture input. The gesture guidance interface in fig. 11 includes pages 1101 to 1105, each for a different gesture action. The gesture of page 1101 is: the right hand swipes to the right, input twice; page 1102: the right hand swipes forward, input twice; page 1103: the right hand approaches the headset, input twice; page 1104: the right hand moves away from the headset, input twice; page 1105: a finger-snap action.
In the embodiment of the application, the gesture input guide information displayed on the gesture guide interface can be static or dynamic pictures.
When a plurality of gesture actions are to be input, the mobile phone may guide the user to input them successively in a specific order. For example, when the input of the gesture in page 1101 is completed and the input gesture signal meets the requirement, the phone proceeds to page 1102 to input the next gesture.
To increase the amount of sampled data, i.e., learning data, reduce errors caused by slight variations in the user's specific gestures, and promote optimization of the machine learning model, the user may be required to input the same gesture action multiple times. As with the gesture input on guidance page 1101 in fig. 11, the App may guide the user to input the same gesture more than once.
The memory of the wireless earphone contains a gesture recognition model for gesture recognition; it is an inference model, i.e., a non-training model. To personalize and upgrade the gesture recognition model for a specific user, the gesture input process carried out by the mobile phone 121 and the wireless headset 122 is shown in fig. 12.
S1201, the mobile phone 121 transmits a gesture recognition detection instruction to the wireless headset 122.
The mobile phone 121 is wirelessly connected to the wireless headset 122. The phone enters the guidance interface and sends a gesture recognition detection instruction to the wireless headset 122, notifying it to enter the gesture recognition state; the wireless headset 122 then turns on the gesture detection sensor.
S1202, the wireless headset 122 detects a gesture operation of the user.
The user performs a gesture operation based on the guidance of the guidance interface of the mobile phone 121, and the gesture detection sensor in the wireless headset 122 collects the gesture signal and converts it into an electrical signal, i.e., gesture information.
S1203, the wireless headset 122 transmits gesture information of the gesture motion to the mobile phone 121.
The wireless headset 122 transmits the gesture information of the detected user gesture action to the mobile phone 121. The wireless headset 122 itself does not perform any data analysis on the gesture information.
S1204, the mobile phone 121 updates the original gesture recognition model through the gesture information.
The mobile phone 121 collects the electrical signal, checks the electrical signal against the result that the gesture recognition model associates with the gesture signal, and updates the original machine learning model with the gesture signal to obtain an updated gesture recognition model, thereby increasing the probability that a correct result is produced for this user's gesture signals.
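As a minimal sketch of step S1204, assuming a scikit-learn-style classifier whose `partial_fit` method adjusts the existing parameters rather than retraining from scratch (the feature layout and label mapping are assumptions, not the embodiment's actual model):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical label set: each class identifies one control instruction.
CLASSES = np.array([0, 1, 2, 3, 4])


def update_model(model: SGDClassifier,
                 gesture_signals: np.ndarray,
                 instruction_labels: np.ndarray) -> SGDClassifier:
    """Incrementally update the original model with one user's gesture samples,
    raising the probability of a correct result for that user's signals."""
    model.partial_fit(gesture_signals, instruction_labels, classes=CLASSES)
    return model


# Usage: signals collected in S1202/S1203, labeled with the instruction that
# the guidance page associates with the gesture being input.
model = SGDClassifier(loss="log_loss")
signals = np.random.rand(2, 64)  # two repetitions of one gesture
labels = np.array([0, 0])        # both map to the same control instruction
model = update_model(model, signals, labels)
```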
S1205, the mobile phone 121 sends the updated gesture recognition model to the wireless headset 122.
The mobile phone 121 transmits the updated gesture recognition model to the wireless earphone 122; the wireless earphone 122 replaces the gesture recognition model in its memory with the updated gesture recognition model, and subsequently recognizes gesture signals according to the updated gesture recognition model.
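The S1201-S1205 exchange between the two devices could be sketched as the following message loop; the message names, the `link`/`sensor`/`storage` objects, and pickle serialization are assumptions standing in for the actual wireless protocol, and `update_model` refers to the sketch above:

```python
import pickle  # stand-in serialization; a real link would use a defined profile


def phone_side(link, reference_model):
    """Hypothetical phone-side flow of fig. 12."""
    link.send({"type": "GESTURE_DETECT"})                     # S1201
    signals, labels = link.recv()["payload"]                  # S1203
    updated = update_model(reference_model, signals, labels)  # S1204
    link.send({"type": "MODEL_UPDATE",
               "payload": pickle.dumps(updated)})             # S1205


def headset_side(link, sensor, storage):
    """Hypothetical headset-side flow: detect and forward, then swap models."""
    if link.recv()["type"] == "GESTURE_DETECT":
        info = sensor.detect()                                # S1202: detection only,
        link.send({"type": "GESTURE_INFO", "payload": info})  # no local analysis
    msg = link.recv()
    if msg["type"] == "MODEL_UPDATE":
        storage["model"] = pickle.loads(msg["payload"])       # replace stored model
```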
In some embodiments, in S1204 of fig. 12, after receiving the gesture information, the mobile phone 121 may send the gesture information to a cloud server with stronger computing power; the cloud server updates the gesture recognition model using this computing power to obtain an updated gesture recognition model, and then transmits the updated gesture recognition model back to the mobile phone 121.
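Offloading step S1204 to a cloud server might look like the following sketch; the endpoint URL and JSON schema are hypothetical, not an actual service of the application:

```python
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/gesture/update"  # hypothetical endpoint


def update_via_cloud(gesture_info: list, instruction: int) -> bytes:
    """Send the user's gesture samples to a (hypothetical) cloud service and
    receive the serialized updated gesture recognition model in the response."""
    body = json.dumps({"gesture_info": gesture_info,
                       "instruction": instruction}).encode()
    req = urllib.request.Request(CLOUD_ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # blocking call for simplicity
        return resp.read()                     # forwarded on to the earphone
```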
In some embodiments, the user may define, according to his or her habits, the operation represented by a gesture signal. For example, the finger-snap action may represent turning the active noise reduction function on and off. In this case, the user may select a gesture from among the recommended gestures; the recommended gestures are used because mature gesture recognition models already exist for them. After the user selects a preferred gesture, the gesture recognition model in the earphone is updated according to the device control method of fig. 12.
In practical applications, the gesture recognition model in the earphone is an inference model. The inference model is the final product of the machine learning model, i.e. it can convert gesture information into an executable control instruction; however, the earphone cannot train the machine learning model, so the original machine learning model cannot be functionally improved on the earphone itself.
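On the earphone side, the inference model therefore only maps gesture information to a control instruction, as in this hedged sketch (the instruction table and the `predict` interface are assumptions):

```python
# Hypothetical mapping from model output classes to earphone operations.
INSTRUCTIONS = {0: "play_pause", 1: "next_track", 2: "volume_up",
                3: "volume_down", 4: "toggle_active_noise_reduction"}


def recognize(model, gesture_info):
    """Inference only: the earphone never trains the model, it just predicts."""
    label = int(model.predict([gesture_info])[0])
    return INSTRUCTIONS.get(label)  # None if the class has no mapped operation
```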
The device control method provided by the embodiment of the application can optimize a machine learning model in a small-sized device: gesture information of an individual user is collected, checked against the original machine learning model, and used to optimize the machine learning model, so that a machine learning model built for, and best suited to, the individual user is obtained, improving the user experience. Further, the non-contact earphone control completes earphone control without interfering with the user's use of the earphone, greatly improving the earphone experience; this is particularly suitable for controlling the earphone during sports (such as running). In addition, the computing power of the intelligent terminal or the cloud server is used to improve the accuracy of the machine learning model and create a personalized machine learning model for the user, while the complexity of the earphone's hardware structure is reduced.
Fig. 13 is a schematic structural diagram of a device control apparatus according to an embodiment of the present application, applied to a first device. As shown in fig. 13, the apparatus 1300 includes:
the first receiving module 1301 is configured to receive a gesture detection instruction sent by the second device;
The detection module 1302 is configured to obtain first gesture information by detecting a first gesture;
A first sending module 1303, configured to send the first gesture information to the second device; the first gesture information is used for obtaining a target gesture recognition model, and the target gesture recognition model can recognize the first gesture as a first control instruction.
In some embodiments, the first receiving module 1301 is configured to receive the target gesture recognition model sent by the second device.
In some embodiments, the first gesture information is used to obtain the target gesture recognition model if the first gesture information satisfies a correction condition.
In some embodiments, the correction condition includes at least one of the following conditions, as illustrated by the sketch after this list:
the number of times the first gesture corresponding to the first gesture information has been executed is greater than a threshold number of times;
the target gesture recognition model is not present in the first device.
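A minimal sketch of this check, assuming an illustrative threshold value and a flag indicating whether the first device already holds the model:

```python
TIMES_THRESHOLD = 1  # illustrative value; the embodiment only states "greater than"


def satisfies_correction_condition(execution_times: int,
                                   first_device_has_model: bool) -> bool:
    """The correction condition holds if at least one listed condition is met."""
    return execution_times > TIMES_THRESHOLD or not first_device_has_model
```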
In some embodiments, the detecting module 1302 is further configured to obtain second gesture information by detecting the first gesture;
the recognition module is used for taking the second gesture information as the input of the target gesture recognition model, obtaining the first control instruction output by the target gesture recognition model, and executing the operation corresponding to the first control instruction.
In some embodiments, the detection module 1302 is further configured to:
the first gesture is detected by the following detection modality: millimeter wave, laser, or ultrasonic;
and converting the first gesture into the first gesture information according to the detection modality, as in the sketch below.
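A hedged sketch of this modality-dependent conversion; the feature extraction per modality is entirely illustrative:

```python
from enum import Enum


class Modality(Enum):
    MILLIMETER_WAVE = "millimeter_wave"
    LASER = "laser"
    ULTRASONIC = "ultrasonic"


def to_gesture_info(raw_samples: list[float], modality: Modality) -> list[float]:
    """Normalize modality-specific raw samples into one gesture-information
    format; the slicing below is a stand-in for real feature extraction."""
    if modality is Modality.MILLIMETER_WAVE:
        return raw_samples[:64]   # e.g. range-Doppler derived features
    if modality is Modality.ULTRASONIC:
        return raw_samples[:32]   # e.g. echo-delay derived features
    return raw_samples[:16]       # laser: e.g. reflectance profile features
```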
Fig. 14 is a schematic structural diagram of a device control apparatus according to an embodiment of the present application, applied to a second device. As shown in fig. 14, the apparatus 1400 includes:
A second sending module 1401, configured to generate a gesture detection instruction, and send the gesture detection instruction to the first device;
A second receiving module 1402, configured to receive first gesture information sent by the first device;
An obtaining module 1403, configured to obtain a target gesture recognition model according to the first gesture information; the target gesture recognition model is capable of recognizing the first gesture as a first control instruction.
In some embodiments, the first gesture information satisfies a correction condition.
In some embodiments, the correction condition includes at least one of the following conditions:
the number of times the first gesture corresponding to the first gesture information has been executed is greater than a threshold number of times;
the target gesture recognition model is not present in the first device.
In some embodiments, the obtaining module 1403 is further configured to:
updating parameters of a reference gesture recognition model through the first gesture information and the first control instruction to obtain the target gesture recognition model.
In some embodiments, the obtaining module 1403 is further configured to:
transmitting the first gesture information and the first control instruction to a third device;
receiving the target gesture recognition model sent by the third device.
In some embodiments, the second transmitting module 1401 is further configured to:
The target gesture recognition model is sent to the first device.
In some embodiments, the apparatus 1400 further comprises: a display module for:
displaying a gesture guidance interface; the display content of the gesture guidance interface comprises: the image of the first gesture and the control identifier of the first control instruction.
It should be noted that the device control apparatus provided in the embodiment of the present application, including each unit included therein, may be implemented by a processor in an electronic device; of course, it may also be implemented by a specific logic circuit. In an implementation, the processor may be a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or the like.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, please refer to the description of the embodiments of the method of the present application.
It should be noted that, in the embodiment of the present application, if the above-mentioned device control method is implemented in the form of a software function module, and sold or used as a separate product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be embodied essentially or in a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes. Thus, embodiments of the application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps in the device control method provided in the above embodiment when executing the program. The electronic device may be a first device or a second device.
Accordingly, an embodiment of the present application provides a storage medium, that is, a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the device control method provided in the above embodiment.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that fig. 15 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application. As shown in fig. 15, the electronic device 1500 includes: a processor 1501, at least one communication bus 1502, at least one external communication interface 1504, and a memory 1505. The communication bus 1502 is configured to enable connection and communication between these components. In an example, the electronic device 1500 further includes a user interface 1503, where the user interface 1503 may comprise a display screen, and the external communication interface 1504 may comprise a standard wired interface and a wireless interface.
The memory 1505 is configured to store instructions and applications executable by the processor 1501, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the various modules in the processor 1501 and the electronic device; it may be implemented by a flash memory (FLASH) or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the application may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the embodiments of the present application may be embodied essentially or in a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A device control method, applied to a first device, the method comprising:
receiving a gesture detection instruction sent by a second device under the condition that a gesture guidance interface is displayed, wherein the display content of the gesture guidance interface comprises: an image of a first gesture action and a control identifier of a first control instruction; the gesture detection instruction is used for instructing the first device to start detecting gesture actions of a user;
under the condition that the detected gesture operation is the first gesture action, acquiring first gesture information by detecting the first gesture action;
transmitting the first gesture information to the second device;
wherein the first gesture information and the first control instruction are used for obtaining a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as the first control instruction.
2. The method according to claim 1, wherein the method further comprises:
receiving the target gesture recognition model sent by the second device.
3. The method of claim 1, wherein the first gesture information is used to obtain the target gesture recognition model if the first gesture information satisfies a correction condition.
4. A method according to claim 3, wherein the correction conditions comprise at least one of the following conditions:
the number of times the first gesture corresponding to the first gesture information has been executed is greater than a threshold number of times;
the target gesture recognition model is not present in the first device.
5. The method according to claim 1, wherein the method further comprises:
obtaining second gesture information by detecting the first gesture;
taking the second gesture information as the input of the target gesture recognition model, and obtaining the first control instruction output by the target gesture recognition model, so as to execute the operation corresponding to the first control instruction.
6. The method of claim 1, wherein detecting the first gesture to obtain first gesture information comprises:
the first gesture is detected by the following detection modality: millimeter wave, laser, or ultrasonic;
converting the first gesture into the first gesture information according to the detection modality.
7. A device control method, characterized by being applied to a second device, the method comprising:
displaying a gesture guiding interface, wherein the display content of the gesture guiding interface comprises: an image of the first gesture and a control identifier of the first control instruction;
Generating a gesture detection instruction, and sending the gesture detection instruction to a first device, wherein the gesture detection instruction is used for indicating the first device to start detecting gesture actions of a user;
receiving first gesture information of the first gesture action sent by the first device;
obtaining a target gesture recognition model through the first gesture information and the first control instruction; the target gesture recognition model is capable of recognizing the first gesture action as the first control instruction.
8. The method of claim 7, wherein the first gesture information satisfies a correction condition.
9. The method of claim 8, wherein the correction condition comprises at least one of:
the number of times the first gesture corresponding to the first gesture information has been executed is greater than a threshold number of times;
the target gesture recognition model is not present in the first device.
10. The method according to any one of claims 7 to 9, wherein the obtaining a target gesture recognition model by the first gesture information and the first control instruction includes:
updating parameters of a reference gesture recognition model through the first gesture information and the first control instruction to obtain the target gesture recognition model.
11. The method according to any one of claims 7 to 9, wherein the obtaining a target gesture recognition model by the first gesture information and the first control instruction includes:
transmitting the first gesture information and the first control instruction to a third device;
receiving the target gesture recognition model sent by the third device.
12. The method of claim 7, wherein the method further comprises:
The target gesture recognition model is sent to the first device.
13. A device control apparatus, characterized by being applied to a first device, the apparatus comprising:
The first receiving module is used for receiving a gesture detection instruction sent by the second device under the condition that a gesture guidance interface is displayed, wherein the display content of the gesture guidance interface comprises: an image of a first gesture action and a control identifier of a first control instruction; the gesture detection instruction is used for instructing the first device to start detecting gesture actions of a user;
the detection module is used for acquiring first gesture information by detecting the first gesture action under the condition that the detected gesture operation is the first gesture action;
the first sending module is used for sending the first gesture information to the second equipment;
wherein the first gesture information and the first control instruction are used for obtaining a target gesture recognition model, and the target gesture recognition model is capable of recognizing the first gesture action as the first control instruction.
14. A device control apparatus, characterized by being applied to a second device, the apparatus comprising:
The display module is used for displaying a gesture guiding interface, and the display content of the gesture guiding interface comprises: an image of the first gesture and a control identifier of the first control instruction;
the second sending module is used for generating a gesture detection instruction and sending the gesture detection instruction to the first device, wherein the gesture detection instruction is used for indicating the first device to start detecting gesture actions of a user;
The second receiving module is used for receiving first gesture information of the first gesture action sent by the first device;
The obtaining module is used for obtaining a target gesture recognition model through the first gesture information and the first control instruction; the target gesture recognition model is capable of recognizing the first gesture action as the first control instruction.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the device control method of any one of claims 1 to 6 or the steps of the device control method of any one of claims 7 to 12.
16. A storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the apparatus control method of any one of claims 1 to 6, or implements the apparatus control method of any one of claims 7 to 12.
CN202011195754.9A 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium Active CN112256135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011195754.9A CN112256135B (en) 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011195754.9A CN112256135B (en) 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112256135A CN112256135A (en) 2021-01-22
CN112256135B true CN112256135B (en) 2024-08-06

Family

ID=74267346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011195754.9A Active CN112256135B (en) 2020-10-30 2020-10-30 Equipment control method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112256135B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965639B (en) * 2021-03-17 2024-09-03 北京小米移动软件有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN114415825B (en) * 2021-12-13 2024-05-31 珠海格力电器股份有限公司 Control method, control device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862274A (en) * 2019-03-18 2019-06-07 北京字节跳动网络技术有限公司 Earphone with camera function, the method and apparatus for exporting control signal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595003A (en) * 2018-04-23 2018-09-28 Oppo广东移动通信有限公司 Function control method and relevant device
CN110505549A (en) * 2019-08-21 2019-11-26 Oppo(重庆)智能科技有限公司 The control method and device of earphone

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862274A (en) * 2019-03-18 2019-06-07 北京字节跳动网络技术有限公司 Earphone with camera function, the method and apparatus for exporting control signal

Also Published As

Publication number Publication date
CN112256135A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN108519871B (en) Audio signal processing method and related product
KR102270394B1 (en) Method, terminal, and storage medium for recognizing an image
JP7179273B2 (en) Translation model training methods, phrase translation methods, devices, storage media and computer programs
WO2019052293A1 (en) Machine translation method and apparatus, computer device and storage medium
CN108646971B (en) Screen sounding control method and device and electronic device
WO2019105376A1 (en) Gesture recognition method, terminal and storage medium
CN112751648B (en) Packet loss data recovery method, related device, equipment and storage medium
CN110399474B (en) Intelligent dialogue method, device, equipment and storage medium
CN108932102B (en) Data processing method and device and mobile terminal
US11693484B2 (en) Device control method, electronic device, and storage medium
CN109212534B (en) Method, device, equipment and storage medium for detecting holding gesture of mobile terminal
CN112256135B (en) Equipment control method and device, equipment and storage medium
CN111522592A (en) Intelligent terminal awakening method and device based on artificial intelligence
CN111158487A (en) Man-machine interaction method for interacting with intelligent terminal by using wireless earphone
CN109189360B (en) Screen sounding control method and device and electronic device
CN111816168A (en) Model training method, voice playing method, device and storage medium
CN109164908B (en) Interface control method and mobile terminal
CN108958631B (en) Screen sounding control method and device and electronic device
CN110597973A (en) Man-machine conversation method, device, terminal equipment and readable storage medium
CN114065168A (en) Information processing method, intelligent terminal and storage medium
CN109947345B (en) Fingerprint identification method and terminal equipment
CN107484082A (en) Method for controlling audio signal transmission based on sound channel and user terminal
CN111897916A (en) Voice instruction recognition method and device, terminal equipment and storage medium
US12019792B2 (en) Electronic device for providing alternative content and operating method thereof
CN117157613A (en) Electronic device for performing a capture function and method for operating an electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant