CN104598140A - Information processing method and first electronic equipment
- Publication number
- CN104598140A (application number CN201410838327.6A)
- Authority
- CN
- China
- Prior art keywords
- electronic device
- user
- path data
- state
- preset
- Prior art date
- Legal status (the status listed is an assumption and is not a legal conclusion)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the invention discloses an information processing method and a first electronic device. The first electronic device comprises a support frame and a detection unit arranged on the support frame, and can maintain a relative positional relationship with the head of its user through the support frame. The method comprises the following steps: when a first preset condition is met, controlling the first electronic device to be in a first state; acquiring first information through the detection unit, the first information representing motion information of a body part of the user of the first electronic device; determining first path data based on the first information, the first path data representing a motion trajectory of the body part of the user; and, when the first path data matches first preset path data, controlling the first electronic device to switch to a second state.
Description
Technical Field
The present invention relates to information processing technologies, and in particular to an information processing method and a first electronic device.
Background
At present, wearable electronic devices such as smart glasses are beginning to appear in daily life. Because of their particular form factor, wearable electronic devices have shortcomings when it comes to locking and unlocking, which introduces risks for the user and can degrade the user experience.
Disclosure of Invention
To solve the above technical problem, embodiments of the present invention provide an information processing method and a first electronic device that can unlock or lock a wearable electronic device.
To achieve this purpose, the technical solutions of the embodiments of the present invention are realized as follows:
An embodiment of the invention provides an information processing method applied to a first electronic device. The first electronic device includes a support frame and a detection unit arranged on the support frame, and can maintain a relative positional relationship with the head of its user through the support frame. The method includes the following steps:
when a first preset condition is met, controlling the first electronic device to be in a first state;
acquiring first information through the detection unit, the first information representing motion information of a body part of the user of the first electronic device;
determining first path data based on the first information, the first path data representing a motion trajectory of the body part of the user of the first electronic device; and
when the first path data matches first preset path data, controlling the first electronic device to switch to a second state.
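As a rough illustration of how these four steps fit together, the following Python sketch models the state switch. The class and function names, the point-list representation of the path data, and the matching tolerance are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # one sampled position of the tracked body part


def paths_match(path: List[Point], preset: List[Point], tol: float = 0.1) -> bool:
    """Naive point-wise comparison; the patent leaves the actual matching rule open."""
    if len(path) != len(preset):
        return False
    return all(abs(px - qx) <= tol and abs(py - qy) <= tol
               for (px, py), (qx, qy) in zip(path, preset))


@dataclass
class FirstElectronicDevice:
    first_preset_path: List[Point]   # first preset path data
    state: str = "idle"              # "first" (e.g. locked) or "second" (e.g. unlocked)

    def on_first_preset_condition(self) -> None:
        """Step 1: a matched voice instruction or key press puts the device in the first state."""
        self.state = "first"

    def on_first_information(self, samples: List[Point]) -> None:
        """Steps 2-4: treat the motion samples as first path data and match them."""
        if self.state != "first":
            return
        first_path_data = list(samples)   # step 3: derive the motion trajectory
        if paths_match(first_path_data, self.first_preset_path):
            self.state = "second"         # step 4: switch to the second state
```

A caller would invoke `on_first_preset_condition()` first and then feed motion samples from the detection unit into `on_first_information()`.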
An embodiment of the present invention further provides a first electronic device. The first electronic device includes a support frame and a detection unit arranged on the support frame; the support frame is used to maintain a relative positional relationship between the first electronic device and the head of its user. The first electronic device further includes a control unit, an acquisition unit, and a determination unit, wherein:
the control unit is configured to control the first electronic device to be in a first state when a first preset condition is met, and is further configured to control the first electronic device to switch to a second state when the determination unit determines that the first path data matches first preset path data;
the acquisition unit is configured to acquire first information through the detection unit, the first information representing motion information of a body part of the user of the first electronic device;
the determination unit is configured to determine first path data based on the first information acquired by the acquisition unit, the first path data representing a motion trajectory of the body part of the user of the first electronic device.
With the information processing method and the first electronic device provided by the embodiments of the invention, the first electronic device recognizes the motion trajectory of a body part of the user and matches it against preset path data, and can thereby switch from a locked state to an unlocked state or from an unlocked state back to a locked state. The body-part trajectory may be the trajectory of the user's eyes, so the first electronic device can switch its own state simply by recognizing the user's eye movement. Furthermore, the first electronic device in these embodiments is a wearable electronic device that can be communicatively connected to a second electronic device (e.g., a smartphone) through a preset communication mode; once the first electronic device determines that the recognized trajectory matches the preset path data, it can trigger a state switch on the second electronic device, for example from a locked state to an unlocked state or vice versa. The user can therefore switch the state of the second electronic device through the first (wearable) electronic device without performing any gesture operation on the second electronic device, which greatly improves the security of the second electronic device. Finally, after the match succeeds, the first electronic device can also trigger the second electronic device to perform a predetermined action, such as answering or placing a call; in an emergency, the second electronic device can be triggered through the first electronic device to dial an emergency number, which simplifies the user's operation and improves the user's safety to a certain extent.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of a first electronic device according to an embodiment of the invention;
Fig. 3 is a schematic flowchart of an information processing method according to a second embodiment of the present invention;
Fig. 4 is a schematic flowchart of an information processing method according to a third embodiment of the present invention;
Fig. 5 is a schematic flowchart of an information processing method according to a fourth embodiment of the present invention;
Fig. 6 is a schematic diagram of a first structure of the first electronic device according to an embodiment of the invention;
Fig. 7 is a schematic diagram of a second structure of the first electronic device according to an embodiment of the invention;
Fig. 8 is a schematic diagram of a third structure of the first electronic device according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Embodiment One
The embodiment of the invention provides an information processing method, which is applied to first electronic equipment, wherein the first electronic equipment comprises a detection unit. Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention; as shown in fig. 1, the information processing method includes:
Step 101: when a first preset condition is met, control the first electronic device to be in a first state.
in this embodiment, the first electronic device is a smart wearable device, such as smart glasses. FIG. 2 is a schematic diagram of a first electronic device according to an embodiment of the invention; as shown in fig. 2, the first electronic device includes: the device comprises a bracket 11 and a detection unit arranged on the bracket; the first electronic device can maintain a relative positional relationship with the head of the user of the first electronic device through the holder 11. The electronic equipment is also provided with a display unit, wherein the display unit is specifically a projection unit 12 and a lens module 13 which are arranged on the bracket 11; the projection unit 11 may be implemented by a micro projector, and a projection source in the projection unit 12 may emit a light beam forming display content; the display content may be multimedia content such as picture content, video content, document content, and the like. The lens module 13 is configured to change a light path direction of a light beam emitted by the projection source, so as to project the light beam forming the first display content to eyes of a user of the electronic device, so that the user observes the display content.
Here, the first electronic device can respond to a first part of instructions in a preset instruction set when in the first state. Specifically, when the first electronic device emits, through the projection unit 12 shown in Fig. 2, a first light beam representing first display content and projects it to the eyes of its user, the user observes a first interface whose content is the first display content. In this embodiment, the first state is the state in which the first electronic device emits the first light beam toward the user's eyes so that the user observes the first interface. When the user operates the first interface through a preset operation mode, the first electronic device can detect the operation and generate and respond to an instruction based on it; this instruction belongs to the first part of instructions in the preset instruction set of the first electronic device. The preset operation mode includes, but is not limited to, a voice operation mode, a hover gesture operation mode, and an eyeball trajectory matching mode.
In this embodiment, meeting the first preset condition includes receiving a voice instruction and determining that the first preset condition is met when the voice instruction matches a preset instruction. Alternatively, meeting the first preset condition includes determining that the first preset condition is met when a trigger operation on a preset key is detected; the preset key may be, for example, a power key, comparable to the power key or home key on an existing smartphone, and when it is triggered the projection unit of the first electronic device projects a light beam so that the user of the first electronic device observes the first interface.
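A minimal sketch of the two triggering branches described above, assuming the voice recognizer already yields a text string; the preset phrase is a made-up example rather than anything specified in the patent.

```python
from typing import Optional

PRESET_VOICE_INSTRUCTION = "unlock glasses"   # assumed phrase, not specified in the patent


def first_preset_condition_met(voice_text: Optional[str] = None,
                               preset_key_triggered: bool = False) -> bool:
    """True if a recognized voice instruction matches the preset instruction,
    or if a trigger operation on the preset key (e.g. the power key) is detected."""
    if voice_text is not None and voice_text.strip().lower() == PRESET_VOICE_INSTRUCTION:
        return True
    return preset_key_triggered
```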
Step 102: acquiring first information through the detection unit; the first information represents motion information of a human body part of a user of the first electronic device.
In this embodiment, the first information represents motion information of a human body part of the first electronic device user, where the human body part may be an eyeball of the first electronic device user, a hand of the first electronic device user, or the like, that is, the first information may represent motion information of the eyeball of the first electronic device user, or the first information represents motion information of the hand of the first electronic device user.
Here, when the first information represents motion information of the user's eyeballs, the detection unit may be an eye tracker or an image acquisition unit, either of which can track the movement of the user's eyeballs. When the detection unit is an image acquisition unit, it is a different unit from the image acquisition unit 14 shown in Fig. 2: it is arranged on the inner side of the first electronic device, facing the eyes of the user, so that its viewing range covers the user's eyes. Because the user's eyes change slightly as they look in different directions, features of the eyes can be extracted from the image data collected by this unit, and the motion of the eyeballs can be tracked by image capture or scanning based on those features. When the detection unit is an eye tracker, the eye tracker is likewise arranged on the inner side of the first electronic device facing the user's eyes, and the motion information of the user's eyeballs is obtained from the eye movement data it acquires.
Alternatively, when the first information represents motion information of the user's hand, the detection unit may be the image acquisition unit 14 shown in Fig. 2, which is arranged on the outer side of the first electronic device and faces the same direction as the user's eyes. When the image acquisition unit 14 is on and the user performs a hover gesture within its viewing range, the unit collects motion image data containing the user's hand, and the motion information of the hand is obtained by performing image recognition and feature extraction on that image data.
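The paragraphs above leave the image-processing details open. The sketch below only shows the shape of the data flow, with `locate_feature` standing in for whatever pupil or hand detection the detection unit actually performs; both the function and the light smoothing are assumptions, not the patent's algorithm.

```python
from typing import Iterable, List, Tuple

Point = Tuple[float, float]


def locate_feature(frame: Point) -> Point:
    """Placeholder for the pupil- or hand-detection step; each 'frame' here is
    assumed to already be reduced to the (x, y) position of the tracked feature."""
    return frame


def acquire_first_information(frames: Iterable[Point], window: int = 3) -> List[Point]:
    """Step 102: collect per-sample positions and lightly smooth them, which is
    one plausible form the 'first information' (motion information) could take."""
    positions = [locate_feature(frame) for frame in frames]
    smoothed: List[Point] = []
    for i in range(len(positions)):
        chunk = positions[max(0, i - window + 1): i + 1]
        smoothed.append((sum(p[0] for p in chunk) / len(chunk),
                         sum(p[1] for p in chunk) / len(chunk)))
    return smoothed
```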
Step 103: determining first path data based on the first information; the first path data represents a motion trajectory of a human body part of a user of the first electronic device.
In this embodiment, when the first information represents the movement information of the eyeball of the first electronic device user, the first path data represents the movement track of the eyeball of the first electronic device user; when the first information represents the motion information of the hand of the first electronic equipment user, the first path data represents the motion track of the hand of the first electronic equipment user.
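The patent does not fix a representation for the "first path data". One plausible sketch is to normalize and resample the motion samples so the trajectory can be compared with preset path data regardless of where, or how large, the gesture was performed; the normalization scheme and point count below are assumptions.

```python
from typing import List, Tuple

Point = Tuple[float, float]


def determine_first_path_data(samples: List[Point], n_points: int = 16) -> List[Point]:
    """Derive first path data (a motion trajectory) from the motion samples.

    This sketch translates the trajectory to the origin, scales it to unit size,
    and resamples it to a fixed number of points.
    """
    if len(samples) < 2:
        return list(samples)
    x0, y0 = samples[0]
    shifted = [(x - x0, y - y0) for x, y in samples]
    extent = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    scaled = [(x / extent, y / extent) for x, y in shifted]
    # Crude index-based resampling; a real implementation might resample by arc length.
    idx = [round(i * (len(scaled) - 1) / (n_points - 1)) for i in range(n_points)]
    return [scaled[i] for i in idx]
```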
Step 104: when the first path data matches first preset path data, control the first electronic device to switch to a second state.
Here, the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state. Specifically, when the first electronic device emits, through the projection unit 12 shown in Fig. 2, a second light beam representing second display content and projects it to the eyes of its user, the user observes a second interface whose content is the second display content. In this embodiment, the second state is the state in which the first electronic device emits the second light beam toward the user's eyes so that the user observes the second interface. When the user operates the second interface through a preset operation mode, the first electronic device can detect the operation and generate and respond to an instruction based on it; this instruction belongs to the second part of instructions in the preset instruction set of the first electronic device.
In this embodiment, the first portion is smaller than the second portion, or the first portion is larger than the second portion. Specifically, when the first portion is smaller than the second portion, that is, the first electronic device emits a first light beam to be projected into the eye of the user of the first electronic device so that the user observes the first interface, the first electronic device can respond to the first portion of the preset instruction set; the first electronic device can respond to a second part of instructions in the preset instruction set when the first electronic device emits a second light beam to be projected into the eyes of the user of the first electronic device so that the user observes the second interface; it is understood that the number of instructions that can be responded to by the first electronic device when displaying the first interface is smaller than the number of instructions that can be responded to by the first electronic device when displaying the second interface. The present embodiment can be applied to the following scenarios: when the first interface is an unlocking interface, the first electronic equipment can only respond to specific unlocking operation when displaying the unlocking interface, namely can only respond to a specific unlocking instruction; enabling the second interface to be an interface after successful unlocking through the specific unlocking operation, and enabling the first electronic device to respond to various trigger operations when displaying the interface after successful unlocking, namely responding to all trigger instructions in the preset instruction set, such as instructions for starting, editing, deleting, unloading, checking and the like; based thereon, the first portion is smaller than the second portion.
When the first portion is larger than the second portion, the first electronic device can respond to the first portion of instructions in the preset instruction set while it emits the first light beam so that the user observes the first interface, and can respond to the second portion of instructions while it emits the second light beam so that the user observes the second interface; that is, the number of instructions the first electronic device can respond to when displaying the first interface is greater than the number it can respond to when displaying the second interface. The applicable scenario is the reverse of the one above: the first interface is the interface shown after successful unlocking (i.e., the desktop of the first electronic device, presenting a number of display icons), on which the first electronic device can respond to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall, and view instructions; a specific locking operation then switches to the second interface, an unlock interface, on which the first electronic device can only respond to the specific unlock instruction. Based thereon, the first portion is larger than the second portion.
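Both scenarios above reduce to a lookup from the current state to the portion of the preset instruction set the device responds to. The sketch below shows the lock-screen case in which the first portion is smaller than the second; the instruction names and data layout are assumptions for illustration.

```python
PRESET_INSTRUCTION_SET = {"unlock", "start", "edit", "delete", "uninstall", "view"}

# Which portion of the preset instruction set each state responds to
# (first interface = unlock interface, second interface = unlocked desktop).
RESPONDABLE = {
    "first": {"unlock"},                # first portion: only the specific unlock instruction
    "second": PRESET_INSTRUCTION_SET,   # second portion: all trigger instructions
}


def responds_to(state: str, instruction: str) -> bool:
    """Return True if the first electronic device in `state` responds to `instruction`."""
    return instruction in RESPONDABLE.get(state, set())


assert responds_to("first", "unlock") and not responds_to("first", "delete")
```

In the reverse scenario, the two entries would simply be swapped, making the first portion larger than the second.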
With the technical solution of this embodiment, the first electronic device can switch from a locked state to an unlocked state, or from an unlocked state back to a locked state, by recognizing the motion trajectory of a body part of the user and matching it against preset path data. In addition, the trajectory may be the trajectory of the user's eyes, so the first electronic device can switch its own state simply by recognizing the user's eye movement.
Embodiment Two
The embodiment of the invention also provides an information processing method, which is applied to the first electronic equipment, wherein the first electronic equipment comprises a detection unit. In this embodiment, the first electronic device can also communicate with a second electronic device. FIG. 3 is a flowchart illustrating an information processing method according to a second embodiment of the present invention; as shown in fig. 3, the information processing method includes:
Step 201: when a first preset condition is met, control the first electronic device to be in a first state.
In this embodiment, the first electronic device is a smart wearable device such as smart glasses; the smart glasses may be as shown in Fig. 2, and their specific configuration is as described in step 101 of the first embodiment, which is not repeated here.
Here, as in step 101 of the first embodiment, the first electronic device can respond to a first part of instructions in a preset instruction set when in the first state: the first state is the state in which the first electronic device emits, through the projection unit 12 shown in Fig. 2, a first light beam representing first display content toward the eyes of its user so that the user observes a first interface whose content is the first display content. When the user operates the first interface through a preset operation mode (a voice operation mode, a hover gesture operation mode, an eyeball trajectory matching mode, or the like), the first electronic device detects the operation and generates and responds to an instruction belonging to the first part of instructions in the preset instruction set.
In this embodiment, the meeting the first preset condition includes: and receiving a voice instruction, and determining that a first preset condition is met when the voice instruction is matched with a preset instruction. Or, the meeting the first preset condition includes: when the triggering operation aiming at a preset key is detected, it is determined that a first preset condition is met, wherein the preset key can be a power key or the like, and when the power key is triggered, a projection unit of the first electronic device can be triggered to project a light beam so that a user of the first electronic device observes a first interface.
Step 202: acquiring first information through the detection unit; the first information represents motion information of a human body part of a user of the first electronic device.
In this embodiment, the first information represents motion information of a human body part of the first electronic device user, where the human body part may be an eyeball of the first electronic device user, a hand of the first electronic device user, or the like, that is, the first information may represent motion information of the eyeball of the first electronic device user, or the first information represents motion information of the hand of the first electronic device user. Specifically, the obtaining manner of the first information may be as shown in step 102 in the first embodiment, and is not described herein again.
Step 203: determining first path data based on the first information; the first path data represents a motion trajectory of a human body part of a user of the first electronic device.
In this embodiment, when the first information represents the movement information of the eyeball of the first electronic device user, the first path data represents the movement track of the eyeball of the first electronic device user; when the first information represents the motion information of the hand of the first electronic equipment user, the first path data represents the motion track of the hand of the first electronic equipment user.
Step 204: when the first path data matches first preset path data, control the first electronic device to switch to a second state, and send a first instruction to the second electronic device to control the second electronic device to execute a predetermined operation.
Here, as in step 104 of the first embodiment, the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state: the second state is the state in which the first electronic device emits, through the projection unit 12 shown in Fig. 2, a second light beam representing second display content toward the eyes of its user so that the user observes a second interface whose content is the second display content. When the user operates the second interface through a preset operation mode, the first electronic device detects the operation and generates and responds to an instruction belonging to the second part of instructions in the preset instruction set.
In this embodiment, the first portion is smaller than the second portion, or the first portion is larger than the second portion. Specifically, when the first portion is smaller than the second portion, that is, the first electronic device emits a first light beam to be projected into the eye of the user of the first electronic device so that the user observes the first interface, the first electronic device can respond to the first portion of the preset instruction set; the first electronic device can respond to a second part of instructions in the preset instruction set when the first electronic device emits a second light beam to be projected into the eyes of the user of the first electronic device so that the user observes the second interface; it is understood that the number of instructions that can be responded to by the first electronic device when displaying the first interface is smaller than the number of instructions that can be responded to by the first electronic device when displaying the second interface. The present embodiment can be applied to the following scenarios: when the first interface is an unlocking interface, the first electronic equipment can only respond to specific unlocking operation when displaying the unlocking interface, namely can only respond to a specific unlocking instruction; enabling the second interface to be an interface after successful unlocking through the specific unlocking operation, and enabling the first electronic device to respond to various trigger operations when displaying the interface after successful unlocking, namely responding to all trigger instructions in the preset instruction set, such as instructions for starting, editing, deleting, unloading, checking and the like; based thereon, the first portion is smaller than the second portion.
When the first portion is larger than the second portion, the first electronic device can respond to the first portion of instructions in the preset instruction set while it emits the first light beam so that the user observes the first interface, and can respond to the second portion of instructions while it emits the second light beam so that the user observes the second interface; that is, the number of instructions the first electronic device can respond to when displaying the first interface is greater than the number it can respond to when displaying the second interface. The applicable scenario is the reverse of the one above: the first interface is the interface shown after successful unlocking (i.e., the desktop of the first electronic device, presenting a number of display icons), on which the first electronic device can respond to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall, and view instructions; a specific locking operation then switches to the second interface, an unlock interface, on which the first electronic device can only respond to the specific unlock instruction. Based thereon, the first portion is larger than the second portion.
In this embodiment, the first electronic device and the second electronic device can communicate based on a preset communication mode, which may be Bluetooth or Near Field Communication (NFC) but is not limited to these two. The second electronic device may be any electronic device supporting the preset communication mode, such as a mobile phone or a tablet computer. Once the first and second electronic devices are successfully connected over the preset communication mode, they can exchange data through it. In this embodiment, when the first path data in the first electronic device matches the first preset path data, the first electronic device generates a first instruction and sends it to the second electronic device over the preset communication mode.
The first instruction can be preset according to which preset path data the detected path data matches. For example, a plurality of preset path data may be pre-configured in the first electronic device, the first preset path data in this step being one of them, and each preset path data corresponds to one instruction. When the detected path data matches any one of the preset path data, the corresponding instruction is generated; this instruction may be used to control the second electronic device to perform a predetermined operation, such as answering a call, dialing a preset number (e.g., an emergency number such as 110 or 120), or unlocking, and the predetermined operation may be set by the user. For example, when a user wearing the smart glasses shown in Fig. 2 has successfully connected the glasses to a smartphone over Bluetooth and the smartphone receives a call, the user can move his or her eyes so that the glasses recognize the eyeball motion and determine path data; when that path data matches the first preset path data associated with a call-answering instruction, a first instruction representing call answering is generated and sent to the smartphone, which answers the call according to the first instruction.
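The mapping from preset path data to instructions for the second electronic device can be sketched as follows. The path shapes, the instruction names, and the `send` callable standing in for the Bluetooth/NFC link are all assumptions made for illustration.

```python
from typing import Callable, Dict, List, Optional, Tuple

Point = Tuple[float, float]
Matcher = Callable[[List[Point], List[Point]], bool]

# Each preset path data corresponds to one instruction (example shapes only).
PRESET_PATHS: Dict[str, List[Point]] = {
    "answer_call": [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],      # e.g. glance left to right
    "dial_emergency": [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)],   # e.g. glance top to bottom
}


def match_instruction(path: List[Point], match: Matcher) -> Optional[str]:
    """Return the instruction whose preset path the detected path matches, if any."""
    for instruction, preset in PRESET_PATHS.items():
        if match(path, preset):
            return instruction
    return None


def forward_first_instruction(path: List[Point], match: Matcher,
                              send: Callable[[str], None]) -> None:
    """`send` abstracts the preset communication link (Bluetooth/NFC) to the phone."""
    instruction = match_instruction(path, match)
    if instruction is not None:
        send(instruction)   # e.g. the smartphone answers the call on "answer_call"
```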
By adopting the technical scheme of the embodiment of the invention, the first electronic equipment can realize the switching from the locking state to the unlocking success state or from the unlocking success state to the locking state by identifying the motion track of the human body part of the user and matching with the preset path data; in addition, the movement track of the human body part can be the movement track of eyes of a user, namely the first electronic equipment can realize the state switching of the first electronic equipment by identifying the movement track of the eyes of the user; in a third aspect, according to the technical solution of the embodiment of the present invention, the first electronic device can trigger the second electronic device to perform a predetermined action, such as making and receiving a call, and the second electronic device can be triggered by the first electronic device to make an emergency call in an emergency, so that the operation of a user is simplified, and the safety of the user can be improved to a certain extent.
Embodiment Three
The embodiment of the invention also provides an information processing method, which is applied to the first electronic equipment, wherein the first electronic equipment comprises a detection unit. In this embodiment, the first electronic device can also communicate with a second electronic device. FIG. 4 is a flowchart illustrating an information processing method according to a third embodiment of the present invention; as shown in fig. 4, the information processing method includes:
Step 301: when receiving specific display content sent by the second electronic device, determine that a first preset condition is met and control the first electronic device to be in a first state.
In this embodiment, the first electronic device is a smart wearable device such as smart glasses; the smart glasses may be as shown in Fig. 2, and their specific configuration is as described in step 101 of the first embodiment, which is not repeated here.
Here, the first electronic device and the second electronic device can communicate based on a preset communication mode, which may be Bluetooth or NFC but is not limited to these two. The second electronic device may be any electronic device supporting the preset communication mode, such as a mobile phone or a tablet computer. Once the first and second electronic devices are successfully connected over the preset communication mode, they can exchange data through it.
In this embodiment, the first electronic device receives, over the preset communication mode, specific display content sent by the second electronic device; the specific display content is display content capable of putting the first electronic device into the first state. In this step, controlling the first electronic device to be in the first state includes: generating first display content from the specific display content according to a preset generation mode, and emitting a first light beam toward the eyes of the user of the first electronic device so that the user perceives a first interface whose display content is the first display content.
Specifically, the second electronic device is also in the first state at this time, and the specific display content may be the display content of the second electronic device in the first state.
in this embodiment, the first electronic device can respond to a first part of instructions in a preset instruction set when being in the first state. Specifically, when the first electronic device emits a first light beam representing first display content through the projection unit 11 shown in fig. 2 and projects the first light beam to the eyes of the user of the first electronic device, the user of the first electronic device observes a first interface, and the content of the first interface is the first display content; in this embodiment, the first state of the first electronic device is a state in which the first electronic device emits a first light beam to be projected to eyes of a user of the first electronic device, so that the user of the first electronic device observes the first interface; when the first electronic device user operates the first interface through a preset operation mode, the first electronic device can detect the operation, and generate and respond to an instruction based on the operation, wherein the instruction is a first part of instructions in a preset instruction set of the first electronic device. The preset operation mode includes, but is not limited to, a voice operation mode, a hover gesture operation mode, an eyeball trajectory matching mode, and the like.
Step 302: acquiring first information through the detection unit; the first information represents motion information of a human body part of a user of the first electronic device.
In this embodiment, the first information represents motion information of a human body part of the first electronic device user, where the human body part may be an eyeball of the first electronic device user, a hand of the first electronic device user, or the like, that is, the first information may represent motion information of the eyeball of the first electronic device user, or the first information represents motion information of the hand of the first electronic device user. Specifically, the obtaining manner of the first information may be as shown in step 102 in the first embodiment, and is not described herein again.
Step 303: determining first path data based on the first information; the first path data represents a motion trajectory of a human body part of a user of the first electronic device.
In this embodiment, when the first information represents the movement information of the eyeball of the first electronic device user, the first path data represents the movement track of the eyeball of the first electronic device user; when the first information represents the motion information of the hand of the first electronic equipment user, the first path data represents the motion track of the hand of the first electronic equipment user.
Step 304: when the first path data matches first preset path data, control the first electronic device to switch to a second state, and send a first instruction to the second electronic device to control it to switch from the first state to the second state according to the first instruction.
In this embodiment, the first electronic device and the second electronic device can communicate based on a preset communication mode, which may be Bluetooth or NFC but is not limited to these two. The second electronic device may be any electronic device supporting the preset communication mode, such as a mobile phone or a tablet computer. Once the first and second electronic devices are successfully connected over the preset communication mode, they can exchange data through it.
Here, the first electronic device and the second electronic device can respond to a second part of the preset instruction set when in the second state.
In this embodiment, the first portion is smaller than the second portion, or the first portion is larger than the second portion. Specifically, when the first portion is smaller than the second portion, that is, the first electronic device emits a first light beam to be projected into the eye of the user of the first electronic device so that the user observes the first interface, the first electronic device can respond to the first portion of the preset instruction set; the first electronic device can respond to a second part of instructions in the preset instruction set when the first electronic device emits a second light beam to be projected into the eyes of the user of the first electronic device so that the user observes the second interface; it is understood that the number of instructions that can be responded to by the first electronic device when displaying the first interface is smaller than the number of instructions that can be responded to by the first electronic device when displaying the second interface. The present embodiment can be applied to the following scenarios: when the first interface is an unlocking interface, the first electronic equipment can only respond to specific unlocking operation when displaying the unlocking interface, namely can only respond to a specific unlocking instruction; and if the second interface is the interface which is successfully unlocked through the specific unlocking operation, the first electronic device can respond to various trigger operations when displaying the interface which is successfully unlocked, namely can respond to all trigger instructions in the preset instruction set, such as instructions of starting, editing, deleting, unloading, checking and the like. When the second electronic device displays a third interface, the second electronic device can respond to a first part of instructions in the preset instruction set, and the display content of the third interface is the specific display content; when the second electronic device displays a fourth interface, the second electronic device can respond to a second part of instructions in the preset instruction set, and the display content of the fourth interface is the second specific display content; it is understood that the number of instructions that can be responded to by the second electronic device when displaying the third interface is smaller than the number of instructions that can be responded to by the second electronic device when displaying the fourth interface. The present embodiment can be applied to the following scenarios: when the third interface is an unlocking interface, the second electronic device can only respond to specific unlocking operation when displaying the unlocking interface, namely can only respond to a specific unlocking instruction; and if the fourth interface is the interface which is successfully unlocked through the specific unlocking operation, the second electronic device can respond to various trigger operations when displaying the interface which is successfully unlocked, namely can respond to all trigger instructions in the preset instruction set, such as instructions of starting, editing, deleting, unloading, checking and the like. Based thereon, the first portion is smaller than the second portion.
The first electronic device is capable of responding to a first part of the set of instructions when the first part is larger than the second part, i.e. when the first electronic device emits a first light beam projected into the eye of the user of the first electronic device to make the user observe the first interface; the first electronic device can respond to a second part of instructions in the preset instruction set when the first electronic device emits a second light beam to be projected into the eyes of the user of the first electronic device so that the user observes the second interface; it is understood that the number of instructions that can be responded to by the first electronic device when displaying the first interface is greater than the number of instructions that can be responded to by the first electronic device when displaying the second interface. The applicable scenario of this embodiment may be an inverse scenario of the application scenario in which the first part is smaller than the second part: the first interface is an interface after the unlocking is successful (that is, an interface which enters a desktop system of the first electronic device and can present a plurality of display icons), so that the first electronic device can respond to various trigger operations when displaying the interface after the unlocking is successful, that is, can respond to all trigger instructions in the preset instruction set, such as instructions of starting, editing, deleting, unloading, viewing and the like; and enabling the second interface to be an unlocking interface through a specific locking operation, wherein the first electronic equipment can only respond to a specific unlocking operation when displaying the unlocking interface, namely can only respond to a specific unlocking instruction. When the second electronic device displays a third interface, the second electronic device can respond to a first part of instructions in the preset instruction set, and the display content of the third interface is the specific display content; when the second electronic device displays a fourth interface, the second electronic device can respond to a second part of instructions in the preset instruction set, and the display content of the fourth interface is the second specific display content; it is understood that the number of instructions that can be responded to by the second electronic device when displaying the third interface is greater than the number of instructions that can be responded to by the second electronic device when displaying the fourth interface. 
The applicable scenario here is the reverse of the one in which the first portion is smaller than the second portion: the third interface is the interface shown after successful unlocking (i.e., the desktop of the second electronic device, presenting a number of display icons), on which the second electronic device can respond to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall, and view instructions; a specific locking operation then switches to the fourth interface, an unlock interface, on which the second electronic device can only respond to the specific unlock instruction. Based thereon, the first portion is larger than the second portion.
By adopting the technical scheme of the embodiment of the invention, the first electronic equipment can realize the switching from the locking state to the unlocking success state or from the unlocking success state to the locking state by identifying the motion track of the human body part of the user and matching with the preset path data; in addition, the movement track of the human body part can be the movement track of eyes of a user, namely the first electronic equipment can realize the state switching of the first electronic equipment by identifying the movement track of the eyes of the user; in a third aspect, according to the technical solution of the embodiment of the present invention, the first electronic device can trigger the state switching of the second electronic device, for example, the state switching from the locked state to the unlocked successful state or the state switching from the unlocked successful state to the locked state, so that a user can trigger the state switching of the second electronic device by the first electronic device (i.e., the wearable electronic device) without performing a gesture operation on the second electronic device, thereby greatly improving the security of the second electronic device.
Embodiment Four
The embodiment of the invention also provides an information processing method, which is applied to the first electronic equipment, wherein the first electronic equipment comprises a detection unit. In this embodiment, the first electronic device can also communicate with a second electronic device. FIG. 5 is a flowchart illustrating an information processing method according to a fourth embodiment of the present invention; as shown in fig. 5, the information processing method includes:
Step 401: when a first preset condition is met, control the first electronic device to be in a first state.
In this embodiment, the first electronic device is a smart wearable device such as smart glasses; the smart glasses may be as shown in Fig. 2, and their specific configuration is as described in step 101 of the first embodiment, which is not repeated here.
Here, as in step 101 of the first embodiment, the first electronic device can respond to a first part of instructions in a preset instruction set when in the first state: the first state is the state in which the first electronic device emits, through the projection unit 12 shown in Fig. 2, a first light beam representing first display content toward the eyes of its user so that the user observes a first interface whose content is the first display content. When the user operates the first interface through a preset operation mode (a voice operation mode, a hover gesture operation mode, an eyeball trajectory matching mode, or the like), the first electronic device detects the operation and generates and responds to an instruction belonging to the first part of instructions in the preset instruction set.
In this embodiment, the meeting the first preset condition includes: and receiving a voice instruction, and determining that a first preset condition is met when the voice instruction is matched with a preset instruction. Or, the meeting the first preset condition includes: when the triggering operation aiming at a preset key is detected, it is determined that a first preset condition is met, wherein the preset key can be a power key or the like, and when the power key is triggered, a projection unit of the first electronic device can be triggered to project a light beam so that a user of the first electronic device observes a first interface.
Step 402: acquiring first information through the detection unit; the first information represents motion information of a human body part of a user of the first electronic device.
In this embodiment, the first information represents motion information of a human body part of the first electronic device user, where the human body part may be an eyeball of the first electronic device user, a hand of the first electronic device user, or the like, that is, the first information may represent motion information of the eyeball of the first electronic device user, or the first information represents motion information of the hand of the first electronic device user. Specifically, the obtaining manner of the first information may be as shown in step 102 in the first embodiment, and is not described herein again.
Step 403: determining first path data based on the first information; the first path data represents a motion trajectory of a human body part of a user of the first electronic device.
In this embodiment, when the first information represents the movement information of the eyeball of the first electronic device user, the first path data represents the movement track of the eyeball of the first electronic device user; when the first information represents the motion information of the hand of the first electronic equipment user, the first path data represents the motion track of the hand of the first electronic equipment user.
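As an illustration of steps 402 and 403, the following Python sketch (an assumption, not the patent's actual algorithm; the grid quantization and all names are hypothetical) turns a time series of eyeball or hand positions reported by the detection unit into first path data, i.e. a trajectory that can later be matched against preset path data:

```python
# Illustrative sketch (assumed approach) of step 403: turning the first
# information -- a time series of eyeball or hand positions from the
# detection unit -- into first path data, here a sequence of grid cells
# describing the motion trajectory.

def to_path_data(samples, grid=8):
    """Quantize raw (x, y) samples in [0, 1] onto a coarse grid so that the
    trajectory can later be compared against preset path data."""
    path = []
    for x, y in samples:
        cell = (min(int(x * grid), grid - 1), min(int(y * grid), grid - 1))
        if not path or path[-1] != cell:      # drop consecutive duplicates
            path.append(cell)
    return path


if __name__ == "__main__":
    # e.g. gaze moving roughly left-to-right across the first interface
    gaze_samples = [(0.05, 0.5), (0.2, 0.5), (0.4, 0.52), (0.7, 0.5), (0.95, 0.48)]
    print(to_path_data(gaze_samples))
```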
Step 404: when the first part of the first path data matches the first part of the first preset path data, and the second part of the first path data matches the first part of the second preset path data of the second electronic device, controlling the first electronic device to be switched to a second state; and/or sending a first instruction to the second electronic device so that the second electronic device switches to a second state according to the first instruction; the second electronic device can respond to a second part of instructions in the preset instruction set when in the second state.
In this embodiment, the first electronic device and the second electronic device can communicate with each other based on a preset communication mode, which may be a Bluetooth communication mode or an NFC mode, but is not limited to these two modes. The second electronic device may be any electronic device supporting the preset communication mode, such as a mobile phone or a tablet computer. When the first electronic device and the second electronic device are successfully connected based on the preset communication mode, they can transmit data through that mode.
The first electronic device stores first preset path data, and the second electronic device stores second preset path data. After the first electronic device determines the first path data, it matches the first path data with the first preset path data on the one hand, and on the other hand sends the first path data to the second electronic device based on the preset communication mode, so that the second electronic device matches the first path data with the second preset path data. In this embodiment, the first part and the second part may be any parts of the first path data; together they may form the complete first path data, or they may form only a preset proportion of it, for example 80%, without forming the complete first path data. For example, the first part may be the first 50% of the first path data and the second part the last 50%, but the ratio may take other values and is not limited to those listed in this embodiment.
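The split matching described above can be sketched as follows in Python (an illustrative assumption, not the claimed implementation; the 50%/50% split, the equality matcher, and the stand-in for the second electronic device are all hypothetical):

```python
# Sketch of the split matching under assumed names: the first part of the
# first path data is compared on the first electronic device against its
# first preset path data, and the second part is compared against the second
# preset path data held by the second electronic device. A 50%/50% split is
# used, as in the example in the text.

def split_path(path, ratio=0.5):
    cut = int(len(path) * ratio)
    return path[:cut], path[cut:]


def matches(observed, preset):
    """Simple equality match; a real matcher might tolerate small deviations."""
    return observed == preset


def second_device_match(second_part, second_preset_first_part):
    """Stand-in for the comparison done on the second electronic device after
    the data is transmitted over the preset communication mode
    (e.g. Bluetooth or NFC)."""
    return matches(second_part, second_preset_first_part)


if __name__ == "__main__":
    first_path_data = [(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, 4)]
    first_preset_first_part = [(0, 4), (1, 4), (2, 4)]      # stored on device 1
    second_preset_first_part = [(3, 4), (4, 4), (5, 4)]     # stored on device 2

    part1, part2 = split_path(first_path_data)
    if matches(part1, first_preset_first_part) and second_device_match(part2, second_preset_first_part):
        print("switch first device to second state / send first instruction")
```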
Here, the first electronic device and the second electronic device can respond to a second part of the preset instruction set when in the second state.
In this embodiment, the first portion may be smaller than the second portion, or larger than the second portion.

When the first portion is smaller than the second portion: the first electronic device can respond to the first part of instructions in the preset instruction set while it emits the first light beam projected into the eyes of the user so that the user observes the first interface, and it can respond to the second part of instructions in the preset instruction set while it emits a second light beam projected into the eyes of the user so that the user observes a second interface. That is, the number of instructions the first electronic device can respond to when displaying the first interface is smaller than the number it can respond to when displaying the second interface. This embodiment can be applied to the following scenario: the first interface is an unlocking interface, and while displaying it the first electronic device can respond only to a specific unlocking operation, that is, only to a specific unlocking instruction; the second interface is the interface presented after successful unlocking through that specific unlocking operation, and while displaying it the first electronic device can respond to various trigger operations, that is, to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall and view instructions. And/or, when the second electronic device displays a third interface, whose display content is the specific display content, it can respond to the first part of instructions in the preset instruction set; when it displays a fourth interface, whose display content is the second specific display content, it can respond to the second part of instructions in the preset instruction set. That is, the number of instructions the second electronic device can respond to when displaying the third interface is smaller than the number it can respond to when displaying the fourth interface. This can be applied to the following scenario: the third interface is an unlocking interface, and while displaying it the second electronic device can respond only to a specific unlocking instruction; the fourth interface is the interface presented after successful unlocking, and while displaying it the second electronic device can respond to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall and view instructions. On this basis, the first portion is smaller than the second portion.
When the first portion is larger than the second portion: the first electronic device can respond to the first part of instructions in the preset instruction set while it emits the first light beam projected into the eyes of the user so that the user observes the first interface, and it can respond to the second part of instructions in the preset instruction set while it emits the second light beam so that the user observes the second interface. That is, the number of instructions the first electronic device can respond to when displaying the first interface is greater than the number it can respond to when displaying the second interface. The applicable scenario is the inverse of the scenario in which the first portion is smaller than the second portion: the first interface is the interface presented after successful unlocking (that is, an interface that has entered the desktop system of the first electronic device and can present a plurality of display icons), so while displaying it the first electronic device can respond to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall and view instructions; the second interface is the unlocking interface entered through a specific locking operation, so while displaying it the first electronic device can respond only to a specific unlocking operation, that is, only to a specific unlocking instruction. And/or, when the second electronic device displays the third interface, whose display content is the specific display content, it can respond to the first part of instructions in the preset instruction set; when it displays the fourth interface, whose display content is the second specific display content, it can respond to the second part of instructions in the preset instruction set. That is, the number of instructions the second electronic device can respond to when displaying the third interface is greater than the number it can respond to when displaying the fourth interface.

Here, too, the applicable scenario is the inverse of the scenario in which the first portion is smaller than the second portion: the third interface is the interface presented after successful unlocking (that is, an interface that has entered the desktop system of the second electronic device and can present a plurality of display icons), so while displaying it the second electronic device can respond to all trigger instructions in the preset instruction set, such as start, edit, delete, uninstall and view instructions; the fourth interface is the unlocking interface entered through a specific locking operation, so while displaying it the second electronic device can respond only to a specific unlocking instruction. On this basis, the first portion is larger than the second portion.
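The lock/unlock scenario discussed above, in which one state accepts only a subset of the preset instruction set and the other state accepts the full set, can be sketched as follows (the concrete instruction set and all names are assumptions introduced for illustration):

```python
# Sketch of the lock/unlock scenario (names assumed): in the first state
# (unlocking interface) only the specific unlock instruction in the preset
# instruction set is accepted; in the second state (successfully unlocked)
# every instruction in the set can be responded to.

PRESET_INSTRUCTION_SET = {"unlock", "start", "edit", "delete", "uninstall", "view"}

ALLOWED_BY_STATE = {
    "first": {"unlock"},                 # first part of the instruction set
    "second": PRESET_INSTRUCTION_SET,    # second part: all trigger instructions
}


def respond(state, instruction):
    """Respond only if the instruction is allowed in the current state."""
    if instruction in ALLOWED_BY_STATE[state]:
        return f"responding to '{instruction}' in {state} state"
    return f"'{instruction}' ignored in {state} state"


if __name__ == "__main__":
    print(respond("first", "delete"))   # ignored while the unlock interface is shown
    print(respond("first", "unlock"))   # only the unlock instruction is handled
    print(respond("second", "delete"))  # any instruction after successful unlock
```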
By adopting the technical solution of the embodiment of the invention, the first electronic device can switch from the locked state to the successfully unlocked state, or from the successfully unlocked state to the locked state, by recognizing the motion trajectory of a human body part of the user and matching it against preset path data. Moreover, the motion trajectory of the human body part may be the motion trajectory of the user's eyes, that is, the first electronic device can switch its own state by recognizing the trajectory of the user's eye movement. In a third aspect, the first electronic device can trigger a state switch of the second electronic device, for example from the locked state to the successfully unlocked state or from the successfully unlocked state to the locked state, so that the user can trigger the state switch of the second electronic device through the first electronic device (i.e., the wearable electronic device) without performing a gesture operation on the second electronic device, which greatly improves the security of the second electronic device.
EXAMPLE five
The embodiment of the invention also provides a first electronic device. The first electronic device includes a bracket and a detection unit arranged on the bracket; the bracket is used for maintaining the relative positional relationship between the first electronic device and the head of the user of the first electronic device. FIG. 6 is a schematic diagram of a first structure of the first electronic device according to an embodiment of the invention; as shown in FIG. 6, the first electronic device further includes: a control unit 63, an acquisition unit 61, and a determination unit 62; wherein,
the control unit 63 is configured to control the first electronic device to be in a first state when a first preset condition is met, and is further configured to control the first electronic device to switch to a second state when the determination unit 62 determines that the first path data matches first preset path data;
the acquisition unit 61 is configured to acquire the first information through the detection unit; the first information represents motion information of a human body part of the user of the first electronic device;
the determination unit 62 is configured to determine first path data based on the first information acquired by the acquisition unit 61; the first path data represents a motion trajectory of a human body part of the user of the first electronic device.
Specifically, the first electronic device can respond to a first part of instructions in a preset instruction set when being in the first state; the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state. Wherein preferably the first portion is smaller than the second portion.
In this embodiment, the motion information of the human body part of the user of the first electronic device represented by the first information is: the first information represents the movement information of the eyeball of the first electronic equipment user; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a movement locus of an eyeball of a user of the first electronic equipment.
Or, the first information represents motion information of a human body part of the user of the first electronic device, and is: the first information represents motion information of a hand of a user of the first electronic equipment; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a motion trajectory of a hand of a user of the first electronic device.
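For illustration only, the cooperation of the units in this embodiment might be organized as in the following Python sketch; the class and method names are assumptions and the matching logic is deliberately simplified:

```python
# Rough sketch (assumed class and method names) of how a control unit, an
# acquisition unit and a determination unit could cooperate as described
# above; not the patented implementation.

class AcquisitionUnit:
    def __init__(self, detection_unit):
        self.detection_unit = detection_unit

    def acquire_first_information(self):
        # Returns raw motion samples of the user's eyeball or hand.
        return self.detection_unit()


class DeterminationUnit:
    @staticmethod
    def determine_first_path_data(first_information):
        # Reduce raw samples to a trajectory (consecutive duplicates removed).
        path = []
        for point in first_information:
            if not path or path[-1] != point:
                path.append(point)
        return path


class ControlUnit:
    def __init__(self, first_preset_path_data):
        self.state = None
        self.first_preset_path_data = first_preset_path_data

    def enter_first_state(self):
        self.state = "first"

    def maybe_switch_to_second_state(self, first_path_data):
        if first_path_data == self.first_preset_path_data:
            self.state = "second"
        return self.state


if __name__ == "__main__":
    detection_unit = lambda: [(0, 0), (1, 1), (1, 1), (2, 2)]   # stub detector
    acq = AcquisitionUnit(detection_unit)
    det = DeterminationUnit()
    ctrl = ControlUnit(first_preset_path_data=[(0, 0), (1, 1), (2, 2)])

    ctrl.enter_first_state()
    info = acq.acquire_first_information()
    path = det.determine_first_path_data(info)
    print(ctrl.maybe_switch_to_second_state(path))   # 'second'
```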
It should be understood by those skilled in the art that the functions of each processing unit in the first electronic device according to the embodiment of the present invention may be understood by referring to the description of the foregoing control method, and that each processing unit may be implemented by an analog circuit that realizes the functions described in this embodiment, or by running, on an intelligent terminal, software that performs the functions described in this embodiment.
EXAMPLE six
An embodiment of the present invention further provides a first electronic device. The first electronic device includes a bracket and a detection unit arranged on the bracket; the bracket is used for maintaining the relative positional relationship between the first electronic device and the head of the user of the first electronic device. FIG. 7 is a schematic diagram of a second structure of the first electronic device according to an embodiment of the invention; as shown in FIG. 7, the first electronic device further includes: a control unit 63, an acquisition unit 61, a determination unit 62, and a sending unit 64; wherein,
the control unit 63 is configured to control the first electronic device to be in a first state when a first preset condition is met, and is further configured to control the first electronic device to switch to a second state when the determination unit 62 determines that the first path data matches first preset path data;
the acquisition unit 61 is configured to acquire the first information through the detection unit; the first information represents motion information of a human body part of the user of the first electronic device;
the determination unit 62 is configured to determine first path data based on the first information acquired by the acquisition unit 61; the first path data represents a motion trajectory of a human body part of the user of the first electronic device;
the sending unit 64 is configured to send a first instruction to the second electronic device to control the second electronic device to execute a predetermined operation when the determination unit 62 determines that the first path data matches the first preset path data.
Specifically, the first electronic device can respond to a first part of instructions in a preset instruction set when being in the first state; the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state. Wherein preferably the first portion is smaller than the second portion.
In this embodiment, the motion information of the human body part of the user of the first electronic device represented by the first information is: the first information represents the movement information of the eyeball of the first electronic equipment user; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a movement locus of an eyeball of a user of the first electronic equipment.
Or, the first information represents motion information of a human body part of the user of the first electronic device, and is: the first information represents motion information of a hand of a user of the first electronic equipment; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a motion trajectory of a hand of a user of the first electronic device.
It should be understood by those skilled in the art that the functions of each processing unit in the first electronic device according to the embodiment of the present invention may be understood by referring to the description of the foregoing control method, and that each processing unit may be implemented by an analog circuit that realizes the functions described in this embodiment, or by running, on an intelligent terminal, software that performs the functions described in this embodiment.
EXAMPLE seven
An embodiment of the present invention further provides a first electronic device. The first electronic device includes a bracket and a detection unit arranged on the bracket; the bracket is used for maintaining the relative positional relationship between the first electronic device and the head of the user of the first electronic device. FIG. 8 is a schematic diagram of a third structure of the first electronic device according to an embodiment of the present invention; as shown in FIG. 8, the first electronic device further includes: a control unit 63, an acquisition unit 61, a determination unit 62, a sending unit 64, and a receiving unit 65; wherein,
the receiving unit 65 is configured to receive specific display content sent by the second electronic device;
the control unit 63 is configured to determine that a first preset condition is met when the receiving unit 65 receives the specific display content sent by the second electronic device, and to control the first electronic device to be in a first state; the control unit 63 is further configured to control the first electronic device to switch to a second state when the determination unit 62 determines that the first path data matches first preset path data;
the acquisition unit 61 is configured to acquire the first information through the detection unit; the first information represents motion information of a human body part of the user of the first electronic device;
the determination unit 62 is configured to determine first path data based on the first information acquired by the acquisition unit 61; the first path data represents a motion trajectory of a human body part of the user of the first electronic device;
the sending unit 64 is configured to send a first instruction to the second electronic device when the determination unit 62 determines that the first path data matches first preset path data, so as to control the second electronic device to switch from a first state to a second state according to the first instruction; wherein,
the second electronic device is switched from the first state to the second state, and can respond to a first part of instructions in a preset instruction set when being in the first state; the second electronic equipment can respond to a second part of instructions in the preset instruction set when in a second state; the first portion is smaller than the second portion;
or, the second electronic device is switched from the first state to the second state, where the second electronic device can respond to a first part of instructions in a preset instruction set when in the first state, and can respond to a second part of instructions in the preset instruction set when in the second state; the first portion is larger than the second portion. Specifically, the control unit 63 controlling the first electronic device to be in the first state includes: generating first display content from the specific display content according to a preset generation mode, and controlling a first light beam to be emitted to the eyes of the user of the first electronic device so that the user perceives a first interface; the display content of the first interface is the first display content.
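The flow of this embodiment can be illustrated with the following Python sketch (all names, the "preset generation mode", and the instruction encoding are hypothetical assumptions, not the patented implementation): the first device treats received specific display content as the first preset condition, shows the first interface, and on a successful path match sends the first instruction that switches the second device to its second state.

```python
# Sketch of the seventh embodiment's flow under assumed names: receiving
# specific display content satisfies the first preset condition; the first
# device derives first display content, shows the first interface, and, once
# the first path data matches the first preset path data, sends a first
# instruction that switches the second device from its first to its second state.

def generate_first_display_content(specific_display_content):
    """Assumed 'preset generation mode': e.g. wrap the content as an unlock prompt."""
    return f"UNLOCK PROMPT: {specific_display_content}"


def first_device_handle(specific_display_content, first_path_data, first_preset_path_data, send):
    first_display_content = generate_first_display_content(specific_display_content)
    # Projecting the first light beam is modelled here as printing the interface.
    print("first interface:", first_display_content)
    if first_path_data == first_preset_path_data:
        send("first instruction")          # e.g. sent via the transceiving antenna


def second_device_receive(instruction, second_device):
    if instruction == "first instruction":
        second_device["state"] = "second"  # e.g. locked -> unlocked


if __name__ == "__main__":
    second_device = {"state": "first"}
    first_device_handle(
        specific_display_content="incoming message preview",
        first_path_data=[(0, 0), (1, 1)],
        first_preset_path_data=[(0, 0), (1, 1)],
        send=lambda instr: second_device_receive(instr, second_device),
    )
    print("second device state:", second_device["state"])   # 'second'
```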
Specifically, the first electronic device can respond to a first part of instructions in a preset instruction set when being in the first state; the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state. Wherein preferably the first portion is smaller than the second portion.
In this embodiment, the motion information of the human body part of the user of the first electronic device represented by the first information is: the first information represents the movement information of the eyeball of the first electronic equipment user; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a movement locus of an eyeball of a user of the first electronic equipment.
Or, the first information represents motion information of a human body part of the user of the first electronic device, and is: the first information represents motion information of a hand of a user of the first electronic equipment; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a motion trajectory of a hand of a user of the first electronic device.
It should be understood by those skilled in the art that the functions of each processing unit in the first electronic device according to the embodiment of the present invention may be understood by referring to the description of the foregoing control method, and that each processing unit may be implemented by an analog circuit that realizes the functions described in this embodiment, or by running, on an intelligent terminal, software that performs the functions described in this embodiment.
EXAMPLE eight
An embodiment of the present invention further provides a first electronic device. The first electronic device includes a bracket and a detection unit arranged on the bracket; the bracket is used for maintaining the relative positional relationship between the first electronic device and the head of the user of the first electronic device. The first electronic device further includes: the control unit 63, the acquisition unit 61 and the determination unit 62, and/or a sending unit; when the sending unit is included in this embodiment, the structure may be as shown in FIG. 7, and when the sending unit is not included, the structure may be as shown in FIG. 6; wherein,
the control unit 63 is configured to control the first electronic device to be in a first state when a first preset condition is met, and is further configured to control the first electronic device to switch to the second state when the determination unit 62 determines that the first part of the first path data matches the first part of the first preset path data and the second part of the first path data matches the first part of the second preset path data of the second electronic device;
the acquisition unit 61 is configured to acquire the first information through the detection unit; the first information represents motion information of a human body part of the user of the first electronic device;
the determination unit 62 is configured to determine first path data based on the first information acquired by the acquisition unit 61; the first path data represents a motion trajectory of a human body part of the user of the first electronic device;
and/or, the sending unit is configured to send a first instruction to the second electronic device when the determination unit 62 determines that the first part of the first path data matches the first part of the first preset path data and the second part of the first path data matches the first part of the second preset path data of the second electronic device, so that the second electronic device switches to a second state according to the first instruction; the second electronic device can respond to a second part of instructions in the preset instruction set when in the second state.
Specifically, the first electronic device can respond to a first part of instructions in a preset instruction set when being in the first state; the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state. Wherein preferably the first portion is smaller than the second portion.
In this embodiment, the motion information of the human body part of the user of the first electronic device represented by the first information is: the first information represents the movement information of the eyeball of the first electronic equipment user; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a movement locus of an eyeball of a user of the first electronic equipment.
Or, the first information represents motion information of a human body part of the user of the first electronic device, and is: the first information represents motion information of a hand of a user of the first electronic equipment; correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a motion trajectory of a hand of a user of the first electronic device.
It should be understood by those skilled in the art that the functions of each processing unit in the first electronic device according to the embodiment of the present invention may be understood by referring to the description of the foregoing control method, and that each processing unit may be implemented by an analog circuit that realizes the functions described in this embodiment, or by running, on an intelligent terminal, software that performs the functions described in this embodiment.
In the fifth to eighth embodiments of the present invention, the first electronic device may in practice be implemented by a wearable electronic device such as smart glasses. The control unit 63, the acquisition unit 61, and the determination unit 62 in the first electronic device may in practice be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA) in the first electronic device; the sending unit 64 and the receiving unit 65 in the first electronic device may in practice be implemented by a transceiving antenna in the first electronic device.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (16)
1. An information processing method, applied to a first electronic device; the first electronic device includes a bracket and a detection unit arranged on the bracket; the first electronic device can maintain its relative positional relationship with the head of the user of the first electronic device through the bracket; the method comprises the following steps:
when a first preset condition is met, controlling the first electronic equipment to be in a first state;
acquiring first information through the detection unit; the first information represents motion information of a human body part of a user of the first electronic equipment;
determining first path data based on the first information; the first path data represents a motion trajectory of a human body part of a user of the first electronic device;
and when the first path data matches first preset path data, controlling the first electronic device to be switched to a second state.
2. The method of claim 1, wherein the first electronic device is capable of responding to a first portion of a preset set of instructions when in the first state;
the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state.
3. The method of claim 2, wherein the first portion is smaller than the second portion.
4. The method of claim 1, wherein the first information represents motion information of a human body part of a user of the first electronic device, and is: the first information represents the movement information of the eyeball of the first electronic equipment user;
correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a movement locus of an eyeball of a user of the first electronic equipment.
5. The method of claim 1, wherein the first information represents motion information of a human body part of a user of the first electronic device, and is: the first information represents motion information of a hand of a user of the first electronic equipment;
correspondingly, the first path data represents a motion trajectory of a human body part of the user of the first electronic device, and includes: the first path data represents a motion trajectory of a hand of a user of the first electronic device.
6. The method of claim 1, wherein the first electronic device is further capable of communicating with a second electronic device, and when the first path data matches the first preset path data, the method further comprises:
and sending a first instruction to the second electronic equipment to control the second electronic equipment to execute a preset operation.
7. The method of claim 6, wherein the controlling the second electronic device to perform a predetermined operation comprises: controlling the second electronic device to switch from a first state to a second state according to the first instruction; wherein,
the second electronic device is switched from the first state to the second state, and can respond to a first part of instructions in a preset instruction set when being in the first state; the second electronic equipment can respond to a second part of instructions in the preset instruction set when in a second state; the first portion is smaller than the second portion;
or, the second electronic device is switched from the first state to the second state, and when the second electronic device is in the first state, the second electronic device can respond to a first part of instructions in a preset instruction set; the second electronic equipment can respond to a second part of instructions in the preset instruction set when in a second state; the first portion is larger than the second portion.
8. The method according to claim 6, wherein meeting the first preset condition comprises: when specific display content sent by the second electronic device is received, determining that the first preset condition is met;
correspondingly, the controlling the first electronic device to be in the first state includes:
generating first display content from the specific display content according to a preset generation mode, and controlling a first light beam to be emitted to the eyes of a first electronic equipment user so that the first electronic equipment user can perceive a first interface; the display content of the first interface is the first display content.
9. The method of claim 1, wherein the first electronic device is further capable of communicating with a second electronic device, and wherein the second electronic device is in a first state; the second electronic equipment can respond to a first part of instructions in a preset instruction set when in the first state; the method further comprises the following steps:
when the first part of the first path data matches the first part of the first preset path data, and the second part of the first path data matches the first part of the second preset path data of the second electronic device, controlling the first electronic device to be switched to a second state;
and/or sending a first instruction to the second electronic device to enable the second electronic device to be switched to a second state according to the first instruction; and the second electronic equipment can respond to a second part of instructions in the preset instruction set when in the second state.
10. A first electronic device, comprising a bracket and a detection unit arranged on the bracket, the bracket being used for maintaining the relative positional relationship between the first electronic device and the head of a user of the first electronic device; the first electronic device further includes a control unit, an acquisition unit and a determining unit; wherein,
the control unit is used for controlling the first electronic device to be in a first state when a first preset condition is met, and is further used for controlling the first electronic device to be switched to a second state when the determining unit determines that the first path data matches first preset path data;
the acquisition unit is used for acquiring first information through the detection unit; the first information represents motion information of a human body part of a user of the first electronic equipment;
the determining unit is used for determining first path data based on the first information acquired by the acquiring unit; the first path data represents a motion trajectory of a human body part of a user of the first electronic device.
11. The first electronic device according to claim 10, further comprising a sending unit, configured to send a first instruction to the second electronic device to control the second electronic device to perform a predetermined operation when the first path data matches first preset path data.
12. The first electronic device according to claim 11, wherein the sending unit is configured to send a first instruction to the second electronic device to control the second electronic device to switch from the first state to the second state according to the first instruction when the first path data matches the first preset path data.
13. The first electronic device according to claim 11, wherein the first electronic device further comprises a receiving unit configured to receive specific display content transmitted by the second electronic device;
the control unit is used for determining that a first preset condition is met when the receiving unit receives the specific display content sent by the second electronic device, and is further used for generating first display content from the specific display content according to a preset generation mode and controlling the emitted first light beam to enter the eyes of the user of the first electronic device so that the user perceives a first interface; the display content of the first interface is the first display content.
14. The first electronic device of claim 10, wherein the first electronic device further comprises a transmitting unit; the second electronic device is in a first state; the second electronic equipment can respond to a first part of instructions in a preset instruction set when in the first state;
the control unit is used for controlling the first electronic equipment to be switched to a second state when the first part of the first path data is matched with the first part of the first preset path data and the second part of the first path data is matched with the first part of the second preset path data of the second electronic equipment;
and/or the sending unit is configured to send a first instruction to the second electronic device when the first part of the first path data matches the first part of the first preset path data, and the second part of the first path data matches the first part of the second preset path data of the second electronic device, so that the second electronic device switches to a second state according to the first instruction; and the second electronic device can respond to a second part of instructions in the preset instruction set when in the second state.
15. The first electronic device of claim 10, wherein the first information represents motion information of an eyeball of the user of the first electronic device; or, the first information represents motion information of a hand of the user of the first electronic device;
correspondingly, the first path data represents a motion track of an eyeball of the first electronic device user, or the first path data represents a motion track of a hand of the first electronic device user.
16. The first electronic device of claim 10, wherein the first electronic device is capable of responding to a first portion of a preset set of instructions when in the first state; the first electronic device can respond to a second part of instructions in the preset instruction set when in the second state.