CN116048243A - Display method and electronic equipment - Google Patents
- Publication number
- CN116048243A (application number CN202210761048.9A)
- Authority
- CN
- China
- Prior art keywords
- interface
- image
- user
- terminal
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Ophthalmology & Optometry (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a display method. The method can be applied to terminal devices such as mobile phones and tablet computers. By implementing the method, the terminal device can display one or more shortcut windows on the unlocked main interface and/or on the interface to be unlocked. Each shortcut window is associated with a commonly used interface set by the user. When it detects that the user is gazing at a shortcut window, the terminal device may display the commonly used interface associated with that window, such as a payment interface or a health code interface. In this way, the user can quickly obtain the information in the commonly used interface without any touch operation.
Description
Technical Field
The application relates to the field of terminals, in particular to a display method and electronic equipment.
Background
With the proliferation of mobile terminals and the maturing of communication technology, people have begun to explore novel human-computer interaction modes, such as voice control and gesture recognition, that do not rely on a mouse and keyboard, in order to provide users with a more diversified and convenient interaction experience.
Disclosure of Invention
The embodiment of the application provides a display method. By implementing the method, the terminal device can detect the area of the screen at which the user is gazing and then display the interface corresponding to that area. Thus, the user can quickly obtain the information in the interface without touch operation.
In a first aspect, the present application provides a display method applied to an electronic device. The electronic device includes a screen, and the screen includes a first preset area. The method includes: displaying a first interface; acquiring a first image while the first interface is displayed; determining a first eyeball gaze area of the user based on the first image, the first eyeball gaze area indicating the screen area at which the user gazes when looking at the screen; and displaying a second interface when the first eyeball gaze area is within the first preset area.
By implementing the method provided in the first aspect, the electronic device may acquire an image while displaying an interface and use it to determine the user's eyeball gaze area. When it is determined from the image that the user is gazing at a certain preset area, the electronic device may display the interface associated with that area. Thus, the user can control the electronic device to display a particular interface by gaze alone, and thereby quickly obtain the service or information that the interface provides.
With reference to the method provided in the first aspect, in some embodiments, the screen of the electronic device includes a second preset area different from the first preset area, and the method further includes: determining a second eyeball gaze area of the user based on the first image, the position of the second eyeball gaze area on the screen being different from that of the first eyeball gaze area; and displaying a third interface, different from the second interface, when the second eyeball gaze area is within the second preset area.
By implementing the method provided by this embodiment, the electronic device can divide the screen into a plurality of preset areas, each corresponding to one interface. When the electronic device detects which area the user is gazing at, it may display the interface corresponding to that area. Thus, the user can quickly control the electronic device to display different interfaces by gazing at different screen areas.
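As an illustrative sketch only (not part of the claimed method), the mapping from preset screen areas to associated interfaces can be expressed as follows in Python; the region coordinates, interface names, and the open_interface callback are assumptions made purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Region:
        left: int
        top: int
        right: int
        bottom: int

        def contains(self, x: int, y: int) -> bool:
            return self.left <= x < self.right and self.top <= y < self.bottom

    # One preset area per associated interface; coordinates are placeholders
    # for an assumed 1080 x 2340 screen.
    PRESET_AREAS = {
        "second_interface": Region(720, 0, 1080, 360),    # upper right corner
        "third_interface": Region(0, 1980, 360, 2340),    # lower left corner
    }

    def dispatch_gaze(gaze_x: int, gaze_y: int, open_interface) -> bool:
        """Open the interface whose preset area contains the gaze point."""
        for interface, area in PRESET_AREAS.items():
            if area.contains(gaze_x, gaze_y):
                open_interface(interface)
                return True
        return False

    # A gaze point in the upper right corner triggers the second interface.
    dispatch_gaze(900, 100, open_interface=print)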
In some embodiments, the second interface and the third interface are interfaces provided by the same application or the second interface and the third interface are interfaces provided by different applications.
With reference to the method provided in the first aspect, in some embodiments, the method further includes: displaying a fourth interface; acquiring a second image while the fourth interface is displayed; determining a third eyeball gaze area of the user based on the second image; and displaying a fifth interface, different from the second interface, when the third eyeball gaze area is within the first preset area.
By implementing the method provided by this embodiment, the same screen area can be associated with different interfaces when the electronic device displays different main interfaces. For example, the upper right corner area may be associated with a payment interface on the first desktop and with a ride interface on the second desktop. In this way, the user can associate more interfaces with screen areas, better meeting the need to open interfaces by gaze.
With reference to the method provided in the first aspect, in some embodiments, displaying the second interface when the first eyeball gaze area is within the first preset area specifically includes: displaying the second interface when the first eyeball gaze area is within the first preset area and the duration of gazing at the first preset area is a first duration.
By implementing the method provided by this embodiment, the electronic device can also monitor the user's gaze duration while detecting the eyeball gaze area. When the gaze duration reaches the preset duration, the electronic device displays the corresponding interface.
With reference to the method provided in the first aspect, in some embodiments, the method further includes: displaying a sixth interface when the first eyeball gaze area is within the first preset area and the duration of gazing at the first preset area is a second duration.
By implementing the method provided by this embodiment, the electronic device can associate one screen area with a plurality of interfaces and determine which interface to display according to the user's gaze duration.
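A minimal sketch of this duration-based selection is given below; the threshold values and the interface names are illustrative assumptions, not values specified by the application.

    FIRST_DURATION = 2.0   # seconds of gaze mapped to the second interface (assumed value)
    SECOND_DURATION = 3.0  # seconds of gaze mapped to the sixth interface (assumed value)

    def select_interface(gaze_seconds: float):
        if gaze_seconds >= SECOND_DURATION:
            return "sixth_interface"
        if gaze_seconds >= FIRST_DURATION:
            return "second_interface"
        return None  # gaze too short; keep waiting

    assert select_interface(1.0) is None
    assert select_interface(2.4) == "second_interface"
    assert select_interface(3.5) == "sixth_interface"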
With reference to the method provided in the first aspect, in some embodiments, the first eyeball gaze area is a cursor point formed by one display unit on the screen, or a cursor point or cursor area formed by a plurality of display units on the screen.
In combination with the method provided in the first aspect, in some embodiments, the second interface is a non-privacy interface, and the method further includes: displaying an interface to be unlocked; acquiring a third image while the interface to be unlocked is displayed; determining a fourth eyeball gaze area of the user based on the third image; and displaying the second interface when the fourth eyeball gaze area is within the first preset area.
In combination with the method provided in the first aspect, in some embodiments, the third interface is a privacy interface, and the method further includes: not displaying the third interface when the fourth eyeball gaze area is within the second preset area.
By implementing the method provided by this embodiment, the electronic device can also set a privacy type for each associated interface. When the associated interface is a non-privacy interface, the electronic device may, in the locked state, display it directly without unlocking after identifying that the user is gazing at the corresponding screen area, so the user can obtain the non-privacy interface more quickly. When the associated interface is a privacy interface, the electronic device does not display it in the locked state even if it identifies that the user is gazing at the corresponding screen area. In this way, the electronic device provides shortcut services while avoiding privacy leakage, improving the user experience.
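The privacy check described above can be sketched as follows; the data structure and names are assumptions chosen for illustration, matching the example in which the second interface is non-privacy and the third interface is privacy.

    # Whether each associated interface may be shown before unlocking.
    ASSOCIATED_INTERFACES = {
        "second_interface": {"private": False},  # non-privacy: allowed on the interface to be unlocked
        "third_interface": {"private": True},    # privacy: requires unlocking first
    }

    def on_gaze(interface: str, unlocked: bool) -> str:
        if ASSOCIATED_INTERFACES[interface]["private"] and not unlocked:
            return "ignore gaze"  # do not reveal private content before unlocking
        return "display " + interface

    print(on_gaze("second_interface", unlocked=False))  # display second_interface
    print(on_gaze("third_interface", unlocked=False))   # ignore gaze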
With reference to the method provided in the first aspect, in some embodiments, the second interface and the third interface are both privacy interfaces, and the electronic device does not enable the camera to collect images while the interface to be unlocked is displayed.
By implementing the method provided by this embodiment, when all the associated interfaces are privacy interfaces, the electronic device does not enable the camera in the locked state, thereby saving power.
In some embodiments, a first control is displayed in the second preset area of the first interface, where the first control is used to indicate that the second preset area is associated with the third interface.
By implementing the method provided by this embodiment, while detecting the user's eyeball gaze area, the electronic device can display a prompt control in each preset area that has an associated interface. The prompt control indicates to the user that the area has an associated interface, as well as the service or information that the interface can provide. Thus, the user can intuitively see which areas have associated interfaces and what each interface provides, and decide which preset area to gaze at in order to open the corresponding interface.
In combination with the method provided in the first aspect, in some embodiments, the first control is not displayed in the second preset area of the interface to be unlocked.
By implementing the method provided by this embodiment, when the interface associated with a preset area is a privacy interface, the electronic device does not display the prompt control indicating that interface in the locked state, preventing the user from gazing at the preset area to no effect.
In combination with the method provided in the first aspect, in some embodiments, the first control is any one of the following: a thumbnail of the first interface, an icon of the application program corresponding to the first interface, or a function icon indicating a service provided by the first interface.
With reference to the method provided in the first aspect, in some embodiments, the duration of image collection by the electronic device is a first preset duration; the electronic device collecting the first image specifically means: the electronic device collects the first image within the first preset duration.
By implementing the method provided by this embodiment, the terminal device does not detect the user's eyeball gaze area continuously, but only within a preset period, which saves power and also avoids the impact of camera misuse on the user's information security.
With reference to the method provided in the first aspect, in some embodiments, the first preset duration is the first 3 seconds of displaying the first interface.
By implementing the method provided by this embodiment, the terminal device can detect the user's eyeball gaze area in the first 3 seconds of displaying the first interface and determine whether the user is gazing at a preset area of the screen. This meets the user's needs in most scenarios while keeping power consumption as low as possible.
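The bounded collection window can be sketched as follows; capture_frame and detect_gaze_area are hypothetical callbacks standing in for the camera pipeline and the recognition model, and the 3-second value follows the example above.

    import time

    GAZE_RECOGNITION_SECONDS = 3.0  # the first preset duration in the example above

    def run_gaze_window(capture_frame, detect_gaze_area):
        deadline = time.monotonic() + GAZE_RECOGNITION_SECONDS
        while time.monotonic() < deadline:
            frame = capture_frame()
            area = detect_gaze_area(frame)
            if area is not None:
                return area  # the user gazed at a preset area within the window
        return None          # window elapsed; the caller turns the camera module off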
In combination with the method provided in the first aspect, in some embodiments, the electronic device collects the first image through a camera module, and the camera module includes at least one 2D camera and at least one 3D camera, the 2D camera being used to obtain a two-dimensional image and the 3D camera being used to obtain an image including depth information; the first image includes a two-dimensional image and an image including depth information.
By implementing the method provided by this embodiment, the camera module of the terminal device may include a plurality of cameras, including at least one 2D camera and at least one 3D camera. In this way, the terminal device can obtain both a two-dimensional image and a three-dimensional image indicating the user's eye gaze position. Combining the two types of images helps improve the precision and accuracy with which the terminal device identifies the user's eyeball gaze area.
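A small sketch of the per-frame data implied by this embodiment, pairing the 2D camera's image with the 3D camera's depth map; the field names and resolutions are illustrative assumptions.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GazeFrame:
        rgb: np.ndarray    # H x W x 3 image from the 2D camera
        depth: np.ndarray  # H x W depth map from the 3D (e.g. TOF) camera
        timestamp_ms: int

    frame = GazeFrame(rgb=np.zeros((480, 640, 3), np.uint8),
                      depth=np.zeros((480, 640), np.uint16),
                      timestamp_ms=0)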
In some embodiments, the first image collected by the camera module is stored in a secure data buffer, and before determining the first eyeball gaze area of the user based on the first image, the method further includes: obtaining the first image from the secure data buffer in a trusted execution environment.
By implementing the method provided by this embodiment, the terminal device stores the image collected by the camera module in a secure data buffer before processing it. The image data in the secure data buffer can only be transmitted to the eyeball gaze recognition algorithm through a secure transmission channel provided by a secure service, which improves the security of the image data.
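The secure-buffer flow can be sketched conceptually as follows; real devices rely on platform-specific trusted execution environment and secure-camera facilities, so the classes and methods here are hypothetical stand-ins rather than an actual API.

    class SecureDataBuffer:
        """Stands in for a hardware-layer buffer that ordinary (non-trusted) code cannot read."""

        def __init__(self):
            self._frames = []

        def write(self, frame):
            # Called by the camera pipeline inside the secure path.
            self._frames.append(frame)

        def read_in_tee(self):
            # Modeled as reachable only through the secure transmission channel.
            return self._frames.pop(0)

    def recognize_gaze_securely(buffer: SecureDataBuffer, gaze_model):
        # The image reaches the gaze recognition algorithm without leaving the secure path.
        frame = buffer.read_in_tee()
        return gaze_model(frame)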
With reference to the method provided in the first aspect, in some embodiments, the secure data buffer is provided at the hardware layer of the electronic device.
In combination with the method provided in the first aspect, in some embodiments, determining the first eyeball gaze area of the user based on the first image specifically includes: determining feature data of the first image, the feature data including one or more of a left eye image, a right eye image, a face image, and face grid data; and determining the first eyeball gaze area indicated by the feature data using an eyeball gaze recognition model, where the eyeball gaze recognition model is built on a convolutional neural network.
By implementing the method provided by this embodiment, the terminal device can extract the left eye image, the right eye image, the face image, and the face grid data from the two-dimensional and three-dimensional images collected by the camera module, thereby extracting more features and improving recognition precision and accuracy.
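As an illustration of a multi-branch convolutional model over these four inputs, a PyTorch sketch is given below; the layer sizes, input resolutions, and the two-coordinate output are assumptions, and the actual model of this application (see Fig. 9 and Fig. 12) may differ.

    import torch
    import torch.nn as nn

    class EyeGazeNet(nn.Module):
        def __init__(self, grid_size: int = 25):
            super().__init__()

            def conv_branch():
                return nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # 32 * 4 * 4 = 512 features
                )

            self.left_eye = conv_branch()
            self.right_eye = conv_branch()
            self.face = conv_branch()
            self.grid = nn.Sequential(nn.Flatten(),
                                      nn.Linear(grid_size * grid_size, 64), nn.ReLU())
            self.head = nn.Sequential(
                nn.Linear(512 * 3 + 64, 128), nn.ReLU(),
                nn.Linear(128, 2),  # (x, y) gaze point on the screen
            )

        def forward(self, left, right, face, grid):
            feats = torch.cat([self.left_eye(left), self.right_eye(right),
                               self.face(face), self.grid(grid)], dim=1)
            return self.head(feats)

    # Eye and face crops as 3 x 64 x 64 images, face grid as a 25 x 25 occupancy mask.
    model = EyeGazeNet()
    gaze = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                 torch.rand(1, 3, 64, 64), torch.rand(1, 1, 25, 25))
    print(gaze.shape)  # torch.Size([1, 2])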
With reference to the method provided in the first aspect, in some embodiments, determining the feature data of the first image specifically includes: performing face correction on the first image to obtain a face-corrected first image; and determining the feature data of the first image based on the face-corrected first image.
By implementing the method provided by this embodiment, the terminal device can correct the face in the image collected by the camera module before extracting the left eye image, the right eye image, and the face image, thereby improving the accuracy of those images.
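One common form of face correction is to rotate the image so that the line between the two eye landmarks is horizontal before cropping the eye and face patches; the sketch below (using OpenCV) is an assumption about such a step, not the exact procedure of Fig. 10.

    import math
    import cv2
    import numpy as np

    def correct_face(image: np.ndarray, left_eye, right_eye) -> np.ndarray:
        (lx, ly), (rx, ry) = left_eye, right_eye
        angle = math.degrees(math.atan2(ry - ly, rx - lx))  # roll angle of the face
        center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        h, w = image.shape[:2]
        return cv2.warpAffine(image, rot, (w, h))

    # Example with a dummy image and hypothetical eye landmark coordinates.
    aligned = correct_face(np.zeros((480, 640, 3), np.uint8), (250, 210), (390, 230))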
In some embodiments, the first interface is any one of a first desktop, a second desktop, or a negative one screen; the fourth interface is any one of the first desktop, the second desktop, or the negative one screen, and is different from the first interface.
By implementing the method provided by this embodiment, each main interface of the terminal device, such as the first desktop, the second desktop, and the negative one screen, can be provided with its own preset screen areas and associated interfaces. Different main interfaces may also reuse the same preset screen area.
In combination with the method provided in the first aspect, in some embodiments, the associations between the first preset area and the second and fifth interfaces are set by the user.
By implementing the method provided by this embodiment, the user can set the interfaces associated with the preset screen areas of each main interface through interfaces provided by the electronic device, so as to meet personalized needs.
In a second aspect, the present application provides an electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories being operable to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method as described in the first aspect and any possible implementation of the first aspect.
In a third aspect, embodiments of the present application provide a chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
It will be appreciated that the electronic device provided in the second aspect, the chip system provided in the third aspect, the computer storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all configured to perform the method provided in the present application. Therefore, for the advantageous effects they achieve, reference may be made to those of the corresponding method, which are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an eye gaze position according to an embodiment of the present application;
FIGS. 2A-2I are a set of user interfaces provided by embodiments of the present application;
FIGS. 3A-3E are a set of user interfaces provided by embodiments of the present application;
FIGS. 4A-4D are a set of user interfaces provided by embodiments of the present application;
FIGS. 5A-5M are a set of user interfaces provided by embodiments of the present application;
FIGS. 6A-6I are a set of user interfaces provided by embodiments of the present application;
FIGS. 7A-7C are a set of user interfaces provided by embodiments of the present application;
FIG. 8 is a flowchart of a display method provided in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an eyeball gaze recognition model according to an embodiment of the present application;
FIG. 10 is a flowchart of a face correction method provided in an embodiment of the present application;
FIGS. 11A-11C are schematic diagrams illustrating a set of face correction methods according to embodiments of the present application;
FIG. 12 is a diagram of the convolutional network of an eyeball gaze recognition model provided in an embodiment of the present application;
FIG. 13 is a schematic illustration of a separable convolution technique provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of the system structure of a terminal 100 according to an embodiment of the present application;
FIG. 15 is a schematic diagram of the hardware structure of the terminal 100 according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The embodiment of the application provides a display method. The method can be applied to terminal devices such as mobile phones and tablet computers. A terminal device such as a mobile phone or tablet computer that implements the above method may be referred to as the terminal 100, and the following embodiments use the terminal 100 to refer to such devices.
The terminal 100 may be, without limitation, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device; the specific type of terminal is not particularly limited in the embodiments of the present application.
In the display method provided in the embodiment of the present application, the terminal 100 may display a shortcut window on the unlocked main interface. The shortcut window may display an application program frequently used by the user, for example the icon of the application, its main interface, or a commonly used interface within it. A commonly used interface is a page that the user opens frequently. After detecting that unlocking has succeeded, the terminal 100 may detect the user's eye gaze position. When it detects that the eye gaze position is within the shortcut window area, the terminal 100 may display the main interface or the commonly used interface of the application program shown in the shortcut window.
The layer on which the shortcut window is located is above the layer of the main interface, so the content displayed in the shortcut window is not blocked. The user's eye gaze position is the position on the screen of the terminal 100 at which the user's line of sight is focused when gazing at the terminal 100. As shown in fig. 1, a cursor point S may be displayed on the screen of the terminal 100. When the user gazes at the cursor point S, the position where the user's line of sight focuses on the screen shown in fig. 1 is the cursor point S, that is, the user's eye gaze position is the cursor point S. The cursor point S may be at any position on the screen. Fig. 1 also shows a shortcut window W; when the user's eye gaze position is the cursor point S', the terminal 100 may determine that the eye gaze position is within the shortcut window area W, i.e. the user is gazing at the shortcut window.
In some embodiments, shortcut windows may also be divided into privacy and non-privacy classes. A shortcut window marked as privacy class is displayed only on the main interface after unlocking succeeds. A non-privacy shortcut window can also be displayed on the interface to be unlocked, before unlocking succeeds. On the interface to be unlocked, when the terminal 100 detects that the user is gazing at a non-privacy shortcut window, it may display the main interface or commonly used interface of the corresponding application program. Whether a shortcut window is private depends on the privacy requirements of the information presented in the window.
By implementing the above method, the user can quickly open commonly used application programs and their commonly used interfaces, saving user operations and improving convenience. Meanwhile, the user can control whether the terminal 100 opens the commonly used application program or interface through the eye gaze position, further saving operations. In particular, when the user's hands are occupied, controlling the terminal 100 through the eye gaze position provides a new interaction mode and improves the user experience.
The user scenario in which the terminal 100 implements the above-described interaction method based on eye gaze recognition is specifically described below.
Fig. 2A schematically illustrates the terminal 100 in an off-screen state.
When the user is not using the terminal 100, the terminal 100 may be in an off-screen state. As shown in fig. 2A, in the off-screen state the display of the terminal 100 is dormant and shows a black screen, while other devices and programs operate normally. In other embodiments, the terminal 100 may instead be in an always-on-display (AOD) state when not in use. The off-screen AOD state is a state in which local areas of the screen are lit while the screen as a whole remains off.
Upon detecting a user operation that wakes up the phone, the terminal 100 may light up the entire screen and display the interface to be unlocked shown in fig. 2B. The interface to be unlocked may display the time and date for the user to view. The user operations that wake up the phone detected by the terminal 100 include, but are not limited to, picking up the phone and waking it up through a voice assistant, which is not limited in the embodiment of the present application.
After displaying the interface to be unlocked, the terminal 100 may enable the camera to collect and generate image frames. An image frame may include a facial image of the user. The terminal 100 may then perform face recognition on the image frame to determine whether the face in it matches the owner's face, that is, whether the user performing the unlocking operation is the owner.
Referring to fig. 2B, the terminal 100 may be provided with a camera module 210. The camera module 210 of the terminal 100 includes at least one 2D camera and one 3D camera. A 2D camera refers to a camera that generates a two-dimensional image, such as a camera that generates an RGB image commonly used in a mobile phone. The above-mentioned 3D camera refers to a camera capable of generating a three-dimensional image or a camera capable of generating an image including depth information, for example, a TOF camera. Compared with the 2D camera, the image generated by the 3D camera also comprises depth information, namely distance information of the shot object and the 3D camera. Optionally, the camera module 210 may also include a plurality of 2D cameras and a plurality of 3D cameras, which is not limited in the embodiment of the present application.
In the embodiment of the present application, the camera used by the terminal 100 for face unlocking may be one of the cameras in the camera module 210, typically the 3D camera.
When the face unlocking is successful, i.e., the collected face image matches the face image of the owner, the terminal 100 may display the user interfaces shown in fig. 2C-2D.
First, the terminal 100 may display the unlock success interface shown in fig. 2C. The interface may be displayed with an icon 211. Icon 211 may be used to prompt the user that the facial unlocking was successful. The terminal 100 may then display the user interface shown in fig. 2D. This interface may be referred to as the main interface of the terminal 100.
It will be appreciated that the unlock success interface shown in fig. 2C is optional. After confirming that the unlocking is successful, the terminal 100 may also directly display the main interface shown in fig. 2D.
Without being limited to the face unlocking described in fig. 2C, the terminal 100 may also adopt unlocking modes such as password unlocking (graphic or numeric password) and fingerprint unlocking. After unlocking succeeds, the terminal 100 likewise displays the main interface shown in fig. 2D.
The main interface may include a notification bar 221, page indicators 222, a commonly used application icon tray 223, and other application icon trays 224.
Wherein: the notification bar may include one or more signal strength indicators (e.g., signal strength indicator 221A, signal strength indicator 221B), wireless high-fidelity (wireless fidelity, wi-Fi) signal strength indicator 221C, battery status indicator 221D, time indicator 221E of a mobile communication signal (also may be referred to as a cellular signal).
The commonly used application icon tray 223 may include a plurality of commonly used application icons (e.g., camera application icons, address book application icons, phone application icons, information application icons) that remain displayed upon page switching. The above common application icons are optional, and the embodiments of the present application are not limited thereto.
The other application icon tray 224 may include a plurality of general application icons, such as a settings application icon, an application marketplace application icon, a gallery application icon, a browser application icon, and so on. The general application icons may be distributed in the other application icon trays 224 of multiple pages of the main interface, and the general application icons displayed in the other application icon tray 224 change accordingly when pages are switched. The icon of an application may be either a commonly used application icon or a general application icon: when the icon is placed in the commonly used application icon tray 223, it is a commonly used application icon; when the icon is placed in the other application icon tray 224, it is a general application icon.
It is understood that fig. 2D illustrates only one main interface or one page of one main interface of the terminal 100, and should not be construed as limiting the embodiments of the present application.
Referring to fig. 2E, while displaying the main interface shown in fig. 2D, i.e., after the unlocking is successful, the terminal 100 may further display a shortcut window 225 and a shortcut window 226 on the layers of the main interface.
Specifically, the terminal 100 may have a first application installed thereon. The first application may provide payment services to the user. After opening the first application, the terminal 100 may display a payment interface. The payment interface may include thereon a payment code, such as a payment two-dimensional code, a payment bar code, and the like. The user can complete the payment task by presenting the payment interface. Opening the first application refers to setting the first application as a foreground application. As shown in FIG. 2E, before launching the first application, the shortcut window 225 may display a thumbnail of the payment interface described above to prompt the user for the application and common interfaces associated with the shortcut window 225.
Likewise, a second application may be installed on the terminal 100. After the second application is started, the terminal 100 may display a health code interface. A health code reflecting the health condition of the user may be displayed on the health code interface. The user can complete the health examination by presenting the health code interface. Likewise, before the second application is launched, shortcut window 226 may display a thumbnail of the health code interface described above.
The payment interface described above may be referred to as a common interface for the first application. The health code interface described above may be referred to as a common interface for the second application.
While displaying the main interface and the shortcut window, the terminal 100 may collect a facial image of the user through the camera module 210.
At this time, the number of cameras used by the terminal 100 is 2, including one 2D camera and one 3D camera. Of course, without being limited to one 2D camera and one 3D camera, the terminal 100 may also use more cameras to obtain more facial features of the user, particularly ocular features, in order to facilitate a subsequent faster and more accurate determination of the eye gaze position of the user.
In a scenario using face unlocking, the 3D camera of the terminal 100 is already on, so the terminal 100 only needs to turn on the 2D camera of the camera module 210. In scenarios using password unlocking or fingerprint unlocking, the cameras of the terminal 100 are off, so the terminal 100 needs to turn on both the 2D camera and the 3D camera in the camera module 210.
The time during which the terminal 100 collects facial images of the user through the camera module 210 (2D camera and 3D camera) may be referred to as the gaze recognition time. Preferably, the gaze recognition time is the first 3 seconds after the main interface is displayed following successful unlocking. After 3 seconds, the terminal 100 may turn off the camera module 210 to save power. If the gaze recognition time is set too short, for example 1 second, the image frames containing the user's facial image collected by the terminal 100 may be insufficient, making the eye gaze recognition result inaccurate; it is also difficult for the user to gaze at a shortcut window within 1 second of the main interface appearing. If the gaze recognition time is set too long, for example 7 or 10 seconds, power consumption becomes excessive. Of course, the gaze recognition time is not limited to 3 seconds and may be set to other values, such as 2.5 seconds, 3.5 seconds, or 4 seconds, which is not limited in the embodiment of the present application. The following description takes 3 seconds as an example.
Accordingly, the terminal 100 may display the shortcut window only during the gaze recognition time. When the camera module 210 is turned off, that is, when the terminal 100 no longer detects the user's eye gaze position, the terminal 100 also stops displaying the shortcut window, so as to avoid blocking the main interface for a long time and affecting the user experience.
During the gaze recognition time, the camera module 210 may continuously acquire and generate image frames containing images of the user's face. The image frame comprises a two-dimensional image acquired by the 2D camera and a three-dimensional image acquired by the 3D camera.
Based on the image frames acquired during the above-mentioned gaze recognition time, the terminal 100 may recognize the eye gaze position of the user, and determine whether the user is gazing at the shortcut window 225 or the shortcut window 226.
As shown in fig. 2F, the terminal 100 may determine that the user is looking at the shortcut window 225 based on the acquired image frames. In response to detecting a user action of the user looking at the shortcut window 225, the terminal 100 may open the first application and display a payment interface corresponding to the shortcut window 225, referring to fig. 2G. As shown in fig. 2G, the payment interface displays a payment two-dimensional code 231 and its related information for providing a payment service to the user.
As shown in fig. 2H, the terminal 100 may also determine that the user is looking at the shortcut window 226 based on the acquired image frames. In response to detecting a user action of the user looking at the shortcut window 226, the terminal 100 may open a second application and display a health code interface corresponding to the shortcut window 226, referring to fig. 2I. As shown in fig. 2I, the health code interface displays the health codes 232 and their related information required for performing the health check so that the user can quickly complete the health check.
In other embodiments, the terminal 100 may also display different interfaces by detecting different durations of time that the user gazes at a certain area. For example, referring to fig. 2D, the terminal 100 may detect whether the user gazes at the upper right corner region of the screen after entering the main interface. After detecting that the user gazes at the upper right corner area for a first period of time, for example 2 seconds, the terminal 100 may display a shortcut window 225. If it is detected that the user is still looking at the upper right corner region and a second duration is reached, for example 3 seconds, the terminal 100 may switch the shortcut window 225 displayed in the upper right corner region to the shortcut window 226.
In the scenario of displaying the shortcut window 225 or the shortcut window 226, the terminal 100 may detect a touch operation, or a blink control operation, or a twist control operation, of the user acting on the above window to determine whether to display an interface corresponding to the shortcut window 225 or the shortcut window 226.
By implementing the method, the user can immediately acquire the common interface of the common application program after opening the terminal 100, so as to quickly acquire services and information provided by the common interface, such as payment services provided by the payment interface, health codes provided by the health code interface and related information thereof.
On the other hand, the user can control the terminal 100 to display the interface corresponding to a shortcut window simply by gazing at it, without performing touch operations such as clicking, double-clicking, or long-pressing. This avoids the problem that the terminal 100 cannot be controlled when the user's hands are occupied and provides convenience for the user.
FIG. 3A illustrates a main interface including a plurality of pages, each of which may be referred to as a main interface.
As shown in FIG. 3A, the main interface may include page 30, page 31, and page 32. Page 30 may be referred to as the negative one screen, page 31 as the first desktop, and page 32 as the second desktop. The page layout of the second desktop is the same as that of the first desktop and is not described again here. The number of desktops in the main interface may increase or decrease according to the user's settings; fig. 3A shows only the first desktop and the second desktop as an example.
In fig. 2D, the main interface displayed by the terminal 100 is actually the first desktop of the main interface shown in fig. 3A. In some embodiments, after unlocking succeeds, the terminal 100 first displays the first desktop. In other embodiments, after unlocking succeeds, the terminal 100 may display the negative one screen, the first desktop, or the second desktop. Optionally, which of these the terminal 100 displays depends on the page on which it stopped when it last exited.
Thus, after displaying the unlock success interface shown in fig. 2C, the terminal 100 may also first display a second desktop or a negative screen, and display a shortcut window 225 and a shortcut window 226 on the layer where the second desktop or the negative screen is located, referring to fig. 3B and 3C.
In the first 3 seconds of displaying the main interface shown in fig. 3B (second desktop) or fig. 3C (negative one screen), the terminal 100 may also collect a facial image of the user through the camera module 210, and identify whether the user looks at the shortcut window 225 or the shortcut window 226. When recognizing that the user looks at the shortcut window 225, the terminal 100 may also display the payment interface shown in fig. 2G for the user to obtain the payment service provided by the first application. When recognizing that the user looks at the shortcut window 226, the terminal 100 may also display the health code interface shown in fig. 2I, so that the user may obtain the health code 232 and related information provided by the second application program, so that the user may quickly complete the health check.
Thus, no matter which main interface is displayed by the terminal 100 after unlocking, the user can acquire the common interface of the common application program, thereby rapidly acquiring the service and information provided by the common interface so as to meet the own requirements.
In some embodiments, the terminal 100 may also use icons with smaller areas instead of the shortcut window described above.
Referring to fig. 3D, the terminal 100 may display an icon 311 and an icon 312. Icon 311 may correspond to shortcut window 225 described above and icon 312 may correspond to shortcut window 226 described above. When detecting an action of the user looking at the icon 311 or the icon 312, the terminal 100 may display a payment interface providing a payment service or a health code interface showing a health code for the user to use.
The icons 311 and 312 not only serve as prompts but also reduce occlusion of the main interface, improving the user experience.
Of course, the terminal 100 may also display icons of applications installed on the terminal 100, such as the application icon 321 and the application icon 322 shown in fig. 3E. Generally, such an application is one the user uses frequently. After detecting the user's gaze, the terminal 100 may open the application program, thereby providing the user with a service for quickly opening the application without touch operation.
The user may choose to enable or disable the eye gaze recognition function. When eye gaze recognition is enabled, after unlocking is completed the terminal 100 may collect the user's facial image, identify whether the user is gazing at a shortcut window, and then determine whether to display the commonly used interface corresponding to that window, so that the user can quickly and conveniently obtain the information in it. Conversely, when eye gaze recognition is disabled, the terminal 100 does not identify whether the user is gazing at a shortcut window and does not display the corresponding commonly used interface.
Fig. 4A-4D illustrate a set of user interfaces that set the eyeball gaze recognition functionality on or off.
Fig. 4A exemplarily shows a setting interface on the terminal 100. A plurality of setting options may be displayed on the interface, such as an account setting option 411, a WLAN option 412, a Bluetooth option 413, and a mobile network option 414. In the embodiment of the present application, the setting interface also includes an auxiliary function option 415, which may be used to set some shortcut operations.
The terminal 100 may detect a user operation acting on the auxiliary function option 415. In response to the operation, the terminal 100 may display the user interface shown in fig. 4B, denoted as the auxiliary function setting interface. The interface may display a plurality of auxiliary function options, such as an accessibility option 421 and a one-handed mode option 422. In the embodiment of the present application, the auxiliary function setting interface further includes a shortcut start and gesture option 423, which may be used to set gesture actions and eye gaze actions that control interaction.
The terminal 100 may detect a user operation acting on the shortcut start and gesture option 423. In response to the operation, the terminal 100 may display the user interface shown in fig. 4C, denoted as the shortcut start and gesture setting interface. The interface may display a plurality of shortcut start and gesture setting options, such as a smart voice option 431, a screen capture option 432, a screen recording option 433, and a quick talk option 434. In the embodiment of the present application, the shortcut start and gesture setting interface further includes an eye gaze option 435, which may be used to set the areas used for eye gaze recognition and the corresponding shortcut operations.
The terminal 100 may detect a user operation on the eye gaze option 435. In response to the above operation, the terminal 100 may display the user interface shown in fig. 4D, which is denoted as an eyeball-gaze-recognition-setting interface. As shown in fig. 4D, the interface may display a plurality of functional options based on eye gaze identification, such as pay code option 442, health code option 443.
The pay code option 442 may be used to turn on or off the function of displaying the pay code under eye gaze control. For example, when the pay code option 442 is on ("ON"), after unlocking succeeds and the main interface is displayed, the terminal 100 may display the shortcut window 225 associated with the payment interface; at the same time, the terminal 100 may confirm whether the user is gazing at the shortcut window 225 from captured image frames containing the user's facial image. When it detects the user gazing at the shortcut window 225, the terminal 100 may display the payment interface corresponding to the window, presenting the pay code. In this way, the user can quickly and conveniently obtain the pay code and complete payment, avoiding many tedious operations and obtaining a better experience.
The health code option 443 may be used to turn on or off the function of displaying the health code under eye gaze control. For example, when the health code option 443 is on ("ON"), after unlocking succeeds and the main interface is displayed, the terminal 100 may display the shortcut window 226 associated with the health code interface; at the same time, the terminal 100 may confirm whether the user is gazing at the shortcut window 226 from captured image frames containing the user's facial image. When it detects the user gazing at the shortcut window 226, the terminal 100 may display the health code interface containing the health code and its related information. Thus, the user can quickly and conveniently obtain the health code and complete the health check, avoiding many tedious operations.
The eye gaze identification settings interface shown in fig. 4D may also include other eye gaze based shortcut function options, such as notification bar option 444. When unlocking is successful and the main interface is displayed, the terminal 100 may detect whether the user looks at the notification bar area at the top of the screen. When an action of the user looking at the notification bar is detected, the terminal 100 may display a notification interface for the user to review the notification message.
In some embodiments, the user may customize the display area of the shortcut window according to his own usage habit and the layout of the main interface, so as to reduce the influence of the shortcut window on the main interface of the terminal 100 as much as possible.
In some embodiments, the eye gaze identification settings interface may also be as shown in fig. 5A. The terminal 100 may detect a user operation acting on the pay code option 442, and in response to the operation, the terminal 100 may display a user interface (pay code setting interface) shown in fig. 5B.
As shown in fig. 5B, the interface may include a button 511, a region selection control 512. The button 511 may be used to turn ON ("ON") or OFF ("OFF") the function of eye gaze control display of the pay codes. The region selection control 512 may be used to set the display region of the pay code shortcut 225 on the screen.
The region selection controls 512 may in turn include controls 5121, 5122, 5123, 5124. By default, when the eyeball gaze control display payment code function is turned on, the payment code shortcut window 225 is displayed in the upper right corner region of the screen, corresponding to the display region shown by the control 5122. At this time, an icon 5125 (selected icon) may be displayed in the control 5122, representing the display area (upper right corner area) of the current pay code shortcut window 225 on the screen.
If a display area is already used to display a shortcut window, an icon 5126 (occupied icon) may be displayed in the control corresponding to the display area. For example, the display area shown by control 5123 can correspond to the health code shortcut window 226. Accordingly, an occupied icon may be displayed in the control 5123, indicating that the lower left corner region of the screen to which the control 5123 corresponds is occupied and is no longer available for setting the pay code shortcut window 225.
Referring to fig. 5C, the terminal 100 may detect a user operation acting on the control 5121. In response to the operation, the terminal 100 may display the selected icon in the control 5121, indicating that the upper left corner area is now the selected display area of the shortcut window 225 associated with the pay code. At this time, referring to fig. 5D, after detecting successful unlocking and displaying the main interface, the terminal 100 may display the pay code shortcut window 225 in the upper left corner area above the layer of the main interface.
Referring to the setting method shown in fig. 5A to 5C, the terminal 100 may further set the display area of the health code shortcut window 226 according to the user operation, which is not described herein.
As shown in fig. 5E, the eye gaze identification settings interface may also include a control 445. Control 445 may be used to add more shortcut windows to provide the user with more services to quickly open common applications and/or common interfaces.
As shown in fig. 5E, terminal 100 may detect a user operation on control 445. In response to the above operation, the terminal 100 may display the user interface (add shortcut window interface) shown in fig. 5F. The interface may include a plurality of shortcut window options, such as option 521, option 522, and the like. Option 521 may be used to set a shortcut window 227 associated with the health detection record. Upon recognizing the user action of gazing at the shortcut window 227, the terminal 100 may display an interface (third interface) containing a user health detection record. Specifically, the health detection record shortcut window may refer to fig. 5G. Option 522 may be used to set a shortcut window associated with the electronic identification card. The shortcut window can be associated with an interface for displaying the electronic identity card of the user, so as to provide a service for rapidly opening the interface, which is not described herein.
The terminal 100 may detect a user operation on the option 521. In response to the above operation, the terminal 100 may display a user interface (health detection record setting interface) shown in fig. 5H. As shown in fig. 5H, button 531 may be used to open a shortcut window 227 associated with the health detection record. Page controls (control 5321, control 5322, control 5323) can be used to set the display page of the shortcut window 227.
After the setting is completed, the terminal 100 may detect a user operation acting on the return control 534, and in response to the operation, the terminal 100 may display the eyeball gaze recognition setting interface, referring to fig. 5I. At this time, the interface further includes a health detection record option 446 for displaying the health detection record under eye gaze control.
Thus, when unlocking is completed and the main interface is displayed, the terminal 100 may further display the shortcut window 227 associated with the health detection record above the layer of the main interface within the preset gaze recognition time, referring to the shortcut window 227 in fig. 5J. Based on the image frames containing the user's facial image collected during the gaze recognition time, the terminal 100 may identify the user's eye gaze position and determine whether the user is gazing at the shortcut window 227. In response to detecting the user gazing at the shortcut window 227, the terminal 100 may display the third interface corresponding to it, on which the health detection record is displayed.
It can be understood that, the terminal 100 can also change the display page and the display area of the shortcut window 227 according to the user operation, so as to meet the personalized display requirement of the user, more fit the use habit of the user, and improve the use experience of the user.
For example, referring to fig. 5K, the terminal 100 may detect a user operation on the page control 5323. At this time, the "display area" corresponds to 4 display areas such as the upper left corner and the upper right corner of the second desktop. The terminal 100 can detect a user operation on the region selection control 5333. At this time, the terminal 100 may determine to display the shortcut window 227 in the lower left corner region of the second desktop.
Referring to fig. 5L, upon completion of unlocking and displaying the first desktop, the terminal 100 may display a pay code shortcut window 225, a health code shortcut window 226 on the layer of the first desktop within a preset gaze recognition time; referring to fig. 5M, upon completion of unlocking and displaying the second desktop, the terminal 100 may further display a shortcut window 227 on the layer of the second desktop for a preset gaze recognition time.
It may be appreciated that, when the enabled shortcut window is set on a different page of the main interface, the terminal 100 may display a corresponding shortcut window belonging to the page according to the page displayed after unlocking.
In some embodiments, the terminal 100 may also set privacy types (private and non-private) for various shortcut windows. For non-privacy shortcut windows, the terminal 100 may also display them on the interface to be unlocked.
The terminal 100 may detect the eye gaze position of the user at the interface to be unlocked, and determine whether the user gazes at the above-mentioned non-privacy shortcut window. When detecting the action of the user looking at the non-private shortcut window, the terminal 100 may display a common interface corresponding to the shortcut window. In this way, the user does not need to complete the unlocking operation, so that the user operation is further saved, and the user can acquire the common application programs and/or the common interfaces more quickly.
Referring to fig. 6A, the pay code setting interface may further include a button 611. Button 611 may be used to set the privacy type of the shortcut window 225 associated with the pay code. Turning button 611 on ("ON") indicates that the pay code shortcut window 225 is private. Conversely, turning button 611 off ("OFF") indicates that the shortcut window 225 is non-private. As shown in FIG. 6A, the shortcut window 225 may be set to private.
Referring to the above procedure, the shortcut window 226 associated with the health code may also be set to private or non-private. As shown in fig. 6B, button 612 is closed, that is, shortcut window 226 may be set to be non-private. Referring to fig. 6C, in the eye gaze recognition setup interface, a security display tab 613 may be attached to the option corresponding to the private shortcut window to prompt the user that the shortcut window is private and not displayed on the screen before unlocking.
As shown in fig. 6D and 6E, when the interface to be unlocked is displayed, the terminal 100 may display a non-private health code shortcut window 226 above a layer of the interface to be unlocked. While the health code shortcut window 226 is displayed, the terminal 100 may collect a facial image of the user. Referring to fig. 6F, the terminal 100 may recognize that the user is looking at the health code shortcut window 226 based on the acquired image frame including the user's face image. In response to the user's action of looking at the health code shortcut window 226, the terminal 100 may display a health code interface corresponding to the health code shortcut window 226 displaying the health code, referring to fig. 6G.
For a private shortcut window, such as the pay code shortcut window 225, the terminal 100 does not display the shortcut window on the interface to be unlocked, so as to avoid disclosure of the pay code.
When the terminal 100 is provided with both a private shortcut window and a non-private shortcut window, the terminal 100 may start the camera to identify the eye gaze position of the user at the interface to be unlocked. When the eye gaze position of the user is within the non-privacy shortcut window, the terminal 100 may display a corresponding common interface. When the eye gaze position of the user is within the privacy shortcut window, the terminal 100 may not display the corresponding common interface.
When only a privacy shortcut window is provided in the terminal 100, the terminal 100 may not start the camera to collect the facial image of the user and identify the eye gaze position of the user at the interface to be unlocked.
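The privacy handling described in the preceding paragraphs can be illustrated with the following minimal sketch; the ShortcutWindow type, the function names, and the window regions are assumptions introduced for illustration only, not part of the patent itself.

```python
# Illustrative sketch: filter shortcut windows by privacy type before unlocking
# and decide whether a gaze should open a common interface.
from dataclasses import dataclass

@dataclass
class ShortcutWindow:
    name: str
    private: bool          # set via buttons such as 611/612
    region: tuple          # (left, top, right, bottom) on the screen

def windows_visible_before_unlock(windows):
    """Only non-private shortcut windows may appear on the interface to be unlocked."""
    return [w for w in windows if not w.private]

def should_start_camera_before_unlock(windows):
    """If every enabled window is private, gaze recognition is skipped before unlocking."""
    return len(windows_visible_before_unlock(windows)) > 0

def handle_gaze_before_unlock(windows, gaze_xy):
    """Open a common interface only when the gaze falls inside a non-private window."""
    x, y = gaze_xy
    for w in windows_visible_before_unlock(windows):
        left, top, right, bottom = w.region
        if left <= x <= right and top <= y <= bottom:
            return w.name          # e.g. open the health code interface
    return None                    # private window or blank area: do nothing
```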
Referring to fig. 6H and 6I, if the terminal 100 completes the unlocking operation first, the terminal 100 may display the main interface. After displaying the main interface, the terminal 100 may display both the non-private health code shortcut window 226 and the private payment code shortcut window 225. That is, the terminal 100 displays a private shortcut window only after unlocking, whereas a non-private shortcut window can be displayed either before or after unlocking, so as to provide the user with a more convenient service for controlling the display of commonly used applications and/or commonly used interfaces.
In some embodiments, the terminal 100 may also set the number of displays of various shortcut windows. After the above number of displays is exceeded, the terminal 100 may not display the shortcut window, but the terminal 100 may still recognize the eye gaze position of the user, providing a service of rapidly displaying the application and/or the common interface.
Specifically, referring to fig. 7A, the health code setting interface may also include a control 711. Control 711 may be used to set the number of times the shortcut window is displayed. For example, the "100 times" displayed in control 711 may mean: the terminal 100 displays the shortcut window 226 corresponding to the health code to prompt the user only during the first 100 times the eye-gaze-controlled health code display function is used.
As shown in fig. 7B, after 100 times, the terminal 100 may not display the shortcut window 226 corresponding to the health code (the dotted frame of fig. 7B indicates the area where the eye gaze position of the user is located, and the above dotted frame is not displayed on the screen).
During the gaze recognition time, the terminal 100 may still acquire the user's facial image, although the terminal 100 no longer displays the shortcut window 226 corresponding to the health code. If the user's gaze at the lower left corner area of the first desktop is detected, the terminal 100 may still display a corresponding health code interface displaying the health code, and refer to fig. 7C, so that the user may complete the health code inspection using the health code interface.
Thus, after the user uses the eye gaze function for a long period of time, the user can know which region of which main interface corresponds to which common interface, without the terminal 100 displaying a shortcut window corresponding to the common interface in the region. At this time, the terminal 100 may not display the shortcut window, so as to reduce shielding of the main interface by the shortcut window and improve user experience.
Fig. 8 is a flowchart illustrating a display method according to an embodiment of the present application. The flow of the implementation of the above display method by the terminal 100 is specifically described below in conjunction with fig. 8 and the user interfaces described above.
S101, the terminal 100 detects that a trigger condition for starting eye gaze recognition is satisfied.
Keeping the camera on for long periods to collect facial images and identify the eye gaze position of the user occupies the resources of the terminal 100 (camera device resources and computing resources) and greatly increases the power consumption of the terminal 100. Meanwhile, for privacy and security reasons, the camera of the terminal 100 should not always be turned on.
Therefore, the terminal 100 may be preset with some scenes for turning on eye gaze recognition. When detecting that the terminal 100 is in the above scene, the terminal 100 will start the camera to collect the facial image of the user. When the above scene is over, the terminal 100 may close the camera and stop collecting the facial image of the user, so as to avoid occupying the camera resource, save power consumption, and protect the privacy of the user.
Developers can determine the scenarios in which eye gaze recognition needs to be enabled through prior analysis of user habits. Generally, such a scenario is one in which the user picks up the mobile phone or has just unlocked it. At this time, the terminal 100 may provide the user with a service for quickly enabling a certain application program (a commonly used application program), so as to save user operations and improve user experience. Further, the terminal 100 may provide the user with eye-gaze control for starting an application program, so as to avoid the inconvenience of touch operations in scenarios where both of the user's hands are occupied, further improving the user experience.
Thus, the above scenarios include, but are not limited to: the scenario in which the mobile phone screen is lit and the interface to be unlocked is displayed, and the scenario in which the main interface (including the first desktop, the second desktop, the negative one-screen page, and the like) is displayed after unlocking.
Corresponding to the scenarios for starting eye gaze recognition, the trigger conditions for starting eye gaze recognition include: detecting a user operation for waking up the mobile phone, and detecting a user operation that completes unlocking and displays the main interface. The user operation of waking up the mobile phone includes, but is not limited to, an operation of the user picking up the mobile phone, an operation of the user waking up the mobile phone through a voice assistant, and the like.
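These two trigger conditions could be expressed as a small sketch; the event names below are illustrative assumptions rather than real system constants.

```python
# Sketch of the trigger conditions for starting eye gaze recognition (S101).
from enum import Enum, auto

class Event(Enum):
    WAKE_UP = auto()         # user picks up the phone or wakes it via a voice assistant
    UNLOCK_DONE = auto()     # unlocking completed, main interface about to be shown

def should_start_gaze_recognition(event: Event) -> bool:
    """Eye gaze recognition is started only in the preset scenarios."""
    return event in (Event.WAKE_UP, Event.UNLOCK_DONE)
```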
Referring to the user interfaces shown in fig. 2C to 2E, after detecting that the unlocking operation is completed, the terminal 100 may display the main interface shown in fig. 2D; at the same time, the terminal 100 may display the shortcut window 225 and the shortcut window 226, associated with a commonly used application or a commonly used interface within a commonly used application, on a layer of the main interface. The operation that causes the terminal 100 to display the user interfaces shown in fig. 2C-2E may be referred to as the user operation of completing unlocking and displaying the main interface. At this time, the terminal 100 may start the camera to collect the facial image of the user, identify the eye gaze position of the user, and further determine whether the user is gazing at a shortcut window.
Referring to the user interfaces shown in fig. 6D to 6E, when the wake-up of the terminal 100 is detected but the unlocking is not completed, the terminal 100 may also display a shortcut window associated with a common application or a common interface among the common applications on a layer of the interface to be unlocked. At this time, the terminal 100 may also start the camera to collect the facial image of the user, identify the eye gaze position of the user, and further determine whether the user is gazing at the shortcut window.
S102, the terminal 100 starts the camera module 210 to collect facial images of the user.
After detecting a user operation to wake up the mobile phone or detecting a user operation to finish unlocking and displaying the main interface, the terminal 100 may determine to turn on the eye gaze recognition function.
In one aspect, the terminal 100 may display a shortcut window to prompt the user to open a commonly used application and a commonly used interface associated with the shortcut window by looking at the shortcut window. On the other hand, the terminal 100 may turn on the camera module 210 to collect facial images of the user to identify whether the user gazes at the shortcut window and which shortcut window to gaze at.
Referring to the description of fig. 2B, the camera module 210 of the terminal 100 includes at least one 2D camera and one 3D camera. The 2D camera may be used to acquire and generate two-dimensional images. The 3D camera may be used to acquire and generate three-dimensional images containing depth information. In this way, the terminal 100 can acquire a two-dimensional image and a three-dimensional image of the user's face at the same time. Combining the two-dimensional image and the three-dimensional image, the terminal 100 can acquire richer facial features, particularly eye features, so as to identify the user's eye gaze position more accurately and to determine more accurately whether the user is gazing at a shortcut window and which shortcut window is being gazed at.
Referring to S101, the terminal 100 does not always turn on the camera, and thus, after the camera module 210 is turned on, the terminal 100 sets a time to turn off the camera module 210.
The terminal 100 may set a gaze recognition time. The gaze recognition time starts at the moment the terminal 100 detects the trigger condition described in S101. The moment at which the gaze recognition time ends depends on its duration. The duration is preset, for example, 2.5 seconds, 3 seconds, 3.5 seconds, 4 seconds, or the like, as described for fig. 2F, with 3 seconds being the preferred duration. When the gaze recognition time expires, the terminal 100 may turn off the camera module 210, i.e. no longer identify the eye gaze position of the user.
Accordingly, after the gaze recognition time expires, the terminal 100 may no longer display the shortcut window, so as to avoid long-term occlusion of the main interface that would affect the user experience.
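The timing behaviour described above could be sketched as follows; start_camera, stop_camera, and hide_shortcut_windows are hypothetical callbacks standing in for the terminal's actual camera and UI control paths, and the duration value is one of the example values mentioned in the text.

```python
# Sketch of the gaze recognition time window: open the camera at the trigger,
# close it and hide the shortcut windows when the preset duration expires.
import threading

GAZE_RECOGNITION_DURATION_S = 3.0   # preset, e.g. 2.5 s / 3 s / 3.5 s / 4 s

def begin_gaze_recognition(start_camera, stop_camera, hide_shortcut_windows):
    """Start collecting facial images, then stop when the gaze recognition time ends."""
    start_camera()

    def on_timeout():
        stop_camera()              # no longer identify the eye gaze position
        hide_shortcut_windows()    # avoid long-term occlusion of the main interface

    timer = threading.Timer(GAZE_RECOGNITION_DURATION_S, on_timeout)
    timer.start()
    return timer                   # caller may cancel() it once a window has been gazed at
```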
S103, the terminal 100 determines the eyeball fixation position of the user according to the acquired image frame containing the facial image of the user.
The image frames acquired and generated by the camera module 210 during the gaze recognition time may be referred to as target input images. The terminal 100 may identify the eye gaze position of the user using the target input image. Referring to the description of fig. 1, a position where a line of sight is focused on a screen of the terminal 100 when the user gazes at the terminal 100 may be referred to as an eyeball gazing position.
Specifically, after acquiring the target input image, the terminal 100 may input the image into the eye gaze recognition model. The eye gaze recognition model is a model preset in the terminal 100. The eye gaze recognition model may determine the user's eye gaze location using an image frame containing an image of the user's face, with reference to the cursor point S shown in fig. 1. The eye gaze recognition model may output the position coordinates of the eye gaze location on the screen. The structure of the eye gaze recognition model used in the present application is described in detail later with reference to fig. 9 and is not expanded upon here.
After obtaining the position coordinates of the eye gaze position, the terminal 100 may determine whether the user gazes at the shortcut window and which shortcut window to gaze at according to the position coordinates, and further determine whether to open the common application program and the common interface associated with the shortcut window.
Optionally, the eye gaze recognition model may also output an eye gaze area of the user. One eye-gazing area may be contracted into one eye-gazing position, and one eye-gazing position may be expanded into one eye-gazing area. In some examples, a cursor point formed by one display unit on the screen may be referred to as an eyeball gazing position, and a cursor point or a cursor area formed by a plurality of display units on the screen is referred to as an eyeball gazing area.
After outputting an eyeball gazing area, the terminal 100 may determine whether the user gazes at the shortcut window and which shortcut window to gaze at by determining the position of the eyeball gazing area in the screen, thereby determining whether to open the commonly used application program and the commonly used interface associated with the shortcut window.
S104, the terminal 100 determines whether the user looks at the shortcut window according to the position coordinates of the eye-gaze position and the current interface, and further determines whether to display the common application program and the common interface associated with the shortcut window.
After determining the position coordinates of the eye gaze position of the user, in conjunction with the current interface of the terminal 100, the terminal 100 may determine whether the user is gazing at a shortcut window on the current interface.
Referring to fig. 2F, when the terminal 100 displays the interface shown in fig. 2F, the interface may be referred to as a current interface of the terminal 100. At this time, the terminal 100 may determine position coordinates of the eye gaze position of the user based on the acquired image frame containing the face image of the user. Then, the terminal 100 may determine the region or the control corresponding to the eye gaze position according to the position coordinates.
When the eye gaze location is within the shortcut window 225, the terminal 100 may determine that the user is gazing at the shortcut window 225; when the eye gaze location is within the shortcut window 226, the terminal 100 may determine that the user is looking at the shortcut window 226. In some embodiments, the terminal 100 may also determine that the eye gaze location of the user corresponds to an application icon in the commonly used application icon tray 223 or other application icon tray 224, such as a "gallery" application, or the like. The eye gaze position may also be in a blank area in the screen, which does not correspond to an icon or control in the main interface, nor to a shortcut window as described in the present application.
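A simple way to express this step is a hit test of the gaze coordinates against the bounding rectangles of the controls on the current interface; the control names and rectangles below are invented for illustration and do not come from the patent.

```python
# Illustrative sketch: map (x, y) eye gaze coordinates onto controls of the current interface.
def locate_gazed_control(gaze_xy, controls):
    """controls: list of (name, (left, top, right, bottom)) on the current interface.
    Returns the name of the gazed control, or None for a blank area."""
    x, y = gaze_xy
    for name, (left, top, right, bottom) in controls:
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

# Example layout loosely modelled on fig. 2F (coordinates are made up):
controls_2f = [
    ("shortcut_window_225", (40, 120, 360, 380)),    # pay code window
    ("shortcut_window_226", (400, 120, 720, 380)),   # health code window
    ("gallery_icon",        (40, 900, 160, 1020)),
]
print(locate_gazed_control((200, 250), controls_2f))  # -> "shortcut_window_225"
```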
Referring to fig. 2F-2G, when it is determined that the user is looking at the shortcut window 225, the terminal 100 may display a payment interface corresponding to the shortcut window 225. The payment interface is a common interface for a user to determine. Referring to fig. 2H-2I, when it is determined that the user is looking at the shortcut window 226, the terminal 100 may display a health code interface for displaying a health code corresponding to the shortcut window 226. The health code interface is also a common interface for user determination.
In some embodiments, when it is determined that the user is looking at an application icon in the commonly used application icon tray 223 or other application icon tray 224, the terminal 100 may launch an application corresponding to the application icon. For example, referring to fig. 2F, when it is determined that the user is looking at the "gallery" application icon, the terminal 100 may display the top page of the "gallery".
Referring to fig. 3E, the terminal 100 may also display icons (application icons 321 and application icons 322) of commonly used applications, and when it is determined that the user is looking at the application icon 321 or the application icon 322, the terminal 100 may open the commonly used application corresponding to the application icon 321 or the application icon 322, for example, display a top page of the commonly used application.
When the eye gaze position falls in a blank area of the screen, the terminal 100 may not perform any action; when the eye gaze recognition time is over, the eye gaze recognition function is turned off.
In some embodiments, referring to fig. 7A-7C, the terminal 100 may not display a shortcut window or icon when recognizing the eye gaze position of the user. However, the terminal 100 may still determine the specific region to which the eye-gaze position belongs according to the position coordinates of the eye-gaze position of the user. The above specific areas are preset, for example, an upper left corner area, an upper right corner area, a lower left corner area, a lower right corner area, and the like shown in fig. 7A. Further, according to the common application and the common interface associated with the specific area, the terminal 100 can determine which application is opened and which interface is displayed.
For example, referring to fig. 7B, the terminal 100 may recognize that the eye gaze position of the user is within the lower left corner region of the screen, and thus, the terminal 100 may display a health code interface for displaying health codes associated with the lower left corner region described above, referring to fig. 7C.
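For the case where no shortcut window is drawn (figs. 7A-7C), the region lookup might look like the following sketch; the screen size, region sizes, and the region-to-interface mapping are assumptions for illustration.

```python
# Sketch of mapping a gaze position to one of the four preset corner regions.
SCREEN_W, SCREEN_H = 1080, 2340
REGION_W, REGION_H = 540, 585          # assumed size of each corner region

def corner_region(gaze_xy):
    """Return the preset corner region containing the gaze position, else None."""
    x, y = gaze_xy
    horizontal = "left" if x <= REGION_W else ("right" if x >= SCREEN_W - REGION_W else None)
    vertical = "upper" if y <= REGION_H else ("lower" if y >= SCREEN_H - REGION_H else None)
    if horizontal and vertical:
        return f"{vertical}_{horizontal}"
    return None                         # middle of the screen: no associated interface

# e.g. the lower left corner of the first desktop is associated with the health code interface
REGION_TO_INTERFACE = {"lower_left": "health_code_interface"}
print(REGION_TO_INTERFACE.get(corner_region((150, 2200))))   # -> "health_code_interface"
```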
Fig. 9 exemplarily shows a structure of an eyeball fixation recognition model. The eye gaze recognition model used in embodiments of the present application is described in detail below in conjunction with fig. 9. In an embodiment of the present application, the eye gaze recognition model is built based on a convolutional neural network (Convolutional Neural Networks, CNN).
As shown in fig. 9, the eye gaze recognition model may include: the system comprises a face correction module, a dimension reduction module and a convolution network module.
(1) Face correction module.
The image frames acquired by the camera module 210 containing the facial images of the user may first be input to the face correction module. The face correction module may be used to identify whether a facial image in an input image frame is correct. For an image frame with a face image that is not correct (e.g., is askew), the face correction module may correct the image frame to correct it, thereby avoiding subsequent impact on the eye gaze recognition effect.
Fig. 10 shows a process flow of face correction performed by the face correction module on the image frames collected by the camera module 210.
S201: face keypoints in image frame T1 are determined using a face keypoint identification algorithm.
In the embodiment of the present application, the face key points include the left eye, the right eye, the nose, the left lip corner, and the right lip corner. Face key point recognition algorithms are existing techniques, for example, Kinect-based face key point recognition algorithms, and are not described herein.
Referring to fig. 11A, fig. 11A exemplarily shows one image frame including an image of the user's face, which is denoted as image frame T1. The face correction module may determine the face key points in the image frame T1 using a face key point recognition algorithm: left eye a, right eye b, nose c, left lip corner d, and right lip corner e, and may determine the coordinate positions of the respective key points, referring to image frame T1 in fig. 11B.
S202: and determining a calibrated line of the image frame T1 by using the face key points, and further determining a face deflection angle theta of the image frame T1.
Since the right and left eyes are on the same horizontal line in the corrected face image, a straight line (corrected line) connecting the right and left eye key points is parallel to the horizontal line, that is, a face deflection angle (angle formed by the corrected line and the horizontal line) θ is 0.
As shown in fig. 11B, the face correction module may determine the calibrated line L1 using the identified coordinate positions of the left eye a and the right eye B. Then, the face correction module may determine the face deflection angle θ of the face image in the image frame T1 from L1 and the horizontal line.
S203: if θ=0°, it is determined that the face image in the image frame T1 is correct, without correction.
S204: If θ≠0°, it is determined that the face image in the image frame T1 is not correct; rotation correction is then performed on the image frame T1 to obtain an image frame with a correct face image.
In fig. 11B, θ≠0°, i.e., the face image in the image frame T1 is not correct. At this time, the face correction module may correct the image frame T1 so that the face in the image frame becomes correct.
Specifically, the face correction module may first determine the rotation center point y using the coordinate positions of the left eye a and the right eye b, then rotate the image frame T1 by θ° with the point y as the rotation center, obtaining an image frame with a correct face image, which is denoted as image frame T2. As shown in fig. 11B, point A may represent the position of the rotated left eye a, point B the position of the rotated right eye b, point C the position of the rotated nose c, point D the position of the rotated left lip corner d, and point E the position of the rotated right lip corner e.
It will be appreciated that when the image frame T1 is rotated, every pixel in the image frame is rotated. The points A, B, C, D, and E above are merely used to illustrate the rotation process; the rotation is not applied only to the face key points.
S205: and processing the corrected image frame with the correct facial image to obtain a left eye image, a right eye image, a facial image and face grid data. Wherein the face mesh data may be used to reflect the position of the facial image in the image throughout the image.
Specifically, the face correction module may cut the corrected image frame according to a preset size with the face key point as a center, so as to obtain a left eye image, a right eye image, and a face image corresponding to the image. In determining the facial image, the face correction module may determine face mesh data.
Referring to fig. 11C, the face correction module may determine a rectangle of a fixed size centered on the left eye A. The image covered by this rectangle is the left eye image. In the same way, the face correction module may determine a right eye image centered on the right eye B and a face image centered on the nose C. The left eye image and the right eye image have the same size, while the face image has a different size from the left eye image. After determining the face image, the face correction module may accordingly obtain the face mesh data, i.e. the position of the face image within the whole image.
After the face correction is completed, the terminal 100 may obtain an image frame corrected for the face image, and obtain corresponding left-eye image, right-eye image, face image, and face mesh data from the image frame.
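The face correction flow S201-S205 could be sketched as below. It assumes the five key points have already been detected by a key point algorithm; the OpenCV-based rotation, the fixed crop sizes, and the normalized face grid format are illustrative choices rather than the patent's exact implementation.

```python
# Sketch of face correction: measure the deflection angle from the eye line,
# rotate the frame upright, then crop eye/face patches and compute face mesh data.
import math
import cv2
import numpy as np

def correct_face(frame, left_eye, right_eye):
    """Rotate the frame so that the line through both eyes becomes horizontal."""
    (ax, ay), (bx, by) = left_eye, right_eye
    theta = math.degrees(math.atan2(by - ay, bx - ax))   # face deflection angle
    if abs(theta) < 1e-6:
        return frame                                      # already correct (S203)
    center = ((ax + bx) / 2.0, (ay + by) / 2.0)           # rotation center point y
    h, w = frame.shape[:2]
    m = cv2.getRotationMatrix2D(center, theta, 1.0)       # rotate around y (S204)
    return cv2.warpAffine(frame, m, (w, h))

def crop_patch(frame, center, size):
    """Cut a fixed-size square patch centered on a key point (S205)."""
    x, y = int(center[0]), int(center[1])
    half = size // 2
    return frame[max(y - half, 0):y + half, max(x - half, 0):x + half]

def face_grid(frame, face_box):
    """Face mesh data: position of the face image within the whole image (normalized)."""
    h, w = frame.shape[:2]
    left, top, right, bottom = face_box
    return (left / w, top / h, (right - left) / w, (bottom - top) / h)
```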
(2) Dimension reduction module.
The left eye image, the right eye image, the face image, and the face mesh data output by the face correction module may be input into the dimension reduction module. The dimension reduction module may be used to reduce the dimensions of the input left eye image, right eye image, face image, and face mesh data, so as to reduce the computational complexity of the convolutional network module and increase the speed of eye gaze recognition. The dimension reduction methods used by the dimension reduction module include, but are not limited to, principal component analysis (PCA), downsampling, 1×1 convolution kernels, and the like.
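A sketch of this dimension reduction step, using a 1×1 convolution followed by average-pool downsampling (two of the options listed above); the channel counts and input size are assumptions.

```python
# Sketch of dimension reduction with a 1x1 convolution and downsampling.
import torch
import torch.nn as nn

class Reducer(nn.Module):
    def __init__(self, in_channels=3, out_channels=1):
        super().__init__()
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)  # 1x1 kernel
        self.downsample = nn.AvgPool2d(kernel_size=2)                          # halve H and W

    def forward(self, x):
        return self.downsample(self.pointwise(x))

eye = torch.randn(1, 3, 64, 64)        # e.g. a cropped left-eye image
print(Reducer()(eye).shape)            # -> torch.Size([1, 1, 32, 32])
```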
(3) Convolutional network module.
The respective images (left-eye image, right-eye image, face image, and face mesh data) subjected to the dimension reduction process may be input to the convolutional network module. The convolutional network module may output an eye gaze location based on the input image. In the embodiment of the present application, the structure of the convolutional network in the convolutional network module may refer to fig. 12.
As shown in fig. 12, the convolutional network may include convolution set 1 (CONV1), convolution set 2 (CONV2), and convolution set 3 (CONV3). One convolution set includes: a convolution kernel (Convolution), the activation function PReLU, a pooling kernel (Pooling), and a local response normalization layer (Local Response Normalization, LRN). The convolution kernel of CONV1 is a 7×7 matrix and its pooling kernel is a 3×3 matrix; the convolution kernel of CONV2 is a 5×5 matrix and its pooling kernel is a 3×3 matrix; the convolution kernel of CONV3 is a 3×3 matrix and its pooling kernel is a 2×2 matrix.
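The composition of one convolution set can be sketched as follows, written in PyTorch for illustration; only the kernel and pooling sizes follow the text, while the channel counts and the LRN neighborhood size are assumptions.

```python
# Sketch of one convolution set: convolution kernel + PReLU + pooling + LRN.
import torch.nn as nn

def conv_set(in_ch, out_ch, conv_k, pool_k):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=conv_k),
        nn.PReLU(),
        nn.MaxPool2d(kernel_size=pool_k),
        nn.LocalResponseNorm(size=5),
    )

conv1 = conv_set(1, 32, conv_k=7, pool_k=3)   # CONV1: 7x7 kernel, 3x3 pooling
conv2 = conv_set(32, 64, conv_k=5, pool_k=3)  # CONV2: 5x5 kernel, 3x3 pooling
conv3 = conv_set(64, 128, conv_k=3, pool_k=2) # CONV3: 3x3 kernel, 2x2 pooling
```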
A separable convolution technique can be used to reduce the storage requirements of the convolution kernels (Convolution) and pooling kernels (Pooling), thereby reducing the storage space required by the whole model so that the model can be deployed on terminal devices.
Specifically, the separable convolution technique refers to decomposing an n×n matrix into an n×1 column matrix and a 1×n row matrix for storage, thereby reducing the storage space requirement. Therefore, the eye gaze recognition model used in the present application has a small size and is easy to deploy, making it suitable for terminal electronic devices such as mobile phones.
Specifically, referring to fig. 13, matrix A may represent a 3×3 convolution kernel. If matrix A is stored directly, it occupies 9 memory cells. If matrix A is instead split into a column matrix A1 and a row matrix A2 (column matrix A1 × row matrix A2 = matrix A), only 6 memory cells are needed to store A1 and A2.
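The storage saving of fig. 13 can be illustrated numerically with the sketch below; it assumes the 3×3 kernel is expressible as the outer product of a column matrix and a row matrix (i.e. it has rank 1), and the example values are arbitrary.

```python
# Storage illustration of the separable decomposition described above.
import numpy as np

a1 = np.array([[1.0], [2.0], [3.0]])      # 3x1 column matrix A1 (3 cells)
a2 = np.array([[0.5, 1.0, 1.5]])          # 1x3 row matrix A2 (3 cells)
A = a1 @ a2                               # reconstructed 3x3 kernel (would need 9 cells)

assert A.shape == (3, 3)
print(a1.size + a2.size, "cells stored instead of", A.size)   # 6 cells instead of 9
```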
After the processing of CONV1, CONV2, CONV3, different images can be input into different connection layers for full connection. As shown in fig. 12, the convolutional network may include a connection layer 1 (FC 1), a connection layer 2 (FC 2), and a connection layer 3 (FC 3).
The left eye image and the right eye image may be input into FC1 after passing through CONV1, CONV2, and CONV3. FC1 may include a combination module (concat), a convolution kernel 1201, PReLU, and a full connection module 1202, where concat is used to combine the left eye image and the right eye image. The face image may be input into FC2 after passing through CONV1, CONV2, and CONV3. FC2 may include a convolution kernel 1203, PReLU, a full connection module 1204, and a full connection module 1205; FC2 may perform two full connections on the face image. The face mesh data may be input into FC3 after passing through CONV1, CONV2, and CONV3. FC3 comprises a full connection module.
The connection layers with different structures are constructed for different types of images (such as left eye images, right eye images and face images), so that the characteristics of various images can be better acquired, the accuracy of a model is improved, and the terminal 100 can more accurately identify the eyeball fixation position of a user.
Then, the full connection module 1206 may perform a further full connection on the left eye image, right eye image, face image, and face mesh data features, and finally output the position coordinates of the eye gaze position. The eye gaze position indicates the abscissa and ordinate at which the user's line of sight focuses on the screen, with reference to the cursor point S shown in fig. 1. Further, when the eye gaze position is within the region of a control (an icon, a window, or the like), the terminal 100 may determine that the user is gazing at that control.
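A rough sketch of the connection layers and the final fusion, again in PyTorch for illustration; the feature dimensions are invented, and the convolution kernels 1201/1203 inside FC1 and FC2 are omitted for brevity.

```python
# Sketch of the fusion head: FC1 merges the eye features, FC2 processes the face
# features twice, FC3 processes the face mesh data, and a final full connection
# outputs the (x, y) gaze coordinates.
import torch
import torch.nn as nn

class GazeHead(nn.Module):
    def __init__(self, eye_dim=128, face_dim=128, grid_dim=64):
        super().__init__()
        self.fc1 = nn.Sequential(nn.Linear(2 * eye_dim, 128), nn.PReLU())   # FC1: merged eyes
        self.fc2 = nn.Sequential(nn.Linear(face_dim, 128), nn.PReLU(),
                                 nn.Linear(128, 64))                         # FC2: two full connections
        self.fc3 = nn.Linear(grid_dim, 64)                                   # FC3: face mesh data
        self.out = nn.Linear(128 + 64 + 64, 2)                               # final FC -> (x, y)

    def forward(self, left_eye, right_eye, face, grid):
        eyes = torch.cat([left_eye, right_eye], dim=1)   # concat (combination module)
        fused = torch.cat([self.fc1(eyes), self.fc2(face), self.fc3(grid)], dim=1)
        return self.out(fused)                           # position coordinates on the screen

head = GazeHead()
xy = head(torch.randn(1, 128), torch.randn(1, 128), torch.randn(1, 128), torch.randn(1, 64))
print(xy.shape)   # -> torch.Size([1, 2])
```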
In addition, the eye gaze recognition model used in the present application has relatively few convolutional neural network parameters. Therefore, the time required to compute and predict the user's eye gaze position using the model is short, i.e. the terminal 100 can quickly determine the eye gaze position of the user, and thus quickly determine whether the user is opening a commonly used application program or a commonly used interface through eye gaze control.
In the embodiments of the present application:
the first preset area and the second preset area can be any two areas which are different from each other in the upper left corner area, the upper right corner area, the lower left corner area and the lower right corner area of the screen;
referring to fig. 3A, the first interface and the fourth interface may be any two different interfaces of the main interfaces of the first desktop 31, the second desktop 32, the negative one-screen 30, and the like;
the second interface, the third interface, the fifth interface, and the sixth interface may be any one of the following interfaces: the payment interface shown in fig. 2G, the health code interface shown in fig. 2I, the health detection record interface indicated in fig. 5G, and various user common interfaces such as the riding code interface shown in the drawings;
taking the payment interface being set as private as an example, before the payment interface is displayed, referring to fig. 2D, the shortcut window 225 displayed on the first desktop 31 by the electronic device may be referred to as a first control; referring to fig. 3D, the icon 331 may also be referred to as a first control; alternatively, a shortcut of an application providing the payment interface may also be referred to as a first control.
Fig. 14 is a schematic system configuration diagram of a terminal 100 according to an embodiment of the present application.
The layered architecture divides the system into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the system is divided into five layers, from top to bottom: an application layer, an application framework layer (framework layer), a hardware abstraction layer, a driver layer, and a hardware layer.
The application layer may include a plurality of application programs, such as a dialing application, a gallery application, and the like. In the embodiment of the present application, the application layer further includes an eye gaze SDK (software development kit). The system of the terminal 100 and third-party applications installed on the terminal 100 can identify the user's eye gaze location by invoking the eye gaze SDK.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The framework layer includes some predefined functions. In embodiments of the present application, the framework layer may include a camera service interface, an eye gaze service interface. The camera service interface is used to provide an application programming interface and programming framework that uses a camera. The eye gaze service interface provides an application programming interface and programming framework that uses an eye gaze recognition model.
The hardware abstraction layer is an interface layer between the framework layer and the driver layer, and provides a virtual hardware platform for the operating system. In the embodiment of the present application, the hardware abstraction layer may include a camera hardware abstraction layer and an eye gaze process. The camera hardware abstraction layer may provide virtual hardware for camera device 1 (RGB camera), camera device 2 (TOF camera), or more cameras. The computation for recognizing the user's eye gaze position with the eye gaze recognition model is performed in the eye gaze process.
The driver layer is a layer between hardware and software. The driver layer includes drivers for various hardware. The driver layer may include a camera device driver. The camera device driver is used to drive the camera's sensor to acquire images and to drive the image signal processor to preprocess the images.
The hardware layer includes sensors and secure data buffers. The sensor comprises an RGB camera (namely a 2D camera) and a TOF camera (namely a 3D camera). The cameras included in the sensor are in one-to-one correspondence with virtual camera devices included in the camera hardware abstraction layer. The RGB camera may capture and generate 2D images. The TOF camera is a depth sensing camera and can acquire and generate a 3D image with depth information.
The data collected by the camera is stored in the secure data buffer. Any upper-layer process or application that needs the image data collected by the camera must obtain it from the secure data buffer and cannot obtain it in any other way. Therefore, the secure data buffer can also prevent the image data collected by the camera from being misused, which is why it is called a secure data buffer.
The software layers described above, and the modules or interfaces included in them, run in a rich execution environment (REE). The terminal 100 also includes a trusted execution environment (TEE). Data communication in the TEE is more secure than in the REE.
The TEE may include an eye gaze recognition algorithm module, a trusted application (TA) module, and a security service module. The eye gaze recognition algorithm module stores the executable code of the eye gaze recognition model. The TA may be used to securely send the recognition result output by the model to the eye gaze process. The security service module may be used to securely input the image data stored in the secure data buffer into the eye gaze recognition algorithm module.
The following describes, in detail, an interaction method based on eye gaze recognition in the embodiment of the present application with reference to the above hardware structure and system structure:
the terminal 100 detects that a trigger condition for turning on eye gaze recognition is satisfied. Thus, the terminal 100 may determine to perform an eye gaze recognition operation.
First, the terminal 100 may invoke an eye gaze service through the eye gaze SDK.
In one aspect, the eye gaze service may invoke the camera service of the framework layer, through which facial images of the user are captured and obtained. The camera service may send instructions to start the RGB camera and the TOF camera by invoking camera device 1 (RGB camera) and camera device 2 (TOF camera) in the camera hardware abstraction layer. The camera hardware abstraction layer sends these instructions to the camera device driver of the driver layer, which starts the cameras accordingly. The instruction sent by camera device 1 to the camera device driver is used to start the RGB camera; the instruction sent by camera device 2 is used to start the TOF camera. After being started, the RGB camera and the TOF camera collect optical signals, and the image signal processor generates two-dimensional and three-dimensional images from the resulting electrical signals.
On the other hand, the eye gaze service may create an eye gaze process and initialize the eye gaze recognition model.
The images (two-dimensional and three-dimensional) generated by the image signal processor may be stored in the secure data buffer. After the eye gaze process is created and initialized, the image data stored in the secure data buffer may be transferred to the eye gaze recognition algorithm module via the secure transmission channel (TEE) provided by the security service. After receiving the image data, the eye gaze recognition algorithm module may input it into the CNN-based eye gaze recognition model, thereby determining the user's eye gaze position. Then, the TA securely returns this eye gaze position to the eye gaze process, which then returns it to the application-layer eye gaze SDK via the camera service and the eye gaze service.
Finally, the eye gaze SDK can determine, according to the received eye gaze position, the region or the control (icon, window, or the like) that the user is gazing at, and further determine the display action associated with that region or control.
Fig. 15 shows a hardware configuration diagram of the terminal 100.
The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the terminal 100. In other embodiments of the present application, terminal 100 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present invention is only illustrative, and does not limit the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the terminal 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied on the terminal 100. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of terminal 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that terminal 100 may communicate with a network and other devices via wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
Terminal 100 implements display functions via a GPU, display 194, and application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In the embodiment of the present application, the terminal 100 displays the user interfaces shown in fig. 2A-2I, fig. 3A-3E, fig. 4A-4D, fig. 5A-5M, fig. 6A-6I, fig. 7A-7C through the display functions provided by the GPU, the display screen 194, and the application processor.
The terminal 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like. In the present embodiment, the camera 193 includes an RGB camera (2D camera) that generates a two-dimensional image and a TOF camera (3D camera) that generates a three-dimensional image.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also perform algorithm optimization on noise and brightness of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, terminal 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the terminal 100 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
In the embodiment of the present application, the terminal 100 captures a facial image of a user through the photographing capability provided by the ISP, the camera 193. The terminal 100 may perform an eye gaze recognition algorithm through the NPU, and further recognize the eye gaze position of the user through the acquired face image of the user.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (dynamic random access memory, DRAM), synchronous dynamic random-access memory (synchronous dynamic random access memory, SDRAM), double data rate synchronous dynamic random-access memory (double data rate synchronous dynamic random access memory, DDR SDRAM, e.g., fifth generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc. The nonvolatile memory may include a disk storage device, a flash memory (flash memory).
The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like. The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
In embodiments of the present application, the application code of the eye gaze SDK may be stored in a non-volatile memory. When the eyeball fixation service is invoked by running the eyeball fixation SDK, application program codes of the eyeball fixation SDK may be loaded into the random access memory. Data generated when the above code is executed may also be stored in the random access memory.
The external memory interface 120 may be used to connect an external nonvolatile memory to realize the memory capability of the extension terminal 100. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external nonvolatile memory.
The terminal 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The terminal 100 can play music or conduct hands-free calls through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the terminal 100 receives a telephone call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. In the embodiment of the present application, in the off-screen or off-screen AOD state, the terminal 100 may acquire audio signals in the environment through the microphone 170C, so as to determine whether the user's voice wake-up word is detected. When making a call or sending voice information, the user can speak close to the microphone 170C, inputting a sound signal into the microphone 170C. The earphone interface 170D is used to connect a wired earphone.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. The gyro sensor 180B may be used to determine angular velocities of the terminal 100 about three axes (i.e., x, y, and z axes), and thus determine a motion gesture of the terminal 100. The acceleration sensor 180E may detect the magnitude of acceleration of the terminal 100 in various directions (typically three axes). Therefore, the acceleration sensor 180E may be used to recognize the posture of the terminal 100. In the embodiment of the present application, in the off-screen or off-screen AOD state, the terminal 100 may detect whether the user picks up the mobile phone through the acceleration sensor 180E and the gyro sensor 180B, so as to determine whether to light the screen.
The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a Hall sensor. The terminal 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. Thus, in some embodiments, when the terminal 100 is a flip (clamshell) phone, the terminal 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, thereby determining whether to light up the screen.
The distance sensor 180F is used to measure distance. The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector. The terminal 100 may use the proximity light sensor 180G to detect a scenario in which the user holds the terminal 100 close to the body, such as a call with the receiver held to the ear. The ambient light sensor 180L is used to sense the ambient light level. The terminal 100 may adaptively adjust the brightness of the display 194 according to the perceived ambient light level.
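The ambient-light-driven brightness adjustment can be sketched with the standard Android light sensor and window attributes, as below. The lux-to-brightness mapping is an assumption for illustration; the terminal's actual adaptive algorithm is not specified here.

```java
import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.view.WindowManager;

// Sketch only: read the ambient light sensor and adjust the brightness of the
// current window accordingly.
public class AutoBrightness implements SensorEventListener {
    private final Activity activity;
    private final SensorManager sensorManager;

    public AutoBrightness(Activity activity) {
        this.activity = activity;
        this.sensorManager = (SensorManager) activity.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor light = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
        if (light != null) {
            sensorManager.registerListener(this, light, SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float lux = event.values[0];
        // Illustrative mapping from illuminance (lux) to window brightness in [0.05, 1.0].
        float brightness = Math.max(0.05f, Math.min(1.0f, lux / 1000f));
        WindowManager.LayoutParams params = activity.getWindow().getAttributes();
        params.screenBrightness = brightness;
        activity.getWindow().setAttributes(params);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```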
The fingerprint sensor 180H is used to collect a fingerprint. The terminal 100 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access application locking and other functions. The temperature sensor 180J is for detecting temperature. The bone conduction sensor 180M may acquire a vibration signal.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen". The touch sensor 180K is configured to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal 100 at a location different from that of the display 194.
In the embodiments of the present application, the terminal 100 detects, through the touch sensor 180K, whether there is a user operation, such as a click or a slide, acting on the screen. Based on the user operation on the screen detected by the touch sensor 180K, the terminal 100 can determine the actions to be performed subsequently, such as running a certain application and displaying an interface of that application.
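A minimal sketch of how a touch operation detected on the screen might be classified and acted upon, using the standard Android MotionEvent API, is given below; the mapping from touch events to subsequent actions is an illustrative assumption, not the patent's implementation.

```java
import android.content.Context;
import android.view.MotionEvent;
import android.view.View;

// Sketch only: classify the touch operation reported by the touch sensor and
// decide what the application should do next.
public class GestureAwareView extends View {

    public GestureAwareView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                // Finger touched the screen: a click may be starting.
                break;
            case MotionEvent.ACTION_MOVE:
                // Finger is sliding: treat as a slide/scroll operation.
                break;
            case MotionEvent.ACTION_UP:
                // Finger lifted: the operation is complete, so the application
                // can now run an application, display one of its interfaces, etc.
                performClick();
                break;
        }
        return true;
    }

    @Override
    public boolean performClick() {
        return super.performClick();   // keep accessibility semantics intact
    }
}
```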
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The motor 191 may generate a vibration alert. The motor 191 may be used for an incoming-call vibration alert as well as for touch vibration feedback. The indicator 192 may be an indicator light, which may be used to indicate the charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. The terminal 100 may support 1 or N SIM card interfaces.
The term "User Interface (UI)" in the description and claims of the present application and in the drawings is a media interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user. The user interface of the application program is source code written in a specific computer language such as java, extensible markup language (extensible markup language, XML) and the like, the interface source code is analyzed and rendered on the terminal equipment, and finally the interface source code is presented as content which can be identified by a user, such as a picture, characters, buttons and the like. Controls (controls), also known as parts (widgets), are basic elements of a user interface, typical controls being toolbars (toolbars), menu bars (menu bars), text boxes (text boxes), buttons (buttons), scroll bars (scrollbars), pictures and text. The properties and content of the controls in the interface are defined by labels or nodes, such as XML specifies the controls contained in the interface by nodes of < Textview >, < ImgView >, < VideoView >, etc. One node corresponds to a control or attribute in the interface, and the node is rendered into visual content for a user after being analyzed and rendered. In addition, many applications, such as the interface of a hybrid application (hybrid application), typically include web pages. A web page, also referred to as a page, is understood to be a special control embedded in an application program interface, and is source code written in a specific computer language, such as hypertext markup language (hyper text markup language, GTML), cascading style sheets (cascading style sheets, CSS), java script (JavaScript, JS), etc., and the web page source code may be loaded and displayed as user-recognizable content by a browser or web page display component similar to the browser function. The specific content contained in a web page is also defined by tags or nodes in the web page source code, such as GTML defines elements and attributes of the web page by < p >, < img >, < video >, < canvas >.
A commonly used presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface, related to computer operations, that is displayed in a graphical manner. A GUI may be an interface element such as an icon, a window, or a control displayed on the display screen of the terminal device, where a control may include visible interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
As used in the specification and the appended claims, the singular forms "a", "an", "the", and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items. As used in the above embodiments, the term "when …" may be interpreted as "if …", "after …", "in response to determining …", or "in response to detecting …", depending on the context. Similarly, the phrase "when it is determined …" or "if (a stated condition or event) is detected" may be interpreted as "if it is determined …", "in response to determining …", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)", depending on the context.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk), or the like.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Claims (24)
1. A display method applied to an electronic device, the electronic device comprising a screen, characterized in that the screen of the electronic device comprises a first preset area, the method comprising:
displaying a first interface;
when the first interface is displayed, acquiring, by the electronic device, a first image;
determining a first eyeball gaze area of a user based on the first image, the first eyeball gaze area being used to indicate a screen area at which the user gazes when the user looks at the screen;
and displaying a second interface when the first eyeball gaze area is within the first preset area.
2. The method of claim 1, wherein the screen of the electronic device includes a second preset area, the second preset area being different from the first preset area, the method further comprising:
determining a second eyeball gaze area of the user based on the first image, wherein the position of the second eyeball gaze area on the screen is different from the position of the first eyeball gaze area on the screen;
and displaying a third interface when the second eyeball gaze area is within the second preset area, wherein the third interface is different from the second interface.
3. The method of claim 2, wherein the second interface and the third interface are interfaces provided by the same application, or the second interface and the third interface are interfaces provided by different applications.
4. A method according to any one of claims 1-3, characterized in that the method further comprises:
displaying a fourth interface;
when the fourth interface is displayed, acquiring, by the electronic device, a second image;
determining a third eyeball gaze area of the user based on the second image;
and displaying a fifth interface when the third eyeball gaze area is within the first preset area, wherein the fifth interface is different from the second interface.
5. The method of claim 1, wherein displaying the second interface when the first eyeball gaze area is within the first preset area comprises: displaying the second interface when the first eyeball gaze area is within the first preset area and the duration of gazing at the first preset area is a first duration.
6. The method of claim 5, wherein the method further comprises: displaying a sixth interface when the first eyeball gaze area is within the first preset area and the duration of gazing at the first preset area is a second duration.
7. The method according to any one of claims 1-6, wherein the first eyeball gaze area is a cursor point constituted by one display unit on the screen, or the first eyeball gaze area is a cursor point or a cursor area constituted by a plurality of display units on the screen.
8. The method of claim 2, wherein the second interface is a non-privacy interface, the method further comprising: displaying an interface to be unlocked; when the interface to be unlocked is displayed, acquiring, by the electronic device, a third image;
determining a fourth eyeball gaze area of the user based on the third image;
and displaying the second interface when the fourth eyeball gaze area is within the first preset area.
9. The method of claim 8, wherein the third interface is a privacy interface, the method further comprising:
and not displaying the third interface when the fourth eyeball gaze area is within the second preset area.
10. The method of claim 2, wherein the second interface and the third interface are both privacy interfaces, and the electronic device does not enable the camera to acquire images when the interface to be unlocked is displayed.
11. The method of any of claims 8-10, wherein the second preset area of the first interface displays a first control for indicating that the second preset area is associated with the third interface.
12. The method of claim 11, wherein the second preset area of the interface to be unlocked does not display the first control.
13. The method of claim 11 or 12, wherein the first control is any one of: the thumbnail of the first interface, the icon of the application program corresponding to the first interface, and the function icon indicating the service provided by the first interface.
14. The method of claim 1, wherein the duration for which the electronic device acquires images is a first preset duration; and the acquiring, by the electronic device, of the first image specifically comprises: acquiring, by the electronic device, the first image within the first preset duration.
15. The method of claim 14, wherein the first preset duration is the first 3 seconds of displaying the first interface.
16. The method of claim 1, wherein the electronic device comprises a camera module, and the electronic device acquires the first image through the camera module, the camera module comprising: at least one 2D camera and at least one 3D camera, wherein the 2D camera is configured to acquire a two-dimensional image and the 3D camera is configured to acquire an image including depth information; and the first image comprises the two-dimensional image and the image including depth information.
17. The method of claim 16, wherein the first image acquired by the camera module is stored in a secure data buffer,
before determining the first eyeball gaze area of the user based on the first image, the method further comprises:
acquiring the first image from the secure data buffer in a trusted execution environment.
18. The method of claim 17, wherein the secure data buffer is provided at a hardware layer of the electronic device.
19. The method of claim 1, wherein determining the first eyeball gaze area of the user based on the first image specifically comprises:
determining feature data of the first image, the feature data including one or more of a left eye image, a right eye image, a face image, and face mesh data;
and determining the first eyeball gaze area indicated by the feature data by using an eyeball gaze recognition model, wherein the eyeball gaze recognition model is established based on a convolutional neural network.
20. The method of claim 19, wherein determining the feature data of the first image specifically comprises:
performing face correction on the first image to obtain a face-corrected first image;
and determining the feature data of the first image based on the face-corrected first image.
21. The method of claim 4, wherein the first interface is any one of a first desktop, a second desktop, or a negative screen; the fourth interface is any one of a first desktop, a second desktop or a negative screen, and is different from the first interface.
22. The method of claim 4, wherein the association between the first preset area and the second interface and the association between the first preset area and the fifth interface are set by the user.
23. An electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the method of any of claims 1-22 to be performed.
24. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the method of any one of claims 1-22 to be performed.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311698136.XA CN118312035A (en) | 2022-05-20 | 2022-06-30 | Display method and electronic equipment |
PCT/CN2023/095396 WO2023222130A1 (en) | 2022-05-20 | 2023-05-19 | Display method and electronic device |
PCT/CN2023/095373 WO2023222125A1 (en) | 2022-05-20 | 2023-05-19 | Method and apparatus for opening floating notifications |
EP23807074.2A EP4390631A1 (en) | 2022-05-20 | 2023-05-19 | Method and apparatus for opening floating notifications |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210549347 | 2022-05-20 | ||
CN2022105493476 | 2022-05-20 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311698136.XA Division CN118312035A (en) | 2022-05-20 | 2022-06-30 | Display method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116048243A true CN116048243A (en) | 2023-05-02 |
CN116048243B CN116048243B (en) | 2023-10-20 |
Family
ID=86118708
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210761048.9A Active CN116048243B (en) | 2022-05-20 | 2022-06-30 | Display method and electronic equipment |
CN202311698136.XA Pending CN118312035A (en) | 2022-05-20 | 2022-06-30 | Display method and electronic equipment |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311698136.XA Pending CN118312035A (en) | 2022-05-20 | 2022-06-30 | Display method and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN116048243B (en) |
WO (1) | WO2023222130A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023222130A1 (en) * | 2022-05-20 | 2023-11-23 | 荣耀终端有限公司 | Display method and electronic device |
CN117707449A (en) * | 2023-05-19 | 2024-03-15 | 荣耀终端有限公司 | Display control method and related equipment |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103324290A (en) * | 2013-07-04 | 2013-09-25 | 深圳市中兴移动通信有限公司 | Terminal equipment and eye control method thereof |
CN104915099A (en) * | 2015-06-16 | 2015-09-16 | 努比亚技术有限公司 | Icon sorting method and terminal equipment |
CN105843383A (en) * | 2016-03-21 | 2016-08-10 | 努比亚技术有限公司 | Application starting device and application starting method |
CN105867603A (en) * | 2015-12-08 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Eye-controlled method and device |
DE202017101642U1 (en) * | 2017-03-21 | 2017-08-22 | Readio Gmbh | Application software for a mobile, digital terminal |
CN107608514A (en) * | 2017-09-20 | 2018-01-19 | 维沃移动通信有限公司 | Information processing method and mobile terminal |
CN107977586A (en) * | 2017-12-22 | 2018-05-01 | 联想(北京)有限公司 | Display content processing method, the first electronic equipment and the second electronic equipment |
CN209805952U (en) * | 2019-07-23 | 2019-12-17 | 北京子乐科技有限公司 | Camera controlling means and smart machine |
CN111131594A (en) * | 2018-10-30 | 2020-05-08 | 奇酷互联网络科技(深圳)有限公司 | Method for displaying notification content, intelligent terminal and storage medium |
CN111737775A (en) * | 2020-06-23 | 2020-10-02 | 广东小天才科技有限公司 | Privacy peep-proof method and intelligent equipment based on user eyeball tracking |
CN112487888A (en) * | 2020-11-16 | 2021-03-12 | 支付宝(杭州)信息技术有限公司 | Image acquisition method and device based on target object |
CN112597469A (en) * | 2015-03-31 | 2021-04-02 | 华为技术有限公司 | Mobile terminal privacy protection method and device and mobile terminal |
WO2021238373A1 (en) * | 2020-05-26 | 2021-12-02 | 华为技术有限公司 | Method for unlocking by means of gaze and electronic device |
CN114445903A (en) * | 2020-11-02 | 2022-05-06 | 北京七鑫易维信息技术有限公司 | Screen-off unlocking method and device |
CN114466102A (en) * | 2021-08-12 | 2022-05-10 | 荣耀终端有限公司 | Method for displaying application interface, electronic equipment and traffic information display system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105338192A (en) * | 2015-11-25 | 2016-02-17 | 努比亚技术有限公司 | Mobile terminal and operation processing method thereof |
CN115981582B (en) * | 2020-09-10 | 2024-05-14 | 华为技术有限公司 | Display method and electronic equipment |
CN116048243B (en) * | 2022-05-20 | 2023-10-20 | 荣耀终端有限公司 | Display method and electronic equipment |
2022
- 2022-06-30 CN CN202210761048.9A patent/CN116048243B/en active Active
- 2022-06-30 CN CN202311698136.XA patent/CN118312035A/en active Pending
2023
- 2023-05-19 WO PCT/CN2023/095396 patent/WO2023222130A1/en unknown
Non-Patent Citations (3)
Title |
---|
TARO ICHII et al.: "TEllipsoid: Ellipsoidal Display for Videoconference System Transmitting Accurate Gaze Direction", IEEE *
WU Wei: "Color Design of the Human-Machine Interface of Electronic Devices Based on Visual Communication" (基于视觉传达的电子设备人机界面色彩设计), Modern Electronics Technique (现代电子技术), no. 10 *
ZHANG Li: "Panoramic Data Acquisition and Processing Based on Ladybug" (基于Ladybug的全景数据采集处理), Standardization of Surveying and Mapping (测绘标准化), no. 04 *
Also Published As
Publication number | Publication date |
---|---|
WO2023222130A9 (en) | 2024-02-15 |
WO2023222130A1 (en) | 2023-11-23 |
CN116048243B (en) | 2023-10-20 |
CN118312035A (en) | 2024-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021213164A1 (en) | Application interface interaction method, electronic device, and computer readable storage medium | |
WO2021129326A1 (en) | Screen display method and electronic device | |
US20240179237A1 (en) | Screenshot Generating Method, Control Method, and Electronic Device | |
WO2020029306A1 (en) | Image capture method and electronic device | |
WO2021104485A1 (en) | Photographing method and electronic device | |
CN112751954B (en) | Operation prompting method and electronic equipment | |
CN112506386A (en) | Display method of folding screen and electronic equipment | |
WO2021036770A1 (en) | Split-screen processing method and terminal device | |
EP4199499A1 (en) | Image capture method, graphical user interface, and electronic device | |
WO2021013132A1 (en) | Input method and electronic device | |
WO2021180089A1 (en) | Interface switching method and apparatus and electronic device | |
CN113949803B (en) | Photographing method and electronic equipment | |
CN116048243B (en) | Display method and electronic equipment | |
CN112930533A (en) | Control method of electronic equipment and electronic equipment | |
CN113641271A (en) | Application window management method, terminal device and computer readable storage medium | |
WO2022022406A1 (en) | Always-on display method and electronic device | |
EP4394560A1 (en) | Display method and electronic device | |
WO2022135273A1 (en) | Method for invoking capabilities of other devices, electronic device, and system | |
WO2022078116A1 (en) | Brush effect picture generation method, image editing method and device, and storage medium | |
CN114173005B (en) | Application layout control method and device, terminal equipment and computer readable storage medium | |
CN114173165A (en) | Display method and electronic equipment | |
WO2022222705A1 (en) | Device control method and electronic device | |
WO2024109573A1 (en) | Method for floating window display and electronic device | |
WO2024139257A1 (en) | Method for displaying interfaces of application programs and electronic device | |
WO2024114212A1 (en) | Cross-device focus switching method, electronic device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||