Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
While the concepts of the present application are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the description above is not intended to limit the application to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.
Reference in the specification to "one embodiment," "an embodiment," "a particular embodiment," or the like, means that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, where a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In addition, it should be understood that items included in a list in the form "at least one of A, B, and C" may mean: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Likewise, a list of items in the form "at least one of A, B, or C" may mean: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
In some cases, the disclosed embodiments may be implemented as hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be executed by one or more processors. A machine-readable storage medium may be implemented as a storage device, mechanism, or other physical structure (e.g., a volatile or non-volatile memory, a media disc, or another physical structure) for storing or transmitting information in a form readable by a machine.
In the drawings, some structural or methodical features may be shown in a particular arrangement and/or ordering. It should be appreciated, however, that such a specific arrangement and/or ordering may not be required. Rather, in some embodiments, such features may be arranged in ways and/or orders different from those shown in the figures. Moreover, the inclusion of structural or methodical features in particular figures is not meant to imply that such features are required in all embodiments; in some embodiments, such features may not be included, or may be combined with other features.
The embodiment of the application provides a data processing scheme, which can acquire a gaze point of a user according to eyeball data of the user; and, in the case that the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, broadcast the content of a notification by voice.
In the embodiment of the application, the gaze point can be used to represent the position of the user's line of sight. The screen may refer to a screen of the terminal. A notification is a notification with a global effect, shown at the top of the screen. The notification may be generated for information sources such as information received by an application, information pushed by an application, or a task status message. For example, the information received by an application may include: information sent by a communication peer; the information pushed by an application may include: reminder information pushed by a navigation application, such as "fatigue driving, please pay attention to safe driving" or "you have been driving for 4 hours continuously, it is advised that you rest at a nearby service area"; the task status message may include: the download progress or installation progress of an application, and the like. It is understood that the embodiment of the present application does not impose a limitation on the specific information source corresponding to the notification.
The embodiment of the application can judge whether the attention of the user is on the screen by judging whether the gaze point is on the screen and how long the gaze point stays on the screen. Specifically, if the stay time of the gaze point on the screen exceeds a first time threshold, it may be considered that the user's attention is on the screen; conversely, if the gaze point is not on the screen, or the stay time of the gaze point on the screen does not exceed the first time threshold, it may be considered that the user's attention is not on the screen.
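For illustration only, this attention judgment can be sketched as follows. The sampling format, the helper name, and the 500 ms value are assumptions of the sketch rather than requirements of the embodiment.

```python
FIRST_TIME_THRESHOLD_MS = 500  # illustrative value; see the 500 ms example below

def attention_on_screen(gaze_samples, now_ms):
    """gaze_samples: list of (timestamp_ms, on_screen) tuples, newest last."""
    if not gaze_samples or not gaze_samples[-1][1]:
        return False  # the gaze point is currently not on the screen
    # Find the start of the most recent continuous on-screen period.
    dwell_start = now_ms
    for ts, on_screen in reversed(gaze_samples):
        if not on_screen:
            break
        dwell_start = ts
    # Attention is on the screen only if the stay time exceeds the threshold.
    return (now_ms - dwell_start) > FIRST_TIME_THRESHOLD_MS
```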
According to the embodiment of the application, the processing manner for the notification can be determined according to the judgment result of whether the attention of the user is on the screen, so that the notification can be handled more appropriately. Specifically, in the embodiment of the present application, when the gaze point of the user is not on the screen, or the stay time of the gaze point of the user on the screen does not exceed the first time threshold, it may be considered that the attention of the user is not on the screen; in this case, the content of the notification is broadcast by voice, so that the user may be prevented, to a certain extent, from missing the information corresponding to the notification.
The application scenarios of the embodiment of the application may include: driving scenarios, work scenarios, and the like. Taking a driving scenario as an example, in order to drive safely, a user needs to observe the road conditions around the vehicle, so the user's attention is on the driving behavior rather than on the screen of the terminal, and the information corresponding to a notification is easily missed. In the embodiment of the application, in the case that the attention of the user is not on the screen, the content of the notification is broadcast by voice, which can prevent the user, to a certain extent, from missing the information corresponding to the notification. It can be understood that the driving scenario and the work scenario are only examples of application scenarios; in fact, a person skilled in the art may apply the embodiment of the present application to any application scenario according to actual application requirements, and the embodiment of the present application is not limited to a specific application scenario.
The data processing method provided by the embodiment of the present application can be applied to the application environment shown in fig. 1. As shown in fig. 1, the client 100 and the server 200 are located in a wired or wireless network, and the client 100 and the server 200 perform data interaction through the wired or wireless network.
Optionally, the client may run on a terminal; for example, the client may be an APP running on the terminal, such as a navigation APP, an e-commerce APP, an instant messaging APP, an input method APP, or an APP provided by the operating system, and the embodiment of the present application does not limit the specific APP corresponding to the client. Optionally, the terminal may specifically include, but is not limited to: smart phones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, vehicle-mounted terminals, desktop computers, set-top boxes, smart televisions, wearable devices, and the like. It is to be understood that the embodiments of the present application are not limited to specific devices.
Examples of the vehicle-mounted terminal may include: a HUD (Head Up Display), which is generally installed in front of the driver and can provide necessary driving information during driving, such as vehicle speed, fuel consumption, navigation, and even mobile phone incoming calls, message reminders, and the like; in other words, the HUD integrates multiple functions, making it convenient for the driver to keep attention on the road conditions while driving.
Method embodiment one
Referring to fig. 2, a flowchart illustrating steps of a first embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
step 201, acquiring a gaze point of a user according to eyeball data of the user;
step 202, if the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of a notification by voice.
In step 201, when the user observes external objects with the eyes, the eyeballs are usually in motion, for example moving upward, downward, to the left or to the right, the eyes opening and closing, or the eyes looking straight ahead.
The gaze point may be used to characterize the position of the user's line of sight. In practical applications, the gaze point may include: the orientation of the user's line of sight in three-dimensional space, for example expressed as a vector.
According to the embodiment of the application, the gaze point of the user can be obtained by an eyeball tracking method. The eyeball tracking method may specifically include: tracking according to characteristic changes of the eyeball and the area around the eyeball; tracking according to changes of the iris angle; or actively projecting a beam of light, such as infrared light, onto the iris, extracting features from the reflection, and tracking according to the extracted features.
An example of obtaining the gaze point of a user based on eyeball data of the user is provided here. This example actively projects a beam of infrared light onto the iris to extract features. Specifically, a low-power infrared beam can be used to illuminate the eyeball of the user, a sensor can capture the light reflected by different parts such as the pupil, the iris, and the cornea, and the user's gaze point can be determined after analysis by a suitable algorithm.
In an embodiment of the present application, an eyeball tracking apparatus is configured to obtain the gaze point of the user according to the eyeball data of the user. The eyeball tracking apparatus may comprise: an illumination light source and a camera module. The illumination light source may include: an infrared or near-infrared LED (Light Emitting Diode), or a group of LEDs, for illuminating the eye and projecting a fixed pattern onto the eye (usually a single pattern, such as a circle, a trapezoid, or a slightly more complex pattern); the camera module is used for photographing different parts of the eye, such as the pupil, the iris, and the cornea, so as to capture the light of the pattern reflected by these parts. A vector connecting the pupil center and the center of the corneal reflection spot can thereby be obtained, and the gaze point of the user can be calculated from this vector in combination with an algorithm.
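As an illustration only, one common way to turn the pupil-center-to-corneal-reflection vector into a gaze estimate is a per-user polynomial calibration fitted while the user looks at known targets; the sketch below assumes such a calibration and is not the specific algorithm of the embodiments.

```python
import numpy as np

def gaze_from_pupil_and_glint(pupil_xy, glint_xy, calib):
    """Map the pupil-to-corneal-reflection vector to an estimated gaze point.

    calib: a 2x6 matrix of a per-user polynomial calibration, fitted beforehand
    from samples where the user looked at known targets (an assumption of this
    sketch, not the algorithm claimed by the embodiments).
    """
    dx, dy = np.asarray(pupil_xy, dtype=float) - np.asarray(glint_xy, dtype=float)
    features = np.array([1.0, dx, dy, dx * dy, dx * dx, dy * dy])
    return calib @ features  # estimated gaze point, e.g. in screen coordinates
```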
The eyeball tracking apparatus may be provided in a separate terminal, such as a head-mounted terminal. The eyeball tracking apparatus may be disposed inside the head-mounted terminal; for example, if the head-mounted terminal is a pair of smart glasses, the eyeball tracking apparatus may be disposed inside the frame of the smart glasses at positions close to the eyes, for example at the positions inside the frame corresponding to the upper left corner and the upper right corner of the eyes, respectively. Alternatively, the eyeball tracking apparatus may be disposed outside the head-mounted terminal and near the eyes.
Of course, the eyeball tracking apparatus may also be integrated into the terminal that executes the method of the embodiment of the application; for example, if the terminal that executes the method of the embodiment of the application is a vehicle-mounted terminal, the eyeball tracking apparatus may be integrated inside the vehicle-mounted terminal or arranged outside it. It is to be understood that the specific configuration of the eyeball tracking apparatus is not limited in the embodiments of the present application.
In step 202, if the gaze point of the user is not on the screen or the staying time of the gaze point of the user on the screen does not exceed the first time threshold, it may be determined that the attention of the user is not on the screen, and in this case, the content of the notification is broadcasted by voice, so that the user may be prevented from missing the information corresponding to the notification to a certain extent.
In practical applications, TTS (text to speech) technology may be used to convert the content of the notification into a target speech and play the target speech.
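A minimal sketch of this TTS step is given below, using the pyttsx3 library as one possible offline speech engine; the choice of engine is an assumption of the sketch, since the embodiments do not name a specific TTS implementation.

```python
# Sketch: convert the content of the notification into speech and play it.
import pyttsx3

def broadcast_notification(content: str) -> None:
    engine = pyttsx3.init()   # initialize a local speech engine
    engine.say(content)       # queue the notification content as target speech
    engine.runAndWait()       # block until the voice broadcast has finished
```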
In one embodiment of the present application, it may be determined whether the user's gaze point is on the screen. Optionally, the intersection point of the vector corresponding to the gaze point with the plane in which the screen lies may be considered: if the intersection point is located on the screen, it may be determined that the user's gaze point is on the screen; otherwise, if the intersection point is not located on the screen, it may be determined that the user's gaze point is not on the screen. In practical applications, a set of coordinate points corresponding to the screen may be determined; this set may be the set of coordinate points of the rectangle corresponding to the screen, where the rectangle may be determined by the size of the screen or by the size of the display area of the screen. The display area of the screen may be the area that remains after the bezel of the screen is removed. It can be understood that the embodiment of the present application does not impose any limitation on the specific process of determining whether the gaze point of the user is on the screen.
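The geometric test described above can be illustrated as a ray-plane intersection followed by a point-in-rectangle check. The sketch below is only illustrative; the coordinate conventions (a screen corner plus two perpendicular edge vectors) are assumptions of the sketch.

```python
import numpy as np

def gaze_point_on_screen(eye_pos, gaze_dir, screen_corner, edge_u, edge_v):
    """Return True if the gaze ray from eye_pos along gaze_dir hits the screen.

    screen_corner: one corner of the screen rectangle in 3-D.
    edge_u, edge_v: perpendicular edge vectors spanning the screen rectangle.
    """
    normal = np.cross(edge_u, edge_v)
    denom = np.dot(normal, gaze_dir)
    if abs(denom) < 1e-9:
        return False  # gaze direction is parallel to the screen plane
    t = np.dot(normal, screen_corner - eye_pos) / denom
    if t <= 0:
        return False  # the screen plane is behind the user
    hit = eye_pos + t * gaze_dir          # intersection with the screen plane
    rel = hit - screen_corner
    a = np.dot(rel, edge_u) / np.dot(edge_u, edge_u)
    b = np.dot(rel, edge_v) / np.dot(edge_v, edge_v)
    return 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0  # inside the screen rectangle
```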
The first time threshold may be used to characterize a threshold for the gaze time that corresponds to attention being concentrated on an object. Specifically, if the gaze time exceeds the first time threshold, it may be considered that the user's attention is focused on the object; otherwise, if the gaze time does not exceed the first time threshold, it may be considered that the user's attention is not focused on the object. In the embodiments of the present application, the object may be the screen. For example, if the user keeps watching the road while driving and only glances at the screen, the gaze time of the user on the screen does not exceed the first time threshold, and the user's attention may therefore be considered not to be on the screen.
The first time threshold can be determined by a person skilled in the art, or by the user, according to actual application requirements. Optionally, a setting interface may be provided for the user, and the first time threshold set through the setting interface may be received, so that a first time threshold meeting the personalized requirements of the user can be obtained, which can further improve the accuracy of determining whether the user's attention is on the screen. For example, the first time threshold may be 500 ms (milliseconds); it is understood that the specific first time threshold is not limited by the embodiments of the present application.
In the embodiment of the application, the notification is a notification with a global effect and is displayed at the top of the screen. The notification may be generated for information sources such as information received by the application, information pushed by the application, or task status messages. The notification in step 202 may be a pending notification.
In an alternative embodiment of the present application, step 201 may not have a trigger condition, but may be performed periodically, that is, the gaze point of the user may be acquired periodically according to eyeball data of the user.
In another optional embodiment of the present application, step 201 may have a trigger condition, and the trigger condition may specifically be: an unprocessed notification is detected. That is, in step 201, the gaze point of the user may be obtained according to the eyeball data of the user when the trigger condition is met.
In the case that the gaze point of the user is not on the screen, or the stay time of the gaze point of the user on the screen does not exceed the first time threshold, the processing manner for the notification in the embodiment of the present application may include: broadcasting the content of the notification by voice. It can be understood that, in this case, the processing manner for the notification in the embodiment of the present application may further include: displaying the content of the notification for the user to view, and the like. It is to be understood that the embodiment of the present application does not impose a limitation on the specific processing manner of the notification.
In summary, in the data processing method according to the embodiment of the present application, when the gaze point of the user is not on the screen or the retention time of the gaze point of the user on the screen does not exceed the first time threshold, it may be considered that the attention of the user is not on the screen, and in this case, the content of the notification is broadcasted by voice, so that the user may be prevented from missing the information corresponding to the notification to a certain extent.
The application scenarios of the embodiment of the application may include: driving scenarios, work scenarios, and the like. Taking a driving scenario as an example, in order to drive safely, a user needs to observe the road conditions around the vehicle, so the user's attention is on the driving behavior rather than on the screen of the terminal, and the information corresponding to a notification is easily missed; in the embodiment of the application, in the case that the attention of the user is not on the screen, the content of the notification is broadcast by voice, which can prevent the user, to a certain extent, from missing the information corresponding to the notification.
Method embodiment two
Referring to fig. 3, a flowchart illustrating steps of a second embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
step 301, acquiring a gaze point of a user according to eyeball data of the user;
step 302, if the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, starting a voice broadcast of the content of a notification;
Compared with the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
step 303, after the voice broadcast of the content of the notification is started, if the stay time of the gaze point of the user on the screen exceeds the first time threshold, stopping the voice broadcast of the content of the notification once the content before a preset symbol in the content of the notification has been broadcast by voice.
After the voice broadcast of the content of the notification is started, if the stay time of the user's gaze point on the screen exceeds the first time threshold, the user's attention may be considered to be on the screen. Since the user can read the content of the notification when the user's attention is on the screen, the embodiment of the application can stop the voice broadcast of the content of the notification once the content before the preset symbol has been broadcast, so as to avoid duplication between the voice broadcast and the user's own reading of the content.
In addition, before the voice broadcast of the content of the notification is stopped, the voice broadcast of the content before the preset symbol is completed; since the preset symbol serves to segment the text, this can improve, to a certain extent, the completeness of the broadcast content.
Optionally, the preset symbol may include: punctuation marks, special symbols (such as directional arrows), unit symbols, and the like. A punctuation mark is an auxiliary symbol of written language, used to indicate pauses, tone, and the nature and function of words; because the preset symbol segments the text, the content before the preset symbol is a relatively complete piece of content, so the completeness of the broadcast content can be improved to a certain extent.
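A minimal sketch of this stopping rule follows; the set of preset symbols and the way broadcast progress is tracked (as a character position) are assumptions of the sketch.

```python
# Illustrative set of preset symbols; the actual set is configurable.
PRESET_SYMBOLS = set("。，、；！？.,;!?→")

def remaining_text_to_speak(content: str, chars_already_spoken: int) -> str:
    """Return what should still be spoken before stopping the broadcast:
    everything from the current position up to and including the next preset
    symbol, so the broadcast ends on a relatively complete piece of content."""
    for i in range(chars_already_spoken, len(content)):
        if content[i] in PRESET_SYMBOLS:
            return content[chars_already_spoken:i + 1]
    return content[chars_already_spoken:]  # no further preset symbol: finish it
```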
Method embodiment three
Referring to fig. 4, a flowchart illustrating steps of a third embodiment of the data processing method in the present application is shown, which may specifically include the following steps:
step 401, acquiring a gaze point of a user according to eyeball data of the user;
step 402, if the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, starting a voice broadcast of the content of a notification;
Compared with the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
step 403, after the voice broadcast of the content of the notification is started, if the stay time of the gaze point of the user on the screen exceeds the first time threshold, displaying an operation interface corresponding to the notification on the screen.
After the voice broadcast of the content of the notification is started, if the stay time of the user's gaze point on the screen exceeds the first time threshold, the user's attention may be considered to be on the screen. Since the user is in a condition to operate when the user's attention is on the screen, an operation interface corresponding to the notification may be displayed on the screen, so that the user can respond to the notification through the operation interface.
Optionally, the type of the operation interface may be a control. A control may refer to a component that provides or implements user interface functionality; a control is an encapsulation of data and methods, and a control may have its own properties and methods. In practical applications, an operation interface of the control type (an operation control for short) can be displayed at any position on the screen, so that the user can respond to the notification through the operation control. For example, in the case that the screen is a touch screen, the user may tap the operation control to trigger it. Optionally, the operation interface may be displayed on the right side of the screen; of course, the specific position of the operation interface is not limited in the embodiment of the present application.
A person skilled in the art can determine the number and functions of the operation interfaces to be displayed according to actual application requirements. In an application example of the present application, assuming that the notification originates from a navigation application, the number of corresponding operation interfaces may be 2, and the functions of the two operation interfaces may be a navigation function and an ignore function, respectively. If the user triggers the operation interface corresponding to the navigation function, the navigation interface is entered; alternatively, if the user triggers the operation interface corresponding to the ignore function, the processing of the notification may be stopped, for example, the display of the content of the notification may be stopped, or the voice broadcast of the content of the notification may be stopped.
Method embodiment four
Referring to fig. 5, a flowchart illustrating steps of a fourth embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
step 501, acquiring a gaze point of a user according to eyeball data of the user;
step 502, if the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, starting a voice broadcast of the content of a notification;
Compared with the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
step 503, after the voice broadcast of the content of the notification is completed, listening for a voice instruction of the user;
step 504, responding to the notification according to the voice instruction.
After the voice broadcast of the content of the notification is completed, the embodiment of the application can start a listening mode, where the listening mode is used to listen for a voice instruction of the user so that the user can reply to the notification by voice; in this way, the user can respond to the notification without diverting attention. For example, in a driving scenario, the user can listen to the voice broadcast to obtain the content of the notification and respond to the notification by voice while keeping attention on the driving behavior, so that distraction of the user's attention can be avoided and driving safety can be improved.
In practical applications, a voice recognition technology may be used to convert a voice command of a user into a text command, and respond to the notification according to the text command.
In an application example of the present application, assuming that the notification originates from a navigation application, for example with the content "fatigue driving, please pay attention to safe driving", the response instructions for the notification may include: "navigate", "ignore", and the like, where "navigate" is used to enter the navigation interface and "ignore" may be used to stop the processing of the notification. It can be understood that a person skilled in the art can determine the response instructions corresponding to a notification according to actual application requirements, and the user can respond to the notification through a voice instruction matching a response instruction; the embodiment of the present application does not impose any limitation on the specific response instructions.
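For illustration, matching the recognized voice instruction against the response instructions of the notification might look like the sketch below; the phrase set, the handler names, and the assumption that speech recognition has already produced a text string are all illustrative.

```python
# Hypothetical handlers; the embodiments only require that the matched
# response instruction triggers the corresponding behavior.
def open_navigation_interface(): ...
def stop_processing_notification(): ...

RESPONSE_INSTRUCTIONS = {
    "navigate": open_navigation_interface,
    "ignore": stop_processing_notification,
}

def respond_to_voice(recognized_text: str) -> bool:
    """recognized_text: the text instruction produced by speech recognition."""
    for phrase, handler in RESPONSE_INSTRUCTIONS.items():
        if phrase in recognized_text.lower():
            handler()
            return True
    return False  # no matching response instruction; keep listening or time out
```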
Method embodiment five
Referring to fig. 6, a flowchart illustrating steps of a fifth embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
step 601, acquiring a gaze point of a user according to eyeball data of the user;
step 602, if the gaze point of the user is not on the screen, playing a prompt tone corresponding to the notification;
step 603, if the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of the notification by voice.
Compared with the first method embodiment shown in fig. 2, the processing manner for the notification in the embodiment of the present application may include the following. First, step 602 is executed: in the case that the gaze point of the user is not on the screen, a prompt tone corresponding to the notification is played; the prompt tone can alert the user to the arrival of the notification. Then, step 603 is executed to broadcast the content of the notification by voice, so that the user hears the content of the notification when the user's gaze point is not on the screen or the stay time of the user's gaze point on the screen does not exceed the first time threshold.
In summary, the prompt tone played first in the embodiment of the present application can alert the user to the arrival of the notification, so that the user can prepare for the subsequent voice broadcast; broadcasting the content of the notification on this basis can improve the efficiency with which the user obtains the content of the notification from the voice.
Method embodiment six
Referring to fig. 7, a flowchart illustrating steps of a sixth embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
step 701, acquiring a gaze point of a user according to eyeball data of the user;
step 702, if the stay time of the gaze point of the user on the screen exceeds a first time threshold, not broadcasting the content of the notification by voice, and displaying an operation interface corresponding to the notification on the screen.
According to the embodiment of the application, in the case that the stay time of the gaze point of the user on the screen exceeds the first time threshold, the attention of the user can be considered to be on the screen; in this case, the user can be considered able to read the notification, so the content of the notification is not broadcast by voice, which avoids, to a certain extent, duplication between the voice broadcast and the user's reading of the content.
Further, since the user is in a condition to operate when the user's attention is on the screen, an operation interface corresponding to the notification may be displayed on the screen, so that the user can respond to the notification through the operation interface. For the operation interface, reference may be made to the third method embodiment shown in fig. 4, which is not described herein again.
Method embodiment seven
Referring to fig. 8, a flowchart illustrating steps of a seventh embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
step 801, acquiring a gaze point of a user according to eyeball data of the user;
step 802, if a notification is detected, judging whether the gaze point of the user is on the screen; if yes, executing step 803; otherwise, executing step 804;
step 803, judging whether the stay time of the gaze point of the user on the screen exceeds a first time threshold; if not, executing step 804; otherwise, executing step 808;
step 804, starting a voice broadcast of the content of the notification;
step 805, after the voice broadcast of the content of the notification is started, judging whether the stay time of the gaze point of the user on the screen exceeds the first time threshold; if so, executing step 806; otherwise, executing step 807;
step 806, after the voice broadcast of the content before a preset symbol in the content of the notification is completed, stopping the voice broadcast of the content of the notification, and displaying an operation interface corresponding to the notification on the screen;
step 807, continuing the voice broadcast, listening for a voice instruction of the user after the voice broadcast of the content of the notification is completed, and responding to the notification according to the voice instruction;
step 808, not broadcasting the content of the notification by voice, and displaying an operation interface corresponding to the notification on the screen.
In summary, the embodiment of the present application may determine the processing manner for the notification according to the judgment result of whether the attention of the user is on the screen, so that the notification can be handled more appropriately.
According to one embodiment, when the gaze point of the user is not on the screen, or the stay time of the gaze point of the user on the screen does not exceed the first time threshold, the attention of the user may be considered not to be on the screen; in this case, the content of the notification is broadcast by voice, so that the user may be prevented, to a certain extent, from missing the information corresponding to the notification.
According to another embodiment, in the case that the stay time of the user's gaze point on the screen exceeds the first time threshold, the user's attention may be considered to be on the screen; in this case, the user may be considered able to read the notification, and therefore the content of the notification may not be broadcast by voice, so that duplication between the voice broadcast and the user's reading of the content is avoided to some extent.
Further, since the user is in a condition to operate when the user's attention is on the screen, an operation interface corresponding to the notification may be displayed on the screen, so that the user can respond to the notification through the operation interface.
When applied to a driving scenario, if the driver's attention is on the road ahead and not on the screen, the voice broadcast allows the driver to stay focused on driving while no longer missing the information corresponding to the notification.
In addition, whether the attention of the user is on the screen, that is, whether the user is observing the screen, is judged intelligently according to the eyeball data, so as to obtain a corresponding judgment result. The judgment result can be used to decide whether the voice broadcast needs to be started or stopped, which reduces unnecessary auditory interference; the judgment result can also be used to decide whether an operation interface needs to be provided, so that the user can respond to the notification more quickly.
Method embodiment eight
Referring to fig. 9, a flowchart illustrating steps of an eighth embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
step 901, acquiring a gaze point of a user according to eyeball data of the user;
step 902, if the user's gaze point is not on the screen, or the stay time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of a notification by voice;
Compared with the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
step 903, if the distance between the arm of the user and the screen does not exceed the distance threshold and the hovering time of the arm exceeds a second time threshold, switching the interaction mode to the gesture interaction mode.
The terminal of the embodiment of the application can support multiple interaction modes, and the multiple interaction modes specifically include: a touch mode and a gesture interaction mode.
The touch mode may support the user in interacting with the terminal through touch, a mouse, or other control methods. However, the touch mode requires the user's finger or mouse to precisely trigger the operation interface on the screen, for example by pressing a specific control to trigger it, so the touch mode is relatively difficult to operate and consumes a great deal of the user's attention; in a driving scenario, this can therefore affect safe driving.
In the field of intelligent control, gesture interaction is an important control method. Generally, gestures of specific shapes and the corresponding operations of the terminal are set in advance; when, within the visual range of a terminal having a visual function (e.g., a photographing function), the user makes such a gesture and the terminal recognizes it, the corresponding operation is found and can be executed automatically.
The gestures supported by gesture interaction may include: gestures in different directions, with different rotation angles, or with different arcs, and the like. For example, different orientations of the palm represent different gestures. The gesture interaction mode executes the corresponding operation through the user's gesture and does not require the user to press an operation interface on the screen, so the operation difficulty can be reduced; in particular, attention consumption can be reduced, which improves driving safety in a driving scenario.
The default interaction mode may be the touch mode. In order to reduce the operation difficulty and the attention consumption, the embodiment of the application can switch the interaction mode to the gesture interaction mode according to a switching operation of the user, that is, switch the interaction mode from the touch mode to the gesture interaction mode.
The switching operation in the embodiment of the present application may specifically be: the distance between the arm of the user and the screen does not exceed the distance threshold, and the hovering time of the arm exceeds the second time threshold; that is, the user's arm is close to the screen and hovers there for a sufficiently long time. The distance threshold and the second time threshold may be determined by a person skilled in the art, or by the user, according to actual application requirements, and the specific distance threshold and second time threshold are not limited in the embodiment of the present application.
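A minimal sketch of detecting this switching operation is given below; the sampling format and both threshold values are assumptions of the sketch (the 500 ms figure only echoes the example given for fig. 10).

```python
DISTANCE_THRESHOLD_CM = 20      # illustrative distance threshold (assumption)
SECOND_TIME_THRESHOLD_MS = 500  # illustrative hover duration, as in the fig. 10 example

def switching_operation_detected(arm_samples, now_ms) -> bool:
    """arm_samples: list of (timestamp_ms, distance_cm) for the detected arm,
    newest last; an empty list means no arm is detected in front of the screen."""
    hover_start = None
    for ts, distance_cm in reversed(arm_samples):
        if distance_cm > DISTANCE_THRESHOLD_CM:
            break                 # the arm was farther than the distance threshold
        hover_start = ts          # extend the most recent continuous hover period
    if hover_start is None:
        return False              # the arm is not (or no longer) close to the screen
    return (now_ms - hover_start) > SECOND_TIME_THRESHOLD_MS
```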
In an optional embodiment of the present application, the step 903 of switching the interaction mode to the gesture interaction mode specifically may include: displaying the content of the notification in a middle area of a screen, and displaying an operation prompt icon of the notification on at least one side of the middle area.
Optionally, the content of the notification may be displayed in a panel in the middle area, and the operation prompt icon may be displayed on at least one side (e.g., at least one of the upper side, the lower side, the left side, and the right side) of the middle area. The operation prompt icon can be used to indicate the corresponding operation function, and the orientation of the operation prompt icon can be used to indicate the orientation of the corresponding gesture, so that the user can produce the gesture corresponding to an operation according to the orientation of its operation prompt icon, which reduces the difficulty of memorizing gestures. In an application example of the present application, the operation prompt icon corresponding to the "close" operation may be "x", and the like. It is to be understood that the operation prompt icon may also be replaced by operation prompt information in text form; that is, the content of the notification may be displayed in the middle area of the screen, and operation prompt information for the notification may be displayed on at least one side of the middle area to indicate the orientation corresponding to the gesture and the operation function corresponding to the gesture.
It should be noted that, no matter what state the terminal is in (for example, whether the content of a notification is displayed on the screen or a navigation interface is displayed on the screen), if the switching operation is detected, the interaction mode can be switched to the gesture interaction mode. Therefore, the execution order between step 903 and step 901 or step 902 is not limited in the embodiments of the present application; for example, step 903 may be executed before or after step 901, or before or after step 902.
Referring to fig. 10, a schematic diagram of switching the interaction mode to the gesture interaction mode according to an embodiment of the present application is shown. In the touch mode, the content 1002 of a notification and its corresponding operation interface 1003 may be displayed on a screen 1001. In this case, if it is detected that the palm is in front of the screen and the hovering time exceeds 500 ms, it may be considered that the switching operation is detected; therefore, the interaction mode may be switched to the gesture interaction mode. Specifically, the content 1002 of the notification, an operation prompt icon 1004, and an operation prompt icon 1005 may be displayed on the screen 1001, where the operation prompt icon 1004 and the operation prompt icon 1005 are located on the left side and the right side of the content 1002 of the notification, respectively, so as to prompt the user to trigger the operation corresponding to the operation prompt icon 1004 through a leftward gesture and to trigger the operation corresponding to the operation prompt icon 1005 through a rightward gesture; that is, the orientation corresponding to the gesture matches the orientation of the operation prompt icon, which reduces the difficulty of memorizing gestures.
It should be noted that if the palm is in front of the screen but the hovering time does not exceed 500 ms, or no palm is detected in front of the screen, it may be considered that the switching operation is not detected, and therefore the interaction mode is not switched.
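As an illustration of the orientation matching described for fig. 10, a recognized gesture direction can simply be looked up against the icon shown on the same side of the notification content; the gesture names and the side-to-operation mapping below are assumptions of the sketch.

```python
GESTURE_TO_SIDE = {"swipe_left": "left", "swipe_right": "right"}

def operation_for_gesture(gesture: str, icons_by_side: dict):
    """icons_by_side: e.g. {"left": "ignore", "right": "navigate"} describing
    which operation prompt icon is shown on each side of the notification."""
    side = GESTURE_TO_SIDE.get(gesture)
    return icons_by_side.get(side) if side else None  # None: gesture not mapped
```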
In addition, in other embodiments of the present application, the gesture interaction mode may be exited according to an exit operation. A person skilled in the art, or the user, may determine the exit operation according to actual application requirements; for example, the exit operation may be a change of the hand from an open-palm state to a clenched-fist state, and the like.
In summary, the embodiment of the application can support switching from the touch mode to the gesture interaction mode, specifically, the gesture interaction mode can be started through gesture recognition, so that a user can conveniently respond to a corresponding function of notification through a gesture, and the distraction of the attention of a driver is reduced.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the embodiments of the application.
The embodiment of the application also provides a data processing device.
Referring to fig. 11, a block diagram of a data processing apparatus according to an embodiment of the present application is shown, which may specifically include the following modules:
a gaze point obtaining module 1101, configured to obtain a gaze point of a user according to eyeball data of the user; and
a notification content broadcasting module 1102, configured to broadcast the content of a notification by voice if the gaze point of the user is not on the screen or the stay time of the gaze point of the user on the screen does not exceed a first time threshold.
Optionally, the apparatus may further include:
and the broadcast stopping module is used for stopping the voice broadcast of the notified content after the notified content broadcast module starts to carry out voice broadcast on the notified content, if the retention time of the point of regard of the user on the screen exceeds a first time threshold value, stopping the voice broadcast of the notified content after the content before the preset symbol in the notified content is completed by the voice broadcast.
Optionally, the apparatus may further include:
and the first operation interface display module is used for displaying the operation interface corresponding to the notification on the screen if the retention time of the point of regard of the user on the screen exceeds a first time threshold after the notification content broadcasting module starts to perform voice broadcasting on the notified content.
Optionally, the apparatus may further include:
the voice monitoring module is used for monitoring the voice instruction of the user after the voice broadcasting of the notified content is finished;
and the response module is used for responding to the notification according to the voice instruction.
Optionally, the apparatus may further include:
and the prompt tone playing module is used for playing the prompt tone corresponding to the notification if the point of regard of the user is not on the screen before the notification content broadcasting module carries out voice broadcasting on the notified content.
Optionally, the apparatus may further include:
and the second operation interface display module is used for not carrying out voice broadcast on the content of the notification and displaying an operation interface corresponding to the notification on the screen if the retention time of the point of regard of the user on the screen exceeds a first time threshold.
Optionally, the apparatus may further include:
and the mode switching module is used for switching the interaction mode into the gesture interaction mode if the distance between the arm of the user and the screen does not exceed the distance threshold and the hovering time of the arm exceeds a second time threshold.
Optionally, the mode switching module may include:
the display sub-module is used for displaying the content of the notification in the middle area of the screen and displaying the operation prompt icon of the notification on at least one side of the middle area.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Embodiments of the application can be implemented as a system or apparatus employing any suitable hardware and/or software for the desired configuration. Fig. 12 schematically illustrates an example apparatus 1300 that can be used to implement various embodiments described herein.
For one embodiment, fig. 12 illustrates an example apparatus 1300, which apparatus 1300 may comprise: one or more processors 1302, a system control module (chipset) 1304 coupled to at least one of the processors 1302, system memory 1306 coupled to the system control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304, one or more input/output devices 1310 coupled to the system control module 1304, and a network interface 1312 coupled to the system control module 1304. The system memory 1306 may include: instructions 1362, the instructions 1362 being executable by the one or more processors 1302.
Processor 1302 may include one or more single-core or multi-core processors, and processor 1302 may include any combination of general-purpose processors or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1300 can be a server, a target device, a wireless device, etc., as described in embodiments herein.
In some embodiments, apparatus 1300 may include one or more machine-readable media (e.g., system memory 1306 or NVM/storage 1308) having instructions thereon and one or more processors 1302, which in combination with the one or more machine-readable media, are configured to execute the instructions to implement the modules included in the foregoing apparatus to perform the actions described in embodiments of the present application.
System control module 1304 for one embodiment may include any suitable interface controller to provide any suitable interface to at least one of processors 1302 and/or any suitable device or component in communication with system control module 1304.
System control module 1304 for one embodiment may include one or more memory controllers to provide an interface to system memory 1306. The memory controller may be a hardware module, a software module, and/or a firmware module.
System memory 1306 for one embodiment may be used to load and store data and/or instructions 1362. For one embodiment, system memory 1306 may include any suitable volatile memory, such as suitable DRAM (dynamic random access memory). In some embodiments, system memory 1306 may include: double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
System control module 1304 for one embodiment may include one or more input/output controllers to provide an interface to NVM/storage 1308 and input/output device(s) 1310.
NVM/storage 1308 for one embodiment may be used to store data and/or instructions 1382. NVM/storage 1308 may include any suitable non-volatile memory (e.g., flash memory, etc.) and/or may include any suitable non-volatile storage device(s), e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives, etc.
The NVM/storage 1308 may include storage resources that are physically part of the device on which the device 1300 is installed or may be accessible by the device and not necessarily part of the device. For example, the NVM/storage 1308 may be accessed over a network via the network interface 1312 and/or through the input/output devices 1310.
Input/output device(s) 1310 for one embodiment may provide an interface for apparatus 1300 to communicate with any other suitable device, and input/output devices 1310 may include communication components, audio components, sensor components, and so forth.
Network interface 1312 of one embodiment may provide an interface for device 1300 to communicate with one or more networks and/or with any other suitable device, and device 1300 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, such as to access a communication standard-based wireless network, such as WiFi, 2G, or 3G, or a combination thereof.
For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers (e.g., memory controllers) of the system control module 1304. For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers of the system control module 1304 to form a System in Package (SiP). For one embodiment, at least one of the processors 1302 may be integrated on the same die with logic for one or more controllers of the system control module 1304. For one embodiment, at least one of the processors 1302 may be integrated on the same die with logic for one or more controllers of the system control module 1304 to form a System on Chip (SoC).
In various embodiments, apparatus 1300 may include, but is not limited to: a computing device such as a desktop computing device or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, apparatus 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1300 may include one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive an input signal from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The present application also provides a non-transitory readable storage medium, where one or more modules (programs) are stored in the storage medium, and when the one or more modules are applied to an apparatus, the apparatus may be caused to execute instructions (instructions) of methods in the present application.
Provided in one example is an apparatus comprising: one or more processors; and, instructions in one or more machine-readable media stored thereon, which when executed by the one or more processors, cause the apparatus to perform a method as in embodiments of the present application, which may include: the method shown in fig. 2 or fig. 3 or fig. 4 or fig. 5 or fig. 6 or fig. 7 or fig. 8 or fig. 9.
One or more machine-readable media are also provided in one example, having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform a method as in embodiments of the application, which may include: the method shown in fig. 2 or fig. 3 or fig. 4 or fig. 5 or fig. 6 or fig. 7 or fig. 8 or fig. 9.
The specific manner in which each module performs operations of the apparatus in the above embodiments has been described in detail in the embodiments related to the method, and will not be described in detail here, and reference may be made to part of the description of the method embodiments for relevant points.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The data processing method, data processing apparatus, device, and machine-readable medium provided by the present application have been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present application, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present application; meanwhile, for a person skilled in the art, there may be variations in the specific implementation and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.