CN114007101B - Processing method, device and storage medium of fusion display device
- Publication number
- CN114007101B (application CN202111277612.1A)
- Authority
- CN
- China
- Prior art keywords
- module
- user
- display device
- fusion display
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23106—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25875—Management of end-user data involving end-user authentication
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/2625—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for delaying content or additional data distribution, e.g. because of an extended sport event
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4331—Caching operations, e.g. of an advertisement for later insertion during playback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
- H04N21/4415—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4751—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for defining user accounts, e.g. accounts for children
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Child & Adolescent Psychology (AREA)
- Biomedical Technology (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Computer Graphics (AREA)
- Telephonic Communication Services (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a processing method, device and storage medium for a fusion display device, wherein the method includes: acquiring face information and sound information of the current user through an image acquisition module and a sound acquisition module integrated in the fusion display device; if the fusion display device is not currently connected to the Internet, invoking an edge service module integrated in the fusion display device and performing user identification according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module; and recording the current user identification result locally and pushing it to the managing user of the fusion display device when the fusion display device is networked. With this scheme, real-time monitoring can be performed while the fusion display device is not networked, thereby providing the user with a stable service.
Description
Technical Field
The present disclosure relates to the field of electronics, and in particular, to a processing method, a device, and a storage medium for a fusion display device.
Background
Electronic technology is developing rapidly, and users' demands on display devices keep expanding, so the functions of display devices are continuously enriched to meet those growing demands.
However, as their functions become richer, current display devices grow highly dependent on the network: when the network is disconnected or fluctuates, the real-time monitoring function of the display device cannot be realized, which affects the user's experience.
Disclosure of Invention
The application provides a processing method, device and storage medium for a fusion display device, which are used to realize monitoring when no network is available.
In a first aspect, the present application provides a processing method for a fusion display device, including: acquiring face information and sound information of the current user through an image acquisition module and a sound acquisition module integrated in the fusion display device; if the fusion display device is not currently connected to the Internet, invoking an edge service module integrated in the fusion display device and performing user identification according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module; and recording the current user identification result locally and pushing it to the managing user of the fusion display device when the fusion display device is networked.
In one possible implementation manner, the performing a user identification process includes: detecting the similarity between the face information of the user and the standard face prestored in the edge service module; detecting the similarity between the voice information of the user and the standard voice prestored in the edge service module; and if the similarity of the face information is smaller than a first threshold value or the similarity of the sound information is smaller than a second threshold value, judging that the current user is an abnormal user.
In one possible implementation, the method further includes: if the current user is an abnormal user, invoking the image acquisition module to record a video of the user for a predetermined duration; and the pushing to the managing user of the fusion display device when the fusion display device is networked includes: pushing the currently recorded user identification result and the corresponding video when the fusion display device is connected to the network.
In one possible implementation, the method further includes: acquiring an IPTV video stream from an IPTV service platform through an IPTV private network slicing module integrated with the converged display equipment; and transmitting the IPTV video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the IPTV video stream and then calls a display screen module of the fusion display device to display and play.
In one possible implementation, the method further includes: acquiring an OTT video stream from an OTT service platform through a built-in set top box module integrated in the fusion display device; and transmitting the OTT video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the OTT video stream and then calls a display screen module of the fusion display device to display and play it.
In one possible implementation, the method further includes: the mobile network provided by the mobile network communication module is converted into wireless network signals of other communication protocols through the network conversion module integrated by the converged display equipment, and the wireless network signals are provided outwards; the wireless network signal is used for providing home network service and access of home intelligent equipment.
In a second aspect, the present application provides a fusion display device comprising: an acquisition module, configured to acquire face information and sound information of the current user through an image acquisition module and a sound acquisition module integrated in the fusion display device; an identification module, configured to invoke an edge service module integrated in the fusion display device if the fusion display device is not currently networked, and perform user identification according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module; and a management module, configured to record the current user identification result locally and push it to the managing user of the fusion display device when the fusion display device is networked.
In a possible implementation manner, the identification module is specifically configured to detect similarity between the face information of the user and the standard face stored in the edge service module in advance; detecting the similarity between the voice information of the user and the standard voice prestored in the edge service module; the recognition module is specifically further configured to determine that the current user is an abnormal user if the similarity of the face information is smaller than a first threshold or the similarity of the sound information is smaller than a second threshold.
In one possible implementation manner, the obtaining module is further configured to invoke the image acquisition module to record the video of the user in a predetermined time period if the current user is an abnormal user; and the management module is also used for pushing the currently recorded user identification result and the corresponding video when the fusion display equipment is connected with the network.
In one possible implementation, the apparatus further includes: a playing module, configured to acquire an IPTV video stream from an IPTV service platform through an IPTV private network slicing module integrated in the fusion display device; the playing module is further configured to transmit the IPTV video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the IPTV video stream and then calls a display screen module of the fusion display device to display and play it.
In one possible implementation, the playing module is further configured to obtain an OTT video stream from an OTT service platform through a built-in set top box module integrated in the fusion display device; the playing module is further configured to transmit the OTT video stream to the video decoding and player module of the fusion display device, so that after the video decoding and player module parses the OTT video stream, the display screen module of the fusion display device is invoked to display and play it.
In one possible implementation, the apparatus further includes: the network conversion module is used for converting the mobile network provided by the mobile network communication module into wireless network signals of other communication protocols and providing the wireless network signals outwards; the wireless network signal is used for providing home network service and access of home intelligent equipment.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions; the processor executes computer-executable instructions stored in the memory to implement the method of any one of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for performing the method of any of the first aspects by a processor.
According to the processing method, device and storage medium for a fusion display device provided by the application, the face information and sound information of the current user are obtained through the image acquisition module and the sound acquisition module integrated in the fusion display device; if the fusion display device is not currently connected to the Internet, the edge service module integrated in the fusion display device is invoked, and user identification is performed according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module; and the current user identification result is recorded locally and pushed to the managing user of the fusion display device when the fusion display device is networked. With this scheme, real-time monitoring can be performed while the fusion display device is not networked, thereby providing the user with a stable service.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is an application scenario schematic diagram of a processing method of a fusion display device provided in the present application;
fig. 2 is a flow chart of a processing method of a fusion display device according to an embodiment of the present application;
FIG. 3 is an example of a fusion display device;
FIG. 4 is an example of a fusion display device;
FIG. 5 is an example of a fusion display device;
FIG. 6 is an example of a fusion display device;
FIG. 7 is an example of a fusion display device;
fig. 8 is a schematic structural diagram of a fusion display device according to a third embodiment of the present application;
fig. 9 is a block diagram of an apparatus for fusing display devices according to a fifth embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
First, the terms involved are explained:
IPTV service: interactive network television; the user can select videos on the network to play;
OTT service: various application services provided to users over the Internet.
Fig. 1 is an application scenario schematic diagram of a processing method of a fusion display device provided in an embodiment of the present application, where, as shown in fig. 1, the scenario includes: user 1, converged display device 2 and edge service module 3.
Examples are given in connection with the illustrated scenario: when the user 1 appears in the collection range of the fusion display device 2, the non-networked fusion display device 2 may collect face information and sound information of the user 1 and compare with the standard face information and standard sound information stored in the edge service module 3, thereby identifying whether the user 1 is an abnormal user.
In practical applications, the edge service module 3 may be integrated inside the fusion display device 2.
The technical solutions of the embodiments of the present application are described below with reference to specific examples.
Embodiment 1
Fig. 2 is a flowchart of a processing method of a fusion display device according to an embodiment of the present application, where the method includes the following steps:
s101, acquiring face information and sound information of a current user through an image acquisition module and a sound acquisition module integrated in the fusion display device;
s102, if the fusion display equipment is not networked currently, calling an edge service module integrated in the fusion display equipment, and executing user identification processing according to the face information and the sound information of the user and the standard face and the standard sound stored in the edge service module;
s103, recording the user identification result of the time to the local, and pushing the user identification result to a management user of the fusion display device when the fusion display device is connected with the network.
Optionally, the fusion display device integrated with the image acquisition module, the sound acquisition module and the edge service module includes, but is not limited to, a computer display, a television integrated with a set-top box, a display screen of a projector integrated with a set-top box, and the like.
In one example, S101 may specifically include: the image acquisition module controls a 360-degree ultra-high definition camera of the fusion display device to acquire multi-angle face information of the current user, where the multi-angle face information includes front face information and side face information; and the sound acquisition module controls the speaker or microphone of the fusion display device to collect the sound information of the current user.
As shown in fig. 3 in connection with the scene example, fig. 3 is an example of a fusion display device. The fusion display device is provided with a high-definition camera, a speaker and a microphone. Optionally, the high-definition camera can be installed outside the four frames of the fusion display device to realize 360-degree rotation. Alternatively, the speaker and microphone may be mounted on the inside, outside, front, side and back of the four frames of the fusion display device. The image acquisition module of the fusion display device controls the 360-degree ultra-high definition camera of the fusion display device to acquire multi-angle face information of the current user: the front face and side face information of the user is acquired by rotating the ultra-high definition camera, and multi-angle acquisition improves accuracy. The sound acquisition module of the fusion display device controls the speaker or microphone of the fusion display device to collect the sound information of the current user.
By the method in the embodiment, the face information collection and the sound information collection can be realized in a wider range.
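As a rough illustration of the multi-angle acquisition described above, the sketch below rotates an assumed camera interface through several preset angles and records a short sound clip. The camera-like and microphone-like objects, their method names and the chosen angles are all hypothetical; the description only requires that front and side face information be captured.

    # Sketch of multi-angle face and sound acquisition over placeholder interfaces.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FaceCapture:
        angle_deg: int
        image: bytes

    @dataclass
    class AcquisitionResult:
        faces: List[FaceCapture] = field(default_factory=list)
        sound: bytes = b""

    def acquire_user(camera, microphone, angles=(0, 90, 180, 270), sound_seconds=3):
        result = AcquisitionResult()
        for angle in angles:                       # rotate the 360-degree camera
            camera.rotate_to(angle)
            result.faces.append(FaceCapture(angle, camera.capture_frame()))
        result.sound = microphone.record(seconds=sound_seconds)  # collect sound information
        return result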
In one example, S102 is further preceded by: the fusion display device displays a guide interface for setting member information, and an administrator sets the member information on the guide interface, where the options of the guide interface include, but are not limited to: adding a user, modifying a user, deleting a user, and the like; and the member information set by the administrator is saved in the edge service module and synchronized to the cloud server.
As shown in fig. 4 in connection with the scene example, fig. 4 is an example of a fusion display device. Taking the addition of standard user information as an example, the fusion display device displays the guide interface for setting member information, and the administrator selects the add-user option. The image acquisition module of the fusion display device controls the 360-degree ultra-high definition camera of the fusion display device to acquire multi-angle face information of the current user, and the sound acquisition module controls the speaker or microphone of the fusion display device to collect the sound information of the current user. After the standard user information is collected, the administrator confirms adding the user, and the fusion display device stores the collected standard user information in the edge service module and synchronizes it to the cloud server.
By the method in this embodiment, standard user information can be collected locally without a network connection and stored in the edge service module in advance; the information stored in the edge service module can then be used at any time when the device is offline, or the information on the cloud server can be used when the device is networked, so that normal service is provided under any network condition.
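One possible shape of this enrollment flow is sketched below. The edge service module's store is modelled as a JSON file on the device and the cloud synchronization as a stub object, since the patent does not specify storage formats or transport; both are assumptions made only for illustration.

    # Sketch of member enrollment: save to the edge service module, sync to the cloud when reachable.
    import json
    import os

    EDGE_DB = "/var/edge/members.json"  # assumed on-device member store

    def load_members():
        if os.path.exists(EDGE_DB):
            with open(EDGE_DB) as f:
                return json.load(f)
        return {}

    def save_member(user_id, face_features, sound_features, cloud=None):
        members = load_members()
        members[user_id] = {"face": face_features, "sound": sound_features}
        with open(EDGE_DB, "w") as f:
            json.dump(members, f)
        # Synchronize to the cloud server only when the device is networked.
        if cloud is not None and cloud.is_reachable():
            cloud.upload_member(user_id, members[user_id])

Modifying or deleting a user would update or remove the corresponding entry in the same store before re-synchronizing.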
In one example, S102 may specifically include: detecting the similarity between the face information of the user and the standard face prestored in the edge service module; detecting the similarity between the voice information of the user and the standard voice prestored in the edge service module; and if the similarity of the face information is smaller than a first threshold value or the similarity of the sound information is smaller than a second threshold value, judging that the current user is an abnormal user.
In connection with the scene example, the similarity between the face information of the current user and the standard faces prestored in the edge service module is detected, and the similarity between the sound information of the current user and the standard sounds prestored in the edge service module is detected. Whether the current user is a user stored in the edge service module is judged from these similarities: if the similarity of the face information is smaller than the first threshold or the similarity of the sound information is smaller than the second threshold, the face information and sound information of the current user do not match the standard user information stored in the edge service module, and the current user is judged to be an abnormal user.
By the method in this embodiment, identification is performed comprehensively based on both the face similarity and the sound similarity, which improves the accuracy of identification.
The administrator can adjust the first threshold and the second threshold to tune the sensitivity of user identification.
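The dual-threshold rule can be summarized in a few lines. The sketch below assumes the face and sound information have already been converted into numeric feature vectors (feature extraction is outside the scope of this description) and uses cosine similarity; the concrete threshold values are arbitrary examples of the first and second thresholds the administrator may tune.

    # Sketch of the dual-threshold identification rule over assumed feature vectors.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def identify(face_vec, sound_vec, standard_face, standard_sound,
                 face_threshold=0.8, sound_threshold=0.7):
        face_sim = cosine_similarity(face_vec, standard_face)
        sound_sim = cosine_similarity(sound_vec, standard_sound)
        # Abnormal if either modality falls below its threshold.
        abnormal = face_sim < face_threshold or sound_sim < sound_threshold
        return {"face_sim": face_sim, "sound_sim": sound_sim, "abnormal": abnormal}

In practice the comparison would be run against every member stored in the edge service module and the best score kept; the single-member form is used here only for brevity.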
In one example, the method further includes: if the current user is an abnormal user, invoking the image acquisition module to record a video of the user for a predetermined duration; and the pushing to the managing user of the fusion display device when the fusion display device is networked includes: pushing the currently recorded user identification result and the corresponding video when the fusion display device is connected to the network.
In connection with the scene example, if the current user is identified as an abnormal user, the image acquisition module records a video of the user for a predetermined duration. If the fusion display device is currently networked, the current user identification result and the corresponding video are pushed to the managing user through the network. If the fusion display device is currently not networked, the current user identification result and the corresponding video are stored locally and pushed to the managing user through the network once the device is networked. The managing user can review the video, and if the managing user confirms that the abnormal user is not a concern, the managing user can perform the add-user operation.
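The store-and-forward behaviour described here, recording the clip and result locally while offline and pushing them once the network returns, might look like the small queue below. The device object, its image_module.record_video and push_to_manager methods, and the ten-second default duration are assumptions for the sake of the sketch.

    # Sketch of deferred pushing of recognition results and recorded clips.
    from collections import deque

    class DeferredPusher:
        def __init__(self, device):
            self.device = device
            self.pending = deque()   # items waiting for a network connection

        def handle_abnormal_user(self, result, record_seconds=10):
            # Record a clip of the abnormal user for the predetermined duration.
            clip = self.device.image_module.record_video(seconds=record_seconds)
            self.pending.append({"result": result, "video": clip})
            self.flush()

        def flush(self):
            # Drain everything that accumulated while the device was offline.
            while self.pending and self.device.is_networked():
                self.device.push_to_manager(self.pending.popleft())

flush() would typically also be hooked to a network-state callback so that queued items are pushed as soon as connectivity is restored.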
In one example, the method further comprises: acquiring an IPTV video stream from an IPTV service platform through an IPTV private network slicing module integrated with the converged display equipment; and transmitting the IPTV video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the IPTV video stream and then calls a display screen module of the fusion display device to display and play.
As shown in fig. 5 in connection with the scene example, fig. 5 is an example of a fusion display device. The IPTV private network slicing module acquires an IPTV video stream from an IPTV service platform. The IPTV private network slicing module divides the virtual local area network for transmission. The IPTV video stream is transmitted to a video decoding and player module, the IPTV video stream is analyzed by the video decoding and player module, and the video is displayed by a display screen module.
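Viewed as a pipeline, the IPTV path (private-network slice, then the video decoding and player module, then the display screen module) can be sketched as follows; the three module objects and their method names are placeholders for the components named above, not a real API.

    # Sketch of the IPTV play-out path through the device's internal modules.
    def play_iptv(slice_module, decoder_module, display_module, channel_url):
        stream = slice_module.fetch_stream(channel_url)   # from the IPTV service platform
        for packet in stream:
            frame = decoder_module.decode(packet)          # video decoding and player module
            if frame is not None:
                display_module.show(frame)                 # display screen module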
In one example, the method further comprises: acquiring an OTT video stream from an OTT service platform through a built-in set top box module integrated in the fusion display device; and transmitting the OTT video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the OTT video stream and then calls a display screen module of the fusion display device to display and play.
As shown in fig. 6 in connection with the scene example, fig. 6 is an example of a fusion display device. And the set top box module acquires the OTT video stream from the OTT service platform. The OTT video stream is transmitted to a video decoding and player module, the OTT video stream is analyzed by the video decoding and player module, and the video is displayed by a display screen module.
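The OTT path is structurally the same pipeline with the built-in set top box module as the source instead of the IPTV private network slice. A parallel sketch, again over assumed interfaces, would be:

    # The OTT path mirrors the IPTV pipeline; only the source module differs.
    def play_ott(stb_module, decoder_module, display_module, content_url):
        stream = stb_module.fetch_stream(content_url)      # from the OTT service platform
        for packet in stream:
            frame = decoder_module.decode(packet)           # video decoding and player module
            if frame is not None:
                display_module.show(frame)                  # display screen module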
In the processing method of the fusion display device provided by this embodiment, the face information and sound information of the current user are obtained through the image acquisition module and the sound acquisition module integrated in the fusion display device; if the fusion display device is not currently connected to the Internet, the edge service module integrated in the fusion display device is invoked, and user identification is performed according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module; and the current user identification result is recorded locally and pushed to the managing user of the fusion display device when the fusion display device is networked. With this scheme, real-time monitoring can be performed through the edge service module while the fusion display device is not networked, thereby providing the user with a stable service.
Embodiment 2
On the basis of the first embodiment, the present embodiment illustrates a network conversion flow of the fusion display device. On the basis of the first embodiment, the method further includes:
s201, converting a mobile network provided by a mobile network communication module into wireless network signals of other communication protocols through a network conversion module integrated by the converged display device, and providing the wireless network signals outwards; the wireless network signal is used for providing home network service and access of home intelligent equipment.
Optionally, the wireless network signal includes but is not limited to WiFi6 signal and ZigBee signal.
As shown in fig. 7 in combination with the scene example, fig. 7 is an example of a fusion display device. The communication module sends the mobile network to the network conversion module, and the WiFi6 module and the ZigBee module under the network conversion module respectively convert the mobile network into WiFi6 signals and ZigBee signals.
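Functionally, the network conversion module is a bridge from one mobile uplink to two local radios. The sketch below models it as enabling a WiFi 6 radio and a ZigBee radio backed by the mobile network; the Radio class, interface names and error handling are invented for illustration and do not describe a real driver API.

    # Sketch of the network conversion module: one mobile uplink, two local radios.
    from dataclasses import dataclass

    @dataclass
    class Radio:
        name: str            # e.g. "wifi6" or "zigbee" (illustrative names)
        enabled: bool = False

    class NetworkConversionModule:
        def __init__(self, mobile_uplink):
            self.uplink = mobile_uplink
            self.radios = [Radio("wifi6"), Radio("zigbee")]

        def start(self):
            # Bridge the mobile network onto each local radio for home devices.
            if not self.uplink.is_connected():
                raise RuntimeError("mobile uplink unavailable")
            for radio in self.radios:
                radio.enabled = True
            return {radio.name: radio.enabled for radio in self.radios}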
In the processing method of the fusion display device provided by this embodiment, the mobile network provided by the mobile network communication module is converted, by the network conversion module integrated in the fusion display device, into wireless network signals of other communication protocols, which are provided externally; the wireless network signals are used to provide home network service and access for home intelligent devices. With this scheme, the fusion display device can convert the mobile network into a wireless network without external equipment, thereby alleviating the problems of too many external devices and complex wiring.
Embodiment 3
Fig. 8 is a schematic structural diagram of a fusion display device according to a third embodiment of the present application, as shown in fig. 8, where the device includes: an acquisition module 61, an identification module 62 and a management module 63.
An acquisition module 61, configured to acquire face information and sound information of a current user through an image acquisition module and a sound acquisition module integrated with the fusion display device;
the identifying module 62 is configured to invoke an edge service module integrated with the converged display device if the converged display device is not currently networked, and perform user identification processing according to face information and sound information of the user, and standard faces and standard sounds stored in the edge service module;
and the management module 63 is configured to record the current user identification result locally and push it to the managing user of the fusion display device when the fusion display device is networked.
Optionally, the fusion display device integrated with the image acquisition module, the sound acquisition module and the edge service module includes, but is not limited to, a computer display, a television integrated with a set-top box, a display screen of a projector integrated with a set-top box, and the like.
In one example, the acquisition module 61 is specifically configured to: control, through the image acquisition module, a 360-degree ultra-high definition camera of the fusion display device to acquire multi-angle face information of the current user, where the multi-angle face information includes front face information and side face information; and control, through the sound acquisition module, the speaker or microphone of the fusion display device to collect the sound information of the current user.
As shown in fig. 3 in connection with the scene example, fig. 3 is an example of a fusion display device. The fusion display device is provided with a high-definition camera, a speaker and a microphone. Optionally, the high-definition camera can be installed outside the four frames of the fusion display device to realize 360-degree rotation. Alternatively, the speaker and microphone may be mounted on the inside, outside, front, side and back of the four frames of the fusion display device. The image acquisition module of the fusion display device controls the 360-degree ultra-high definition camera of the fusion display device to acquire multi-angle face information of the current user: the front face and side face information of the user is acquired by rotating the ultra-high definition camera, and multi-angle acquisition improves accuracy. The sound acquisition module of the fusion display device controls the speaker or microphone of the fusion display device to collect the sound information of the current user.
By the method in the embodiment, the face information collection and the sound information collection can be realized in a wider range.
In one example, the identification module 62 is further configured to: display a guide interface for setting member information, where an administrator sets the member information on the guide interface, and the options of the guide interface include, but are not limited to: adding a user, modifying a user, deleting a user, and the like; and save the member information set by the administrator in the edge service module and synchronize it to the cloud server.
As shown in fig. 4 in connection with the scene example, fig. 4 is an example of a fusion display device. Taking the addition of standard user information as an example, the fusion display device displays the guide interface for setting member information, and the administrator selects the add-user option. The image acquisition module of the fusion display device controls the 360-degree ultra-high definition camera of the fusion display device to acquire multi-angle face information of the current user, and the sound acquisition module controls the speaker or microphone of the fusion display device to collect the sound information of the current user. After the standard user information is collected, the administrator confirms adding the user, and the fusion display device stores the collected standard user information in the edge service module and synchronizes it to the cloud server.
By the method in this embodiment, standard user information can be collected locally without a network connection and stored in the edge service module in advance; the information stored in the edge service module can then be used at any time when the device is offline, or the information on the cloud server can be used when the device is networked, so that normal service is provided under any network condition.
In one example, the identification module 62 is specifically configured to: detecting the similarity between the face information of the user and the standard face prestored in the edge service module; detecting the similarity between the voice information of the user and the standard voice prestored in the edge service module; and if the similarity of the face information is smaller than a first threshold value or the similarity of the sound information is smaller than a second threshold value, judging that the current user is an abnormal user.
In connection with the scene example, the recognition module 62 detects the similarity between the face information of the current user and the standard faces prestored in the edge service module, and detects the similarity between the sound information of the current user and the standard sounds prestored in the edge service module. The recognition module 62 judges from these similarities whether the current user is a user stored in the edge service module: if the similarity of the face information is smaller than the first threshold or the similarity of the sound information is smaller than the second threshold, the face information and sound information of the current user do not match the standard user information stored in the edge service module, and the current user is judged to be an abnormal user.
By the method in this embodiment, identification is performed comprehensively based on both the face similarity and the sound similarity, which improves the accuracy of identification.
The administrator can adjust the first threshold and the second threshold to tune the sensitivity of user identification.
In one example, if the current user is an abnormal user, the image acquisition module records a video of the user for a predetermined duration; and the pushing to the managing user of the fusion display device when the fusion display device is networked includes: pushing the currently recorded user identification result and the corresponding video when the fusion display device is connected to the network.
In connection with the scene example, if the current user is identified as an abnormal user, the image acquisition module records a video of the user for a predetermined duration. If the fusion display device is currently networked, the current user identification result and the corresponding video are pushed to the managing user through the network. If the fusion display device is currently not networked, the current user identification result and the corresponding video are stored locally and pushed to the managing user through the network once the device is networked. The managing user can review the video, and if the managing user confirms that the abnormal user is not a concern, the managing user can perform the add-user operation.
In one example, the apparatus further comprises: the IPTV private network slicing module is integrated with the converged display equipment and acquires IPTV video streams from an IPTV service platform; and transmitting the IPTV video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the IPTV video stream and then calls a display screen module of the fusion display device to display and play.
As shown in fig. 5 in connection with the scene example, fig. 5 is an example of a fusion display device. The IPTV private network slicing module acquires an IPTV video stream from an IPTV service platform. The IPTV private network slicing module divides the virtual local area network for transmission. The IPTV video stream is transmitted to a video decoding and player module, the IPTV video stream is analyzed by the video decoding and player module, and the video is displayed by a display screen module.
In one example, the apparatus further comprises: the built-in set top box module is integrated with the fusion display device and acquires an OTT video stream from an OTT service platform; and transmitting the OTT video stream to a video decoding and player module of the fusion display device, so that the video decoding and player module analyzes the OTT video stream and then calls a display screen module of the fusion display device to display and play.
As shown in fig. 6 in connection with the scene example, fig. 6 is an example of a fusion display device. And the set top box module acquires the OTT video stream from the OTT service platform. The OTT video stream is transmitted to a video decoding and player module, the OTT video stream is analyzed by the video decoding and player module, and the video is displayed by a display screen module.
In the fusion display device provided by this embodiment, the acquisition module acquires the face information and sound information of the current user through the image acquisition module and the sound acquisition module integrated in the fusion display device; if the fusion display device is not currently networked, the identification module invokes the edge service module integrated in the fusion display device and performs user identification according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module; and the management module records the current user identification result locally and pushes it to the managing user of the fusion display device when the fusion display device is networked. With this scheme, real-time monitoring can be performed through the edge service module while the fusion display device is not networked, thereby providing the user with a stable service.
Embodiment 4
The fourth embodiment of the present application provides a fusion display device, which is based on the third embodiment:
the network conversion module is used for converting the mobile network provided by the mobile network communication module into wireless network signals of other communication protocols and providing the wireless network signals outwards; the wireless network signal is used for providing home network service and access of home intelligent equipment.
Optionally, the wireless network signal includes but is not limited to WiFi6 signal and ZigBee signal.
As shown in fig. 7 in combination with the scene example, fig. 7 is an example of a fusion display device. The communication module sends the mobile network to the network conversion module, and the WiFi6 module and the ZigBee module under the network conversion module respectively convert the mobile network into WiFi6 signals and ZigBee signals.
In the fusion display device provided by this embodiment, the network conversion module converts the mobile network provided by the mobile network communication module into wireless network signals of other communication protocols and provides the wireless network signals externally; the wireless network signals are used to provide home network service and access for home intelligent devices. With this scheme, the fusion display device can convert the mobile network into a wireless network without external equipment, thereby alleviating the problems of too many external devices and complex wiring.
Embodiment 5
Fig. 9 is a block diagram of an apparatus for a fusion display device according to an exemplary embodiment; the apparatus may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
The apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect an on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the apparatus 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiment 6
Fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, as shown in fig. 10, where the electronic device includes:
a processor 291, and the electronic device further includes a memory 292; a communication interface (Communication Interface) 293 and a bus 294 may also be included. The processor 291, the memory 292, and the communication interface 293 may communicate with each other via the bus 294. The communication interface 293 may be used for information transfer. The processor 291 may call logic instructions in the memory 292 to perform the methods of the above embodiments.
Further, the logic instructions in memory 292 described above may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product.
The memory 292 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 291 executes functional applications and data processing by running software programs, instructions and modules stored in the memory 292, i.e., implements the methods of the method embodiments described above.
Embodiments of the present application provide a non-transitory computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, are configured to implement a method as described in the previous embodiments.
Embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the previous embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (12)
1. A processing method for a fusion display device, comprising:
acquiring face information and sound information of a current user through an image acquisition module and a sound acquisition module integrated in the fusion display device;
if the fusion display device is not currently connected to the Internet, invoking an edge service module integrated in the fusion display device, and performing user identification according to the user's face information and sound information and the standard faces and standard sounds stored in the edge service module;
recording the current user identification result locally, and pushing it to a managing user of the fusion display device when the fusion display device is networked;
if the current user is an abnormal user, calling an image acquisition module to record the video of the user in a preset time length;
and the pushing to the managing user of the fusion display device when the fusion display device is networked comprises:
pushing the currently recorded user identification result and the corresponding video when the fusion display device is connected to the network.
2. The method of claim 1, wherein the performing a user identification process comprises:
detecting the similarity between the face information of the user and the standard face prestored in the edge service module; detecting the similarity between the voice information of the user and the standard voice prestored in the edge service module;
and if the similarity of the face information is smaller than a first threshold value or the similarity of the sound information is smaller than a second threshold value, judging that the current user is an abnormal user.
3. The method according to claim 1, wherein the method further comprises:
acquiring an IPTV video stream from an IPTV service platform through an IPTV private network slicing module integrated in the fusion display device;
and transmitting the IPTV video stream to a video decoding and playing module of the fusion display device, so that the video decoding and playing module parses the IPTV video stream and then invokes a display screen module of the fusion display device for display and playback.
4. The method according to claim 1, wherein the method further comprises:
acquiring an OTT video stream from an OTT service platform through a built-in set-top box module integrated in the fusion display device;
and transmitting the OTT video stream to a video decoding and playing module of the fusion display device, so that the video decoding and playing module parses the OTT video stream and then invokes a display screen module of the fusion display device for display and playback.
5. The method according to any one of claims 1-4, further comprising:
converting, through a network conversion module integrated in the fusion display device, the mobile network provided by a mobile network communication module into wireless network signals of other communication protocols, and providing the wireless network signals outwards; wherein the wireless network signals are used for providing home network services and access for home intelligent devices.
6. A fusion display device, comprising:
the acquisition module is used for acquiring face information and sound information of the current user through the image acquisition module and the sound acquisition module integrated in the fusion display device;
the identification module is used for calling an edge service module integrated in the fusion display device if the fusion display device is not currently networked, and for executing user identification processing according to the face information and the sound information of the user and the standard face and the standard sound stored in the edge service module;
the management module is used for recording the current user identification result locally, and for pushing the user identification result to a management user of the fusion display device when the fusion display device is networked;
the acquisition module is further used for calling the image acquisition module to record a video of the user within a preset time length if the current user is an abnormal user;
and the management module is further used for pushing the currently recorded user identification result and the corresponding video when the fusion display device is connected to the network.
7. The fusion display device of claim 6, wherein:
the identification module is specifically configured to detect the similarity between the face information of the user and the standard face prestored in the edge service module, and to detect the similarity between the sound information of the user and the standard sound prestored in the edge service module;
the identification module is further configured to determine that the current user is an abnormal user if the similarity of the face information is less than a first threshold or the similarity of the sound information is less than a second threshold.
8. The fusion display device of claim 6, wherein the device further comprises:
the playing module is used for acquiring an IPTV video stream from an IPTV service platform through an IPTV private network slicing module integrated in the fusion display device;
the playing module is further configured to transmit the IPTV video stream to a video decoding and playing module of the fusion display device, so that the video decoding and playing module parses the IPTV video stream and then invokes a display screen module of the fusion display device to display and play the IPTV video stream.
9. The fusion display device of claim 6, wherein the device further comprises:
the playing module is used for acquiring an OTT video stream from an OTT service platform through a built-in set-top box module integrated in the fusion display device;
the playing module is further configured to transmit the OTT video stream to a video decoding and playing module of the fusion display device, so that the video decoding and playing module parses the OTT video stream and then invokes a display screen module of the fusion display device to display and play the OTT video stream.
10. The fusion display device of any of claims 6-9, wherein the device further comprises:
the network conversion module is used for converting the mobile network provided by the mobile network communication module into wireless network signals of other communication protocols and providing the wireless network signals outwards; wherein the wireless network signals are used for providing home network services and access for home intelligent devices.
11. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-5.
12. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the method of any one of claims 1-5.
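To make the identification flow of claims 1 and 2 concrete, the following is a minimal, illustrative Python sketch: similarities are computed against the standard face and standard sound held by the edge service module, the user is flagged as abnormal when either similarity falls below its threshold, the result (and a placeholder video clip for an abnormal user) is recorded locally, and the cached results are pushed once the device is networked again. The class and function names, the toy similarity measure, and the threshold values are assumptions made for illustration, not details taken from the patent.

```python
# Illustrative sketch of claims 1-2 only; all names and values are assumptions.
import time
from dataclasses import dataclass, field
from difflib import SequenceMatcher


def similarity(sample: bytes, reference: bytes) -> float:
    """Toy stand-in for a face/voiceprint similarity score in [0, 1]."""
    return SequenceMatcher(None, sample, reference).ratio()


@dataclass
class EdgeServiceModule:
    standard_face: bytes
    standard_sound: bytes
    face_threshold: float = 0.8   # "first threshold" in claim 2 (value assumed)
    sound_threshold: float = 0.7  # "second threshold" in claim 2 (value assumed)


@dataclass
class FusionDisplayDevice:
    edge: EdgeServiceModule
    networked: bool = False
    pending: list = field(default_factory=list)  # results cached while offline

    def identify_user(self, face: bytes, sound: bytes) -> dict:
        face_sim = similarity(face, self.edge.standard_face)
        sound_sim = similarity(sound, self.edge.standard_sound)
        # Claim 2: abnormal if either similarity falls below its threshold.
        abnormal = (face_sim < self.edge.face_threshold
                    or sound_sim < self.edge.sound_threshold)
        result = {"time": time.time(), "face_sim": round(face_sim, 3),
                  "sound_sim": round(sound_sim, 3), "abnormal": abnormal}
        if abnormal:
            # Claim 1: record the user for a preset duration (placeholder name).
            result["video"] = f"clip_{int(result['time'])}.mp4"
        # Record locally; push once the device is networked again.
        self.pending.append(result)
        return result

    def on_network_restored(self, push) -> None:
        # Claim 1: push cached results (and any recorded video) to the managing user.
        self.networked = True
        while self.pending:
            push(self.pending.pop(0))


if __name__ == "__main__":
    edge = EdgeServiceModule(standard_face=b"owner-face", standard_sound=b"owner-voice")
    device = FusionDisplayDevice(edge)
    print(device.identify_user(b"stranger-face", b"stranger-voice"))
    device.on_network_restored(push=lambda r: print("pushed:", r))
```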
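The stream handling of claims 3, 4, 8 and 9 can be pictured the same way: a playing module selects a source module (the IPTV private network slicing module or the built-in set-top box module), fetches the stream, and hands it to the video decoding and playing module, which then drives the display screen module. The interfaces below are assumed for illustration; real modules would exchange transport streams rather than strings.

```python
# Illustrative sketch of the stream routing in claims 3-4 / 8-9; interfaces assumed.
from typing import Protocol


class StreamSource(Protocol):
    def fetch(self, channel: str) -> bytes: ...


class IptvSlicingModule:
    """Stands in for the IPTV private network slicing module (claim 3)."""
    def fetch(self, channel: str) -> bytes:
        return f"IPTV:{channel}".encode()


class BuiltInSetTopBoxModule:
    """Stands in for the built-in set-top box module fetching OTT streams (claim 4)."""
    def fetch(self, channel: str) -> bytes:
        return f"OTT:{channel}".encode()


class VideoDecoderPlayer:
    """Parses the stream, then drives the display screen module."""
    def play(self, stream: bytes, display) -> None:
        frames = stream.decode()          # placeholder for real demux/decode
        display(frames)


class PlayingModule:
    def __init__(self, decoder: VideoDecoderPlayer, display) -> None:
        self.decoder = decoder
        self.display = display

    def play_channel(self, source: StreamSource, channel: str) -> None:
        stream = source.fetch(channel)            # IPTV or OTT source module
        self.decoder.play(stream, self.display)   # decode, then display and play


if __name__ == "__main__":
    player = PlayingModule(VideoDecoderPlayer(), display=print)
    player.play_channel(IptvSlicingModule(), "news")        # IPTV path (claim 3)
    player.play_channel(BuiltInSetTopBoxModule(), "movie")  # OTT path (claim 4)
```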
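Finally, the network conversion of claims 5 and 10 amounts to re-exposing the mobile uplink as wireless signals of other protocols for home devices. The sketch below uses assumed protocol names and interfaces purely for illustration; it is not the patented implementation.

```python
# Illustrative sketch of claims 5 / 10; protocol list and interfaces are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MobileNetworkModule:
    """Stands in for the mobile network communication module providing the uplink."""
    carrier: str = "example-carrier"

    def uplink(self) -> str:
        return f"mobile-uplink({self.carrier})"


@dataclass
class NetworkConversionModule:
    uplink_source: MobileNetworkModule
    protocols: List[str] = field(default_factory=lambda: ["Wi-Fi", "Zigbee"])
    connected_devices: List[str] = field(default_factory=list)

    def broadcast(self) -> List[str]:
        # Convert the mobile uplink into wireless signals of other protocols
        # and provide them outwards for the home network.
        link = self.uplink_source.uplink()
        return [f"{proto} <- {link}" for proto in self.protocols]

    def admit(self, device_name: str) -> None:
        # Access for home intelligent devices (claim 5's last limitation).
        self.connected_devices.append(device_name)


if __name__ == "__main__":
    bridge = NetworkConversionModule(MobileNetworkModule())
    print(bridge.broadcast())
    bridge.admit("smart-lamp")
    print(bridge.connected_devices)
```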
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111277612.1A CN114007101B (en) | 2021-10-29 | 2021-10-29 | Processing method, device and storage medium of fusion display device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114007101A CN114007101A (en) | 2022-02-01 |
CN114007101B (en) | 2023-05-16
Family
ID=79925688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111277612.1A (Active, granted as CN114007101B) | Processing method, device and storage medium of fusion display device | 2021-10-29 | 2021-10-29 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114007101B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011161674A1 (en) * | 2010-06-23 | 2011-12-29 | Eyal Wiener | Real-time automatic user status recognition and broadcasting service |
CN106991403A (en) * | 2017-04-07 | 2017-07-28 | 移康智能科技(上海)股份有限公司 | A kind of method and apparatus of recognition of face |
CN109543633A (en) * | 2018-11-29 | 2019-03-29 | 上海钛米机器人科技有限公司 | A kind of face identification method, device, robot and storage medium |
CN111585765A (en) * | 2020-04-28 | 2020-08-25 | 深圳市元征科技股份有限公司 | Face recognition method and device and related equipment |
Non-Patent Citations (2)
Title |
---|
PCA based Facial Recognition for Attendance System; T. A. Kiran et al.; 2020 International Conference on Smart Electronics and Communication (ICOSEC); pp. 248-252 *
Design and Implementation of an Airport Person-ID Consistency Comparison System; Sheng Zhiyong et al.; Journal of North China University of Technology; Vol. 29, No. 05; pp. 13-19 *
Also Published As
Publication number | Publication date |
---|---|
CN114007101A (en) | 2022-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3046309B1 (en) | Method, device and system for projection on screen | |
JP6626440B2 (en) | Method and apparatus for playing multimedia files | |
US9661390B2 (en) | Method, server, and user terminal for sharing video information | |
CN105100829B (en) | Video content intercept method and device | |
CN112114765A (en) | Screen projection method and device and storage medium | |
CN104010222A (en) | Method, device and system for displaying comment information | |
EP3163887A1 (en) | Method and apparatus for performing media synchronization | |
US20220007074A1 (en) | Method and apparatus for playing videos, and electronic device and storage medium thereof | |
US10334282B2 (en) | Methods and devices for live broadcasting based on live broadcasting application | |
CN112969096A (en) | Media playing method and device and electronic equipment | |
EP2986020A1 (en) | Method and apparatus for adjusting video quality based on network environment | |
CN111212306A (en) | Wheat connecting method and device, electronic equipment and storage medium | |
CN105120301A (en) | Video processing method and apparatus, and intelligent equipment | |
CN103997519A (en) | Method and device for transmitting image | |
CN106792024B (en) | Multimedia information sharing method and device | |
CN112291631A (en) | Information acquisition method, device, terminal and storage medium | |
CN112261453A (en) | Method, device and storage medium for transmitting subtitle splicing map | |
CN108521579B (en) | Bullet screen information display method and device | |
CN106254402A (en) | The synchronous method of intelligent terminal's configuration information and device | |
CN111541922A (en) | Method, device and storage medium for displaying interface input information | |
CN114007101B (en) | Processing method, device and storage medium of fusion display device | |
CN110213531B (en) | Monitoring video processing method and device | |
CN109920437B (en) | Method and device for removing interference | |
CN113660513A (en) | Method, device and storage medium for synchronizing playing time | |
CN105700878B (en) | The treating method and apparatus of message editing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |