WO2022267733A1 - Method for dynamically adjusting frame-dropping threshold value, and related devices - Google Patents

Method for dynamically adjusting frame-dropping threshold value, and related devices

Info

Publication number
WO2022267733A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
electronic device
video frame
time
video
Prior art date
Application number
PCT/CN2022/092369
Other languages
French (fr)
Chinese (zh)
Inventor
贾睿 (Jia Rui)
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Publication of WO2022267733A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Definitions

  • the present application relates to the field of screen mirroring, and in particular to a method and related equipment for dynamically adjusting a frame loss threshold.
  • Screen mirroring refers to projecting the screen of one device onto another device in real time for display. It is especially useful in scenarios such as business meetings.
  • This application provides a method for dynamically adjusting the frame loss threshold.
  • an initial frame loss threshold can be set and then dynamically adjusted according to the time difference between video frames received by the second electronic device; that is, the current network status is judged from the time difference between received video frames, the frame loss threshold is adjusted accordingly, and the frame loss operation is then performed, which can effectively solve the video frame blocking problem under different network conditions.
  • it is judged whether the decoding delay and the display delay are reduced after the frame loss threshold is adjusted, and whether the frame loss threshold needs to be adjusted again is determined according to the judgment result, so that the effect of the frame loss threshold can be fed back in time, ensuring that the current frame loss threshold can effectively solve the problem of video frame blocking.
  • the present application provides a method for dynamically adjusting a frame loss threshold.
  • the method can be applied to the second electronic device.
  • the method may include: receiving a video frame sent by the first electronic device; determining the frame receiving time difference, which is the difference between the time of receiving this video frame and the time of receiving the Mth video frame, where the Mth video frame is a video frame sent by the first electronic device before this video frame; when the frame receiving time difference is not less than the frame loss threshold, if the first duration has not reached the preset time, reducing the frame loss threshold, where the first duration is the time from the last adjustment of the frame loss threshold to the current moment and the frame loss threshold is used to determine whether to discard a video frame; or, when the frame receiving time difference is not less than the frame loss threshold, if the first duration has reached the preset time and the frame loss threshold is less than the upper threshold, increasing the frame loss threshold; and if the first duration has reached the preset time and the frame loss threshold is not less than the upper threshold, reducing the frame loss threshold.
  • the second electronic device can adjust the frame loss threshold to adapt to video frame blocking problems under different network conditions and device states.
  • the second electronic device can also perform a threshold test, that is, determine whether the decoding delay and the display delay are reduced after adjusting the frame loss threshold, and determine whether the frame loss threshold needs to be adjusted again according to the judgment result, so that the effect of the frame loss threshold can be fed back in time, ensuring that the current frame loss threshold can effectively solve the problem of video frame blocking.
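The adjustment loop described above can be sketched as follows. This is an illustrative sketch only: the class name, step size, bounds, and units are assumptions, and the patent does not prescribe any concrete implementation.

```python
import time

class FrameDropController:
    """Sketch of the dynamic frame-drop threshold logic (all names illustrative)."""

    def __init__(self, initial_threshold_ms=50.0, upper_threshold_ms=120.0,
                 step_ms=10.0, preset_time_s=5.0):
        self.threshold = initial_threshold_ms   # current frame loss threshold
        self.upper = upper_threshold_ms         # upper threshold (cap on raises)
        self.step = step_ms                     # assumed adjustment granularity
        self.preset_time = preset_time_s        # the "preset time" between adjustments
        self.last_adjust = time.monotonic()     # when the threshold was last adjusted

    def on_frame(self, recv_time_diff_ms, now=None):
        """Return True if the newly received frame should be dropped."""
        now = time.monotonic() if now is None else now
        if recv_time_diff_ms < self.threshold:
            # Frames are arriving in a burst: drop this frame to cut the decode backlog.
            return True
        # Time difference >= threshold: re-evaluate the threshold itself.
        first_duration = now - self.last_adjust
        if first_duration < self.preset_time:
            # Adjusted recently but congestion persists: lower the threshold.
            self.threshold = max(self.step, self.threshold - self.step)
        elif self.threshold < self.upper:
            # Enough time has passed and there is headroom: raise the threshold.
            self.threshold += self.step
        else:
            # At or above the upper threshold: lower it instead.
            self.threshold = max(self.step, self.threshold - self.step)
        self.last_adjust = now
        return False
```

The drop decision (time difference smaller than the threshold) and the threshold adjustment (time difference not less than the threshold) follow the two branches described in this aspect.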
  • the video frame is a video frame not used for a threshold test.
  • the video frames received by the second electronic device and sent by the first electronic device may not be used for the threshold test.
  • the second electronic device can judge whether to adjust the frame loss threshold according to the time of receiving the video frame, which avoids the inaccurate threshold test that would result from adjusting the frame loss threshold while a video frame is being used for the threshold test.
  • judging whether the adjusted frame loss threshold is valid includes: determining the average decoding delay and the average display delay in the current full time period, where the current full time period is the period from receiving the first video frame to determining the average decoding delay and the average display delay; if the decoding delay and the display delay of N video frames are each reduced by at least c% compared with the average decoding delay and the average display delay, it is determined that the adjusted frame loss threshold is valid.
  • the second electronic device can use the average decoding delay and the average display delay to judge whether the adjusted frame loss threshold can effectively solve the problem of video frame blocking, that is, whether it can reduce the decoding delay and the display delay.
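The validity check above reduces to a simple comparison. A minimal sketch, assuming delays in milliseconds and a hypothetical function name (the patent leaves N and c% as parameters):

```python
def threshold_effective(recent_frames, avg_decode_ms, avg_display_ms, c=10.0):
    """Check whether an adjusted frame loss threshold is valid.

    recent_frames: (decode_delay_ms, display_delay_ms) for the N frames
    received after the adjustment. Returns True only if every frame improves
    on BOTH averages for the current full time period by at least c percent.
    """
    return all(
        dec <= avg_decode_ms * (1 - c / 100.0) and
        disp <= avg_display_ms * (1 - c / 100.0)
        for dec, disp in recent_frames
    )
```

For example, with averages of 100 ms (decode) and 110 ms (display) and c = 10, every frame must decode in at most 90 ms and display in at most 99 ms for the adjustment to count as valid.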
  • the second electronic device may discard the video frame when the frame receiving time difference is smaller than the frame loss threshold.
  • in this case, the second electronic device judges that a large number of video frames have been received in a short period of time, so the video frame just received can be discarded directly. This reduces the waiting time for decoding, thereby alleviating video frame blocking.
  • the second electronic device may also record the time at which the video frame is received and initialize the frame loss threshold. It can be understood that the order in which the second electronic device records the receive time and initializes the frame loss threshold is not limited.
  • the second electronic device may record the time when the video frame is received, so as to subsequently calculate the frame receiving time difference.
  • the second electronic device may also store the time of receiving the video frame in a first queue; the time at which the second electronic device received the Mth video frame is also stored in the first queue.
  • a queue may be set to store the time when the second electronic device receives the video frame, so as to facilitate processing the time when the second electronic device receives the video frame.
  • before recording the decoding delay and display delay of the N video frames received after this video frame, the second electronic device may also remove the first-written element from the first queue and write the time at which this video frame was received into the first queue.
  • the second electronic device can adjust the first queue in time to facilitate the calculation of the frame receiving time difference next time.
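The first-queue behavior above (evict the first-written element, append the newest receive time, and compute the difference between the newest and oldest times) maps naturally onto a fixed-size queue. A sketch under assumed names, where M matches the patent's Mth-previous-frame wording:

```python
from collections import deque

def make_recv_queue(m):
    # Holds the receive times of the current frame and the previous M frames.
    return deque(maxlen=m + 1)

def record_and_diff(queue, recv_time):
    """Append a receive time; once the queue is full, the oldest entry (the
    'first written element' in the patent's wording) is evicted automatically.
    Returns the frame receiving time difference, or None until M+1 frames
    have been recorded."""
    queue.append(recv_time)
    if len(queue) < queue.maxlen:
        return None
    return queue[-1] - queue[0]
```

With M = 3 and frames arriving every 16 ms or so, the returned difference stays near 50 ms; a burst of closely spaced arrivals shrinks it, which is exactly the signal the drop decision uses.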
  • the present application provides an electronic device, which may include a display screen, a memory, and one or more processors, where the memory is used to store a computer program and the processor is used to invoke the computer program so that the electronic device executes: receiving a video frame sent by the first electronic device; determining the frame receiving time difference, which is the difference between the time of receiving this video frame and the time of receiving the Mth video frame, where the Mth video frame is a video frame sent by the first electronic device before this video frame; when the frame receiving time difference is not less than the frame loss threshold, if the first duration has not reached the preset time, reducing the frame loss threshold, where the first duration is the time from the last adjustment of the frame loss threshold to the current moment and the frame loss threshold is used to determine whether to discard a video frame; or, when the frame receiving time difference is not less than the frame loss threshold, if the first duration has reached the preset time and the frame loss threshold is less than the upper threshold, increasing the frame loss threshold; and if the first duration has reached the preset time and the frame loss threshold is not less than the upper threshold, reducing the frame loss threshold.
  • the video frame is a video frame not used for a threshold test.
  • the processor is configured to invoke the computer program so that, when judging whether the adjusted frame loss threshold is valid, the electronic device executes: determining the average decoding delay and the average display delay in the current full time period, where the current full time period is the period from receiving the first video frame to determining the average decoding delay and the average display delay; if the decoding delay and the display delay of N video frames are each reduced by at least c% compared with the average decoding delay and the average display delay, determining that the adjusted frame loss threshold is valid.
  • the processor may be further configured to discard the video frame when the frame receiving time difference is less than a frame loss threshold.
  • the processor, which is used to invoke the computer program so that the electronic device receives the video frame sent by the first electronic device, can further be used to invoke the computer program so that the electronic device executes: recording the time of receiving the video frame; and initializing the frame loss threshold.
  • the processor may further be used to invoke the computer program so that the electronic device executes: storing the time of receiving the video frame in the first queue, where the time at which the second electronic device received the Mth video frame is also stored in the first queue.
  • the processor, which is configured to invoke the computer program so that the electronic device records the decoding delay and display delay of the N video frames received after this video frame, can also, before that, be used to invoke the computer program so that the electronic device executes: removing the first-written element from the first queue, and writing the time of receiving the video frame into the first queue.
  • the present application provides a computer storage medium, including an instruction, which, when the instruction is run on an electronic device, causes the electronic device to execute any possible implementation manner in the first aspect above.
  • an embodiment of the present application provides a chip, the chip is applied to an electronic device, and the chip includes one or more processors, and the processor is used to invoke computer instructions so that the electronic device executes any one of the above first aspects a possible implementation.
  • an embodiment of the present application provides a computer program product including instructions, which, when the computer program product is run on a device, cause the electronic device to execute any possible implementation manner in the first aspect above.
  • the electronic device provided by the second aspect, the computer storage medium provided by the third aspect, the chip provided by the fourth aspect, and the computer program product provided by the fifth aspect are all used to execute the methods provided by the embodiments of the present application. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which will not be repeated here.
  • FIG. 1 is a schematic diagram of a mirror projection screen provided by an embodiment of the present application.
  • FIG. 2 is a schematic flow diagram of a mirror projection screen provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of decoding when the network condition is good, provided by an embodiment of the present application.
  • FIG. 3B is a schematic diagram of decoding when the network condition is not good provided by the embodiment of the present application.
  • FIG. 3C is a schematic diagram of decoding after frame loss provided by the embodiment of the present application.
  • FIG. 4 is a flow chart of a method for dynamically adjusting the frame loss threshold provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of another method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
  • FIG. 6 is a flow chart for adjusting the frame loss threshold provided by the embodiment of the present application.
  • FIG. 7A is a schematic diagram of adjusting the frame loss threshold provided by the embodiment of the present application.
  • FIG. 7B is another schematic diagram of adjusting the frame loss threshold provided by the embodiment of the present application.
  • FIG. 7C is another schematic diagram of adjusting the frame loss threshold provided by the embodiment of the present application.
  • FIG. 8 is a schematic flow diagram of an audio-video synchronization provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of another method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device 100 provided by an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a software structure of an electronic device 100 provided by an embodiment of the present invention.
  • An application (app) generally refers to mobile phone software, mainly software installed on a smartphone to remedy shortcomings of the original system and provide personalization. It is the main means of extending a phone's functions and providing users with a richer experience.
  • running mobile phone software requires a corresponding mobile phone operating system.
  • the main mobile phone systems include Apple's iOS, Google's Android, the Symbian platform, and the Microsoft platform.
  • the Window class represents the top-level window of a view, which manages the top-level View in this view, and provides some standard UI processing strategies such as background, title bar, and default buttons. At the same time, it has the only Surface for drawing its own content.
  • WindowManager: when an app creates a Window through WindowManager, WindowManager creates a Surface for each Window and passes the Surface to the app so that the application can draw content on it. In simple terms, Window and Surface can be considered to have a one-to-one relationship.
  • WindowManagerService is a system-level service, started by SystemService, and implements the IWindowManager.AIDL interface.
  • the WindowManager controls window objects, which are containers for view objects. Window objects are always backed by Surface objects.
  • the WindowManager oversees the lifecycle, input and focus events, screen orientation, transitions, animations, position, transformations, Z-order, and many other aspects of the window. WindowManager will send all window metadata to SurfaceFlinger so that SurfaceFlinger can use this data to compose a Surface on the screen.
  • a Surface typically represents the producer side of a buffer queue consumed by SurfaceFlinger. When rendered onto a Surface, the resulting result goes into the associated buffer, which is passed to the consumer. To put it simply, Surface can be regarded as a layer of view, images are drawn on Surface, and the BufferQueue in it produces view data, which is then handed over to its consumer SurfaceFlinger to be synthesized with other view layers, and finally displayed on the screen.
  • SurfaceFlinger is an Android service responsible for managing application-side Surfaces and compositing all Surfaces.
  • SurfaceFlinger is a layer between the graphics library and the application. Its role is to accept graphics display data from multiple sources, synthesize them, and then send them to the display device. For example, when you open an application, there are usually three layers of display, the top status bar, the bottom or side navigation bar, and the application interface. Each layer is updated and rendered separately. These interfaces are synthesized by SurfaceFlinger and refreshed to the hardware display. Bufferqueue is used in the display process, and SurfaceFlinger is used as the consumer. For example, the Surface managed by WindowManager is used as the producer to generate pages, which are then synthesized by SurfaceFlinger.
  • HWC: Hardware Composer.
  • OEM: original equipment manufacturer (of display device hardware).
  • the display subsystem (Display Sub System, DSS) is a piece of hardware dedicated to image synthesis, and the image of the main screen of the mobile phone is synthesized using DSS.
  • VirtualDisplay refers to a virtual display, one of the multiple display types supported by Android (main display, external display, and virtual display). VirtualDisplay has many usage scenarios, such as screen recording and WFD display. Its function is to capture the content displayed on the screen, and there are many ways to implement this capture; the API provides ImageReader to read the content of a VirtualDisplay.
  • MediaCodec can be used to obtain the underlying multimedia encoding of Android, which can be used for encoding and decoding. It is an important part of the Android low-level multimedia infrastructure framework.
  • the role of MediaCodec is to process input data to generate output data. It first generates an input buffer, fills the buffer with data, and provides it to the codec. The codec processes the input data asynchronously and then provides the filled output buffer to the consumer; after the consumer consumes the buffer, it is returned to the codec.
  • Transmission Control Protocol is a connection-oriented, reliable, byte stream-based transport layer communication protocol, defined by RFC 793 of IETF.
  • TCP was designed to accommodate a layered protocol hierarchy supporting multiple network applications.
  • TCP is used to provide reliable communication services between pairs of processes in host computers connected to different but interconnected computer communication networks.
  • TCP assumes that it can obtain simple, possibly unreliable, datagram service from lower-level protocols.
  • TCP should be able to operate over a variety of communication systems from hardwired connections to packet-switched or circuit-switched networks.
  • IP: Internet Protocol.
  • IP was designed to improve the scalability of the network: first, to solve Internet interconnection problems and realize the interconnection and intercommunication of large-scale, heterogeneous networks; second, to decouple top-level network applications from underlying network technologies so that both can develop independently.
  • IP only provides a connectionless, unreliable, best-effort data packet transmission service for the host.
  • a packet is the data unit transmitted by the TCP/IP protocol suite, generally also called a "data packet".
  • Real-time Transport Protocol is a network transmission protocol.
  • the RTP protocol specifies a standard packet format for delivering audio and video over the Internet. It was originally designed as a multicast protocol, but has since been used in many unicast applications.
  • the RTP protocol is often used in streaming media systems (with RTCP protocol), video conferencing and push to talk (Push to Talk) systems (with H.323 or SIP), making it the technical basis of the IP telephony industry.
  • the RTP protocol is used with the RTP control protocol RTCP, and it is built on the user datagram protocol.
  • RTP itself does not provide a timely delivery mechanism or other quality-of-service (QoS) guarantees; it relies on lower-level services for that. RTP does not guarantee delivery or prevent out-of-order delivery, nor does it assume the underlying network is reliable; in-order delivery is instead recovered by the receiver.
  • the sequence number in RTP allows the receiver to reassemble the sender's packet sequence. The sequence number can also be used to determine the appropriate position of a packet; for example, in video decoding, sequential arrival is not required.
  • the RTP standard defines two sub-protocols, RTP and RTCP.
  • the data transfer protocol RTP is used to transfer data in real time.
  • Information provided by the protocol includes: timestamp (for synchronization), sequence number (for packet loss and reordering detection), and payload format (for specifying the encoding format of the data).
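The sequence numbers described above let a receiver restore the sender's packet order before decoding. A minimal sketch of this reordering, assuming payloads buffered as (sequence number, payload) pairs; note that real RTP sequence numbers are 16-bit and wrap around, which this sketch does not handle:

```python
def reorder_rtp(packets):
    """packets: list of (seq, payload) tuples as received, possibly out of order.
    Returns the payloads sorted into the sender's order.
    Assumes no 16-bit sequence-number wraparound within the buffered window."""
    return [payload for seq, payload in sorted(packets)]
```

A production implementation would buffer within a jitter window and handle wraparound (e.g. by comparing sequence numbers modulo 2^16), but the principle is the same.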
  • the control protocol RTCP is used for QoS feedback and media stream synchronization; that is, RTCP monitors the quality of service and conveys information about the participants in an ongoing session. Compared with RTP, RTCP occupies very little bandwidth, usually only 5%. This second aspect of RTCP is sufficient for "loosely controlled" sessions, where there is no explicit membership control and setup; it is not necessarily intended to support all of an application's control communication requirements.
  • the content of each transmission is a structure, so the data in the structure must be packed before each transmission; when one end receives the data, the data in the received buffer (the buf parameter) is unpacked.
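The pack/unpack step above can be illustrated with a fixed-layout header. The field layout here (sequence number, presentation timestamp, payload length, big-endian) is a hypothetical example, not the format the patent uses:

```python
import struct

# Hypothetical frame header: sequence number (uint32), PTS in ms (uint64),
# payload length (uint32), all big-endian.
HEADER = struct.Struct(">IQI")

def pack_frame(seq, pts_ms, payload):
    """Pack the structure's fields plus the payload into one byte buffer."""
    return HEADER.pack(seq, pts_ms, len(payload)) + payload

def unpack_frame(buf):
    """Unpack the received buffer back into (seq, pts_ms, payload)."""
    seq, pts_ms, length = HEADER.unpack_from(buf)
    payload = buf[HEADER.size:HEADER.size + length]
    return seq, pts_ms, payload
```

The sender calls pack_frame before transmission and the receiver calls unpack_frame on the received buffer, mirroring the packing/unpacking described above.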
  • FPS: Frames Per Second.
  • frame rate refers to the number of frames that the chip can render, or actually renders, per second. Generally speaking, it refers to the number of frames of animation or video displayed.
  • FPS measures the amount of information used to store and display dynamic video. The more frames per second, the smoother the displayed motion; generally, 30 FPS is the minimum needed to avoid jerky motion, and some computer video formats can provide only 15 frames per second.
  • PTS (Presentation Time Stamp): the display timestamp, which tells the player when to display the data of this frame.
  • Vertical synchronization (VSYNC) is also known as field synchronization. In terms of the display principle of a CRT (cathode ray tube) display, single pixels form a horizontal scan line, and horizontal scan lines accumulated in the vertical direction form a complete picture. The refresh rate of the display is controlled by the graphics card's DAC (digital-to-analog converter, also known as the D/A converter); the DAC generates a vertical synchronization signal after scanning one frame. Turning on vertical synchronization means sending this signal to the 3D graphics processing part of the graphics card, so that the graphics card is constrained by the vertical synchronization signal when generating 3D graphics.
  • DAC: digital-to-analog converter.
  • screen mirroring brings great convenience to users' work and life. For example, in a meeting, a user can mirror content from a device onto a large screen so that other participants can watch it, without performing corresponding operations on the large screen, which greatly improves the user experience.
  • FIG. 1 is a schematic diagram of a mirroring screen projection provided by an embodiment of the present application.
  • the mirroring screen projection process specifically projects content displayed on a first electronic device to a second electronic device for display.
  • the first electronic device establishes a communication connection with other devices (for example, the second electronic device).
  • As shown in Figure 1, the first electronic device is a mobile phone and the second electronic device is a smart screen. After a communication connection is established between the mobile phone and the smart screen, the video played on the mobile phone can be mirrored on the smart screen. It can be understood that if the interface of the mobile phone (the picture on its display screen) changes due to user operations, the interface on the smart screen changes accordingly.
  • the first electronic device may be one of electronic devices such as a mobile phone, a tablet computer, and a PC
  • the second electronic device may be one of electronic devices such as a tablet computer, a PC, and a smart screen.
  • a communication connection between the first electronic device and the second electronic device may be established in various ways.
  • wireless communication technology can be used to establish a communication connection between the first electronic device and the second electronic device, for example, connecting the first electronic device and the second electronic device through a wireless fidelity (Wireless Fidelity, Wi-Fi) network
  • wired communication technology can be used to establish a communication connection between the first electronic device and the second electronic device, for example, to use media such as coaxial cables, twisted pairs, and optical fibers to connect the first electronic device and the second electronic device.
  • FIG. 2 is a schematic flow chart of mirroring screen projection provided by an embodiment of the present application. As shown in FIG. 2, the mirroring screen projection process mirrors the content displayed on the first electronic device to the second electronic device.
  • the APP on the first electronic device creates Window through WMS, and creates a Surface for each Window, and passes the corresponding Surface to the application so that the application can draw graphics data on the Surface, that is, Layers are presented via WMS.
  • WMS provides SurfaceFlinger with buffers and window metadata (the drawn Surfaces), and SurfaceFlinger uses this information to composite the Surfaces into an image, which is then displayed through the device's own display system (HWC/DSS).
  • The second step is to transmit the screen content to be projected to the second electronic device.
  • the captured screen content can be audio and video data, such as H.264, H.265, VP9, AV1, or AAC data.
  • the captured content is encoded by an encoder (for example, MediaCodec), encrypted, and then packaged in multiple layers (for example, RTP packaging and VTP/TCP packaging); the resulting data packets are finally sent to the second electronic device.
  • the data packet may be sent to the second electronic device through wireless communication (for example, Wi-Fi) or through wired communication.
  • the data packets sent by the first electronic device to the second electronic device contain audio data or video data; that is, audio data and video data are transmitted independently, where the audio data may include audio frames and the video data may include video frames. Therefore, the process of the first electronic device sending data packets to the second electronic device is the process of sending video frames and audio frames to the second electronic device.
  • after the second electronic device receives the data packets sent by the first electronic device, it performs the corresponding unpacking (RTP unpacking and VTP/TCP unpacking), decryption, and decoding (MediaCodec decoding) operations, then synchronizes the resulting audio data and video data and sends them for display; finally, SurfaceFlinger composites the layers, which are displayed on the screen of the second electronic device.
  • the process of the second electronic device receiving the data packet sent by the first electronic device is that the second electronic device receives the video frame and the audio frame sent by the first electronic device.
  • the transmission of data packets is an important part of screen mirroring: whether the first electronic device can send data packets to the second electronic device smoothly and in time directly affects the projection effect. Since the transmission of audio data is relatively stable (it can be transmitted stably even under poor network conditions), this application mainly considers the transmission of video data, that is, video frames.
  • the second electronic device may not be able to receive the video frame sent by the first electronic device in time.
  • when the network recovers, many video frames may be received in a short time. Because the decoder decodes serially, and decoding takes longer than unpacking and decrypting, later video frames can only be decoded after the previously received video frames have finished decoding, so video frame blocking may occur.
  • The differences in how the second electronic device processes video frames under different network environments are introduced below (FIG. 3A to FIG. 3C).
  • as shown in FIG. 3A, when the network condition is good, the second electronic device receives video frames A, B, C, and D at a stable rate (the time difference between received video frames is relatively stable) and decodes them sequentially; the decoding time is equal to the actual decoding time.
  • the decoding time refers to the time from when the second electronic device actually starts decoding a video frame until decoding is completed;
  • the actual decoding time refers to the time from when the second electronic device receives the video frame until decoding is completed.
  • the second electronic device can synchronize the decoded video frames A, B, C, and D with the audio frames at a stable rate, and send the synchronized video frames to display, that is, to transmit the video frames to the display screen to display.
  • FIG. 3A when the network condition is good, video frames can be displayed on the display screen of the second electronic device at a stable rate.
  • the second electronic device may receive multiple video frames in a short period of time, so that they cannot be decoded in time; that is, the next video frame has already been received before decoding of the current video frame is completed, and the later-received video frame can only start decoding after the previous video frame has been decoded.
  • the second electronic device receives video frames A, B, and C within a short period of time, and since video frame A is received first, video frame A is decoded first.
  • the second electronic device then receives video frame B, which should be decoded immediately; but since decoding of video frame A has not been completed, video frame B has to wait for a while. After video frame A is decoded, video frame B is decoded. Similarly, video frame C must wait for video frame B to finish decoding before it can be decoded.
  • the decoding times of video frames A, B, and C are the same, but their actual decoding times differ. Since video frame A is the first frame received by the second electronic device, it can be decoded without waiting; therefore, the decoding time of video frame A equals its actual decoding time. Video frame B, however, must wait for video frame A to finish decoding before it starts decoding, so the actual decoding time of video frame B is longer than its decoding time. Similarly, the actual decoding time of video frame C also includes the time spent waiting for decoding. In other words, the actual decoding time refers to the time required by the electronic device from receiving a video frame to completing its decoding.
  • the process of waiting for decoding will increase the decoding delay, thus causing the delayed display of the video frame.
  • the delay of mirror screen projection will increase; that is, the picture displayed on the first electronic device may not be synchronized with the picture displayed on the second electronic device. For example, when the 5th second (s) of the video is displayed on the first electronic device, only the 3rd second of the video is displayed on the second electronic device. From the user's point of view, this phenomenon greatly affects the user experience.
  • a frame loss threshold can be set, and by judging the relationship between the time difference between the second electronic device receiving video frames and the frame loss threshold, frames are selectively dropped, thereby reducing the waiting time for video frames to be decoded. For example, as shown in FIG. 3C , when the time difference between receiving video frame A and video frame B is less than a predetermined threshold, video frame B may be selected to be discarded. In this case, the video frames can also be displayed on the display screen of the second electronic device in time.
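The basic decision described above can be sketched as follows (an illustrative Python sketch, not the patent's implementation; the function name and millisecond units are assumptions):

```python
# Illustrative sketch: drop a video frame when it arrives too soon after the
# previously received frame, i.e. when the inter-arrival gap is below a
# fixed frame-loss threshold.

def should_drop(prev_arrival_ms, curr_arrival_ms, threshold_ms):
    """Return True if the current frame should be discarded."""
    return (curr_arrival_ms - prev_arrival_ms) < threshold_ms

# Frames A and B arrive 3 ms apart with a 16 ms threshold: B is dropped.
print(should_drop(100, 103, 16))  # True
# A 20 ms gap exceeds the threshold: the frame is kept.
print(should_drop(100, 120, 16))  # False
```

With a fixed threshold, the limitation noted below applies: the same value cannot suit every network condition, which motivates the dynamic adjustment this application proposes.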
  • the above method can improve the video frame blocking situation, but there is still room for further optimization.
  • on the one hand, it is impossible to determine whether the set frame loss threshold can effectively solve the video frame blocking problem, that is, the benefits of frame dropping for decoding delay and display delay cannot be quantified; on the other hand, a preset, fixed frame loss threshold may not be able to effectively solve the video frame blocking problem under different network conditions and different device states.
  • the present application provides a method and related equipment for dynamically adjusting the frame loss threshold.
  • an initial frame loss threshold can be set, and the frame loss threshold can be dynamically adjusted according to the time differences at which the second electronic device receives video frames; that is, the current network status is judged from the received-frame time differences, the frame loss threshold is adjusted accordingly, and the frame dropping operation is then performed, which can effectively solve the video frame blocking problem under different network conditions.
  • the user triggers a setting application program control on the first electronic device, and in response to the user operation, the first electronic device displays a setting interface, and the setting interface includes a wireless screen projection control.
  • the first electronic device may detect a user operation acting on the wireless screen projection control, and in response to the user operation, the first electronic device may display a wireless screen projection interface.
  • the wireless screen projection interface includes one or more controls, and these controls are used to represent devices that can perform mirror projection with the first electronic device.
  • the first electronic device may detect a user operation acting on the first control, and in response to the user operation, the first electronic device may perform screen mirroring with the second electronic device.
  • the first electronic device not only displays images on the display screen of the device, but also sends video frames to the second electronic device, so that the images displayed on the first electronic device can also be displayed on the second electronic device.
  • the second electronic device needs to perform a series of processing steps before it can display, on its display screen, the video frame sent by the first electronic device.
  • for the processing procedure of the second electronic device, reference may be made to the following embodiments.
  • Fig. 4 exemplarily shows a flow chart of a method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
  • S401 Receive a video frame.
  • the second electronic device receives the video frame sent by the first electronic device, and marks the received video frame as A.
  • the TCP/VTP module in the second electronic device receives the video frame sent by the first electronic device.
  • the method of transmitting video frames between the first electronic device and the second electronic device includes but is not limited to sending through wireless communication methods such as wireless local area networks (Wireless Local Area Networks, WLAN), for example, through wireless fidelity (Wireless Fidelity, Wi-Fi) network transmission, and transmission through wired communication, for example, transmission using media such as coaxial cables, twisted pairs, and optical fibers.
  • S402 Determine whether a frame drop operation needs to be performed.
  • the second electronic device determines the difference between the reception time of the received video frame (A) and the reception time of a previously received video frame. When the difference is smaller than the frame loss threshold, a frame dropping operation is performed, that is, video frame A is discarded. When the difference is not less than the frame loss threshold, it is determined whether to adjust the frame loss threshold.
  • the video frame is transmitted to the decoder for decoding, and then sent for display after the decoding is completed, so that the video frame is displayed on the display screen of the second electronic device.
  • Fig. 5 exemplarily shows a flowchart of another method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
  • S501 Receive a video frame.
  • the second electronic device receives the video frame sent by the first electronic device.
  • the TCP/VTP module in the second electronic device receives the video frame sent by the first electronic device.
  • the method of transmitting video frames between the first electronic device and the second electronic device includes but is not limited to sending through wireless communication methods such as wireless local area networks (Wireless Local Area Networks, WLAN), for example, through wireless fidelity (Wireless Fidelity, Wi-Fi) network transmission, and transmission through wired communication, for example, transmission using media such as coaxial cables, twisted pairs, and optical fibers.
  • S502 Record the time of receiving the video frame.
  • the second electronic device may record the time when the video frame arrives, that is, the time when the second electronic device receives the video frame.
  • the second electronic device may also assign a number to each received video frame, and record both the number and the arrival time of the video frame. For example, if the time when the second electronic device receives the Nth video frame is T, the second electronic device can later look up, by the number N, that T is the arrival time of the video frame numbered N.
  • the arrival time of the video frame may be expressed as a number or in another form, and the arrival time mentioned here is not necessarily the real time at which the second electronic device receives the video frame.
  • the second electronic device records the time at which a certain video frame is received as 20210511235643.
  • the time at which the second electronic device receives the first video frame may be set as the 1st ms, and the times at which subsequent video frames are received are calculated relative to the time at which the first frame was received.
  • the second electronic device may also set a queue to store the arrival time of the video frame and/or the number of the video frame. That is, the queue element represents the time when the second electronic device receives the video frame. Understandably, this queue may be recorded as the first queue.
  • the set queue can be a first-in-first-out linear table, which only allows insertion at one end of the table and deletion of elements at the other end.
  • the queue element refers to the data element in the queue or refers to the data element using the queue data structure to perform related operations.
  • the data type of the queue data element can be an existing data type or a user-defined data type. For example, if the time when the second electronic device receives the first video frame is set as 1 ms, the corresponding queue element may be 1.
  • the element written first must be removed before a new element can be written into a full queue.
  • the length of the queue is 3, that is, the number of elements that can be accommodated in the queue is at most 3, that is, the queue can contain at most the receiving time of three video frames.
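The queue behavior described above can be sketched with a bounded FIFO (an illustrative Python sketch; using `collections.deque` to stand in for the patent's first queue is an assumption):

```python
# A minimal sketch of the first queue: a FIFO holding the last three frame
# arrival times. deque(maxlen=3) automatically evicts the oldest element
# when a new arrival time is written into a full queue.
from collections import deque

arrivals = deque(maxlen=3)
for t in (1, 5, 10):       # arrival times of the first three frames, in ms
    arrivals.append(t)
print(list(arrivals))      # [1, 5, 10] -- the queue is now full

arrivals.append(16)        # fourth frame: the first-written element (1) is removed
print(list(arrivals))      # [5, 10, 16]
```

This matches the worked example later in the description, where the queue {1, 5, 10} becomes {5, 10, 16} after the frame received at the 16th ms is written in.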
  • the second electronic device initializes a frame loss threshold, and the frame loss threshold is used to determine whether a video frame is discarded.
  • the initial frame loss threshold may be set as mThreshold:
  • mThreshold = 1*VsyncDuration
  • the preset frame rate refers to the rate at which the first electronic device sends video frames to the second electronic device, that is, the number of video frames the first electronic device sends to the second electronic device per second; it is determined when the communication connection with the second electronic device is established.
  • an adjustment range of the frame loss threshold may be set, for example: 1*VsyncDuration ≤ mThreshold ≤ 2*VsyncDuration.
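The initialization and adjustment range can be sketched as follows (illustrative Python; deriving VsyncDuration from a 60 Hz refresh rate and the helper name are assumptions, not values fixed by the patent):

```python
# Sketch of threshold initialization, assuming the VSYNC duration follows
# from the display refresh rate (about 16.67 ms at 60 Hz). The threshold is
# kept within [1, 2] times the VSYNC duration, as in the embodiment.
VSYNC_DURATION_MS = 1000 / 60

def clamp_threshold(m_threshold):
    """Confine mThreshold to the adjustment range [1, 2] * VsyncDuration."""
    lower = 1 * VSYNC_DURATION_MS
    upper = 2 * VSYNC_DURATION_MS
    return max(lower, min(upper, m_threshold))

m_threshold = 1 * VSYNC_DURATION_MS          # initial value: 1 * VsyncDuration
print(clamp_threshold(m_threshold + 100))    # clipped to 2 * VsyncDuration
```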
  • if the second electronic device has set up a queue to record the times at which it receives video frames (as in step S502), the second electronic device also needs to determine whether the queue is full, that is, whether the number of elements in the queue has reached the upper limit. If the queue is full, execution continues with step S505; if the queue is not full, the recorded receiving time of the video frame is written into the queue (step S507).
  • the data in the queue is used to judge and adjust the frame loss threshold.
  • the frame loss threshold is adjusted only when the queue is full. In some other embodiments, the frame loss threshold may also be adjusted according to the elapsed time when the queue is not full.
  • S505 Determine whether the average frame rate is greater than the minimum frame rate.
  • the average frame rate is compared with the minimum frame rate. If the average frame rate is less than the minimum frame rate, it means that the original frame rate is already low or too many frames have been dropped. To avoid affecting the continuity of the projected picture, the received video frame is not discarded but is sent directly to the decoder to wait for decoding; the first-written element in the queue is removed, and the receiving time of the video frame is then written into the queue (step S507). If the average frame rate is not less than the minimum frame rate, the difference between the receiving time of the current video frame and the time at which the second electronic device received the Mth video frame is calculated, and the difference is denoted as a.
  • the time of the Mth video frame may be the first written element in the first queue.
  • the frame rate refers to the number of picture frames refreshed per second. Here it can be understood as: when video frames of the first electronic device are mirrored to the second electronic device, the number of video frames refreshed per second on the screen of the second electronic device; that is, the frame rate is the number of video frames displayed per second on the screen of the second electronic device.
  • the average frame rate mentioned above refers to the average frame rate within a period of time, which is not necessarily 1s.
  • the average frame rate may be an average rate at which the second electronic device receives video frames sent by the first electronic device within 10s.
  • the average frame rate may be from the second electronic device receiving the first video frame sent by the first electronic device to the second electronic device receiving the latest video frame sent by the first electronic device Average frame rate so far.
  • the average frame rate may also be an average frame rate at which the second electronic device receives N frames of video frames recently sent by the first electronic device.
  • if the second electronic device discards video frames sent by the first electronic device, the discarded video frames may be excluded when calculating the average frame rate.
  • the minimum frame rate is set to ensure the continuity of screen projection: when the frame rate is too low, the projected picture is not smooth, which greatly affects the user experience, so the minimum frame rate can be preset according to actual needs. Comparing the average frame rate with the minimum frame rate before dropping frames avoids the low-frame-rate problem that continuous frame dropping might otherwise cause.
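The minimum-frame-rate guard of step S505 can be sketched as follows (illustrative Python; the 24 fps floor and the function names are assumptions, since the patent leaves the minimum frame rate to actual needs):

```python
# Hedged sketch of the minimum-frame-rate guard: the average frame rate over
# the recorded arrival times is compared with a preset floor before any
# frame is allowed to be dropped.

def average_fps(arrival_times_ms):
    """Average frame rate over the span covered by the recorded arrivals."""
    span_ms = arrival_times_ms[-1] - arrival_times_ms[0]
    if span_ms <= 0:
        return float("inf")
    # N arrivals cover N-1 inter-frame intervals.
    return (len(arrival_times_ms) - 1) * 1000.0 / span_ms

MIN_FPS = 24  # assumed floor; the embodiment presets it per actual needs

def may_drop(arrival_times_ms):
    """Dropping is permitted only while the average rate stays above the floor."""
    return average_fps(arrival_times_ms) >= MIN_FPS

print(may_drop([0, 40, 80]))    # 25 fps >= 24: dropping is allowed
print(may_drop([0, 50, 100]))   # 20 fps < 24: every frame is kept
```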
  • in addition to step S505, it is also possible to determine whether the frame dropping operation can be performed according to the number of consecutively dropped frames: if the number of consecutively dropped frames exceeds the preset limit, the frame dropping operation cannot be performed, and the frame loss threshold is not adjusted.
  • in that case, the received video frame is sent directly to the decoder for decoding, the consecutive-drop count is cleared, the first-written element in the queue is removed, and the receiving time of the video frame is then written into the queue (step S507).
  • the preset limit on consecutively dropped frames may be 2; that is, when the number of consecutively dropped frames exceeds 2, the second electronic device cannot drop the received video frame, but directly transmits it to the decoder to wait for decoding and clears the consecutive-drop count.
  • a parameter may also be set to indicate whether the frame dropping operation is permitted. For example, a parameter Adjust is set for this purpose: whether the frame dropping operation can be performed and whether the frame loss threshold can be adjusted are judged according to the value of Adjust (step S506). Specifically, after step S505 is executed, the value of Adjust is checked, and whether the frame dropping operation can be performed is judged according to that value.
  • when Adjust = 0, it means that the frame loss threshold cannot be adjusted at this time and frame dropping cannot be performed.
  • when the second electronic device receives too many video frames in a short period of time, it directly discards the received video frames, counts the number of consecutively dropped frames, and calculates the average frame rate. It can be understood that, when calculating the average frame rate here, the video frames received but discarded by the second electronic device may be excluded. If the number of consecutively dropped frames exceeds the preset limit, the frame dropping operation is not performed on the next received video frame and the frame loss threshold is not adjusted; instead, the video frame is sent directly to the decoder for decoding.
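The combined guard can be sketched as follows (illustrative Python; the function name and the return convention are assumptions, while the limit of 2 consecutive drops follows the embodiment):

```python
# Sketch of the combined drop decision: a frame is dropped only while the
# inter-arrival gap is below the threshold AND the consecutive-drop count
# stays within the preset limit (2 in the embodiment); once the limit is
# reached, the next frame is decoded unconditionally and the count clears.

MAX_CONSECUTIVE_DROPS = 2

def handle_frame(gap_ms, threshold_ms, consecutive_drops):
    """Return ('drop' | 'decode', updated consecutive-drop count)."""
    if gap_ms < threshold_ms and consecutive_drops < MAX_CONSECUTIVE_DROPS:
        return "drop", consecutive_drops + 1
    return "decode", 0

state = 0
for gap in (3, 3, 3, 3):             # four frames arriving 3 ms apart
    action, state = handle_frame(gap, 16, state)
    print(action)
# drop, drop, decode, drop -- the third frame is forced through to keep the
# projected picture continuous
```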
  • S601 Determine whether the second electronic device is performing a threshold test.
  • the threshold test refers to the following: after a threshold adjustment, the second electronic device judges whether the adjustment is effective according to the decoding delay and display delay of the next N received video frames; when the adjustment is effective, the adjusted frame loss threshold is kept, otherwise the threshold is adjusted again.
  • an ongoing threshold test means that it is currently impossible to know whether the last threshold adjustment was effective, so the threshold is not adjusted again at this time, to avoid interfering with the evaluation of the last adjustment.
  • if the second electronic device is performing a threshold test, it directly transmits the received video frame to the decoder to wait for decoding, removes the first-written element in the queue, and then writes the receiving time of the video frame into the queue (step S507); if the second electronic device is not performing a threshold test, execution continues with step S602.
  • S602 Determine whether the frame loss threshold is not adjusted within a preset time.
  • if the frame loss threshold has not been adjusted within the preset time, this may be because the frame loss threshold is too large, so that a always satisfies a < mThreshold within the preset time (case 1), and reducing the threshold can be considered. If the frame loss threshold has been adjusted within the preset time, execution continues with step S603.
  • the preset time can be set according to actual needs, and in an embodiment of the present application, the preset time can be set to 1.5s.
  • S603 Determine whether the frame loss threshold is smaller than the upper threshold.
  • an adjustment range can be set for the frame loss threshold, that is, an upper limit and/or a lower limit for adjusting the frame loss threshold can be set.
  • the lower limit of the frame loss threshold is set to 1*VsyncDuration
  • the upper threshold is 2*VsyncDuration.
  • if the frame loss threshold is not less than the upper limit, it means the threshold has reached its adjustable maximum (since the adjustment cannot exceed the adjustment range), and reducing the threshold can be considered. If the frame loss threshold is less than the upper limit, increasing the frame dropping range to alleviate video frame congestion can be considered, that is, increasing the frame loss threshold (step S604).
  • the value of b can be set according to the actual situation. For example, b is set to 0.1*VsyncDuration.
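The adjustment steps can be sketched as follows (illustrative Python; the 60 Hz VSYNC period and helper names are assumptions, while the step b = 0.1*VsyncDuration and the [1, 2]*VsyncDuration range follow the embodiment):

```python
# Sketch of the threshold adjustment of steps S604/S605: raise or lower
# mThreshold by a fixed step b while staying inside [1, 2] * VsyncDuration.
VSYNC = 1000 / 60
B_STEP = 0.1 * VSYNC                 # step size b from the embodiment
LOWER, UPPER = 1 * VSYNC, 2 * VSYNC

def raise_threshold(m):
    """Increase the threshold by b, capped at the upper limit."""
    return min(UPPER, m + B_STEP)

def lower_threshold(m):
    """Decrease the threshold by b, floored at the lower limit."""
    return max(LOWER, m - B_STEP)

m = 1 * VSYNC                        # start at the initial threshold
m = raise_threshold(m)
print(round(m / VSYNC, 2))           # 1.1 -- one step above the lower limit
```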
  • the adjustment of the frame loss threshold is described with reference to FIG. 7A-FIG. 7C.
  • the frame loss threshold is initialized.
  • as shown in FIG. 7A, when a < mThreshold, the frame dropping process is performed.
  • the method of adjusting the frame loss threshold is to increase the frame loss threshold with a certain step size (as shown in Figure 7B)
  • the adjustable upper limit of the frame loss threshold is 2*VsyncDuration, that is, mThreshold ≤ 2*VsyncDuration.
  • the second electronic device can remove the first-written element in the queue and write the time of receiving the video frame (the receiving time recorded in step S502) into the queue.
  • for example, if the queue is {1, 5, 10}, the receiving times of the three video frames stored in the queue are the 1st, 5th, and 10th ms respectively, where 1 is the first element written into the queue and 10 is the last. The first-written element, 1, is removed, and the receiving time of the new video frame is written into the queue; if that time is the 16th ms, 16 is written into the queue, and the updated queue is {5, 10, 16}.
  • alternatively, the queue may be {3, 4, 5}; that is, the queue stores the numbers of the 3rd, 4th, and 5th video frames received by the second electronic device. Through these numbers, the second electronic device can look up the times at which it received the 3rd, 4th, and 5th video frames.
  • the queue may also be {20210511235643, 20210511235666, 20210511235733}
  • the three strings of numbers stored in the queue indicate the time of the three video frames received by the second electronic device.
  • the decoder in the second electronic device decodes the received video frame, and records the decoding delay.
  • the decoding delay refers to a period of time from when a video frame is transmitted to a decoder to when decoding of the video frame is completed.
  • the audio frame is adjusted accordingly, and it is judged whether to send the video frame for display.
  • the second electronic device can determine whether a frame dropping operation has occurred (step S801). If a video frame received before the video frame to be synchronized has been discarded, the corresponding audio frames must also be discarded, the same number of audio frames as of discarded video frames (step S802); otherwise the video frames and audio frames cannot correspond, and the sound and picture displayed on the screen of the second electronic device will be out of sync. If there has been no frame dropping operation, or the same number of audio frames has already been discarded (step S802), the average PTS interval of all video frames sent for display is calculated and recorded as c, and the send-for-display time of the next video frame is then estimated.
  • estimated video frame PTS = actual PTS of the previous video frame + c. It can be understood that the actual PTS of the previous video frame refers to its actual display time. If the estimated video frame PTS differs from the actual display time, then when the difference between the two is less than the first preset threshold, the video frame is sent for display at the estimated PTS. Because a video frame can only be displayed at a VSYNC time point after it is sent for display, the VSYNC time point closest to the send-for-display time point (the estimated video frame PTS) must be found (step S804).
  • it is then judged whether the difference between the VSYNC time point and the audio frame display time point is less than the second preset threshold (step S805). If the difference is not less than the second preset threshold, the video frame is not sent for display (step S806); otherwise, the video frame is sent for display (step S807), so that it is finally displayed on the screen of the second electronic device.
  • both the first preset threshold and the second preset threshold can be set according to actual needs, which is not limited in the present application.
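The PTS estimation and VSYNC snapping described in the steps above can be sketched as follows (illustrative Python; the 60 Hz VSYNC period, the function names, and the example values are assumptions):

```python
# Hedged sketch of the send-for-display decision (steps S803-S804): the next
# frame's PTS is estimated as the previous frame's actual PTS plus the
# average PTS interval c, then snapped to the nearest VSYNC time point.
VSYNC_MS = 1000 / 60

def estimate_pts(prev_actual_pts, avg_interval_c):
    """Estimated PTS = actual PTS of the previous frame + average interval c."""
    return prev_actual_pts + avg_interval_c

def nearest_vsync(t_ms):
    """VSYNC time point closest to the estimated send-for-display time."""
    return round(t_ms / VSYNC_MS) * VSYNC_MS

est = estimate_pts(100.0, 33.0)
print(est)                           # 133.0
print(round(nearest_vsync(est), 2))  # 133.33 -- the 8th VSYNC tick
```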
  • the upper-layer application of the second electronic device will receive the send-for-display completion callback from MediaCodec (as shown in FIG. 2), and update the decoding delay, the send-for-display delay, and the average frame rate.
  • the delay in sending and displaying refers to the time required from the decoding of the video frame to the actual display on the screen of the second electronic device.
  • the second electronic device needs to perform a threshold test to check whether the adjustment to the frame loss threshold is effective. Specifically, the second electronic device monitors the decoding delay and display delay of the N video frames received after the adjustment. If the decoding delay and display delay of those N video frames are each at least c% lower than the average decoding delay and average display delay over the current full time period, the adjustment made in the above steps is judged to be effective, and the adjusted frame loss threshold continues to be used; otherwise, the adjustment is judged ineffective, and the frame loss threshold mThreshold is reduced by a certain step. For details, refer to step S506, which will not be repeated here.
  • the current full time period refers to a period of time from when the second electronic device receives the first video frame to when the average decoding delay and the average display delay are calculated.
  • c can be adjusted according to actual needs.
  • c can be 10.
  • if the decoding delay and display delay of the N video frames received by the second electronic device are respectively reduced by at least 10% compared with the average decoding delay and average display delay of the current full time period, the adjustment to the frame loss threshold is judged to be effective.
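The threshold test's effectiveness check can be sketched as follows (illustrative Python; the function name and the sample delay values are assumptions, while c = 10 follows the example above):

```python
# Sketch of the threshold test: after an adjustment, the delays of the next
# N frames must fall at least c% below the running averages of the current
# full time period for the adjustment to count as effective.

def adjustment_effective(new_decode_ms, new_display_ms,
                         avg_decode_ms, avg_display_ms, c=10):
    """True if both delays dropped by at least c% versus the averages."""
    factor = 1 - c / 100.0
    return (new_decode_ms <= avg_decode_ms * factor and
            new_display_ms <= avg_display_ms * factor)

# Both delays dropped by more than 10%: the adjusted threshold is kept.
print(adjustment_effective(8.5, 12.0, 10.0, 14.0))   # True
# Decoding delay dropped only 2%: the threshold is reduced instead.
print(adjustment_effective(9.8, 12.0, 10.0, 14.0))   # False
```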
  • FIG. 9 exemplarily shows a flowchart of another method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
  • S901 Receive a video frame.
  • the second electronic device receives the video frame sent by the first electronic device.
  • the TCP/VTP module in the second electronic device receives the video frame sent by the first electronic device.
  • step S901 reference may be made to step S501, which will not be repeated here.
  • S902 Record the time of receiving the video frame and initialize the frame loss threshold.
  • after receiving the video frame sent by the first electronic device, the second electronic device records the receiving time of the video frame and initializes the frame loss threshold. It can be understood that, for the specific implementation of step S902, reference may be made to steps S502 and S503, which will not be repeated here.
  • the second electronic device can set up a queue for storing the receiving times of video frames. Before writing the receiving time of a video frame into the queue, the second electronic device can judge whether the queue is full; for the specific judgment process, reference may be made to step S504, which will not be repeated here.
  • S904 Determine whether the average frame rate is greater than the minimum frame rate.
  • the second electronic device may determine whether the average frame rate is greater than the minimum frame rate. It can be understood that, for a specific implementation manner of step S904, reference may be made to step S505, which will not be repeated here.
  • S905 Calculate the difference between the arrival time of the current video frame and the first element in the queue, and denote the difference as a.
  • the second electronic device calculates the difference between the received time of the current video frame and the first written element in the queue, and marks the difference as a.
  • S906 Determine whether the frame loss threshold can be adjusted.
  • step S906 reference may be made to step S505, which will not be repeated here.
  • S907 Determine whether a is smaller than the frame loss threshold.
  • the second electronic device may determine whether a is smaller than the frame loss threshold. If a is less than the frame loss threshold, the frame is dropped directly and the number of consecutively dropped frames is counted (step S908); if a is not less than the frame loss threshold, it is determined whether a threshold test is in progress (step S909).
  • step S908 reference may be made to step S506, which will not be repeated here.
  • S909 Determine whether a threshold value test is being performed.
  • the second electronic device can determine whether a threshold test is in progress. If the threshold test is being performed, step S914 is directly performed; if the threshold test is not being performed, step S910 is continued.
  • step S909 reference may be made to step S601, which will not be repeated here.
  • S910 Determine whether the frame loss threshold is not adjusted within a preset time.
  • the second electronic device may determine whether the frame loss threshold is not adjusted within a preset time. If the second electronic device adjusts the frame loss threshold within the preset time, directly perform step S911; if the second electronic device does not adjust the frame loss threshold within the preset time, continue to perform step S913.
  • step S910 for a specific implementation manner of step S910, reference may be made to step S602, which will not be repeated here.
  • S911 Determine whether the frame loss threshold is smaller than the upper threshold.
  • the second electronic device may determine whether the frame loss threshold is smaller than the upper threshold. If the frame loss threshold is less than the upper threshold, continue to execute step S912; if the frame loss threshold is not less than the upper threshold, execute step S913.
  • step S911 reference may be made to step S603, which will not be repeated here.
  • S912 Increase the frame loss threshold by one step and start threshold testing.
  • step S912 For a specific implementation manner of step S912, reference may be made to step S604, which will not be repeated here.
  • S913 Decrease the frame loss threshold by one step and start threshold testing.
  • step S913 reference may be made to step S605, which will not be repeated here.
  • step S914 reference may be made to step S507, which will not be repeated here.
  • S915 Write the time of the received video frame into a queue.
  • For a specific implementation manner of step S915, reference may be made to step S507, which will not be repeated here.
  • For a specific implementation manner of step S916, reference may be made to step S508, which will not be repeated here.
  • S917 Audio and video synchronization and display.
  • For a specific implementation manner of step S917, reference may be made to step S509, which will not be repeated here.
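Step S917 performs audio and video synchronization before display. One common approach, shown here purely as an illustrative sketch rather than the embodiment's exact rule, is to treat the audio clock as the master clock and compare each video frame's presentation timestamp against it; the tolerance value is a hypothetical default:

```python
def av_sync_action(video_pts_ms, audio_clock_ms, tolerance_ms=40):
    """Illustrative audio-video sync decision with audio as the master clock."""
    diff = video_pts_ms - audio_clock_ms
    if diff > tolerance_ms:
        return "wait"      # video ahead of audio: hold the frame briefly
    if diff < -tolerance_ms:
        return "drop"      # video behind audio: discard to catch up
    return "display"       # within tolerance: render now
```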
  • S918 Send-for-display callback; update the decoding delay, display delay and average frame rate.
  • the second electronic device can check the display status of the video frame on the second electronic device through the send-for-display callback. For details, reference may be made to step S509, which will not be repeated here.
  • For specific implementation manners of step S919 to step S923, reference may be made to step S510, which will not be repeated here.
  • the second electronic device may detect the decoding delay and the display delay of subsequent received video frames.
  • the second electronic device may determine whether the adjustment to the frame loss threshold is an effective adjustment. Therefore, the second electronic device needs to determine whether 60 video frames have been received after the frame loss threshold was adjusted.
  • For a specific determination method, reference may be made to step S510, which will not be repeated here.
  • For a specific implementation manner of step S923, reference may be made to step S605, which will not be repeated here.
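Steps S915 and S918 to S923 above keep a queue of frame receive times, derive the average frame rate from it, and evaluate a threshold adjustment only after 60 frames have been received. A minimal sketch, with the class name, attribute names and window size chosen for illustration:

```python
from collections import deque


class FrameStats:
    """Illustrative bookkeeping: a queue of receive times, an average
    frame rate derived from it, and a 60-frame gate before judging an
    adjustment to the frame loss threshold."""

    def __init__(self, window=60):
        self.times_ms = deque(maxlen=window)  # receive-time queue (S915)
        self.frames_since_adjust = 0

    def on_frame(self, recv_time_ms):
        self.times_ms.append(recv_time_ms)    # write receive time into the queue
        self.frames_since_adjust += 1

    def average_fps(self):
        if len(self.times_ms) < 2:
            return 0.0
        span_ms = self.times_ms[-1] - self.times_ms[0]
        return (len(self.times_ms) - 1) * 1000.0 / span_ms

    def adjustment_measurable(self):
        # The effect of a threshold change is evaluated only once
        # 60 frames have arrived after the adjustment.
        return self.frames_since_adjust >= 60
```

With frames arriving every 20 ms, the derived average frame rate is 50 frames per second, and the adjustment becomes measurable after the 60th post-adjustment frame.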
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may implement the methods for dynamically adjusting the frame loss threshold shown in FIG. 4 , FIG. 5 and FIG. 9 . It can be understood that the above-mentioned first electronic device and the second electronic device may be the electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (Subscriber Identity Module, SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (Application Processor, AP), a modem processor, a graphics processing unit (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a memory, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a neural-network processing unit (Neural-network Processing Unit, NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the first electronic device may be the electronic device 100, and the specific process of displaying the screen by the first electronic device is as follows: the processor 110 completes the composition of multiple screen layers (WMS layer display, and SurfaceFlinger layer composition), and the result is then sent to the display screen 194 for display (HWC/DSS display, and main screen display).
  • the processor 110 of the first electronic device also completes the encoding and packaging of video frames (VirtualDisplay virtual display, and VTP/TCP packaging), and finally these packaged video frames are sent to the second electronic device through the wireless communication module 160.
  • the second electronic device may be the electronic device 100 .
  • the wireless communication module 160 of the second electronic device receives the video frame data sent by the first electronic device.
  • the video frame data will be processed by the processor 110 through a series of inverse operations, namely unpacking (VTP/TCP unpacking, and RTP unpacking) and decoding (MediaCodec decoding), to obtain video frame data that can actually be displayed.
  • this video frame data will also be sent for display (MediaCodec send-for-display) and layer composition (SurfaceFlinger layer composition), and finally sent to the display screen 194 for display (main screen display).
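The receive-side processing just described is a fixed ordering of stages: VTP/TCP unpacking, RTP unpacking, MediaCodec decoding, and SurfaceFlinger layer composition before display. The sketch below only models that ordering; the stage callables are placeholders, since the real stages are platform components (MediaCodec, SurfaceFlinger) rather than plain functions:

```python
def receive_pipeline(data, stages):
    """Run received data through the ordered receive-side stages.
    `stages` is a list of (name, fn) pairs; each fn transforms the data."""
    for _name, fn in stages:
        data = fn(data)
    return data


# Each stage is modelled as a labelling transform, purely for illustration.
stages = [
    ("vtp_tcp_unpack", lambda d: d + ["vtp/tcp-unpacked"]),
    ("rtp_unpack",     lambda d: d + ["rtp-unpacked"]),
    ("decode",         lambda d: d + ["decoded"]),
    ("compose",        lambda d: d + ["composed"]),
]
```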
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and the timing signal, and complete the control of instruction fetching and instruction execution.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from this memory. Repeated access is thereby avoided, and the waiting time of the processor 110 is reduced, which improves the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (Inter-Integrated Circuit, I2C) interface, an inter-integrated circuit sound (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver/transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, a mobile industry processor interface (Mobile Industry Processor Interface, MIPI), a general-purpose input/output (General-Purpose Input/Output, GPIO) interface, a subscriber identity module (Subscriber Identity Module, SIM) interface, and/or a universal serial bus (Universal Serial Bus, USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (Serial Data Line, SDA) and a serial clock line (Serial Clock Line, SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interface includes camera serial interface (Camera Serial Interface, CSI), display serial interface (Display Serial Interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect earphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (Wireless Local Area Networks, WLAN) (such as a wireless fidelity (Wireless Fidelity, Wi-Fi) network), Bluetooth (Bluetooth, BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (Infrared, IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • the GNSS may include the Global Positioning System (Global Positioning System, GPS), the Global Navigation Satellite System (Global Navigation Satellite System, GLONASS), the BeiDou Navigation Satellite System (BeiDou Navigation Satellite System, BDS), the Quasi-Zenith Satellite System (Quasi-Zenith Satellite System, QZSS), and/or a Satellite Based Augmentation System (Satellite Based Augmentation Systems, SBAS).
  • the communication between the first electronic device and the second electronic device can be realized through the wireless communication module 160 . It can be understood that a point-to-point communication manner may be adopted between the first electronic device and the second electronic device, or communication may be performed through a server.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • GPU is a microprocessor for image processing, connected to display screen 194 and application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), an active-matrix organic light-emitting diode (Active-Matrix Organic Light Emitting Diode, AMOLED), a flexible light-emitting diode (Flex Light-Emitting Diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (Quantum Dot Light Emitting Diodes, QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 may realize the acquisition function through an ISP, a camera 193 , a video codec, a GPU, a display screen 194 , and an application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image or video visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element can be a charge coupled device (Charge Coupled Device, CCD) or a complementary metal oxide semiconductor (Complementary Metal-Oxide-Semiconductor, CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP for conversion into a digital image or video signal.
  • ISP outputs digital image or video signal to DSP for processing.
  • DSP converts digital images or video signals into standard RGB, YUV and other formats of images or video signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the electronic device 100 can use N cameras 193 to acquire images with multiple exposure coefficients, and then, in video post-processing, the electronic device 100 can synthesize an HDR image from the images with multiple exposure coefficients using high-dynamic-range (HDR) technology.
  • Digital signal processors are used to process digital signals. In addition to digital image or video signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: Moving Picture Experts Group (Moving Picture Experts Group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (Neural-Network, NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image and video playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (Universal Flash Storage, UFS) and the like.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • the speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or answer a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D may be the USB interface 130, or a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the sensor module 180 may include one or more sensors, and these sensors may be of the same type or of different types. It can be understood that the sensor module 180 shown in FIG. 1 is only an exemplary division manner, and there may be other division manners, which is not limited in this application.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
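The pressure-dependent behaviour described for the short message icon reduces to a comparison against the first pressure threshold. The function name, return values, and threshold value below are hypothetical:

```python
def short_message_action(touch_intensity, first_pressure_threshold):
    """Illustrates the pressure-dependent behaviour for the short
    message icon; the threshold value itself is device-specific."""
    if touch_intensity < first_pressure_threshold:
        return "view_messages"   # light press: open the message list
    return "new_message"         # firm press: create a new message
```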
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • when the electronic device 100 is a clamshell device, the electronic device 100 can detect the opening and closing of the clamshell according to the magnetic sensor 180D, and features such as automatic unlocking upon flipping open can then be set.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 100, and can be applied to applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the fingerprint sensor 180H is used to acquire fingerprints.
  • the electronic device 100 can use the acquired fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J.
  • the touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the bone conduction sensor 180M can acquire vibration signals.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key or a touch key.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected to and separated from the electronic device 100 by inserting it into or pulling it out of the SIM card interface 195.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a standard SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • FIG. 11 is a schematic diagram of a software structure of an electronic device 100 provided by an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the runtime (Runtime) and system libraries, and the kernel layer.
  • the application layer can consist of a series of application packages.
  • the application package may include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications (also called applications).
  • the application package may also include a further application; the user can start screen mirroring after triggering this application by touch, click, gesture, voice, or the like.
  • during the screen mirroring process, the electronic device 100 can serve as the device that sends video frames and audio frames (for example, a first electronic device), or as the device that receives video frames and audio frames (for example, a second electronic device).
  • the name of the application program may be "wireless projection", which is not limited in this application.
  • the application framework layer provides an application programming interface (Application Programming Interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include window manager, content provider, view system, phone manager, resource manager, notification manager and so on.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • The data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phonebook, and the like.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 . For example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also be a notification that appears on the top status bar of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog interface. For example, prompting text information in the status bar, issuing a prompt sound, vibrating the electronic device, and flashing the indicator light, etc.
  • Runtime includes the core library and virtual machine. Runtime is responsible for the scheduling and management of the system.
  • the core library includes two parts: one part is the utility functions that the programming language (for example, the Java language) needs to call, and the other part is the core library of the system.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes programming files (for example, java files) of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: Surface Manager (Surface Manager), Media Library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem, and provides fusion of two-dimensional (2-Dimensional, 2D) and three-dimensional (3-Dimensional, 3D) layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and a virtual card driver.
  • when the electronic device 100 is a device that sends video frames and audio frames during screen mirroring (for example, the first electronic device) and a touch operation is received, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps of touch operations, and other information).
  • Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event.
  • the wireless projection application calls the interface of the application framework layer to start the wireless projection application, and then calls the kernel layer to start the corresponding driver, so as to transmit the video frames and audio frames to another device (the device receiving the video frames and audio frames during screen mirroring, for example, the second electronic device) through the wireless communication module 160.
  • the device to receive the screen projection may start the wireless screen-casting application by default, or start the wireless screen-casting application upon receiving a screen mirroring request sent by another device.
  • after the first electronic device starts the wireless screen projection application, it can select a second electronic device that has already started the wireless screen projection application; once the second electronic device is selected and a communication connection is established with it, mirror projection can begin.
  • a communication connection between the first electronic device and the second electronic device may be established through the wireless communication technology provided by the wireless communication module 160 in FIG. 10 .
  • when the electronic device 100 is a device that receives video frames and audio frames in mirror projection (for example, a second electronic device), it correspondingly starts the wireless screen projection application, receives the video frames and audio frames through the wireless communication module 160, calls the kernel layer to start the display driver and audio driver, displays the received video frames through the display screen 194, and plays the received audio frames through the speaker 170A.
  • the serial numbers of the above-mentioned processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions described above are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • the modules in the device of the embodiment of the present application can be combined, divided and deleted according to actual needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Disclosed in the present application are a method for dynamically adjusting a frame-dropping threshold value, and related devices. The method comprises: a second electronic device being able to receive a video frame that is sent by a first electronic device, and recording the time at which the video frame is received; the second electronic device determining a frame-receiving time difference, wherein the frame-receiving time difference is the difference between the time at which the video frame is received and the time at which an Mth video frame is received, and the Mth video frame is a video frame that is sent by the first electronic device before the first electronic device sends said video frame; if the frame-receiving time difference is not less than a frame-dropping threshold value, the second electronic device further being able to adjust the frame-dropping threshold value; in addition, after adjusting the frame-dropping threshold value, the second electronic device further being able to perform a threshold value test, that is, determining whether a decoding delay and a display delay after the frame-dropping threshold value is adjusted are reduced. By means of the above method, not only can the problem of video frame blocking under different network conditions and in different device states be handled, but the effect of adjusting the frame-dropping threshold value can also be fed back in a timely manner, thereby ensuring that the problem of video frame blocking can be effectively solved on the basis of the current frame-dropping threshold value.

Description

A method for dynamically adjusting a frame loss threshold and related devices
This application claims priority to Chinese patent application No. 202110713481.0, entitled "A Method for Dynamically Adjusting the Frame Loss Threshold and Related Devices", submitted to the China Patent Office on June 25, 2021, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of screen mirroring, and in particular to a method for dynamically adjusting a frame loss threshold and related devices.
Background
With the continuous development of Internet technology, screen projection technology has been widely used, bringing great convenience to people's work and life. Screen projection falls into two types: mirror projection and non-mirror projection. Mirror projection refers to projecting the screen of one device onto another device for display in real time, and it is especially useful in scenarios such as business meetings.
Video frames are transmitted during mirror projection, and this transmission is easily affected by the network. If the network condition is poor, the device that should receive the video frames cannot actually receive them; after the network recovers, the device receives a large number of video frames in a short period of time, exceeding the decoding capability of its decoder. A large number of video frames then accumulate in the decoder, which cannot decode them in time, increasing the screen projection delay.
Therefore, how to effectively solve the problem of video frame blocking under different network conditions and different device states is a problem that urgently needs to be solved.
Summary
The present application provides a method for dynamically adjusting a frame loss threshold. An initial frame loss threshold can be set, and the frame loss threshold can be dynamically adjusted according to the time difference with which a second electronic device receives video frames; that is, the current network condition is judged according to the time difference between received video frames, the frame loss threshold is adjusted accordingly, and the frame dropping operation is then performed, which can effectively solve the video frame blocking problem under different network conditions. In addition, after the frame dropping is completed, it is judged whether the decoding delay and the display delay have decreased after the frame loss threshold was adjusted, and whether the frame loss threshold needs to be adjusted again is determined according to this judgment, so that the effect of adjusting the frame loss threshold can be fed back in time, ensuring that the current frame loss threshold can effectively solve the problem of video frame blocking.
In a first aspect, the present application provides a method for dynamically adjusting a frame loss threshold. The method may be applied to a second electronic device. The method may include: receiving a video frame sent by a first electronic device; determining a frame receiving time difference, where the frame receiving time difference is the difference between the time of receiving the video frame and the time of receiving an Mth video frame, and the Mth video frame is a video frame sent by the first electronic device before sending the video frame; in a case where the frame receiving time difference is not less than a frame loss threshold, if a first duration has not reached a preset time, reducing the frame loss threshold, where the first duration is the time from the last adjustment of the frame loss threshold to the current moment, and the frame loss threshold is used to determine whether to discard the video frame; or, in a case where the frame receiving time difference is not less than the frame loss threshold, if the first duration reaches the preset time and the frame loss threshold is less than an upper threshold limit, increasing the frame loss threshold; if the first duration reaches the preset time and the frame loss threshold is not less than the upper threshold limit, reducing the frame loss threshold, where the upper threshold limit is the maximum value of the frame loss threshold; recording the decoding delay and the display delay of N video frames received after the video frame, where the decoding delay is the time from when a video frame arrives at the decoder to when its decoding is completed, and the display delay is the time from when a video frame finishes decoding to when it is displayed on the display screen; judging whether N is equal to a preset number of frames; if N is equal to the preset number of frames, judging whether the adjusted frame loss threshold is valid; and if the adjusted frame loss threshold is invalid, reducing the adjusted frame loss threshold and stopping a threshold test, where the threshold test is used to judge whether the adjustment to the frame loss threshold is valid.
In the solution provided by the present application, the second electronic device can adjust the frame loss threshold to adapt to video frame blocking under different network conditions and device states. In addition, after the frame dropping is completed, the second electronic device can also perform a threshold test, that is, judge whether the decoding delay and the display delay have decreased after the frame loss threshold was adjusted, and determine, according to this judgment, whether the frame loss threshold needs to be adjusted again, so that the effect of adjusting the frame loss threshold can be fed back in time, ensuring that the current frame loss threshold can effectively solve the problem of video frame blocking.
With reference to the first aspect, in a possible implementation of the first aspect, the video frame is a video frame that is not used for the threshold test.
In the solution provided by the present application, the video frame received by the second electronic device from the first electronic device may be one that is not used for the threshold test. Only in this case does the second electronic device judge, according to the time of receiving the video frame, whether to adjust the frame loss threshold, which avoids the inaccuracy of the threshold test that would result from adjusting the frame loss threshold while a video frame is being used for the threshold test.
With reference to the first aspect, in a possible implementation of the first aspect, judging whether the adjusted frame loss threshold is valid includes: determining the average decoding delay and the average display delay over the current full time period, where the current full time period is the period from the reception of the first video frame to the determination of the average decoding delay and the average display delay; and if the decoding delay and the display delay of the N video frames are at least c% lower than the average decoding delay and the average display delay, respectively, determining that the adjusted frame loss threshold is valid.
In the solution provided by the present application, the second electronic device can judge, based on the average decoding delay and the average display delay over the current full time period together with the average decoding delay and the average display delay of the N video frames received after the frame loss threshold was adjusted, whether the adjusted frame loss threshold can effectively solve the problem of video frame blocking, that is, whether it can reduce the decoding delay and the display delay.
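The validity check can be sketched as follows, assuming the delays of the N sampled frames have already been collected; the reduction percentage c and the sample values used below are hypothetical, not values fixed by this application.

```python
def adjustment_valid(decode_delays, display_delays, avg_decode, avg_display, c=10):
    """True if the mean decoding delay and mean display delay of the N
    sampled frames are each at least c% below the full-period averages."""
    n = len(decode_delays)
    mean_decode = sum(decode_delays) / n
    mean_display = sum(display_delays) / n
    return (mean_decode <= avg_decode * (1 - c / 100)
            and mean_display <= avg_display * (1 - c / 100))
```

For example, with c = 10, sampled frames averaging 8.5 ms decode and 4.5 ms display against full-period averages of 10 ms and 6 ms pass the check, since 8.5 ≤ 9.0 and 4.5 ≤ 5.4.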
With reference to the first aspect, in a possible implementation of the first aspect, the second electronic device may discard the video frame when the frame receiving time difference is smaller than the frame loss threshold.
In the solution provided by the present application, when the frame receiving time difference is smaller than the frame loss threshold, the second electronic device determines that a large number of video frames have been received within a short period of time, and may therefore directly discard the video frame just received. This reduces the time spent waiting for decoding, thereby solving the problem of video frame blocking.
With reference to the first aspect, in a possible implementation of the first aspect, after receiving the video frame sent by the first electronic device, the second electronic device may further record the time of receiving the video frame and initialize the frame loss threshold. It can be understood that the order between recording the time of receiving the video frame and initializing the frame loss threshold is not limited.
In the solution provided by the present application, the second electronic device can record the time of receiving the video frame so as to subsequently calculate the frame receiving time difference.
With reference to the first aspect, in a possible implementation of the first aspect, after recording the time of receiving the video frame, the second electronic device may further store the time of receiving the video frame in a first queue, where the first queue stores the time at which the second electronic device received the Mth video frame.
In the solution provided by the present application, a queue can be set up to store the times at which the second electronic device receives video frames, which facilitates processing of those times.
With reference to the first aspect, in a possible implementation of the first aspect, before recording the decoding delay and the display delay of the N video frames received after the video frame, the second electronic device may further remove the earliest-written element from the first queue and write the time of receiving the video frame into the first queue.
In the solution provided by the present application, the second electronic device can adjust the first queue in time, which facilitates the next calculation of the frame receiving time difference.
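The first-queue bookkeeping can be sketched with a fixed-length FIFO; the queue length M below is a hypothetical example value, not one specified by this application.

```python
from collections import deque

class FrameTimeQueue:
    """Keeps the receive times of the most recent M video frames so the
    frame receiving time difference (new receive time minus the receive
    time of the Mth previous frame) is available in O(1)."""

    def __init__(self, m=5):
        self.times = deque(maxlen=m)

    def push(self, now):
        """Record a receive time; return the frame receiving time
        difference, or None until M earlier times have been stored."""
        diff = None
        if len(self.times) == self.times.maxlen:
            # The earliest-written element is the Mth previous frame's time;
            # appending below removes it automatically (deque with maxlen).
            diff = now - self.times[0]
        self.times.append(now)
        return diff
```

Using a deque with `maxlen` means the earliest-written element is removed and the new receive time written in a single append, matching the queue-update step described above.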
In a second aspect, the present application provides an electronic device, which may include a display screen, a memory, and one or more processors, where the memory is configured to store a computer program, and the processor is configured to invoke the computer program so that the electronic device performs: receiving a video frame sent by a first electronic device; determining a frame receiving time difference, where the frame receiving time difference is the difference between the time of receiving the video frame and the time of receiving an Mth video frame, and the Mth video frame is a video frame sent by the first electronic device before sending the video frame; in a case where the frame receiving time difference is not less than a frame loss threshold, if a first duration has not reached a preset time, reducing the frame loss threshold, where the first duration is the time from the last adjustment of the frame loss threshold to the current moment, and the frame loss threshold is used to determine whether to discard the video frame; or, in a case where the frame receiving time difference is not less than the frame loss threshold, if the first duration reaches the preset time and the frame loss threshold is less than an upper threshold limit, increasing the frame loss threshold; if the first duration reaches the preset time and the frame loss threshold is not less than the upper threshold limit, reducing the frame loss threshold, where the upper threshold limit is the maximum value of the frame loss threshold; recording the decoding delay and the display delay of N video frames received after the video frame, where the decoding delay is the time from when a video frame arrives at the decoder to when its decoding is completed, and the display delay is the time from when a video frame finishes decoding to when it is displayed on the display screen; judging whether N is equal to a preset number of frames; if N is equal to the preset number of frames, judging whether the adjusted frame loss threshold is valid; and if the adjusted frame loss threshold is invalid, reducing the adjusted frame loss threshold and stopping a threshold test, where the threshold test is used to judge whether the adjustment to the frame loss threshold is valid.
With reference to the second aspect, in a possible implementation of the second aspect, the video frame is a video frame that is not used for the threshold test.
With reference to the second aspect, in a possible implementation of the second aspect, when invoking the computer program so that the electronic device judges whether the adjusted frame loss threshold is valid, the processor is specifically configured to invoke the computer program so that the electronic device performs: determining the average decoding delay and the average display delay over the current full time period, where the current full time period is the period from the reception of the first video frame to the determination of the average decoding delay and the average display delay; and if the decoding delay and the display delay of the N video frames are at least c% lower than the average decoding delay and the average display delay, respectively, determining that the adjusted frame loss threshold is valid.
With reference to the second aspect, in a possible implementation of the second aspect, the processor may be further configured to discard the video frame when the frame receiving time difference is smaller than the frame loss threshold.
With reference to the second aspect, in a possible implementation of the second aspect, after invoking the computer program so that the electronic device receives the video frame sent by the first electronic device, the processor may be further configured to invoke the computer program so that the electronic device performs: recording the time of receiving the video frame; and initializing the frame loss threshold.
With reference to the second aspect, in a possible implementation of the second aspect, after invoking the computer program so that the electronic device records the time of receiving the video frame, the processor may be further configured to invoke the computer program so that the electronic device performs: storing the time of receiving the video frame in a first queue, where the first queue stores the time at which the second electronic device received the Mth video frame.
With reference to the second aspect, in a possible implementation of the second aspect, before invoking the computer program so that the electronic device records the decoding delay and the display delay of the N video frames received after the video frame, the processor may be further configured to invoke the computer program so that the electronic device performs: removing the earliest-written element from the first queue, and writing the time of receiving the video frame into the first queue.
In a third aspect, this application provides a computer storage medium including instructions which, when run on an electronic device, cause the electronic device to perform any possible implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a chip applied to an electronic device. The chip includes one or more processors configured to invoke computer instructions to cause the electronic device to perform any possible implementation of the first aspect.
In a fifth aspect, an embodiment of this application provides a computer program product including instructions which, when run on a device, cause the electronic device to perform any possible implementation of the first aspect.
It can be understood that the electronic device provided in the second aspect, the computer storage medium provided in the third aspect, the chip provided in the fourth aspect, and the computer program product provided in the fifth aspect are all used to perform the method provided in the embodiments of this application. Therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding method; details are not repeated here.
Description of Drawings
FIG. 1 is a schematic diagram of screen mirroring according to an embodiment of this application;
FIG. 2 is a schematic flowchart of screen mirroring according to an embodiment of this application;
FIG. 3A is a schematic diagram of decoding under good network conditions according to an embodiment of this application;
FIG. 3B is a schematic diagram of decoding under poor network conditions according to an embodiment of this application;
FIG. 3C is a schematic diagram of decoding after frame dropping according to an embodiment of this application;
FIG. 4 is a flowchart of a method for dynamically adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 5 is a flowchart of another method for dynamically adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 6 is a flowchart of adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 7A is a schematic diagram of adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 7B is another schematic diagram of adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 7C is another schematic diagram of adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 8 is a schematic flowchart of audio-video synchronization according to an embodiment of this application;
FIG. 9 is a flowchart of another method for dynamically adjusting a frame-dropping threshold according to an embodiment of this application;
FIG. 10 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a software structure of an electronic device 100 according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
First, some terms and related technologies used in this application are explained to facilitate understanding by a person skilled in the art.
An application (APP) generally refers to software installed on a smartphone that supplements and personalizes the original system; it is the main means by which a mobile phone extends its functions and provides users with a richer experience. Mobile phone software requires a corresponding mobile operating system. As of June 1, 2017, the main mobile operating systems were Apple's iOS, Google's Android, the Symbian platform, and the Microsoft platform.
The Window class represents the top-level window of a view. It manages the topmost View in the view hierarchy and provides standard UI policies such as a background, a title bar, and default key handling. A Window owns exactly one Surface on which its content is drawn: when an app creates a Window through WindowManager, WindowManager creates a Surface for that Window and passes the Surface to the app so that the app can draw content on it. In short, Window and Surface can be considered to have a one-to-one relationship.
The window manager service (WindowManagerService, WMS) is a system-level service, started by SystemService, that implements the IWindowManager.AIDL interface.
WindowManager controls window objects, which are containers for view objects. Window objects are always backed by Surface objects. WindowManager oversees lifecycles, input and focus events, screen orientation, transitions, animations, position, transformations, Z-order, and many other aspects of a window. WindowManager sends all window metadata to SurfaceFlinger so that SurfaceFlinger can use the data to composite Surfaces onto the screen.
A Surface typically represents the producer side of a buffer queue consumed by SurfaceFlinger. When content is rendered onto a Surface, the result goes into the associated buffer, which is then passed to the consumer. In short, a Surface can be regarded as a view layer: images are drawn on the Surface, its BufferQueue produces the view data, and the data is handed to the consumer, SurfaceFlinger, to be composited with other view layers and finally displayed on the screen.
SurfaceFlinger is an Android service responsible for managing the Surfaces on the application side and compositing all Surfaces. SurfaceFlinger is a layer between the graphics library and applications; its role is to accept graphics display data from multiple sources, composite the data, and send it to the display device. For example, an open application commonly has three display layers: the status bar at the top, the navigation bar at the bottom or side, and the application interface. Each layer is updated and rendered separately, and SurfaceFlinger composites these layers into one frame that is flushed to the hardware display. A BufferQueue is used in the display process, with SurfaceFlinger acting as the consumer; for example, a Surface managed by WindowManager acts as the producer that generates pages, which are then composited by SurfaceFlinger.
The Hardware Composer (HWC) is used to determine the most efficient way to composite buffers with the available hardware. As a HAL, its implementation is device-specific and is usually done by the original equipment manufacturer (OEM) of the display hardware.
The display subsystem (Display Sub System, DSS) is hardware dedicated to image compositing; the image on the main screen of a mobile phone is composited by the DSS.
VirtualDisplay refers to a virtual display, one of the multiple display types supported by Android (the displays supported by Android are the primary display, external displays, and virtual displays). VirtualDisplay has many usage scenarios, such as screen recording and WFD display; its function is to capture the content shown on the screen. There are many ways to implement this capture; for instance, the API provides ImageReader for reading the content of a VirtualDisplay.
MediaCodec provides access to Android's low-level multimedia codecs and can be used for both encoding and decoding; it is an important part of Android's low-level multimedia framework. MediaCodec processes input data to generate output data: an input buffer is filled with data and handed to the codec, the codec processes the input asynchronously, and the filled output buffer is then provided to the consumer; after consuming it, the consumer returns the buffer to the codec.
The Transmission Control Protocol (TCP) is a connection-oriented, reliable, byte-stream-based transport-layer communication protocol defined in IETF RFC 793. TCP is designed to fit into a layered protocol hierarchy that supports multiple network applications, and it provides reliable communication services between pairs of processes on host computers attached to distinct but interconnected computer communication networks. TCP assumes that it can obtain a simple, possibly unreliable, datagram service from lower-level protocols. In principle, TCP should be able to operate over a wide variety of communication systems, from hardwired connections to packet-switched or circuit-switched networks.
The Internet Protocol (IP) is the network-layer protocol of the TCP/IP suite. IP was designed to improve the scalability of networks: first, to solve internetworking problems and achieve interconnection among large-scale heterogeneous networks; and second, to decouple top-layer network applications from the underlying network technologies so that both can evolve independently. Following the end-to-end design principle, IP provides hosts with only a connectionless, unreliable, best-effort packet delivery service.
A packet is the unit of data in TCP/IP communication and is also generally called a "data packet".
The Real-time Transport Protocol (RTP) is a network transport protocol. RTP specifies a standard packet format for delivering audio and video over the Internet. It was originally designed as a multicast protocol but has since been used in many unicast applications. RTP is commonly used in streaming media systems (together with RTCP), and in video conferencing and push-to-talk systems (together with H.323 or SIP), making it a technical foundation of the IP telephony industry. RTP is used together with the RTP Control Protocol (RTCP), and it is built on top of the User Datagram Protocol.
RTP itself provides no on-time delivery mechanism or other quality of service (QoS) guarantees; it relies on lower-layer services for this. RTP neither guarantees delivery nor prevents out-of-order delivery, and it makes no assumptions about the reliability of the underlying network. RTP supports ordered reassembly: the sequence number in RTP allows the receiver to reconstruct the sender's packet sequence, and it can also be used to determine the proper position of a packet; in video decoding, for example, packets need not be decoded in order.
The RTP standard defines two sub-protocols: RTP and RTCP.
The data transfer protocol RTP is used to transfer data in real time. The information it provides includes a timestamp (for synchronization), a sequence number (for packet-loss and reordering detection), and a payload format (indicating the encoding format of the data).
The control protocol RTCP is used for QoS feedback and media-stream synchronization; that is, RTCP monitors quality of service and conveys information about the participants in an ongoing session. Compared with RTP, RTCP occupies very little bandwidth, usually only 5%. This second function of RTCP is sufficient for "loosely controlled" sessions; that is, in the absence of explicit membership control and organization, it need not support all of an application's control communication requests.
When information is transferred between TCP nodes, the content of each transfer is a structure, so the data in the structure must be packed before each transmission; after one end receives the data, it must unpack the data in the received buf parameter.
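The packing and unpacking of a structure for transmission can be illustrated with a small sketch using Python's `struct` module. The field layout below (sequence number, payload length, presentation timestamp) is an assumption for illustration, not a wire format from this disclosure:

```python
import struct

# Hypothetical frame header: sequence number (uint32, big-endian),
# payload length in bytes (uint32), presentation timestamp in ms (uint64).
HEADER_FMT = ">IIQ"

def pack_header(seq, length, pts_ms):
    """Serialize the header fields into a byte string for transmission."""
    return struct.pack(HEADER_FMT, seq, length, pts_ms)

def unpack_header(buf):
    """Recover the header fields from the received buffer."""
    return struct.unpack(HEADER_FMT, buf[:struct.calcsize(HEADER_FMT)])

buf = pack_header(seq=7, length=1500, pts_ms=33000)
print(unpack_header(buf))   # → (7, 1500, 33000)
```

The sender packs the structure before writing to the socket, and the receiver applies the inverse unpacking to the received buffer, mirroring the pack/unpack steps described above.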
Frames per second (FPS) is a term from the imaging field that refers to the number of frames a chip can render, or actually renders, per second; colloquially, it is the number of pictures in an animation or a video. FPS measures the amount of information used to store and display motion video. The more frames per second, the smoother the displayed motion. Generally, 30 FPS is the minimum needed to avoid jerky motion, while some computer video formats provide only 15 frames per second.
PTS (Presentation Time Stamp) is the presentation timestamp, which tells the player when to display the data of a given frame.
Vertical synchronization (VSYNC), also known as field synchronization, derives from the display principle of CRT (Cathode Ray Tube) monitors: individual pixels form horizontal scan lines, and horizontal scan lines stacked vertically form a complete picture. The refresh rate of a display is controlled by the graphics card's DAC (digital-to-analog converter); after the DAC finishes scanning one frame, it generates a vertical synchronization signal. "Enabling vertical synchronization" usually means feeding this signal into the 3D graphics processing part of the graphics card, so that the card is constrained by the VSYNC signal when generating 3D graphics.
In daily life, screen mirroring brings great convenience to users' work and life. For example, in a conference scenario, a user can project the content of a personal electronic device such as a computer or mobile phone onto a large screen for other participants to watch, without having to operate the large screen directly, which greatly improves the user experience.
FIG. 1 is a schematic diagram of screen mirroring according to an embodiment of this application. The mirroring process projects the content displayed on a first electronic device onto a second electronic device for display. A "wireless projection" option can be found on the settings interface of the first electronic device; after "wireless projection" is turned on and the first electronic device establishes a communication connection with another device (for example, the second electronic device), screen mirroring can be performed. As shown in FIG. 1, the first electronic device is a mobile phone and the second electronic device is a smart screen. After a communication connection is established between the two, the video played on the mobile phone can be mirrored to the smart screen for display. It can be understood that if the mobile phone interface (the picture shown on the display of the mobile phone) later changes as a result of user operations, the interface on the smart screen changes accordingly.
It can be understood that the first electronic device may be one of electronic devices such as a mobile phone, a tablet computer, or a PC, and the second electronic device may be one of electronic devices such as a tablet computer, a PC, or a smart screen.
In addition, the communication connection between the first electronic device and the second electronic device may be established in various ways. Optionally, a wireless communication technology may be used, for example, connecting the two devices over a wireless fidelity (Wi-Fi) network; a wired communication technology may also be used, for example, connecting the two devices with media such as a coaxial cable, a twisted pair, or an optical fiber.
The specific process of screen mirroring is described below with reference to FIG. 2.
FIG. 2 is a schematic flowchart of screen mirroring according to an embodiment of this application. As shown in FIG. 1, the mirroring process projects the content displayed on the first electronic device onto the second electronic device.
First, on the first electronic device side, two kinds of operations need to be performed:
The first is displaying the picture.
Specifically, the APP on the first electronic device creates a Window through WMS, a Surface is created for each Window, and the corresponding Surface is passed to the application so that the application can draw graphics data onto the Surface; that is, layers are presented through WMS. WMS provides SurfaceFlinger with buffers and window metadata (the drawn Surfaces), and SurfaceFlinger uses this information to composite the Surfaces into a composited image, which is then displayed on the screen of the first electronic device through its display system (HWC/DSS); this is the image on the first electronic device that the user can see.
The second is transmitting the picture to be projected to the second electronic device.
Specifically, after SurfaceFlinger composites the image, the screen content needs to be captured through VirtualDisplay for virtual display. It can be understood that the captured screen content may be audio and video data, for example, H.264 data, H.265 data, VP9 data, AV1 data, or AAC data. The captured audio and video data is then encoded by an encoder (for example, MediaCodec), encrypted, and packetized in multiple layers, for example, into RTP packets and then VTP/TCP packets; finally, the resulting data packets are sent to the second electronic device.
It can be understood that the data packets may be sent to the second electronic device through wireless communication (for example, Wi-Fi) or through wired communication.
It should be noted that each data packet sent by the first electronic device to the second electronic device contains either audio data or video data; that is, audio data and video data are transmitted independently, where the audio data may include audio frames and the video data may include video frames. Therefore, the process in which the first electronic device sends data packets to the second electronic device is the process in which the first electronic device sends video frames and audio frames to the second electronic device.
Second, related operations also need to be performed on the second electronic device side for the projection to succeed.
Specifically, after receiving the data packets sent by the first electronic device, the second electronic device performs the corresponding depacketization (RTP depacketization and VTP/TCP depacketization), decryption, and decoding (MediaCodec decoding) operations, then synchronizes the resulting audio data and video data, sends them for display through MediaCodec, and finally SurfaceFlinger composites the layers and displays them on the screen of the second electronic device.
It should be noted that, correspondingly, the process in which the second electronic device receives the data packets sent by the first electronic device is the process in which the second electronic device receives the video frames and audio frames sent by the first electronic device.
It can be understood that packet transmission is an important part of screen mirroring; whether the first electronic device can send data packets to the second electronic device smoothly and in time directly affects the projection effect. Since audio transmission is relatively stable (it can be transmitted fairly stably even under poor network conditions), this application mainly considers the transmission of video data, that is, the transmission of video frames.
Under poor network conditions, for example, when the network fluctuates or is briefly weak, the second electronic device may be unable to receive the video frames sent by the first electronic device in time; after the network returns to a stable state, the second electronic device receives many video frames within a short time. Because the decoder decodes serially, and decoding takes longer than depacketization and decryption, later-received video frames can be decoded only after the earlier-received ones have finished decoding, so video frames may become blocked.
The differences in how the second electronic device processes video frames under different network conditions are described below (FIG. 3A to FIG. 3C). As shown in FIG. 3A, when the network condition is good, the second electronic device receives video frames A, B, C, and D at a stable rate (the intervals between received frames are fairly stable), decodes the received frames in order at a stable rate, and the decoding time equals the actual decoding time. Here, the decoding time is the time the second electronic device needs from actually starting to decode a video frame to completing the decoding; the actual decoding time is the time from the second electronic device receiving the video frame to the completion of decoding. For example, if the second electronic device takes 30 milliseconds (ms) from receiving a video frame to completing its decoding, while only 10 ms elapse from actually starting to decode the frame to completing the decoding, then the decoding time is 10 ms and the actual decoding time is 30 ms. After decoding, the second electronic device can synchronize the decoded video frames A, B, C, and D with the audio frames at a stable rate and send the synchronized video frames for display, that is, transmit the video frames to the display screen. Generally, the display status of a video frame can be checked through the display callback after the frame is sent for display. As shown in FIG. 3A, under good network conditions, video frames can be displayed on the screen of the second electronic device at a stable rate.
When the network condition is poor, video frames cannot be transmitted to the second electronic device in time; when the network becomes stable again, the second electronic device is likely to receive multiple video frames within a short time and cannot decode them in time. That is, before the current video frame has finished decoding, the next video frame has already been received, and a later-received video frame can start decoding only after the previous frame has finished.
For example, as shown in FIG. 3B, the second electronic device receives video frames A, B, and C within a short time. Since video frame A is received first, it is decoded first. However, while video frame A is being decoded, the second electronic device receives video frame B; video frame B should be decoded next, but because the decoding of video frame A has not finished, video frame B has to wait. After video frame A finishes decoding, video frame B is decoded. Similarly, video frame C must also wait until video frame B finishes decoding.
In this case, the decoding times of video frames A, B, and C are the same, but their actual decoding times differ. Since video frame A is the first frame received by the second electronic device, it can be decoded without waiting, so its decoding time and actual decoding time are the same. Video frame B, however, must wait for video frame A to finish decoding before it can start, so the actual decoding time of video frame B is longer than that of video frame A. Similarly, the actual decoding time of video frame C also includes the time spent waiting to be decoded. In other words, the actual decoding time is the time the electronic device needs from receiving a video frame to completing its decoding.
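Under the serial-decoding model above, the actual decoding time of each frame can be computed from its receive time and the decoder's busy period. A minimal sketch, assuming 10 ms of decoder work per frame; the frame timings are illustrative:

```python
def actual_decode_times(receive_ms, decode_ms):
    """Serial decoder: a frame starts decoding only after the previous
    frame has finished. Returns, per frame, the actual decoding time,
    i.e. the time from reception to decode completion."""
    results = []
    decoder_free_at = 0
    for recv, work in zip(receive_ms, decode_ms):
        start = max(recv, decoder_free_at)   # wait while the decoder is busy
        finish = start + work
        decoder_free_at = finish
        results.append(finish - recv)        # waiting time + decoding time
    return results

# Frames A, B, C arrive in a burst after the network recovers.
print(actual_decode_times([0, 2, 4], [10, 10, 10]))  # → [10, 18, 26]
```

The decoding time is 10 ms for every frame, yet the actual decoding times grow (10, 18, 26 ms) because each later frame inherits the backlog of the frames before it, which is exactly the blocking effect shown in FIG. 3B.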
It can be understood that poor network conditions increase the display latency of video frames (as shown in FIG. 3B), so that video frames cannot be displayed on the second electronic device in time.
It can be understood that waiting to be decoded increases the decoding latency, which delays the display of the video frame. Overall, the latency of screen mirroring increases; that is, the picture displayed on the first electronic device may be out of sync with the picture displayed on the second electronic device. For example, when the first electronic device is displaying the 5th second (s) of a video, the second electronic device is displaying its 3rd second. From the user's perspective, this greatly harms the user experience.
为了解决上述问题,可以设置一个丢帧阈值,通过判断第二电子设备接收视频帧的时间差和丢帧阈值的关系,有选择地丢帧,从而减少视频帧等待解码的时间。例如,如图3C所示,当接收视频帧A和视频帧B的时间差小于既定阈值时,可以选择丢掉视频帧B。在这种情况下,视频帧也可以及时显示在第二电子设备的显示屏上。In order to solve the above problem, a frame loss threshold can be set, and by judging the relationship between the time difference between the second electronic device receiving video frames and the frame loss threshold, frames are selectively dropped, thereby reducing the waiting time for video frames to be decoded. For example, as shown in FIG. 3C , when the time difference between receiving video frame A and video frame B is less than a predetermined threshold, video frame B may be selected to be discarded. In this case, the video frames can also be displayed on the display screen of the second electronic device in time.
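示例性的,上述丢帧判断可以用如下Python代码示意(其中的函数名、变量名及数值均为说明用的假设,并非本申请的限定):As a non-limiting illustration (function names, variable names, and values are assumptions for explanation only, not the claimed implementation), the drop decision described above can be sketched in Python as follows:

```python
# Illustrative, non-limiting sketch: drop a newly received frame when the
# gap between its arrival time and the previous frame's arrival time is
# smaller than the frame loss threshold.
def should_drop(prev_recv_ms: float, cur_recv_ms: float,
                drop_threshold_ms: float) -> bool:
    """Return True if the frame received at cur_recv_ms should be dropped."""
    return (cur_recv_ms - prev_recv_ms) < drop_threshold_ms

# Frame B arrives only 5 ms after frame A; with a 16.67 ms threshold it is
# dropped, while a frame arriving 20 ms later is kept.
print(should_drop(100.0, 105.0, 16.67))  # True -> drop frame B
print(should_drop(100.0, 120.0, 16.67))  # False -> keep the frame
```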
上述方法可以改善视频帧阻塞情况,还存在进一步优化的空间。一方面,由于没有建立反馈机制,所以无法确定设置的丢帧阈值是否可以有效解决视频帧阻塞的问题,即无法量化丢帧带来的解码时延和送显时延的收益;另一方面,预先设置的丢帧阈值不一定能有效解决不同网络状况下及设备不同状态下的视频帧阻塞问题。The above method can improve the situation of video frame blocking, and there is room for further optimization. On the one hand, since there is no feedback mechanism established, it is impossible to determine whether the set frame loss threshold can effectively solve the problem of video frame blocking, that is, it is impossible to quantify the benefits of decoding delay and display delay caused by frame loss; on the other hand, The preset frame loss threshold may not be able to effectively solve the video frame blocking problem under different network conditions and different device states.
基于上述内容,本申请提供了一种动态调节丢帧阈值的方法及相关设备,可以设定初始丢帧阈值,根据第二电子设备接收视频帧的时间差动态调节丢帧阈值,即根据接收视频帧的时间差判断当前网络状况并相应的调节丢帧阈值,然后进行丢帧操作,这样可以有效解决不同网络状况下的视频帧阻塞问题。另外,在丢帧完成后,判断调节丢帧阈值后的解码时延及送显时延是否减小,根据该判断结果确定是否需要再次调节丢帧阈值,使得丢帧阈值的效果可以及时得到反馈,保证当前的丢帧阈值可以有效解决视频帧阻塞的问题。Based on the above, the present application provides a method and related devices for dynamically adjusting the frame loss threshold. An initial frame loss threshold can be set, and the frame loss threshold is dynamically adjusted according to the time difference between video frames received by the second electronic device; that is, the current network condition is judged from the time difference between received video frames, the frame loss threshold is adjusted accordingly, and the frame-dropping operation is then performed, which can effectively solve the video frame congestion problem under different network conditions. In addition, after frames have been dropped, it is judged whether the decoding delay and the display delay have decreased following the threshold adjustment, and whether the threshold needs to be adjusted again is determined according to the judgment result, so that the effect of the frame loss threshold can be fed back in time, ensuring that the current frame loss threshold can effectively solve the video frame congestion problem.
可理解,在进行如图1所示的镜像投屏之前,用户需要触发镜像投屏。It can be understood that before performing the mirror projection as shown in FIG. 1 , the user needs to trigger the mirror projection.
示例性的,用户触发第一电子设备上的设置应用程序控件,响应于该用户操作,第一电子设备显示设置界面,该设置界面包括无线投屏控件。第一电子设备可以检测到作用于无线投屏控件上的用户操作,响应于该用户操作,第一电子设备可以显示无线投屏界面。该无线投屏界面上包括一个或多个控件,这些控件用于表示可以与第一电子设备进行镜像投屏的设备。第一电子设备可以检测到作用于第一控件的用户操作,响应于该用户操作,第一电子设备可以与第二电子设备进行镜像投屏。第一电子设备不仅在本设备的显示屏上显示画面,还将视频帧发送给第二电子设备,使得第二电子设备上也可以显示第一电子设备上显示的画面。Exemplarily, the user triggers a setting application program control on the first electronic device, and in response to the user operation, the first electronic device displays a setting interface, and the setting interface includes a wireless screen projection control. The first electronic device may detect a user operation acting on the wireless screen projection control, and in response to the user operation, the first electronic device may display a wireless screen projection interface. The wireless screen projection interface includes one or more controls, and these controls are used to represent devices that can perform mirror projection with the first electronic device. The first electronic device may detect a user operation acting on the first control, and in response to the user operation, the first electronic device may perform screen mirroring with the second electronic device. The first electronic device not only displays images on the display screen of the device, but also sends video frames to the second electronic device, so that the images displayed on the first electronic device can also be displayed on the second electronic device.
需要说明的是,第二电子设备需要经过一系列处理,才能将第一电子设备发送的视频帧显示在显示屏上。第二电子设备的处理过程可参考下面的实施例。It should be noted that the second electronic device needs to undergo a series of processing before it can display the video frame sent by the first electronic device on the display screen. For the processing procedure of the second electronic device, reference may be made to the following embodiments.
图4示例性示出了本申请实施例提供的一种动态调节丢帧阈值方法的流程图。Fig. 4 exemplarily shows a flow chart of a method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
S401:接收视频帧。S401: Receive a video frame.
第二电子设备接收第一电子设备发送的视频帧,将接收的视频帧记为A。在本申请的一个实施例中,由第二电子设备中的TCP/VTP模块接收第一电子设备发送的视频帧。The second electronic device receives the video frame sent by the first electronic device, and marks the received video frame as A. In an embodiment of the present application, the TCP/VTP module in the second electronic device receives the video frame sent by the first electronic device.
可理解,第一电子设备和第二电子设备之间传输视频帧的方式包括但不限于通过无线局域网(Wireless Local Area Networks,WLAN)等无线通信方式发送,例如,通过无线保真(Wireless Fidelity,Wi-Fi)网络发送,以及通过有线通信方式发送,例如,利用同轴电缆、双绞线、光纤等介质发送。It can be understood that the method of transmitting video frames between the first electronic device and the second electronic device includes but is not limited to sending through wireless communication methods such as wireless local area networks (Wireless Local Area Networks, WLAN), for example, through wireless fidelity (Wireless Fidelity, Wi-Fi) network transmission, and transmission through wired communication, for example, transmission using media such as coaxial cables, twisted pairs, and optical fibers.
S402:判断是否需要执行丢帧操作。S402: Determine whether a frame drop operation needs to be performed.
第二电子设备确定接收的视频帧(A)的时间与之前接收的视频帧的接收时间的差值。当该差值小于丢帧阈值时,执行丢帧操作,即丢弃视频帧A。当该差值不小于丢帧阈值时,判断是否调节丢帧阈值。The second electronic device determines the difference between the time of the received video frame (A) and the reception time of the previously received video frame. When the difference is smaller than the frame loss threshold, a frame loss operation is performed, that is, video frame A is discarded. When the difference is not less than the frame loss threshold, it is judged whether to adjust the frame loss threshold.
可理解,判断是否进行丢帧操作的过程,以及调节丢帧阈值的过程会在后续实施例中具体说明,在此先不展开说明。It can be understood that the process of judging whether to perform the frame loss operation and the process of adjusting the frame loss threshold will be specifically described in subsequent embodiments, and will not be described here.
S403:解码及送显。S403: decoding and sending to display.
若不丢弃接收的视频帧,将该视频帧传输到解码器进行解码,待解码完成后再送显,使得该视频帧显示到第二电子设备的显示屏上。If the received video frame is not discarded, the video frame is transmitted to the decoder for decoding, and then sent for display after the decoding is completed, so that the video frame is displayed on the display screen of the second electronic device.
图5示例性示出了本申请实施例提供的又一种动态调节丢帧阈值方法的流程图。Fig. 5 exemplarily shows a flowchart of another method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
S501:接收视频帧。S501: Receive a video frame.
具体地,第二电子设备接收第一电子设备发送的视频帧。在本申请的一个实施例中, 由第二电子设备中的TCP/VTP模块接收第一电子设备发送的视频帧。Specifically, the second electronic device receives the video frame sent by the first electronic device. In an embodiment of the present application, the TCP/VTP module in the second electronic device receives the video frame sent by the first electronic device.
可理解,第一电子设备和第二电子设备之间传输视频帧的方式包括但不限于通过无线局域网(Wireless Local Area Networks,WLAN)等无线通信方式发送,例如,通过无线保真(Wireless Fidelity,Wi-Fi)网络发送,以及通过有线通信方式发送,例如,利用同轴电缆、双绞线、光纤等介质发送。It can be understood that the method of transmitting video frames between the first electronic device and the second electronic device includes but is not limited to sending through wireless communication methods such as wireless local area networks (Wireless Local Area Networks, WLAN), for example, through wireless fidelity (Wireless Fidelity, Wi-Fi) network transmission, and transmission through wired communication, for example, transmission using media such as coaxial cables, twisted pairs, and optical fibers.
S502:记录接收视频帧的时间。S502: Record the time of receiving the video frame.
具体地,第二电子设备可以记录视频帧到达的时间,即第二电子设备接收视频帧的时间。在本申请的一个实施例中,第二电子设备还可以给接收的视频帧设置编号,并将该编号和该视频帧到达的时间都记录下来。例如,第二电子设备接收第N帧视频帧的时间为T,第二电子设备根据编号N就可以查找到T为该编号所对应的视频帧的到达时间。Specifically, the second electronic device may record the time when the video frame arrives, that is, the time when the second electronic device receives the video frame. In an embodiment of the present application, the second electronic device may also set a number for the received video frame, and record both the number and the arrival time of the video frame. For example, the time when the second electronic device receives the Nth video frame is T, and the second electronic device can find out that T is the arrival time of the video frame corresponding to the number N according to the number N.
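示例性的,上述"编号-到达时间"的记录方式可以用如下Python代码示意(其中的函数名与变量名均为说明用的假设):As a non-limiting illustration (function and variable names are assumptions for explanation only), the number-to-arrival-time record described above can be sketched in Python as follows:

```python
# Illustrative sketch: record each received frame's number together with its
# arrival time, so that the arrival time T of frame N can be looked up later.
arrival_time_by_number = {}

def on_frame_received(number: int, arrival_time: int) -> None:
    """Store the arrival time of the frame with the given number."""
    arrival_time_by_number[number] = arrival_time

on_frame_received(number=3, arrival_time=50)
print(arrival_time_by_number[3])  # 50
```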
可理解,可以用数字或其他形式表示视频帧到达的时间,这里所说的视频帧到达的时间不一定为第二电子设备接收视频帧时的真实时间。It can be understood that the arrival time of the video frame may be expressed in numbers or other forms, and the arrival time of the video frame mentioned here is not necessarily the real time when the second electronic device receives the video frame.
示例性的,第二电子设备记录接收某一帧视频帧的时间为20210511235643。Exemplarily, the second electronic device records the time at which a certain video frame is received as 20210511235643.
示例性的,可以将第二电子设备接收第一个视频帧的时间设置为第1ms,后续接收视频帧的时间以接收第一帧的时间为基准来计算。Exemplarily, the time at which the second electronic device receives the first video frame may be set as the 1st ms, and the times of subsequently received video frames are calculated relative to the time at which the first frame was received.
在本申请的一个实施例中,第二电子设备还可以设置一个队列来存储视频帧到达的时间和/或视频帧的编号。即该队列元素表示的是第二电子设备接收视频帧的时间。可理解,可以将该队列记为第一队列。In an embodiment of the present application, the second electronic device may also set a queue to store the arrival time of the video frame and/or the number of the video frame. That is, the queue element represents the time when the second electronic device receives the video frame. Understandably, this queue may be recorded as the first queue.
可理解,所设置的队列可以是一种先进先出的线性表,它只允许在表的一端进行插入,而在另一端删除元素。队列元素是指队列中的数据元素或指数据元素使用队列数据结构进行有关操作。队列数据元素的数据类型可以采用已有数据类型或自定义的数据类型。例如,若将第二电子设备接收第一个视频帧的时间设置为第1ms,相应的队列元素可以为1。It can be understood that the set queue can be a first-in-first-out linear table, which only allows insertion at one end of the table and deletion of elements at the other end. The queue element refers to the data element in the queue or refers to the data element using the queue data structure to perform related operations. The data type of the queue data element can be an existing data type or a user-defined data type. For example, if the time when the second electronic device receives the first video frame is set as 1 ms, the corresponding queue element may be 1.
另外,可以在设置队列时确定队列的长度,即确定队列能容纳的最多的元素个数,而当队列中的元素个数达到队列能容纳的上限(队列已满)时,需要将队列中最先写入的元素移出,才能将新的元素写入队列。In addition, the length of the queue, that is, the maximum number of elements the queue can hold, can be determined when the queue is set up. When the number of elements in the queue reaches this upper limit (the queue is full), the element written into the queue first must be removed before a new element can be written into the queue.
在本申请的一个实施例中,队列的长度为3,即队列中可容纳的元素的个数最多为3个,也就是说,这个队列最多可以包含三个视频帧的接收时间。In an embodiment of the present application, the length of the queue is 3, that is, the number of elements that can be accommodated in the queue is at most 3, that is, the queue can contain at most the receiving time of three video frames.
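示例性的,上述长度为3的先进先出队列可以用如下Python代码示意(使用标准库deque仅为说明用的假设):As a non-limiting illustration (using the standard-library deque is an assumption for explanation only), the length-3 FIFO queue described above can be sketched in Python as follows:

```python
from collections import deque

# Illustrative sketch: a fixed-length FIFO queue holding the arrival times
# (in ms) of the most recent frames; when the queue is full, the element
# written first is evicted automatically before the new one is appended.
arrival_times = deque(maxlen=3)

for t in (1, 5, 10, 16):   # frames arriving at the 1st, 5th, 10th, 16th ms
    arrival_times.append(t)

print(list(arrival_times))  # [5, 10, 16] -- the oldest entry (1) was evicted
```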
S503:初始化丢帧阈值。S503: Initialize the frame loss threshold.
具体地,第二电子设备初始化丢帧阈值,丢帧阈值用于判断视频帧是否被丢弃。Specifically, the second electronic device initializes a frame loss threshold, and the frame loss threshold is used to determine whether a video frame is discarded.
在本申请的一个实施例中,初始丢帧阈值可以设置为mThreshold。示例性的,mThreshold=1*VsyncDuration,VsyncDuration指的是在预设帧率下传输相邻两个视频帧所间隔的时间。例如,当预设帧率为60FPS时,传输相邻两个视频帧间隔的时间为1000/60=16.67ms。In an embodiment of the present application, the initial frame loss threshold may be set as mThreshold. Exemplarily, mThreshold=1*VsyncDuration, VsyncDuration refers to the time interval between transmitting two adjacent video frames at a preset frame rate. For example, when the preset frame rate is 60FPS, the interval between transmitting two adjacent video frames is 1000/60=16.67ms.
可理解,预设帧率指的是第一电子设备向第二电子设备发送视频帧的速率,即第一电子设备每秒钟向第二电子设备发送视频帧的数量,在第一电子设备和第二电子设备建立通 信连接时确定。It can be understood that the preset frame rate refers to the rate at which the first electronic device sends video frames to the second electronic device, that is, the number of video frames that the first electronic device sends to the second electronic device per second. It is determined when the second electronic device establishes a communication connection.
另外,可以设置丢帧阈值的调节范围,例如,设置丢帧阈值的调节范围为:1*VsyncDuration≤mThreshold≤2*VsyncDuration。In addition, an adjustment range of the frame loss threshold may be set, for example, the adjustment range of the frame loss threshold is set as: 1*VsyncDuration≤mThreshold≤2*VsyncDuration.
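示例性的,丢帧阈值的初始化及其调节范围可以用如下Python代码示意(其中的帧率取值与命名均为说明用的假设):As a non-limiting illustration (the frame-rate value and the names are assumptions for explanation only), the threshold initialization and its adjustment range can be sketched in Python as follows:

```python
# Illustrative sketch: derive VsyncDuration from an assumed preset frame
# rate, initialize mThreshold, and clamp adjustments to the configured
# range [1*VsyncDuration, 2*VsyncDuration].
PRESET_FPS = 60                        # assumed preset frame rate
VSYNC_DURATION = 1000.0 / PRESET_FPS   # interval between frames, ~16.67 ms

m_threshold = 1.0 * VSYNC_DURATION     # initial frame loss threshold

def clamp_threshold(value: float) -> float:
    """Keep the threshold inside its allowed adjustment range."""
    return max(1.0 * VSYNC_DURATION, min(2.0 * VSYNC_DURATION, value))

print(round(VSYNC_DURATION, 2))                                      # 16.67
print(clamp_threshold(m_threshold + 100.0) == 2.0 * VSYNC_DURATION)  # True
```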
S504:判断队列是否已满。S504: Determine whether the queue is full.
具体地,若第二电子设备设置了一个队列来记录第二电子设备接收视频帧的时间(如步骤S502),第二电子设备还需要判断队列是否已满,即判断队列内容纳的元素个数是否达到上限。若队列已满,继续执行步骤S505,若队列未满,将记录下的视频帧的接收时间写入队列(步骤S507)。Specifically, if the second electronic device has set up a queue to record the times at which it receives video frames (as in step S502), the second electronic device also needs to determine whether the queue is full, that is, whether the number of elements in the queue has reached the upper limit. If the queue is full, execution continues with step S505; if the queue is not full, the recorded receiving time of the video frame is written into the queue (step S507).
换而言之,队列里面的数据是用于判断和调整丢帧阈值。在一些实施例中,当队列已满才开始调整丢帧阈值。在另一些实施例中,也可以在队列未满的情况下就根据已有时间调整丢帧阈值。In other words, the data in the queue is used to judge and adjust the frame loss threshold. In some embodiments, the frame loss threshold is adjusted only when the queue is full. In some other embodiments, the frame loss threshold may also be adjusted according to the elapsed time when the queue is not full.
S505:判断平均帧率是否比最小帧率大。S505: Determine whether the average frame rate is greater than the minimum frame rate.
具体地,将平均帧率和最小帧率进行比较。若平均帧率小于最小帧率,说明原始帧率较小或丢帧过多,为避免对投屏画面的连贯性造成影响,不丢弃接收的视频帧,直接将该接收的视频帧传送至解码器等待解码,并将队列中最先写入的元素移除,再将该视频帧的接收时间写入队列(步骤S507);若平均帧率不小于最小帧率,计算接收视频帧的时间与第二电子设备接收第M帧视频帧的时间的差值,将该差值记为a。Specifically, the average frame rate is compared with the minimum frame rate. If the average frame rate is less than the minimum frame rate, the original frame rate is low or too many frames have been dropped. To avoid affecting the continuity of the projected picture, the received video frame is not discarded but is directly transmitted to the decoder to wait for decoding, the element written into the queue first is removed, and the receiving time of this video frame is then written into the queue (step S507). If the average frame rate is not less than the minimum frame rate, the difference between the time of receiving the video frame and the time at which the second electronic device received the Mth video frame is calculated, and this difference is denoted as a.
在本申请的一些实施例中,第M帧视频帧的时间可以为第一队列中最先写入的元素。In some embodiments of the present application, the time of the Mth video frame may be the first written element in the first queue.
一般来说,帧率是指每秒钟刷新的图片的帧数,在这里可以将其理解为:将第一电子设备的视频帧镜像投屏到第二电子设备上时,第二电子设备的屏幕每秒钟刷新的视频帧的数量,因此,帧率指的是在第二电子设备屏幕上每秒钟显示的视频帧的数量。Generally speaking, the frame rate refers to the number of picture frames refreshed per second. Here it can be understood as: when the video frames of the first electronic device are mirrored onto the second electronic device, the number of video frames refreshed on the screen of the second electronic device per second. Therefore, the frame rate refers to the number of video frames displayed on the screen of the second electronic device per second.
上文提到的平均帧率指的是一段时间内帧率的平均数,这段时间不一定为1s。例如,平均帧率可以为第二电子设备在10s内接收第一电子设备发送的视频帧的平均速率。The average frame rate mentioned above refers to the average frame rate within a period of time, which is not necessarily 1s. For example, the average frame rate may be an average rate at which the second electronic device receives video frames sent by the first electronic device within 10s.
在本申请的一个实施例中,平均帧率可以为从第二电子设备接收第一电子设备发送的第一帧视频帧开始,到第二电子设备接收第一电子设备发送的最新一帧视频帧为止的平均帧率。在本申请的又一个实施例中,平均帧率还可以为第二电子设备接收第一电子设备最近发送的N帧视频帧的平均帧率。In an embodiment of the present application, the average frame rate may be the average frame rate from the moment the second electronic device receives the first video frame sent by the first electronic device up to the moment it receives the latest video frame sent by the first electronic device. In yet another embodiment of the present application, the average frame rate may also be the average frame rate at which the second electronic device receives the N video frames most recently sent by the first electronic device.
需要说明的是,若第二电子设备丢弃过第一电子设备发送的视频帧,在平均帧率的计算过程中可以不计算丢弃的视频帧。It should be noted that if the second electronic device discards the video frames sent by the first electronic device, the discarded video frames may not be calculated during the calculation of the average frame rate.
另外,最小帧率是为了保证投屏画面的连贯性而设置的,因为当帧率过低时,投屏的画面不流畅,非常影响用户体验,所以根据实际需要可以预先设置最小帧率,在丢帧前判断平均帧率和最小帧率的关系,从而避免连续丢帧可能造成的帧率过低的问题。In addition, the minimum frame rate is set to ensure the continuity of the screen projection, because when the frame rate is too low, the screen projection is not smooth, which greatly affects the user experience, so the minimum frame rate can be preset according to actual needs. Judge the relationship between the average frame rate and the minimum frame rate before frame loss, so as to avoid the problem of low frame rate that may be caused by continuous frame loss.
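示例性的,根据队列中记录的到达时间计算平均帧率并与最小帧率比较的过程可以用如下Python代码示意(其中的最小帧率取值为说明用的假设):As a non-limiting illustration (the minimum frame-rate value is an assumption for explanation only), computing the average frame rate from the recorded arrival times and comparing it with the minimum frame rate can be sketched in Python as follows:

```python
# Illustrative sketch: average frame rate computed from the arrival times
# stored in the queue (dropped frames are never recorded, so they are
# excluded automatically), compared against an assumed minimum frame rate.
def average_fps(arrival_times_ms):
    """Average frame rate over the recorded window, in frames per second."""
    if len(arrival_times_ms) < 2:
        return 0.0
    span_ms = arrival_times_ms[-1] - arrival_times_ms[0]
    return (len(arrival_times_ms) - 1) * 1000.0 / span_ms

MIN_FPS = 24  # assumed minimum frame rate chosen to keep playback smooth

fps = average_fps([0, 40, 80])  # 2 frame intervals over 80 ms -> 25.0 fps
print(fps)                      # 25.0
print(fps >= MIN_FPS)           # True -> dropping frames may be considered
```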
需要说明的是,为了保证投屏到第二电子设备上的画面的连贯性,连续丢帧的数量不能过多,执行步骤S505后,还可以根据连续丢帧的数量判断能否进行丢帧操作,若连续丢帧的数量超过预设丢帧数时,不能进行丢帧操作,也不调整丢帧阈值,此时,直接将接收的视频帧传送至解码器等待解码,并将连续丢帧的数量清零,移除队列中最先写入的元素,然后将该视频帧的接收时间写入队列(步骤S507)。It should be noted that, in order to ensure the continuity of the picture projected onto the second electronic device, the number of consecutively dropped frames must not be too large. After step S505 is executed, whether the frame-dropping operation can be performed may also be judged according to the number of consecutively dropped frames. If the number of consecutively dropped frames exceeds the preset dropped-frame count, the frame-dropping operation cannot be performed and the frame loss threshold is not adjusted; in this case, the received video frame is directly transmitted to the decoder to wait for decoding, the count of consecutively dropped frames is cleared, the element written into the queue first is removed, and the receiving time of this video frame is then written into the queue (step S507).
示例性的,预设丢帧数可以为2,也就是说,当连续丢帧的数量超过2时,第二电子设备不能丢掉接收的视频帧,而是直接将该视频帧传送至解码器等待解码,并且将连续丢帧的数量清零。Exemplarily, the preset number of dropped frames may be 2, that is, when the number of consecutive dropped frames exceeds 2, the second electronic device cannot drop the received video frame, but directly transmits the video frame to the decoder for waiting Decode, and clear the number of consecutive lost frames.
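示例性的,上述"连续丢帧不超过预设丢帧数"的限制可以用如下Python代码示意(其中取预设丢帧数为2仅为说明用的假设):As a non-limiting illustration (taking the preset dropped-frame count as 2 is an assumption for explanation only), the cap on consecutively dropped frames described above can be sketched in Python as follows:

```python
# Illustrative sketch: cap the number of consecutively dropped frames at an
# assumed preset dropped-frame count of 2; once the cap is hit, the next
# frame is decoded and the counter is cleared.
MAX_CONSECUTIVE_DROPS = 2

consecutive_drops = 0

def handle_frame(gap_below_threshold: bool) -> str:
    """Return 'drop' or 'decode' for one frame, enforcing the drop cap."""
    global consecutive_drops
    if gap_below_threshold and consecutive_drops < MAX_CONSECUTIVE_DROPS:
        consecutive_drops += 1
        return "drop"
    consecutive_drops = 0  # cleared whenever a frame is sent to the decoder
    return "decode"

# Three closely spaced frames: only the first two may be dropped.
print([handle_frame(True) for _ in range(3)])  # ['drop', 'drop', 'decode']
```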
可选的,还可以设置一个参数来表示丢帧操作的可行性,例如,设置参数Adjust来表示丢帧操作的可行性,即根据Adjust的值来判断是否可以执行丢帧操作及调节丢帧阈值(步骤S506),具体地,执行步骤S505后,查看Adjust的值,并根据Adjust的值判断是否可以进行丢帧操作,当Adjust=0时,表示此时不能调节丢帧阈值,以及不能进行丢帧操作,直接将视频帧传送至解码器等待解码,并将队列中最先写入的元素移除,再将该视频帧的接收时间写入队列(步骤S507);当Adjust=1时,可以执行后续步骤(如步骤S506),是否进行丢帧操作还需要在后续步骤(步骤S506)中进行具体判断。再如,设置当Adjust=false时,不能调节丢帧阈值,以及不能进行丢帧操作;当Adjust=true时,可以执行后续步骤(如步骤S506)。可理解,还可以有其他的判断方式,本申请对此不作限制。Optionally, a parameter may also be set to indicate whether the frame-dropping operation is allowed. For example, a parameter Adjust is set for this purpose, and whether the frame-dropping operation can be performed and whether the frame loss threshold can be adjusted are judged according to the value of Adjust (step S506). Specifically, after step S505 is executed, the value of Adjust is checked. When Adjust=0, the frame loss threshold cannot be adjusted and the frame-dropping operation cannot be performed; the video frame is directly transmitted to the decoder to wait for decoding, the element written into the queue first is removed, and the receiving time of this video frame is then written into the queue (step S507). When Adjust=1, the subsequent steps (such as step S506) can be executed, and whether to perform the frame-dropping operation still needs to be specifically judged in the subsequent step (step S506). As another example, it may be set that when Adjust=false, the frame loss threshold cannot be adjusted and the frame-dropping operation cannot be performed, and when Adjust=true, the subsequent steps (such as step S506) can be executed. It can be understood that other judgment methods are also possible, which are not limited in the present application.
S506:调节丢帧阈值。S506: Adjust the frame loss threshold.
具体地,判断a和丢帧阈值mThreshold的大小关系,并根据其大小关系来调节丢帧阈值,有如下2种情况:Specifically, judge the size relationship between a and the frame loss threshold mThreshold, and adjust the frame loss threshold according to the size relationship, there are two situations as follows:
1、a<mThreshold。1. a<mThreshold.
这种情况下,第二电子设备短时间内接收的视频帧的数量过多,直接丢掉接收的视频帧,然后统计连续丢帧的数量,并计算平均帧率。可理解,此处计算平均帧率时可以不将第二电子设备接收但丢弃的视频帧计算在内。若连续丢帧的数量超过预设丢帧数,不对接下来接收的一个视频帧执行丢帧操作,也不调节丢帧阈值,而是直接将该视频帧传送至解码器等待解码。In this case, the second electronic device receives too many video frames in a short period of time, directly discards the received video frames, then counts the number of consecutive dropped frames, and calculates the average frame rate. It can be understood that when calculating the average frame rate here, the video frames received but discarded by the second electronic device may not be included in the calculation. If the number of consecutive lost frames exceeds the preset number of lost frames, the frame loss operation is not performed on the next received video frame, and the frame loss threshold is not adjusted, but the video frame is directly sent to the decoder for decoding.
另外,当设置一个参数来表示丢帧操作的可行性时,可以在确定a&lt;mThreshold并统计连续丢帧的数量之后更新该参数的值,例如,当设置参数Adjust来表示丢帧操作的可行性时,确定a&lt;mThreshold并统计连续丢帧的数量之后,若连续丢帧的数量超过预设丢帧数,将Adjust的值更新为零,即Adjust=0,表示第二电子设备不能丢掉接下来接收的一个视频帧。In addition, when a parameter is set to indicate whether the frame-dropping operation is allowed, its value can be updated after a&lt;mThreshold is determined and the number of consecutively dropped frames is counted. For example, when the parameter Adjust is set for this purpose, after a&lt;mThreshold is determined and the number of consecutively dropped frames is counted, if the number of consecutively dropped frames exceeds the preset dropped-frame count, the value of Adjust is updated to zero, that is, Adjust=0, indicating that the second electronic device cannot drop the next received video frame.
2、a≥mThreshold。2. a≥mThreshold.
结合图6对a≥mThreshold情况下的丢帧阈值的调节进行具体说明:In conjunction with FIG. 6, the adjustment of the frame loss threshold in the case of a≥mThreshold is specifically explained:
S601:判断第二电子设备是否正在进行阈值测试。S601: Determine whether the second electronic device is performing a threshold test.
首先对阈值测试进行简单说明。阈值测试指的是:在进行阈值调节之后,第二电子设备根据接下来接收的N个视频帧的解码时延和送显时延判断该调节是否有效,当该调节有效时,保持调节后的丢帧阈值,否则,再次调节阈值。First, the threshold test is briefly explained. The threshold test refers to the following: after a threshold adjustment, the second electronic device judges, according to the decoding delay and display delay of the next N received video frames, whether the adjustment is effective; when the adjustment is effective, the adjusted frame loss threshold is maintained; otherwise, the threshold is adjusted again.
正在进行阈值测试说明当前还无法得知上次阈值调节是否有效,所以此时不会再次调节阈值以避免干扰对上次阈值调节的效果的判断。The ongoing threshold test indicates that it is currently impossible to know whether the last threshold adjustment is effective, so the threshold will not be adjusted again at this time to avoid interfering with the judgment of the effect of the last threshold adjustment.
因此,若第二电子设备正在进行阈值测试,直接将接收的视频帧传输至解码器等待解码,并移除队列中最先写入的元素,再将该视频帧的接收时间写入队列(如步骤S507);若第二电子设备并非正在进行阈值测试,继续执行步骤S602。Therefore, if the second electronic device is performing a threshold test, it directly transmits the received video frame to the decoder to wait for decoding, and removes the first written element in the queue, and then writes the receiving time of the video frame into the queue (such as Step S507); if the second electronic device is not performing a threshold test, continue to execute step S602.
S602:判断是否在预设时间内未调整丢帧阈值。S602: Determine whether the frame loss threshold is not adjusted within a preset time.
若在预设时间内丢帧阈值未调整,可能是由于丢帧阈值太大,所以在预设时间内a总是满足a<mThreshold(情况1),可以考虑将阈值减小。若在预设时间内调节过丢帧阈值,继续执行步骤S603。If the frame loss threshold is not adjusted within the preset time, it may be because the frame loss threshold is too large, so a always satisfies a<mThreshold (case 1) within the preset time, and the threshold can be considered to be reduced. If the frame loss threshold is adjusted within the preset time, continue to execute step S603.
需要说明的是,预设时间可以根据实际需要进行设置,在本申请的一个实施例中,预设时间可以设置为1.5s。It should be noted that the preset time can be set according to actual needs, and in an embodiment of the present application, the preset time can be set to 1.5s.
S603:判断丢帧阈值是否小于阈值上限。S603: Determine whether the frame loss threshold is smaller than the upper threshold.
如步骤S503所示,可以对丢帧阈值设置调节范围,即对丢帧阈值设置调节的上限和/或下限,在本申请的一个实施例中,设定丢帧阈值的阈值下限为1*VsyncDuration,阈值上限为2*VsyncDuration。As shown in step S503, an adjustment range can be set for the frame loss threshold, that is, an upper limit and/or a lower limit for adjusting the frame loss threshold can be set. In one embodiment of the present application, the lower limit of the frame loss threshold is set to 1*VsyncDuration , the upper threshold is 2*VsyncDuration.
若丢帧阈值不小于阈值上限,说明丢帧阈值已经达到可调节的上限(因为对丢帧阈值的调节不能超过调节范围),可以考虑减小丢帧阈值。若丢帧阈值小于阈值上限,可以考虑继续增大丢帧范围以缓解视频帧阻塞,即增大丢帧阈值(如步骤S604所示)。If the frame loss threshold is not less than the upper limit of the threshold, it means that the frame loss threshold has reached the adjustable upper limit (because the adjustment of the frame loss threshold cannot exceed the adjustment range), and the frame loss threshold can be considered to be reduced. If the frame loss threshold is less than the upper threshold, it may be considered to continue to increase the frame loss range to alleviate video frame congestion, that is, increase the frame loss threshold (as shown in step S604).
需要说明的是,完成丢帧阈值的调节(以某个步长增大或减小)之后,可以理解为阈值测试就开始进行了。It should be noted that after the frame loss threshold is adjusted (increased or decreased by a certain step size), it can be understood that the threshold test has started.
S604:增加丢帧阈值。S604: Increase the frame loss threshold.
调节丢帧阈值的具体方式可以为:以某个步长增加丢帧阈值,将该步长记为b,则调整后的丢帧阈值为:mThreshold=mThreshold+b。A specific way to adjust the frame loss threshold may be: increase the frame loss threshold with a certain step size, and denote the step size as b, then the adjusted frame loss threshold value is: mThreshold=mThreshold+b.
可理解,可以根据实际情况设置b的值,示例性的,将b设置为0.1*VsyncDuration,此时,丢帧阈值为:mThreshold=mThreshold+0.1*VsyncDuration。需要注意的是,若已经设置丢帧阈值的调节范围(如步骤S503),那么只能在这个范围内调节丢帧阈值。It can be understood that the value of b can be set according to the actual situation. Exemplarily, b is set to 0.1*VsyncDuration; in this case, the frame loss threshold is: mThreshold=mThreshold+0.1*VsyncDuration. It should be noted that if an adjustment range has been set for the frame loss threshold (as in step S503), the frame loss threshold can only be adjusted within this range.
S605:减小丢帧阈值。S605: Decrease the frame loss threshold.
调节丢帧阈值的具体方式可以为:以某个步长减少丢帧阈值,将该步长记为b,则调整后的丢帧阈值为:mThreshold=mThreshold-b。A specific way of adjusting the frame loss threshold may be: reduce the frame loss threshold with a certain step size, and denote the step size as b, then the adjusted frame loss threshold value is: mThreshold=mThreshold-b.
可理解,可以根据实际情况设置b的值,示例性的,将b设置为0.1*VsyncDuration,此时,丢帧阈值为:mThreshold=mThreshold-0.1*VsyncDuration。需要注意的是,若已经设置丢帧阈值的调节范围(如步骤S503),那么只能在这个范围内调节丢帧阈值。It can be understood that the value of b can be set according to the actual situation. For example, b is set to 0.1*VsyncDuration. At this time, the frame loss threshold is: mThreshold=mThreshold-0.1*VsyncDuration. It should be noted that if the adjustment range of the frame loss threshold has been set (such as step S503), then the frame loss threshold can only be adjusted within this range.
示例性的,结合图7A-图7C,对丢帧阈值的调整进行说明。在步骤S503中,对丢帧阈值进行初始化处理,初始化处理后的丢帧阈值为mThreshold=1*VsyncDuration,如图7A所示,当a&lt;mThreshold时,进行丢帧处理。当调节丢帧阈值的方式为以某个步长增加丢帧阈值(如图7B所示)时,若将步长设置为0.1*VsyncDuration,则更新后的丢帧阈值为mThreshold=1.1*VsyncDuration,如图7C所示,当a&lt;mThreshold时,进行丢帧处理,即当a&lt;1.1*VsyncDuration时,进行丢帧处理。可理解,如图7A-图7C所示,丢帧阈值可调节的上限为2*VsyncDuration,即mThreshold≤2*VsyncDuration。Exemplarily, the adjustment of the frame loss threshold is described with reference to FIG. 7A-FIG. 7C. In step S503, the frame loss threshold is initialized; the frame loss threshold after initialization is mThreshold=1*VsyncDuration. As shown in FIG. 7A, when a&lt;mThreshold, frame dropping is performed. When the frame loss threshold is adjusted by increasing it with a certain step size (as shown in FIG. 7B), if the step size is set to 0.1*VsyncDuration, the updated frame loss threshold is mThreshold=1.1*VsyncDuration. As shown in FIG. 7C, when a&lt;mThreshold, frame dropping is performed, that is, when a&lt;1.1*VsyncDuration, frame dropping is performed. It can be understood that, as shown in FIGS. 7A-7C, the adjustable upper limit of the frame loss threshold is 2*VsyncDuration, that is, mThreshold≤2*VsyncDuration.
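示例性的,上述以步长b调节丢帧阈值并限制在调节范围内的过程可以用如下Python代码示意(其中的帧率取值与命名均为说明用的假设):As a non-limiting illustration (the frame-rate value and the names are assumptions for explanation only), adjusting the threshold by a step b within the configured range can be sketched in Python as follows:

```python
# Illustrative sketch: adjust the frame loss threshold by a fixed step
# b = 0.1*VsyncDuration and clamp the result to [1, 2] * VsyncDuration,
# matching the worked example of 1.0 -> 1.1 * VsyncDuration.
VSYNC_DURATION = 1000.0 / 60  # assumed 60 FPS preset frame rate
STEP_B = 0.1 * VSYNC_DURATION

def adjust_threshold(threshold: float, increase: bool) -> float:
    """Increase (step S604) or decrease (step S605) the threshold, clamped."""
    value = threshold + STEP_B if increase else threshold - STEP_B
    return max(1.0 * VSYNC_DURATION, min(2.0 * VSYNC_DURATION, value))

t = 1.0 * VSYNC_DURATION             # initial threshold, as in FIG. 7A
t = adjust_threshold(t, increase=True)
print(round(t / VSYNC_DURATION, 1))  # 1.1, as in FIG. 7C
```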
S507:调整队列。S507: Adjust the queue.
具体地,若第二电子设备设置了一个队列来记录第二电子设备接收视频帧的时间(如步骤S502),在执行步骤S506之后,第二电子设备可以移除队列中最先写入的元素,将接收视频帧的时间(步骤S502中记录的视频帧的接收时间)写入该队列。Specifically, if the second electronic device has set up a queue to record the time when the second electronic device receives the video frame (such as step S502), after performing step S506, the second electronic device can remove the first written element in the queue , write the time of receiving the video frame (the receiving time of the video frame recorded in step S502) into the queue.
示例性的,队列为{1,5,10},即该队列存储的三个视频帧的接收时间分别为第1ms、第5ms和第10ms,其中,1是最先写入队列的元素,10是最后写入队列的元素,移除队列中最先写入的元素——1,将接收视频帧的时间写入该队列,若接收视频帧的时间为第16ms,则将16写入该队列,调整后的队列为{5,10,16}。Exemplarily, the queue is {1, 5, 10}, that is, the receiving times of the three video frames stored in the queue are the 1st ms, the 5th ms, and the 10th ms, where 1 is the element written into the queue first and 10 is the element written into the queue last. The element written first, 1, is removed from the queue, and the time of receiving the new video frame is written into the queue; if that time is the 16th ms, 16 is written into the queue, and the adjusted queue is {5, 10, 16}.
示例性的,队列为{3,4,5},即该队列存储的是第二电子设备接收的第3、4、5帧视频帧的编号。通过这些编号,第二电子设备可以查找到第二电子设备接收第3、4、5帧视频帧的接收时间。Exemplarily, the queue is {3, 4, 5}, that is, the queue stores the numbers of the 3rd, 4th, and 5th video frames received by the second electronic device. Through these numbers, the second electronic device can find out the receiving time when the second electronic device receives the 3rd, 4th, and 5th video frames.
示例性的,队列为{20210511235643,20210511235666,20210511235733},队列中存储的这三串数字表示的是第二电子设备接收的三帧视频帧的时间。Exemplarily, the queue is {20210511235643, 20210511235666, 20210511235733}, and the three strings of numbers stored in the queue indicate the time of the three video frames received by the second electronic device.
S508:解码。S508: decoding.
具体地,第二电子设备中的解码器对接收的视频帧进行解码,并记录解码时延。可理解,解码时延指的是从视频帧被传送至解码器开始,到该视频帧解码完成的这一段时间。Specifically, the decoder in the second electronic device decodes the received video frame, and records the decoding delay. It can be understood that the decoding delay refers to a period of time from when a video frame is transmitted to a decoder to when decoding of the video frame is completed.
S509:音视频同步处理及送显。S509: audio and video synchronization processing and display.
具体地,根据前面步骤中的丢帧情况,相应地调整音频帧,并判断是否将视频帧送显。Specifically, according to the frame loss situation in the previous step, the audio frame is adjusted accordingly, and it is judged whether to send the video frame for display.
如图8所示,第二电子设备可以判断是否存在丢帧操作(步骤S801),如果在某一个将要同步的视频帧(接收的视频帧)之前接收的视频帧已经被丢弃,那么也需要丢弃与被丢弃的视频帧数量相同的音频帧(步骤S802),才能得到与该视频帧相对应的音频帧,否则会导致视频帧和音频帧不能一一对应,显示到第二电子设备的屏幕上时音画不同步。如果不存在丢帧操作,或者,已经丢弃相同数量的音频帧(步骤S802),需要计算所有已送显的视频帧的平均PTS间隔,将其记为c,然后预估下一个视频帧的送显时间(步骤S803),预估的视频帧PTS=前一个视频帧的实际PTS+c。可理解,前一个视频帧的实际PTS指的是前一个视频帧真正的送显时间。若预估的视频帧PTS与真实的送显时间不一样,当二者差值小于第一预设阈值时,将视频帧以预估的视频帧PTS送显,由于只有在VSYNC时间点将视频帧送显,视频帧才能显示,所以需要寻找距离送显时间点(预估的视频帧PTS)最近的VSYNC时间点(步骤S804)。找到相应的VSYNC时间点之后,判断该VSYNC时间点与音频帧送显时间点之间的差值是否小于第二预设阈值(步骤S805),若VSYNC时间点与音频帧送显时间点之间的差值不小于第二预设阈值,不将该视频帧送显(步骤S806),否则,将该视频帧送显(步骤S807),使得视频帧最终显示在第二电子设备的屏幕上。As shown in FIG. 8, the second electronic device may determine whether a frame-dropping operation has occurred (step S801). If a video frame received before the video frame to be synchronized (the received video frame) has been dropped, the same number of audio frames as the dropped video frames also needs to be dropped (step S802) to obtain the audio frame corresponding to that video frame; otherwise, the video frames and the audio frames cannot correspond one to one, and the audio and the picture will be out of sync when displayed on the screen of the second electronic device. If no frame-dropping operation has occurred, or the same number of audio frames has already been dropped (step S802), the average PTS interval of all video frames already sent for display needs to be calculated and recorded as c, and the display time of the next video frame is then estimated (step S803): estimated video frame PTS = actual PTS of the previous video frame + c. It can be understood that the actual PTS of the previous video frame refers to the time at which the previous video frame was actually sent for display. If the estimated video frame PTS differs from the real display time, and the difference between the two is less than a first preset threshold, the video frame is sent for display at the estimated video frame PTS. Because a video frame can be displayed only if it is sent for display at a VSYNC time point, it is necessary to find the VSYNC time point closest to the display time point (the estimated video frame PTS) (step S804). After the corresponding VSYNC time point is found, it is determined whether the difference between that VSYNC time point and the audio frame display time point is less than a second preset threshold (step S805). If the difference between the VSYNC time point and the audio frame display time point is not less than the second preset threshold, the video frame is not sent for display (step S806); otherwise, the video frame is sent for display (step S807), so that the video frame is finally displayed on the screen of the second electronic device.
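The synchronization-and-display decision of FIG. 8 can be sketched as below. This is a simplified illustration only; the VSYNC schedule, the two preset thresholds, and the sample timestamps are assumptions made for the example, not values from the application.

```python
def estimate_pts(prev_actual_pts, avg_pts_interval):
    # Step S803: estimated PTS = actual PTS of the previous frame + c.
    return prev_actual_pts + avg_pts_interval

def nearest_vsync(t, vsync_period, first_vsync=0.0):
    # Step S804: frames can only be shown at VSYNC points, so snap the
    # estimated display time to the closest VSYNC time point.
    n = round((t - first_vsync) / vsync_period)
    return first_vsync + n * vsync_period

def should_display(vsync_time, audio_time, second_threshold):
    # Steps S805-S807: send the frame for display only if the chosen VSYNC
    # point is close enough to the audio frame's display time point.
    return abs(vsync_time - audio_time) < second_threshold

pts = estimate_pts(prev_actual_pts=100.0, avg_pts_interval=33.3)  # 133.3 ms
v = nearest_vsync(pts, vsync_period=16.6)                         # 132.8 ms
ok = should_display(v, audio_time=134.0, second_threshold=5.0)    # display it
```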
可理解,第一预设阈值和第二预设阈值都可以根据实际需要进行设置,本申请对此不作限制。It can be understood that both the first preset threshold and the second preset threshold can be set according to actual needs, which is not limited in the present application.
需要说明的是,送显完成后,第二电子设备的上层应用会收到MediaCodec的送显完成的回调信息(如图2所示),更新解码时延和送显时延,并更新平均帧率。可理解,送显时延指的是视频帧解码完成到真正显示到第二电子设备屏幕上所需的时间。It should be noted that after a frame has been sent for display, the upper-layer application of the second electronic device receives a display-completion callback from MediaCodec (as shown in FIG. 2), updates the decoding delay and the display delay, and updates the average frame rate. It can be understood that the display delay refers to the time required from the completion of decoding of a video frame to its actual display on the screen of the second electronic device.
S510:测试丢帧阈值是否有效。S510: Test whether the frame loss threshold is valid.
若在上述步骤中对丢帧阈值进行了调节,第二电子设备需要进行阈值测试,来测试对丢帧阈值的调节是否为有效调节。具体地,第二电子设备监测后续接收的N个视频帧的解码时延和送显时延,若丢帧阈值调节之后第二电子设备接收的N个视频帧的解码时延和送显时延,分别比当前全时间段内的平均解码时延和平均送显时延减少至少c%,则判断上述步骤中对丢帧阈值的调节为有效调节,继续使用调节后的丢帧阈值;否则,判断丢帧阈值调整效果不佳,以某个步长减少丢帧阈值mThreshold,详细内容可参考步骤S506,在此不再赘述。If the frame loss threshold has been adjusted in the above steps, the second electronic device needs to perform a threshold test to determine whether the adjustment to the frame loss threshold is an effective adjustment. Specifically, the second electronic device monitors the decoding delays and display delays of the N video frames received subsequently. If, after the frame loss threshold is adjusted, the decoding delays and display delays of the N video frames received by the second electronic device are at least c% lower than the average decoding delay and the average display delay over the current full time period, respectively, the adjustment to the frame loss threshold in the above steps is judged to be an effective adjustment, and the adjusted frame loss threshold continues to be used; otherwise, the adjustment is judged to be ineffective, and the frame loss threshold mThreshold is reduced by a certain step size. For details, refer to step S506, which will not be repeated here.
可理解,当前全时间段指的是从第二电子设备接收第一个视频帧开始,到计算平均解码时延和平均送显时延为止的一段时间。It can be understood that the current full time period refers to a period of time from when the second electronic device receives the first video frame to when the average decoding delay and the average display delay are calculated.
需要说明的是,c可以根据实际需要进行调整,例如,c可以为10,在该条件下,若丢帧阈值调整后第二电子设备接收的N个视频帧的解码时延和送显时延,分别比当前全时间段的平均解码时延和平均送显时延减少至少10%,则判断对丢帧阈值的调节为有效调节。It should be noted that c can be adjusted according to actual needs. For example, c may be 10; under this condition, if, after the frame loss threshold is adjusted, the decoding delays and display delays of the N video frames received by the second electronic device are at least 10% lower than the average decoding delay and the average display delay of the current full time period, respectively, the adjustment to the frame loss threshold is judged to be an effective adjustment.
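The validity check of step S510 can be sketched as below, with c = 10. One plausible reading of the criterion is used here: the averages over the N post-adjustment frames are compared against the full-period averages. The sample sizes and delay values are assumptions made for illustration.

```python
def adjustment_effective(new_decode_delays, new_display_delays,
                         avg_decode_delay, avg_display_delay, c=10):
    # Step S510: the adjustment is effective if the decoding delay and the
    # display delay of the N frames received after the adjustment are each
    # at least c% lower than the full-period averages.
    avg_new_decode = sum(new_decode_delays) / len(new_decode_delays)
    avg_new_display = sum(new_display_delays) / len(new_display_delays)
    factor = 1 - c / 100
    return (avg_new_decode <= avg_decode_delay * factor
            and avg_new_display <= avg_display_delay * factor)

# Decoding delay fell from 20 ms to 17 ms (15%) and display delay from
# 30 ms to 25 ms (~17%): the adjustment counts as effective with c = 10.
effective = adjustment_effective([17, 17, 17], [25, 25, 25], 20.0, 30.0)
```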
图9示例性示出了本申请实施例提供的又一种动态调节丢帧阈值方法的流程图。FIG. 9 exemplarily shows a flowchart of another method for dynamically adjusting the frame loss threshold provided by the embodiment of the present application.
S901:接收视频帧。S901: Receive a video frame.
第二电子设备接收第一电子设备发送的视频帧。在本申请的一个实施例中,由第二电子设备中的TCP/VTP模块接收第一电子设备发送的视频帧。The second electronic device receives the video frame sent by the first electronic device. In an embodiment of the present application, the TCP/VTP module in the second electronic device receives the video frame sent by the first electronic device.
可理解,步骤S901的具体实现方式可参考步骤S501,在此不再赘述。It can be understood that, for the specific implementation manner of step S901, reference may be made to step S501, which will not be repeated here.
S902:记录接收视频帧的时间并初始化丢帧阈值。S902: Record the time of receiving the video frame and initialize the frame loss threshold.
第二电子设备接收第一电子设备发送的视频帧后,记录该视频帧的接收时间,并初始化丢帧阈值。可理解,步骤S902的具体实现方式可参考步骤S502和步骤S503,在此不再赘述。After receiving the video frame sent by the first electronic device, the second electronic device records the receiving time of the video frame, and initializes the frame loss threshold. It can be understood that, for the specific implementation manner of step S902, reference may be made to step S502 and step S503, which will not be repeated here.
S903:判断队列是否已满。S903: Determine whether the queue is full.
第二电子设备可以设置一个队列用于存储视频帧的接收时间,在将视频帧的接收时间写入队列之前,第二电子设备可以判断队列是否已满,具体判断过程可参考步骤S504,在此不再赘述。The second electronic device can set up a queue for storing the receiving times of video frames. Before writing the receiving time of a video frame into the queue, the second electronic device can determine whether the queue is full. For the specific determination process, refer to step S504, which will not be repeated here.
S904:判断平均帧率是否比最小帧率大。S904: Determine whether the average frame rate is greater than the minimum frame rate.
为了视频帧可以以合适的帧率显示在第二电子设备上,保证第二电子设备上显示的画面的连贯性,第二电子设备可以判断平均帧率是否比最小帧率大。可理解,步骤S904的具体实现方式可参考步骤S505,在此不再赘述。In order to display the video frames on the second electronic device at an appropriate frame rate and ensure the continuity of the images displayed on the second electronic device, the second electronic device may determine whether the average frame rate is greater than the minimum frame rate. It can be understood that, for a specific implementation manner of step S904, reference may be made to step S505, which will not be repeated here.
S905:计算当前视频帧的到达时间与队列首元素的差值,将该差值记为a。S905: Calculate the difference between the arrival time of the current video frame and the first element in the queue, and denote the difference as a.
第二电子设备计算接收的当前视频帧的时间与队列中最先写入的元素的差值,将该差值记为a。The second electronic device calculates the difference between the received time of the current video frame and the first written element in the queue, and marks the difference as a.
S906:判断是否可以调整丢帧阈值。S906: Determine whether the frame loss threshold can be adjusted.
具体地,第二电子设备可以设置一个参数来表示调节丢帧阈值的可行性。例如,第二电子设备可以根据Adjust的值来判断是否可以调节丢帧阈值。当Adjust=0时,表示此时不能进行丢帧操作,直接将视频帧传送至解码器等待解码,并将队列中最先写入的元素移除,再将该视频帧的接收时间写入队列(如步骤S915);当Adjust=1时,可以执行后续步骤(如步骤S506),是否进行丢帧操作还需要在后续步骤(如步骤S907)中进行具体判断。Specifically, the second electronic device may set a parameter to indicate whether the frame loss threshold can be adjusted. For example, the second electronic device can determine, according to the value of Adjust, whether the frame loss threshold can be adjusted. When Adjust=0, it indicates that a frame-dropping operation cannot be performed at this time; the video frame is sent directly to the decoder to wait for decoding, the element written into the queue first is removed, and the receiving time of the video frame is then written into the queue (as in step S915). When Adjust=1, the subsequent steps (such as step S506) can be performed, and whether to perform a frame-dropping operation still needs to be specifically determined in the subsequent steps (such as step S907).
可理解,步骤S906的具体实现方式可参考步骤S505,在此不再赘述。It can be understood that, for a specific implementation manner of step S906, reference may be made to step S505, which will not be repeated here.
S907:判断a是否小于丢帧阈值。S907: Determine whether a is smaller than the frame loss threshold.
第二电子设备可以判断a是否小于丢帧阈值。若a小于丢帧阈值,直接丢帧并统计连续丢帧的数量(如步骤S908);若a不小于丢帧阈值,判断是否正在进行阈值测试(如步骤S909)。The second electronic device may determine whether a is smaller than the frame loss threshold. If a is smaller than the frame loss threshold, the frame is dropped directly and the number of consecutively dropped frames is counted (as in step S908); if a is not smaller than the frame loss threshold, it is determined whether a threshold test is in progress (as in step S909).
S908:直接丢帧并统计连续丢帧数。S908: Drop frames directly and count the number of consecutive dropped frames.
可理解,步骤S908的具体实现方式可参考步骤S506,在此不再赘述。It can be understood that, for the specific implementation manner of step S908, reference may be made to step S506, which will not be repeated here.
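Steps S905-S908 taken together (compute a, compare it with the threshold, drop and count) can be sketched as follows. Resetting the consecutive-drop counter to zero when a frame is kept is an assumption made for the example; the application only states that consecutive drops are counted.

```python
from collections import deque

def handle_frame(arrival_time, queue, m_threshold, consecutive_drops):
    # Step S905: a = arrival time of the current frame minus the queue head.
    a = arrival_time - queue[0]
    if a < m_threshold:
        # Steps S907-S908: drop the frame and count consecutive drops.
        return True, consecutive_drops + 1
    # Frame is kept; assume the run of consecutive drops ends here.
    return False, 0

q = deque([1, 5, 10])  # receiving times of earlier frames, in ms
dropped, run = handle_frame(12, q, m_threshold=16.6, consecutive_drops=0)
# a = 12 - 1 = 11 < 16.6, so the frame is dropped and run == 1
```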
S909:判断是否正在进行阈值测试。S909: Determine whether a threshold value test is being performed.
第二电子设备可以判断是否正在进行阈值测试。若正在进行阈值测试,直接执行步骤S914;若不是正在进行阈值测试,继续执行步骤S910。The second electronic device can determine whether a threshold test is in progress. If the threshold test is being performed, step S914 is directly performed; if the threshold test is not being performed, step S910 is continued.
可理解,步骤S909的具体实现方式可参考步骤S601,在此不再赘述。It can be understood that, for the specific implementation manner of step S909, reference may be made to step S601, which will not be repeated here.
S910:判断是否在预设时间内未调整丢帧阈值。S910: Determine whether the frame loss threshold is not adjusted within a preset time.
第二电子设备可以判断是否在预设时间内未调整丢帧阈值。若第二电子设备在预设时间内调整了丢帧阈值,直接执行步骤S911;若第二电子设备在预设时间内没有调整丢帧阈值,继续执行步骤S913。The second electronic device may determine whether the frame loss threshold is not adjusted within a preset time. If the second electronic device adjusts the frame loss threshold within the preset time, directly perform step S911; if the second electronic device does not adjust the frame loss threshold within the preset time, continue to perform step S913.
可理解,步骤S910的具体实现方式可参考步骤S602,在此不再赘述。It can be understood that, for a specific implementation manner of step S910, reference may be made to step S602, which will not be repeated here.
S911:判断丢帧阈值是否小于阈值上限。S911: Determine whether the frame loss threshold is smaller than the upper threshold.
第二电子设备可以判断丢帧阈值是否小于阈值上限。若丢帧阈值小于阈值上限,继续执行步骤S912;若丢帧阈值不小于阈值上限,执行步骤S913。The second electronic device may determine whether the frame loss threshold is smaller than the upper threshold. If the frame loss threshold is less than the upper threshold, continue to execute step S912; if the frame loss threshold is not less than the upper threshold, execute step S913.
可理解,步骤S911的具体实现方式可参考步骤S603,在此不再赘述。It can be understood that, for the specific implementation manner of step S911, reference may be made to step S603, which will not be repeated here.
S912:将丢帧阈值增加一个步长并开始进行阈值测试。S912: Increase the frame loss threshold by one step and start threshold testing.
可理解,步骤S912的具体实现方式可参考步骤S604,在此不再赘述。It can be understood that, for a specific implementation manner of step S912, reference may be made to step S604, which will not be repeated here.
S913:将丢帧阈值减小一个步长并开始进行阈值测试。S913: Decrease the frame loss threshold by one step and start threshold testing.
可理解,步骤S913的具体实现方式可参考步骤S605,在此不再赘述。It can be understood that, for the specific implementation manner of step S913, reference may be made to step S605, which will not be repeated here.
S914:移除队列首元素。S914: Remove the first element of the queue.
可理解,步骤S914的具体实现方式可参考步骤S507,在此不再赘述。It can be understood that, for the specific implementation manner of step S914, reference may be made to step S507, which will not be repeated here.
S915:将接收的视频帧的时间写入队列。S915: Write the time of the received video frame into a queue.
可理解,步骤S915的具体实现方式可参考步骤S507,在此不再赘述。It can be understood that, for the specific implementation manner of step S915, reference may be made to step S507, which will not be repeated here.
S916:解码并统计解码时延。S916: Decode and count the decoding delay.
可理解,步骤S916的具体实现方式可参考步骤S508,在此不再赘述。It can be understood that, for the specific implementation manner of step S916, reference may be made to step S508, which will not be repeated here.
S917:音视频同步及送显。S917: Audio and video synchronization and display.
可理解,步骤S917的具体实现方式可参考步骤S509,在此不再赘述。It can be understood that, for a specific implementation manner of step S917, reference may be made to step S509, which will not be repeated here.
S918:送显回调并更新解码时延、送显时延以及平均帧率。S918: Sending to display callback and updating decoding delay, display delay and average frame rate.
第二电子设备可以通过送显回调来查看视频帧在第二电子设备上的显示情况,具体可参考步骤S509,在此不再赘述。The second electronic device can check, through the display callback, how video frames are displayed on the second electronic device. For details, refer to step S509, which will not be repeated here.
S919:判断是否正在进行阈值测试。S919: Determine whether a threshold test is being performed.
可理解,步骤S919-步骤S923的具体内容可参考步骤S510,在此不再赘述。It can be understood that, for the specific content of step S919-step S923, reference may be made to step S510, which will not be repeated here.
S920:统计视频帧的解码时延和送显时延。S920: Count the decoding delay and display sending delay of video frames.
可理解,在调节丢帧阈值后,第二电子设备可以检测后续接收的视频帧的解码时延和送显时延。It can be understood that, after adjusting the frame loss threshold, the second electronic device may detect the decoding delay and the display delay of subsequent received video frames.
S921:判断是否达到60帧视频帧。S921: Determine whether 60 video frames are reached.
可理解,第二电子设备可以在接收60帧视频帧后判断对丢帧阈值的调节是否为有效调节。因此,第二电子设备需要判断调节丢帧阈值之后,其接收的视频帧是否达到60帧。It can be understood that, after receiving 60 video frames, the second electronic device may determine whether the adjustment to the frame loss threshold is an effective adjustment. Therefore, the second electronic device needs to determine whether the received video frames reach 60 frames after the frame loss threshold is adjusted.
S922:判断对丢帧阈值的调节是否为有效调节。S922: Determine whether the adjustment to the frame loss threshold is an effective adjustment.
可理解,具体判断方法可参考步骤S510,在此不再赘述。It can be understood that, for a specific determination method, reference may be made to step S510, which will not be repeated here.
S923:将丢帧阈值减小一个步长。S923: Decrease the frame loss threshold by one step.
可理解,步骤S923的具体实现方式可参考步骤S605,在此不再赘述。It can be understood that, for the specific implementation manner of step S923, reference may be made to step S605, which will not be repeated here.
下面介绍本申请实施例涉及的装置。The devices involved in the embodiments of the present application are introduced below.
图10为本申请实施例提供的一种电子设备100的硬件结构示意图。FIG. 10 is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application.
可理解,电子设备100可以执行图4、图5和图9所示的动态调节丢帧阈值的方法。可理解,上述第一电子设备和第二电子设备可以为电子设备100。It can be understood that the electronic device 100 may implement the methods for dynamically adjusting the frame loss threshold shown in FIG. 4 , FIG. 5 and FIG. 9 . It can be understood that the above-mentioned first electronic device and the second electronic device may be the electronic device 100 .
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(Universal Serial Bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(Subscriber Identification Module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber Identification Module (Subscriber Identification Module, SIM) card interface 195 and so on. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 . In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(Application Processor,AP),调制解调处理器,图形处理器(Graphics Processing unit,GPU),图像信号处理器(Image Signal Processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(Digital Signal Processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (Application Processor, AP), a modem processor, a graphics processing unit (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a memory, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a neural-network processing unit (Neural-network Processing Unit, NPU), and the like. Different processing units may be independent devices, or may be integrated into one or more processors.
在本申请的一个实施例中,第一电子设备可以为电子设备100,第一电子设备显示画面的具体过程为:由处理器110完成对多个屏幕图层的合成(WMS图层显示,以及SurfaceFlinger图层合成),然后送给显示屏1~N194进行显示(HWC/DSS显示,以及主屏幕送显)。此外,第一电子设备的处理器110还完成了对视频帧的编码和打包(VirtualDisplay虚拟显示,以及VTP/TCP封包),最终这些打包后的视频帧将会通过无线通信模块160发送给第二电子设备。In an embodiment of the present application, the first electronic device may be the electronic device 100. The specific process by which the first electronic device displays a picture is as follows: the processor 110 completes the composition of multiple screen layers (WMS layer display, and SurfaceFlinger layer composition), which are then sent to the display screens 1-N (194) for display (HWC/DSS display, and main-screen display). In addition, the processor 110 of the first electronic device also completes the encoding and packaging of video frames (VirtualDisplay virtual display, and VTP/TCP packaging); finally, these packaged video frames are sent to the second electronic device through the wireless communication module 160.
在本申请的又一个实施例中,第二电子设备,即接收视频帧的设备,可以为电子设备100。第二电子设备的无线通信模块160接收第一电子设备发来的视频帧数据。这些视频帧数据会由处理器110进行一系列反向拆包(VTP/TCP拆包,以及RTP拆包)和解码(MediaCodec解码)工作,即可得到真正可以用来显示的视频帧数据。这些视频帧数据也同样会经过送显(MediaCodec送显)以及图层合成(SurfaceFlinger图层合成),最终送到显示屏194进行显示(主屏幕送显)。In yet another embodiment of the present application, the second electronic device, that is, the device that receives the video frame, may be the electronic device 100 . The wireless communication module 160 of the second electronic device receives the video frame data sent by the first electronic device. The video frame data will be processed by the processor 110 through a series of reverse unpacking (VTP/TCP unpacking, and RTP unpacking) and decoding (MediaCodec decoding) to obtain video frame data that can actually be displayed. These video frame data will also be sent to display (MediaCodec for display) and layer synthesis (SurfaceFlinger layer synthesis), and finally sent to display screen 194 for display (main screen for display).
可理解,音频数据的处理流程与视频数据(视频帧)类似,在此不再赘述。It can be understood that the processing flow of audio data is similar to that of video data (video frame), and will not be repeated here.
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。Wherein, the controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(Inter-Integrated Circuit,I2C)接口,集成电路内置音频(Inter-Integrated Circuit Sound,I2S)接口,脉冲编码调制(Pulse Code Modulation,PCM)接口,通用异步收发传输器(Universal Asynchronous Receiver/Transmitter,UART)接口,移动产业处理器接口(Mobile Industry Processor Interface,MIPI),通用输入输出(General-Purpose Input/Output,GPIO)接口,用户标识模块(Subscriber Identity Module,SIM)接口,和/或通用串行总线(Universal Serial Bus,USB)接口等。In some embodiments, processor 110 may include one or more interfaces. The interface can include an integrated circuit (Inter-Integrated Circuit, I2C) interface, an integrated circuit built-in audio (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, mobile industry processor interface (Mobile Industry Processor Interface, MIPI), general-purpose input and output (General-Purpose Input/Output, GPIO) interface, subscriber identity module (Subscriber Identity Module, SIM) interface, and /or Universal Serial Bus (Universal Serial Bus, USB) interface, etc.
I2C接口是一种双向同步串行总线,包括一根串行数据线(Serial Data Line,SDA)和一根串行时钟线(Serial Clock Line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。The I2C interface is a bidirectional synchronous serial bus, including a serial data line (Serial Data Line, SDA) and a serial clock line (Serial Clock Line, SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 . In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160 . For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(Camera Serial Interface,CSI),显示屏串行接口(Display Serial Interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interface includes camera serial interface (Camera Serial Interface, CSI), display serial interface (Display Serial Interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备100,例如AR设备等。The USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices 100, such as AR devices.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备100供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 can receive charging input from the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 . The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110 . In some other embodiments, the power management module 141 and the charging management module 140 may also be set in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。 Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(Low Noise Amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 . The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA) and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。A modem processor may include a modulator and a demodulator. Wherein, the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is passed to the application processor after being processed by the baseband processor. The application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 . In some embodiments, the modem processor may be a stand-alone device. In some other embodiments, the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(Wireless Local Area Networks,WLAN)(如无线保真(Wireless Fidelity,Wi-Fi)网络),蓝牙(Bluetooth,BT),全球导航卫星系统(Global Navigation Satellite System,GNSS),调频(Frequency Modulation,FM),近距离无线通信技术(Near Field Communication,NFC),红外技术(Infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。The wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (Wireless Local Area Networks, WLAN) (such as wireless fidelity (Wireless Fidelity, Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (Infrared, IR) technology, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation via the antenna 2.
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(Global System for Mobile Communications,GSM),通用分组无线服务(General Packet Radio Service,GPRS),码分多址接入(Code Division Multiple Access,CDMA),宽带码分多址(Wideband Code Division Multiple Access,WCDMA),时分码分多址(Time-Division Code Division Multiple Access,TD-SCDMA),长期演进(Long Term Evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(Global Positioning System,GPS),全球导航卫星系统(Global Navigation Satellite System,GLONASS),北斗卫星导航系统(Beidou Navigation Satellite System,BDS),准天顶卫星系统(Quasi-Zenith Satellite System,QZSS)和/或星基增强系统(Satellite Based Augmentation Systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System (BDS), the Quasi-Zenith Satellite System (QZSS), and/or the Satellite Based Augmentation Systems (SBAS).
在本申请的一个实施例中,第一电子设备与第二电子设备之间的通信可以通过无线通信模块160实现。可理解,第一电子设备与第二电子设备之间可以采取点对点的通信方式,或者,通过服务器进行通信。In an embodiment of the present application, the communication between the first electronic device and the second electronic device can be realized through the wireless communication module 160 . It can be understood that a point-to-point communication manner may be adopted between the first electronic device and the second electronic device, or communication may be performed through a server.
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(Liquid Crystal Display,LCD),有机发光二极管(Organic Light-Emitting Diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(Active-Matrix Organic Light Emitting Diode,AMOLED),柔性发光二极管(Flex Light-Emitting Diode,FLED),Mini LED,Micro LED,Micro-OLED,量子点发光二极管(Quantum Dot Light Emitting Diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), an active-matrix organic light-emitting diode (Active-Matrix Organic Light Emitting Diode, AMOLED), a flexible light-emitting diode (Flex Light-Emitting Diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (Quantum Dot Light Emitting Diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现获取功能。The electronic device 100 may realize the acquisition function through an ISP, a camera 193 , a video codec, a GPU, a display screen 194 , and an application processor.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像或视频。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。The ISP is used for processing the data fed back by the camera 193 . For example, when taking a picture, open the shutter, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image or video visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 193 .
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(Charge Coupled Device,CCD)或互补金属氧化物半导体(Complementary Metal-Oxide-Semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像或视频信号。ISP将数字图像或视频信号输出到DSP加工处理。DSP将数字图像或视频信号转换成标准的RGB,YUV等格式的图像或视频信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。例如,在一些实施例中,电子设备100可以利用N个摄像头193获取多个曝光系数的图像,进而,在视频后处理中,电子设备100可以根据多个曝光系数的图像,通过HDR技术合成HDR图像。The camera 193 is used to capture still images or videos. An object generates an optical image through the lens, and the image is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a complementary metal-oxide-semiconductor (Complementary Metal-Oxide-Semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image or video signal. The ISP outputs the digital image or video signal to the DSP for processing. The DSP converts the digital image or video signal into an image or video signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1. For example, in some embodiments, the electronic device 100 can use the N cameras 193 to acquire images with multiple exposure coefficients; then, in video post-processing, the electronic device 100 can synthesize an HDR image from the images with the multiple exposure coefficients by using HDR technology.
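The multi-exposure HDR synthesis mentioned above can be sketched as per-pixel, exposure-weighted fusion. This is a minimal illustration under assumed details: the `hdr_merge` function, the hat-shaped weighting, and the 0–255 pixel range are assumptions for illustration, not the patent's actual algorithm.

```python
# Sketch of exposure-weighted HDR fusion: each frame was captured at a
# different exposure coefficient; per-pixel "hat" weights favor
# well-exposed values, and each value is normalized by its exposure
# before averaging. All names and the weighting are assumptions.
def hdr_merge(frames, exposures):
    """frames: list of grayscale images (lists of rows, values 0..255);
    exposures: relative exposure coefficient of each frame.
    Returns a per-pixel radiance estimate."""
    height, width = len(frames[0]), len(frames[0][0])

    def weight(v):
        # Trust mid-range pixels; near-0 and near-255 values get low weight.
        return max(1, min(v, 255 - v))

    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            num = den = 0.0
            for frame, exposure in zip(frames, exposures):
                v = frame[y][x]
                w = weight(v)
                num += w * (v / exposure)  # exposure-normalized radiance
                den += w
            out[y][x] = num / den
    return out
```

A pixel that reads 100 at exposure 1 and 200 at exposure 2 is consistent with a single radiance of 100, which the weighted average recovers; saturated pixels are down-weighted rather than trusted.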
数字信号处理器用于处理数字信号,除了可以处理数字图像或视频信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。Digital signal processors are used to process digital signals. In addition to digital image or video signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
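As a rough illustration of the computation the text attributes to the digital signal processor (a Fourier transform over frequency-point energy), the following sketch uses a plain DFT; the function names and the bin-based interface are assumptions, not the device's actual DSP code, and a real DSP would use an optimized FFT.

```python
import cmath

# Sketch of frequency-point energy estimation via a discrete Fourier
# transform. Function names and the bin-based interface are assumptions.
def energy_at(samples, k):
    """Energy of DFT bin k for a real-valued sample sequence."""
    n = len(samples)
    coeff = sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(samples))
    return abs(coeff) ** 2 / n

def strongest_bin(samples, candidate_bins):
    """Candidate frequency bin carrying the most energy."""
    return max(candidate_bins, key=lambda k: energy_at(samples, k))
```

For a pure cosine occupying one DFT bin, `strongest_bin` picks that bin, which is the kind of decision frequency-point selection would rest on.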
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(Moving Picture Experts Group,MPEG)1,MPEG2,MPEG3,MPEG4等。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, for example: Moving Picture Experts Group (Moving Picture Experts Group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
NPU为神经网络(Neural-Network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。NPU is a neural network (Neural-Network, NN) computing processor. By referring to the structure of biological neural networks, such as the transmission mode between neurons in the human brain, it can quickly process input information and continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像视频播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(Universal Flash Storage,UFS)等。The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image and video playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (Universal Flash Storage, UFS).
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。The audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals. The electronic device 100 can listen to music or to a hands-free call through the speaker 170A.
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。Receiver 170B, also called "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 receives a call or a voice message, the receiver 170B can be placed close to the human ear to receive the voice.
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。The microphone 170C, also called "microphone" or "microphone", is used to convert sound signals into electrical signals.
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(Open Mobile Terminal Platform,OMTP)标准接口,美国蜂窝电信工业协会(Cellular Telecommunications Industry Association of the USA,CTIA)标准接口。The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be the USB interface 130, or may be a 3.5mm Open Mobile Terminal Platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
传感器模块180可以包括1个或多个传感器,这些传感器可以为相同类型或不同类型,可理解,图1所示的传感器模块180仅为一种示例性的划分方式,还可能有其他划分方式,本申请对此不作限制。The sensor module 180 may include one or more sensors, and these sensors may be of the same type or different types. It can be understood that the sensor module 180 shown in FIG. 1 is only an exemplary division method, and there may be other division methods. This application is not limited to this.
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。The pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
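The threshold behavior described above (the same touch position mapped to different instructions depending on touch intensity) can be sketched as follows; the normalized intensity scale, the threshold value, and the instruction names are illustrative assumptions, not values from the patent.

```python
# Sketch of the pressure-threshold logic above. FIRST_PRESSURE_THRESHOLD,
# the normalized intensity scale, and the instruction names are
# illustrative assumptions.
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized intensity

def sms_icon_instruction(intensity):
    """Instruction executed for a touch on the short message app icon."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"    # lighter press: view messages
    return "create_short_message"      # firmer press: new message
```

The design point is that intensity, not just position, selects the instruction, so a single icon can expose two actions.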
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。The gyro sensor 180B can be used to determine the motion posture of the electronic device 100 . In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x, y and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake. The gyro sensor 180B can also be used for navigation and somatosensory game scenes.
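The anti-shake step described above (compute the lens compensation distance from the detected shake angle, then move the lens the opposite way) can be sketched with a simple pinhole model; the tangent relation and the focal-length parameter are assumptions for illustration, not the patent's formula.

```python
import math

# Sketch of the stabilization computation above: the image shift caused by
# a shake angle is approximately focal_length * tan(angle), so the lens is
# moved by the opposite amount. The pinhole model is an assumption.
def compensation_distance(shake_angle_deg, focal_length_mm):
    """Lens displacement (mm) that offsets the detected shake angle."""
    return -focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

For example, a 1-degree shake with an assumed 26 mm focal length would call for roughly 0.45 mm of motion in the opposite direction.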
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case. In some embodiments, when the electronic device 100 is a clamshell machine, the electronic device 100 can detect opening and closing of the clamshell according to the magnetic sensor 180D. Furthermore, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, features such as automatic unlocking of the flip cover are set.
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备100姿态,应用于横竖屏切换,计步器等应用。The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 100, and can be applied to applications such as horizontal and vertical screen switching, pedometers, etc.
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。The distance sensor 180F is used to measure the distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
环境光传感器180L用于感知环境光亮度。The ambient light sensor 180L is used for sensing ambient light brightness.
指纹传感器180H用于获取指纹。电子设备100可以利用获取的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。The fingerprint sensor 180H is used to acquire fingerprints. The electronic device 100 can use the acquired fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy.
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。Touch sensor 180K, also known as "touch panel". The touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”. The touch sensor 180K is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 194 . In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
骨传导传感器180M可以获取振动信号。The bone conduction sensor 180M can acquire vibration signals.
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。The keys 190 include a power key, a volume key and the like. The key 190 may be a mechanical key. It can also be a touch button. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。The motor 191 can generate a vibrating reminder. The motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects.
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。The indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards. The SIM card interface 195 is also compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
图11为本申请实施例提供的一种电子设备100的软件结构示意图。FIG. 11 is a schematic diagram of a software structure of an electronic device 100 provided by an embodiment of the present application.
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将系统分为四层,从上至下分别为应用程序层,应用程序框架层,运行时(Runtime)和系统库,以及内核层。The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces. In some embodiments, the system is divided into four layers, which are application program layer, application program framework layer, runtime (Runtime) and system library, and kernel layer from top to bottom.
应用程序层可以包括一系列应用程序包。The application layer can consist of a series of application packages.
如图11所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序(也可以称为应用)。As shown in FIG. 11 , the application package may include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications (also called applications).
在本申请的一个实施例中,应用程序包还可以包括另一应用程序,用户可以在以触摸、点击、手势、语音等方式触发该应用程序后完成镜像投屏,在镜像投屏的过程中,电子设备100可以作为发送视频帧和音频帧的设备(例如,第一电子设备),也可以作为接收视频帧和音频帧的设备(例如,第二电子设备)。可理解,该应用程序的名称可以为“无线投屏”,本申请对此不作限制。In an embodiment of the present application, the application package may further include another application. The user can trigger this application by touch, tap, gesture, voice, or the like to perform screen mirroring. During screen mirroring, the electronic device 100 can act as the device that sends video frames and audio frames (for example, the first electronic device), or as the device that receives video frames and audio frames (for example, the second electronic device). It can be understood that the name of this application may be "wireless projection", which is not limited in this application.
应用程序框架层为应用程序层的应用程序提供应用编程接口(Application Programming Interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。The application framework layer provides an application programming interface (Application Programming Interface, API) and a programming framework for applications in the application layer. The application framework layer includes some predefined functions.
如图11所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。As shown in Figure 11, the application framework layer can include window manager, content provider, view system, phone manager, resource manager, notification manager and so on.
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。A window manager is used to manage window programs. The window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。Content providers are used to store and retrieve data and make it accessible to applications. Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。The view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. The view system can be used to build applications. A display interface can consist of one or more views. For example, a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。The phone manager is used to provide communication functions of the electronic device 100 . For example, the management of call status (including connected, hung up, etc.).
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话界面形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。The notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify the download completion, message reminder, etc. The notification manager can also be a notification that appears on the top status bar of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog interface. For example, prompting text information in the status bar, issuing a prompt sound, vibrating the electronic device, and flashing the indicator light, etc.
运行时(Runtime)包括核心库和虚拟机。Runtime负责系统的调度和管理。The runtime (Runtime) includes a core library and a virtual machine. The runtime is responsible for the scheduling and management of the system.
核心库包含两部分:一部分是编程语言(例如,java语言)需要调用的功能函数,另一部分是系统的核心库。The core library includes two parts: one part is the functions that the programming language (for example, the Java language) needs to call, and the other part is the core library of the system.
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的编程文件(例如,java文件)执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and the application framework layer run in virtual machines. The virtual machine executes programming files (for example, java files) of the application program layer and the application program framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include multiple function modules, for example, a surface manager (Surface Manager), media libraries (Media Libraries), a three-dimensional graphics processing library (for example, OpenGL ES), and a two-dimensional graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides blending of two-dimensional (2-Dimensional, 2D) and three-dimensional (3-Dimensional, 3D) layers for multiple applications.
The media libraries support playback and recording of audio and video in multiple commonly used formats, as well as still image files. The media libraries may support multiple audio and video encoding formats, for example, MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The two-dimensional graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and a virtual card driver.
The following describes, by way of example, the workflow of the software and hardware of the electronic device 100 with reference to a screen-mirroring scenario.
If the electronic device 100 is the device that sends video frames and audio frames during screen mirroring (for example, the first electronic device), then when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example a touch-click operation whose corresponding control is the wireless projection icon, the wireless projection application calls the interface of the application framework layer to start the wireless projection application, and then calls the kernel layer to start the corresponding driver, so that the video frames and audio frames are transmitted through the wireless communication module 160 to another device (the device that receives the video frames and audio frames during screen mirroring, for example, the second electronic device).
It can be understood that the device onto which the screen is projected (for example, the second electronic device) may start the wireless projection application by default, or may start the wireless projection application upon receiving a screen-mirroring request sent by another device. When starting the wireless projection application, the first electronic device may select a second electronic device on which the wireless projection application has already been started. Therefore, once the first electronic device has completed the selection and established a communication connection with the second electronic device, screen mirroring can begin.
It should be noted that the communication connection between the first electronic device and the second electronic device may be established by means of the wireless communication technology provided by the wireless communication module 160 in FIG. 10.
If the electronic device 100 is the device that receives video frames and audio frames during screen mirroring (for example, the second electronic device), it correspondingly starts the wireless projection application, receives the video frames and audio frames through the wireless communication module 160, and calls the kernel layer to start the display driver and the audio driver, so that the received video frames are displayed on the display screen 194 and the received audio frames are played through the speaker 170A.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
It should be understood that the terms "first", "second", "third", "fourth", and the various numerals used herein are merely distinctions made for convenience of description and are not intended to limit the scope of this application.
It should be understood that the term "and/or" herein describes only an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should also be understood that, in the various embodiments of this application, the sequence numbers of the foregoing processes do not imply an order of execution. The execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation shall not be regarded as going beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical function division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The steps in the methods of the embodiments of this application may be reordered, combined, or deleted according to actual needs.
The modules in the apparatuses of the embodiments of this application may be combined, divided, or deleted according to actual needs.
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (16)

  1. A method for dynamically adjusting a frame-dropping threshold, applied to a second electronic device, wherein the method comprises:
    receiving a video frame sent by a first electronic device;
    determining a frame-receiving time difference, wherein the frame-receiving time difference is the difference between the time of receiving the video frame and the time of receiving an M-th video frame, and the M-th video frame is a video frame sent by the first electronic device before sending the video frame;
    when the frame-receiving time difference is not less than the frame-dropping threshold, if a first duration has not reached a preset time, decreasing the frame-dropping threshold, wherein the first duration is the duration from the last adjustment of the frame-dropping threshold to the current moment, and the frame-dropping threshold is used to determine whether to drop the video frame;
    or, when the frame-receiving time difference is not less than the frame-dropping threshold, if the first duration has reached the preset time and the frame-dropping threshold is less than an upper threshold limit, increasing the frame-dropping threshold; and if the first duration has reached the preset time and the frame-dropping threshold is not less than the upper threshold limit, decreasing the frame-dropping threshold, wherein the upper threshold limit is the maximum value of the frame-dropping threshold;
    recording the decoding latency and the display latency of N video frames received after the video frame is received, wherein the decoding latency is the time from when a video frame reaches the decoder until decoding is completed, and the display latency is the time from when a video frame completes decoding until it is shown on the display screen;
    determining whether N is equal to a preset number of frames; and
    if N is equal to the preset number of frames, determining whether the adjusted frame-dropping threshold is valid; and if the adjusted frame-dropping threshold is invalid, decreasing the adjusted frame-dropping threshold and stopping a threshold test, wherein the threshold test is used to determine whether the adjustment to the frame-dropping threshold is valid.
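The per-frame adjustment flow recited in claim 1 can be illustrated with a short sketch. All concrete values below (the preset time, the threshold cap, the step size) and all class and function names are assumptions chosen for illustration only; the claim deliberately leaves them unspecified.

```python
import time

class FrameDropController:
    """Illustrative sketch of the threshold-adjustment flow of claim 1.

    PRESET_TIME, THRESHOLD_CAP, and STEP are assumed example values;
    the patent does not fix any concrete numbers.
    """

    PRESET_TIME = 5.0      # seconds between permitted upward adjustments (assumed)
    THRESHOLD_CAP = 0.200  # upper limit of the frame-dropping threshold (assumed)
    STEP = 0.010           # adjustment step size (assumed)

    def __init__(self, threshold=0.050):
        self.threshold = threshold            # current frame-dropping threshold
        self.last_adjust = time.monotonic()   # moment of the last adjustment

    def on_frame(self, recv_time, prev_recv_time):
        """Return True if the incoming frame should be dropped."""
        delta = recv_time - prev_recv_time    # frame-receiving time difference
        if delta < self.threshold:
            return True                       # below threshold: drop the frame
        # delta >= threshold: adjust the threshold as in claim 1
        first_duration = time.monotonic() - self.last_adjust
        if first_duration < self.PRESET_TIME:
            self.threshold -= self.STEP       # preset time not reached: decrease
        elif self.threshold < self.THRESHOLD_CAP:
            self.threshold += self.STEP       # below the cap: increase
        else:
            self.threshold -= self.STEP       # at or above the cap: decrease
        self.last_adjust = time.monotonic()
        return False                          # keep the frame
```

A caller would invoke `on_frame` once per received video frame with the current and previous receive timestamps; the drop decision for delays below the threshold corresponds to claim 4.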
  2. The method according to claim 1, wherein the video frame is a video frame that is not used for the threshold test.
  3. The method according to claim 1 or 2, wherein the determining whether the adjusted frame-dropping threshold is valid comprises:
    determining an average decoding latency and an average display latency over a current full time period, wherein the current full time period is the period from the reception of the first video frame until the average decoding latency and the average display latency are determined; and
    if the decoding latency and the display latency of the N video frames are reduced by at least c% relative to the average decoding latency and the average display latency, respectively, determining that the adjusted frame-dropping threshold is valid.
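One plausible reading of the validity test in claim 3 — averaging the latencies of the N sampled frames and comparing them against the full-period averages — can be sketched as follows. The value of c and the averaging interpretation are assumptions, since the claim fixes neither.

```python
def threshold_adjustment_valid(decode_delays, display_delays,
                               avg_decode, avg_display, c=10.0):
    """Sketch of the validity test of claim 3.

    The adjusted threshold is considered valid only if both the average
    decoding latency and the average display latency of the N sampled
    frames fall by at least c% relative to the full-period averages.
    (c=10.0 is an assumed example value.)
    """
    sample_decode = sum(decode_delays) / len(decode_delays)
    sample_display = sum(display_delays) / len(display_delays)
    factor = 1.0 - c / 100.0  # e.g. c=10 -> latencies must be <= 90% of average
    return (sample_decode <= avg_decode * factor
            and sample_display <= avg_display * factor)
```

Under this reading, both latency metrics must improve for the adjustment to count as valid; improving only one of them leaves the adjustment invalid, which triggers the decrease-and-stop branch of claim 1.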
  4. The method according to any one of claims 1 to 3, wherein the method further comprises: dropping the video frame when the frame-receiving time difference is less than the frame-dropping threshold.
  5. The method according to any one of claims 1 to 4, wherein after the receiving a video frame sent by the first electronic device, the method further comprises: recording the time of receiving the video frame; and initializing the frame-dropping threshold.
  6. The method according to claim 5, wherein after the recording the time of receiving the video frame, the method further comprises: storing the time of receiving the video frame in a first queue, wherein the time at which the second electronic device received the M-th video frame is stored in the first queue.
  7. The method according to claim 6, wherein before the recording the decoding latency and the display latency of N video frames received after the video frame is received, the method further comprises: removing the earliest-written element from the first queue, and writing the time of receiving the video frame into the first queue.
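The first queue of claims 6 and 7 behaves like a bounded FIFO: the earliest-written element is evicted before the newest receive time is written. A minimal sketch, with an assumed capacity (the patent only requires the eviction behavior, not a specific length), could look like:

```python
from collections import deque

QUEUE_LEN = 8  # assumed capacity of the first queue (not fixed by the patent)

# A deque with maxlen automatically discards the earliest-written element
# when a new element is appended to a full queue, matching claim 7.
recv_times = deque(maxlen=QUEUE_LEN)

def record_receive_time(t):
    """Write the receive time of the newest frame into the first queue."""
    recv_times.append(t)  # oldest entry is evicted first if the queue is full

def frame_receiving_time_diff():
    """Difference between the newest receive time and the oldest retained one
    (the M-th frame's receive time, in the terms of claim 1)."""
    return recv_times[-1] - recv_times[0]
```

With this structure, the M-th frame of claim 1 is simply the oldest receive time still held in the queue when the newest frame arrives.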
  8. A method for dynamically adjusting a frame-dropping threshold, applied to a second electronic device, wherein the method comprises:
    receiving a video frame sent by a first electronic device;
    determining a frame-receiving time difference, wherein the frame-receiving time difference is the difference between the time of receiving the video frame and the time of receiving an M-th video frame, and the M-th video frame is a video frame sent by the first electronic device before sending the video frame;
    when the frame-receiving time difference is not less than the frame-dropping threshold, if a first duration has not reached a preset time, decreasing the frame-dropping threshold, wherein the first duration is the duration from the last adjustment of the frame-dropping threshold to the current moment, and the frame-dropping threshold is used to determine whether to drop the video frame;
    or, when the frame-receiving time difference is not less than the frame-dropping threshold, if the first duration has reached the preset time and the frame-dropping threshold is less than an upper threshold limit, increasing the frame-dropping threshold; and if the first duration has reached the preset time and the frame-dropping threshold is not less than the upper threshold limit, decreasing the frame-dropping threshold, wherein the upper threshold limit is the maximum value of the frame-dropping threshold;
    recording the decoding latency and the display latency of N video frames received after the video frame is received, wherein the decoding latency is the time from when a video frame reaches the decoder until decoding is completed, and the display latency is the time from when a video frame completes decoding until it is shown on the display screen;
    determining whether N is equal to a preset number of frames;
    if N is equal to the preset number of frames, determining an average decoding latency and an average display latency over a current full time period, wherein the current full time period is the period from the reception of the first video frame until the average decoding latency and the average display latency are determined;
    if the decoding latency and the display latency of the N video frames are reduced by at least c% relative to the average decoding latency and the average display latency, respectively, determining that an adjusted frame-dropping threshold is valid; and if the decoding latency and the display latency of the N video frames are not reduced by at least c% relative to the average decoding latency and the average display latency, respectively, determining that the adjusted frame-dropping threshold is invalid; and
    if the adjusted frame-dropping threshold is invalid, decreasing the adjusted frame-dropping threshold and stopping a threshold test, wherein the threshold test is used to determine whether the adjustment to the frame-dropping threshold is valid.
  9. The method according to claim 8, wherein the video frame is a video frame that is not used for the threshold test.
  10. The method according to claim 9, wherein the method further comprises: dropping the video frame when the frame-receiving time difference is less than the frame-dropping threshold.
  11. The method according to any one of claims 8 to 10, wherein after the receiving a video frame sent by the first electronic device, the method further comprises: recording the time of receiving the video frame; and initializing the frame-dropping threshold.
  12. The method according to claim 11, wherein after the recording the time of receiving the video frame, the method further comprises: storing the time of receiving the video frame in a first queue, wherein the time at which the second electronic device received the M-th video frame is stored in the first queue.
  13. The method according to claim 12, wherein before the recording the decoding latency and the display latency of N video frames received after the video frame is received, the method further comprises: removing the earliest-written element from the first queue, and writing the time of receiving the video frame into the first queue.
  14. An electronic device, comprising a display screen, a memory, and one or more processors, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program, so that the electronic device performs the method according to any one of claims 1 to 13.
  15. A computer storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 13.
  16. A computer program product, comprising instructions, wherein when the computer program is executed by a computer, the computer is enabled to perform the method according to any one of claims 1 to 13.
PCT/CN2022/092369 2021-06-25 2022-05-12 Method for dynamically adjusting frame-dropping threshold value, and related devices WO2022267733A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110713481.0A CN113473229B (en) 2021-06-25 2021-06-25 Method for dynamically adjusting frame loss threshold and related equipment
CN202110713481.0 2021-06-25

Publications (1)

Publication Number Publication Date
WO2022267733A1 true WO2022267733A1 (en) 2022-12-29

Family

ID=77873126

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092369 WO2022267733A1 (en) 2021-06-25 2022-05-12 Method for dynamically adjusting frame-dropping threshold value, and related devices

Country Status (2)

Country Link
CN (1) CN113473229B (en)
WO (1) WO2022267733A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117041669A (en) * 2023-09-27 2023-11-10 湖南快乐阳光互动娱乐传媒有限公司 Super-division control method and device for video stream and electronic equipment

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473229B (en) * 2021-06-25 2022-04-12 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment
CN114025233B (en) * 2021-10-27 2023-07-14 网易(杭州)网络有限公司 Data processing method and device, electronic equipment and storage medium
CN114157902B (en) * 2021-12-02 2024-03-22 瑞森网安(福建)信息科技有限公司 Wireless screen projection method, system and storage medium
CN115550708B (en) * 2022-01-07 2023-12-19 荣耀终端有限公司 Data processing method and electronic equipment
CN114579075B (en) * 2022-01-30 2023-01-17 荣耀终端有限公司 Data processing method and related device
CN114449309B (en) * 2022-02-14 2023-10-13 杭州登虹科技有限公司 Dynamic diagram playing method for cloud guide
CN115102931B (en) * 2022-05-20 2023-12-19 阿里巴巴(中国)有限公司 Method for adaptively adjusting audio delay and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394421A (en) * 2013-09-23 2015-03-04 贵阳朗玛信息技术股份有限公司 Video frame processing method and device
WO2016207688A1 (en) * 2015-06-26 2016-12-29 Intel Corporation Method and system for improving video quality during call handover
CN106954101A (en) * 2017-04-25 2017-07-14 华南理工大学 The frame losing control method that a kind of low latency real-time video Streaming Media is wirelessly transferred
US20170318323A1 (en) * 2016-04-29 2017-11-02 Mediatek Singapore Pte. Ltd. Video playback method and control terminal thereof
CN109714634A (en) * 2018-12-29 2019-05-03 青岛海信电器股份有限公司 A kind of decoding synchronous method, device and the equipment of live data streams
CN110177308A (en) * 2019-04-15 2019-08-27 广州虎牙信息科技有限公司 Mobile terminal and its audio-video frame losing method in record screen, computer storage medium
CN112087627A (en) * 2020-08-04 2020-12-15 西安万像电子科技有限公司 Image coding control method, device, equipment and storage medium
CN112822505A (en) * 2020-12-31 2021-05-18 杭州星犀科技有限公司 Audio and video frame loss method, device, system, storage medium and computer equipment
CN113473229A (en) * 2021-06-25 2021-10-01 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101990087A (en) * 2010-09-28 2011-03-23 深圳中兴力维技术有限公司 Wireless video monitoring system and method for dynamically regulating code stream according to network state
CN102368823A (en) * 2011-06-28 2012-03-07 上海盈方微电子有限公司 Video framedropping strategy based on grading mechanism
CN104144032B (en) * 2013-05-10 2017-11-17 华为技术有限公司 A kind of frame detection method and device
CN104299614B (en) * 2013-07-16 2017-12-29 华为技术有限公司 Coding/decoding method and decoding apparatus
CN104539917A (en) * 2015-02-03 2015-04-22 成都金本华科技股份有限公司 Method for improving definition of video image
CN106331835B (en) * 2015-06-26 2019-06-07 成都鼎桥通信技术有限公司 A kind of dynamic adjusting data receives the method and video decoding apparatus of caching
CN105847926A (en) * 2016-03-31 2016-08-10 乐视控股(北京)有限公司 Multimedia data synchronous playing method and device
CN105955688B (en) * 2016-05-04 2018-11-02 广州视睿电子科技有限公司 Method and system for processing frame loss of PPT (power point) playing
US10412341B2 (en) * 2016-05-16 2019-09-10 Nec Display Solutions, Ltd. Image display device, frame transmission interval control method, and image display system
CN106817614B (en) * 2017-01-20 2020-08-04 浙江瑞华康源科技有限公司 Audio and video frame loss device and method
CN108737818B (en) * 2018-05-21 2020-09-15 深圳市梦网科技发展有限公司 Frame loss method and device under congestion network and terminal equipment
CN110351595B (en) * 2019-07-17 2023-08-18 北京百度网讯科技有限公司 Buffer processing method, device, equipment and computer storage medium
WO2021042341A1 (en) * 2019-09-05 2021-03-11 深圳市大疆创新科技有限公司 Video display method, receiving end, system and storage medium
CN111162964B (en) * 2019-12-17 2021-10-12 山东鲁软数字科技有限公司智慧能源分公司 Intelligent station message integrity analysis method and system
CN112073751B (en) * 2020-09-21 2023-03-28 苏州科达科技股份有限公司 Video playing method, device, equipment and readable storage medium
CN112153446B (en) * 2020-09-27 2022-07-26 海信视像科技股份有限公司 Display device and streaming media video audio and video synchronization method
CN112312229A (en) * 2020-10-27 2021-02-02 唐桥科技(杭州)有限公司 Video transmission method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113473229A (en) 2021-10-01
CN113473229B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2022267733A1 (en) Method for dynamically adjusting frame-dropping threshold value, and related devices
US11989482B2 (en) Split-screen projection of an image including multiple application interfaces
WO2020221039A1 (en) Screen projection method, electronic device and screen projection system
WO2022257977A1 (en) Screen projection method for electronic device, and electronic device
WO2020014880A1 (en) Multi-screen interaction method and device
CN114579075B (en) Data processing method and related device
WO2022100305A1 (en) Cross-device picture display method and apparatus, and electronic device
CN112398855B (en) Method and device for transferring application contents across devices and electronic device
WO2020143380A1 (en) Data transmission method and electronic device
WO2021185244A1 (en) Device interaction method and electronic device
WO2022017393A1 (en) Display interaction system, display method, and device
US20230305864A1 (en) Method for Displaying Plurality of Windows and Electronic Device
WO2022105445A1 (en) Browser-based application screen projection method and related apparatus
JP7181990B2 (en) Data transmission method and electronic device
CN114579076A (en) Data processing method and related device
WO2022222713A1 (en) Codec negotiation and switching method
WO2023030099A1 (en) Cross-device interaction method and apparatus, and screen projection system and terminal
WO2022042769A2 (en) Multi-screen interaction system and method, apparatus, and medium
CN115048012A (en) Data processing method and related device
WO2022222924A1 (en) Method for adjusting screen projection display parameters
WO2022222691A1 (en) Call processing method and related device
WO2022161006A1 (en) Photograph synthesis method and apparatus, and electronic device and readable storage medium
WO2022156721A1 (en) Photographing method and electronic device
WO2024156206A9 (en) Display method and electronic device
WO2021052388A1 (en) Video communication method and video communication apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827231

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22827231

Country of ref document: EP

Kind code of ref document: A1