CN116028209B - Resource scheduling method, electronic equipment and storage medium
Abstract
The application provides a resource scheduling method, an electronic device, and a storage medium, relating to the field of computer technology. In this scheme, the electronic device first judges whether a newly created process belongs to a preset video application and, if it does, then judges whether that process is using the three-dimensional (3D) graphics capability and the video decoding capability of the GPU. The current user scene of the electronic device can thereby be accurately identified as a video playing scene or not. In a video playing scene, resources are scheduled dynamically and more precisely for the current tasks according to their load and the resource requirements of video playback, so that the user enjoys smooth, stutter-free video playback while the performance of the electronic device is preserved.
Description
This application claims priority to Chinese patent application No. 202210530859.8, entitled "Resource scheduling method based on Windows video scene and electronic device", filed with the China National Intellectual Property Administration on May 16, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a resource scheduling method, an electronic device, and a storage medium.
Background
As the performance of electronic devices improves, their power consumption rises, and so do users' expectations for the usage experience. Resource scheduling is one way to meet those expectations. In the traditional resource scheduling scheme, the electronic device gathers statistics on the load of all currently executing tasks over a period of time and then schedules resources for all of them according to those statistics. For example, if the overall load is high, the power of the central processing unit (CPU) may be increased; however, this keeps the CPU running in a high-performance state in most user scenarios, wasting resources and consuming excessive power.
Disclosure of Invention
The application provides a resource scheduling method, an electronic device, and a storage medium that can more accurately identify whether the Windows foreground is in a video playing scene and, when it is, optimize system performance based on the power consumption requirements of video playback, ensuring a smooth overall experience while reducing the energy consumption of the electronic device.
In order to achieve the above purpose, the application adopts the following technical scheme:
In a first aspect, the present application provides a resource scheduling method applied to an electronic device, where the electronic device includes a graphics processing unit (GPU) and a central processing unit (CPU). The method includes:

in response to a first operation by which a user opens a first application, displaying, by the electronic device, a first window, where the first window is the focus window; acquiring process information of a first process corresponding to the first window, where the process information of the first process includes a first process name and a first process identifier, and the first process name corresponds to the first application; determining, according to the first process name, that the first application is an application in a preset whitelist, where the preset whitelist includes one or more video-type applications; calling a Windows interface to acquire GPU occupancy information, where the GPU occupancy information includes the process identifier of a second process, the second process being a process that is using the three-dimensional graphics capability and the video decoding capability of the GPU; when the process identifier of the second process includes the first process identifier, determining that the user scene of the electronic device is a video playing scene; determining a first scheduling policy according to the system load of the electronic device and the video playing scene; and adjusting the resource allocation of the electronic device according to the first scheduling policy.
According to the scheme provided by the application, it is first judged whether a newly created process belongs to a preset video application; if so, it is then judged whether that process is using the 3D graphics capability and the video decoding capability of the GPU. Whether the current user scene of the electronic device is a video playing scene can thus be accurately identified, and in a video playing scene resources are scheduled dynamically and more precisely for the current tasks according to their load and the resource requirements of video playback, giving the user a smooth, stutter-free video playback experience while preserving the performance of the electronic device.
With this scheme, as soon as a first process in the whitelist is started, whether it is using the GPU 3D and video decoding capabilities is checked immediately, thereby determining whether the device is currently in a video playing scene.

In other embodiments, the currently running first processes in the whitelist may be stored first; the processes that are using the GPU 3D and video decoding capabilities are then determined and compared against the stored first processes, and whether a first process is using those capabilities is decided from the comparison result, thereby determining whether the current scene is a video playing scene.
In some possible implementations, before the Windows interface is called to acquire the GPU occupancy information, the method further includes: creating a first thread, where the first thread is used to query the GPU occupancy information.

Calling the Windows interface to acquire the GPU occupancy information then includes: in response to successful creation of the first thread, calling the Windows interface and acquiring the GPU occupancy information.
In some possible implementations, if the process PID array A is not empty, a thread is started to repeatedly query whether the PIDs in the array are using the GPU. Specifically, a Windows interface is called, the value returned by the Windows API is parsed to obtain the PIDs of the processes using the GPU 3D engine or the GPU video decoder, and the obtained PIDs are compared with the PIDs to be queried. If an obtained PID matches a PID to be queried, the current user scene is determined to be a video playing scene; otherwise, it is determined not to be a video playing scene.
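As a rough illustration of this polling flow (a sketch under assumptions, not the patent's implementation), a background thread could walk the whitelisted PIDs once per second. `IsPidUsingGpu3DAndDecode` is a hypothetical helper; one possible PDH-based realization is sketched later in this section.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical helper (see the PDH sketch below): returns true if `pid`
// currently shows activity on both the GPU 3D engine and the GPU
// video-decode engine.
bool IsPidUsingGpu3DAndDecode(unsigned long pid);

std::mutex g_mutex;
std::vector<unsigned long> g_whitelistPids;     // "array A": whitelisted video-app PIDs
std::atomic<bool> g_videoPlaybackScene{false};

// Started only once array A is non-empty; polls each PID in turn.
void GpuSceneProbeThread(std::atomic<bool>& stop) {
    while (!stop) {
        bool playing = false;
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            for (unsigned long pid : g_whitelistPids) {
                if (IsPidUsingGpu3DAndDecode(pid)) { playing = true; break; }
            }
        }
        g_videoPlaybackScene = playing;          // video playing scene or not
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```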
In some possible implementations, creating the first thread includes: creating the first thread when it is determined that a first array is not empty, where the first array is used to store the process identifiers of applications belonging to the preset whitelist.

In some possible implementations, after determining, according to the first process name, that the first application is an application in the preset whitelist, the method further includes: storing the first process identifier in the first array.
In some possible implementations, after calling the Windows interface to acquire the GPU occupancy information, the method further includes: parsing the GPU occupancy information to obtain the process identifiers of the second processes, where these include identifiers of processes that are using the three-dimensional graphics capability of the GPU and identifiers of processes that are using the video decoding capability of the GPU; storing the identifiers of the processes using the three-dimensional graphics capability of the GPU in a second array; and storing the identifiers of the processes using the video decoding capability of the GPU in a third array.
In some possible implementations, when the process identifier corresponding to the second process includes the first process identifier, determining that the user scene of the electronic device is a video playing scene includes: if both the second array and the third array include a process identifier stored in the first array, determining that the user scene of the electronic device is a video playing scene.
In some possible implementations, the method further includes: if the second array or the third array does not include a process identifier from the first array, clearing the second array and the third array.

It can be understood that, when the first process is not currently using GPU 3D and GPU video decoding, clearing the second and third arrays ensures that they hold only the latest GPU occupancy information after each update, so that the video-playing-scene judgment based on these arrays stays current and the scene can be identified accurately in real time. A bookkeeping sketch follows.
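The following sketch (names and container choices are assumptions, not the patent's code) shows one way this could look:

```cpp
#include <set>
#include <vector>

std::vector<unsigned long> firstArray;   // PIDs of running whitelisted apps
std::set<unsigned long>    secondArray;  // PIDs currently using the GPU 3D engine
std::set<unsigned long>    thirdArray;   // PIDs currently using GPU video decode

bool IsVideoPlaybackScene() {
    for (unsigned long pid : firstArray) {
        // The same PID must be using BOTH the 3D engine and video decode.
        if (secondArray.count(pid) && thirdArray.count(pid)) return true;
    }
    // Not a video playing scene: clear both arrays so the next query
    // stores only the freshest GPU occupancy information.
    secondArray.clear();
    thirdArray.clear();
    return false;
}
```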
In some possible implementations, after clearing the second array and the third array, the method further includes: periodically calling the Windows interface to acquire the GPU occupancy information during the life cycle of the first process; and determining, according to the GPU occupancy information and the first process identifier, whether the user scene of the electronic device is a video playing scene.

It can be appreciated that while the first process of the video application is alive, the video application may be running in the foreground and playing a video resource, in which case the scene can be considered a video playing scene; or the application may have moved to the background, or no video may be playing, in which case it is a non-video-playing scene. Therefore, during the life cycle of the first process (i.e., while the process is alive), it is necessary to keep querying whether the first process is using GPU 3D and GPU video decoding, so as to determine in real time whether the current user scene is a video playing scene.
In some possible implementations, the method further includes: deleting the first process identifier from the first array when it is detected that the first process has been cleaned up.

It should be noted that when the user closes the video application, or it closes for other reasons, its first process is killed (cleaned up). The PID of the first process can then be deleted from the first array (which stores the process identifiers of video applications): once the application is closed and its process killed, there is no longer any need to query whether that process is using the GPU 3D engine or the GPU video decoder.
In some possible implementations, the Windows interface is a performance data helper (PDH) function.
In some possible implementations, calling the Windows interface to acquire the GPU occupancy information includes:

calling a PDH function, which returns N first structures, each of the N first structures containing PID information and a counter value;

traversing each of the N first structures to obtain the counter value in each;

when the counter value in a second structure is greater than zero, obtaining the PID information in that second structure, where the PID information includes the identifier of a process that is using the three-dimensional graphics capability or the video decoding capability of the GPU, and the second structure is one of the N first structures;

and parsing the PID information to obtain the GPU occupancy information.
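On Windows 10 and later, per-process GPU engine utilization is exposed through the PDH counter path \GPU Engine(*)\Utilization Percentage, whose instance names embed the PID and engine type (e.g. pid_1234_..._engtype_3D or ..._engtype_VideoDecode). The sketch below shows one way such a query could look; it is an assumption about the mechanism described above, not the patent's code. Each PDH_FMT_COUNTERVALUE_ITEM_W plays the role of a "first structure": its szName carries the PID information and its FmtValue the counter value.

```cpp
#include <windows.h>
#include <pdh.h>
#include <cwchar>
#include <string>
#include <vector>
#pragma comment(lib, "pdh.lib")

// Fills pids3D / pidsDecode with the PIDs whose GPU 3D / video-decode
// engine utilization is above zero.
void QueryGpuEnginePids(std::vector<unsigned long>& pids3D,
                        std::vector<unsigned long>& pidsDecode) {
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    if (PdhOpenQuery(nullptr, 0, &query) != ERROR_SUCCESS) return;
    PdhAddEnglishCounterW(query, L"\\GPU Engine(*)\\Utilization Percentage",
                          0, &counter);
    PdhCollectQueryData(query);
    Sleep(1000);                     // rate counters need two samples
    PdhCollectQueryData(query);

    DWORD bytes = 0, count = 0;      // first call sizes the buffer
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count, nullptr);
    std::vector<BYTE> buf(bytes);
    auto items = reinterpret_cast<PDH_FMT_COUNTERVALUE_ITEM_W*>(buf.data());
    if (PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count,
                                     items) == ERROR_SUCCESS) {
        for (DWORD i = 0; i < count; ++i) {
            if (items[i].FmtValue.doubleValue <= 0.0) continue;  // counter value check
            std::wstring name = items[i].szName;                 // carries the PID info
            size_t p = name.find(L"pid_");
            if (p == std::wstring::npos) continue;
            unsigned long pid = std::wcstoul(name.c_str() + p + 4, nullptr, 10);
            if (name.find(L"engtype_3D") != std::wstring::npos)
                pids3D.push_back(pid);
            else if (name.find(L"engtype_VideoDecode") != std::wstring::npos)
                pidsDecode.push_back(pid);
        }
    }
    PdhCloseQuery(query);
}
```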
In some possible implementations, determining the first scheduling policy according to the system load and the video playing scene includes:

determining a second scheduling policy according to the video playing scene, where the second scheduling policy includes a process priority A of the first process, a first input/output (I/O) priority, a first long-duration power limit PL1 of the CPU, a first short-duration power limit PL2, and a first energy efficiency ratio EPP;

and obtaining the first scheduling policy according to the system load, the video playing scene, and the second scheduling policy, where the first scheduling policy includes at least a process priority B of the first process, a second I/O priority, a second PL1, a second PL2, and a second EPP of the CPU.

Here, when the system load is greater than a preset first value, process priority B is greater than or equal to process priority A, the second I/O priority is greater than or equal to the first I/O priority, the second PL1 is greater than the first PL1, the second PL2 is greater than the first PL2, and the second EPP is less than the first EPP.
Here, the system load is the average number of processes in the runnable state plus processes in the uninterruptible state. A process in the runnable state is one that is using or waiting for the CPU; a process in the uninterruptible state is one waiting for I/O access (e.g., disk I/O). The system load can be divided into three levels: light, medium, and heavy. The electronic device may be preconfigured with scheduling policies for each combination of user scenario and system load level.

It can be understood that the higher the load, the higher the process priority and I/O priority of the first process should be, so that the first process preferentially occupies CPU resources and performs I/O access and therefore runs smoothly. In addition, when the load increases, PL1 and PL2 are raised appropriately and EPP is lowered, balancing the performance and power consumption of the electronic device.
With this scheme, the energy consumption of the electronic device is reduced and its battery life extended while performance still smoothly meets the user's needs. A sketch of such a load-dependent policy derivation follows.
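As a minimal sketch of such a derivation (field names and numeric values are illustrative assumptions, not the patent's tables), the base policy for the video playing scene could be adjusted under heavy load while honoring the ordering constraints above:

```cpp
enum class LoadLevel { Light, Medium, Heavy };

struct SchedulingPolicy {
    int processPriority;  // higher = more CPU scheduling preference
    int ioPriority;       // higher = more I/O preference
    int pl1Watts;         // long-duration CPU power limit (PL1)
    int pl2Watts;         // short-duration CPU power limit (PL2)
    int epp;              // 0..255, smaller = biased toward performance
};

// Base (second) scheduling policy for the video playing scene.
constexpr SchedulingPolicy kVideoBase{2, 2, 15, 25, 180};

// Derive the actual (first) policy from the load.
SchedulingPolicy DeriveFirstPolicy(LoadLevel load) {
    SchedulingPolicy p = kVideoBase;
    if (load == LoadLevel::Heavy) {
        p.processPriority += 1;  // priority B >= priority A
        p.ioPriority      += 1;  // second I/O priority >= first
        p.pl1Watts        += 5;  // second PL1 > first PL1
        p.pl2Watts        += 10; // second PL2 > first PL2
        p.epp             -= 60; // second EPP < first EPP
    }
    return p;
}
```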
In some possible implementations, the first scheduling policy includes an operating system (OS) scheduling policy and a CPU power consumption scheduling policy. Adjusting the resource allocation of the electronic device according to the first scheduling policy includes: adjusting the process priority and the input/output (I/O) priority of the first process according to the OS scheduling policy; and adjusting the power consumption of the CPU according to the CPU power consumption scheduling policy.

In some possible implementations, the method further includes: determining the chip platform type of the CPU, where the chip platform type includes a first type and a second type. The first type may be an AMD CPU chip, and the second type may be an Intel CPU chip.

In some possible implementations, the CPU power consumption scheduling policy includes a first sub-policy and a second sub-policy, where the second sub-policy is a dynamic tuning technology (DTT) policy determined from the first sub-policy. Adjusting the power consumption of the CPU according to the CPU power consumption scheduling policy includes: if the chip platform type is the first type, adjusting the power consumption of the CPU according to the first sub-policy; and if the chip platform type is the second type, adjusting it according to the second sub-policy. That is, the application adaptively matches different power consumption scheduling policies to AMD and Intel CPUs.
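The platform check itself is straightforward; one way to do it (a sketch, assuming an MSVC toolchain) is to read the CPUID vendor string, which is "AuthenticAMD" on AMD parts and "GenuineIntel" on Intel parts:

```cpp
#include <intrin.h>
#include <cstring>

enum class ChipPlatform { AmdType, IntelType, Unknown };

// Reads the CPUID vendor string to decide which power consumption
// sub-policy applies (first sub-policy for AMD, DTT-based second
// sub-policy for Intel).
ChipPlatform DetectChipPlatform() {
    int regs[4] = {0};
    __cpuid(regs, 0);                      // leaf 0: vendor ID in EBX, EDX, ECX
    char vendor[13] = {0};
    std::memcpy(vendor + 0, &regs[1], 4);  // EBX
    std::memcpy(vendor + 4, &regs[3], 4);  // EDX
    std::memcpy(vendor + 8, &regs[2], 4);  // ECX
    if (std::strcmp(vendor, "AuthenticAMD") == 0) return ChipPlatform::AmdType;
    if (std::strcmp(vendor, "GenuineIntel") == 0) return ChipPlatform::IntelType;
    return ChipPlatform::Unknown;
}
```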
In one possible design of the first aspect, the GPU occupancy information of the first process includes the GPU occupancy rate of the first process and the GPU engine it uses. Determining the user scene of the electronic device according to the process information and the first information includes:

determining the type of the first process according to the process information; and, if the type of the first process is the video type, the GPU occupancy rate of the first process is greater than 0, and the GPU engine is the GPU video processing engine, determining that the user scene of the electronic device is a video playing scene. It will be appreciated that if the first process is of the video type, it can first be determined that the user is currently using a video application. A GPU occupancy rate greater than 0 indicates that the first process occupies GPU resources while running. If the GPU engine of the first process is the GPU video processing (video process) engine, the first process uses the GPU for decoding operations while running. It can therefore be determined that the user is, with high probability, playing a video on the electronic device, i.e., the user scene is a video playing scene.

If the type of the first process is the video type, the GPU occupancy rate of the first process is greater than 0, and the GPU engine is the GPU 3D engine, the user scene of the electronic device is determined to be a video browsing scene. Correspondingly, if the GPU engine of the first process is the GPU 3D engine, the first process uses the GPU only for 2D or 3D rendering, from which it can be inferred that the user is browsing video resources rather than playing a video, i.e., the user scene is a video browsing scene.
In one possible design of the first aspect, the method further includes: if the type of the first process is the game type, the power mode is the game mode, the GPU occupancy rate of the first process is greater than 0, and the GPU engine is the GPU 3D engine, determining that the user scene of the electronic device is a game scene.

It will be appreciated that if the first process is of the game type, it can first be determined that the user is currently using a game application. A GPU occupancy rate greater than 0 indicates that the first process occupies GPU resources while running. If the GPU engine of the first process is the GPU 3D engine, the first process uses the GPU for 2D or 3D rendering. It can therefore be determined that the user is, with high probability, playing a game on the electronic device, i.e., the user scene is a game scene.
In one possible design manner of the first aspect, the peripheral event includes one or more of a keyboard input event, a mouse input event, a microphone input event, and a camera input event; determining a user scene where the electronic equipment is located according to the process information and the first information, including: and determining the type of the first process according to the process information.
If the type of the first process is the social type and a keyboard input event is detected, the user scene of the electronic device is determined to be a text chat scene. That is, if the user is detected to be using a social application while typing, the user is with high probability chatting by text, and the user scene can be determined to be a text chat scene.

If the type of the first process is the social type, a microphone input event is detected, and no camera input event is detected, the user scene of the electronic device is determined to be a voice chat scene. That is, if the user is detected to be using a social application while providing voice input only, the user is with high probability in a voice chat, and the user scene can be determined to be a voice chat scene.

If the type of the first process is the social type and both a microphone input event and a camera input event are detected, the user scene of the electronic device is determined to be a video chat scene. That is, if the user is detected to be using a social application while providing both audio and video input, the user is with high probability in a video chat, and the user scene can be determined to be a video chat scene.
In one possible design manner of the first aspect, the method further includes:
if the type of the first process is office type, and a keyboard input event is detected, determining that the user scene where the electronic equipment is located is a document editing scene. If the fact that the user is using the office application and typing is detected, the user is likely to edit the document by using the office application, and the scene of the user where the electronic equipment is located can be determined to be a document editing scene.
If the type of the first process is office type, detecting a mouse input event and not detecting a keyboard input event, and determining that a user scene where the electronic equipment is located is a document browsing scene. That is, if it is detected that the user uses the mouse but does not use the keyboard in the process of using the office application, the user browses the document using the office application with a high probability, and it can be determined that the user scene where the electronic device is located is a document browsing scene.
If the type of the first process is office type, detecting a microphone input event and a camera input event, and determining that a user scene where the electronic equipment is located is a video conference scene. That is, if it is detected that the user uses the camera in the process of using the office application, the user is highly likely to use the office application to perform the video conference, and it can be determined that the user scene where the electronic device is located is a video conference scene.
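The following sketch condenses the social and office rules above into a single classification function (the enum names are assumptions introduced for illustration):

```cpp
enum class ProcessType { Video, Game, Social, Office, Other };

struct PeripheralEvents {
    bool keyboard, mouse, microphone, camera;
};

enum class UserScene {
    TextChat, VoiceChat, VideoChat,
    DocumentEdit, DocumentBrowse, VideoConference, Unknown
};

UserScene ClassifyScene(ProcessType type, const PeripheralEvents& ev) {
    if (type == ProcessType::Social) {
        if (ev.microphone && ev.camera) return UserScene::VideoChat;
        if (ev.microphone)              return UserScene::VoiceChat;
        if (ev.keyboard)                return UserScene::TextChat;
    } else if (type == ProcessType::Office) {
        if (ev.microphone && ev.camera) return UserScene::VideoConference;
        if (ev.keyboard)                return UserScene::DocumentEdit;
        if (ev.mouse)                   return UserScene::DocumentBrowse;  // mouse without keyboard
    }
    return UserScene::Unknown;
}
```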
In one possible design of the first aspect, the electronic device further includes a scene recognition engine, a system event driver (OsEventDriver) node, and a process manager, and the method further includes: the scene recognition engine sends a first request to the OsEventDriver node; the OsEventDriver node sends the first request to the process manager; in response to the first request, after creating a second process, the process manager sends the process information of the second process to the OsEventDriver node; and the OsEventDriver node sends the process information of the second process to the scene recognition engine.
In one possible design manner of the first aspect, the electronic device further includes a scene recognition engine and an API module, and the method further includes: the scene recognition engine sends a second request to the API module; in response to the second request, the API module sends the process information of the first process to the scene recognition engine after detecting that the focus window changes.
In one possible design of the first aspect, the electronic device further includes a scene recognition engine, an OsEventDriver node, and a graphics card driver, and the method further includes: the scene recognition engine sends a third request to the OsEventDriver node; the OsEventDriver node sends the third request to the graphics card driver; in response to the third request, after detecting that the GPU performs a decoding operation, the graphics card driver reports a GPU decoding event to the OsEventDriver node; and the OsEventDriver node sends the GPU decoding event to the scene recognition engine.

In one possible design of the first aspect, the electronic device further includes a scene recognition engine, an OsEventDriver node, and a peripheral driver, and the method further includes: the scene recognition engine sends a fourth request to the OsEventDriver node; the OsEventDriver node sends the fourth request to the peripheral driver; in response to the fourth request, after detecting a peripheral operation, the peripheral driver reports a peripheral event to the OsEventDriver node; and the OsEventDriver node sends the peripheral event to the scene recognition engine.
In one possible design manner of the first aspect, the method includes: responding to a first operation of a user, and acquiring a name of a first process and a name of a second process by an API module, wherein the second process is a process corresponding to a historical focus window; and if the name of the first process is inconsistent with the name of the second process, sending the process information of the first process to the scene recognition engine.
In one possible design manner of the first aspect, determining, according to the process information and the first information, a user scenario in which the electronic device is located includes: and the scene recognition engine determines a user scene where the electronic equipment is located according to the process information and the first information.
In one possible design manner of the first aspect, the electronic device further includes a scheduling engine, and the obtaining the scheduling policy according to the system load and the user scenario includes: the scene recognition engine determines a second scheduling strategy according to the user scene; the scene recognition engine sends a second scheduling strategy and a user scene to the scheduling engine; the scheduling engine sends a fifth request to the scene recognition engine; responding to the fifth request, the scene recognition engine acquires the system load and sends the system load to the scheduling engine; the scheduling engine obtains a first scheduling strategy according to the system load, the user scene and the second scheduling strategy.
In one possible design manner of the first aspect, the electronic device further includes a process manager and an I/O manager, the OS scheduling policy includes a process priority B of the first process and a second I/O priority, and the adjusting the process priority and the input/output I/O priority of the first process according to the OS scheduling policy includes: the scheduling engine sends a first instruction to the process manager, wherein the first instruction carries a process priority B of a first process; in response to receiving the first instruction, the process manager adjusts the process priority of the first process to process priority B; the scheduling engine sends a second instruction to the I/O manager, wherein the second instruction carries a second I/O priority of the first process; in response to receiving the second instruction, the I/O manager adjusts the I/O priority of the first process to a second I/O priority.
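As an illustration of the process-priority half of this flow (a sketch only; the patent does not name the APIs the process manager uses, and per-process I/O priority has no public Win32 API, being reachable only through the undocumented NtSetInformationProcess ProcessIoPriority class, omitted here):

```cpp
#include <windows.h>

// Apply a new process priority class to the first process, e.g.
// ABOVE_NORMAL_PRIORITY_CLASS for "process priority B".
bool ApplyProcessPriority(DWORD pid, DWORD priorityClass) {
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) return false;
    BOOL ok = SetPriorityClass(h, priorityClass);
    CloseHandle(h);
    return ok == TRUE;
}
```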
In one possible design manner of the first aspect, determining a chip platform type of the CPU includes: the scheduling engine judges whether the chip platform type of the CPU is of a first type or a second type.
In one possible design of the first aspect, the electronic device further includes a power manager and a system-and-chip (OS2SOC) driver node, the first sub-policy includes the second PL1, the second PL2, and the second EPP of the CPU, and adjusting the power consumption of the CPU according to the first sub-policy includes: the scheduling engine sends a third instruction to the OS2SOC driver node, where the third instruction carries the second PL1 and the second PL2 of the CPU; the OS2SOC driver node sends the third instruction to the CPU; in response to the third instruction, the CPU adjusts PL1 to the second PL1 and PL2 to the second PL2; the scheduling engine sends a fourth instruction to the power manager, where the fourth instruction carries the second EPP of the CPU; the power manager sends the fourth instruction to the CPU; and in response to the fourth instruction, the CPU adjusts the EPP to the second EPP.

In one possible design of the first aspect, the electronic device further includes an Intel DTT driver, and adjusting the power consumption of the CPU according to the second sub-policy includes: the scheduling engine sends a fifth instruction carrying the second sub-policy to the Intel DTT driver; the Intel DTT driver sends the fifth instruction to the CPU; and in response to the fifth instruction, the CPU operates based on the second sub-policy.
With this scheme, resources can be scheduled dynamically and more precisely for the current tasks according to the load of the tasks the electronic device is executing and the resource requirements of the user scene, reducing the energy consumption of the electronic device while keeping the usage experience smooth and stutter-free.
In a second aspect, the present application provides a resource scheduling apparatus comprising means for performing the method of the first aspect described above. The apparatus may correspond to performing the method described in the first aspect, and the relevant descriptions of the units in the apparatus are referred to the description of the first aspect, which is omitted herein for brevity.
The method described in the first aspect may be implemented by hardware, or may be implemented by executing corresponding software by hardware. The hardware or software includes one or more modules or units corresponding to the functions described above. Such as a processing module or unit, a display module or unit, etc.
In a third aspect, the present application provides an electronic device comprising a memory and one or more processors; wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium comprising computer instructions. When the computer instructions are run on an electronic device (e.g., a computer), they cause the electronic device to perform the method described in the first aspect and any one of its possible designs.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect and any one of its possible designs.
In a sixth aspect, the present application provides a chip system comprising one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected by a wire. The chip system described above may be applied to an electronic device including a communication module and a memory. The interface circuit is for receiving signals from a memory of the electronic device and transmitting the received signals to the processor, the signals including computer instructions stored in the memory. When executed by a processor, the electronic device may perform the method as described in the first aspect and any one of its possible designs.
It can be appreciated that, for the advantages achieved by the resource scheduling apparatus of the second aspect, the electronic device of the third aspect, the computer-readable storage medium of the fourth aspect, the computer program product of the fifth aspect, and the chip system of the sixth aspect, reference may be made to the advantages of the first aspect and any one of its possible designs; details are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device to which the resource scheduling method provided by an embodiment of the present application applies;
Fig. 2 is a schematic diagram of the software module architecture of the resource scheduling method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of interactions between software modules in the resource scheduling method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of signal interaction in the resource scheduling method according to an embodiment of the present application;
Fig. 5 is an interface diagram of the resource scheduling method according to an embodiment of the present application;
Fig. 6 is another schematic diagram of signal interaction in the resource scheduling method according to an embodiment of the present application;
Fig. 7 is another schematic diagram of signal interaction in the resource scheduling method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a chip structure corresponding to the resource scheduling method provided by an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application.
The term "and/or" herein is an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. The symbol "/" herein indicates that the associated object is or is a relationship, e.g., A/B indicates A or B.
The terms "first," "second," and the like herein are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
For clarity and conciseness in the description of the embodiments below, a brief introduction to related concepts or technologies is first given:
A focus window (focus window) refers to the window that has focus. The focus window is the only window that can receive keyboard input. How the focus window is determined is associated with the focus mode (focus mode) of the system. The top-level window containing the focus window is called the active window (active window); only one window can be the active window at a time. The focus window is, with high probability, the window the user currently wants to use.
The focus mode may be used to determine how the mouse brings a window into focus. In general, the focus modes may include three types, respectively:
(1) Click to focus (click-to-focus): in this mode, the window the mouse clicks on gets focus. That is, when the mouse clicks anywhere on a window that can receive focus, the window is activated, placed in front of all other windows, and receives keyboard input. When the mouse clicks on another window, this window loses focus.

(2) Focus follows mouse (focus-follows-mouse): in this mode, the window under the mouse acquires focus. That is, when the mouse moves into the area of a window that can receive focus, the user does not need to click anywhere on the window to activate it and give it keyboard input, but the window is not necessarily placed in front of all other windows. When the mouse moves out of the window's area, the window loses focus.

(3) Sloppy focus (sloppy-focus): this mode is similar to focus-follows-mouse: when the mouse moves into a window that can receive focus, the user can activate it and give it keyboard input without clicking, and the window is not necessarily brought to the front. The difference is that focus does not change when the mouse moves out of this window; it changes only when the mouse moves into another window that can receive focus.
A process may include multiple threads, and a thread may create windows. The focus process is the process to which the thread that created the focus window belongs.
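A sketch of how a focus-process snapshot could be taken with public Win32 calls (an illustration, not the patent's implementation): the foreground window is resolved to its owning PID and executable path.

```cpp
#include <windows.h>
#include <string>

// Resolve the current foreground (active) window to the PID and
// executable path of its owning process.
bool GetFocusProcess(DWORD& pid, std::wstring& exePath) {
    HWND hwnd = GetForegroundWindow();      // top-level active window
    if (!hwnd) return false;
    GetWindowThreadProcessId(hwnd, &pid);   // process that created the window
    HANDLE h = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (!h) return false;
    wchar_t buf[MAX_PATH];
    DWORD len = MAX_PATH;
    bool ok = QueryFullProcessImageNameW(h, 0, buf, &len) != 0;
    CloseHandle(h);
    if (ok) exePath.assign(buf, len);
    return ok;
}
```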
The long-duration power limit (PL1) refers to the power the CPU can sustain under normal load; it is equivalent to the thermal design power, and the CPU's running power does not exceed PL1 most of the time.

The short-duration power limit (PL2) refers to the highest power the CPU can reach in a short time; its duration is limited. Generally, PL2 is greater than PL1.

The CPU energy efficiency ratio (energy performance preference, EPP) reflects the scheduling tendency of the CPU and ranges from 0 to 255. The smaller the EPP, the more the CPU tends toward high performance; the larger the EPP, the more it tends toward low power consumption.
The application provides a resource scheduling method that introduces a kernel-layer node. The kernel-layer node can report focus window change events and first information (including the process information of the focus process, the focus process's GPU usage, peripheral events, the power mode, and so on) to the application layer. The application layer can determine the current user scene of the electronic device from the focus window change event and the first information, determine a first scheduling policy from the user scene and the system load of the electronic device, and, based on the first scheduling policy, adjust the process priority and I/O priority of the focus process and the power consumption of the CPU, thereby reducing the energy consumption of the electronic device while smoothly meeting the user's needs (i.e., ensuring that the focus process runs smoothly).
Referring to fig. 1, a schematic structure diagram of an electronic device 100 according to an embodiment of the application is shown.
As shown in fig. 1, the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, wireless communication module 150, display screen 160, etc.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an I2C interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a USB interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display screen 160, the wireless communication module 150, and the like. In some embodiments, the power management module 141 and the charge management module 140 may also be provided in the same device.
The wireless communication module 150 may provide solutions for wireless communication including WLAN (e.g., wi-Fi), bluetooth, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the electronic device 100. For example, in an embodiment of the present application, the electronic device 100 may establish a bluetooth connection with an electronic device (such as a wireless headset) through the wireless communication module 150.
The wireless communication module 150 may be one or more devices that integrate at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via an antenna, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 150 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via an antenna.
The electronic device 100 implements display functions through a GPU, a display screen 160, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 160 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 160 is used to display images, videos, and the like. The display 160 includes a display panel.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. For example, in an embodiment of the present application, the processor 110 may include a storage program area and a storage data area by executing instructions stored in the internal memory 121.
The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application exemplifies a Windows system of a layered architecture, and illustrates a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software architecture of the electronic device 100 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, the Windows system is divided into user mode and kernel mode. User mode comprises the application layer and the subsystem dynamic link libraries. Kernel mode comprises, from bottom to top, a firmware layer, a hardware abstraction layer (HAL), the kernel and driver layer, and the executive.
As shown in Fig. 2, the application layer includes applications such as music, video, game, office, and social applications, as well as an environment subsystem, a scene recognition engine, a scheduling engine, and the like. Only some applications are shown in the figure; the application layer may also include others, such as shopping applications and browsers, which is not limited in this application.
The environment subsystem may expose certain subsets of the basic executive services to the application in a particular modality, providing an execution environment for the application.
The scenario recognition engine may recognize a user scenario in which the electronic device 100 is located and determine a base scheduling policy (also referred to as a second scheduling policy) that matches the user scenario. The scheduling engine may obtain the load situation of the electronic device 100, and determine an actual scheduling policy (may also be referred to as a first scheduling policy) according to the actual operation situation of the electronic device 100 in combination with the load situation of the electronic device 100 and the basic scheduling policy. The specific contents of the scene recognition engine and the scheduling engine are described below, and are not described herein.
The subsystem dynamic link library includes an application programming interface (API) module, which includes the Windows API, the Windows native API, and so on. Both the Windows API and the Windows native API can provide system call entries and internal function support for applications; the difference is that the Windows native API is native to the Windows system. For example, the Windows API may include user.dll and kernel.dll, and the Windows native API may include ntdll.dll. user.dll is the Windows user interface and can be used to create windows, send messages, and so on. kernel.dll provides an interface for applications to access the kernel. ntdll.dll is an important Windows NT kernel-level file; when Windows starts, ntdll.dll resides in a write-protected region of memory so that other programs cannot occupy that region.
The executive includes a process manager, a virtual memory manager, a security reference monitor, an I/O manager, Windows management instrumentation (WMI), a power manager, a system event driver (OsEventDriver) node, a system-and-chip driver (operating system to system on chip, OS2SOC) node, and the like.
The process manager is used to create and suspend processes and threads.
The virtual memory manager implements "virtual memory". The virtual memory manager also provides basic support for the cache manager.
The security reference monitor may execute a security policy on the local computer that protects operating system resources, performs protection and monitoring of runtime objects.
The I/O manager performs device independent input/output and further processes call the appropriate device drivers.
The power manager may manage power state changes for all devices that support power state changes.
The system event driver node may interact with the kernel and driver layer, for example with the graphics card driver, and report a GPU video decoding event to the scene recognition engine after determining that one exists.

The system and chip driver node may be used by the scheduling engine to send adjustment information to hardware devices, for example to send PL1 and PL2 adjustments to the CPU.
The kernel and driver layer includes a kernel and a device driver.
The kernel is an abstraction of the processor architecture; it separates the executive from the differences between processor architectures and ensures the portability of the system. The kernel performs thread scheduling and dispatching, trap and exception handling, interrupt handling and dispatching, and so on.
The device driver operates in kernel mode as an interface between the I/O system and the associated hardware. The device drivers may include graphics card drivers, intel DTT drivers, mouse drivers, audio video drivers, camera drivers, keyboard drivers, and the like. For example, the graphics driver may drive the GPU to run and the Intel DTT driver may drive the CPU to run.
The HAL is a kernel-mode module that hides hardware-related details such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms. It provides uniform service interfaces for the different hardware platforms that run Windows, achieving portability across them. Note that, to preserve this portability, Windows internal components and user-written device drivers do not access hardware directly, but by calling routines in the HAL.

The firmware layer may include the basic input output system (BIOS), a set of programs burned into a read-only memory (ROM) chip on the computer's motherboard. It holds the computer's most important basic input/output programs, the power-on self-test program, and the system self-start program, and can read and write specific system-setting information in the complementary metal oxide semiconductor (CMOS) memory. Its main function is to provide the lowest-level, most direct hardware setup and control for the computer. The Intel DTT driver may send instructions to the CPU through the BIOS.
It should be noted that, the embodiment of the present application is only illustrated by a Windows system, and in other operating systems (such as an android system, an IOS system, etc.), the scheme of the present application can be implemented as long as the functions implemented by each functional module are similar to those of the embodiment of the present application.
Fig. 3 shows a schematic workflow diagram of software and hardware when the electronic device 100 schedules resources.
As shown in fig. 3, the application layer scene recognition engine includes a system probe module, a scene recognition module, and a base policy matching manager. The scene recognition module can interact with the system probe module and the basic policy matching manager respectively. The scene recognition module may send a request to the system probe module to obtain the probe status. The system probe module may acquire the operating state of the electronic device 100. For example, the system probe modules may include a power state probe, a peripheral state probe, a process load probe, an audio video state probe, a system load probe, a system event probe, and the like.
The power state probe may subscribe to power state events in kernel mode and determine the power state from the callback function fed back by kernel mode; the power state includes the battery's remaining charge, the power mode, and so on, and the power mode may be alternating current (AC) or direct current (DC) powered. For example, the power state probe may send a request to the OsEventDriver node of the executive layer to subscribe to power state events, and the OsEventDriver node forwards the request to the power manager of the executive layer. The power manager may feed back a callback function to the power state probe through the OsEventDriver node.
The peripheral state probe can subscribe to the kernel mode for peripheral events and determine the peripheral events according to the callback functions fed back by the kernel mode. Peripheral events include mouse wheel sliding events, mouse click events, keyboard input events, microphone input events, camera input events, and the like.
The process load probe can subscribe to the kernel mode for the process load and determine the load of a process (e.g., the first process) according to the callback function fed back by the kernel mode.
The system load probe can subscribe to the kernel mode for the system load and determine the system load according to the callback function fed back by the kernel mode.
The audio and video status probe may subscribe to the kernel mode for audio and video events and determine the audio and video events currently existing in the electronic device 100 according to the callback function fed back by the kernel mode. The audio and video events may include GPU decoding events and the like. For example, the audio/video status probe may send a request for subscribing to the GPU decoding event to the OsEventDriver node of the executive layer, and the OsEventDriver node forwards the request to the graphics card driver of the kernel and driver layer. The graphics card driver can monitor the state of the GPU, and after detecting that the GPU performs a decoding operation, feed back a callback function to the audio and video status probe through the OsEventDriver node.
The system event probe can subscribe to the kernel mode for system events and determine the system events according to the callback functions fed back by the kernel mode. The system events may include window change events, process creation events, thread creation events, and the like. For example, the system event probe may send a request to the OsEventDriver node of the executive layer to subscribe to a process creation event, and the OsEventDriver node forwards the request to the process manager. The process manager can feed back a callback function to the system event probe through the OsEventDriver node after a process is created. For another example, the system event probe may also send a request to subscribe to the focus window change event to the API module; the API module may monitor whether the focus window of the electronic device 100 has changed, and when it detects that the focus window has changed, feed back a callback function to the system event probe.
It can be seen that the system probe module subscribes to various events of the electronic device 100 from the kernel mode, and then determines the running state of the electronic device 100 according to the callback functions fed back by the kernel mode, thereby obtaining the probe status. After obtaining the probe status, the system probe module can feed the probe status back to the scene recognition module. After receiving the probe status, the scene recognition module can determine the user scene in which the electronic device 100 is located according to the probe status. The user scene may include a video playing scene, a game scene, an office scene, a social scene, and the like. The user scene can reflect the current usage needs of the user. For example, when the scene recognition engine recognizes that the focus window is a window of a video application, it determines that the electronic device 100 is in a video playing scene, indicating that the user needs to watch and browse video using the video application. For another example, when the scene recognition engine recognizes that the focus window is a chat window of WeChat™, it determines that the electronic device 100 is in a social scene. The scene recognition module may also send the user scene to the base policy matching manager. The base policy matching manager may determine a base scheduling policy (which may also be referred to as a second scheduling policy; see the descriptions in S301 and S302 below for details) according to the user scene. The base policy matching manager may feed the base scheduling policy back to the scene recognition module. The scene recognition module may send the base scheduling policy and the user scene to the scheduling engine of the application layer.
As shown in fig. 3, the scheduling engine includes a load manager, a chip policy aggregator, and a scheduling executor. The load manager can receive the base scheduling policy and the user scene sent by the scene recognition module. The load manager may also obtain the system load from the system probe module and adjust the base scheduling policy according to the system load and the user scene to obtain an actual scheduling policy (which may also be referred to as a first scheduling policy; see the description in S310 below for details). The actual scheduling policy includes an OS scheduling policy and a first CPU power consumption scheduling policy (which may also be referred to as a first sub-policy). The load manager may send the OS scheduling policy to the scheduling executor, and the scheduling executor may schedule based on the OS scheduling policy. The OS scheduling policy is used to adjust the process priority and I/O priority of the focus process. For example, the scheduling executor may send an instruction to the process manager to adjust the process priority of the focus process, and in response to the instruction, the process manager adjusts the process priority of the focus process. For another example, the scheduling executor may send an instruction to the I/O manager to adjust the I/O priority of the focus process, and in response to the instruction, the I/O manager adjusts the I/O priority of the focus process.
The load manager may also send the first CPU power consumption scheduling policy to the chip policy aggregator, and the chip policy aggregator may obtain a second CPU power consumption scheduling policy (which may also be referred to as a second sub-policy; see the descriptions in S317-S325 below for details) based on the chip platform type of the CPU and the first CPU power consumption scheduling policy. The chip platform types of the CPU are mainly divided into two types, namely CPUs from Advanced Micro Devices (advanced micro devices, AMD) and CPUs from Intel (Intel); the two differ in how the CPU power consumption is adjusted, so they need to be distinguished.
If the chip platform type of the CPU is AMD (also referred to as a first type), the scheduling executor may send an instruction to the power manager to adjust the EPP (energy performance preference) of the CPU. In addition, the scheduling executor may also send an instruction to the OS2SOC driver node to adjust PL1 and PL2 (the long-duration and short-duration power limits) of the CPU.
If the chip platform type of the CPU is Intel, the scheduling executor may send the second CPU power consumption scheduling policy to the Intel DTT driver through the WMI plug-in. The second CPU power consumption scheduling policy may include the minimum value of PL1, the maximum value of PL1, the duration of PL2, and the EPP, and the Intel DTT driver drives the CPU to operate based on the second CPU power consumption scheduling policy.
The resource scheduling method provided by the embodiment of the application is mainly divided into two processes: (1) determining the user scene in which the electronic device is located; and (2) performing resource scheduling according to the user scene in which the electronic device is located and the system load of the electronic device. The two processes are described below with reference to the drawings.
The following will take an example that the electronic device is in a video playing scene, and refer to fig. 4, to describe an interaction process of a part of modules in the electronic device shown in fig. 3. As shown in fig. 4, a flow of determining a user scenario where an electronic device is located in a resource scheduling method provided by an embodiment of the present application is as follows:
S101, the system probe module sends a request for subscribing to a process creation event to the OsEventDriver node.
As shown in fig. 3, the scene recognition engine includes a system probe module, and the system probe module includes a system event probe. In the embodiment of the application, the system event probe may send a request for subscribing to a process creation event to the OsEventDriver node located at the executive layer. The request to subscribe to a process creation event may also be referred to as a first request.
In an alternative embodiment, the request to subscribe to a process creation event may carry a process name. That is, the scene recognition engine may subscribe to only the creation events of the specified process, reducing interference of the creation events of irrelevant processes. For example, the specified process may be a process of a video application, a process of a game application, a process of an office application, a process of a social application, and so on. Of course, in other embodiments, the scenario recognition engine may not limit the subscribed process creation events.
S102, the OsEventDriver node sends a request for subscribing a process creation event to a process manager.
For the request to subscribe to a process creation event, refer to the description of S101; details are not repeated here.
That is, the system event probe of the scene recognition engine may send the request to subscribe to a process creation event to the process manager through the OsEventDriver node.
It will be appreciated that the OsEventDriver node registers a callback with the process manager, and the role of the registered callback is to return a process creation event to the OsEventDriver node after the process manager creates a process.
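As an illustrative sketch only (the application describes the OsEventDriver node and process manager at the level of requests and callbacks, not code), a kernel-mode subscription to process creation events on Windows can be expressed with the documented PsSetCreateProcessNotifyRoutineEx routine; the callback and registration function names here are hypothetical:

#include <ntddk.h>

// Hypothetical callback: the kernel's process manager invokes it on every
// process creation and exit; CreateInfo is non-NULL only for creation.
VOID CreateProcessNotify(
    PEPROCESS Process,
    HANDLE ProcessId,
    PPS_CREATE_NOTIFY_INFO CreateInfo)
{
    UNREFERENCED_PARAMETER(Process);
    if (CreateInfo != NULL) {
        // Process creation: report the image name and PID to the subscriber
        // (in this application, ultimately the system event probe).
        DbgPrint("process created: pid=%p image=%wZ\n",
                 ProcessId, CreateInfo->ImageFileName);
    }
}

// Hypothetical registration performed when the subscription request arrives.
NTSTATUS RegisterProcessCreationCallback(VOID)
{
    return PsSetCreateProcessNotifyRoutineEx(CreateProcessNotify, FALSE);
}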
S103, the system probe module sends a request for subscribing the GPU decoding event to the OsEventDriver node.
As also shown in fig. 3, the system probe module further includes an audio-visual status probe. In the embodiment of the application, an audio and video status probe of the system probe module can send a request for subscribing the GPU decoding event to the OsEventDriver node. Wherein, the request to subscribe to the GPU decode event may also be referred to as a third request.
S104, the OsEventDriver node sends a request for subscribing to the GPU decoding event to the graphics card driver.
That is, the audio and video status probe of the scene recognition engine may send the request to subscribe to the GPU decoding event to the graphics card driver through the OsEventDriver node. Similarly, the OsEventDriver node may register a callback with the graphics card driver, and the role of the registered callback is to return a GPU decoding event to the OsEventDriver node after the graphics card driver detects that the GPU performs a decoding operation.
S105, the system probe module sends a request for subscribing the focus window change event to the API module.
The API module may include a Windows user interface implemented by user32.dll, and the Windows user interface may be used to create windows. In an alternative embodiment, the request to subscribe to the focus window change event may be sent by the system event probe of the system probe module to the Windows user interface of the API module. The request to subscribe to the focus window change event may also be referred to as a second request.
Likewise, the system event probe may register a callback with the API module, and the role of the registered callback is to return a focus window change event to the system event probe when the Windows user interface of the API module detects that the focus window has changed.
The focus window is the window that has focus, and it is most likely the window the user currently needs to use. Thus, by monitoring the focus window, the user's usage needs can be determined. For example, if the focus window is a window of a video application, it indicates that the user needs to browse and play video. As another example, if the focus window is a window of a game application, it indicates that the user wants to play a game. By monitoring whether the focus window changes, it can be determined whether the user's needs have changed. For example, if the focus window changes from a window of a video application to a window of a game application, it indicates that the user's current need has changed from watching video to playing a game.
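For illustration, outside the application's own modules, a user-mode program can observe focus (foreground) window changes with the documented SetWinEventHook API; this minimal sketch, with hypothetical function names, shows the callback-style notification that the system event probe relies on:

#include <windows.h>

// Hypothetical callback: invoked whenever the foreground (focus) window changes.
VOID CALLBACK OnFocusWindowChange(HWINEVENTHOOK hHook, DWORD event, HWND hwnd,
                                  LONG idObject, LONG idChild,
                                  DWORD idEventThread, DWORD dwmsEventTime)
{
    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid); // PID of the process owning the new focus window
    // ... report a focus window change event (carrying the PID) upward ...
}

int main(void)
{
    HWINEVENTHOOK hHook = SetWinEventHook(
        EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND, // foreground changes only
        NULL, OnFocusWindowChange, 0, 0, WINEVENT_OUTOFCONTEXT);
    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0)) { DispatchMessageW(&msg); } // message loop
    UnhookWinEvent(hHook);
    return 0;
}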
The above-mentioned steps S101, S103 and S105 are not strictly sequential, and may be sequentially performed in the order shown in fig. 4, or may be simultaneously performed, or may be sequentially performed in the order of S103, S101 and S105, sequentially performed in the order of S103, S105 and S101, sequentially performed in the order of S105, S101 and S103, or sequentially performed in the order of S105, S103 and S101. Accordingly, there is no strict order among S102, S104, and S106, as long as it is satisfied that S102 is performed after S101, S104 is performed after S103, and S106 is performed after S105, and no specific limitation is made herein.
S106, in response to receiving an operation of starting the video application by the user, the video application sends a process creation request to the process manager.
The process creation request includes the storage address of the video application.
The video application may send the process creation request to the process manager (not shown in the figure) through the kernel32.dll interface and the ntdll interface of the API module.
S107, the process manager creates a video application process.
Specifically, the process manager may query the binary file of the video application through the storage address. By loading the binary file of the video application program, a process running environment can be created, and the video application process is started.
The Windows operating system defines one run of an application as a process. A process may own multiple threads. A window is an instance of a window structure and a graphical user interface (graphical user interface, GUI) resource; a window is created by a thread, and a thread owns all of the windows it creates. In the embodiment of the application, when the electronic device runs the video application, the process manager needs to create a process for the video application, namely, the video application process (i.e., the first process). The video application process comprises a plurality of threads, including thread 1; thread 1 can be used to create the main window of the video application, the main window being the window that integrates all of the function keys of the video application.
S108, the process manager reports a process creation event to the OsEventDriver node.
Wherein the process creation event may include a name of the process created by the process manager. In the embodiment of the application, the name of the process is the name of the video application process. Of course, if the process manager creates a process of another application, the name of the process corresponds to the name of the process of the other application.
As described above, the OsEventDriver node has sent a request to subscribe to process creation events to the process manager and registered a callback. Therefore, the process manager reports a process creation event to the OsEventDriver node after creating the video application process.
S109, the OsEventDriver node reports a process creation event to the system probe module.
For the description of the process creation event, refer to S108; details are not repeated here.
In the embodiment of the application, the OsEventDriver node can report the process creation event to a system event probe of the system probe module.
S110, the system probe module sends a process creation event to the scene recognition module.
S111, in response to a call request of thread 1, the API module creates window 1.
After the process manager creates the video application process, thread 1 of the video application process actively calls the Windows user interface of the API module to create window 1. For example, as shown in fig. 5 (a), the electronic device may display a window 101, where the window 101 may be the desktop, which may also be referred to as the main interface. The window 101 includes an icon 102 of the video application. The electronic device may receive an operation in which the user clicks the icon 102 of the video application, and in response to the operation, as shown in (b) of fig. 5, the electronic device displays a window 103 (i.e., window 1, which may also be referred to as a first window). In the above procedure, the focus window changes from the original window 101 to the window 103.
S112, the API module reports the focus window event to the system probe module.
In the embodiment of the present application, after creating window 1, the Windows user interface of the API module may obtain the name of the first process (i.e., the focus process) and the name of the third process, where the first process is the process corresponding to the current focus window (i.e., window 1), and the third process is the process corresponding to the previous focus window (e.g., window 2). Illustratively, the process corresponding to window 1 is the video application process (the first process), whose name is, for example, hlive.exe, and the process corresponding to window 2 is the process of the Windows program manager (the third process), whose name is, for example, explorer.exe. Because the name of the first process is inconsistent with the name of the third process, the API module determines that the focus window has changed and reports a focus window change event to the system event probe of the system probe module. The focus window change event includes the name of the first process (i.e., the focus process). Illustratively, the first process is the video application process, and the focus window change event carries the name of the video application process.
It should be noted that, in the case where the electronic device has already started the video application, the electronic device may not need to execute S106 to S111. After the system probe module sends a request for subscribing the focus window change event to the API module, if the user switches the focus window to the window of the video application, the API module can also detect that the focus window changes and report the focus window event to the system probe module.
S113, the system probe module sends a focus window event to the scene recognition module.
S114, the scene recognition module determines that the type to which the first process belongs is a video type.
The electronic device may be preconfigured with an application list, and the scene recognition module may query whether the application list includes the first process. If the application list includes the first process, the scene recognition module may determine a type to which the first process belongs. The application list comprises the process name of each application and the type of the application. By way of example, the application list may be as shown in Table 1:
TABLE 1
For example, if the name of the first process is hlive.exe, the scene recognition module may determine that the type to which the first process belongs is the video class. For another example, if the name of the first process is wechat.exe, the scene recognition module may determine that the type to which the first process belongs is the social class. It should be noted that table 1 is only an example; in fact, table 1 may further include the process names of more applications and the types to which they belong.
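The contents of table 1 are not reproduced here; as a purely hypothetical illustration based on the two process names mentioned above, the preconfigured application list could be represented as a simple lookup table:

#include <wchar.h>

// Hypothetical representation of the application list (table 1): each entry
// maps a process name to the application type it belongs to.
typedef struct {
    const wchar_t *processName;
    const wchar_t *appType;
} AppListEntry;

static const AppListEntry g_appList[] = {
    { L"hlive.exe",  L"video"  },
    { L"wechat.exe", L"social" },
    // ... process names and types of further applications ...
};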
It should be noted that the purpose of this step is to preliminarily determine the user scene in which the electronic device is located. The user scene in which the electronic device is located may include a video scene, a game scene, a social scene, an office scene, a browser scene, and so forth. The video scene may further include a video playing scene and a video browsing scene. The social scenes may further include text chat scenes, voice chat scenes, video chat scenes, and so on. The office scenes may further include document editing scenes, document browsing scenes, video conference scenes, and the like. The browser scene may include a web browsing scene, a video playing scene, and the like.
In this step, the type of the user scene where the electronic device is located may be determined by the type to which the first process belongs. For example, if the type to which the first process belongs is determined to be a video class, it may be determined that the electronic device is in a video playing scene; for another example, if it is determined that the type to which the first process belongs is a game class, it may be determined that the electronic device is in a game scene. In order to further analyze the user requirements, the scene recognition module may further analyze the specific scene where the electronic device is located by combining other parameters (such as a peripheral event, a GPU running state, etc.), so as to achieve an effect that the analysis result is more accurate, and the specific content is described in the following text.
S115, in response to receiving the operation of playing the video by the user, the video application sends a video playing instruction to the API module.
Specifically, the video application may send the video play instruction to the DirectX API of the API module. The video play instruction may include a cache address of the video.
S116, the API module reads the video file.
The API module can read the corresponding video file according to the cache address carried in the video playing instruction.
S117, the API module sends a decoding instruction to the graphics card driver.
S118, the graphics card driver sends a start instruction to the GPU.
S119, the GPU decodes.
Specifically, the GPU may perform decoding operations on the video file through the GPU video processing engine.
S120, the GPU reports the decoding event to the graphics card driver.
S121, the graphics card driver reports the decoding event to the OsEventDriver node.
S122, the OsEventDriver node reports the decoding event to the system probe module.
Specifically, the OsEventDriver node reports the decoding event to the audio/video status probe of the system probe module.
S123, the system probe module sends a decoding event to the scene recognition module.
S124, the scene recognition module sends an instruction 1 to the system probe module.
Instruction 1 instructs the system probe module to acquire the GPU occupancy rate of the first process. Instruction 1 may carry the name of the first process.
S125, the system probe module sends a request for acquiring the GPU occupancy rate of the first process to the process manager.
Wherein the request for obtaining the GPU occupancy of the focal process may include the name of the first process.
In an alternative embodiment, a request to obtain the GPU occupancy of the first process may be sent by an audio video status probe of the system probe module to the process manager.
S126, the process manager collects the GPU occupancy rate of the first process.
Specifically, the process manager may collect the GPU occupancy rate of the first process through the graphics kernel (graphics kernel) interface of the graphics card driver.
S127, the process manager sends the GPU occupancy rate of the first process to the system probe module.
The process manager may send the GPU occupancy of the first process to the audio-video status probe of the system probe module.
S128, the system probe module sends the GPU occupancy rate of the first process to the scene recognition engine.
S129, the scene recognition module judges whether the GPU occupancy rate of the first process is larger than 0.
If the GPU occupancy rate of the first process is greater than 0, S130 is executed.
Whether the first process uses the GPU during running can be determined from the GPU occupancy rate of the first process: if the GPU occupancy rate of the first process is greater than 0, the first process can be considered to be using the GPU during running; if the GPU occupancy rate of the first process is 0, the first process is not using the GPU during running.
S130, the scene recognition module sends an instruction 2 to the system probe module.
Wherein instruction 2 instructs the system probe module to acquire the GPU engine of the first process. The instruction 2 may carry the name of the first process.
S131, the system probe module sends a request for acquiring the GPU engine of the first process to the process manager.
The audio and video status probe of the system probe module can send a request for acquiring the GPU engine of the first process to the process manager. The request to acquire the GPU engine of the first process includes the name of the first process.
The GPU engines include a GPU 3D engine, a GPU copy engine, a GPU video encoding engine, and a GPU video processing engine. The GPU 3D engine is mainly responsible for processing 2D or 3D graphics. The GPU copy engine is mainly used for transferring data. The GPU video encoding engine is mainly used for encoding operations. The GPU video processing engine mainly performs decoding operations. In some embodiments, the GPU video processing engine may also be replaced by a GPU video decoder engine.
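As a hypothetical illustration of the four engine types just listed:

// Hypothetical enumeration of the GPU engine types described above.
typedef enum {
    GPU_ENGINE_3D,               // processes 2D or 3D graphics
    GPU_ENGINE_COPY,             // transfers data
    GPU_ENGINE_VIDEO_ENCODE,     // encoding operations
    GPU_ENGINE_VIDEO_PROCESSING  // decoding operations (VideoDecode on some GPUs)
} GpuEngineType;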
S132, the process manager acquires the GPU engine of the first process.
Specifically, the process manager may obtain the GPU engine of the first process through the graphics kernel interface of the graphics card driver.
S133, the process manager sends a message 1 to the system probe module, wherein the message 1 indicates that the GPU engine of the first process is a GPU video processing engine.
Specifically, the process manager may send the message to the audio/video status probe of the system probe module, and the audio/video status probe then forwards the message to the scene recognition module.
S134, the system probe module sends a message 1 to the scene recognition module.
S135, the scene recognition module judges whether the GPU engine of the first process is a GPU video processing engine.
If the GPU engine of the first process is the GPU video processing engine, S136 is executed; if the GPU engine of the first process is not the GPU video processing engine, the flow returns to S130 to continue monitoring the GPU engine used by the first process.
In step S114, the scene recognition engine has determined that the type to which the first process belongs is the video class, i.e., it can be determined that the electronic device is in a video scene. Through step S135, the scene recognition engine may determine the specific operation performed by the first process through the GPU, and further determine the specific operation of the user using the video application. For example, if the GPU engine of the first process is the GPU video processing engine, indicating that the first process is using the GPU for decoding operations, the user may be considered to be playing video using the video application. For another example, if the GPU engine of the first process is not the GPU video processing engine, indicating that the first process is not using the GPU for decoding operations, the user is most likely browsing video resources in the video application and has not yet played a video.
S136, the scene recognition module determines that the user scene is a video playing scene according to the process information of the first process.
The process information of the first process comprises information such as the name of the first process, the application type to which the first process belongs, the GPU occupancy rate of the first process, and a GPU engine used by the first process.
From the above, it can be seen that if the type of the first process (the focus process) is a video type, the GPU occupancy rate of the first process is greater than 0, and the GPU engine of the first process is a GPU video processing engine, it can be determined that the electronic device is in a video playing scene.
In addition to the manner of identifying a video playing scene provided in the foregoing embodiment of the present application, another manner of identifying a video playing scene is provided in the embodiment of the present application, as shown in fig. 6, and the method includes the following steps S211 to S223. Since the user scene in which the electronic device is located may change or switch frequently, it is accordingly necessary to dynamically identify the user scene. The recognition mode provided below can dynamically and accurately recognize whether the user scene is a video playing scene.
S211, the process manager detects that a new process (called a first process) is created by the Windows system.
Wherein, in response to a first operation of opening the first application by the user (for example, clicking an icon of the first application), the electronic device displays a first window, and the first window is a focus window. Accordingly, the electronic device creates a first process, which is a process of the first application. It is understood that the first process is a process corresponding to the first window. The first process is also called the focus process.
S212, the process manager sends the name of the first process (process name for short) and the process identification (process identifier, PID) to the system probe module.
The system probe module comprises an audio and video status probe. Specifically, the process manager sends the process name and PID of the first process to the audio/video status probe of the system probe module.
For example, when the iQIYI video application is opened, the name of the process that the system creates may be QyClient.exe.
Each process corresponds to a PID. The PID is the identity of a process: when an application program starts to run, the system automatically allocates a unique PID to the process of the application program. It should be noted that each run of an application corresponds to one process, so when the application process exits, its PID is reclaimed by the system. When the application begins running again, the system assigns another PID to the application's process.
It will be appreciated that in the case of creating a first process, the first process corresponds to a process name and a PID.
S213, the system probe module judges whether the first process is in the white list according to the process name.
In some embodiments, the whitelist may include information of a preset video class application and a corresponding available process name. In other embodiments, the whitelist may include process names for a plurality of video class applications.
In particular, according to the scheme of the application, if the process name of the first process is found in the white list, the first process can be judged to be the process of a video application.
The following takes the iQIYI video application as an example to illustrate the white list content:

<!-- iQIYI -->
<Application id="4001" name="iQIYI" sceneType="4">
    <process num="0" name="QyClient.exe" />     <!-- process name -->
    <process num="1" name="QyPlayer.exe" />     <!-- process name -->
    <process num="2" name="QyPlayerCore.exe" /> <!-- process name -->
</Application>
S214, if the first process is in the white list, the system probe module sends the PID of the first process to the scene recognition module.
S215, the scene recognition module receives the PID of the first process and stores the PID of the first process into the first array.
Wherein the first array may be referred to as array a.
In some embodiments, the PIDs for the first process may be stored in an array format. Illustratively, the PID of the first process may be saved into array A. For example, assuming that the PID of the first process is 112233, then array a= {112233}. Array a is not empty at this point.
In some embodiments, in addition to storing the PID of the first process in the first array, when it is detected that a new process is created and the new process is also in the white list, the PID of the new process may also be stored in the first array. For example, assuming that the PID of the new process created is 234, then array a= {112233, 234}. That is, the PIDs of the plurality of processes currently running may be stored in the first array, and the plurality of processes are all in the whitelist.
S216, the scene recognition module starts a thread to query the current processes using the GPU 3D and GPU video decoding (VideoDecode) capabilities.
If the first array is not empty, the scene recognition module opens a thread to query which processes are currently using GPU 3D and GPU video decoding capabilities.
S217, the scene recognition module calls an API interface to inquire about the current process using the GPU 3D and GPU video decoding capabilities.
The thread may call the Windows interface, i.e., the API interface, to query which processes are currently using GPU 3D and GPU VideoDecode capabilities.
S218, the API interface queries the current process using the GPU 3D and GPU video decoding capabilities.
S219, the API interface sends a return value to the scene recognition module, the return value comprising one or more PIDs.
Wherein the one or more PIDs include a PID of a process currently using GPU 3D and a PID of a process currently using GPU VideoDecode.
S220, the scene recognition module analyzes the API return value, stores the PID using the 3D capability into the second array, and stores the PID using the video decoding capability into the third array.
Wherein the second array may be referred to as array B and the third array may be referred to as array C.
In this embodiment, Windows API functions may be called to obtain counter data; in particular, numerous counters of system events and performance data can be accessed through the API set. The counter values may be obtained in real time, or the counter data may be read from a log file. For example, the API set may be used to monitor and record the current GPU occupancy, such as the current GPU 3D usage and the current GPU VideoDecode usage.
For example, system performance data such as the GPU occupancy may be obtained using the performance data helper (performance data helper, PDH) library. The PDH provides APIs for collecting current performance data, saving performance data to log files, and reading data from log files. PDH is a high-level API that can simplify the collection of performance counter data. It facilitates query analysis, metadata caching, matching counter instances between samples, computing formatted values from raw values, reading data from log files, and saving data to log files. To collect performance data using the PDH functions, the following steps may be performed: create a query; add counters to the query; collect the performance data; display the performance data; close the query.
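As a minimal user-mode sketch of these five steps (assuming the GPU Engine counter set exposed by modern Windows; the counter path follows the pattern shown later in the text, and error handling is trimmed):

#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY hQuery;
    PDH_HCOUNTER hCounter;

    PdhOpenQueryW(NULL, 0, &hQuery);                    // 1. create a query
    PdhAddCounterW(hQuery,                              // 2. add a counter (wildcard instance)
        L"\\GPU Engine(*engtype_VideoDecode)\\Utilization Percentage",
        0, &hCounter);

    PdhCollectQueryData(hQuery);                        // 3. collect performance data:
    Sleep(1000);                                        //    two samples are needed to
    PdhCollectQueryData(hQuery);                        //    compute a utilization rate

    DWORD bufSize = 0, itemCount = 0;                   // 4. display performance data
    PdhGetFormattedCounterArrayW(hCounter, PDH_FMT_DOUBLE,
                                 &bufSize, &itemCount, NULL);  // first call sizes the buffer
    PPDH_FMT_COUNTERVALUE_ITEM_W items = malloc(bufSize);
    PdhGetFormattedCounterArrayW(hCounter, PDH_FMT_DOUBLE,
                                 &bufSize, &itemCount, items);
    for (DWORD i = 0; i < itemCount; i++) {
        if (items[i].FmtValue.doubleValue > 0.0)        // the instance name carries the PID
            wprintf(L"%s : %.2f%%\n", items[i].szName,
                    items[i].FmtValue.doubleValue);
    }
    free(items);

    PdhCloseQuery(hQuery);                              // 5. close the query
    return 0;
}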
The following illustrates exemplary Windows interface functions called by the scene recognition module.
PDH_FUNCTION
PdhGetFormattedCounterArrayW(
    _In_    PDH_HCOUNTER hCounter,
    _In_    DWORD dwFormat,
    _Inout_ LPDWORD lpdwBufferSize,
    _Out_   LPDWORD lpdwItemCount,   // returns lpdwItemCount PDH_FMT_COUNTERVALUE_ITEM_W structures
    _Out_writes_bytes_opt_(*lpdwBufferSize) PPDH_FMT_COUNTERVALUE_ITEM_W ItemBuffer
);
The API returns lpdwItemCount PDH_FMT_COUNTERVALUE_ITEM_W structures, each representing a process that may be using GPU capabilities; the content of each PDH_FMT_COUNTERVALUE_ITEM_W structure is then traversed.
The PDH_FMT_COUNTERVALUE_ITEM_W structure described above is illustratively represented as:
typedef struct _PDH_FMT_COUNTERVALUE_ITEM_W {
    LPWSTR szName;                  // szName contains the process PID
    PDH_FMT_COUNTERVALUE FmtValue;
} PDH_FMT_COUNTERVALUE_ITEM_W, *PPDH_FMT_COUNTERVALUE_ITEM_W;
Looking further into the structure: FmtValue is of type PDH_FMT_COUNTERVALUE, and the GPU occupancy can be obtained from the doubleValue field of that structure.
typedef struct _PDH_FMT_COUNTERVALUE {
    DWORD CStatus;
    union {
        LONG longValue;
        double doubleValue;        // GPU occupancy
        LONGLONG largeValue;
        LPCSTR AnsiStringValue;
        LPCWSTR WideStringValue;
    };
} PDH_FMT_COUNTERVALUE, *PPDH_FMT_COUNTERVALUE;
Here, the doubleValue field represents the percentage of the GPU capability being used.
If the value of FmtValue is greater than 0, the value of szName is further parsed. For example, if szName is (L"pid_10308_luid_0x00000000_0x0000CA13_phys_0_eng_11_engtype_VideoProcessing"), parsing yields the process ID (10308), so that the process ID using the GPU 3D or GPU VideoDecode capability is finally obtained.
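A hedged sketch of that parse, assuming the instance-name prefix "pid_<number>_" shown above (the helper name is hypothetical):

#include <windows.h>
#include <stdio.h>

// Extracts the process ID from a GPU Engine counter instance name such as
// L"pid_10308_luid_0x00000000_0x0000CA13_phys_0_eng_11_engtype_VideoProcessing".
DWORD PidFromInstanceName(LPCWSTR szName)
{
    unsigned long pid = 0;
    swscanf(szName, L"pid_%lu_", &pid);  // the PID immediately follows "pid_"
    return (DWORD)pid;
}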
In some alternative embodiments, the Windows interface is invoked to obtain process arrays using GPU 3D and GPU VideoDecode capabilities, respectively.
If it is determined that the process uses GPU 3D, the process PID is stored in array B, resulting in a process PID array that uses the GPU 3D capability, e.g., array B = {112233, 112235, ...}.
If it is determined that the process uses GPU video decoding, the process PID is stored in array C, finally resulting in a process PID array that uses the GPU VideoDecode capability, e.g., array C = {112233, 112555, ...}.
S221, the scene recognition module judges whether the second array and the third array comprise the PID in the first array.
It will be appreciated that if it is determined that both the second array and the third array include the PIDs in the first array, then it may be determined that both the second array and the third array include the PIDs of the first process.
For example, if array A = {112233, 234}, array B = {112233, 112235, ...}, and array C = {112233, 112555, ...}, it may be determined that both the second array and the third array include a PID in the first array (112233). The PID (112233) is the process identification of the first process corresponding to the video application.
In one aspect, if the second array and the third array each include the PID of the first process, indicating that the first process is occupying the video playback resources of the GPU, then execution continues with S222 described below.
On the other hand, if the second array or the third array does not include the PID of the first process, which indicates that the first process corresponding to the video application does not currently occupy the video playing resource of the GPU, the following S223 is continuously performed.
S222, the scene recognition module determines that the current user scene is a video playing scene.
That is, if the second array and the third array both include the process PID of the first array, which indicates that the first process corresponding to the video application is occupying the video playing resource of the GPU, then it may be determined that the current user scene is a video playing scene, that is, a scene in which the user is watching video through the video APP.
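A minimal sketch of the S221/S222 judgment, assuming the three PID arrays are plain C arrays (the function names are hypothetical):

#include <windows.h>
#include <stddef.h>

static BOOL Contains(const DWORD *arr, size_t n, DWORD pid)
{
    for (size_t i = 0; i < n; i++)
        if (arr[i] == pid) return TRUE;
    return FALSE;
}

// Returns TRUE if some white-listed PID in array A appears in both the
// 3D array B and the VideoDecode array C, i.e. the focus process is both
// rendering and decoding video.
BOOL IsVideoPlayingScene(const DWORD *a, size_t na,
                         const DWORD *b, size_t nb,
                         const DWORD *c, size_t nc)
{
    for (size_t i = 0; i < na; i++)
        if (Contains(b, nb, a[i]) && Contains(c, nc, a[i]))
            return TRUE;
    return FALSE;
}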
S223, when the second array or the third array does not comprise the PID in the first array, the scene recognition module clears the second array and the third array, and continuously inquires the GPU information in the life cycle of the first process.
It can be understood that after determining that the first process does not use the GPU 3D and the GPU video decoding, the second array and the third array are cleared, so that the second array and the third array can be ensured to store the latest GPU occupation information each time of updating, and the video playing scene judging data based on the second array and the third array can be updated in time, so that whether the current scene is the video playing scene can be identified more accurately in real time.
After S223, the above-mentioned S217 to S221 are continuously executed, so that it can be monitored in real time whether the current scene of the electronic device is a video playing scene.
It can be appreciated that while the first process corresponding to the video application is alive, the video application may be running in the foreground of the electronic device and playing a video resource, in which case it may be considered a video playing scene; the video application may instead have gone to the background of the electronic device, or the video resource may not be playing, in which case it may be considered a non-video-playing scene. Therefore, during the life cycle of the first process corresponding to the video application (i.e., while the first process is alive and is the focus process), it is necessary to continuously query whether the first process is using the GPU 3D and GPU VideoDecode capabilities to determine in real time whether the current user scene is a video playing scene.
It should be noted that, when the user triggers the closing of the video application, or the video application is closed for other reasons, the first process corresponding to the video application is killed (cleaned up). In this case, the PID of the first process may be deleted from the first array (the first array is used to store the process identifiers of video applications), because after the video application is closed and its first process killed, it is no longer necessary to query whether the first process is using the GPU 3D and GPU VideoDecode capabilities.
In other alternative embodiments, if the process PID array A is not empty, a thread is started, and the thread in turn queries whether each process PID in the array is using the GPU. Specifically, the Windows interface is called, the return value of the Windows API interface is parsed to obtain the PIDs of the processes using GPU 3D or GPU VideoDecode, and the obtained PIDs are compared with the process PID to be queried. If an obtained PID is the same as the process PID to be queried, the current user scene is determined to be a video playing scene; if not, the current user scene is determined not to be a video playing scene.
The application thus provides two schemes. In one scheme, when a first process in the white list is started, it is immediately judged whether the first process is using the GPU 3D and video decoding capabilities, so as to judge whether the current scene is a video playing scene. In the other scheme, the currently running first process in the white list is first stored, the processes using the GPU 3D and video decoding capabilities are then determined, the first process is compared with those processes, and whether the first process uses the GPU 3D and video decoding capabilities is judged according to the comparison result, so as to judge whether the current scene is a video playing scene.
The process of calling the Windows interface to obtain the PIDs of the processes that are using the GPU 3D capability and the video decoding capability according to the embodiment of the present application is described in detail below.
The 3D array and the VideoDecode array may be obtained by calling the PdhGetFormattedCounterArrayW() interface twice.
PDH_FUNCTION
PdhGetFormattedCounterArrayW(
    _In_    PDH_HCOUNTER hCounter,
    _In_    DWORD dwFormat,
    _Inout_ LPDWORD lpdwBufferSize,
    _Out_   LPDWORD lpdwItemCount,
    _Out_writes_bytes_opt_(*lpdwBufferSize) PPDH_FMT_COUNTERVALUE_ITEM_W ItemBuffer
);
When the 3D array and the VideoDecode array are acquired, the two can be distinguished by the hCounter passed into the PDH function.
For 3D, hCounter may be constructed in the following manner:
PdhAddCounter(hQuery, L"\\GPU Engine(*engtype_3d)\\Utilization Percentage", 0, &h3DCounter);
For VideoDecode, hCounter may be constructed in the following manner:
PdhAddCounter(hQuery, L"\\GPU Engine(*engtype_Video*)\\Utilization Percentage", 0, &hCounter);
after hCounter corresponding to the 3D and hCounter corresponding to the video decoder are respectively transferred into pdhgetformat dcounterrrayw (), information about using the capabilities of the GPU 3D and the GPU video decoder can be obtained.
Wherein the API returns lpdwItemCount ppdh_fmt_counter value_item_w structures, representing the progress of possible use of GPU capabilities, traversing each ppdh_fmt_counter value_item_w structure content.
The PDH_FMT_COUNTERVALUE_ITEM_W structure described above is illustratively represented as:
typedef struct _PDH_FMT_COUNTERVALUE_ITEM_W {
    LPWSTR szName;
    PDH_FMT_COUNTERVALUE FmtValue;
} PDH_FMT_COUNTERVALUE_ITEM_W, *PPDH_FMT_COUNTERVALUE_ITEM_W;
Within the PDH_FMT_COUNTERVALUE_ITEM_W structure, FmtValue is of type PDH_FMT_COUNTERVALUE, and the GPU usage percentage can be obtained from the doubleValue field of that structure.
typedef struct _PDH_FMT_COUNTERVALUE {
    DWORD CStatus;
    union {
        LONG longValue;
        double doubleValue;        // percentage of the GPU being used
        LONGLONG largeValue;
        LPCSTR AnsiStringValue;
        LPCWSTR WideStringValue;
    };
} PDH_FMT_COUNTERVALUE, *PPDH_FMT_COUNTERVALUE;
If FmtValue.doubleValue is greater than 0, the value of szName (e.g., L"pid_10308_luid_0x00000000_0x0000CA13_phys_0_eng_11_engtype_VideoProcessing") is further parsed and the process PID (10308) is extracted, finally obtaining the process PID using the GPU 3D or GPU VideoDecode capability.
That is, after querying the counter data through the API and parsing the corresponding structures, the PIDs of the processes currently using the GPU 3D and GPU VideoDecode capabilities are stored into the corresponding 3D array and VideoDecode array according to the query results.
The following illustrates an exemplary video playing scene recognition process in connection with an actual application scene.
White list content:
<!-- iQIYI -->
<Application id="4001" name="iQIYI" sceneType="4">
    <process num="0" name="QyClient.exe" />
    <process num="1" name="QyPlayer.exe" />
    <process num="2" name="QyPlayerCore.exe" />
</Application>
The process manager detects that processes with the process names QyClient.exe, QyPlayer.exe, and QyPlayerCore.exe are running and judges that these three processes exist in the white list, so the process manager sends the process names and PIDs to the audio and video status probe.
The audio and video status probe stores the process PIDs into array A: A = {2988, 12668, 2188}.
The audio and video status probe starts a thread to query the current processes using the GPU 3D and GPU VideoDecode capabilities. Table 1' below schematically shows the query results:
TABLE 1'
The audio and video status probe stores the PIDs of the current processes using the GPU 3D and GPU VideoDecode capabilities into the corresponding 3D array B and VideoDecode array C according to the query results.
For example, array B = {1572, 2988, 12668, 2188} and array C = {2188}.
It can be judged that a PID in array A (2188) exists in both the 3D array B and the VideoDecode array C, so the current user scene can be determined to be a video playing scene.
It should be noted that the above steps are described only by taking an electronic device in a video playing scene as an example. Indeed, the electronic device may also be in other user scenes (e.g., game scenes, office scenes, social scenes, video browsing scenes, etc.).
In an alternative embodiment, if the scene recognition engine determines that the type of the first process (focus process) belongs to the game class, the power mode of the CPU is changed to the game mode (game mode), the GPU occupancy rate of the first process is greater than 0, and the GPU engine of the first process is a GPU 3D engine, it may be determined that the electronic device is in the game scene.
Wherein the power state probes of the system probe module may send a request to the power manager to subscribe to a power mode change event. The power manager may report the power mode change event to a power state probe of the system probe module when the power module transitions to a game mode (game mode). As such, the scene recognition engine can determine whether the power mode of the CPU is a game mode through the power mode change event.
In addition, the process of the scene recognition engine obtaining the type of the first process may refer to S101, S102, S105, S106 to S114 in fig. 4, and the process of the scene recognition engine determining whether the GPU occupancy rate of the first process is greater than 0 and whether the GPU engine of the first process is a GPU 3D engine refers to S124 to S135. The difference is that the video application is replaced with a game application, and the description thereof is omitted.
The above description illustrates how to identify the user scene where the electronic device is located, after determining the user scene where the electronic device is located, the electronic device may further perform resource scheduling according to the user scene where the electronic device is located and the system load, so that the CPU of the electronic device may operate according to the actual requirement of the user, and the effect of avoiding the CPU from having excessive performance under the condition of not affecting the user experience is achieved.
Next, the resource scheduling process of the electronic device is described by taking an electronic device in a video playing scene as an example. As shown in fig. 7, the resource scheduling method provided by the embodiment of the present application further includes the following steps:
S301, the scene recognition module sends scene information to the base policy matching manager.
The scene information is used to indicate the user scene in which the electronic device is located. For example, the electronic device may pre-assign unique identifiers to different user scenes, and the scene information may include the unique identifier of the user scene. For example, one identifier (e.g., V01) may indicate that the electronic device is in a video playing scene; another identifier (e.g., V02) may indicate that the electronic device is in a video browsing scene.
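As a hypothetical illustration of such identifiers:

// Hypothetical scene identifiers, following the V01/V02 examples above.
typedef enum {
    SCENE_VIDEO_PLAYING,   // e.g. "V01"
    SCENE_VIDEO_BROWSING,  // e.g. "V02"
    SCENE_GAME,
    SCENE_OFFICE,
    SCENE_SOCIAL
} UserScene;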
Regarding the process of determining the user scene where the electronic device is located by the scene recognition module, refer to S101 to S136 specifically, which will not be described herein.
S302, the base policy matching manager obtains a scheduling policy 1 according to the scene information.
The scheduling policy 1 includes an OS scheduling policy 1 and a CPU power consumption scheduling policy 1. The OS scheduling policy 1 includes a process priority A of the first process and a first I/O priority. Wherein the scheduling policy 1 may also be referred to as a second scheduling policy.
The priority of the first process is used to measure the ability of the first process to preempt the CPU: the higher the priority, the more preferentially the first process's demand for CPU resources can be satisfied, and therefore the smoother the first process runs. In an alternative embodiment, the priority of the focus process includes, in order from high to low, the levels: real-time, high, above normal, normal, below normal, and low. The priority of the first process can also be understood as the focus process priority (focus process priority, FPP).
The I/O priority of the first process is used to measure the responsiveness of the system to the disk and I/O requests of the first process: the higher the priority, the more responsive the system is to the disk and I/O requests of the first process, i.e., the faster the response speed. In an alternative embodiment, the focus process I/O priority includes, in order from high to low, the levels: critical, high, normal, low, and very low. The I/O priority of the first process can also be understood as the focus process I/O priority (focus process IO priority, FPP_IO).
The CPU power consumption scheduling policy 1 includes a first PL1, a first PL2, and a first EPP of the CPU.
It can be seen that the scheduling policy 1 may adjust the process priority, the I/O priority and the CPU power consumption of the first process.
In an alternative embodiment, the electronic device may be preconfigured with various user scenarios and their corresponding scheduling policies. For example, the correspondence between various user scenarios and their corresponding scheduling policies may be as shown in table 2.
For example, if it is determined that the user scenario in which the electronic device is located is a text chat scenario in a social scenario, the scheduling policy 1 includes: the process priority A of the first process is normal, the first I/O priority of the first process is normal, the first PL1 of the CPU is 12W, the first PL2 is 60W, and the first EPP is 220. It should be noted that the scheduling policy in table 2 is only an example, and in practical application, the values of the process priority, the I/O priority, PL1, PL2, and EPP may not coincide with the values in table 2. In addition, table 2 only shows the scheduling policies of a partial scenario, and the actual electronic device may also configure more scheduling policies than table 2.
It should be noted that the above scheduling policies are the scheduling policies used by default when the electronic device is in a light-load state; they may be configured according to load characteristics and CPU power consumption statistics, where the electronic device counts in advance the CPU power consumption of each application under the corresponding load characteristics. Therefore, the scheduling policy 1 obtained by the base policy matching manager can serve as a reference for the policy actually used for scheduling, and the electronic device can further obtain the actual scheduling policy from scheduling policy 1 in combination with the actual system load.
TABLE 2
S303, the base policy matching manager sends the scheduling policy 1 to the scene recognition module.
S304, the scene recognition module sends the scheduling policy 1 and the scene information to the load manager.
That is, after the base policy matching manager determines the scheduling policy 1, the scheduling policy 1 is forwarded to the load manager through the scene recognition module. In an alternative embodiment, the scene recognition module may send the scheduling policy 1 and the scene information to the load manager in two separate steps.
S305, the load manager sends a request for acquiring the system load to the system probe module.
The system load is the average number of processes in a runnable state and processes in an uninterruptible state. A process in the runnable state is a process that is using or waiting to use the CPU. A process in the uninterruptible state is a process waiting for I/O access (e.g., disk I/O).
S306, the system probe module sends a request for acquiring the system load to the process manager.
As shown in fig. 3, the system probe module includes a system load probe, and the request to acquire the system load may be sent by the system load probe to the process manager. In an alternative embodiment, the OsEventDriver node may also forward the system load probe's request to acquire the system load to the process manager (not shown in the figure).
S307, the process manager acquires the system load.
S308, the process manager sends a system load to the system probe module.
Specifically, the process manager may send the system load to the system load probe of the system probe module. In an alternative embodiment, the system load may also be forwarded to the system load probe by the OsEventDriver node (not shown in the figure).
S309, the system probe module sends the system load to the load manager.
S310, the load manager obtains a scheduling policy 2 according to the system load, the scene information, and the scheduling policy 1.
The scheduling policy 2 may include an OS scheduling policy 2 (which may also be referred to as the OS scheduling policy) and a CPU power consumption scheduling policy 2 (which may also be referred to as a first sub-policy). The CPU power consumption scheduling policy 2 includes PL1', PL2', and EPP'. PL1' is PL1 as adjusted by the load manager and may also be referred to as the second PL1; PL2' is PL2 as adjusted by the load manager and may also be referred to as the second PL2; EPP' is the EPP as adjusted by the load manager and may also be referred to as the second EPP. The scheduling policy 2 may also be referred to as a first scheduling policy.
In an alternative embodiment, the load manager may divide the system load into three levels: light load, medium load, and heavy load. The electronic device may be preconfigured with various user scenes and their corresponding adjustment policies. For example, the adjustment policies may be as shown in table 3:
TABLE 3
For example, suppose the electronic device is in a video playing scene. According to Table 2, scheduling policy 1 is: the process priority of the video application process is normal, the I/O priority of the video application process is normal, PL1 (i.e., the first PL1) of the CPU is 18W, PL2 (i.e., the first PL2) is 60W, and EPP (i.e., the first EPP) is 200. In this case, if the system load is light, there is no need to adjust the policy; that is, scheduling policy 2 is identical to scheduling policy 1. If the system load is medium, the process priority of the video application process remains normal, its I/O priority remains normal, PL1 is increased by 22W from 18W, PL2 is increased by 30W from 60W, and EPP is decreased by 50 from 200; that is, scheduling policy 2 is: process priority of the video application process normal, I/O priority normal (OS scheduling policy 2), PL1' 40W, PL2' 90W, EPP' 150 (CPU power consumption scheduling policy 2). If the system load is heavy, the process priority of the video application process remains normal, its I/O priority is raised to high, PL1 is increased by 37W from 18W, PL2 is increased by 45W from 60W, and EPP is decreased by 100 from 200; that is, scheduling policy 2 is: process priority of the video application process normal, I/O priority high, PL1' 55W, PL2' 105W, EPP' 100.
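The adjustment arithmetic above can be summarized in a small sketch. The struct and function names are hypothetical, and the delta table encodes only the video-playing-scene example values from the text (+22W/+30W/-50 for medium load, +37W/+45W/-100 for heavy load), not the full Table 3.

```cpp
enum class LoadLevel { Light, Medium, Heavy };

// Hypothetical representation of a CPU power consumption scheduling policy.
struct CpuPowerPolicy {
    int pl1Watts;  // long-duration turbo power limit PL1
    int pl2Watts;  // short-duration turbo power limit PL2
    int epp;       // energy efficiency ratio EPP, 0..255
};

// Derive scheduling policy 2 from scheduling policy 1 for the video scene,
// using the example deltas from the text.
CpuPowerPolicy AdjustForLoad(CpuPowerPolicy base, LoadLevel load) {
    switch (load) {
        case LoadLevel::Medium:
            return { base.pl1Watts + 22, base.pl2Watts + 30, base.epp - 50 };
        case LoadLevel::Heavy:
            return { base.pl1Watts + 37, base.pl2Watts + 45, base.epp - 100 };
        case LoadLevel::Light:
        default:
            return base;  // light load: scheduling policy 2 == policy 1
    }
}

// Example: base {18, 60, 200} under heavy load yields {55, 105, 100}.
```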
It should be noted that Table 3 shows only some user scenes and their corresponding adjustment policies; the electronic device may be configured with more adjustment policies than shown in Table 3, which is not limited herein.
In an alternative embodiment, a specific mapping relationship (for example, a specific formula) holds between the system load and the CPU power consumption, and the load controller may also calculate the CPU power consumption from the system load using that formula, so as to obtain scheduling policy 2.
S311, the load controller sends the OS scheduling policy 2 to the scheduling executor.
The OS scheduling policy 2 includes a process priority B of the first process and a second I/O priority.
S312, the scheduling executor sends an instruction 1 to the I/O manager.
Instruction 1 carries the second I/O priority of the first process. In addition, as shown in FIG. 3, the scheduling executor includes an I/O priority interface, from which instruction 1 may be sent to the I/O manager. Instruction 1 may also be referred to as the second instruction.
S313, in response to the instruction 1, the I/O manager adjusts the I/O priority of the first process.
That is, the I/O manager may adjust the I/O priority of the first process to the second I/O priority. This ensures that the first process can perform I/O access preferentially, reducing its response time during I/O access.
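Windows does not document a public per-process I/O priority setter, so as a hedged illustration only, the documented per-handle I/O priority hint below (SetFileInformationByHandle with FileIoPriorityHintInfo) shows the general mechanism; the patent's I/O manager may use a different, internal interface.

```cpp
#include <windows.h>

// Lower the I/O priority hint for I/O issued on a specific file handle.
// This is a documented per-handle mechanism, not the patent's per-process one.
bool SetLowIoPriorityHint(HANDLE file) {
    FILE_IO_PRIORITY_HINT_INFO hint{};
    hint.PriorityHint = IoPriorityHintLow;
    return SetFileInformationByHandle(file, FileIoPriorityHintInfo,
                                      &hint, sizeof(hint)) != FALSE;
}
```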
S314, the scheduling executor sends an instruction 2 to the process manager.
Instruction 2 carries the process priority B of the first process. In addition, as shown in FIG. 3, the scheduling executor also includes a process priority interface, from which instruction 2 may be sent to the process manager. Instruction 2 may also be referred to as the first instruction.
S315, in response to receiving the instruction 2, the process manager adjusts the process priority of the first process.
That is, the process manager may adjust the process priority of the first process to process priority B. The first process can then preferentially occupy CPU resources, which guarantees its smooth operation.
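Raising a process's scheduling priority on Windows can be illustrated with the documented SetPriorityClass call. The priority class chosen below is an assumption, since the text does not state which class "process priority B" maps to.

```cpp
#include <windows.h>

// Raise the priority class of the process identified by pid.
// ABOVE_NORMAL_PRIORITY_CLASS is an illustrative choice.
bool RaiseProcessPriority(DWORD pid) {
    HANDLE process = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!process) return false;
    BOOL ok = SetPriorityClass(process, ABOVE_NORMAL_PRIORITY_CLASS);
    CloseHandle(process);
    return ok != FALSE;
}
```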
Therefore, by adjusting the I/O priority and the process priority of the first process, the I/O access of the first process and the consumption of CPU resources can be preferentially ensured, so that the first process can normally and smoothly run, and the user is ensured to have good experience.
It should be noted that there is no strict order between S312 and S314: S312 may be executed before S314, S314 may be executed before S312, or S312 and S314 may be executed simultaneously.
S316, the load controller sends the CPU power consumption scheduling policy 2 to the chip policy fusion device.
S317, the chip policy fusion device determines whether the chip platform type of the CPU is AMD or Intel.
AMD CPUs and Intel CPUs differ in how CPU power consumption is adjusted, so the two must be distinguished. If the chip platform type of the CPU is AMD (which may also be referred to as the first type), S318 is executed; if the chip platform type of the CPU is Intel (which may also be referred to as the second type), S325 is executed.
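One common way to distinguish the two platforms is the CPUID vendor string ("GenuineIntel" vs. "AuthenticAMD"). Whether the chip policy fusion device actually uses CPUID is not stated in the text, so the sketch below is illustrative only.

```cpp
#include <intrin.h>
#include <cstring>
#include <string>

// Read the 12-byte CPU vendor string via CPUID leaf 0 (MSVC intrinsic).
std::string CpuVendor() {
    int regs[4] = {0};
    __cpuid(regs, 0);
    char vendor[13] = {0};
    std::memcpy(vendor + 0, &regs[1], 4);  // EBX
    std::memcpy(vendor + 4, &regs[3], 4);  // EDX
    std::memcpy(vendor + 8, &regs[2], 4);  // ECX
    return vendor;  // "GenuineIntel" or "AuthenticAMD"
}

bool IsAmdPlatform() { return CpuVendor() == "AuthenticAMD"; }
```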
S318, the chip policy fusion device sends the CPU power consumption scheduling policy 2 to the scheduling executor.
The CPU power consumption scheduling policy 2 includes PL1', PL2', and EPP'.
S319, the scheduling executor sends instruction 3 to the OS2SOC driver node.
Instruction 3 carries PL1' and PL2'; that is, instruction 3 is used to adjust PL1 and PL2 of the CPU. Instruction 3 may also be referred to as the third instruction.
In an alternative embodiment, instruction 3 may be sent to the OS2SOC driver node by the CPU power consumption scheduling interface of the scheduling executor.
S320, the OS2SOC driver node sends instruction 3 to the CPU.
S321, in response to instruction 3, the CPU adjusts PL1 and PL2.
That is, the CPU may adjust PL1 to PL1' and PL2 to PL2'.
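On Intel platforms, PL1 and PL2 are conventionally exposed through the RAPL register MSR_PKG_POWER_LIMIT (0x610), which is writable only from kernel mode (here, presumably via the OS2SOC driver node). The sketch below shows only the bit-field encoding and assumes the common 1/8 W power unit; real code must read the unit from MSR_RAPL_POWER_UNIT and perform the write in a driver. This is not the patent's driver interface.

```cpp
#include <cstdint>

// Illustrative encoding of PL1/PL2 (in watts) into the 64-bit
// MSR_PKG_POWER_LIMIT layout: bits 14:0 PL1, bit 15 PL1 enable,
// bits 46:32 PL2, bit 47 PL2 enable. Time windows omitted for brevity.
uint64_t EncodePkgPowerLimit(uint32_t pl1Watts, uint32_t pl2Watts) {
    const uint64_t unitsPerWatt = 8;  // assumed 1/8 W RAPL power unit
    uint64_t pl1 = (pl1Watts * unitsPerWatt) & 0x7FFF;
    uint64_t pl2 = (pl2Watts * unitsPerWatt) & 0x7FFF;
    return pl1 | (1ull << 15)           // enable PL1
         | (pl2 << 32) | (1ull << 47);  // enable PL2
}
```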
S322, the scheduling executor sends an instruction 4 to the power manager.
Wherein instruction 4 carries EPP'. That is, instruction 4 is used to adjust the EPP of the CPU. Instruction 4 may also be referred to as a fourth instruction.
S323, the power manager sends an instruction 4 to the CPU.
S324, in response to the instruction 4, the CPU adjusts EPP.
That is, the CPU may adjust EPP to EPP'.
S325, the chip policy fusion device determines a dynamic tuning technology policy number according to the CPU power consumption scheduling policy 2.
Dynamic tuning technology (DTT) is a technology from Intel® for automatically and dynamically distributing power consumption between an Intel® processor and an Intel® discrete graphics card, so as to optimize performance and extend battery life; it can improve the performance of the CPU and the GPU and balance power for intelligent hybrid workloads.
It will be appreciated that there may be a mapping relationship between the DTT policy number and the CPU power consumption scheduling policy 2. A DTT policy table is constructed in the BIOS, and any CPU power consumption scheduling policy 2 can be mapped, through its parameters (PL1', PL2', and EPP'), to a DTT policy number in the DTT policy table, as shown in Table 4.
The DTT policy number may be used to identify a DTT policy (which may also be referred to as the second sub-policy); the DTT policy corresponding to the DTT policy number is used to adjust PL1_MINI, PL1_MAX, PL2, PL2_TIME, and EPO Gear of the CPU. PL1_MINI is the minimum value of PL1, PL1_MAX is the maximum value of PL1, and PL2_TIME is the duration of PL2. The energy efficiency-performance optimization gear (Energy Performance Optimize Gear, EPO Gear) indicates how strongly DTT adjusts the CPU energy efficiency ratio (EPP); its value ranges from 1 to 5, and the larger the value, the more the adjustment of EPP favors energy efficiency, while the smaller the value, the more it favors performance.
TABLE 4
Note that Table 4 shows only part of the correspondence between PL1', PL2', EPP' and DTT policy numbers; in practice the table may include more information than shown. For example, if the CPU power consumption scheduling policy 2 indicates that PL1' is -1, PL2' is -1, and EPP' is -1, the DTT policy number may be determined to be 0, which corresponds to a PL1_MINI of 30, a PL1_MAX of 40, a PL2 of 95, a PL2_TIME of 28, and an EPO Gear of 3.
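The table lookup described above might be modeled as in the sketch below. Only the single row given in the example (-1, -1, -1 maps to policy 0) is taken from the text; the struct names and container layout are assumptions.

```cpp
#include <optional>

struct PolicyKey { int pl1, pl2, epp; };  // PL1', PL2', EPP'

struct DttPolicy {
    int number;   // DTT policy number
    int pl1Min;   // PL1_MINI
    int pl1Max;   // PL1_MAX
    int pl2;      // PL2
    int pl2Time;  // PL2_TIME
    int epoGear;  // EPO Gear, 1..5
};

struct DttEntry { PolicyKey key; DttPolicy policy; };

// Only the example row from the text; a real BIOS table would hold more rows.
constexpr DttEntry kDttTable[] = {
    { {-1, -1, -1}, {0, 30, 40, 95, 28, 3} },
};

std::optional<DttPolicy> LookupDttPolicy(const PolicyKey& key) {
    for (const auto& entry : kDttTable) {
        if (entry.key.pl1 == key.pl1 && entry.key.pl2 == key.pl2 &&
            entry.key.epp == key.epp)
            return entry.policy;
    }
    return std::nullopt;
}
```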
S326, the chip policy fusion device sends the DTT policy number to the scheduling executor.
In an alternative embodiment, the chip policy fusion device may also send the DTT policy (i.e., the second sub-policy) corresponding to the DTT policy number directly to the scheduling executor.
S327, the scheduling executor sends the DTT policy number to the Intel DTT driver.
S328, the Intel DTT driver sends the DTT policy number to the CPU.
It will be appreciated that the Intel DTT driver may send a DTT policy number to the CPU via the BIOS.
S329, the CPU runs based on the DTT policy number.
Therefore, if the chip platform type of the CPU is AMD, the chip policy fusion device can send an instruction for adjusting EPP to the power manager through the scheduling executor, and the power manager adjusts the EPP of the CPU. In addition, the scheduling executor may send an instruction for adjusting PL1 and PL2 to the OS2SOC driver node, which in turn adjusts PL1 and PL2 of the CPU.
If the chip platform type of the CPU is Intel, the chip policy fusion device can determine a DTT policy number from the CPU power consumption scheduling policy 2 and send it through the scheduling executor to the Intel DTT driver, which passes it via the BIOS to the CPU, so that the CPU runs based on the DTT policy number, thereby achieving the effect of adjusting power consumption.
It can be understood that the present application can acquire a focus window change event and first information (including process information of the focus process, the focus process's occupancy of the GPU, peripheral events, the power mode, etc.), determine the current user scene of the electronic device from the focus window change event and the first information, determine the first scheduling policy in combination with the user scene and the system load of the electronic device, and adjust the process priority of the focus process, its I/O priority, and the power consumption of the CPU based on the first scheduling policy, thereby meeting user requirements (ensuring smooth operation of the focus process) while reducing the energy consumption of the electronic device.
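As a hedged sketch of how per-process GPU occupancy might be observed via PDH (the counter path, sampling interval, and name parsing are assumptions; the patent's interface details are given only in claim 1): GPU engine utilization counters carry the owning PID in their instance names, e.g. "pid_1234_..._engtype_3D".

```cpp
#include <windows.h>
#include <pdh.h>
#include <string>
#include <vector>
#pragma comment(lib, "pdh.lib")

// Return instance names of GPU 3D-engine counters whose utilization is
// non-zero; the owning PID is embedded in each name ("pid_NNNN_...").
std::vector<std::wstring> QueryBusy3dEngineInstances() {
    std::vector<std::wstring> busy;
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER counter = nullptr;
    if (PdhOpenQueryW(nullptr, 0, &query) != ERROR_SUCCESS) return busy;
    if (PdhAddEnglishCounterW(query,
            L"\\GPU Engine(*engtype_3D)\\Utilization Percentage",
            0, &counter) == ERROR_SUCCESS) {
        PdhCollectQueryData(query);
        Sleep(1000);  // rate counters need two samples
        PdhCollectQueryData(query);

        DWORD bufSize = 0, itemCount = 0;
        // First call probes the required buffer size.
        PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE,
                                     &bufSize, &itemCount, nullptr);
        std::vector<BYTE> buf(bufSize);
        auto items = reinterpret_cast<PPDH_FMT_COUNTERVALUE_ITEM_W>(buf.data());
        if (PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bufSize,
                                         &itemCount, items) == ERROR_SUCCESS) {
            for (DWORD i = 0; i < itemCount; ++i) {
                if (items[i].FmtValue.doubleValue > 0.0)
                    busy.emplace_back(items[i].szName);
            }
        }
    }
    PdhCloseQuery(query);
    return busy;
}
```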
The embodiment of the application also provides electronic equipment, which comprises a memory and one or more processors.
Wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the functions or steps of the method embodiments described above. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 1.
The electronic device in the embodiment of the present application may be a notebook computer or a personal computer (personal computer, PC), etc., and the embodiment of the present application is not particularly limited.
The embodiment of the application also provides a chip system, as shown in fig. 8, which comprises at least one processor 801 and at least one interface circuit 802. The processor 801 and the interface circuit 802 may be interconnected by wires. For example, interface circuit 802 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, interface circuit 802 may be used to send signals to other devices (e.g., processor 801). The interface circuit 802 may, for example, read instructions stored in a memory and send the instructions to the processor 801. The instructions, when executed by the processor 801, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
The embodiment of the application also provides a computer storage medium, which includes computer instructions that, when run on the electronic device, cause the electronic device to execute the functions or steps performed by the electronic device in the above method embodiments.
The embodiment of the application also provides a computer program product which, when run on a computer, causes the computer to execute the functions or steps performed by the electronic device in the above method embodiments.
It may be understood that, in order to implement the above-mentioned functions, the electronic device provided in the embodiment of the present application includes corresponding hardware structures and/or software modules for executing each function. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application can divide the functional modules of the electronic device according to the method example, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (14)
1. A resource scheduling method, which is applied to an electronic device, wherein the electronic device comprises a graphics processor GPU and a central processing unit CPU, the method comprising:
responding to a first operation of starting a first application by a user, and displaying a first window by the electronic equipment, wherein the first window is a focus window;
acquiring process information of a first process corresponding to the first window, wherein the process information of the first process comprises a first process name and a first process identifier, and the first process name corresponds to the first application;
if the first application is determined to be an application program in a preset whitelist according to the first process name, storing the first process identifier into a first array; if it is detected that the first process has been cleaned up, deleting the first process identifier from the first array; the preset whitelist comprises one or more video-class application programs;
Under the condition that the first array is not empty, a Windows interface is called, and GPU occupation information is obtained, wherein the GPU occupation information comprises process identifiers corresponding to a second process, and the second process is a process which uses the three-dimensional graphics capability and the video decoding capability of the GPU;
when the process identifier corresponding to the second process does not contain the first process identifier, determining that the user scene where the electronic equipment is located is not a video playing scene;
when the process identifier corresponding to the second process comprises the first process identifier, determining that the user scene where the electronic equipment is located is a video playing scene;
determining a first scheduling policy according to the system load of the electronic equipment and the video playing scene;
adjusting the resource allocation of the electronic equipment according to the first scheduling policy;
the step of calling the Windows interface to acquire the GPU occupation information comprises the following steps:
invoking the Windows interface, and returning N first structures, wherein each first structure in the N first structures comprises process identification PID information and a counter value;
traversing each first structure body in the N first structure bodies to acquire the counter value in each first structure body;
When the counter value in the second structure body is greater than zero, PID information in the second structure body is acquired, wherein the PID information comprises an identification of a process that is using the three-dimensional graphics capability and the video decoding capability of the GPU, and the second structure body is one of the N first structure bodies;
and analyzing the PID information to obtain the GPU occupation information.
2. The method of claim 1, wherein, prior to invoking the Windows interface to obtain the GPU occupation information, the method further comprises:
creating a first thread, wherein the first thread is used for inquiring the GPU occupation information;
the step of calling the Windows interface to acquire the GPU occupation information comprises the following steps:
and responding to the successful creation of the first thread, calling a Windows interface, and acquiring the GPU occupation information.
3. The method of claim 2, wherein the creating the first thread comprises:
when the first array is judged not to be empty, creating the first thread;
the first array is used for storing process identifiers of application programs belonging to the preset white list.
4. The method of claim 1, wherein, after the Windows interface is invoked to obtain the GPU occupation information, the method further comprises:
Analyzing the GPU occupation information to obtain a process identifier corresponding to the second process, wherein the process identifier corresponding to the second process comprises an identifier of a process using the three-dimensional graphics capability of the GPU and an identifier of a process using the video decoding capability of the GPU;
storing an identification of the process that is using the three-dimensional graphics capability of the GPU to a second array;
an identification of the process that is using the video decoding capabilities of the GPU is stored to a third array.
5. The method of claim 4, wherein when the process identifier corresponding to the second process includes the first process identifier, determining that the user scene in which the electronic device is located is a video playing scene includes:
and if the second array and the third array both comprise the process identification in the first array, determining that the user scene where the electronic equipment is located is a video playing scene.
6. The method of claim 5, wherein the method further comprises:
and if the second array or the third array does not comprise the process identification in the first array, resetting the second array and the third array.
7. The method of claim 6, wherein after the zeroing the second array and the third array, the method further comprises:
periodically calling the Windows interface to acquire the GPU occupation information in the life cycle of the first process; and determining whether the user scene where the electronic equipment is located is a video playing scene or not according to the GPU occupation information and the first process identifier.
8. The method of claim 1, wherein the Windows interface is a performance data helper (PDH) function.
9. The method according to any one of claims 1 to 8, wherein said determining a first scheduling policy from said system load and said video playback scenario comprises:
determining a second scheduling policy according to the video playing scene, wherein the second scheduling policy comprises a process priority A of the first process, a first input/output I/O priority, a first long-duration turbo power consumption PL1 of the CPU, a first short-duration turbo power consumption PL2, and a first energy efficiency ratio EPP;
obtaining the first scheduling policy according to the system load, the video playing scene and the second scheduling policy, wherein the first scheduling policy at least comprises a process priority B, a second I/O priority of the first process, a second PL1, a second PL2 and a second EPP of the CPU;
wherein the system load is greater than a preset first value, the process priority B is higher than or equal to the process priority A, the second I/O priority is higher than or equal to the first I/O priority, the second PL1 is greater than the first long-duration turbo power consumption PL1, the second PL2 is greater than the first short-duration turbo power consumption PL2, and the second EPP is smaller than the first energy efficiency ratio EPP.
10. The method of any one of claims 1 to 8, wherein the first scheduling policy comprises an operating system, OS, scheduling policy and a CPU power consumption scheduling policy;
wherein the adjusting the resource allocation of the electronic device according to the first scheduling policy includes:
adjusting the process priority and the input/output (I/O) priority of the first process according to the OS scheduling strategy;
and adjusting the power consumption of the CPU according to the CPU power consumption scheduling strategy.
11. The method according to claim 10, wherein the method further comprises:
and determining the type of the chip platform of the CPU, wherein the type of the chip platform comprises a first type and a second type.
12. The method of claim 11, wherein the CPU power consumption scheduling policy comprises a first sub-policy and a second sub-policy, the second sub-policy being a dynamic tuning technology (DTT) policy determined from the first sub-policy;
Wherein the adjusting the power consumption of the CPU according to the CPU power consumption scheduling policy includes:
if the chip platform type is the first type, adjusting the power consumption of the CPU according to the first sub-strategy;
and if the chip platform type is the second type, adjusting the power consumption of the CPU according to the second sub-strategy.
13. An electronic device, the electronic device comprising: a memory and one or more processors;
wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 12.
14. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 1 to 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN2022105308598 | 2022-05-16 | |
CN202210530859 | 2022-05-16 | |
Publications (2)
Publication Number | Publication Date
---|---
CN116028209A (en) | 2023-04-28
CN116028209B (en) | 2023-10-20
Family
ID=86077207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210741197.9A (Active) | Resource scheduling method, electronic equipment and storage medium | 2022-05-16 | 2022-06-28
Country Status (1)
Country | Link
---|---
CN | CN116028209B (en)
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102033740A (en) * | 2009-09-29 | 2011-04-27 | 宏碁股份有限公司 | Application program connection module, method and computer system thereof |
CN106383741A (en) * | 2016-09-13 | 2017-02-08 | 宇龙计算机通信科技(深圳)有限公司 | Application processing method and mobile device |
CN107450998A (en) * | 2017-07-31 | 2017-12-08 | 北京三快在线科技有限公司 | Information real-time synchronization method, device, medium and electronic equipment between more applications |
CN107608678A (en) * | 2017-08-22 | 2018-01-19 | 深圳传音控股有限公司 | The determination methods and mobile terminal of relevance between process |
CN110413365A (en) * | 2019-07-29 | 2019-11-05 | 锐捷网络股份有限公司 | A kind of fusion dispatching method and device |
CN114443256A (en) * | 2022-04-07 | 2022-05-06 | 荣耀终端有限公司 | Resource scheduling method and electronic equipment |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 