CN113128694B - Method, device and system for data acquisition and data processing in machine learning


Info

Publication number: CN113128694B (grant); application number: CN201911411688.1A
Authority: CN (China)
Prior art keywords: data, importance, evaluation value, current, model
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113128694A
Inventors: 张剑, 钟绍宸, 孙学文, 王奎
Current assignee: Beijing Chaoxing Future Technology Co ltd
Original assignee: Beijing Chaoxing Future Technology Co ltd
Application filed by Beijing Chaoxing Future Technology Co ltd
Publication of application: CN113128694A
Publication of grant: CN113128694B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211: Selection of the most significant subset of features
    • G06F 18/2113: Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/04: Protocols for data compression, e.g. ROHC


Abstract

The present specification discloses a method, apparatus and system for data acquisition and data processing in machine learning, wherein the data acquisition method suitable for being executed on an edge computing server includes: training the target model according to the current training set to obtain a current target model; based on the current target model, obtaining a current importance calculation model for evaluating the importance of a data sample to the current target model; calculating an importance evaluation value for each data sample in the test set through the current importance calculation model to obtain an importance mean value L_ave; transmitting the parameters of the current importance calculation model and the importance mean value L_ave to each edge device in a broadcast mode; receiving data transmitted by the edge devices after screening according to the current importance calculation model and the importance mean value L_ave, and adding the data to the current training set; and after the received data reach a preset number, training the target model through the current training set to obtain the target model with updated parameters.

Description

Method, device and system for data acquisition and data processing in machine learning
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, and a system for data acquisition and data processing in machine learning.
Background
With the development of technologies such as the internet of things and the internet of vehicles and the deployment of large-scale intelligent terminals, edge computing, which processes the great amount of data generated by terminals, plays an increasingly important role, and research on equipment deployment, resource allocation and performance optimization in edge computing is also increasing. Related studies start from various aspects to optimize the communication performance of edge computing or to combine edge computing methods into practical applications. Related edge computing techniques mainly study how to improve channel utilization, maximize communication capacity and reduce latency through the allocation of resources, but do not consider the importance of the transmitted data itself to model training. In edge learning, however, the optimization goal should also include the performance of the model, and different data are not equally important for model training. Besides achieving higher communication capacity, the design of the communication scheme should also aim to increase the overall importance of the transmitted data.
In summary, how to improve the importance of the data transmitted in the communication and optimize the performance of the model becomes a problem to be solved.
Disclosure of Invention
The present disclosure provides a method, apparatus, and system for data acquisition and data processing in machine learning, so as to overcome at least one technical problem in the prior art.
According to a first aspect of embodiments of the present specification, there is provided a data acquisition method in machine learning, adapted to be executed on an edge computing server, comprising: training the target model according to the current training set to obtain a current target model; based on the structure of the current target model, obtaining a current importance calculation model for evaluating the importance of a data sample to the current target model, wherein the current importance calculation model obtains, by inference, the simulated output of the current target model for each input, and characterizes the analysis capability of the current target model for the input through the difference between the simulated output and the standard output corresponding to that input, which serves as the importance evaluation value of the input, the importance evaluation value characterizing the analysis capability of the current target model for the data sample; calculating a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, and calculating an importance mean value L_ave of the data samples in the test set; transmitting the parameters of the current importance calculation model and the corresponding importance mean value L_ave to each edge device in a broadcast mode, so that each edge device screens the data sent to the edge computing server according to the current importance calculation model and the corresponding importance mean value L_ave; receiving the data sent by the edge devices after screening according to the current importance calculation model and the corresponding importance mean value L_ave, and adding the data into the current training set; and after the received data reach a preset number, training the target model through the current training set to obtain the target model with updated parameters.
Optionally, the step of obtaining, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model includes: acquiring a training sample set of the importance calculation model, wherein the training sample set comprises a plurality of importance training samples, each importance training sample comprises the sample input of a piece of sample data, its standard output, the model output of the current target model for the sample input, and the importance evaluation value of the sample data, the importance evaluation value being the square of the two-norm of the difference between the model output and the standard output, and the sample data being data samples in the target model training set; and training the importance calculation model through the training sample set to obtain the current importance calculation model, the importance calculation model outputting, for an input data sample, the importance evaluation value of the data sample with respect to the current target model.
Optionally, after receiving the data sent by the edge devices after screening according to the current importance calculation model and the corresponding importance mean value L_ave and adding the data into the current training set, and after training the target model through the current training set once the received data reach the preset number to obtain the target model with updated parameters, the method further comprises: after a preset time delay, obtaining, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model, calculating a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, calculating the importance mean value L_ave of the data samples in the test set, and sending the parameters of the current importance calculation model and the corresponding importance mean value L_ave to each edge device in a broadcast mode.
According to a second aspect of embodiments of the present specification, there is provided a data processing method in machine learning, adapted to be executed on an edge device, comprising: receiving the parameters of the current importance calculation model and the corresponding importance mean value L_ave sent by the edge computing server; randomly selecting a preset number of data and adding them into an important data area with a fixed preset capacity, calculating a corresponding importance evaluation value for each piece of data in the important data area through the current importance calculation model, randomly selecting data and calculating its importance evaluation value, and updating the data in the important data area according to that importance evaluation value; obtaining an importance threshold θ·L_ave according to the importance mean value L_ave and a preset threshold θ; selecting the data with the largest importance evaluation value in the important data area as the data to be transmitted, and calculating the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value; comparing the first evaluation value with the importance threshold θ·L_ave, and if the first evaluation value is larger than the importance threshold θ·L_ave, increasing the compression rate of the data to be transmitted and calculating the importance evaluation value under the new compression rate until the importance evaluation value of the data to be transmitted is not larger than the importance threshold θ·L_ave or the maximum compression rate of data transmission is reached, so as to obtain the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to that compression rate; measuring the channel condition, calculating the corresponding signal transmission rate according to the obtained channel parameters, obtaining a channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate, and participating in channel contention according to the channel contention parameter, wherein the channel contention parameter is proportional to the product of the second evaluation value of the data to be transmitted and the transmission rate; and if the access opportunity for transmitting data is obtained in the channel contention, sending the data to be transmitted to the edge computing server at the maximum compression rate of the data to be transmitted.
Optionally, the step of randomly selecting a preset number of data and adding them into an important data area with a fixed preset capacity, calculating a corresponding importance evaluation value for each piece of data in the important data area through the current importance calculation model, randomly selecting data and calculating its importance evaluation value, and updating the data in the important data area according to that importance evaluation value includes: randomly selecting a preset number of data and adding them into the important data area with the fixed preset capacity; calculating the importance evaluation value of each piece of data in the important data area through the current importance calculation model, and sorting the data in the important data area according to the importance evaluation values; randomly selecting data outside the important data area, and obtaining its importance evaluation value through the current importance calculation model; and comparing the importance evaluation value of that data with the importance evaluation values of the data in the important data area, and if the importance evaluation value of that data exceeds the importance evaluation value of some data in the important data area, inserting the data into the important data area immediately before the first piece of data it exceeds.
Optionally, after selecting the data with the largest importance evaluation value in the important data area as the data to be transmitted, calculating the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value, and comparing the first evaluation value with the importance threshold θ·L_ave, the method further includes: if the first evaluation value is not larger than the importance threshold θ·L_ave, taking the first evaluation value as the second evaluation value and taking the corresponding compression rate as the maximum compression rate of the data to be transmitted.
Optionally, the step of measuring the channel condition and calculating the corresponding signal transmission rate according to the obtained channel parameters, obtaining the channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate, and participating in channel contention according to the channel contention parameter includes: measuring the channel condition by using a channel condition measurement algorithm to obtain the signal-to-noise ratio of the current signal transmission; calculating the corresponding transmission rate through the Shannon formula according to the signal-to-noise ratio; obtaining the channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate; and participating, according to the channel contention parameter, in the channel contention defined by the contention access protocol of the MAC layer.
According to a third aspect of embodiments of the present disclosure, there is provided an edge computing server, including a target model training module, an importance model training module, a mean value computing module, a model issuing module, and a training set updating module, wherein: the target model training module is configured to train the target model according to the current training set to obtain a current target model; the importance model training module is configured to obtain, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model, the current importance calculation model obtaining, by inference, the simulated output of the current target model for each input and characterizing the analysis capability of the current target model for the input through the difference between the simulated output and the standard output corresponding to that input, which serves as the importance evaluation value of the input, the importance evaluation value characterizing the analysis capability of the current target model for the data sample; the mean value computing module is configured to calculate a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, and to calculate the importance mean value L_ave of the data samples in the test set; the model issuing module is configured to send the parameters of the current importance calculation model and the corresponding importance mean value L_ave to each edge device in a broadcast mode, so that each edge device screens the data sent to the edge computing server according to the current importance calculation model and the corresponding importance mean value L_ave; the training set updating module is configured to receive the data sent by the edge devices after screening according to the current importance calculation model and the corresponding importance mean value L_ave, to add the data into the current training set, and, after the received data reach a preset number, to train the target model through the current training set to obtain the target model with updated parameters.
According to a fourth aspect of embodiments of the present disclosure, there is provided an edge device, including a model receiving module, an important data updating module, a threshold setting module, a compression rate selecting module, a contention participation module, and a transmission module, wherein: the model receiving module is configured to receive the parameters of the current importance calculation model and the corresponding importance mean value L_ave sent by the edge computing server; the important data updating module is configured to randomly select a preset number of data and add them into an important data area with a fixed preset capacity, to calculate a corresponding importance evaluation value for each piece of data in the important data area through the current importance calculation model, to randomly select data and calculate its importance evaluation value, and to update the data in the important data area according to that importance evaluation value; the threshold setting module is configured to obtain an importance threshold θ·L_ave according to the importance mean value L_ave and a preset threshold θ; the compression rate selecting module is configured to select the data with the largest importance evaluation value in the important data area as the data to be transmitted, to calculate the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value, to compare the first evaluation value with the importance threshold θ·L_ave, and, if the first evaluation value is larger than the importance threshold θ·L_ave, to increase the compression rate of the data to be transmitted and calculate the importance evaluation value under the new compression rate until the importance evaluation value of the data to be transmitted is not larger than the importance threshold θ·L_ave or the maximum compression rate of data transmission is reached, so as to obtain the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to that compression rate; the contention participation module is configured to measure the channel condition and calculate the corresponding signal transmission rate according to the obtained channel parameters, to obtain a channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate, and to participate in channel contention according to the channel contention parameter, the channel contention parameter being proportional to the product of the second evaluation value of the data to be transmitted and the transmission rate; and the transmission module is configured to send the data to be transmitted to the edge computing server at the maximum compression rate of the data to be transmitted if the access opportunity for transmitting data is obtained in the channel contention.
According to a fifth aspect of embodiments of the present specification, there is provided a system comprising at least one edge computing server and at least one edge device, the edge computing server comprising a first storage module, the edge device comprising a second storage module, the first storage module storing a first program and the second storage module storing a second program, the edge computing server performing any one of the above methods of data acquisition in machine learning adapted to be performed on the edge computing server when the first program is executed; when the second program is executed, the edge device performs any of the above-described data processing methods in machine learning suitable for execution on the edge device.
The beneficial effects of the embodiment of the specification are as follows:
According to the method, the edge computing server trains a current importance calculation model according to the model output of the current target model for the data samples and the standard output of those data samples, and obtains the importance mean value of the data samples on a test set; the importance mean value characterizes the average analysis capability of the current model for data, so that this average analysis capability is added to the criteria for screening training data. The edge computing server sends the current importance calculation model and the importance mean value to the edge devices, and after receiving them each edge device uses the new importance calculation model and the new mean value as the new criteria for updating its important data area. The important data area on the edge device realizes the sorting of the training data generated on the edge device according to their importance evaluation values with respect to the current target model. When transmitting data to the edge computing server, the edge device compresses the data with the largest importance evaluation value in the important data area, and determines the final compression rate of the data and the corresponding final importance evaluation value by comparing the importance evaluation value of the compressed data with a preset importance threshold, so that data more important for training the target model are given a higher compression rate and better model performance is obtained. In this scenario, each edge device obtains the opportunity to access the channel through contention, and the product of the importance evaluation value of the data to be transmitted by the device and the transmission rate of the device is incorporated into the channel contention parameter; this preserves the advantage in contention of devices with high transmission rates while also giving an advantage to devices whose data to be transmitted have larger importance evaluation values, thereby increasing the total importance of the data obtained by the edge computing server. According to the embodiments of the present specification, the training data to be transmitted are screened according to the importance evaluation value, and data with large importance evaluation values are added into the model training set, which improves the training efficiency of the model, obtains a training effect similar to that of the original data set with a smaller amount of data, reduces the requirement on the amount of training data, reduces the communication cost, realizes better model performance under given communication resources, and relieves the communication burden caused by a large amount of training data.
The innovation points of the embodiment of the specification comprise:
1. In this embodiment, the edge computing server trains the current importance calculation model according to the model output of the current target model for the data samples and the standard output of those data samples, and obtains the importance mean value of the data samples on the test set; the importance mean value characterizes the average analysis capability of the current model for data, so that this average analysis capability is added to the criteria for screening training data. The edge computing server sends the current importance calculation model and the importance mean value to the edge devices, and after receiving them each edge device uses the new importance calculation model and the new mean value as the new criteria for updating its important data area. The important data area on the edge device realizes the sorting of the training data generated on the edge device according to their importance evaluation values with respect to the current target model. When transmitting data to the edge computing server, the edge device compresses the data with the largest importance evaluation value in the important data area, and determines the final compression rate of the data and the corresponding final importance evaluation value by comparing the importance evaluation value of the compressed data with a preset importance threshold, so that data more important for training the target model are given a higher compression rate and better model performance is obtained. In this scenario, each edge device obtains the opportunity to access the channel through contention, and the product of the importance evaluation value of the data to be transmitted by the device and the transmission rate of the device is incorporated into the channel contention parameter; this preserves the advantage in contention of devices with high transmission rates while also giving an advantage to devices whose data to be transmitted have larger importance evaluation values, thereby increasing the total importance of the data obtained by the edge computing server. According to the embodiments of the present specification, the training data to be transmitted are screened according to the importance evaluation value, and data with large importance evaluation values are added into the model training set, which improves the training efficiency of the model, obtains a training effect similar to that of the original data set with a smaller amount of data, reduces the requirement on the amount of training data, reduces the communication cost, realizes better model performance under given communication resources, and relieves the communication burden caused by a large amount of training data.
2. In this embodiment, the importance evaluation value is the square of the two-norm of the difference between the current target model output and the standard output; the data importance evaluation is thus defined using the Loss in model training, a definition applicable to most machine learning models, so that a communication scheme can be designed for a machine learning system using the data importance evaluation value. This gives the method wide applicability and transferability, and is one of the innovation points of the embodiments of the present specification.
3. In this embodiment, the edge device performs data screening by using the data importance evaluation value. The important data area realizes that the training data generated on the edge device are ordered according to their importance evaluation values for the current target model training, and applies the limited resources to transmitting the data more important to the target model training, so that not all of the generated training data need to be transmitted for training; a training effect similar to that of the whole training data set is achieved with part of the training data, which greatly reduces the communication cost.
4. In this embodiment, the edge device determines the data compression rate by using the importance evaluation value. Based on the definition of data importance, the weaker the current model's ability to analyse a piece of data, the larger its importance evaluation value, the more information the model needs in order to learn from that data, and the higher the compression rate of the data should be. The final compression rate of the data and the corresponding final importance evaluation value are therefore determined by comparing the importance evaluation value of the compressed data with a preset importance threshold, which comprehensively considers the communication efficiency and the influence of the data on model training, and obtains better model performance under given communication resources; this is one of the innovation points of the embodiments of the present specification.
5. In this embodiment, a channel allocation scheme is designed using data importance. By combining the data importance evaluation with the transmission rate of the channel, the advantage in contention of devices with high transmission rates is preserved, while devices whose data to be transmitted have larger importance evaluation values also gain an advantage when contending for access opportunities, so that the total importance of the data obtained by the edge computing server is increased and the improvement in model training per unit of communication resource is larger; this is one of the innovation points of the embodiments of the present specification.
6. In this embodiment, calculating the importance evaluation value on each edge device requires the parameters of the current target model, and the edge computing server has to issue model parameters to each edge device, which brings additional communication overhead. This overhead is reduced in two ways. First, based on the structure of the current target model, an importance calculation model with fewer parameters is obtained by training, and the edge computing server issues this smaller importance calculation model to each edge device. Second, new model parameters are issued at intervals rather than on every update of the target model; once the target model has been trained to a fairly mature state it changes less between updates, so issuing the importance calculation model parameters at intervals does not introduce large errors into the calculation of the importance evaluation value. These two optimizations reduce the communication cost of issuing the model from the edge computing server to each edge device; compared with the overall cost saving obtained by incorporating the data importance evaluation into the data transmission of the system, the additional communication overhead generated by issuing the model is very economical in practice, and it provides the basis for compression rate selection and data screening based on data importance at the edge device side; this is one of the innovation points of the embodiments of the present specification.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method of data acquisition in machine learning suitable for execution on an edge computing server according to one embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of data processing in machine learning suitable for execution on an edge device according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a method for acquiring and processing data in machine learning according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an edge computing server according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an edge device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a system according to an embodiment of the present disclosure.
Detailed Description
The technical solutions of the embodiments of the present specification will be clearly and completely described below with reference to the drawings of the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and figures herein are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
According to the embodiment of the specification, a scene of machine learning model training by utilizing edge calculation is considered, training data are collected by edge equipment deployed in the scenes of the Internet of things, the Internet of vehicles and the like, and the training data are transmitted to an edge calculation server for model training. For model training, the importance of different training data samples is different, and the communication scheme design can be optimized by utilizing the characteristic, so that the optimization of the machine learning model performance under the limitation of communication resources is realized.
For this purpose, the importance evaluation of a piece of data in machine learning is defined as the Loss value (Loss) of that data on the machine learning model being trained, i.e. the weaker the model's ability to analyse this data sample, the larger the importance evaluation value of the data sample. For example, for model training with an L2 Loss, the importance evaluation value is
L = ||F(x) - Gt||_2^2
where F(x) is the output of the model for this data sample x and Gt is the standard output (i.e. ground truth) corresponding to this data.
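For illustration only (not part of the original disclosure; the function name, model output and label values below are assumptions), a minimal sketch of this importance evaluation value:

```python
import numpy as np

def importance_evaluation(model_output: np.ndarray, ground_truth: np.ndarray) -> float:
    """Importance evaluation value L = ||F(x) - Gt||_2^2.

    The weaker the current model's ability to analyse the sample
    (i.e. the larger its L2 loss), the larger the returned value.
    """
    diff = model_output - ground_truth
    return float(np.sum(diff ** 2))

# Example: a 3-class classifier output vs. a one-hot label (illustrative values).
f_x = np.array([0.2, 0.7, 0.1])   # F(x): model output for sample x
gt  = np.array([0.0, 1.0, 0.0])   # Gt : standard output (ground truth)
print(importance_evaluation(f_x, gt))  # 0.2^2 + 0.3^2 + 0.1^2 -> approximately 0.14
```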
The embodiment of the specification discloses a method, a device and a system for data acquisition and data processing in machine learning. The following will describe in detail.
Fig. 1 is a flowchart of a data acquisition method in machine learning suitable for being executed on an edge computing server according to one embodiment of the present disclosure. As shown in fig. 1, an embodiment of the present disclosure provides a data acquisition method in machine learning, which is adapted to be executed on an edge computing server, and includes:
S110, training the target model according to the current training set to obtain the current target model.
Part of the data samples are stored on the edge computing server as an initial training set, and an initial target model is obtained by training on this initial training set before data transmitted by the edge devices are received.
S120, based on the structure of the current target model, obtaining a current importance calculation model for evaluating the importance of a data sample to the current target model, wherein the current importance calculation model obtains, by inference, the simulated output of the current target model for each input, and characterizes the analysis capability of the current target model for the input through the difference between the simulated output and the standard output corresponding to that input, which serves as the importance evaluation value of the input, the importance evaluation value characterizing the analysis capability of the current target model for the data sample.
The target model to be trained has numerous parameters. Evaluating the data at the device side with the current model would require the edge computing server to issue the parameters of the current model to each edge device, but issuing all the parameters of the current model tends to cause a heavy communication burden. To solve this problem, the idea of network distillation is adopted: a distillation network with fewer model parameters that can simulate the output of the current target model is trained, the output of the current target model for the device-side data is predicted using the distillation network, and the difference between this output and the standard output of the data is calculated, in line with the definition of the importance evaluation value, to serve as the importance evaluation value of the data. Following the idea of network distillation, the model output of the current target model for the data samples is added to the training sample set of the importance calculation model, and the importance calculation model is obtained by training.
In a specific embodiment, the step of obtaining, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model includes:
Acquiring a training sample set of the importance calculation model, wherein the training sample set comprises a plurality of importance training samples, each importance training sample comprises the sample input of a piece of sample data, its standard output, the model output of the current target model for the sample input, and the importance evaluation value of the sample data, the importance evaluation value being the square of the two-norm of the difference between the model output and the standard output, and the sample data being data samples in the target model training set;
and training the importance calculation model through the training sample set to obtain the current importance calculation model, the importance calculation model outputting, for an input data sample, the importance evaluation value of the data sample with respect to the current target model.
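A minimal training sketch of this step is given below. It assumes a PyTorch-style setup in which target_model maps inputs to outputs of the same shape as the labels; the small ImportanceNet architecture, the tensor layout and all names are illustrative assumptions rather than the patent's own implementation:

```python
import torch
import torch.nn as nn

class ImportanceNet(nn.Module):
    """Small 'distilled' network mapping a data sample directly to its
    importance evaluation value for the current target model."""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_importance_model(target_model, train_x, train_y, epochs=10, lr=1e-3):
    # Build the importance training set: label each sample with
    # L = ||F(x) - Gt||_2^2 computed from the current target model.
    with torch.no_grad():
        labels = ((target_model(train_x) - train_y) ** 2).sum(dim=-1)

    imp_model = ImportanceNet(train_x.shape[-1])
    opt = torch.optim.Adam(imp_model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(imp_model(train_x), labels)
        loss.backward()
        opt.step()
    return imp_model
```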
Based on the current target model, an importance calculation model with fewer parameters is obtained by training, and the edge computing server transmits this smaller importance calculation model to each edge device, which reduces the communication cost and makes it possible to take the target model being trained into account when a device transmits data. Each edge device can then screen the data to be transmitted according to the issued model parameters, which improves the quality of the acquired training data, further reduces the required amount of training data, relieves the communication pressure, and achieves better model performance while consuming the same communication resources.
S130, calculating a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, and calculating the importance mean value L_ave of the data samples in the test set.
After the importance calculation model capable of evaluating the importance of data has been obtained, the importance evaluation value of any piece of device-side data with respect to the current target model can be calculated, and a comparison standard is set for the importance evaluation values of the device-side data. This standard is obtained on the test set or on any fixed data set (for example the current training set, whose data volume keeps growing): the importance evaluation value of each data sample in that data set is calculated through the current importance calculation model, and their mean gives the importance mean value L_ave. The importance mean value L_ave characterizes the analysis capability of the current model on the test set or the fixed data set, and it is used to set a threshold for the importance evaluation values of the training data collected at the device side; an importance threshold is established on this basis to screen the training data.
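Continuing the sketch above (same assumptions; imp_model is the distilled importance calculation model), the importance mean value L_ave on the test set could be computed as follows:

```python
import torch

def compute_importance_mean(imp_model, test_x) -> float:
    """L_ave: mean importance evaluation value over the (fixed) test set,
    i.e. the average analysis capability of the current target model."""
    with torch.no_grad():
        return float(imp_model(test_x).mean())

# The server would then broadcast imp_model.state_dict() together with L_ave
# to every edge device; the transport and serialization details are not
# specified by the patent and are omitted here.
```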
S140, transmitting the parameters of the current importance calculation model and the corresponding importance mean value L_ave to each edge device in a broadcast mode, so that each edge device screens the data transmitted to the edge computing server according to the current importance calculation model and the corresponding importance mean value L_ave.
The edge computing server sends the current importance calculation model with fewer parameters and the importance mean value L_ave to the edge devices. Based on this model and mean value, the edge device side can screen the acquired training data, select a suitable compression rate for the data to be sent, and incorporate the importance evaluation value of the data to be sent into the channel contention, so that data with different importance evaluation values are processed differently, the quality of the transmitted data is improved, and the training performance of the server-side model becomes better.
S150, receiving the data sent by the edge devices after screening according to the current importance calculation model and the corresponding importance mean value L_ave, and adding the data into the current training set; and after the received data reach a preset number, training the target model through the current training set to obtain the target model with updated parameters.
The edge computing server receives the data sent by each edge device, adds the data into the current training set, trains the target model after a preset number of new data have been collected, and thereby continuously optimizes the performance of the target model. After a new target model is obtained, a new importance calculation model is trained from it, a new importance mean value L_ave is calculated, and both are sent to each edge device to update the data screening standard on each edge device. This cycle repeats, so that the edge devices screen data according to how much the data are evaluated to improve the training performance of the current target model, which essentially improves the utilization efficiency of communication resources.
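A rough sketch of this server-side collection loop; the sample_stream and retrain callbacks are placeholders, since the patent does not specify the transport mechanism or the concrete training routine:

```python
def server_collection_loop(target_model, train_set, sample_stream, retrain, batch_threshold=1000):
    """Accumulate screened samples from edge devices and retrain the target
    model each time a preset number of new samples has been received."""
    new_samples = 0
    for sample in sample_stream:            # data already screened on the edge devices
        train_set.append(sample)
        new_samples += 1
        if new_samples >= batch_threshold:  # preset number of new samples reached
            target_model = retrain(target_model, train_set)
            new_samples = 0
    return target_model
```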
In a specific embodiment, after receiving the data sent by the edge devices after screening according to the current importance calculation model and the corresponding importance mean value L_ave and adding the data into the current training set, and after training the target model through the current training set once the received data reach the preset number to obtain the target model with updated parameters, the method further comprises:
After a preset time delay, obtaining, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model, calculating a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, calculating the importance mean value L_ave of the data samples in the test set, and sending the parameters of the current importance calculation model and the corresponding importance mean value L_ave to each edge device in a broadcast mode.
The edge computing server continuously receives data, adds it to the training set, and trains the model to update the model parameters, but the model does not need to be issued again after every update; instead it is issued at intervals. Once the target model has been trained to a fairly mature state its parameters change less between updates, so issuing the importance calculation model at intervals does not introduce large errors into the evaluation of data importance, and communication resources can be saved.
According to the method, the edge computing server trains the current importance calculation model from the current target model, obtains the importance mean value of the data samples on the test set, and issues the current importance calculation model and the importance mean value to each edge device, so that the edge devices screen the training data transmitted to the server according to the importance evaluation value. Adding data with larger importance evaluation values into the target model training set improves the training efficiency of the model and obtains a training effect similar to that of the original data set with a smaller amount of data, thereby reducing the requirement on the amount of training data, reducing the communication cost, realizing better model performance under given communication resources, and relieving the communication burden caused by a large amount of training data.
Fig. 2 is a flow chart of a data processing method in machine learning suitable for being executed on an edge device according to an embodiment of the present disclosure. As shown in fig. 2, an embodiment of the present disclosure provides a data processing method in machine learning, adapted to be executed on an edge device, including:
S210, receiving the parameters of the current importance calculation model and the corresponding importance mean value L_ave sent by the edge computing server.
Each edge device receives the current importance calculation model of the current target model and the importance mean value L_ave issued by the edge computing server, screens, sorts and selectively transmits the collected training data at the edge device side, and transmits the screened data, which are more beneficial to improving the performance of the target model, to the server side, so that the training performance of the target model is optimized.
S220, randomly selecting a preset number of data and adding them into an important data area with a fixed preset capacity, calculating a corresponding importance evaluation value for each piece of data in the important data area through the current importance calculation model, randomly selecting data and calculating its importance evaluation value, and updating the data in the important data area according to that importance evaluation value.
In the big data age, for massive training data generated by edge equipment, all training data cannot be transmitted to an edge computing server under the condition of limited communication resources, and the communication burden is greatly reduced by reasonably screening the data and using a smaller data set to achieve almost the same model training effect. The screening of the data requires that the edge device have target model parameters and can evaluate the importance of the data to the current target model so as to realize the transmission of the data on the device according to the importance sequence. Since the importance assessment of data will change continuously with changes in the model, to accommodate model changes, the importance of the data needs to be recalculated when the target model is updated. However, the calculation of importance of a large amount of data on the edge device consumes a large amount of calculation resources, in order to solve the problem, an important data area is set on each edge device, the data area stores the most important small part of data, the importance evaluation value in the important data area is recalculated after each time the target model is updated, meanwhile, the edge device can continuously select data from the whole data randomly to calculate the importance evaluation value, and the data with higher importance is used for replacing the data with lower importance evaluation value in the important data area. When the edge device needs to transmit data, the edge device can select the data with the maximum data importance evaluation value in the important data area for transmission.
In a specific embodiment, the step of randomly selecting a preset number of data and adding them into an important data area with a fixed preset capacity, calculating a corresponding importance evaluation value for each piece of data in the important data area through the current importance calculation model, randomly selecting data and calculating its importance evaluation value, and updating the data in the important data area according to that importance evaluation value includes:
Randomly selecting a preset number of data and adding them into the important data area with the fixed preset capacity; calculating the importance evaluation value of each piece of data in the important data area through the current importance calculation model, and sorting the data in the important data area according to the importance evaluation values; randomly selecting data outside the important data area, and obtaining its importance evaluation value through the current importance calculation model; and comparing the importance evaluation value of that data with the importance evaluation values of the data in the important data area, and if the importance evaluation value of that data exceeds the importance evaluation value of some data in the important data area, inserting the data into the important data area immediately before the first piece of data it exceeds.
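A possible sketch of such an important data area as a fixed-capacity buffer kept sorted by importance evaluation value; the class name, the insertion details and the importance_fn callback (the current importance calculation model) are illustrative assumptions:

```python
class ImportantDataArea:
    """Fixed-capacity buffer keeping the locally most important samples,
    ordered by importance evaluation value (largest first)."""

    def __init__(self, capacity: int, importance_fn):
        self.capacity = capacity
        self.importance_fn = importance_fn   # current importance calculation model
        self.entries = []                    # list of (importance, sample)

    def rebuild(self, samples):
        """Re-score the buffered samples, e.g. after a new model is received."""
        scored = ((self.importance_fn(s), s) for s in samples)
        self.entries = sorted(scored, key=lambda e: e[0], reverse=True)[: self.capacity]

    def maybe_insert(self, sample):
        """Randomly sampled data from outside the area: insert it just before
        the first buffered entry it outranks, then trim back to capacity."""
        value = self.importance_fn(sample)
        for i, (v, _) in enumerate(self.entries):
            if value > v:
                self.entries.insert(i, (value, sample))
                break
        else:
            if len(self.entries) < self.capacity:
                self.entries.append((value, sample))
        del self.entries[self.capacity:]

    def pop_most_important(self):
        return self.entries.pop(0) if self.entries else None
```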
The important data area at the edge device side realizes the sorting of the training data generated on the edge device according to their importance evaluation values for the current target model training, and applies the limited resources to transmitting the data more important to the target model training, so that not all of the generated training data need to be transmitted for training; a training effect similar to that of the whole training data set is achieved with part of the training data, which greatly reduces the communication cost.
S230, obtaining an importance threshold θ·L_ave according to the importance mean value L_ave and a preset threshold θ.
On the basis of the importance mean value L_ave issued by the edge computing server, the importance mean value L_ave is multiplied by the threshold θ to obtain the importance threshold θ·L_ave. The standard for screening data can be adjusted by adjusting the threshold θ, which gives flexibility and adjustability and makes it convenient to cope with data requirements under different conditions.
S240, selecting the data with the largest importance evaluation value in the important data area as the data to be transmitted, and calculating the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value; comparing the first evaluation value with the importance threshold θ·L_ave, and if the first evaluation value is larger than the importance threshold θ·L_ave, increasing the compression rate of the data to be transmitted and calculating the importance evaluation value under the new compression rate until the importance evaluation value of the data to be transmitted is not larger than the importance threshold θ·L_ave or the maximum compression rate of data transmission is reached, so as to obtain the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to that compression rate.
To improve communication efficiency, communication data are generally compressed, but lossy compression causes information loss, so a trade-off between data size and information loss is required. Taking the importance evaluation value of the data into account when selecting the compression rate of training data in a machine learning system can be explained from two aspects. On the one hand, the loss of information in a training sample can move it to the other side of the classification hyperplane in the feature space, which negatively affects the training of the model, and lowering the compression rate increases this information loss accordingly; judging by the importance evaluation value of the data, a sample with a large importance evaluation value already has a large Loss, so the probability that it is moved to the other side of the hyperplane is higher, and the compression rate of such data should be increased. On the other hand, the larger the importance evaluation value, the weaker the model's ability to extract and analyse the features of the data, and the data should provide more detailed information to help the model learn how to process it, so a larger compression rate should be used. Combining the above considerations, the optimal compression rate of each piece of data to be transmitted is selected by setting an importance threshold: if the importance evaluation value of the data to be transmitted is greater than the importance threshold, a higher compression rate is adopted, until the importance evaluation value of the data is not greater than the importance threshold or the compression rate of the data reaches the maximum compression rate of data transmission.
In a specific embodiment, after selecting the data with the largest importance evaluation value in the important data area as the data to be transmitted, calculating the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value, and comparing the first evaluation value with the importance threshold θ·L_ave, the method further includes:
If the first evaluation value is not larger than the importance threshold θ·L_ave, taking the first evaluation value as the second evaluation value and taking the corresponding compression rate as the maximum compression rate of the data to be transmitted.
If the importance evaluation value of the data to be transmitted on the edge device is compared with the importance threshold θ·L_ave and is found to be not larger than it, the compression rate of the data and the corresponding importance evaluation value can be determined directly.
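A sketch of this compression-rate search under the patent's usage of "compression rate" (where, following the reasoning above, a higher rate retains more information); compress and importance_fn are placeholders for a concrete codec and the current importance calculation model, and the ascending list of candidate rates is an assumed discretization:

```python
def select_compression_rate(sample, compress, importance_fn, theta, l_ave, rates):
    """Starting from the lowest rate in `rates` (ascending, last entry is the
    maximum compression rate of data transmission), raise the rate while the
    compressed sample's importance evaluation value still exceeds the
    importance threshold theta * l_ave."""
    threshold = theta * l_ave
    rate = rates[0]
    value = importance_fn(compress(sample, rate))     # first evaluation value
    for next_rate in rates[1:]:
        if value <= threshold:
            break                                     # threshold satisfied: keep current rate
        rate = next_rate                              # otherwise retain more information
        value = importance_fn(compress(sample, rate))
    return rate, value    # maximum usable compression rate and second evaluation value
```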
S250, measuring the channel condition, calculating the corresponding signal transmission rate according to the obtained channel parameters, obtaining a channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate, and participating in channel contention according to the channel contention parameter, wherein the channel contention parameter is proportional to the product of the second evaluation value of the data to be transmitted and the transmission rate.
In a communication system, channel allocation needs to be determined according to the environment of each edge device; an edge device in an environment with higher communication quality should be allocated the channel with a higher probability so as to obtain the opportunity to transmit data. After the importance evaluation value of the data with respect to the current target model is also taken into account, edge devices whose data to be transmitted have higher importance evaluation values should more easily obtain channel allocation opportunities, so the channel contention of an edge device comprehensively considers the two factors of channel quality and the importance evaluation value of the data to be transmitted. For example, the probability P that a device obtains the opportunity to transmit data satisfies P ∝ I·v, where I is the importance of the data to be transmitted on the device and v is the transmission rate in the channel where the device is located; for a system in which the edge devices contend for the channel, the parameters used in contention (which depend on the specific contention scheme) can be determined from the respective transmission probabilities of the devices, so that the access probability of each edge device is positively correlated with I·v.
In a specific embodiment, the step of measuring the channel condition and calculating the corresponding signal transmission rate according to the obtained channel parameter, obtaining the channel competition parameter according to the second evaluation value of the data to be transmitted and the transmission rate, and participating in the channel competition according to the channel competition parameter includes:
Measuring the channel condition by using a channel condition measurement algorithm to obtain the signal-to-noise ratio of the current signal transmission; calculating the corresponding transmission rate from the signal-to-noise ratio through the Shannon formula; obtaining the channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate; and participating in the channel contention defined by the contention access protocol of the MAC layer according to the channel contention parameter.
Each edge device thus participates in channel competition according to the channel quality and the importance evaluation value of its data to be transmitted: the channel condition is measured with a channel condition measurement algorithm, the transmission rate is calculated, and a channel competition parameter positively correlated with the importance evaluation value of the data to be transmitted is obtained.
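For the rate calculation itself, a brief sketch based on the Shannon formula C = B·log2(1 + SNR) follows; the bandwidth value and the conversion of the signal-to-noise ratio from decibels are assumptions added for the example.

```python
import math

def shannon_rate(snr_db, bandwidth_hz=20e6):
    """Transmission rate obtained from the measured signal-to-noise ratio
    through the Shannon formula C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)                    # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)     # bits per second

# Channel contention parameter proportional to the product of the second
# evaluation value I and the transmission rate v (both values hypothetical).
I = 0.8
v = shannon_rate(15.0)
contention_parameter = I * v
```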
And S260, if the access opportunity of the transmission data is obtained in the channel competition, the data to be transmitted is sent to an edge computing server at the maximum compression rate of the data to be transmitted.
When an edge device wins the channel competition, its data to be transmitted is sent to the edge computing server at the corresponding compression rate. The process then repeats: the data collected on the edge device is continuously screened, a compression rate is selected, the device participates in channel competition, and the data is sent to the edge computing server and added to the training set of the target model, thereby realizing the acquisition of training data for the target model.
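A condensed sketch of one round of this device-side cycle is given below; it reuses the `select_compression_rate` sketch above, assumes the important data area holds raw samples, and treats `compress`, `contend_for_channel` and `send` as placeholder callables that are not defined by the patent.

```python
def edge_device_round(important_area, importance_model, compress,
                      theta_l_ave, contend_for_channel, send):
    """One round on the edge device: take the sample with the largest
    importance evaluation value from the important data area, search its
    compression rate, contend for the channel, and transmit only if an
    access opportunity is won."""
    sample = max(important_area, key=importance_model)
    rate, second_eval = select_compression_rate(
        sample, importance_model, compress, theta_l_ave)
    if contend_for_channel(second_eval):
        send(compress(sample, rate))
```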
In this embodiment, each time the edge device receives the current importance calculation model and the importance mean issued by the edge computing server, it uses the new importance calculation model and mean as the new criteria for updating the important data area. The edge device screens data according to their importance evaluation values for the current target model, selects the compression rate of the data to be transmitted and determines the channel competition parameters, so that the quality of the data transmitted in communication is improved, the utilization efficiency of communication resources is improved, and the model performance is optimized.
Fig. 3 is a schematic diagram of a data acquisition and data processing method in machine learning according to an embodiment of the present disclosure. As shown in fig. 3, the edge computing server interacts with a plurality of edge devices. The edge computing server trains the model, obtains the importance calculation model and the importance mean L ave based on the current target model by using the idea of network distillation, transmits them to each edge device, and receives the data transmitted by the edge devices to form a new training set with which it continues to train the target model. The edge device receives the model issued by the server, keeps the data in the important data area updated, calculates the optimal compression rate of the data to be transmitted, determines the channel competition parameters and, if it obtains an access opportunity, sends the data to be transmitted to the edge computing server.
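A compact sketch of this server-side cycle, with `train`, `distill_importance_model`, `broadcast` and `receive_data` standing in for the steps described above, might look as follows; these callables and the control flow are assumptions for illustration, not the patent's definition.

```python
def edge_server_round(target_model, training_set, test_set, preset_number,
                      train, distill_importance_model, broadcast, receive_data):
    """One cycle of the edge computing server: train the target model,
    distill the importance calculation model from it, compute and broadcast
    the importance mean L_ave, then collect screened data from the edge
    devices until the preset number is reached."""
    current_model = train(target_model, training_set)
    importance_model = distill_importance_model(current_model)
    l_ave = sum(importance_model(s) for s in test_set) / len(test_set)
    broadcast(importance_model, l_ave)

    new_data = []
    while len(new_data) < preset_number:    # wait for the preset number of samples
        new_data.extend(receive_data())
    training_set.extend(new_data)           # the next round retrains on the grown set
    return current_model, training_set
```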
In this embodiment, the schematic diagram of the data acquisition and data processing method in machine learning illustrates the interaction between the edge computing server and the edge devices: the training data is reasonably evaluated and screened, the data compression rate is reasonably selected, and the data importance evaluation is factored into channel competition, so that the overall importance of the transmitted data and the quality of training data transmission are improved.
Fig. 4 is a schematic structural diagram of an edge computing server according to an embodiment of the present disclosure. As shown in fig. 4, the embodiment of the present disclosure provides an edge computing server 400, which includes a target model training module 410, an importance model training module 420, a mean computing module 430, a model issuing module 440, and a training set updating module 450, wherein:
The target model training module 410 is configured to train the target model according to the current training set to obtain the current target model.
The importance model training module 420 is configured to obtain, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model; the current importance calculation model infers the simulated output of the current target model for each input and characterizes the analysis capability of the current target model for the input through the difference between the simulated output and the standard output corresponding to that input, which serves as the importance evaluation value of the input; the importance evaluation value characterizes the analysis capability of the current target model for the data sample.
The average calculating module 430 is configured to calculate a corresponding importance evaluation value for each data sample in the test set through the current importance calculating model, and calculate an importance average L ave of the data samples in the test set.
The model issuing module 440 is configured to send the parameters of the current importance calculation model and the corresponding importance average value L ave to each edge device in a broadcast manner, so that each edge device screens data sent to an edge calculation server according to the current importance calculation model and the corresponding importance average value L ave.
The training set updating module 450 is configured to receive data sent by the edge device after screening according to the current importance calculation model and the corresponding importance mean value L ave, and add the data into the current training set; and training the target model through the current training set after the received data reach the preset number to obtain the target model with updated parameters.
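As a concrete instance of the importance evaluation value used to train the importance calculation model (specified in claim 2 below as the square of the two-norm of the difference between the model output and the standard output), a small helper might look like the following; the use of NumPy is an assumption.

```python
import numpy as np

def importance_label(model_output, standard_output):
    """Training target for the importance calculation model: the square of
    the two-norm of the difference between the current target model's
    output and the standard output."""
    diff = (np.asarray(model_output, dtype=float)
            - np.asarray(standard_output, dtype=float))
    return float(np.sum(diff ** 2))
```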
Fig. 5 is a schematic structural diagram of an edge device according to an embodiment of the present disclosure. As shown in fig. 5, an embodiment of the present disclosure provides an edge device 500, including a model receiving module 510, an important data updating module 520, a threshold setting module 530, a compression rate selecting module 540, a contention participation module 550, and a transmission module 560, wherein:
The model receiving module 510 is configured to receive parameters of the current importance calculation model and the corresponding importance mean L ave from the edge calculation server.
The important data updating module 520 is configured to randomly select a preset number of data to add to an important data area of fixed preset capacity, calculate a corresponding importance evaluation value for each data item in the important data area through the current importance calculation model, randomly select further data and calculate their importance evaluation values, and update the data in the important data area according to those importance evaluation values (see the sketch following the module descriptions below).
The threshold setting module 530 is configured to obtain an importance threshold θL ave according to the importance average value L ave and a preset threshold θ.
The compression rate selection module 540 is configured to select the data with the largest importance evaluation value in the important data area as the data to be transmitted, calculate the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value, and compare the first evaluation value with the importance threshold θL ave; if the first evaluation value is larger than the importance threshold θL ave, the compression rate of the data to be transmitted is increased and an importance evaluation value under the new compression rate is calculated, until the importance evaluation value of the data to be transmitted is not larger than the importance threshold θL ave or the compression rate reaches the maximum compression rate of data transmission, so as to obtain the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to that compression rate.
The contention participation module 550 is configured to measure a channel condition and calculate a corresponding signal transmission rate according to the obtained channel parameter, obtain a channel contention parameter according to the second evaluation value of the data to be transmitted and the transmission rate, and participate in channel contention according to the channel contention parameter, wherein the channel contention parameter is proportional to the product of the second evaluation value of the data to be transmitted and the transmission rate.
The transmission module 560 is configured to send the data to be transmitted to the edge computing server at the maximum compression rate of the data to be transmitted if the access opportunity of the data to be transmitted is obtained in the channel contention.
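The following sketch illustrates one way the important data updating module 520 could maintain the fixed-capacity important data area, in line with the steps of claim 5 below; evicting the lowest-scored sample when the capacity is exceeded is an assumption, since the patent only states that the capacity is fixed.

```python
import random

def update_important_area(important_area, candidate_pool, importance_model, capacity):
    """important_area holds (importance value, sample) pairs. The area is kept
    sorted in descending order of importance evaluation value; a randomly
    selected sample from outside the area is scored with the current
    importance calculation model and inserted in front of the first stored
    sample whose value it exceeds."""
    important_area.sort(key=lambda pair: pair[0], reverse=True)

    sample = random.choice(candidate_pool)
    value = importance_model(sample)

    for i, (stored_value, _) in enumerate(important_area):
        if value > stored_value:
            important_area.insert(i, (value, sample))
            break
    if len(important_area) > capacity:
        important_area.pop()    # drop the least important sample
```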
Fig. 6 is a schematic structural diagram of a system according to an embodiment of the present disclosure. As shown in fig. 6, the present embodiment provides a system 600 including at least one edge computing server 610 and at least one edge device 620, the edge computing server 610 including a first storage module and the edge device 620 including a second storage module, the first storage module storing a first program and the second storage module storing a second program; when the first program is executed, the edge computing server 610 performs any of the above-described data acquisition methods in machine learning adapted to be executed on an edge computing server, and when the second program is executed, the edge device 620 performs any of the above-described data processing methods in machine learning adapted to be executed on an edge device.
The implementation process of the functions and roles of the above devices and the modules in the system is specifically shown in the implementation process of the corresponding steps in the above method, and will not be repeated here.
In summary, the embodiments of the present disclosure provide a method, an apparatus and a system for data acquisition and data processing in machine learning: the edge computing server trains an importance calculation model with fewer parameters and issues the model together with the importance mean; the edge device screens the training data, selects the optimal compression rate for the data to be transmitted, participates in channel competition according to the importance evaluation value and transmits the training data selectively, thereby improving the quality of data transmission between the edge computing server and the edge device.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
Those of ordinary skill in the art will appreciate that: the modules in the apparatus of the embodiments may be distributed in the apparatus of the embodiments according to the description of the embodiments, or may be located in one or more apparatuses different from the present embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of data acquisition in machine learning adapted to be performed on an edge computing server, comprising:
Training the target model according to the current training set to obtain a current target model;
based on the structure of the current target model, obtaining a current importance calculation model for evaluating the importance of a data sample to the current target model, wherein the current importance calculation model infers the simulated output of the current target model for each input and characterizes the analysis capability of the current target model for the input through the difference between the simulated output and the standard output corresponding to that input, which is used as the importance evaluation value of the input; the importance evaluation value represents the analysis capability of the current target model for the data sample;
Calculating a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, and calculating an importance mean value L ave of the data samples in the test set;
sending the parameters of the current importance calculation model and the corresponding importance mean L ave to each edge device in a broadcast manner, so that each edge device screens the data sent to the edge computing server according to the current importance calculation model and the corresponding importance mean L ave, wherein each edge device randomly selects a preset number of data to add to an important data area of fixed preset capacity, calculates an importance evaluation value of each data item in the important data area through the current importance calculation model, sorts the data in the important data area according to the importance evaluation values, obtains an importance threshold θL ave according to the importance mean L ave and a preset threshold θ, selects the data with the largest importance evaluation value in the important data area as the data to be transmitted, calculates the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value, compares the first evaluation value with the importance threshold θL ave, and if the first evaluation value is greater than the importance threshold θL ave, increases the compression rate of the data to be transmitted and calculates the importance evaluation value under the new compression rate until the importance evaluation value of the data to be transmitted is not greater than the importance threshold θL ave or the compression rate of the data reaches the maximum compression rate of data transmission, so as to obtain the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to that compression rate;
receiving the data sent by the edge device after screening according to the current importance calculation model and the corresponding importance mean L ave, and adding the data into the current training set; and after the received data reach the preset number, training the target model through the current training set to obtain the target model with updated parameters.
2. The method according to claim 1, wherein the step of obtaining a current importance calculation model that evaluates the importance of the data sample to the current object model based on the structure of the current object model, comprises:
Acquiring a training sample set of the importance calculation model, wherein the training sample set comprises a plurality of importance training samples, each importance training sample comprises a sample input of sample data, a standard output, the model output of the current target model for the sample input and an importance evaluation value of the sample data, the importance evaluation value is the square of the two-norm of the difference between the model output and the standard output, and the sample data is a data sample in the training set of the target model;
and training the importance calculation model through the training sample set to obtain a current importance calculation model, and outputting an importance evaluation value of the data sample for the current target model according to the input data sample by the importance calculation model.
3. The method of claim 1, wherein after the data sent by the edge device, screened according to the current importance calculation model and the corresponding importance mean L ave, is received and added to the current training set, and after the received data reach the preset number and the target model is trained through the current training set to obtain the target model with updated parameters, the method further comprises:
After a preset time delay, a current importance calculation model for evaluating the importance of the data sample to the current target model is obtained based on the structure of the current target model, a corresponding importance evaluation value is calculated for each data sample in the test set through the current importance calculation model, an importance mean L ave of the data sample in the test set is calculated, and parameters of the current importance calculation model and the corresponding importance mean L ave are sent to each edge device in a broadcasting mode.
4. A method of data processing in machine learning adapted to be performed on an edge device, comprising:
Receiving parameters of a current importance calculation model and a corresponding importance mean value L ave sent by an edge calculation server;
Randomly selecting a preset number of data to add to an important data area of fixed preset capacity, calculating a corresponding importance evaluation value for each data item in the important data area through the current importance calculation model, randomly selecting data and calculating its importance evaluation value, and updating the data in the important data area according to that importance evaluation value;
Obtaining an importance threshold value θL ave according to the importance average value L ave and a preset threshold value θ;
selecting data with the maximum importance evaluation value in an important data area as data to be transmitted, and calculating the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value; comparing the first evaluation value with the importance threshold value θL ave, if the first evaluation value is larger than the importance threshold value θL ave, increasing the compression rate of the data to be transmitted and calculating an importance evaluation value under a new compression rate until the importance evaluation value of the data to be transmitted is not larger than the importance threshold value θL ave or reaches the maximum compression rate of data transmission, and obtaining the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to the compression rate;
Measuring channel conditions, calculating corresponding signal transmission rate according to the obtained channel parameters, obtaining channel competition parameters according to the second evaluation value of the data to be transmitted and the transmission rate, and participating in channel competition according to the channel competition parameters, wherein the channel competition parameters are in direct proportion to the product of the second evaluation value of the data to be transmitted and the transmission rate;
And if the access opportunity of the transmission data is obtained in the channel competition, sending the data to be transmitted to an edge computing server at the maximum compression rate of the data to be transmitted.
5. The method of claim 4, wherein the step of randomly selecting a predetermined number of data to be added to the predetermined fixed-capacity important data area, calculating a corresponding importance evaluation value for each data in the important data area by the current importance calculation model, randomly selecting data to calculate an importance evaluation value for the data, and updating the data in the important data area according to the importance evaluation value of the data, comprises:
randomly selecting a preset number of data to be added into an important data area with fixed preset capacity;
Calculating an importance evaluation value of each data in the important data area through the current importance calculation model, and sequencing the data in the important data area according to the importance evaluation value;
randomly selecting data outside an important data area, and obtaining an importance evaluation value of the data through the current importance calculation model;
And comparing the importance evaluation value of the data with the importance evaluation value of the data in the important data area, and inserting the data into the important data area before the corresponding data if the importance evaluation value of the data is just larger than the importance evaluation value of one data in the important data area.
6. The method of claim 4, wherein the data with the largest importance evaluation value in the important data area is selected as data to be transmitted, and the importance evaluation value of the compressed data to be transmitted is calculated to obtain a first evaluation value; after comparing the first evaluation value with the importance threshold θL ave, the method further includes:
and if the first evaluation value is not greater than the importance threshold value θL ave, taking the first evaluation value as a second evaluation value and taking the corresponding compression rate as the maximum compression rate of the data to be transmitted.
7. The method of claim 4, wherein the steps of measuring channel conditions and calculating a corresponding signal transmission rate based on the obtained channel parameters, obtaining channel contention parameters based on the second evaluation value of the data to be transmitted and the transmission rate, and participating in channel contention based on the channel contention parameters, comprise:
measuring channel conditions by using a channel condition measurement algorithm to obtain the signal-to-noise ratio of current signal transmission;
According to the signal-to-noise ratio, calculating a corresponding transmission rate through the Shannon formula;
Obtaining channel competition parameters according to the second evaluation value of the data to be transmitted and the transmission rate;
And participating in channel contention defined by the contention access protocol of the MAC layer according to the channel contention parameter.
8. The edge computing server is characterized by comprising a target model training module, an importance model training module, a mean value computing module, a model issuing module and a training set updating module, wherein:
The target model training module is configured to train the target model according to the current training set to obtain a current target model;
The importance model training module is configured to obtain, based on the structure of the current target model, a current importance calculation model for evaluating the importance of a data sample to the current target model, wherein the current importance calculation model infers the simulated output of the current target model for each input and characterizes the analysis capability of the current target model for the input through the difference between the simulated output and the standard output corresponding to that input, which is used as the importance evaluation value of the input; the importance evaluation value characterizes the analysis capability of the current target model for the data sample;
The average value calculation module is configured to calculate a corresponding importance evaluation value for each data sample in the test set through the current importance calculation model, and calculate an importance average value L ave of the data samples in the test set;
The model issuing module is configured to send the parameters of the current importance calculation model and the corresponding importance mean value L ave to each edge device in a broadcast manner, so that each edge device screens the data sent to the edge computing server according to the current importance calculation model and the corresponding importance mean value L ave, wherein each edge device randomly selects a preset number of data to add to an important data area of fixed preset capacity, calculates an importance evaluation value of each data item in the important data area through the current importance calculation model, sorts the data in the important data area according to the importance evaluation values, obtains an importance threshold θL ave according to the importance mean value L ave and a preset threshold θ, selects the data with the largest importance evaluation value in the important data area as the data to be transmitted, calculates the importance evaluation value of the compressed data to be transmitted to obtain a first evaluation value, compares the first evaluation value with the importance threshold θL ave, and if the first evaluation value is greater than the importance threshold θL ave, increases the compression rate of the data to be transmitted and calculates the importance evaluation value under the new compression rate until the importance evaluation value of the data to be transmitted is not greater than the importance threshold θL ave or the compression rate of the data reaches the maximum compression rate of data transmission, so as to obtain the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to that compression rate;
The training set updating module is configured to receive data sent by the edge equipment after screening according to the current importance calculation model and the corresponding importance mean value L ave, and add the data into the current training set; and training the target model through the current training set after the received data reach the preset number to obtain the target model with updated parameters.
9. The edge device is characterized by comprising a model receiving module, an important data updating module, a threshold setting module, a compression rate selecting module, a competition participation module and a transmission module, wherein:
The model receiving module is configured to receive parameters of the current importance calculation model and the corresponding importance mean value L ave sent by the edge calculation server;
the important data updating module is configured to randomly select a preset number of data to add to an important data area of fixed preset capacity, calculate a corresponding importance evaluation value for each data item in the important data area through the current importance calculation model, randomly select data and calculate its importance evaluation value, and update the data in the important data area according to that importance evaluation value;
The threshold setting module is configured to obtain an importance threshold θL ave according to the importance average value L ave and a preset threshold value θ;
The compression rate selection module is configured to select data with the largest importance evaluation value in the important data area as data to be transmitted, calculate the importance evaluation value of the compressed data to be transmitted, and obtain a first evaluation value; comparing the first evaluation value with the importance threshold value θL ave, if the first evaluation value is larger than the importance threshold value θL ave, increasing the compression rate of the data to be transmitted and calculating an importance evaluation value under a new compression rate until the importance evaluation value of the data to be transmitted is not larger than the importance threshold value θL ave or reaches the maximum compression rate of data transmission, and obtaining the maximum compression rate of the data to be transmitted and a second evaluation value corresponding to the compression rate;
the competition participation module is configured to measure channel conditions and calculate corresponding signal transmission rate according to the obtained channel parameters, obtain channel competition parameters according to the second evaluation value of the data to be transmitted and the transmission rate, participate in channel competition according to the channel competition parameters, and the channel competition parameters are proportional to the product of the second evaluation value of the data to be transmitted and the transmission rate;
And the transmission module is configured to send the data to be transmitted to the edge computing server at the maximum compression rate of the data to be transmitted if the access opportunity of the data to be transmitted is obtained in the channel competition.
10. A system comprising at least one edge computing server and at least one edge device, the edge computing server comprising a first storage module, the edge device comprising a second storage module, the first storage module storing a first program and the second storage module storing a second program, the edge computing server performing the method of any of claims 1-3 when the first program is executed; when the second program is executed, the edge device performs the method of any of claims 4-7.
CN201911411688.1A 2019-12-31 2019-12-31 Method, device and system for data acquisition and data processing in machine learning Active CN113128694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911411688.1A CN113128694B (en) 2019-12-31 2019-12-31 Method, device and system for data acquisition and data processing in machine learning

Publications (2)

Publication Number Publication Date
CN113128694A CN113128694A (en) 2021-07-16
CN113128694B true CN113128694B (en) 2024-07-19

Family

ID=76770423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911411688.1A Active CN113128694B (en) 2019-12-31 2019-12-31 Method, device and system for data acquisition and data processing in machine learning

Country Status (1)

Country Link
CN (1) CN113128694B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024065755A1 (en) * 2022-09-30 2024-04-04 华为技术有限公司 Communication method and apparatus

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021984A (en) * 2016-11-01 2018-05-11 第四范式(北京)技术有限公司 Determine the method and system of the feature importance of machine learning sample
CN109784408A (en) * 2019-01-17 2019-05-21 济南浪潮高新科技投资发展有限公司 A kind of embedded time series Decision-Tree Method and system of marginal end

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822440A (en) * 2017-06-15 2021-12-21 第四范式(北京)技术有限公司 Method and system for determining feature importance of machine learning samples
US11147459B2 (en) * 2018-01-05 2021-10-19 CareBand Inc. Wearable electronic device and system for tracking location and identifying changes in salient indicators of patient health
CN109214436A (en) * 2018-08-22 2019-01-15 阿里巴巴集团控股有限公司 A kind of prediction model training method and device for target scene
CN109710374A (en) * 2018-12-05 2019-05-03 重庆邮电大学 The VM migration strategy of task unloading expense is minimized under mobile edge calculations environment
CN109784474B (en) * 2018-12-24 2020-12-11 宜通世纪物联网研究院(广州)有限公司 Deep learning model compression method and device, storage medium and terminal equipment
CN109741332B (en) * 2018-12-28 2021-06-04 天津大学 Man-machine cooperative image segmentation and annotation method
CN109886397A (en) * 2019-03-21 2019-06-14 西安交通大学 A kind of neural network structure beta pruning compression optimization method for convolutional layer

Also Published As

Publication number Publication date
CN113128694A (en) 2021-07-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant