CN114640853B - Unmanned aerial vehicle cruise image processing system - Google Patents
- Publication number
- CN114640853B CN114640853B CN202210536115.7A CN202210536115A CN114640853B CN 114640853 B CN114640853 B CN 114640853B CN 202210536115 A CN202210536115 A CN 202210536115A CN 114640853 B CN114640853 B CN 114640853B
- Authority
- CN
- China
- Prior art keywords
- image
- area
- gray
- gray level
- frequency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Abstract
The invention discloses an unmanned aerial vehicle (UAV) cruise image processing system and relates to the field of image recognition. The system mainly comprises the following: an image acquisition module, which uses the UAV to acquire a multi-frame grayscale image of the target area at the current moment; an image processing module, which sequentially performs difference operations on adjacent frames of the multi-frame grayscale image and superimposes the difference results to obtain a superimposed image, then divides the superimposed image into regions and obtains the weight of each pixel in each region from the frequencies of that region's gray levels; a transmission module, which performs Huffman coding on the superimposed image according to the weight of each pixel in each region and sends the coded data to the monitoring center; and the monitoring center, which receives the coded data sent by the transmission module, decodes it to obtain a grayscale image, and sends feedback information to the image acquisition module and the image processing module respectively. The embodiment of the invention can obtain an image containing more effective information in an emergency.
Description
Technical Field
The application relates to the field of image recognition, in particular to an unmanned aerial vehicle cruise image processing system.
Background
At present, image recognition during UAV cruising mainly relies on mobile network equipment to transmit the images. In an emergency, however, if the original transmission mode is kept and the original images are sent unchanged, the monitoring center cannot receive images containing more information within the effective time. Moreover, because image processing speed and image storage space are limited, the transmitted images may contain redundant data, so the effective information in the images cannot reach the monitoring center in time.
Therefore, there is a need for a UAV cruise image processing system that can recognize the more effective information present in an image in an emergency and transmit the image containing that information.
Disclosure of Invention
In view of the above technical problems, an embodiment of the present invention provides an unmanned aerial vehicle cruise image processing system. The system adaptively adjusts the compression degree of different areas of the image to be transmitted according to the frequency at which the image processing module receives feedback information from the monitoring center, and determines the image acquisition frequency according to the frequency at which the image acquisition module receives the feedback information. The compressed image is thus adaptively provided to the monitoring center, so that more of the effective information present in the image is identified and the monitoring center receives an image containing more effective information as soon as possible.
The embodiment of the invention provides an unmanned aerial vehicle cruise image processing system, which comprises:
an image acquisition module, used for acquiring the multi-frame grayscale image of the target area at the current moment by using the unmanned aerial vehicle;
an image processing module, used for sequentially performing difference operations on adjacent frames of the multi-frame grayscale image and superimposing the difference results to obtain a superimposed image, then dividing the superimposed image into regions and obtaining the weight of each pixel in each region from the frequencies of that region's gray levels;
a transmission module, used for performing Huffman coding on the superimposed image according to the weight of each pixel in each region and sending the coded data to the monitoring center; and
a monitoring center, used for receiving the coded data sent by the transmission module, performing Huffman decoding on the received coded data to obtain a grayscale image, and sending feedback information to the image acquisition module and the image processing module respectively.
Further, in the unmanned aerial vehicle cruise image processing system, the image processing module is further configured to determine the number of grayscale frames to be acquired at the next moment based on the frequency at which feedback information is received from the monitoring center at the current moment.
Further, in the unmanned aerial vehicle cruise image processing system, dividing the superimposed image into regions in the image processing module and obtaining the weight of each pixel in each region from the frequencies of each region's gray levels includes:
and dividing the superposed image into a first area and a second area, wherein the first area is a connected domain with the largest area in the superposed image.
And adjusting the gray level number in the first area to be a first gray level number and adjusting the gray level number in the second area to be a second gray level number in a linear mapping mode, wherein the first gray level number is based on the frequency of the received feedback information, the sum of the first gray level number and the second gray level number is a preset gray level number, and the first gray level number is greater than the second gray level number.
And sequentially giving weights to all the gray levels in the second area according to the sequence of the frequency numbers of the gray levels in the second area from low to high, wherein the larger the frequency number of the gray levels in the second area is, the larger the corresponding weight is.
And determining the weight corresponding to each gray level in the first area according to the adjusted frequency of each gray level in the first area on the basis of each weight corresponding to each gray level in the second area so as to obtain the weight of each pixel point in the first area.
Further, in the unmanned aerial vehicle cruise image processing system, adjusting the number of gray levels in the first region to a first number of gray levels and the number in the second region to a second number of gray levels by linear mapping includes:
determining the second number of gray levels based on the frequency of receiving the feedback information, where a lower feedback frequency yields a smaller second number of gray levels;
compressing the number of gray levels in the second region from 256 to the second number by linear mapping;
determining the first number of gray levels from the second number and the preset total number of gray levels; and
compressing the number of gray levels in the first region from 256 to the first number by linear mapping.
Further, in the unmanned aerial vehicle cruise image processing system, determining the weight of each gray level in the first region, on the basis of the weights of the gray levels in the second region and according to the adjusted frequencies of the gray levels in the first region, so as to obtain the weight of each pixel in the first region, includes:
determining the maximum of the weights assigned to the gray levels in the second region and adding 1 to it to obtain the initial weight of the first region;
assigning weights to the gray levels in the first region in order of increasing frequency, such that a gray level with a larger frequency receives a larger weight and the gray level with the lowest frequency in the first region receives the initial weight; and
taking the weight of each pixel's gray level as the weight of that pixel in the first region.
Further, in the unmanned aerial vehicle cruise image processing system, before the image processing module sequentially performs the difference operations on adjacent frames of the current multi-frame grayscale image, the method further includes: performing median-filter denoising on each frame of the multi-frame grayscale image.
Further, in the unmanned aerial vehicle cruise image processing system, the image acquisition module is further used for acquiring position information when multi-frame gray level images of the target area are acquired, and sending the position information to the monitoring center.
Further, in the unmanned aerial vehicle cruise image processing system, when the frequency at which the image processing module receives feedback information from the monitoring center falls below a preset frequency threshold, rescue workers or equipment are notified to go to the location given by the position information and carry out search and rescue.
Compared with the prior art, the unmanned aerial vehicle cruise image processing system provided by the embodiment of the invention has the following beneficial effects: the compression degree of different areas of the image to be transmitted is adaptively adjusted according to the frequency at which the image processing module receives feedback information from the monitoring center, and the image acquisition frequency is determined according to the frequency at which the image acquisition module receives the feedback information. The compressed image is thus adaptively provided to the monitoring center, so that more of the effective information in the image is identified and the monitoring center receives an image containing more effective information as soon as possible.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle cruise image processing system according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the operation of the image processing module and the transmission module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a process of obtaining weights of pixels in a superimposed image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a first region and a second region in an overlay image according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the invention provides an unmanned aerial vehicle cruise image processing system, as shown in fig. 1, comprising: an image acquisition module 100, an image processing module 200, a transmission module 300 and a monitoring center 400.
As shown in fig. 2, the operation process of the image processing module and the transmission module in the embodiment of the present invention may include the following steps:
step S101, the image acquisition module 100 acquires a multi-frame grayscale image of the target area at the current time by using the drone.
Step S102, the image processing module 200 sequentially performs difference operation on adjacent frame gray level images in the multi-frame gray level images, and superposes the difference operation results to obtain a superposed image; and carrying out region division on the superposed image, and obtaining the weight of each pixel point in each region by using the frequency of the gray level of each region.
In step S103, the transmission module 300 performs huffman coding on the superimposed image according to the weight of each pixel point in each region, and sends the coded data to the monitoring center 400.
In step S104, the image capturing module 100 and the image processing module 200 receive feedback information sent from the monitoring center 400.
The monitoring center 400 is configured to receive the encoded data sent by the transmission module 300, perform huffman decoding on the received encoded data to obtain a grayscale image, and send feedback information to the image acquisition module 100 and the image processing module 200, respectively.
It should be noted that the feedback information in the embodiment of the present invention refers to feedback information sent by the monitoring center 400 to the image acquisition module 100 or the image processing module 200 for feeding back the received encoded data.
The main aims of the embodiment of the invention are to acquire images of the target area with the unmanned aerial vehicle, apply different data-compression coding according to different emergency grades, and transmit the communication information via the unmanned aerial vehicle.
Further, in step S101, the image acquisition module 100 acquires a multi-frame grayscale image of the target area at the current time by using the drone. The method specifically comprises the following steps:
Optionally, after the unmanned aerial vehicle in this scene transmits an image, the transmitted image may be temporarily cached in a register using a stack-like scheme. Specifically, a storage upper limit is set and the transmitted images are arranged in ascending time order; when feedback information sent by the monitoring center is received, the image corresponding to that feedback is deleted from memory, thereby releasing storage space.
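The caching scheme above can be sketched as follows. This is a minimal illustration only; the class name, the eviction of the oldest entry at the storage limit, and the use of timestamps as keys are all assumptions not fixed by the text.

```python
from collections import OrderedDict

class TransmitBuffer:
    """Cache of transmitted images kept in ascending time order up to a
    storage upper limit; an entry is evicted as soon as the monitoring
    center's feedback acknowledges it (illustrative sketch only)."""

    def __init__(self, max_entries=64):
        self.max_entries = max_entries
        self._store = OrderedDict()  # timestamp -> encoded image bytes

    def push(self, timestamp, encoded_image):
        # Drop the oldest entry when the storage upper limit is reached.
        if len(self._store) >= self.max_entries:
            self._store.popitem(last=False)
        self._store[timestamp] = encoded_image

    def acknowledge(self, timestamp):
        # Feedback from the monitoring center releases the cached copy.
        self._store.pop(timestamp, None)

    def __len__(self):
        return len(self._store)
```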
To realize emergency UAV communication, emergency scenes can be graded according to the frequency, i.e., the interval duration, at which the sending end receives feedback information from the monitoring center. The pixels carrying effective information in the image are distinguished by combining historical data from scenes of different emergency grades, and the Huffman code is dynamically adjusted according to the emergency grade so that the codes carried by those pixels are shortened as much as possible, allowing the UAV to transmit more efficiently.
In the embodiment of the present invention, the image acquisition module 100 may change the number of frames in the multi-frame grayscale image acquired at the next moment based on the frequency of receiving feedback information from the monitoring center 400. A higher feedback frequency indicates that the monitoring center is receiving the encoded data promptly and efficiently, and the number of frames acquired at the next moment can be reduced accordingly. Conversely, a lower feedback frequency indicates that the monitoring center 400 is increasingly likely to miss the encoded data sent by the transmission module, or that the transmission module is tending toward losing its link to the monitoring center; in that case the number of frames acquired by the image acquisition module 100 at the next moment must be increased, so that more effective information is captured and an image containing more information is acquired and transmitted to the monitoring center 400 in time.
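The text fixes only the direction of this relationship (lower feedback frequency, more frames). A sketch under that assumption, where the linear scaling, the parameter names, and the cap are all hypothetical choices:

```python
def frames_for_next_capture(feedback_interval_s,
                            base_frames=4,
                            nominal_interval_s=1.0,
                            max_frames=16):
    """Pick the frame count for the next acquisition from the observed
    interval between feedback messages: longer intervals (lower feedback
    frequency) yield proportionally more frames, up to a cap."""
    scale = max(1.0, feedback_interval_s / nominal_interval_s)
    return min(max_frames, round(base_frames * scale))
```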
Step S102, the image processing module 200 sequentially performs difference operation on adjacent frame gray level images in the multi-frame gray level image, and superposes the difference operation results to obtain a superposed image; and carrying out region division on the superposed image, and obtaining the weight of each pixel point in each region by using the frequency of the gray level of each region.
Firstly, difference operation is sequentially carried out on adjacent frame gray level images in a multi-frame gray level image, and the difference operation results are superposed to obtain a superposed image.
In the process of sequentially carrying out differential operation on adjacent frame gray level images in the collected multi-frame gray level images of the target area at the current moment, the frame difference interval during the differential operation can be determined according to the frequency of receiving feedback information from the monitoring center by the sending end, so that effective information can be obtained, and information redundancy is avoided.
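The differencing-and-superposition step above can be sketched with NumPy. The `step` parameter stands in for the frame-difference interval the text derives from the feedback frequency; how that interval is chosen is not specified, so the parameterization here is an assumption.

```python
import numpy as np

def superimpose_frame_differences(frames, step=1):
    """Absolute difference of frames spaced `step` apart, accumulated
    into one superimposed image and clipped back to 8-bit gray range."""
    frames = [f.astype(np.int32) for f in frames]  # avoid uint8 wrap-around
    acc = np.zeros_like(frames[0])
    for a, b in zip(frames[:-step], frames[step:]):
        acc += np.abs(b - a)
    return np.clip(acc, 0, 255).astype(np.uint8)
```

Moving objects leave nonzero values in the accumulated image, which is why its largest connected domain is later treated as the effective-information region.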
And secondly, carrying out region division on the superposed image, and obtaining the weight of each pixel point in each region by using the frequency of the gray level of each region.
Further, as shown in fig. 3, performing region division on the superimposed image, and obtaining the weight of each pixel in each region by using the frequency of the gray level of each region may include: step S1021, step S1022, step S1023, step S1024.
Further, in step S1021, the superimposed image is divided into a first area and a second area, wherein the first area is a connected domain with the largest area in the superimposed image.
Firstly, the connected domains in the superimposed image are obtained through connected-domain analysis (also called connected-component labelling), which finds and labels the connected regions of an image. Fig. 4 is a schematic diagram of the first region and the second region in the embodiment of the present invention: the connected domain with the largest area is taken as the first region, while the part of the superimposed image outside the first region is taken as the second region.
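A minimal BFS labelling pass finding the largest connected domain of a binary mask might look like the following; 4-connectivity is assumed (the text does not state the connectivity), and production systems would use an optimized two-pass labeller.

```python
from collections import deque

def largest_connected_region(mask):
    """Return the set of (row, col) pixels in the largest 4-connected
    component of a binary mask given as a list of lists of 0/1."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = set(), deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best
```

The returned pixel set corresponds to the first region; every other pixel belongs to the second region.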
It should be noted that, in the embodiment of the present invention, the information included in the first area is the most, and the information included in the second area is usually the background portion, so that it is convenient to perform different degrees of gray scale compression on the first area and the second area respectively, so that the effective information is retained to the maximum extent, and the irrelevant background information is compressed as much as possible.
Further, in step S1022, the number of gray levels in the first region is adjusted to a first number of gray levels and the number in the second region to a second number of gray levels by linear mapping; the second number of gray levels may be determined from the frequency of receiving the feedback information, the sum of the first and second numbers is a preset total number of gray levels, and the first number is greater than the second number.
First, a second gray scale number is determined based on a frequency of receiving the feedback information, and the lower the frequency of receiving the feedback information, the smaller the second gray scale number. It should be noted that the lower the frequency of receiving the feedback information, the greater the degree of grayscale compression that needs to be performed on the second region, and at the same time, the information contained in the first region is kept as much as possible while reducing the data transmission amount.
Secondly, the number of gray levels in the second region is compressed from 256 to the second number of gray levels by linear mapping; that is, the original gray range [0, 255], which corresponds to 256 gray levels, is reduced to the second number of levels. For example, when the second number of gray levels is 32, the gray values of pixels in the range [0, 8] before mapping are all mapped to 4, those in the range [9, 16] are all mapped to 12, and so on until the mapping of pixel gray values over the whole range [0, 255] is complete.
Then, since the first number of gray levels is greater than the second and their sum is the preset total number of gray levels, the first number can be determined from the second number and the preset total; for example, when the preset total is 256 and the second number is 32, the first number is 224.
Finally, the number of gray levels in the first region is compressed from 256 to the first number of gray levels by linear mapping, taking care that the mapping results are positive integers.
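For a level count that divides 256 evenly, the linear mapping can be sketched as mapping each gray value to the centre of its bin. With 32 levels the bin width is 8, so 0..7 map to 4 and 8..15 to 12 — close to, though not byte-for-byte identical with, the boundary example in the text, whose bins are one value wider; the uniform binning here is an assumption.

```python
def quantize_gray(value, levels):
    """Linearly map an 8-bit gray value (0..255) onto `levels` evenly
    spaced levels, returning the centre of the destination bin."""
    width = 256 // levels          # bin width, e.g. 8 when levels == 32
    return (value // width) * width + width // 2
```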
Further, in step S1023, weights are sequentially given to the gray levels in the second area in the order of the frequency count of the gray levels in the second area from low to high, and the larger the frequency count of the gray levels in the second area is, the larger the corresponding weight is.
For example, since the gray level in the second region is the second gray level after the gray level adjustment process of the linear mapping, the weights may be sequentially given to the gray level occurrence frequency from low to high, the weight of the gray level with the smallest frequency may be given as 1, and the weight of the gray level with the largest frequency may be given as the second gray level.
Further, step S1024 is performed to determine, based on the weights corresponding to the gray levels in the second region, the weights corresponding to the gray levels in the first region according to the adjusted frequency of the gray levels in the first region, so as to obtain the weights of the pixels in the first region.
First, the maximum value of the weights corresponding to the respective gray levels in the second region is determined, and the maximum value is added to 1 to be the initial weight of the first region. The initial weight in the embodiment of the present invention refers to a weight corresponding to a gray level with the smallest frequency number in the first region.
And then, sequentially giving weights to all the gray levels in the first area according to the order of the frequency numbers of the gray levels in the first area from low to high, wherein the larger the frequency number of the gray levels in the first area is, the larger the corresponding weight is, and the weight of the gray level with the lowest frequency number in the first area is given as the initial weight.
And respectively taking the weights corresponding to the gray levels of the pixels in the first area as the weights of the pixels in the first area.
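The two-region weight assignment of steps S1023 and S1024 can be sketched as follows. The tie-breaking rule for gray levels with equal frequency is an assumption (the text does not specify one), as are the function and parameter names.

```python
from collections import Counter

def assign_region_weights(second_levels, first_levels):
    """Weights per gray level: second-region levels receive 1..k in order
    of increasing frequency; first-region levels then continue from
    max(second weights) + 1, again in order of increasing frequency.
    Ties are broken by gray value (an assumed convention)."""
    def ranked(levels, start):
        freq = Counter(levels)
        order = sorted(freq, key=lambda g: (freq[g], g))
        return {g: start + i for i, g in enumerate(order)}
    w2 = ranked(second_levels, 1)
    w1 = ranked(first_levels, max(w2.values()) + 1)  # initial weight of the first region
    return w1, w2
```

Every first-region weight exceeds every second-region weight, which is what later gives the effective-information pixels the shorter Huffman codes.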
In this way the weight of every pixel in the superimposed image is obtained. The weights can be normalized so that they sum to one, after which encoding in the Huffman manner yields the encoded data, which is transmitted to the monitoring center.
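Standard Huffman construction over the weighted gray levels can be sketched with a heap. This is textbook Huffman coding, not an implementation detail fixed by the patent; note that heavier symbols receive codes no longer than lighter ones.

```python
import heapq
from itertools import count

def huffman_codes(weights):
    """Build a Huffman code table (symbol -> bit string) from a
    {symbol: weight} mapping by repeatedly merging the two lightest nodes."""
    tie = count()  # unique tiebreaker so dicts are never compared
    heap = [(w, next(tie), {sym: ""}) for sym, w in weights.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol alphabet
        return {sym: "0" for sym in weights}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]
```

Since the first-region gray levels were given the largest weights, their codes come out shortest, so the effective information costs the fewest transmitted bits.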
Further, in step S103, the transmission module 300 performs huffman coding on the superimposed image according to the weight of each pixel point in each region, and sends the coded data to the monitoring center 400. The method specifically comprises the following steps:
in this way, the monitoring center can receive the coded data containing more information so as to decompress the image containing more effective information.
Further, in step S104, the image capturing module 100 and the image processing module 200 receive feedback information sent from the monitoring center 400. The feedback information in the embodiment of the invention refers to feedback information which is sent by the monitoring center to the image acquisition module or the image processing module and is used for feeding back the received coded data. The method specifically comprises the following steps:
optionally, the image acquisition module may be further configured to acquire position information when acquiring a multi-frame grayscale image of the target area, and send the position information to the monitoring center.
Further, when the frequency at which the image processing module receives feedback information from the monitoring center is lower than a preset frequency threshold, rescue workers or equipment can be notified to go to the location given by the position information and carry out search and rescue.
It should be noted that, in the embodiment of the present invention, the monitoring center may be further configured to perform analysis according to the decoded grayscale image to obtain information included in the grayscale image, for example, when the decoded image is a forest image captured by an unmanned aerial vehicle, the actual situation of the forest image can be analyzed and determined by using the forest image.
In summary, the embodiment of the present invention provides an unmanned aerial vehicle cruise image processing system that adaptively adjusts the compression degree of different areas of the image to be transmitted according to the frequency at which the image processing module receives feedback information from the monitoring center, and determines the image acquisition frequency according to the frequency at which the image acquisition module receives the feedback information. The compressed image is thus adaptively provided to the monitoring center, so that more of the effective information in the image is identified and the monitoring center receives an image containing more effective information as soon as possible.
The use of words such as "including", "comprising" and "having" in this disclosure is open-ended, meaning "including but not limited to", and these words are used interchangeably. The word "or" as used herein means, and is used interchangeably with, "and/or", unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to".
It should also be noted that the various components or steps may be broken down and/or re-combined in the methods and systems of the present invention. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The above-mentioned embodiments are merely examples for clearly illustrating the present invention and do not limit its scope. Other variations and modifications will be apparent to those skilled in the art, and it is neither necessary nor possible to exhaustively enumerate all embodiments here. All designs identical or similar to the present invention fall within its protection scope.
Claims (5)
1. An unmanned aerial vehicle cruise image processing system, characterized by comprising:
the image acquisition module, used for acquiring multiple frames of grayscale images of a target area at the current moment by using the unmanned aerial vehicle;
the image processing module, used for sequentially performing a differential operation on adjacent frames among the multiple grayscale images and superposing the differential operation results to obtain a superposed image; and for dividing the superposed image into areas and obtaining the weight of each pixel point in each area by using the frequency of each gray level in that area;
the image processing module being further used for determining the number of frames of grayscale images to be acquired at the next moment based on the frequency at which feedback information is received from the monitoring center at the current moment;
the transmission module, used for performing Huffman coding on the superposed image according to the weight of each pixel point in each area and sending the coded data to the monitoring center; and
the monitoring center, used for receiving the coded data sent by the transmission module, performing Huffman decoding on the received coded data to obtain a grayscale image, and sending feedback information to the image acquisition module and the image processing module respectively;
wherein, in the image processing module, dividing the superposed image into areas and obtaining the weight of each pixel point in each area by using the frequency of each gray level comprises:
dividing the superposed image into a first area and a second area, wherein the first area is the connected area with the largest area in the superposed image;
determining a second number of gray levels based on the frequency of receiving the feedback information, wherein the lower the frequency of receiving the feedback information, the smaller the second number of gray levels;
compressing the number of gray levels in the second area from 256 to the second number of gray levels by linear mapping;
determining a first number of gray levels from the second number of gray levels and a preset total number of gray levels, wherein the sum of the first number of gray levels and the second number of gray levels is the preset total number of gray levels and the first number of gray levels is greater than the second number of gray levels, and compressing the number of gray levels in the first area from 256 to the first number of gray levels by linear mapping;
assigning weights to the gray levels in the second area in order from the lowest frequency to the highest frequency, wherein the higher the frequency of a gray level in the second area, the larger its weight; and
determining, on the basis of the weights corresponding to the gray levels in the second area, the weight corresponding to each gray level in the first area according to the adjusted frequency of each gray level in the first area, so as to obtain the weight of each pixel point in the first area.
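A minimal sketch of the processing chain recited in claim 1, assuming absolute inter-frame differencing, integer linear mapping for gray-level compression, and rank-based weight assignment. The function names and every detail not recited in the claim (clipping, tie-breaking of equal frequencies) are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def superimpose(frames):
    """Sum the absolute differences between adjacent grayscale frames
    and clip the accumulator back to 8-bit range."""
    acc = np.zeros_like(frames[0], dtype=np.int32)
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur.astype(np.int32) - prev.astype(np.int32))
    return np.clip(acc, 0, 255).astype(np.uint8)

def compress_levels(region, n_levels):
    """Linearly map the 256 input gray levels down to n_levels."""
    return (region.astype(np.int32) * n_levels // 256).astype(np.uint8)

def frequency_weights(region, start_weight=1):
    """Assign weights to gray levels in order of increasing frequency:
    the more frequent a level, the larger its weight."""
    levels, counts = np.unique(region, return_counts=True)
    order = np.argsort(counts)  # lowest frequency first
    return {int(levels[idx]): start_weight + rank
            for rank, idx in enumerate(order)}
```

For example, `compress_levels` applied to the second area with a small `n_levels` discards most of its gray resolution, which is the claimed behavior when feedback arrives infrequently.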
2. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein determining, on the basis of the weights corresponding to the gray levels in the second area, the weight corresponding to each gray level in the first area according to the adjusted frequency of each gray level in the first area, so as to obtain the weight of each pixel point in the first area, comprises:
determining the maximum value among the weights corresponding to the gray levels in the second area, and adding 1 to the maximum value to obtain an initial weight;
assigning weights to the gray levels in the first area in order from the lowest frequency to the highest frequency, wherein the higher the frequency of a gray level in the first area, the larger its weight, and the gray level with the lowest frequency in the first area is assigned the initial weight; and
taking the weight corresponding to the gray level of each pixel point in the first area as the weight of that pixel point.
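Claim 2's continuation of the weight sequence into the first area can feed a standard Huffman construction, so that first-area gray levels (all heavier than any second-area level) receive shorter codes. The claims do not specify the tree-building details; the coder below is the textbook heap-based algorithm, used here purely for illustration.

```python
import heapq
from itertools import count

def first_region_weights(second_weights, first_levels_by_freq):
    """Continue the weight sequence into the first area: start at
    max(second-area weights) + 1 and increase with frequency.
    first_levels_by_freq lists first-area levels, lowest frequency first."""
    start = max(second_weights.values()) + 1
    return {lvl: start + i for i, lvl in enumerate(first_levels_by_freq)}

def huffman_code(weights):
    """Textbook Huffman coding over (symbol, weight) pairs; a higher
    weight yields a shorter code. The counter breaks weight ties so the
    heap never compares the code dictionaries."""
    tiebreak = count()
    heap = [(w, next(tiebreak), {sym: ""}) for sym, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]
```

Used together, `huffman_code(first_region_weights(...) | second_area_weights)` would give the moving (first) area systematically shorter codes than the background, matching the claimed intent.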
3. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein the image processing module is further configured, before sequentially performing the differential operation on adjacent frames among the multiple grayscale images at the current moment, to perform median filtering denoising on each of the grayscale images respectively.
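The denoising step of claim 3 can be sketched as a 3x3 median filter with edge replication; this minimal pure-numpy version is illustrative only, and a practical system would more likely call `scipy.ndimage.median_filter` or `cv2.medianBlur`.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication: stack the nine shifted
    views of the padded image and take the per-pixel median."""
    padded = np.pad(img, 1, mode="edge")
    stacked = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(stacked, axis=0).astype(img.dtype)
```

Median filtering suppresses impulse noise before the frame differencing, so isolated bright speckles do not register as spurious motion.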
4. The unmanned aerial vehicle cruise image processing system according to claim 1, wherein the image acquisition module is further configured to acquire position information when acquiring the multiple frames of grayscale images of the target area and to send the position information to the monitoring center.
5. The unmanned aerial vehicle cruise image processing system according to claim 4, wherein, when the frequency at which the image processing module receives feedback information from the monitoring center is lower than a preset frequency threshold, rescuers or rescue equipment are notified to go to the position indicated by the position information to perform search and rescue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210536115.7A CN114640853B (en) | 2022-05-18 | 2022-05-18 | Unmanned aerial vehicle image processing system that cruises |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114640853A CN114640853A (en) | 2022-06-17 |
CN114640853B true CN114640853B (en) | 2022-07-29 |
Family
ID=81952898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210536115.7A Active CN114640853B (en) | 2022-05-18 | 2022-05-18 | Unmanned aerial vehicle image processing system that cruises |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114640853B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115225897B (en) * | 2022-07-14 | 2024-09-24 | 河南职业技术学院 | Video multi-stage encryption transmission method based on Huffman coding |
CN117768615B (en) * | 2023-12-12 | 2024-07-02 | 广州众翔信息科技有限公司 | Image data transmission method and system for monitoring video |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0730768A (en) * | 1993-07-12 | 1995-01-31 | Fujitsu Ltd | Image data transmission processing system |
US5805228A (en) * | 1996-08-09 | 1998-09-08 | U.S. Robotics Access Corp. | Video encoder/decoder system |
JP4032210B2 (en) * | 2001-01-30 | 2008-01-16 | 富士フイルム株式会社 | Mobile device, image transmission system, and image transmission method |
US8824553B2 (en) * | 2003-05-12 | 2014-09-02 | Google Inc. | Video compression method |
JP2005323044A (en) * | 2004-05-07 | 2005-11-17 | Uniden Corp | Image transmitting apparatus and image receiving apparatus |
CN1285215C (en) * | 2004-12-31 | 2006-11-15 | 大唐微电子技术有限公司 | Method of frame rate adjusting for video communication system |
CN101340575B (en) * | 2007-07-03 | 2012-04-18 | 英华达(上海)电子有限公司 | Method and terminal for dynamically regulating video code |
CN101345862B (en) * | 2008-07-25 | 2010-06-09 | 深圳市迈进科技有限公司 | Image transmission method for real-time grasp shoot of network monitoring system |
US8537699B2 (en) * | 2009-06-16 | 2013-09-17 | Qualcomm Incorporated | Managing video adaptation algorithms |
CN102595093A (en) * | 2011-01-05 | 2012-07-18 | 腾讯科技(深圳)有限公司 | Video communication method for dynamically changing video code and system thereof |
CN105208335B (en) * | 2015-09-22 | 2018-08-28 | 成都时代星光科技有限公司 | The aerial high definition multidimensional of high power zoom unmanned plane investigates Transmission system in real time |
KR101830324B1 (en) * | 2016-09-30 | 2018-03-30 | 에스케이플래닛 주식회사 | Terminal Device, method for streaming UI, and storage medium thereof |
CN109524015B (en) * | 2017-09-18 | 2022-04-15 | 杭州海康威视数字技术股份有限公司 | Audio coding method, decoding method, device and audio coding and decoding system |
AU2018372561B2 (en) * | 2017-11-21 | 2023-01-05 | Immersive Robotics Pty Ltd | Image compression for digital reality |
CN109333504A (en) * | 2018-12-05 | 2019-02-15 | 博众精工科技股份有限公司 | A kind of patrol robot and patrol robot management system |
CN110320926A (en) * | 2019-07-24 | 2019-10-11 | 北京中科利丰科技有限公司 | A kind of power station detection method and power station detection system based on unmanned plane |
CN111770266B (en) * | 2020-06-15 | 2021-04-06 | 北京世纪瑞尔技术股份有限公司 | Intelligent visual perception system |
- 2022-05-18: Application CN202210536115.7A filed; granted as patent CN114640853B (Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114640853B (en) | Unmanned aerial vehicle image processing system that cruises | |
CN110099280B (en) | Video service quality enhancement method under limitation of wireless self-organizing network bandwidth | |
CN111726633A (en) | Compressed video stream recoding method based on deep learning and significance perception | |
CN105391972A (en) | Image communication apparatus, image transmission apparatus, and image reception apparatus | |
CN103327335B (en) | For the FPGA coded method of unmanned plane image transmission, system | |
CN116405574A (en) | Remote medical image optimization communication method and system | |
CN111491167A (en) | Image encoding method, transcoding method, device, equipment and storage medium | |
CN109474824B (en) | Image compression method | |
CN110636334A (en) | Data transmission method and system | |
CN111713107A (en) | Image processing method and device, unmanned aerial vehicle and receiving end | |
US10430665B2 (en) | Video communications methods using network packet segmentation and unequal protection protocols, and wireless devices and vehicles that utilize such methods | |
KR102324724B1 (en) | Apparatus for compressing and transmitting image using parameters of modem and network and operating method thereof | |
CN107431809A (en) | The method and apparatus of image procossing | |
CN113810717A (en) | Image processing method and device | |
US20220375022A1 (en) | Image Compression/Decompression in a Computer Vision System | |
CN107277507B (en) | Spatial domain transform domain hybrid image compression method | |
CN114093051B (en) | Communication line inspection method, equipment and system and computer readable storage medium | |
CN112104872B (en) | Image transmission method and device | |
CN112333539B (en) | Video real-time target detection method, terminal and server under mobile communication network | |
CN116193113A (en) | Data decompression and compression method and device | |
CN110784620A (en) | Equipment data intercommunication method | |
CN117614900B (en) | Data transmission method and system for intelligent security system | |
CN116582838B (en) | Traffic data transmission method, generation method, device, equipment and medium | |
CN112073722B (en) | Image processing method, device, equipment and storage medium | |
CN116684003B (en) | Quantum communication-based railway line air-ground comprehensive monitoring method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||