WO2023164929A1 - Multi-source remote sensing image fusion method, device, equipment and storage medium - Google Patents
Multi-source remote sensing image fusion method, device, equipment and storage medium
- Publication number
- WO2023164929A1 (PCT/CN2022/079283; CN2022079283W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- fusion
- data
- remote sensing
- image
- source remote
- Prior art date
Classifications
- G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T 2207/10032: Satellite or aerial image; Remote sensing
- Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present application relates to the technical field of image fusion, in particular to a multi-source remote sensing image fusion method, device, equipment and storage medium.
- the complexity of large-scale image fusion processing determines that multi-source remote sensing image fusion is a computationally intensive process.
- the spatial resolution, spectral resolution, and temporal resolution of satellite remote sensing images have been greatly improved: spatial resolution has reached the decimeter level, spectral resolution has reached the nanometer level, the number of bands has increased to dozens or even hundreds, and the revisit time has shortened to days or even hours, which places higher requirements on the accuracy and speed of image fusion processing.
- US optical imaging reconnaissance satellites have developed to the sixth generation, represented by Keyhole-12; its ground resolution has reached 0.1 meters, and the data volume of a single image reaches gigabytes.
- the present application provides a multi-source remote sensing image fusion method, device, equipment and storage medium to solve the problem that the existing remote sensing image fusion speed is too slow.
- a technical solution adopted by this application is to provide a multi-source remote sensing image fusion method, including:
- the division of the multi-source remote sensing image data into multiple data to be processed based on preset rules includes:
- Data division is performed on the multi-source remote sensing image data by using a pre-trained data division model to obtain a preset number of the data to be processed.
- pre-training the data partition model includes:
- the data is sent to each node for image fusion processing according to the sample data division result, and the fusion results of each node are then summarized to generate a sample fused image;
- the above training process is executed cyclically until the data partition model meets the preset training requirements.
- the writing of the preliminary fusion data into the storage file of the final fusion result image includes:
- summarizing the preliminary fusion data of each node and writing the preliminary fusion data into the storage file of the final fusion result image includes:
- the image fusion algorithm includes at least one of HIS transformation fusion, YIQ transformation fusion, Brovey transformation fusion, direct average fusion, weighted average fusion, high-pass filter fusion, and wavelet fusion.
- the image fusion algorithm selected by the user is obtained as the preset target image fusion algorithm.
- a multi-source remote sensing image fusion device including:
- a receiving module configured to receive multi-source remote sensing image data input by a user
- a division module configured to divide the multi-source remote sensing image data into multiple data to be processed based on preset rules
- a fusion module configured to send the data to be processed to each node, and each node performs data fusion processing using a preset target image fusion algorithm to obtain preliminary fusion data;
- the summary module is used to summarize the preliminary fusion data of each node, and write the preliminary fusion data into the storage file of the final fusion result image.
- the computer device includes a processor and a memory coupled to the processor; program instructions are stored in the memory, and when the program instructions are executed by the processor, the processor is made to execute the steps of any one of the above multi-source remote sensing image fusion methods.
- another technical solution adopted by the present application is to provide a storage medium storing program instructions capable of implementing any of the above multi-source remote sensing image fusion methods.
- the multi-source remote sensing image fusion method of the present application divides the multi-source remote sensing image into multiple data to be processed after receiving it, then sends the multiple data to be processed to the distributed network; each node in the distributed network uses the preset target image fusion algorithm to perform data fusion processing on the data to be processed, yielding per-node preliminary fusion data, which is then summarized to obtain the final fused image. By applying the distributed principle to the fusion of multi-source remote sensing images, the fusion speed is greatly improved.
- FIG. 1 is a schematic flow diagram of a multi-source remote sensing image fusion method according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of functional modules of a multi-source remote sensing image fusion device according to an embodiment of the present invention
- Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
- “first”, “second”, and “third” in this application are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, features defined as “first”, “second”, and “third” may explicitly or implicitly include at least one of these features.
- “plurality” means at least two, such as two, three, etc., unless otherwise specifically defined. All directional indications (such as up, down, left, right, front, back, etc.) in the embodiments of the present application are only used to explain the relative positional relationship, motion states, etc., between the various components in a certain posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
- FIG. 1 is a schematic flowchart of a multi-source remote sensing image fusion method according to an embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in FIG. 1 if substantially the same result is obtained. As shown in Figure 1, the method includes steps:
- Step S101 Receive multi-source remote sensing image data input by the user.
- the multi-source remote sensing image data of the same area can be intelligently synthesized to produce more accurate, more complete and more reliable estimates and judgments than any single source; this improves the spatial resolution and clarity of images, improves the accuracy of planar mapping and the accuracy and reliability of classification, enhances interpretation and dynamic monitoring capabilities, reduces ambiguity, and effectively improves the utilization rate of remote sensing image data.
- Step S102 Divide the multi-source remote sensing image data into a plurality of data to be processed based on preset rules.
- this embodiment adopts a distributed idea, dividing the multi-source remote sensing image data into multiple pieces and then performing the data fusion operations in parallel. Therefore, after obtaining the multi-source remote sensing image data, it needs to be divided into multiple pieces. Specifically, after obtaining the multi-source remote sensing image data input by the user, the multi-source remote sensing image data is divided into a plurality of data to be processed according to preset rules.
- the preset rules include a variety of methods, for example:
- the point-centered partition method, also known as the one-dimensional partition method: the vertices in the data graph are evenly partitioned among different machines, and each vertex is stored together with all of its adjacent edges.
- the edge-based partitioning method, also known as vertex-cut or the two-dimensional partitioning method. Unlike one-dimensional partitioning, two-dimensional partitioning distributes the edges (rather than the vertices) of the graph to the computing nodes to achieve load balancing. The reason is that in most graph computing applications, the computing overhead is roughly proportional to the number of edges; if each computing node is allocated roughly the same number of edges, their computing loads are basically balanced.
- Hybrid-cut method The idea of mixed partitioning is to treat high-degree vertices and low-degree vertices differently.
- if the degree of the edge's end point is small, the hybrid partition allocates the edge according to the hash value of the end point; otherwise, it allocates the edge according to the hash value of the source point. In this way, all edges corresponding to vertices with smaller degrees are assigned to the same computing node (equivalent to using the one-dimensional partition method for these vertices), while edges corresponding to vertices with larger degrees are assigned to different computing nodes (equivalent to using the two-dimensional partition method for these vertices).
- the three-dimensional division method: the above three division methods all treat the attributes of a vertex or edge as an inseparable whole. However, in many data mining and machine learning applications, the weights of vertices and edges in the data graph are often vectors that can be further divided. This method therefore further splits each vertex in the data graph into sub-vertices, and assigns the sub-vertices derived from the same vertex to different computing nodes.
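As an illustration of the hybrid-cut idea described above, the following sketch assigns each edge by hashing one of its endpoints depending on that endpoint's degree. All names and the threshold value are hypothetical, not taken from the patent:

```python
# Illustrative hybrid-cut edge assignment: edges of low-degree destination
# vertices are co-located on one node (1-D behaviour); edges of high-degree
# vertices are spread across nodes (2-D behaviour).
def hybrid_cut(edges, degree, num_nodes, threshold=100):
    """Assign each edge (src, dst) to a compute node id."""
    assignment = {}
    for src, dst in edges:
        if degree[dst] <= threshold:
            # low-degree end point: hash on the destination, so all of its
            # edges land on the same computing node
            assignment[(src, dst)] = hash(dst) % num_nodes
        else:
            # high-degree end point: hash on the source, spreading the
            # vertex's edges over many computing nodes
            assignment[(src, dst)] = hash(src) % num_nodes
    return assignment
```

With a low threshold, a high-degree "hub" vertex would have its edges distributed by source, which is the load-balancing property the text describes.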
- step S102 specifically includes: performing data division on the multi-source remote sensing image data by using a pre-trained data division model, so as to obtain a preset number of the data to be processed.
- the data partition model is implemented based on the method of reinforcement learning.
- using the pre-trained neural network model as the data partition model, the data is divided and each piece of data is sent to the corresponding node, so that a complete fusion process can be assigned to several nodes to complete.
- pre-training the data division model includes:
- the above training process is executed cyclically until the data partition model meets the preset training requirements.
- the sample image data refers to pre-prepared multi-source remote sensing image sample data
- the actual fused image corresponding to the multi-source remote sensing image sample data is obtained in advance, and the data partition model to be trained is trained using pairs of the sample image data and the corresponding actual fused image, so as to obtain the trained data partition model.
- the data division model is realized based on a neural network model, for example, a convolutional neural network, a recurrent neural network, a long short-term memory neural network, and the like.
- the preset training requirement is preset by the user, which may be the accuracy of the prediction result or a specified number of training times.
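The training cycle described above (partition, per-node fusion, summarization, comparison against the actual fused image, repeat until the preset requirement is met) can be sketched with toy stand-ins. `ToyPartitionModel`, `fuse`, and `merge` are illustrative placeholders, not the patent's neural-network implementation:

```python
class ToyPartitionModel:
    """Stand-in 'data partition model': its only parameter is the chunk count."""
    def __init__(self, k=2):
        self.k = k
    def partition(self, data):
        n = max(1, len(data) // self.k)
        return [data[i:i + n] for i in range(0, len(data), n)]

def fuse(part):
    # toy per-node fusion: halve every value in the assigned chunk
    return [v / 2 for v in part]

def merge(results):
    # summarize the per-node results back into one "image"
    return [v for r in results for v in r]

def train(model, samples, max_iters=10):
    """Cyclically run partition -> fuse -> merge -> loss until the requirement is met."""
    for _ in range(max_iters):
        loss = 0.0
        for data, target in samples:
            fused = merge(fuse(p) for p in model.partition(data))
            loss += sum((a - b) ** 2 for a, b in zip(fused, target))
        if loss < 1e-9:      # preset training requirement met
            break
        model.k += 1         # crude stand-in for a loss-driven model update
    return model
```

A real implementation would backpropagate a loss between the sample fused image and the actual fused image; the loop structure, however, mirrors the cyclic process in the text.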
- Step S103 Send the data to be processed to each node, and each node performs data fusion processing using a preset target image fusion algorithm to obtain preliminary fusion data, and each node constitutes a distributed network.
- the multiple data to be processed are sent to each node respectively, and then each node performs data fusion processing on the data to be processed allocated to itself, and then each node obtains A preliminary fused data.
- the nodes in this embodiment refer to nodes in a pre-built distributed network, and each node can perform data fusion operations independently.
- Task parallelism requires that all tasks can be divided, and different tasks run on the nodes of each distributed network.
- Pipeline parallelism separates the processing step by step, and each node of the distributed network is responsible for an independent step.
- Data parallelism divides the data set and distributes it to the nodes of each distributed network. Each node performs similar operations. This model has the best load balancing and the best scalability.
- the fusion steps are relatively complicated and there is strong data dependence between them, so task parallelism is not suitable; the steps also differ greatly in running time, making pipeline parallelism difficult. However, the fusion operations on each pixel unit are basically the same, and because image data has the characteristics of consistency and neighborhood locality, the data-parallel mode is an ideal choice and suits current mainstream parallel computing systems. Therefore, in steps S102 to S103 of this embodiment, the multi-source remote sensing image data is first divided, and after obtaining multiple data to be processed, the data to be processed is handled in data-parallel mode to improve the efficiency of data fusion.
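The data-parallel mode described above can be sketched as follows: both input images are split into row blocks, each block pair is fused by a separate worker, and the results are reassembled. A thread pool stands in for the distributed network of nodes, and the weighted-average fusion is a deliberately simple stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def weighted_average_fuse(block_pair, w=0.5):
    """Toy per-node fusion: pixel-wise weighted average of two image blocks."""
    a, b = block_pair
    return [[w * x + (1 - w) * y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def parallel_fuse(img_a, img_b, num_nodes=4):
    """Split both images into row blocks, fuse the blocks in parallel, reassemble."""
    step = max(1, len(img_a) // num_nodes)
    blocks = [(img_a[i:i + step], img_b[i:i + step])
              for i in range(0, len(img_a), step)]
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:  # stands in for the nodes
        fused_blocks = list(pool.map(weighted_average_fuse, blocks))
    return [row for blk in fused_blocks for row in blk]      # summarize the results
```

Because every block undergoes the same operation, the load is naturally balanced, which is the property the text attributes to the data-parallel model.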
- the image fusion algorithm includes at least one of HIS transformation fusion, YIQ transformation fusion, Brovey transformation fusion, direct average fusion, weighted average fusion, high-pass filter fusion, and wavelet fusion, which is not limited in this embodiment.
- after step S101, the method also includes:
- the image fusion algorithm selected by the user is obtained as the preset target image fusion algorithm.
- existing image fusion algorithms are not suitable for all image types. For example, HIS transformation fusion, YIQ transformation fusion, and Brovey transformation fusion tend to distort the original spectral characteristics and cause spectral degradation; direct average fusion and weighted average fusion reduce the contrast of the image; high-pass filter fusion filters out most of the texture information when filtering high-resolution band images; wavelet fusion performs reasonably well at improving image resolution while retaining the spectral information of the source image, but is more complicated to implement.
- by providing a variety of image fusion algorithms, the user can, after submitting the multi-source remote sensing image data, select a suitable image fusion algorithm according to his own needs; this algorithm is taken as the preset target image fusion algorithm and then used for the data fusion processing.
- Step S104 Summarize the preliminary fusion data of each node, and write the preliminary fusion data into the storage file of the final fusion result image.
- after each node completes the data fusion process and obtains its preliminary fusion data, the preliminary fusion data fed back by each node is received, and all the preliminary fusion data is then written into the storage file of the final fusion result image.
- the writing of the preliminary fusion data into the storage file of the final fusion result image includes:
- the speed of the I/O part is always the slowest, and the amount of remote sensing image data is large.
- the preliminary fusion data of all nodes is obtained first and then written uniformly into the storage file of the final fusion result image, so that only one I/O operation is required, which saves algorithm execution time.
- the communication overhead within the fusion calculation itself can be eliminated, but the post-calculation results of each node are distributed, and the final fused image needs to be collected, spliced and written into the data file; the communication involved in data collection is therefore unavoidable.
- the communication overhead of collecting fusion results after parallel computing is relatively large; it can be reduced by hiding communication, i.e. overlapping communication with computation or I/O.
- the summarizing the preliminary fusion data of each node, and writing the preliminary fusion data to the storage file of the final fusion result image includes:
- the number of pre-divided data to be processed is used to confirm whether the quantity of preliminary fusion data returned by the nodes is correct; when all nodes have returned their preliminary fusion data, an I/O command is generated to write all the preliminary fusion data into the storage file of the final fusion result image at one time.
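The "collect everything, then issue a single write" strategy described above can be sketched as follows. The node-result representation and output path are illustrative, not the patent's file format:

```python
# Sketch of single-I/O summarization: refuse to write until every node has
# returned, then write all preliminary fusion data in one pass.
def gather_and_write(node_results, expected_count, path):
    """node_results: dict {node_id: bytes}. Returns True once the write happened."""
    if len(node_results) != expected_count:
        return False                      # not all nodes have returned yet
    with open(path, "wb") as f:           # the single I/O command for the write
        for node_id in sorted(node_results):
            f.write(node_results[node_id])
    return True
```

Batching the write this way trades memory (all results held at once) for a single I/O operation, which is the trade-off the text argues for given that I/O is the slowest part.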
- in the multi-source remote sensing image fusion method of the embodiment of the present invention, after the multi-source remote sensing image is received, it is divided into multiple data to be processed, which are then sent to the distributed network; each node in the distributed network uses the preset target image fusion algorithm to perform data fusion processing on the data to be processed, yielding per-node preliminary fusion data, which is then summarized to obtain the final fused image. By applying the distributed principle to the fusion of multi-source remote sensing images, the fusion speed is greatly improved.
- Fig. 2 is a schematic diagram of functional modules of a multi-source remote sensing image fusion device according to an embodiment of the present invention.
- the device 20 includes a receiving module 21 , a division module 22 , a fusion module 23 and a summary module 24 .
- a division module 22 configured to divide the multi-source remote sensing image data into a plurality of data to be processed based on preset rules
- the fusion module 23 is used to send the data to be processed to each node respectively, and each node uses a preset target image fusion algorithm to perform data fusion processing to obtain preliminary fusion data;
- the summarization module 24 is configured to summarize the preliminary fusion data of each node, and write the preliminary fusion data into the storage file of the final fusion result image.
- when the division module 22 performs the operation of dividing the multi-source remote sensing image data into multiple data to be processed based on preset rules, it may also use a pre-trained data division model to perform data division on the multi-source remote sensing image data, so as to obtain a preset number of the data to be processed.
- pre-training the data division model includes: obtaining sample image data and the actual fused image corresponding to the sample image data; inputting the sample image data into the data division model to be trained to obtain a sample data division result; sending the data to each node for image fusion processing according to the sample data division result, then summarizing the fusion results of each node to generate a sample fused image; updating the data division model based on the actual fused image, the sample fused image and a preset loss function; and cyclically executing the above training process until the data division model meets the preset training requirements.
- when the summary module 24 performs the operation of writing the preliminary fusion data into the storage file of the final fusion result image, it specifically: confirms the maximum I/O parallelism supported by its own hardware, and writes the preliminary fusion data into the storage file of the final fusion result image in I/O-parallel mode according to the maximum I/O parallelism.
- when the summarization module 24 performs the operation of summarizing the preliminary fusion data of each node and writing it into the storage file of the final fusion result image, it specifically: receives the preliminary fusion data returned by each node and confirms whether every node has returned; if so, it generates an I/O command for the write operation and executes the I/O command to write all the preliminary fusion data into the storage file of the final fusion result image at one time.
- the image fusion algorithm includes at least one of HIS transformation fusion, YIQ transformation fusion, Brovey transformation fusion, direct average fusion, weighted average fusion, high-pass filter fusion, and wavelet fusion.
- after the receiving module 21 performs the operation of receiving the multi-source remote sensing image data input by the user, it is further configured to obtain the image fusion algorithm selected by the user as the preset target image fusion algorithm.
- each embodiment in this specification is described in a progressive manner, with each embodiment focusing on its differences from the others; for the same and similar parts, the embodiments may refer to one another. Since the device embodiments are basically similar to the method embodiments, their description is relatively brief; for related details, refer to the description of the method embodiments.
- FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
- the computer device 30 includes a processor 31 and a memory 32 coupled to the processor 31.
- Program instructions are stored in the memory 32.
- when the program instructions are executed, the processor 31 performs the steps of the multi-source remote sensing image fusion method described in any of the above embodiments.
- the processor 31 may also be referred to as a CPU (Central Processing Unit, central processing unit).
- the processor 31 may be an integrated circuit chip with signal processing capability.
- the processor 31 can also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
- a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
- FIG. 4 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
- the storage medium of the embodiment of the present invention stores program instructions 41 capable of implementing all the above methods; the program instructions 41 may be stored in the storage medium in the form of a software product, including several instructions to make a computer device (which may be a personal computer, a server, a network device, etc.) or a processor execute all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk or an optical disc, as well as computer equipment such as computers, servers, mobile phones, and tablets.
- the disclosed computer equipment, devices and methods may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- the above integrated units can be implemented in the form of hardware or in the form of software functional units. The above is only an implementation of the present application and does not limit its patent scope. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether used directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.
Abstract
The present invention discloses a multi-source remote sensing image fusion method, device, equipment and storage medium. The method includes: receiving multi-source remote sensing image data input by a user; dividing the multi-source remote sensing image data into a plurality of data to be processed based on preset rules; distributing the data to be processed to individual nodes, each node performing data fusion processing using a preset target image fusion algorithm to obtain preliminary fusion data, the nodes together forming a distributed network; and summarizing the preliminary fusion data of the nodes and writing the preliminary fusion data into the storage file of the final fusion result image. By using a distributed network to carry out the fusion of multi-source remote sensing images, the present invention greatly improves the efficiency of image fusion and reduces the time required for multi-source remote sensing image fusion.
Description
The present application relates to the technical field of image fusion, and in particular to a multi-source remote sensing image fusion method, device, equipment and storage medium.
With the use of various new satellite sensors, the volume of remote sensing data has expanded rapidly, while the range of applications of remote sensing image data, and the processing speed those applications demand, keep rising. Remote sensing image fusion has already become a key and necessary step in remote sensing image production.
However, the complexity of large-scale image fusion processing means that multi-source remote sensing image fusion is a computationally intensive process. At present, the spatial, spectral and temporal resolution of satellite remote sensing images has improved greatly: spatial resolution has reached the decimeter level, spectral resolution the nanometer level, the number of bands has grown to dozens or even hundreds, and revisit time has shortened to days or even hours, which places higher demands on the accuracy and speed of image fusion processing. For example, US optical imaging reconnaissance satellites have developed to the sixth generation, represented by Keyhole-12, with a ground resolution of 0.1 meters and single images already reaching gigabytes of data. As remote sensing fusion technology develops rapidly and the resolution of remote sensing images keeps increasing, the large volume of data and the complexity of the operations mean that remote sensing image processing demands massive computation: for example, fusing one QuickBird HRP image (12,000 × 12,000) with a four-band LRM image using PCI Geomatica, commercial remote sensing image processing software developed by the Canadian company APOLLO, takes 31 minutes on a Pentium 4 desktop computer with a 1.8 GHz CPU and 1 GB of memory. Existing remote sensing image fusion methods are thus far from meeting the ever-growing demand for remote sensing data processing.
Summary of the Invention
The present application provides a multi-source remote sensing image fusion method, device, equipment and storage medium to solve the problem that existing remote sensing image fusion is too slow.
To solve the above technical problem, one technical solution adopted by the present application is to provide a multi-source remote sensing image fusion method, including:
receiving multi-source remote sensing image data input by a user;
dividing the multi-source remote sensing image data into a plurality of data to be processed based on preset rules;
distributing the data to be processed to individual nodes, each node performing data fusion processing using a preset target image fusion algorithm to obtain preliminary fusion data, the nodes together forming a distributed network;
summarizing the preliminary fusion data of the nodes, and writing the preliminary fusion data into the storage file of the final fusion result image.
As a further improvement of the present application, dividing the multi-source remote sensing image data into a plurality of data to be processed based on preset rules includes:
performing data division on the multi-source remote sensing image data with a pre-trained data division model, so as to obtain a preset number of the data to be processed.
As a further improvement of the present application, pre-training the data division model includes:
obtaining sample image data and an actual fused image corresponding to the sample image data;
inputting the sample image data into the data division model to be trained to obtain a sample data division result;
distributing the data to the nodes for image fusion processing according to the sample data division result, then summarizing the fusion results of the nodes to generate a sample fused image;
updating the data division model based on the actual fused image, the sample fused image and a preset loss function;
cyclically executing the above training process until the data division model meets the preset training requirement.
As a further improvement of the present application, writing the preliminary fusion data into the storage file of the final fusion result image includes:
determining the maximum I/O parallelism supported by the hardware itself;
writing the preliminary fusion data into the storage file of the final fusion result image in I/O-parallel fashion according to the maximum I/O parallelism.
As a further improvement of the present application, summarizing the preliminary fusion data of the nodes and writing the preliminary fusion data into the storage file of the final fusion result image includes:
receiving the preliminary fusion data returned by the nodes, and confirming whether every node has returned;
if so, generating an I/O command for the write operation, and executing the I/O command to write all the preliminary fusion data into the storage file of the final fusion result image in one pass.
As a further improvement of the present application, the image fusion algorithm includes at least one of HIS transform fusion, YIQ transform fusion, Brovey transform fusion, direct average fusion, weighted average fusion, high-pass filter fusion, and wavelet fusion.
As a further improvement of the present application, after receiving the multi-source remote sensing image data input by the user, the method further includes:
obtaining the image fusion algorithm selected by the user as the preset target image fusion algorithm.
To solve the above technical problem, another technical solution adopted by the present application is to provide a multi-source remote sensing image fusion device, including:
a receiving module, configured to receive multi-source remote sensing image data input by a user;
a division module, configured to divide the multi-source remote sensing image data into a plurality of data to be processed based on preset rules;
a fusion module, configured to distribute the data to be processed to individual nodes, each node performing data fusion processing using a preset target image fusion algorithm to obtain preliminary fusion data;
a summary module, configured to summarize the preliminary fusion data of the nodes and write the preliminary fusion data into the storage file of the final fusion result image.
为解决上述技术问题,本申请采用的再一个技术方案是:提供一种计算机设备,所述计算机设备包括处理器、与所述处理器耦接的存储器,所述存储器中存储有程序指令,所述程序指令被所述处理器执行时,使得所述处理器执行如上述任一项的多源遥感图像融合方法的步骤。
为解决上述技术问题,本申请采用的再一个技术方案是:提供一种存储介质,存储有能够实现上述任一项的多源遥感图像融合方法的程序指令。
本申请的有益效果是:本申请的多源遥感图像融合方法通过在接收到多源遥感图像后,将多源遥感图像划分为多个待处理数据,再将多个待处理数据发送至分布式网络中,由分布式网络中的各个节点利用预设的目标图像融合算法分别对待处理数据进行数据融合处理,得到每个节点处理后的初步融合数据,再将各个节点的初步融合数据进行汇总,从而得到最终融合图像,其利用分布式的原理以对多源遥感图像的融合进行处理,从而大幅度提升了多源遥感图像的融合速度。
Figure 1 is a flowchart of the multi-source remote sensing image fusion method according to an embodiment of the present invention;
Figure 2 is a functional module diagram of the multi-source remote sensing image fusion apparatus according to an embodiment of the present invention;
Figure 3 is a structural diagram of the computer device according to an embodiment of the present invention;
Figure 4 is a structural diagram of the storage medium according to an embodiment of the present invention.
To make the objectives, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the application and do not limit it.
The terms "first", "second", and "third" in this application are for descriptive purposes only and shall not be understood as indicating or implying relative importance or the number of technical features referred to; a feature qualified by "first", "second", or "third" may thus explicitly or implicitly include at least one such feature. In the description of this application, "multiple" means at least two, for example two or three, unless expressly and specifically limited otherwise. All directional indications in the embodiments (such as up, down, left, right, front, back, ...) are used only to explain the relative positions and movements of components in a particular posture (as shown in the drawings); if that posture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion: a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units but optionally also includes unlisted steps or units, or optionally also includes other steps or units inherent to such a process, method, product, or device.
Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Figure 1 is a flowchart of the multi-source remote sensing image fusion method according to an embodiment of the present invention. Note that the method of the present invention is not limited to the order of steps shown in Figure 1 provided substantially the same result is obtained. As shown in Figure 1, the method comprises the following steps.
Step S101: receive multi-source remote sensing image data input by a user.
It should be noted that, with the development of remote sensing technology and the Earth-observation use of many different satellite sensors (optical, thermal infrared, microwave, and so on), an increasing number of remote sensing images of the same area are acquired (multi-temporal, multi-spectral, multi-sensor, multi-platform, and multi-resolution); this is multi-source remote sensing. Compared with single-source imagery, the information provided by multi-source remote sensing imagery is redundant, complementary, and cooperative. Redundancy means the sources represent, describe, or interpret the environment or target identically; complementarity means the information comes from different, mutually independent degrees of freedom; cooperative information means one sensor depends on information from the others when observing and processing. By fusing multi-source remote sensing image data, imagery of the same area can be intelligently combined to produce estimates and judgments that are more accurate, more complete, and more reliable than any single source, improving the spatial resolution and clarity of the imagery, raising the accuracy of planimetric mapping and the accuracy and reliability of classification, enhancing interpretation and dynamic-monitoring capability, reducing ambiguity, and effectively raising the utilization rate of remote sensing image data.
Step S102: divide the multi-source remote sensing image data into multiple pieces of to-be-processed data according to a preset rule.
It should be understood that, to improve fusion efficiency, this embodiment adopts a distributed approach: the fusion of the multi-source remote sensing image data is divided into multiple pieces of data, and the fusion operations then run in parallel. Accordingly, after the user's multi-source remote sensing image data is received, it is divided into multiple pieces of to-be-processed data according to a preset rule. Several such rules exist, for example:
1. Vertex-centric partitioning, also called one-dimensional partitioning: the vertices of the data graph are distributed evenly across machines, and each vertex is stored together with all of its adjacent edges.
2. Edge-based partitioning, also called vertex-cut or two-dimensional partitioning. Unlike one-dimensional partitioning, it distributes the edges of the graph (rather than the vertices) evenly across compute nodes to balance the load. The rationale is that in most graph computations the cost is roughly proportional to the number of edges, so nodes assigned roughly equal numbers of edges carry roughly equal computational loads.
3. Hybrid-cut partitioning, whose idea is to treat high-degree and low-degree vertices differently. If the degree of an edge's destination vertex is below a preset threshold, the edge is assigned by the hash of its destination; otherwise it is assigned by the hash of its source. As a result, all edges of a low-degree vertex land on the same compute node (effectively one-dimensional partitioning for those vertices), while the edges of a high-degree vertex are spread over different compute nodes (effectively two-dimensional partitioning).
4. Three-dimensional partitioning. The three methods above treat the attributes of a vertex or edge as an indivisible whole, but in many data mining and machine learning applications the weight of a vertex or edge is a vector that can itself be divided. This method therefore splits each vertex of the data graph into sub-vertices and assigns the different sub-vertices of one vertex to different compute nodes.
Beyond the preset rules listed above, any other method capable of partitioning multi-source remote sensing image data falls within the protection scope of the present invention.
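Of the preset rules above, the hybrid-cut (rule 3) is the easiest to misread, so its assignment logic is sketched below. This is an illustrative sketch, not code from the application; the function name and data shapes are assumptions.

```python
def hybrid_cut(edges, degrees, n_nodes, threshold):
    """Assign each directed edge (src, dst) to a compute node.

    Edges whose destination has a degree below `threshold` are hashed by
    destination (1-D style), so all such edges land on one node; edges into
    high-degree destinations are hashed by source (2-D style), spreading
    them across nodes.
    """
    assignment = {}
    for src, dst in edges:
        key = dst if degrees[dst] < threshold else src
        assignment[(src, dst)] = hash(key) % n_nodes
    return assignment
```

With this rule, a low-degree vertex keeps all of its incident edges on a single node, while a hub vertex's edges are distributed for load balance.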
It should be noted that, in this embodiment, to make the partitioning smarter and more accurate, a machine learning algorithm is preferably used to carry it out. Specifically, step S102 comprises: using a pre-trained data partitioning model to partition the multi-source remote sensing image data into a preset number of pieces of to-be-processed data.
Specifically, the data partitioning model is realized with a reinforcement learning approach: a pre-trained neural network model serves as the data partitioning model, each piece of data is sent to its node according to the mapping rule output by the model, and a complete fusion job is thereby distributed across several nodes.
Further, pre-training the data partitioning model comprises:
1. obtaining sample image data and the actual fused image corresponding to the sample image data;
2. feeding the sample image data into the data partitioning model to be trained to obtain a sample partitioning result;
3. distributing the data to the nodes for image fusion according to the sample partitioning result, then aggregating the nodes' fusion results to generate a sample fused image;
4. updating the data partitioning model based on the actual fused image, the sample fused image, and a preset loss function;
repeating the above training process until the data partitioning model meets a preset training requirement.
Specifically, the sample image data is multi-source remote sensing sample data prepared in advance, and its corresponding actual fused image is obtained beforehand; the model under training is trained with the sample image data and its actual fused images, yielding the trained data partitioning model. The model is based on a neural network, for example a convolutional neural network, a recurrent neural network, or a long short-term memory network. In this embodiment, the preset training requirement is set by the user in advance; it may be a required prediction accuracy or a specified number of training iterations.
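The five training steps can be sketched as a loop. The application does not publish its model architecture or loss, so everything below is assumed for illustration: the "model" is just a vector of strip proportions, average fusion stands in for the node algorithm, the loss adds a load-balance penalty to the fidelity term, and a hill-climbing update replaces the unspecified reinforcement learning update.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_via_partition(img_a, img_b, props):
    # split rows by the model's proportions, fuse each strip, restitch
    idx = (np.cumsum(props)[:-1] * img_a.shape[0]).astype(int)
    return np.vstack([(a + b) / 2 for a, b in
                      zip(np.split(img_a, idx), np.split(img_b, idx))])

def train_partition_model(samples, init_props, steps=200, balance=1.0):
    """Hill-climbing stand-in for training steps 1-5 above."""
    props = np.asarray(init_props, dtype=float)

    def loss(p):
        # step 4: sample fused image vs. actual fused image, plus a
        # load-balance penalty on the partition (assumed, not from the text)
        fid = sum(np.mean((fuse_via_partition(a, b, p) - y) ** 2)
                  for a, b, y in samples)
        return fid + balance * np.var(p)

    best = loss(props)
    for _ in range(steps):  # step 5: loop until the preset requirement
        cand = np.clip(props + rng.normal(0, 0.05, props.size), 0.01, None)
        cand /= cand.sum()
        cand_loss = loss(cand)
        if cand_loss < best:  # keep the update only if it helps
            best, props = cand_loss, cand
    return props, best
```

Starting from a skewed partition, the balance penalty pushes the learned proportions toward an even split, mirroring the goal of the partitioning model.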
Step S103: distribute the to-be-processed data to individual nodes, which together form a distributed network, and have each node perform data fusion with the preset target image fusion algorithm to obtain preliminary fusion data.
Specifically, after the multiple pieces of to-be-processed data are obtained, they are distributed to the nodes; each node then fuses the data assigned to it, so that each node produces one piece of preliminary fusion data. The nodes in this embodiment belong to a pre-built distributed network, and each node can perform the fusion operation on its own.
It should be noted that parallel algorithms are typically implemented in three modes: task parallelism, pipeline parallelism, and data parallelism. Task parallelism requires that all tasks be separable, with different tasks running on different nodes of the distributed network. Pipeline parallelism splits the processing into stages, with each node responsible for one independent stage. Data parallelism splits the data set among the nodes, each performing a similar operation; this model has the best load balance and the best scalability. For remote sensing image fusion, the fusion steps are fairly complex and strongly data-dependent, so task parallelism is unsuitable; the stages also differ greatly in cost, which makes pipeline parallelism difficult. The fusion operation on each pixel unit, however, is essentially identical, and image data is uniform and neighborhood-local, so data parallelism is the natural choice and also suits mainstream parallel computing systems. Accordingly, in steps S102 and S103 this embodiment first partitions the multi-source remote sensing image data into multiple pieces of to-be-processed data and then processes them in data-parallel mode to improve fusion efficiency.
Further, the image fusion algorithm comprises at least one of HIS transform fusion, YIQ transform fusion, Brovey transform fusion, direct averaging fusion, weighted averaging fusion, high-pass filter fusion, and wavelet fusion; this embodiment imposes no limitation.
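Two of the listed algorithms are simple enough to sketch directly. The array layout (bands, rows, cols), the epsilon guard against division by zero, and the function names are assumptions for illustration, not the application's implementation.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-6):
    """Brovey transform fusion: scale each multispectral band by the ratio
    of the panchromatic band to the per-pixel sum of the MS bands.
    `ms` has shape (bands, rows, cols); `pan` has shape (rows, cols)."""
    ratio = pan / (ms.sum(axis=0) + eps)
    return ms * ratio[None, :, :]

def weighted_average_fuse(img_a, img_b, w=0.5):
    # weighted averaging of two co-registered images;
    # w = 0.5 reduces to direct averaging
    return w * img_a + (1 - w) * img_b
```

When the panchromatic band equals the sum of the MS bands, the Brovey ratio is 1 and the output reproduces the input, which is a convenient sanity check.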
Further, after step S101, the method also comprises:
obtaining the image fusion algorithm selected by the user and using it as the preset target image fusion algorithm.
Specifically, it should be noted that no existing fusion algorithm suits every image type: HIS, YIQ, and Brovey transform fusion tend to distort the original spectral characteristics and cause spectral degradation; direct averaging and weighted averaging reduce image contrast; high-pass filter fusion removes most of the texture information when filtering the high-resolution band; and wavelet fusion preserves the source spectral information remarkably well while raising resolution, but is relatively complex to implement. This embodiment therefore offers multiple fusion algorithms: after submitting the multi-source remote sensing image data, the user selects the algorithm suited to their needs, that algorithm becomes the preset target image fusion algorithm, and the data fusion is then performed with it.
Step S104: aggregate the preliminary fusion data from all nodes and write it into the storage file of the final fused result image.
Specifically, once every node has finished its fusion and produced preliminary fusion data, the data returned by the nodes is received, and all of it is written into the storage file of the final fused result image.
Further, writing the preliminary fusion data into the storage file of the final fused result image comprises:
1. determining the maximum I/O parallelism supported by the local hardware;
2. writing the preliminary fusion data into the storage file of the final fused result image with parallel I/O, according to that maximum I/O parallelism.
It should be noted that I/O is always the slowest part of a computer system, and remote sensing data volumes are large. To run as fast as possible, the preliminary fusion data of all nodes is first collected and then written to the storage file of the final fused result image in one pass, so only a single I/O operation is needed, saving algorithm execution time. Moreover, although sensible data partitioning eliminates communication during the fusion computation itself, the per-node results are distributed and must be collected, stitched, and written to the data file before the final fused image exists, so the communication phase for result collection is unavoidable; and because remote sensing data volumes are large, its communication cost after parallel computation is significant. That cost can be hidden by overlapping communication with computation or with I/O. Result collection can be implemented in two ways: collect the preliminary fusion data of all nodes and then write it to the result file in one pass, or write each node's preliminary fusion data to the result file as it arrives. Either way, the file writes execute serially, so the larger the fused image, the longer the I/O time and the greater the impact of I/O on parallel efficiency.
Therefore, to raise fusion efficiency further, this embodiment determines the maximum I/O parallelism supported by the system's software and hardware and writes all collected preliminary fusion data to the storage file of the final fused result image with parallel I/O. Because different nodes fuse different parts of the image and their results occupy different positions in the result file, storing the nodes' preliminary fusion data never conflicts; with hardware support for concurrent writes to the same file, parallel I/O is entirely feasible, and the write completes faster than the serial alternative, saving more time.
Further, aggregating the preliminary fusion data from all nodes and writing it into the storage file of the final fused result image comprises:
1. receiving the preliminary fusion data returned by each node and checking whether every node has reported back;
2. if so, generating an I/O instruction for the write operation and executing it to write all of the preliminary fusion data into the storage file of the final fused result image in a single pass.
Specifically, during the preliminary fusion, the number of pieces of to-be-processed data determined in the partitioning step is used to verify that the preliminary fusion data returned by the nodes is complete; once every node has reported back, the I/O instruction is generated, and all of the preliminary fusion data is written into the storage file of the final fused result image in a single pass.
In summary, the multi-source remote sensing image fusion method of this embodiment of the invention divides the received multi-source remote sensing images into multiple pieces of to-be-processed data, sends them to a distributed network whose nodes each fuse their piece with the preset target image fusion algorithm to produce preliminary fusion data, and aggregates the nodes' preliminary fusion data to obtain the final fused image; by handling the fusion in a distributed manner, it greatly increases the fusion speed of multi-source remote sensing images.
Figure 2 is a functional module diagram of the multi-source remote sensing image fusion apparatus according to an embodiment of the present invention. As shown in Figure 2, the apparatus 20 comprises a receiving module 21, a partitioning module 22, a fusion module 23, and an aggregation module 24.
The receiving module 21 is used to receive multi-source remote sensing image data input by a user;
the partitioning module 22 is used to divide the multi-source remote sensing image data into multiple pieces of to-be-processed data according to a preset rule;
the fusion module 23 is used to distribute the to-be-processed data to individual nodes, each of which performs data fusion with a preset target image fusion algorithm to obtain preliminary fusion data;
the aggregation module 24 is used to aggregate the preliminary fusion data of the nodes and write it into the storage file of the final fused result image.
Optionally, when dividing the multi-source remote sensing image data into multiple pieces of to-be-processed data according to a preset rule, the partitioning module 22 may also use a pre-trained data partitioning model to partition the data into a preset number of pieces of to-be-processed data.
Optionally, pre-training the data partitioning model comprises: obtaining sample image data and the actual fused image corresponding to it; feeding the sample image data into the model to be trained to obtain a sample partitioning result; distributing the data to the nodes for image fusion according to that result, then aggregating the nodes' fusion results to generate a sample fused image; updating the model based on the actual fused image, the sample fused image, and a preset loss function; and repeating this training process until the model meets a preset training requirement.
Optionally, when writing the preliminary fusion data into the storage file of the final fused result image, the aggregation module 24 specifically determines the maximum I/O parallelism supported by the local hardware and writes the preliminary fusion data into the storage file with parallel I/O according to that maximum.
Optionally, when aggregating the preliminary fusion data of the nodes and writing it into the storage file of the final fused result image, the aggregation module 24 specifically receives the preliminary fusion data returned by each node, checks whether every node has reported back, and if so generates and executes an I/O instruction for the write operation that writes all of the preliminary fusion data into the storage file in a single pass.
Optionally, the image fusion algorithm comprises at least one of HIS transform fusion, YIQ transform fusion, Brovey transform fusion, direct averaging fusion, weighted averaging fusion, high-pass filter fusion, and wavelet fusion.
Optionally, after receiving the multi-source remote sensing image data input by the user, the receiving module 21 is further used to obtain the image fusion algorithm selected by the user and use it as the preset target image fusion algorithm.
For other implementation details of the modules of the multi-source remote sensing image fusion apparatus of the above embodiment, see the description of the multi-source remote sensing image fusion method in the above embodiments; they are not repeated here.
It should be noted that the embodiments in this specification are described progressively, each focusing on its differences from the others; for what they share, the embodiments may be consulted against one another. The apparatus embodiments, being essentially similar to the method embodiments, are described briefly; see the method embodiments for the relevant points.
Refer to Figure 3, a structural diagram of the computer device according to an embodiment of the present invention. As shown in Figure 3, the computer device 30 comprises a processor 31 and a memory 32 coupled to the processor 31; the memory 32 stores program instructions which, when executed by the processor 31, cause the processor 31 to perform the steps of the multi-source remote sensing image fusion method of any of the embodiments above.
The processor 31 may also be called a CPU (Central Processing Unit). It may be an integrated circuit chip with signal processing capability, or a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Refer to Figure 4, a structural diagram of the storage medium according to an embodiment of the present invention. The storage medium of this embodiment stores program instructions 41 capable of implementing all of the methods above; the program instructions 41 may be stored in the storage medium as a software product, comprising several instructions that cause a computer device (which may be a personal computer, a server, a network device, and so on) or a processor to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes a USB drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code, or a computer device such as a computer, server, mobile phone, or tablet.
In the several embodiments provided in this application, it should be understood that the disclosed computer device, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
In addition, the functional units of the embodiments of the present invention may be integrated into one processing unit, may exist physically on their own, or may be integrated two or more to a unit; an integrated unit may be implemented in hardware or as a software functional unit. The foregoing describes only embodiments of this application and does not thereby limit its patent scope; any equivalent structural or flow transformation made using the contents of this specification and drawings, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of this application.
Claims (10)
- A multi-source remote sensing image fusion method, characterized by comprising: receiving multi-source remote sensing image data input by a user; dividing the multi-source remote sensing image data into multiple pieces of to-be-processed data according to a preset rule; distributing the to-be-processed data to individual nodes and having each node perform data fusion processing with a preset target image fusion algorithm to obtain preliminary fusion data, the nodes together forming a distributed network; and aggregating the preliminary fusion data of the nodes and writing it into the storage file of the final fused result image.
- The multi-source remote sensing image fusion method of claim 1, characterized in that dividing the multi-source remote sensing image data into multiple pieces of to-be-processed data according to a preset rule comprises: using a pre-trained data partitioning model to partition the multi-source remote sensing image data into a preset number of pieces of to-be-processed data.
- The multi-source remote sensing image fusion method of claim 2, characterized in that pre-training the data partitioning model comprises: obtaining sample image data and the actual fused image corresponding to the sample image data; feeding the sample image data into the data partitioning model to be trained to obtain a sample partitioning result; distributing the data to the nodes for image fusion according to the sample partitioning result, then aggregating the nodes' fusion results to generate a sample fused image; updating the data partitioning model based on the actual fused image, the sample fused image, and a preset loss function; and repeating the above training process until the data partitioning model meets a preset training requirement.
- The multi-source remote sensing image fusion method of claim 1, characterized in that writing the preliminary fusion data into the storage file of the final fused result image comprises: determining the maximum I/O parallelism supported by the local hardware; and writing the preliminary fusion data into the storage file of the final fused result image with parallel I/O according to that maximum I/O parallelism.
- The multi-source remote sensing image fusion method of claim 1, characterized in that aggregating the preliminary fusion data of the nodes and writing it into the storage file of the final fused result image comprises: receiving the preliminary fusion data returned by each node and checking whether every node has reported back; and if so, generating an I/O instruction for the write operation and executing it to write all of the preliminary fusion data into the storage file of the final fused result image in a single pass.
- The multi-source remote sensing image fusion method of claim 1, characterized in that the image fusion algorithm comprises at least one of HIS transform fusion, YIQ transform fusion, Brovey transform fusion, direct averaging fusion, weighted averaging fusion, high-pass filter fusion, and wavelet fusion.
- The multi-source remote sensing image fusion method of claim 6, characterized in that, after receiving the multi-source remote sensing image data input by the user, the method further comprises: obtaining the image fusion algorithm selected by the user and using it as the preset target image fusion algorithm.
- A multi-source remote sensing image fusion apparatus, characterized by comprising: a receiving module for receiving multi-source remote sensing image data input by a user; a partitioning module for dividing the multi-source remote sensing image data into multiple pieces of to-be-processed data according to a preset rule; a fusion module for distributing the to-be-processed data to individual nodes, each of which performs data fusion with a preset target image fusion algorithm to obtain preliminary fusion data; and an aggregation module for aggregating the preliminary fusion data of the nodes and writing it into the storage file of the final fused result image.
- A computer device, characterized in that the computer device comprises a processor and a memory coupled to the processor, the memory storing program instructions which, when executed by the processor, cause the processor to perform the steps of the multi-source remote sensing image fusion method of any one of claims 1 to 7.
- A storage medium, characterized by storing program instructions capable of implementing the multi-source remote sensing image fusion method of any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210200382.7 | 2022-03-01 | ||
CN202210200382.7A CN114529489B (zh) | 2022-03-01 | 2022-03-01 | Multi-source remote sensing image fusion method, apparatus, device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023164929A1 true WO2023164929A1 (zh) | 2023-09-07 |
Family
ID=81627582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/079283 WO2023164929A1 (zh) | Multi-source remote sensing image fusion method, apparatus, device, and storage medium | 2022-03-01 | 2022-03-04 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114529489B (zh) |
WO (1) | WO2023164929A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117150437A (zh) * | 2023-10-31 | 2023-12-01 | 航天宏图信息技术股份有限公司 | Multi-source satellite sea-surface wind field data fusion method, apparatus, device, and medium |
CN117195292A (zh) * | 2023-09-08 | 2023-12-08 | 广州星屋智能科技有限公司 | Electric power service evaluation method based on data fusion and edge computing |
CN117392539A (zh) * | 2023-10-13 | 2024-01-12 | 哈尔滨师范大学 | Deep-learning-based river water body recognition method, electronic device, and storage medium |
CN118506040A (zh) * | 2024-07-12 | 2024-08-16 | 中国科学院空天信息创新研究院 | Remote sensing image clustering method, apparatus, device, and medium |
CN118522156A (zh) * | 2024-07-23 | 2024-08-20 | 江西省赣地智慧科技有限公司 | Urban road traffic management system and method based on remote sensing imagery |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116342449B (zh) * | 2023-03-29 | 2024-01-16 | 银河航天(北京)网络技术有限公司 | Image enhancement method, apparatus, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109118461A (zh) * | 2018-07-06 | 2019-01-01 | 航天星图科技(北京)有限公司 | HIS fusion method based on a distributed framework |
CN111079515A (zh) * | 2019-10-29 | 2020-04-28 | 深圳先进技术研究院 | Regional monitoring method, apparatus, terminal, and storage medium based on remote sensing big data |
CN111524063A (zh) * | 2019-12-24 | 2020-08-11 | 珠海大横琴科技发展有限公司 | Remote sensing image fusion method and apparatus |
US20200302249A1 (en) * | 2019-03-19 | 2020-09-24 | Mitsubishi Electric Research Laboratories, Inc. | Systems and Methods for Multi-Spectral Image Fusion Using Unrolled Projected Gradient Descent and Convolutinoal Neural Network |
CN113032350A (zh) * | 2021-05-27 | 2021-06-25 | 开采夫(杭州)科技有限公司 | Remote sensing data processing method, system, electronic device, and storage medium |
CN113496148A (zh) * | 2020-03-19 | 2021-10-12 | 中科星图股份有限公司 | Multi-source data fusion method and system |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521815B (zh) * | 2011-11-02 | 2013-11-13 | 薛笑荣 | Fast image fusion system and fast fusion method |
CN102799898A (zh) * | 2012-06-28 | 2012-11-28 | 浙江大学 | Efficient processing method for target recognition in high-resolution remote sensing images over large backgrounds |
CN103019671B (zh) * | 2012-10-08 | 2016-08-17 | 中国科学院对地观测与数字地球科学中心 | Generic-programming framework method for data-intensive remote sensing image processing |
CN106991665B (zh) * | 2017-03-24 | 2020-03-17 | 中国人民解放军国防科学技术大学 | Method based on CUDA parallel computing for image fusion |
WO2020134856A1 (zh) * | 2018-12-29 | 2020-07-02 | 长沙天仪空间科技研究院有限公司 | Remote sensing satellite system |
CN110120047B (zh) * | 2019-04-04 | 2023-08-08 | 平安科技(深圳)有限公司 | Image segmentation model training method, image segmentation method, apparatus, device, and medium |
CN110889816B (zh) * | 2019-11-07 | 2022-12-16 | 拜耳股份有限公司 | Image segmentation method and apparatus |
CN111723221B (zh) * | 2020-06-19 | 2023-09-15 | 珠江水利委员会珠江水利科学研究院 | Massive remote sensing data processing method and system based on a distributed architecture |
CN111754446A (zh) * | 2020-06-22 | 2020-10-09 | 怀光智能科技(武汉)有限公司 | Image fusion method, system, and storage medium based on a generative adversarial network |
CN111932457B (zh) * | 2020-08-06 | 2023-06-06 | 北方工业大学 | High spatio-temporal fusion processing algorithm and apparatus for remote sensing imagery |
CN112138394B (zh) * | 2020-10-16 | 2022-05-03 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN112862871A (zh) * | 2021-01-20 | 2021-05-28 | 华中科技大学 | Image fusion method and apparatus |
CN113222835B (zh) * | 2021-04-22 | 2023-04-14 | 海南大学 | Distributed fusion method for remote sensing panchromatic and multispectral images based on residual networks |
-
2022
- 2022-03-01 CN CN202210200382.7A patent/CN114529489B/zh active Active
- 2022-03-04 WO PCT/CN2022/079283 patent/WO2023164929A1/zh unknown
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117195292A (zh) * | 2023-09-08 | 2023-12-08 | 广州星屋智能科技有限公司 | Electric power service evaluation method based on data fusion and edge computing |
CN117195292B (zh) * | 2023-09-08 | 2024-04-09 | 广州星屋智能科技有限公司 | Electric power service evaluation method based on data fusion and edge computing |
CN117392539A (zh) * | 2023-10-13 | 2024-01-12 | 哈尔滨师范大学 | Deep-learning-based river water body recognition method, electronic device, and storage medium |
CN117392539B (zh) * | 2023-10-13 | 2024-04-09 | 哈尔滨师范大学 | Deep-learning-based river water body recognition method, electronic device, and storage medium |
CN117150437A (zh) * | 2023-10-31 | 2023-12-01 | 航天宏图信息技术股份有限公司 | Multi-source satellite sea-surface wind field data fusion method, apparatus, device, and medium |
CN117150437B (zh) * | 2023-10-31 | 2024-01-30 | 航天宏图信息技术股份有限公司 | Multi-source satellite sea-surface wind field data fusion method, apparatus, device, and medium |
CN118506040A (zh) * | 2024-07-12 | 2024-08-16 | 中国科学院空天信息创新研究院 | Remote sensing image clustering method, apparatus, device, and medium |
CN118522156A (zh) * | 2024-07-23 | 2024-08-20 | 江西省赣地智慧科技有限公司 | Urban road traffic management system and method based on remote sensing imagery |
Also Published As
Publication number | Publication date |
---|---|
CN114529489B (zh) | 2024-10-25 |
CN114529489A (zh) | 2022-05-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22929374 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |