CN113409232B - Bionic false color image fusion model and method based on rattlesnake visual imaging - Google Patents

Bionic false color image fusion model and method based on rattlesnake visual imaging

Info

Publication number
CN113409232B
CN113409232B
Authority
CN
China
Prior art keywords
image
visible light
infrared
enhanced
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110667804.7A
Other languages
Chinese (zh)
Other versions
CN113409232A (en)
Inventor
王勇
刘红旗
李新潮
谢文洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202110667804.7A
Publication of CN113409232A
Application granted
Publication of CN113409232B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a bionic false color image fusion model and method based on rattlesnake visual imaging. The model performs image preprocessing by extracting the common information and specific information of an infrared source image and a visible light source image, thereby improving the quality of the fused image; an image fusion structure is designed by introducing a rattlesnake dual-mode cell mathematical model, so that the rattlesnake dual-mode cell fusion mechanism is effectively utilized and the rattlesnake visual perception mechanism is better simulated; the resulting fused image has improved color performance, clearer details and a more prominent target, and better accords with the visual characteristics of the human eye.

Description

Bionic false color image fusion model and method based on rattlesnake visual imaging
Technical Field
The invention relates to the technical field of image fusion processing, and in particular to a bionic false color image fusion model and method based on rattlesnake visual imaging.
Background
Image fusion technology aims to integrate the image information of multiple images, each with its own strengths and weaknesses, obtained by multiple sensors in the same environment, and to generate a single fused image that carries more information, from which more accurate information can then be acquired. To further study image fusion, some researchers have taken the rattlesnake as a study object and simulated its visual imaging mechanism; for example, A. M. Waxman et al. of the Massachusetts Institute of Technology proposed a fusion structure for low-light-level and infrared images based on a visual receptive field model that imitates the working principle of rattlesnake dual-mode cells.
In the Waxman fusion structure, the ON/OFF structure exhibits the contrast-perception property of the center-surround antagonistic receptive field; the first stage is an enhancement stage, and the second stage processes the infrared-enhanced visible light and the infrared-suppressed visible light, which is consistent with the fusion mechanism of infrared and visible light in rattlesnake vision. The Waxman fusion structure simulates the 'infrared enhanced visible light cell' and the 'infrared suppressed visible light cell'; although the infrared signals are enhanced by OFF antagonism and ON antagonism respectively and fed into the surround area of the ganglion cell, they are in essence suppressed, so the enhancement of the visible light signals by the infrared signals is not obvious, and the resulting fused image is not ideal in color performance, with an indistinct target and weak detail.
Therefore, how to provide a bionic false color image fusion method with a better fusion effect based on rattlesnake visual imaging is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a bionic false color image fusion model and method based on rattlesnake visual imaging, which solve the problems that the fused image obtained by existing image fusion methods is not ideal enough in color performance and that its target and details are not obvious enough.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in one aspect, the invention provides a bionic false color image fusion model based on rattlesnake visual imaging, which comprises the following components:
the image preprocessing module is used for extracting common information and specific information of the input infrared source image and the visible light source image and preprocessing the infrared source image and the visible light source image;
the rattlesnake dual-mode cell mechanism simulation module is used for performing rattlesnake dual-mode cell mechanism simulation on the preprocessed infrared source image and the visible light source image through a rattlesnake dual-mode cell mathematical model to obtain six rattlesnake dual-mode cell model output signals;
the enhanced image generation module is used for enhancing the output signals of the six types of rattlesnake dual-mode cell models to obtain enhanced images;
the fusion signal generation module is used for carrying out fusion processing on the enhanced images to obtain fusion signals; and
and the false color fusion image generation module is used for mapping the fusion signals to different color channels of the RGB color space to generate a false color fusion image.
Further, the image preprocessing module includes:
a common information acquisition unit for acquiring common information components of the infrared source image and the visible light source image, that is:
I_r(i,j) ∩ I_vis(i,j) = min{I_r(i,j), I_vis(i,j)}
wherein I_r(i,j) represents the infrared source image, I_vis(i,j) represents the visible light source image, (i,j) represents the corresponding pixel point in the two images, and I_r(i,j) ∩ I_vis(i,j) represents the common information component of the two;
a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image, that is:
I_r(i,j)* = I_r(i,j) - I_r(i,j) ∩ I_vis(i,j)
I_vis(i,j)* = I_vis(i,j) - I_r(i,j) ∩ I_vis(i,j)
wherein I_r(i,j)* represents the unique information component of the infrared source image I_r(i,j), and I_vis(i,j)* represents the unique information component of the visible light source image I_vis(i,j);
the preprocessing unit is used for subtracting the specific information component of the visible light source image from the infrared source image to obtain a preprocessing result of the infrared source image, and subtracting the specific information component of the infrared source image from the visible light source image to obtain a preprocessing result of the visible light source image.
Further, the rattlesnake dual-mode cell mathematical model comprises a visible light enhanced infrared cell mathematical model, a visible light suppressed infrared cell mathematical model, an infrared enhanced visible light cell mathematical model, an infrared suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.
Further, the expression of the mathematical model of the visible light enhanced infrared cell is as follows:
I_{+IR←V}(i,j) = I_IR(i,j)·exp[I_V(i,j)]
wherein I_{+IR←V}(i,j) represents the image obtained after the visible light enhances the infrared, I_IR(i,j) represents the infrared image, and I_V(i,j) represents the visible light image;
the expression of the mathematical model of the visible light inhibition infrared cell is as follows:
I_{-IR←V}(i,j) = I_IR(i,j)·log[I_V(i,j)+1]
wherein I_{-IR←V}(i,j) represents the image obtained after the visible light suppresses the infrared;
the expression of the infrared enhanced visible light cell mathematical model is as follows:
I_{+V←IR}(i,j) = I_V(i,j)·exp[I_IR(i,j)]
wherein I_{+V←IR}(i,j) represents the image obtained after the infrared enhances the visible light signal;
the expression of the infrared suppression visible light cell mathematical model is as follows:
I_{-V←IR}(i,j) = I_V(i,j)·log[I_IR(i,j)+1]
wherein I_{-V←IR}(i,j) represents the image obtained after the infrared suppresses the visible light signal;
the expression of the AND cell mathematical model is as follows:
when I_V(i,j) < I_R(i,j), the fusion result is:
I_AND(i,j) = mI_V(i,j) + nI_R(i,j)
when I_V(i,j) > I_R(i,j), the fusion result is:
I_AND(i,j) = nI_V(i,j) + mI_R(i,j)
wherein m > 0.5, n < 0.5, and I_AND(i,j) represents the image obtained after the weighted AND operation of the infrared image and the visible light image;
the expression of the OR cell mathematical model is:
when I_V(i,j) < I_R(i,j), the fusion result is:
I_OR(i,j) = nI_V(i,j) + mI_R(i,j)
when I_V(i,j) > I_R(i,j), the fusion result is:
I_OR(i,j) = mI_V(i,j) + nI_R(i,j)
wherein m > 0.5, n < 0.5, and I_OR(i,j) represents the image obtained after the weighted OR operation of the visible light image and the infrared image.
Further, the six rattlesnake dual-mode cell model output signals comprise an AND output signal, an OR output signal, an infrared enhanced visible light output signal, an infrared suppressed visible light output signal, a visible light enhanced infrared output signal and a visible light suppressed infrared output signal.
Further, the enhanced image generation module includes:
an enhanced image +OR_AND generating unit, for feeding the OR output signal and the AND output signal into the center excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate an enhanced image +OR_AND;
an enhanced image +VIS generating unit, for feeding the infrared enhanced visible light output signal and the infrared suppressed visible light output signal into the center excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate an enhanced image +VIS; and
an enhanced image +IR generating unit, for feeding the visible light enhanced infrared output signal and the visible light suppressed infrared output signal into the center suppression region and the surround excitation region of an OFF-center receptive field, respectively, to obtain an enhanced image +IR.
Further, the fusion signal generation module includes:
an image feed-in unit, for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the center and surround areas of two corresponding ON-center receptive fields, respectively, to obtain a fusion signal +VIS+OR_AND and a fusion signal +VIS+IR; and
a linear OR operation unit, for performing a linear OR operation on the enhanced image +VIS and the enhanced image +OR_AND to generate a fusion signal +OR_AND∪+VIS.
In another aspect, the invention further provides a bionic false color image fusion method based on rattlesnake visual imaging, which comprises the following steps:
acquiring an infrared source image and a visible light source image to be processed;
inputting the obtained infrared source image and visible light source image into the above bionic false color image fusion model based on rattlesnake visual imaging, and outputting a false color fusion image.
Compared with the prior art, the invention discloses a bionic false color image fusion model and method based on rattlesnake visual imaging. The model performs image preprocessing by extracting the common information and specific information of an infrared source image and a visible light source image, improving the quality of the fused image; an image fusion structure is designed by introducing a rattlesnake dual-mode cell mathematical model, so that the rattlesnake dual-mode cell fusion mechanism is effectively utilized and the rattlesnake visual perception mechanism is better simulated; the resulting fused image has improved color performance, clearer details and a more prominent target, and better accords with the visual characteristics of the human eye.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the structure of a bionic false color image fusion model based on rattlesnake visual imaging;
FIG. 2 is a schematic diagram of an implementation principle of an image preprocessing module;
FIG. 3 is a schematic diagram of the structure of an ON-center receptive field model and an OFF-center receptive field model;
fig. 4 is a schematic flow chart of the implementation of a bionic false color image fusion method based on rattlesnake visual imaging;
fig. 5 is a schematic diagram of the implementation principle of a bionic false color image fusion method based on rattlesnake visual imaging.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In one aspect, referring to fig. 1, the embodiment of the invention discloses a bionic false color image fusion model based on rattlesnake visual imaging, which comprises:
the image preprocessing module 1 is used for extracting common information and specific information of an input infrared source image and a visible light source image, and preprocessing the infrared source image and the visible light source image;
the rattlesnake dual-mode cell mechanism simulation module 2, which performs rattlesnake dual-mode cell mechanism simulation on the preprocessed infrared source image and visible light source image through a rattlesnake dual-mode cell mathematical model to obtain six rattlesnake dual-mode cell model output signals;
the enhanced image generation module 3 is used for enhancing the output signals of the six types of the rattlesnake dual-mode cell models to obtain enhanced images;
the fusion signal generation module 4, which fuses the enhanced images to obtain fusion signals; and
the false color fusion image generation module 5 is used for mapping the fusion signals to different color channels of the RGB color space to generate a false color fusion image.
Specifically, the image preprocessing module 1 includes:
the shared information acquisition unit is used for acquiring shared information components of the infrared source image and the visible light source image, namely:
I_r(i,j) ∩ I_vis(i,j) = min{I_r(i,j), I_vis(i,j)}
wherein I_r(i,j) represents the infrared source image, I_vis(i,j) represents the visible light source image, (i,j) represents the corresponding pixel point in the two images, and I_r(i,j) ∩ I_vis(i,j) represents the common information component of the two;
a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image, namely:
I_r(i,j)* = I_r(i,j) - I_r(i,j) ∩ I_vis(i,j)
I_vis(i,j)* = I_vis(i,j) - I_r(i,j) ∩ I_vis(i,j)
wherein I_r(i,j)* represents the unique information component of the infrared source image I_r(i,j), and I_vis(i,j)* represents the unique information component of the visible light source image I_vis(i,j);
a preprocessing unit, for subtracting the unique information component I_vis(i,j)* of the visible light source image from the infrared source image I_r(i,j) to obtain the preprocessing result of the infrared source image, namely I_r(i,j) - I_vis(i,j)*, and subtracting the unique information component I_r(i,j)* of the infrared source image from the visible light source image I_vis(i,j) to obtain the preprocessing result of the visible light source image, namely I_vis(i,j) - I_r(i,j)*; I_r(i,j) - I_vis(i,j)* and I_vis(i,j) - I_r(i,j)* are taken as the preprocessed infrared image and the preprocessed visible light image, respectively, and are denoted IR and VIS, namely:
IR = I_r(i,j) - I_vis(i,j)*, VIS = I_vis(i,j) - I_r(i,j)*
Fig. 2 shows how each unit in the image preprocessing module extracts the common and unique features of the infrared source image and the visible light source image and performs the preprocessing, finally obtaining the preprocessed infrared image IR and the preprocessed visible light image VIS.
In this embodiment, the preprocessing operation processes the source images input for fusion according to the requirements of the later processing, retaining or enhancing some image information and discarding image information that is unimportant for the subsequent processing, thereby enhancing the images and further improving the quality of the final fused image.
If the infrared image and the visible light image are to be fused into a single image, the image information of the two source images must be selected and weighted. Subtracting the unique information component of the visible light source image from the infrared image reduces the proportion of the image information shared by the two source images and highlights the image information that is unique to the infrared image and missing from the visible light source image; subtracting the unique information component of the infrared image from the visible light source image serves the same purpose. This facilitates the integration and presentation of the information of both source images in the subsequent image fusion.
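As an illustrative aid (not part of the patent text), the preprocessing described above can be sketched in Python roughly as follows; the function name, the assumption that the two source images are registered float arrays in [0, 1], and the clipping of negative values are added assumptions:

```python
import numpy as np

def preprocess(ir_src, vis_src):
    """Hypothetical sketch of the preprocessing unit: extract the common and
    unique information components of the two registered source images and
    return the preprocessed IR and VIS images."""
    common = np.minimum(ir_src, vis_src)      # I_r ∩ I_vis = min{I_r, I_vis}
    unique_ir = ir_src - common               # I_r*  : information only in the infrared image
    unique_vis = vis_src - common             # I_vis*: information only in the visible image

    ir_pre = np.clip(ir_src - unique_vis, 0.0, 1.0)    # IR  = I_r  - I_vis*  (clipping is an added assumption)
    vis_pre = np.clip(vis_src - unique_ir, 0.0, 1.0)   # VIS = I_vis - I_r*
    return ir_pre, vis_pre
```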
In this embodiment, the rattlesnake dual-mode cell mathematical model includes a visible light enhanced infrared cell mathematical model, a visible light suppressed infrared cell mathematical model, an infrared enhanced visible light cell mathematical model, an infrared suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.
In the visible light enhanced infrared cell, the infrared signal stimulus is dominant and therefore occupies the main position in the cell's mathematical model; the visible light signal stimulus alone produces no response and plays an auxiliary enhancing role, and the enhancing effect of the visible light image can be represented by an exponential function. The mathematical model of the visible light enhanced infrared cell is finally obtained as:
I_{+IR←V}(i,j) = I_IR(i,j)·exp[I_V(i,j)]
wherein I_{+IR←V}(i,j) represents the image obtained after the visible light enhances the infrared, I_IR(i,j) represents the infrared image, and I_V(i,j) represents the visible light image.
In the visible light suppressed infrared cell, the infrared signal stimulus is dominant and therefore occupies the main position in the cell's mathematical model; the visible light signal stimulus alone produces no response and plays an auxiliary suppressing role, and the suppressing effect of the visible light image can be represented by a logarithmic function. The mathematical model of the visible light suppressed infrared cell is finally obtained as:
I_{-IR←V}(i,j) = I_IR(i,j)·log[I_V(i,j)+1]
wherein I_{-IR←V}(i,j) represents the image obtained after the visible light suppresses the infrared.
In the infrared enhanced visible light cell, the visible light signal stimulus is dominant and therefore occupies the main position in the cell's mathematical model; the infrared signal stimulus alone produces no response and plays an auxiliary enhancing role, and the enhancing effect of the infrared image can be represented by an exponential function. The mathematical model of the infrared enhanced visible light cell is finally obtained as:
I_{+V←IR}(i,j) = I_V(i,j)·exp[I_IR(i,j)]
wherein I_{+V←IR}(i,j) represents the image obtained after the infrared enhances the visible light signal.
In the infrared suppressed visible light cell, the visible light signal stimulus is dominant and therefore occupies the main position in the cell's mathematical model; the infrared signal stimulus alone produces no response and plays an auxiliary suppressing role, and the suppressing effect of the infrared image can be represented by a logarithmic function. The mathematical model of the infrared suppressed visible light cell is finally obtained as:
I_{-V←IR}(i,j) = I_V(i,j)·log[I_IR(i,j)+1]
wherein I_{-V←IR}(i,j) represents the image obtained after the infrared suppresses the visible light signal;
in the AND cell, an obvious response is produced only when both signal stimuli are present simultaneously; there is no substantial difference between the infrared signal and the visible light signal, and only their respective stimulus intensities influence the response, so the combined effect of the visible light image and the infrared image can be simulated by a weighted AND operation. The mathematical model of the AND cell is finally obtained as follows:
when I_V(i,j) < I_R(i,j), the fusion result is:
I_AND(i,j) = mI_V(i,j) + nI_R(i,j)
when I_V(i,j) > I_R(i,j), the fusion result is:
I_AND(i,j) = nI_V(i,j) + mI_R(i,j)
wherein m > 0.5, n < 0.5, and I_AND(i,j) represents the image obtained after the weighted AND operation of the infrared image and the visible light image.
In the OR cell, either the infrared signal stimulus or the visible light signal stimulus acting alone produces a response, while the simultaneous presence of both stimuli provides a gain, reflecting a cooperative relationship between the two signals, so the combined effect of the visible light image and the infrared image is simulated by a weighted OR operation. The mathematical model of the OR cell is finally obtained as follows:
when I_V(i,j) < I_R(i,j), the fusion result is:
I_OR(i,j) = nI_V(i,j) + mI_R(i,j)
when I_V(i,j) > I_R(i,j), the fusion result is:
I_OR(i,j) = mI_V(i,j) + nI_R(i,j)
wherein m > 0.5, n < 0.5, and I_OR(i,j) represents the image obtained after the weighted OR operation of the visible light image and the infrared image.
The above six rattlesnake dual-mode cell mathematical models process the visible light image (VIS) and the infrared image (IR) to obtain the six rattlesnake dual-mode cell model output signals: the AND output signal V∩IR, the OR output signal V∪IR, the infrared enhanced visible light output signal +V←IR, the infrared suppressed visible light output signal -V←IR, the visible light enhanced infrared output signal +IR←V, and the visible light suppressed infrared output signal -IR←V.
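As a rough sketch of how these six output signals could be computed (the function and variable names are mine, and the weights 0.7/0.3 are merely example values satisfying m > 0.5 and n < 0.5):

```python
import numpy as np

def dual_mode_cell_outputs(vis, ir, m=0.7, n=0.3):
    """Compute the six dual-mode cell model output signals from the
    preprocessed VIS and IR images (float arrays in [0, 1])."""
    plus_v_ir = vis * np.exp(ir)            # +V<-IR : infrared enhanced visible light
    minus_v_ir = vis * np.log(ir + 1.0)     # -V<-IR : infrared suppressed visible light
    plus_ir_v = ir * np.exp(vis)            # +IR<-V : visible light enhanced infrared
    minus_ir_v = ir * np.log(vis + 1.0)     # -IR<-V : visible light suppressed infrared

    # AND cell: the weaker signal gets the larger weight m; OR cell: the stronger one does.
    and_out = np.where(vis < ir, m * vis + n * ir, n * vis + m * ir)   # V ∩ IR
    or_out = np.where(vis < ir, n * vis + m * ir, m * vis + n * ir)    # V ∪ IR
    return and_out, or_out, plus_v_ir, minus_v_ir, plus_ir_v, minus_ir_v
```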
Specifically, the enhanced image generation module 3 includes:
an enhanced image +OR_AND generating unit, for feeding the OR output signal V∪IR into the center excitation region of an ON-center receptive field and feeding the AND output signal V∩IR into the surround suppression region of the ON-center receptive field, to generate the enhanced image +OR_AND;
an enhanced image +VIS generating unit, for feeding the infrared enhanced visible light output signal +V←IR into the center excitation region of an ON-center receptive field and feeding the infrared suppressed visible light output signal -V←IR into the surround suppression region of the ON-center receptive field, to generate the enhanced image +VIS; and
an enhanced image +IR generating unit, for feeding the visible light enhanced infrared output signal +IR←V into the center suppression region of an OFF-center receptive field and feeding the visible light suppressed infrared signal -IR←V into the surround excitation region of the OFF-center receptive field, to obtain the enhanced image +IR.
In this embodiment, the enhanced image generating module 3 performs enhancement processing on the output signals of the six rattlesnake dual-mode cell models by using the visual receptive field and the mathematical model thereof, and obtains an enhanced image.
The visual receptive field and its mathematical model are described below:
Physiological findings indicate that the basic mode of action of the receptive field of retinal ganglion cells is concentric spatial antagonism, which can be divided into two classes: one is the ON-center/OFF-surround system (i.e., the ON-center excitation/OFF-surround suppression receptive field), commonly referred to simply as the ON-center receptive field, whose structure is shown in FIG. 3a; the other is the OFF-center/ON-surround system (i.e., the OFF-center suppression/ON-surround excitation receptive field), commonly referred to simply as the OFF-center receptive field, whose structure is shown in FIG. 3b. Through mathematical modeling, the ganglion cell receptive field can be simulated with a difference-of-Gaussians model, the cell activity in its different regions can be described by Gaussian distributions, and its acuity decreases gradually from the center to the periphery.
A dynamic description of the center-surround antagonistic receptive field is given by the passive membrane equation. From this description, the steady-state outputs of the visual receptive field are as follows:
steady-state output of the ON antagonistic system:
steady-state output of the OFF antagonistic system:
wherein C_k(i,j) and S_k(i,j) represent the convolutions of the center input image and the surround input image with the Gaussian function, respectively, A is the decay constant and E is the polarization constant.
C_k(i,j) is the receptive field center response, expressed as:
C_k(i,j) = I_k(i,j) * W_c
S_k(i,j) is the receptive field surround response, expressed as:
S_k(i,j) = I_k(i,j) * W_s
wherein I_k(i,j) is the input image, * is the convolution operator, W_c and W_s are the Gaussian distribution functions of the center region and the surround region respectively, the Gaussian template sizes are m×n and p×q respectively, and σ_c and σ_s are the spatial constants of the center and surround regions respectively, the subscripts c and s distinguishing the center region (Center) from the surround region (Surround).
Specifically, the fusion signal generation module 4 includes:
an image feed-in unit, for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the center and surround areas of two corresponding ON-center receptive fields, respectively, to obtain two fusion signals, +VIS+OR_AND and +VIS+IR; and
a linear OR operation unit, for performing a linear OR operation on the enhanced image +VIS and the enhanced image +OR_AND to generate a fusion signal +OR_AND∪+VIS.
Finally, the false color fusion image generation module 5 maps the fusion signals +VIS+OR_AND, +OR_AND∪+VIS and +VIS+IR obtained in the fusion signal generation module to the R, G and B channels of the RGB color space, respectively, and takes the image obtained by the above processing as the finally generated false color fusion image.
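Putting the stages together, a hypothetical end-to-end sketch (reusing the helper functions sketched above) might look like the following; the reading that +VIS drives the centers of the two ON-center fields, the use of the element-wise maximum as the 'linear OR' operation, and the per-channel normalization are assumptions:

```python
import numpy as np

def false_color_fusion(ir_src, vis_src):
    """End-to-end sketch: preprocessing, dual-mode cell outputs, enhanced
    images, fusion signals, and RGB mapping.  Reuses preprocess(),
    dual_mode_cell_outputs(), on_center_enhance() and off_center_enhance()."""
    ir, vis = preprocess(ir_src, vis_src)
    and_out, or_out, p_v_ir, m_v_ir, p_ir_v, m_ir_v = dual_mode_cell_outputs(vis, ir)

    # Enhanced images: OR/AND into an ON-center field, +V<-IR / -V<-IR into an
    # ON-center field, +IR<-V / -IR<-V into an OFF-center field.
    plus_or_and = on_center_enhance(or_out, and_out)
    plus_vis = on_center_enhance(p_v_ir, m_v_ir)
    plus_ir = off_center_enhance(p_ir_v, m_ir_v)

    # Fusion signals: +VIS into the centers of two ON-center fields with
    # +OR_AND and +IR as the respective surrounds, and the "linear OR" of
    # +OR_AND and +VIS approximated here by the element-wise maximum.
    fused_vis_or_and = on_center_enhance(plus_vis, plus_or_and)   # +VIS+OR_AND
    fused_vis_ir = on_center_enhance(plus_vis, plus_ir)           # +VIS+IR
    fused_or_and_vis = np.maximum(plus_or_and, plus_vis)          # +OR_AND ∪ +VIS

    def norm(x):  # stretch each channel to [0, 1] for display (added assumption)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    # R <- +VIS+OR_AND, G <- +OR_AND ∪ +VIS, B <- +VIS+IR
    return np.dstack([norm(fused_vis_or_and), norm(fused_or_and_vis), norm(fused_vis_ir)])
```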
In another aspect, referring to fig. 4 and 5, the embodiment of the invention further discloses a bionic false color image fusion method based on rattlesnake visual imaging, which comprises the following steps:
s1: acquiring an infrared source image and a visible light source image to be processed;
s2: inputting the obtained infrared source image and visible light source image into the above bionic false color image fusion model based on rattlesnake visual imaging, and outputting a false color fusion image.
In summary, from a bionics perspective, the embodiment of the invention designs a false color image fusion model based on the imaging system of the rattlesnake visual system, which is used to acquire a fused infrared and visible light image; image preprocessing is performed by extracting the common information and specific information of the infrared and visible light images, improving the quality of the fused image. The image fusion structure is designed by introducing the rattlesnake dual-mode cell mathematical model, so that the rattlesnake dual-mode cell fusion mechanism is effectively utilized and the rattlesnake visual perception mechanism is better simulated. Meanwhile, the bionic false color image fusion method better simulates the rattlesnake's fusion mechanism for infrared and visible light images; the obtained fused image has improved color performance, represents targets such as people more clearly, performs better in certain details, and better mitigates the influence of illumination, smoke occlusion and weather conditions on the imaging effect; it accords better with the visual characteristics of the human eye and facilitates later observation, understanding and further study.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to each other. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A bionic false color image fusion model based on rattlesnake visual imaging is characterized by comprising the following components:
the image preprocessing module is used for extracting common information and specific information of the input infrared source image and the visible light source image and preprocessing the infrared source image and the visible light source image;
the image preprocessing module comprises:
the preprocessing unit is used for subtracting the specific information component of the visible light source image from the infrared source image to obtain a preprocessing result of the infrared source image, and subtracting the specific information component of the infrared source image from the visible light source image to obtain a preprocessing result of the visible light source image;
a common information acquisition unit for acquiring common information components of the infrared source image and the visible light source image;
a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image;
the calculation formula of the common information component of the infrared source image and the visible light source image is as follows:
I_r(i,j) ∩ I_vis(i,j) = min{I_r(i,j), I_vis(i,j)}
wherein I_r(i,j) represents the infrared source image, I_vis(i,j) represents the visible light source image, (i,j) represents the corresponding pixel point in the two images, and I_r(i,j) ∩ I_vis(i,j) represents the common information component of the two;
the calculation formulas of the specific information components of the infrared source image and the visible light source image are respectively as follows:
I_r(i,j)* = I_r(i,j) - I_r(i,j) ∩ I_vis(i,j)
I_vis(i,j)* = I_vis(i,j) - I_r(i,j) ∩ I_vis(i,j)
wherein I_r(i,j)* represents the unique information component of the infrared source image I_r(i,j), and I_vis(i,j)* represents the unique information component of the visible light source image I_vis(i,j);
the rattlesnake dual-mode cell mechanism simulation module is used for performing rattlesnake dual-mode cell mechanism simulation on the preprocessed infrared source image and the visible light source image through a rattlesnake dual-mode cell mathematical model to obtain six rattlesnake dual-mode cell model output signals;
the rattlesnake dual-mode cell mathematical model comprises a visible light enhanced infrared cell mathematical model, a visible light suppressed infrared cell mathematical model, an infrared enhanced visible light cell mathematical model, an infrared suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model;
the six rattlesnake dual-mode cell model output signals comprise an AND output signal, an OR output signal, an infrared enhanced visible light output signal, an infrared suppressed visible light output signal, a visible light enhanced infrared output signal and a visible light suppressed infrared output signal;
the enhanced image generation module is used for enhancing the six rattlesnake dual-mode cell model output signals to obtain enhanced images;
the enhanced image generation module is used for feeding the OR output signal and the AND output signal into the center excitation region and the surround suppression region of an ON-center receptive field, respectively; feeding the infrared enhanced visible light output signal and the infrared suppressed visible light output signal into the center excitation region and the surround suppression region of an ON-center receptive field, respectively; and feeding the visible light enhanced infrared output signal and the visible light suppressed infrared signal into the center suppression region and the surround excitation region of an OFF-center receptive field, respectively;
the fusion signal generation module is used for carrying out fusion processing on the enhanced images to obtain fusion signals; and
and the false color fusion image generation module is used for mapping the fusion signals to different color channels of the RGB color space to generate a false color fusion image.
2. The bionic false color image fusion model based on the rattlesnake visual imaging of claim 1, wherein the expression of the visible light enhanced infrared cell mathematical model is:
I_{+IR←V}(i,j) = I_IR(i,j)·exp[I_V(i,j)]
wherein I_{+IR←V}(i,j) represents the image obtained after the visible light enhances the infrared, I_IR(i,j) represents the infrared image, and I_V(i,j) represents the visible light image;
the expression of the mathematical model of the visible light inhibition infrared cell is as follows:
I_{-IR←V}(i,j) = I_IR(i,j)·log[I_V(i,j)+1]
wherein I_{-IR←V}(i,j) represents the image obtained after the visible light suppresses the infrared;
the expression of the infrared enhanced visible light cell mathematical model is as follows:
I_{+V←IR}(i,j) = I_V(i,j)·exp[I_IR(i,j)]
wherein I_{+V←IR}(i,j) represents the image obtained after the infrared enhances the visible light signal;
the expression of the infrared suppression visible light cell mathematical model is as follows:
I_{-V←IR}(i,j) = I_V(i,j)·log[I_IR(i,j)+1]
wherein I_{-V←IR}(i,j) represents the image obtained after the infrared suppresses the visible light signal;
the expression of the AND cell mathematical model is as follows:
when I_V(i,j) < I_R(i,j), the fusion result is:
I_AND(i,j) = mI_V(i,j) + nI_R(i,j)
when I_V(i,j) > I_R(i,j), the fusion result is:
I_AND(i,j) = nI_V(i,j) + mI_R(i,j)
wherein m > 0.5, n < 0.5, and I_AND(i,j) represents the image obtained after the weighted AND operation of the infrared image and the visible light image;
the expression of the OR cell mathematical model is:
when I_V(i,j) < I_R(i,j), the fusion result is:
I_OR(i,j) = nI_V(i,j) + mI_R(i,j)
when I_V(i,j) > I_R(i,j), the fusion result is:
I_OR(i,j) = mI_V(i,j) + nI_R(i,j)
wherein m > 0.5, n < 0.5, and I_OR(i,j) represents the image obtained after the weighted OR operation of the visible light image and the infrared image.
3. The bionic false color image fusion model based on rattlesnake visual imaging of claim 1, wherein the enhanced image generation module comprises:
an enhanced image +OR_AND generating unit, for feeding the OR output signal and the AND output signal into the center excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate an enhanced image +OR_AND;
an enhanced image +VIS generating unit, for feeding the infrared enhanced visible light output signal and the infrared suppressed visible light output signal into the center excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate an enhanced image +VIS; and
an enhanced image +IR generating unit, for feeding the visible light enhanced infrared output signal and the visible light suppressed infrared output signal into the center suppression region and the surround excitation region of an OFF-center receptive field, respectively, to obtain an enhanced image +IR.
4. The bionic false color image fusion model based on rattlesnake visual imaging of claim 3, wherein the fusion signal generation module comprises:
an image feed-in unit, for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the center and surround areas of two corresponding ON-center receptive fields, respectively, to obtain a fusion signal +VIS+OR_AND and a fusion signal +VIS+IR; and
a linear OR operation unit, for performing a linear OR operation on the enhanced image +VIS and the enhanced image +OR_AND to generate a fusion signal +OR_AND∪+VIS.
5. A bionic false color image fusion method based on rattlesnake visual imaging, characterized by comprising the following steps:
acquiring an infrared source image and a visible light source image to be processed;
inputting the obtained infrared source image and visible light source image into the bionic false color image fusion model based on rattlesnake visual imaging according to any one of claims 1-4, and outputting a false color fusion image.
CN202110667804.7A 2021-06-16 2021-06-16 Bionic false color image fusion model and method based on rattlesnake visual imaging Active CN113409232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110667804.7A CN113409232B (en) 2021-06-16 Bionic false color image fusion model and method based on rattlesnake visual imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110667804.7A CN113409232B (en) 2021-06-16 Bionic false color image fusion model and method based on rattlesnake visual imaging

Publications (2)

Publication Number Publication Date
CN113409232A CN113409232A (en) 2021-09-17
CN113409232B true CN113409232B (en) 2023-11-10

Family

ID=77684422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110667804.7A Active CN113409232B (en) 2021-06-16 2021-06-16 Bionic false color image fusion model and method based on rattlesnake visual imaging

Country Status (1)

Country Link
CN (1) CN113409232B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011004381A1 (en) * 2009-07-08 2011-01-13 Yogesh Chunilal Rathod An apparatus, system, and method for automated production of rule based near live sports event in the form of a video film for entertainment
CN102924596A (en) * 2005-04-29 2013-02-13 詹森生物科技公司 Anti-il-6 antibodies, compositions, methods and uses
CN108090888A (en) * 2018-01-04 2018-05-29 北京环境特性研究所 The infrared image of view-based access control model attention model and the fusion detection method of visible images
CN108133470A (en) * 2017-12-11 2018-06-08 深圳先进技术研究院 Infrared image and low-light coloured image emerging system and method
CN108711146A (en) * 2018-04-19 2018-10-26 中国矿业大学 A kind of coal petrography identification device and method based on visible light and infrared image fusion
CN110120028A (en) * 2018-11-13 2019-08-13 中国科学院深圳先进技术研究院 A kind of bionical rattle snake is infrared and twilight image Color Fusion and device
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8176054B2 (en) * 2007-07-12 2012-05-08 Ricoh Co. Ltd Retrieving electronic documents by converting them to synthetic text
CN104835129B (en) * 2015-04-07 2017-10-31 杭州电子科技大学 A kind of two-hand infrared image fusion method that use local window vision attention is extracted
CN106952246A (en) * 2017-03-14 2017-07-14 北京理工大学 The visible ray infrared image enhancement Color Fusion of view-based access control model attention characteristic
CN110211083A (en) * 2019-06-10 2019-09-06 北京宏大天成防务装备科技有限公司 A kind of image processing method and device
CN111724333B (en) * 2020-06-09 2023-05-30 四川大学 Infrared image and visible light image fusion method based on early visual information processing

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102924596A (en) * 2005-04-29 2013-02-13 詹森生物科技公司 Anti-il-6 antibodies, compositions, methods and uses
WO2011004381A1 (en) * 2009-07-08 2011-01-13 Yogesh Chunilal Rathod An apparatus, system, and method for automated production of rule based near live sports event in the form of a video film for entertainment
CN108133470A (en) * 2017-12-11 2018-06-08 深圳先进技术研究院 Infrared image and low-light coloured image emerging system and method
CN108090888A (en) * 2018-01-04 2018-05-29 北京环境特性研究所 The infrared image of view-based access control model attention model and the fusion detection method of visible images
CN108711146A (en) * 2018-04-19 2018-10-26 中国矿业大学 A kind of coal petrography identification device and method based on visible light and infrared image fusion
CN110120028A (en) * 2018-11-13 2019-08-13 中国科学院深圳先进技术研究院 A kind of bionical rattle snake is infrared and twilight image Color Fusion and device
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Yong et al. "Pseudo color image fusion based on rattlesnake's visual receptive field model". IEEE International Conference on Artificial Intelligence and Information Systems. 2020, pp. 596-600. *

Also Published As

Publication number Publication date
CN113409232A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
Smithson Sensory, computational and cognitive components of human colour constancy
Thomas Are theories of imagery theories of imagination? An active perception approach to conscious mental content
Gross Visual computing: the integration of computer graphics, visual perception and imaging
Jacques et al. The initial representation of individual faces in the right occipito-temporal cortex is holistic: Electrophysiological evidence from the composite face illusion
Susilo et al. The composite effect for inverted faces is reliable at large sample sizes and requires the basic face configuration
US20200226351A1 (en) Method and device for determining parameter for gaze tracking device
Susilo et al. Solving the upside-down puzzle: Why do upright and inverted face aftereffects look alike?
CN109859139B (en) Blood vessel enhancement method for color fundus image
McNamara et al. Perception in graphics, visualization, virtual environments and animation
CN108133470A (en) Infrared image and low-light coloured image emerging system and method
Fazlyyyakhmatov et al. The EEG activity during binocular depth perception of 2D images
CN102222231B (en) Visual attention information computing device based on guidance of dorsal pathway and processing method thereof
Sama et al. Independence of viewpoint and identity in face ensemble processing
CN113409232B (en) Bionic false color image fusion model and method based on croaker visual imaging
Clark Three varieties of visual field
Cheng et al. Perspectival shapes are viewpoint-dependent relational properties.
Petrova et al. Cultural influences on oculomotor inhibition of remote distractors: Evidence from saccade trajectories
CN101241593A (en) Picture layer image processing unit and its method
CN110473176A (en) Image processing method and device, method for processing fundus images, electronic equipment
Khan et al. Visual attention: Effects of blur
CN112991250B (en) Infrared and visible light image fusion method based on sonodon acutus visual imaging
DE112018003820T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS AND PROGRAM
Wang et al. Pseudo color fusion of infrared and visible images based on the rattlesnake vision imaging system
Cha et al. Novel procedure for generating continuous flash suppression: Seurat meets Mondrian

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant