CN117934691A - Anti-camouflage generation method, vehicle and device

Anti-camouflage generation method, vehicle and device

Info

Publication number
CN117934691A
Authority
CN
China
Prior art keywords
target
model
mapping
camouflage
camouflaged
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311703845.2A
Other languages
Chinese (zh)
Inventor
胡晓林
刘育秋
晏焕钱
苏航
李建民
朱军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qiyuan Laboratory
Original Assignee
Qiyuan Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiyuan Laboratory filed Critical Qiyuan Laboratory
Priority to CN202311703845.2A
Publication of CN117934691A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55 Detecting local intrusion or implementing counter-measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The application provides an anti-camouflage generation method, a vehicle and a device, belonging to the field of adversarial attacks in computer vision, and solves the problem that adversarial texture patterns in the prior art cannot support equipment with curved surfaces in mounting an attack from any viewing angle. The technical scheme comprises the following steps: embedding a target model to be camouflaged into an autonomous-driving simulation engine to generate a target data set, wherein the target data set comprises a test set and a training set; using the mesh-form 3D model of the target to be camouflaged to produce the U-V mapping corresponding to that 3D model, and establishing a U-V map of the same size as the coordinate system according to the U-V mapping; and determining, according to the U-V mapping and the target data set, the mask data of the corresponding camouflage area to be painted, wherein the mask data are used to interfere with the detector. With this scheme the camouflage conforms better to the surface of the object to be camouflaged, can be used directly even when that surface is curved, and the problem of enabling equipment with curved surfaces to mount an adversarial attack at any angle is solved.

Description

Anti-camouflage generation method, vehicle and device
Technical Field
The invention relates to the field of adversarial attacks in computer vision, and in particular to an anti-camouflage generation method, a vehicle and a device.
Background
With the rapid development of computer vision, perception systems built on deep neural networks are widely applied in autonomous driving, security monitoring and other fields, so the security of the object detection algorithms they rely on has become extremely important.
However, deep neural networks have been shown to be very sensitive to noise, and their predictions are easily swayed by specific patterns.
Most existing methods for generating adversarial texture patterns produce flat adversarial stickers that cannot be fitted to the surface of an object, so the camouflage cannot be optimized for the curved surfaces of real equipment; although a few works perform adversarial training on 3D models, they all train each mesh face separately, with the drawback that the result cannot be used directly in practical applications.
Therefore, developing an anti-camouflage technique that is both highly adversarial and highly practical against object detection network models is a technical problem to be solved urgently by those skilled in the art.
Disclosure of Invention
To solve the problem that adversarial texture patterns in the prior art cannot support equipment with curved surfaces in mounting an attack from any angle, the invention provides an anti-camouflage generation method, a vehicle and a device.
In a first aspect, according to some embodiments, the present invention provides an anti-camouflage generation method comprising:
Embedding a target model to be camouflaged into an autonomous-driving simulation engine to generate a target data set, wherein the target data set comprises a test set and a training set, the training set being used to train the 3D camouflage texture from the adversarial-camouflage base map and the test set being used to evaluate the attack success rate of the camouflage;
using the mesh-form 3D model of the target to be camouflaged to produce the U-V mapping corresponding to that 3D model, and establishing a U-V map of the same size as the coordinate system according to the U-V mapping, wherein the U-V mapping refers to the coordinate mapping relation of the 3D-to-2D mapping process;
and determining, according to the U-V mapping and the target data set, the mask data corresponding to the camouflage area to be painted, wherein the mask data are able to evade detection by the detector.
Optionally, in some embodiments, after determining the mask data corresponding to the camouflaged area to be painted, further includes:
And printing the mask data, cutting out the texture of the region to be painted, and attaching the texture of the region to be painted to the corresponding position of the target to be camouflaged.
Optionally, in some embodiments, the embedding the target model to be camouflaged into the autopilot simulation environment engine to generate the target data set specifically includes:
generating a target data set of multiple angles, multiple distances and multiple scenes by using an automatic driving simulation engine;
Wherein the angle and distance of the sensor relative to the target is recorded while the target data set is generated.
Specifically, the autopilot simulation engine may be CARLA.
Optionally, in some embodiments, the generating, by using the autopilot simulation engine CARLA, a multi-angle, multi-distance, multi-scene target data set specifically includes:
Embedding the target model V to be camouflaged into different autonomous-driving simulation environments R_p, and recording the images received by visible-light sensors at different viewing angles and distances in different scenes s_i;
obtaining the corresponding rendering results for the different scenes s_i and sensor positions, thereby generating the target data set;
where i is a positive integer greater than or equal to 1 indexing the sensors, and p indexes the autonomous-driving simulation environments, typically p = 1, i.e. a single simulation environment.
Optionally, in some embodiments, obtaining the corresponding rendering results for the different scenes s_i and sensor positions specifically includes:
the relative coordinates of the sensor are (α, θ, d), with θ ∈ (0, π), and the position of sensor i is (α_i, θ_i, d_i);
a rendering result is obtained from the sensor data according to the following formula:
I_{p,i} = R_p(V, α_i, θ_i, d_i, s_i)
where (α_i, θ_i, d_i) are the coordinates of sensor i in the spherical coordinate system of the model, α_i being the azimuth angle, θ_i the polar angle and d_i the distance.
Optionally, in some embodiments, the creating, using the mesh-form target 3D model, a U-V mapping corresponding to the target texture specifically includes:
Designing U-V coordinate mapping of a 3D model of the target;
designing the coordinate mapping relation between the points of the triangular faces in the 2D U-V unwrap and the data points of the target 3D model, and ensuring that the U-V tiles of the area to be painted do not overlap in the 2D mapping;
And according to the coordinate mapping relation, manufacturing a U-V mapping corresponding to the target texture.
Optionally, in some embodiments, the creating, according to the coordinate mapping relationship, a U-V mapping corresponding to the target texture specifically includes:
Designing the arrangement of the points in the U-V diagram of the target and, without splitting any triangular face within a U-V body of the target, determining the current mapping relation as the U-V mapping corresponding to the target texture when the area of each triangular face in the 2D-mapped U-V diagram is approximately consistent with the area of the corresponding triangular face of the target 3D model.
Optionally, in some embodiments, a method for approximating an area of a triangular surface in the U-V diagram mapped by the 2D model to an area of a triangular surface of the 3D model of the target includes:
performing the U-V arrangement in 3D software with the objective of reducing the sum of squared differences between 1 and the ratio of the area S_k of each triangular face in the 2D U-V mapping to the corresponding face area S'_k of the target 3D model mesh, i.e. designing according to the following formula:
Σ_{k=1}^{K} (S_k / S'_k - 1)^2
where K is the number of triangular faces, the value of K depending on the original design of the 3D model.
Optionally, in some embodiments, after the creating the U-V mapping corresponding to the 3D model of the object to be camouflaged, before building a U-V map with the same size as the coordinate system according to the U-V mapping, the method further includes:
and outputting, in U-V form, the part of the U-V mapping that is to be camouflaged.
Optionally, in some embodiments, the determining mask data corresponding to the camouflage area to be painted according to the U-V map and the target data set specifically includes:
Setting the U-V region corresponding to the mesh faces of the target to be camouflaged to a color different from that of the target body;
and rendering the model into 2D images with a neural network renderer at the same angles and distances, and binarizing the rendering results to obtain the mask data of the area to be camouflaged corresponding to the generated data set.
Optionally, in some embodiments, the setting the U-V region corresponding to the grid surface of the target to be camouflaged to be different from the target body specifically includes:
And selecting the color c_t of the part to be painted in the camouflage U-V body C as the inverse color (1 - c_j per channel), or pure white, of the target body color {c_j, j ∈ C},
where r, g, b denote the values of the red, green and blue color channels of the image, c_j is the j-th pixel of the target camouflage U-V body C, c_t represents the region other than the target camouflage U-V body C, j indexes each pixel, and t denotes the t-th pixel.
Optionally, in some embodiments, rendering the model into 2D images with a neural network renderer at the same angles and distances specifically includes:
for each picture I_{p,i} in the data set, rendering a result image I_{m,i} under a dim-light setting with the neural network renderer R_n at its angle (α_i, θ_i) and distance d_i, using the following formula:
I_{m,i} = R_n(V, α_i, θ_i, d_i).
Optionally, in some embodiments, binarizing the rendering result to obtain mask data of a region to be camouflaged corresponding to the generated dataset, and specifically includes:
calculating the maximum values c_t^max and c_n^max of the camouflage area t and the non-camouflage area n in the rendering result, and taking their intermediate value c_mid to binarize the rendering result, the value c_mid being obtained by the following formula:
c_mid = (c_t^max + c_n^max) / 2
forming a mask M_i corresponding to each picture I_{p,i} in the data set, wherein the value M_{i,j} of each pixel of the mask M_i is determined by the value I_{m,i,j} of the corresponding pixel of the rendering result I_{m,i} of the original map, the mask pixel value being 1 at camouflage positions and 0 otherwise;
in determining M_{i,j}, the mask is computed by the following formula:
M_{i,j} = 1 if I_{m,i,j} ≥ c_mid, otherwise M_{i,j} = 0.
optionally, in some embodiments, after determining mask data corresponding to the camouflage area to be painted according to the U-V map and the target data set, the method further includes:
The mask data is optimized.
Optionally, in some embodiments, the optimizing the mask data specifically includes:
And rendering, with a differentiable renderer, the image mapped by the U-V map of the camouflage area to be painted.
Optionally, in some embodiments, after rendering the model mapped by the U-V map of the camouflage area to be painted, the method further includes:
And multiplying the rendered image by the mask, fusing it with the simulation-engine data set, optimizing the initial texture with a gradient-descent algorithm, back-propagating the gradient of the objective loss function, and optimizing the to-be-camouflaged area of the initial U-V map.
Optionally, in some embodiments, the method for optimizing the initial texture by using the gradient descent algorithm specifically includes:
Multiplying the rendered image by the mask and fusing it with the simulation-engine data set to serve as the input of a two-stage object detection model;
the gradient-descent optimization of the initial texture computes the confidence scores of the target class from the first-stage RPN of the detection model and the confidence scores of the detection boxes in the second-stage output, computes the sum of the image gradients of the U-V map, and takes their weighted sum as the objective loss function.
Optionally, in some embodiments, the rendering, with a differentiable renderer, of the image mapped by the U-V map of the camouflaged area to be painted specifically includes:
for each picture I_{p,i} in the data set, mapping the U-V map T onto the 3D model with the differentiable renderer and rendering to obtain a result map I_{d,i,k}, where k is the iteration number, initially k = 0;
according to the setting (α_i, θ_i, d_i, s_i) used in the rendering, fusing, by means of the mask, the image I_{p,i} from the simulation rendering result with the result image I_{d,i,k} rendered by the differentiable renderer, to obtain the rendering result image I_{r,i,k} when the texture map is T_{i,k}.
Optionally, in some embodiments, the gradient-descent optimization of the initial texture computes the confidence score of the target class from the first-stage RPN of the Faster R-CNN detection model and the confidence score of the detection boxes in the second-stage output, which specifically includes:
inputting the rendering result image I_{r,i,k} into the two-stage detection model F, the output of which consists of two parts: the first part comprises the class confidences and the regression scores of the recommended (proposal) boxes, and the second part comprises the detection-box confidences of the second stage;
taking, for the attacked class v, the class confidences s^{rpn}_{i,v} of all recommended boxes computed in the first stage of the model and all detection-box confidences s^{det}_{i,v} of the second stage, and setting an objective function L_det for reducing the detection accuracy of the model as follows:
L_det = (1/n) Σ_{i=1}^{n} (s^{rpn}_{i,v} + s^{det}_{i,v})
where n is the total number of recommended boxes computed in the first stage by the RPN network.
Optionally, in some embodiments, the computing of the sum of the image gradients of the U-V map and the taking, as the objective loss function, of the weighted sum of the detection-box confidence score and the gradient sum specifically includes:
calculating, for each pixel t_{x,y} of the U-V map T, the corresponding variation value according to the following formula:
Δ_{x,y} = (t_{x,y} - t_{x+1,y})^2 + (t_{x,y} - t_{x,y+1})^2
where x is the row coordinate and y the column coordinate of the U-V map, and the smoothing loss L_smooth of the U-V map is the sum of the variation values of all pixels;
combining the smoothing loss L_smooth with the loss function L_det that reduces the model's detection accuracy, weighted by suitable parameters μ and λ, to obtain the loss function L that is back-propagated, according to the following formula:
L = μ · L_det + λ · L_smooth
using the loss function L to perform gradient back-propagation, optimizing the camouflage texture area on the 3D model, and mapping it to the initial U-V map.
In a second aspect, an embodiment of the present invention further provides an anti-camouflage generating device, including:
The training module is used for embedding a target model to be camouflaged into the autonomous-driving simulation engine to generate a target data set, wherein the target data set comprises a test set and a training set, the training set being used to train the 3D camouflage texture from the adversarial-camouflage base map and the test set being used to evaluate the attack success rate of the camouflage;
The generating module is used for producing, from the mesh-form 3D model of the target to be camouflaged, the U-V mapping corresponding to that 3D model, and for establishing a U-V map of the same size as the coordinate system according to the U-V mapping, wherein the U-V mapping refers to the coordinate mapping relation of the 3D-to-2D mapping process; and for determining, according to the U-V mapping and the target data set, the mask data of the corresponding camouflage area to be painted.
In a third aspect, an embodiment of the present invention further provides a vehicle, part of whose body carries a film containing the mask data produced by the method of any one of the first aspects.
In a fourth aspect, an embodiment of the present invention further provides a device capable of adversarial camouflage, part of whose outer surface carries a film produced by the method of any one of the first aspects, the film containing mask data able to resist detection by a detector.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method according to any one of the first aspects when the processor executes the program.
In a sixth aspect, a computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of the first aspects above.
The technical scheme of the application has at least the following beneficial technical effects. The scheme of the embodiments can be applied to adversarial attack under any condition: if the interference of the object detection algorithm Faster R-CNN is to be countered, the corresponding adversarial data are determined for Faster R-CNN; if other detection algorithms are to be countered, the corresponding adversarial data are determined for those algorithms. The anti-camouflage generation method therefore has strong universality, and the algorithm to be attacked can be chosen according to actual needs. In addition, the scheme can generate full-view adversarial data for any model, generate the adversarial texture, and attach it to the target object. By combining a deep neural network framework with a U-V mapping design, the U-V mapping corresponding to the 3D model of the target to be camouflaged is produced and a U-V map of the same size as the coordinate system is built from it, so that the textures of adjacent faces of the mesh model are effectively connected in 2D. Owing to the size mapping relation established by the U-V mapping, the texture can be tiled reasonably even on convex or otherwise curved surfaces, which makes the scheme suitable for physical realization: the coordinate mapping relation of the 3D-to-2D mapping process effectively connects the plane and the three-dimensional curved surface, so that the 3D effect is achieved even though the mask data are obtained on a planar map.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings required for the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of an anti-camouflage generation method according to an embodiment of the present invention.
FIG. 2 is a schematic overall flow chart of an anti-camouflage generation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a U-V mapping design according to an embodiment of the present invention;
FIG. 4A is a schematic illustration of a pre-optimized 3D anti-camouflage tiling effect according to an embodiment of the present invention;
FIG. 4B is a schematic illustration of an optimized 3D anti-camouflage tiling effect according to an embodiment of the present invention;
FIG. 5 shows the object-detection evasion effect of a scaled vehicle model in the physical world, compared with other currently mainstream multi-view attack methods, where the first row shows the detection result for the non-camouflaged vehicle, the second, third and fourth rows show the evasion effect of currently mainstream multi-view attack methods, and the last row shows the adversarial evasion effect of the present method;
fig. 6 is a schematic block diagram of an anti-camouflage generation device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
If descriptions of "first", "second", etc. appear in embodiments of the present application, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments can be combined with each other, provided the combination can be realized by those skilled in the art.
In the embodiments of the present application, every reference to A and/or B covers three cases: A alone, B alone, and both A and B.
It should be noted that the sequence numbers mentioned in the present application serve to distinguish the steps and prevent confusion; they do not mean that, in actual implementation, the steps must be executed strictly in that order.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
At present, camouflage-based attacks in the prior art are only effective when the surface of the object to be attacked is flat. One reason is that current camouflage techniques cannot be optimized for the curved surfaces of equipment, as 2D texture data cannot be mapped onto the 3D model for arbitrary viewing angles. Although a few works perform adversarial training on 3D models, they train each mesh face separately, so the pixel distribution is uneven, there is no connection between faces, the camouflage is very easy to recognize and find, and it cannot be used directly in practical applications.
To solve the above problems, an embodiment of the present invention provides an anti-camouflage generation method, including:
Embedding a target model to be camouflaged into an autonomous-driving simulation engine to generate a target data set, wherein the target data set comprises a test set and a training set, the training set being used to train the 3D camouflage texture from the adversarial-camouflage base map and the test set being used to evaluate the attack success rate of the camouflage;
using the mesh-form 3D model of the target to be camouflaged to produce the U-V mapping corresponding to that 3D model, and establishing a U-V map of the same size as the coordinate system according to the U-V mapping, wherein the U-V mapping refers to the coordinate mapping relation of the 3D-to-2D mapping process;
and determining, according to the U-V mapping and the target data set, the mask data corresponding to the camouflage area to be painted, wherein the mask data are used to interfere with and evade detection by the detector.
The following description will be made with reference to fig. 1 and2 by way of specific examples.
S101: embedding the target model to be camouflaged into an autonomous-driving simulation engine to generate a target data set, wherein the target data set comprises a test set and a training set, the training set being used to train the 3D camouflage texture from the adversarial-camouflage base map and the test set being used to evaluate the attack success rate of the camouflage.
The target model V to be camouflaged can be embedded into different autonomous-driving simulation environments R_p, and the images received by visible-light sensors at different viewing angles and distances in different scenes s can be recorded, so as to obtain the required sensor data.
The generated target data set is divided into at least a test set and a training set; the training set is used to train the 3D camouflage texture from the adversarial-camouflage base map, and the test set is used to evaluate the attack success rate of the camouflage.
The training and testing procedures are not enumerated in detail here; those skilled in the art can carry them out with reference to standard machine-learning practice.
S102: and (3) utilizing a 3D model of the target to be camouflaged in a grid form to manufacture U-V mapping corresponding to the 3D model of the target to be camouflaged, and establishing a U-V map with the same size as the coordinate system according to the U-V mapping.
The U-V mapping refers to a coordinate mapping relationship in the 3D-to-2D mapping process.
This step may be performed simultaneously with the step of S101, or may be performed separately, and the two steps do not affect each other.
From the 3D model of the target to be camouflaged, a U-V map of the same size as the coordinate system is obtained; this can simply be understood as converting the 3D model into a 2D representation so as to facilitate subsequent processing.
S103: and determining mask data of the corresponding camouflage area to be coated according to the U-V mapping and the target data set, wherein the mask data are used for interfering detection of the avoidance detector.
After the U-V map is obtained, the means adopted by those skilled in the art is not limited here: training, conversion, simulation and other approaches may all be used, provided that the mask data corresponding to the camouflage area to be painted, used to interfere with and evade the detector, are determined from the U-V map and the target data set.
The mask data for the camouflage area to be painted can be chosen according to actual requirements. For example, if the interference of the object detection algorithm Faster R-CNN is to be countered, the mask data for the camouflage area can be generated to evade Faster R-CNN; if other detection algorithms need to be countered, the corresponding adversarial data are determined for the corresponding detection algorithm. Those skilled in the art can configure this according to actual needs.
The technical scheme of the embodiments of the application can thus be applied to adversarial attack under any condition: if the interference of the object detection algorithm Faster R-CNN is to be countered, the corresponding adversarial data are determined for Faster R-CNN, and if other algorithms are to be countered, the corresponding adversarial data are determined for those algorithms, so the anti-camouflage generation method has strong universality and the algorithm to be attacked can be chosen according to actual needs. In addition, the scheme can generate full-view adversarial data for any model, generate the adversarial texture, and attach it to the target object. By combining a deep neural network framework with the U-V mapping design, the U-V mapping corresponding to the 3D model of the target to be camouflaged is produced and a U-V map of the same size as the coordinate system is built from it, so that the textures of adjacent faces of the mesh model are effectively connected in 2D; owing to the size mapping relation established by the U-V mapping, the texture can be tiled reasonably even on convex or otherwise curved surfaces, making the scheme suitable for physical realization, and through the coordinate mapping relation of the 3D-to-2D mapping process the plane and the three-dimensional curved surface are effectively connected, so that the 3D effect is achieved even though the mask data are obtained on a planar map.
Optionally, as one embodiment, after determining the mask data corresponding to the camouflage area to be painted, the method further includes:
and printing the mask data, cutting out an area to be coated, and attaching the area to be coated to the corresponding position of the target to be camouflaged.
After the mask data are obtained, they can effectively interfere with the detection of the detector. Because the mask data are obtained by testing, on the target data set, the 3D camouflage texture trained from the adversarial-camouflage base map after conversion into a 2D form, they can be applied effectively to the target to be camouflaged even when its outer surface is curved, and can be used directly in practical applications once attached to the physical target, with good effect.
And printing mask data, cutting out an area to be coated, and attaching the area to be coated to a corresponding position of the target to be camouflaged, so that the target to be camouflaged can be hidden, and related detection is avoided.
The following is a description of specific examples.
Printing the optimized U-V map, cutting out the area to be painted, and attaching the textures at the corresponding positions to the model vehicle body allows detection by the two-stage detector Faster R-CNN (or by other corresponding detectors) to be evaded, completing the adversarial camouflage.
Specifically, the optimized initial U-V map may be scaled according to the corresponding camouflage area of the scaled model and printed to obtain a proportionally scaled camouflage, which is attached to the corresponding position of the model vehicle body; detection by the two-stage detector Faster R-CNN (or by other corresponding detectors) can thereby be evaded, completing the physical adversarial camouflage. The camouflage effect is shown in FIG. 5.
Optionally, as one embodiment, the embedding the target model to be camouflaged into the autopilot simulation environment engine to generate the target data set specifically includes:
generating a target data set of multiple angles, multiple distances and multiple scenes by using an automatic driving simulation engine;
Wherein the angle and distance of the sensor relative to the target is recorded while the target data set is generated.
Specifically, the autopilot simulation engine may be CARLA.
Optionally, as one embodiment, the generating, by using the autopilot simulation engine CARLA, a target data set of multiple angles, multiple distances, and multiple scenes specifically includes:
Embedding the target model V to be camouflaged into different autonomous-driving simulation environments R_p, and recording the images received by visible-light sensors at different viewing angles and distances in different scenes s_i;
obtaining the corresponding rendering results for the different scenes s_i and sensor positions, thereby generating the target data set;
where i is a positive integer greater than or equal to 1 indexing the sensors, and p indexes the autonomous-driving simulation environments, typically p = 1, i.e. a single simulation environment.
Optionally, as an embodiment, obtaining the corresponding rendering results for the different scenes s_i and sensor positions specifically includes:
the relative coordinates of the sensor are (α, θ, d), with θ ∈ (0, π), and the position of sensor i is (α_i, θ_i, d_i);
a rendering result is obtained from the sensor data according to the following formula:
I_{p,i} = R_p(V, α_i, θ_i, d_i, s_i)
where (α_i, θ_i, d_i) are the coordinates of sensor i in the spherical coordinate system of the model, α_i being the azimuth angle, θ_i the polar angle and d_i the distance.
The following description proceeds through a specific embodiment in which the target to be camouflaged is a vehicle; in actual use, the target to be camouflaged can be chosen freely according to actual needs.
Referring to fig. 2, fig. 2 is a detailed flowchart of an anti-camouflage generation method.
S11: embedding the target model V to be camouflaged into different automatic driving simulation environments R p, recording visible light sensor receiving images with different visual angles and distances under different scenes s, wherein the relative coordinates of the sensors are (alpha, theta, d),θ∈(0,π)。
For different scenes s i and sensor positions (α ii,di), the sensor data, i.e., the rendering results, are shown in formula 1:
Ipi=Rp(V,αii,di,si) (1)
Where (α ii,di) is the coordinates of the sensor i with respect to the spherical coordinate system of the model, α i is the azimuth angle, θ i is the polar angle, and d i is the distance.
More specifically, the 3D model of the target camouflage vehicle may be imported into CARLA and its collision settings configured. In practical applications the influence of weather also has to be considered, so to better simulate the real environment, maps for four weather conditions or environments are selected and the vehicle is placed in each map at 10 random positions, giving 40 data-acquisition positions in total.
S12: for each data acquisition position, a sensor (alpha, theta, d) is arranged in a spherical coordinate system taking a vehicle as a center, wherein the theta acquisition angle is {5 degrees, 20 degrees and 45 degrees }, in practical application, the embodiment of the invention is limited by a pre-training model and the problem of the visual angle effect of the unmanned aerial vehicle in a simulation environment, and the polar angle does not acquire data above 45 degrees because the visual angle effect of the unmanned aerial vehicle is poor; the polar angle azimuth angle alpha acquisition range is (0 degrees, 360 degrees), more specifically, 20 degrees are taken when the step size is 30 degrees and below, and 45 degrees are taken when the step size is 30 degrees and above; the distance dsign may be {8,12,20,30}, based on the above parameters, so that 44×4=176 pieces of image data are acquired at a single location, and the total data amount is 7040 pieces. The number of the positions is 4 per map, 704 are taken as test set data, and the rest are taken as training set data.
The image generated by the emulated renderer is denoted I p.
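As an illustration only, the viewpoint grid described above can be enumerated as in the following Python sketch; the actual image capture is done through the CARLA simulator and its RGB camera sensors, which are not shown here, and the helper names and the treatment of the polar angle as an elevation above the ground plane are assumptions of this sketch.

    import math

    def viewpoint_grid():
        # Polar angles {5, 20, 45} deg; azimuth step 20 deg for polar angles of
        # 30 deg and below, 45 deg above; distances {8, 12, 20, 30}.
        poses = []
        for theta in (5.0, 20.0, 45.0):
            step = 20.0 if theta <= 30.0 else 45.0
            for i in range(int(360.0 / step)):
                for d in (8.0, 12.0, 20.0, 30.0):
                    poses.append((theta, i * step, d))
        return poses

    def spherical_to_cartesian(theta_deg, alpha_deg, d, center=(0.0, 0.0, 0.0)):
        # Camera location around the vehicle center (z up); theta is treated
        # here as an elevation angle, which is an assumption of this sketch.
        t, a = math.radians(theta_deg), math.radians(alpha_deg)
        return (center[0] + d * math.cos(t) * math.cos(a),
                center[1] + d * math.cos(t) * math.sin(a),
                center[2] + d * math.sin(t))

    print(len(viewpoint_grid()))  # 176 poses per placement position, 40 positions -> 7040 images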
S13: the generated data are divided into a test set and a training set; the training set is used to train the 3D camouflage texture from the adversarial-camouflage base map, and the test set is used to evaluate the attack success rate of the camouflage.
The number N_cover of test-set images in which the vehicle is completely occluded by buildings or vegetation is also counted, to facilitate the subsequent evaluation.
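The role of N_cover in the later evaluation can be pictured with the small helper below; the exact success-rate definition is not spelled out in the text, so the formula and the example numbers are assumptions for illustration only.

    def attack_success_rate(n_test, n_cover, n_detected):
        # Fully occluded images are excluded, since the vehicle would be
        # "hidden" in them regardless of any camouflage.
        evaluable = n_test - n_cover
        return (evaluable - n_detected) / evaluable if evaluable > 0 else 0.0

    print(attack_success_rate(704, 20, 150))  # illustrative numbers only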
Optionally, as one embodiment, the creating, by using the mesh-form target 3D model, a U-V mapping corresponding to the target texture specifically includes:
Designing U-V coordinate mapping of a 3D model of the target;
designing the coordinate mapping relation between the points of the triangular faces in the 2D U-V unwrap and the data points of the target 3D model, and ensuring that the U-V tiles of the area to be painted do not overlap in the 2D mapping;
And according to the coordinate mapping relation, manufacturing a U-V mapping corresponding to the target texture.
Optionally, as an embodiment, the creating, according to the coordinate mapping relationship, a U-V mapping corresponding to the target texture specifically includes:
Designing the arrangement of the points in the U-V diagram of the target and, without splitting any triangular face within a U-V body of the target, determining the current mapping relation as the U-V mapping corresponding to the target texture when the area of each triangular face in the 2D-mapped U-V diagram is approximately consistent with the area of the corresponding triangular face of the target 3D model.
Optionally, as an embodiment, the method for enabling the area of the triangular surface in the U-V diagram mapped by the 2D model to be approximately consistent with the area of the triangular surface of the 3D model of the target specifically includes:
performing the U-V arrangement in 3D software with the objective of reducing the sum of squared differences between 1 and the ratio of the area S_k of each triangular face in the 2D U-V mapping to the corresponding face area S'_k of the target 3D model mesh, i.e. designing according to the following formula:
Σ_{k=1}^{K} (S_k / S'_k - 1)^2
where K is the number of triangular faces, the value of K depending on the original design of the 3D model.
Optionally, as one embodiment, after the creating the U-V mapping corresponding to the 3D model of the object to be camouflaged, before building a U-V map with the same size as the coordinate system according to the U-V mapping, the method further includes:
and outputting, in U-V form, the part of the U-V mapping that is to be camouflaged.
The description will be continued taking the above embodiments as examples.
S21: setting the U-V coordinate mapping of the 3D target model V, and designing the coordinate mapping relation between the points of the triangular faces in the 2D U-V unwrap and the points of the 3D mesh model, so that the U-V tiles of the area to be painted do not overlap in the 2D mapping.
The arrangement of the points in the U-V diagram of the 3D model is designed so that, without splitting any triangular face within a 3D U-V body, the areas of the triangular faces in the 2D-mapped U-V diagram stay as consistent as possible with the areas of the triangular faces of the original model, thereby avoiding texture deformation caused by mapping the curved surface.
Based on the above conception, the U-V arrangement is carried out in 3D software with the objective of reducing the sum of squared differences between 1 and the ratio of each triangular face area S_k in the 2D U-V mapping to the corresponding face area S'_k of the 3D model mesh, i.e. the design may use the objective function shown in formula (2):
Σ_{k=1}^{K} (S_k / S'_k - 1)^2   (2)
FIG. 3 provides a specific example of this correspondence for reference.
Where K is the number of triangular faces.
Specifically, 3D modelling software can be used to establish the camouflage-distribution U-V design for the mesh model; the objective value is computed iteratively while the coordinates of the layout faces are arranged, the arrangement stops when the value reaches its minimum, and an initial U-V body mapping relation diagram is generated as a tensor of size 3x2048x2048.
Maya is adopted as the software for editing the 3D U-V distribution, and the objective value based on the ratios of the face dimensions before and after mapping is computed iteratively; by fine-tuning and recalculation the value is brought close to its minimum, so that the mapped pattern restores the fit of the 3D model surface as well as possible.
After the 3D U-V distribution is determined, the area is divided in units of 3D U-V bodies, and the 3D U-V distribution is stored as a tensor of size 3x2048x2048, in which the color outside the boundaries of the 3D U-V bodies is set to (0, 0, 0) and the color within the boundaries is set to (1, 1, 1).
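One way to evaluate the area-ratio objective of formula (2) for a candidate U-V layout is sketched below; the array layout, and the assumption that the U-V map has been scaled so that absolute areas are comparable, are illustrative choices only, and the actual arrangement is done interactively in Maya.

    import numpy as np

    def face_areas_3d(verts, faces):
        a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

    def face_areas_uv(uv, faces):
        a, b, c = uv[faces[:, 0]], uv[faces[:, 1]], uv[faces[:, 2]]
        ab, ac = b - a, c - a
        return 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])

    def uv_area_objective(verts, uv, faces, eps=1e-12):
        # Sum over the K triangular faces of (S_k / S'_k - 1)^2, cf. formula (2).
        ratio = face_areas_uv(uv, faces) / (face_areas_3d(verts, faces) + eps)
        return float(np.sum((ratio - 1.0) ** 2))

    # Toy example: a unit square split into two triangles, with an undistorted layout.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 2, 3]])
    uv = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    print(uv_area_objective(verts, uv, faces))  # 0.0 for this distortion-free layout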
S22: and (3) circling the part to be disguised in the U-V mapping in a U-V expression form, and establishing a U-V mapping with the same size as the coordinate system according to the U-V coordinate mapping.
In the 3D model, all external surfaces are not required to be camouflaged sometimes, and only camouflaged parts are required, so that the parts which need to be camouflaged in the U-V mapping are circled in a U-V expression form, and a U-V map with the same size as a coordinate system is established according to the U-V coordinate mapping, so that the calculation amount and resources can be saved, and various unnecessary expenses such as calculation waste and the like are not required to be generated.
According to the technical scheme, a random transformation mode of textures is adopted, a dynamic transformation process of a real object in the physical world is simulated, the detection effect of avoiding the double-stage detector Faster R-CNN is improved, physical anti-camouflage is effectively completed, and the physical camouflage effect is improved.
Optionally, as one embodiment, the determining mask data corresponding to the camouflage area to be painted according to the U-V map and the target data set specifically includes:
Setting the U-V region corresponding to the mesh faces of the target to be camouflaged to a color different from that of the target body;
and rendering the model into 2D images with a neural network renderer at the same angles and distances, and binarizing the rendering results to obtain the mask data of the area to be camouflaged corresponding to the generated data set.
Optionally, as an embodiment, the setting the U-V region corresponding to the grid surface of the target to be camouflaged to be different from the target body specifically includes:
And selecting the color c_t of the part to be painted in the camouflage U-V body C as the inverse color (1 - c_j per channel), or pure white, of the target body color {c_j, j ∈ C},
where r, g, b denote the values of the red, green and blue color channels of the image, c_j is the j-th pixel of the target camouflage U-V body C, c_t represents the region other than the target camouflage U-V body C, j indexes each pixel, and t denotes the t-th pixel.
Optionally, as one embodiment, rendering the model into 2D images with a neural network renderer at the same angles and distances specifically includes:
for each picture I_{p,i} in the data set, a result image I_{m,i} is rendered under a dim-light setting using the neural network renderer R_n at its angle (α_i, θ_i) and distance d_i, using the following formula:
I_{m,i} = R_n(V, α_i, θ_i, d_i)
optionally, as one embodiment, binarizing the rendering result to obtain mask data of the region to be camouflaged corresponding to the generated dataset, which specifically includes:
calculating the maximum values c_t^max and c_n^max of the camouflage area t and the non-camouflage area n in the rendering result, and taking their intermediate value c_mid to binarize the rendering result, the value c_mid being obtained by the following formula:
c_mid = (c_t^max + c_n^max) / 2
Forming a mask M_i corresponding to each picture I_{p,i} in the data set, wherein the value M_{i,j} of each pixel of the mask M_i is determined by the value I_{m,i,j} of the corresponding pixel of the rendering result I_{m,i} of the original map, the mask pixel value being 1 at camouflage positions and 0 otherwise;
in determining M_{i,j}, the mask is computed by the following formula:
M_{i,j} = 1 if I_{m,i,j} ≥ c_mid, otherwise M_{i,j} = 0.
The description will be continued taking the above embodiments as examples.
S3: building the map image corresponding to the U-V mapping from the 3D U-V distribution, dividing the camouflage painting area based on the 3D U-V body boundaries, setting colors to distinguish the area to be painted from the other areas, feeding the U-V map image and the mesh model into a renderer, rendering at each sensor position in the data set, and binarizing the resulting images for use as masks.
According to the boundaries circled in the map corresponding to the U-V body areas to be painted, the initial color inside the U-V body areas to be painted is first set to (0, 0, 0) and the initial color of the other areas of the map is set to (1, 1, 1), giving a binary U-V map.
For each picture in the data set, the relative position (α, θ, d) of the sensor is extracted, and the spherical coordinate system is scaled and displaced according to the difference between the sensors and the proportional relation between the simulation renderer and the adopted differentiable renderer; specifically, the scaling ratio can be chosen as 0.357 and the displacement as (0, -0.10).
The 3D mesh model carrying the binary U-V map is then rendered differentiably, using the scaled and displaced relative position coordinates, to obtain rendering results corresponding one-to-one to the pictures in the data set, and each rendering result is binarized, the threshold being a value between the highest value of the rendered painted area and the lowest value of the other areas. In this embodiment of the invention (0.5, 0.5, 0.5) is taken, giving the binarized mask M = (M_{i,j}), in which M_{i,j} is 0 for the rendered camouflage area and 1 for the other areas and the background. The outputs of the differentiable renderer are all set to size 3x640x640 to facilitate fusion with the simulation results.
The specific process can be as follows:
S31: the color c_t of the part to be painted in the selected mesh U-V body C is set to the inverse color (per channel, c_t = 1 - c_j) or to pure white (1, 1, 1) of the vehicle body color {c_j, j ∈ C} (formula 3),
where r, g, b denote the values of the red, green and blue color channels of the image, c_j is the j-th pixel of the target camouflage U-V body C, c_t represents the region other than the target camouflage U-V body C, j indexes each pixel, and t denotes the t-th pixel.
S32: for each picture I_{p,i} in the data set, a result image I_{m,i} is rendered under a dim-light setting using the neural network renderer R_n at its angle (α_i, θ_i) and distance d_i, as shown in formula (4):
I_{m,i} = R_n(V, α_i, θ_i, d_i)   (4)
S33: the maximum values c_t^max and c_n^max of the camouflage area t and the non-camouflage area n in the rendering result are calculated, and their intermediate value c_mid is taken to binarize the rendering result, as shown in formula (5):
c_mid = (c_t^max + c_n^max) / 2   (5)
A mask M_i corresponding to each picture I_{p,i} in the data set is then formed; the value M_{i,j} of each pixel of the mask M_i is determined by the value I_{m,i,j} of the corresponding pixel of the rendering result I_{m,i} of the original map, the mask pixel value being 1 at camouflage positions and 0 otherwise. The mask is calculated as shown in formula (6):
M_{i,j} = 1 if I_{m,i,j} ≥ c_mid, otherwise M_{i,j} = 0   (6)
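A compact sketch of this mask-extraction step (S31-S33) is given below; the render callable stands in for the neural/differentiable renderer, and its signature, the grayscale reduction and the fixed 0.5 threshold are assumptions made only for illustration.

    import torch

    def make_masks(render, mesh, marker_texture, poses, threshold=0.5):
        # The to-be-painted U-V region carries a contrasting marker color
        # (inverse color or pure white), so after rendering it can be separated
        # from the body and background by a threshold, cf. formulas (5) and (6).
        masks = []
        for (alpha, theta, d) in poses:
            img = render(mesh, marker_texture, alpha, theta, d)  # 3 x H x W in [0, 1]
            gray = img.mean(dim=0)
            masks.append((gray >= threshold).float())            # 1 = camouflage region
        return masks

In practice the threshold would be taken between the extrema of the painted and unpainted regions as in formula (5), and the mask polarity flipped where the later fusion step expects camouflage pixels to be 0.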
This scheme fuses the results of a differentiable renderer, through which gradients can be back-propagated for optimization, with those of the more realistic physically based simulation renderer, achieving evasion of the two-stage detector Faster R-CNN, optimizing the camouflage effect of the target to be camouflaged while preserving the realism of the physical simulation, and thus completing the physical adversarial camouflage.
Optionally, as one embodiment, after determining mask data corresponding to the camouflage area to be painted according to the U-V map and the target data set, the method further includes:
The mask data is optimized.
Optionally, as one embodiment, the optimizing the mask data specifically includes:
And rendering, with a differentiable renderer, the image mapped by the U-V map of the camouflage area to be painted.
Optionally, as one embodiment, the rendering, with a differentiable renderer, of the image mapped by the U-V map of the camouflaged area to be painted specifically includes:
mapping the U-V map T onto the 3D model with the differentiable renderer for each picture I_{p,i} in the data set, and rendering to obtain a result map I_{d,i,k}, where k is the iteration number, initially k = 0;
according to the setting (α_i, θ_i, d_i, s_i) used in the rendering, fusing, by means of the mask, the image I_{p,i} from the simulation rendering result with the result image I_{d,i,k} rendered by the differentiable renderer, to obtain the rendering result image I_{r,i,k} when the texture map is T_{i,k}.
The description will be continued taking the above embodiments as examples.
S4: using a differentiable renderer to render the model carrying the U-V map, multiplying the rendered image by the mask, and fusing it with the simulation-engine data set to serve as the input of the two-stage object detection algorithm Faster R-CNN.
As described in further detail below.
Specifically, S4 renders, with a differentiable renderer, the 3D mesh model whose U-V map has been prepared and preset; for each item of data in the data set, the relative position (α, θ, d) of the sensor is extracted, scaled and displaced, and the rendering then yields the image to be optimized, which is fused with the original simulation data image based on the mask generated in step S3.
For each picture I_{p,i} in the data set, the U-V map T is mapped onto the 3D model with the differentiable renderer and rendered to obtain the result map I_{d,i,k}, where k is the iteration number, initially k = 0;
according to the setting (α_i, θ_i, d_i, s_i) in the data, the image I_{p,i} from the simulation rendering result and the result image I_{d,i,k} rendered by the differentiable renderer are fused by means of the mask, to obtain the rendering result image I_{r,i,k} when the texture map is T_{i,k}.
More specifically, the following are given:
Step S41: and (3) marking the binary U-V paste graph set in the step (S3) as a paste graph mask M uv, wherein the value of the area to be camouflaged is 1. The initial color of the U-V mapping is reset, wherein the camouflage area is unchanged, the colors of other areas are modified to be the inherent colors of the original mapping positions of the vehicle body, and the initial mapping P for optimization is obtained and can be expressed as tensors with the size of 3x2048x 2048.
S42: and (3) from uniform distribution, randomly sampling to generate an initial texture T with the same initial size as P, and correspondingly replacing the to-be-camouflaged area in the original U-V mapping by the T to obtain the initialized camouflage mapping, wherein the to-be-camouflaged area is a random value, and other areas are intrinsic colors of the corresponding positions of the vehicle. The specific operation is as formula 11:
P′=P*Muv+T*(1-Muv) (11)
where represents a matrix-based bitwise multiplication.
S43: the initialized map and the 3D model are fed into the differentiable renderer, and each image I_p in the data set is rendered to obtain a result I_0 carrying the initialization camouflage.
S44: random expectation-over-transformation is applied to I_0, adding random brightness and contrast disturbances and random noise, as follows:
S441: for the brightness disturbance, a random brightness value B is sampled from a uniform distribution whose probability density is given by formula (12), with max = 1.2 and min = 0.8:
p(x) = 1 / (max - min) for min ≤ x ≤ max, and 0 otherwise   (12)
S442: for the contrast disturbance, a random contrast value C is sampled from a uniform distribution with the same probability function as in S441, with max = 0.1 and min = -0.1.
S443: for the noise disturbance, a random noise image I_n of size 3x2048x2048 is sampled from a uniform distribution with the same probability function as in S441, with max = 1 and min = -1; the generated image is multiplied by a noise factor of 0.1.
S444: the disturbances are applied to the texture image to obtain the randomly transformed camouflage texture image, as in formula (13):
I'_0 = C · I_0 + I_n + B   (13)
S45: extremum clipping is performed on the transformed image so that it satisfies the image value range [0, 1], as shown in formula (14):
I'_0 = min(max(I'_0, 0), 1)   (14)
Step S46: I_p and I′_0 are fused based on the binary mask generated in step S3, giving a result I_r in which the camouflage region is rendered by the differentiable renderer and the background and vehicle are rendered by the simulation renderer. The specific operation is given by formula 15:
I_r = I_p * M + I′_0 * (1 − M)    (15)
Optionally, as one embodiment, after rendering the model to which the U-V map of the camouflage region to be painted has been applied, the method further includes:
multiplying the rendered image by the mask, fusing the result with the simulation-engine data set, optimizing the initial texture with a gradient descent algorithm, back-propagating the gradient of a target loss function, and optimizing the region to be camouflaged in the initial U-V map.
Optionally, as one embodiment, the method for optimizing the initial texture by using the gradient descent algorithm specifically includes:
Multiplying the rendered image by the mask, and fusing the result with the simulation-engine data set to serve as the input of the two-stage target detection model;
The method for optimizing the initial texture by gradient descent is characterized in that the confidence score of the target class from the first-stage RPN network of the detection model and the confidence score of the detection boxes in the second-stage output are computed, the sum of the image gradients of the U-V map is computed, and the weighted sum of these terms is used as the target loss function.
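As a non-limiting sketch of this gradient-descent loop, where `forward` stands in for the render–transform–fuse–detect pipeline described above and `loss_fn` for the target loss function; both callables, the optimizer choice and the step count are illustrative assumptions:

```python
import torch

def optimize_texture(uv_map, uv_mask, dataset, forward, loss_fn,
                     steps=200, lr=0.01):
    """Optimize the camouflaged region of the U-V map by gradient descent.

    uv_map  : 3 x H x W texture tensor (leaf tensor with requires_grad=True)
    uv_mask : 1 x H x W mask of the region to be camouflaged (1 = camouflage)
    dataset : iterable of (sim_image, mask, pose) tuples from the simulation engine
    forward : callable(uv_map, sim_image, mask, pose) -> detector outputs
    loss_fn : callable(outputs, uv_map) -> scalar target loss
    """
    optimizer = torch.optim.Adam([uv_map], lr=lr)
    for _ in range(steps):
        for sim_image, mask, pose in dataset:
            optimizer.zero_grad()
            outputs = forward(uv_map, sim_image, mask, pose)
            loss = loss_fn(outputs, uv_map)
            loss.backward()
            if uv_map.grad is not None:
                # Restrict the update to the region to be camouflaged.
                uv_map.grad.mul_(uv_mask)
            optimizer.step()
            with torch.no_grad():
                # Keep the texture a valid image after each step.
                uv_map.clamp_(0.0, 1.0)
    return uv_map
```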
Optionally, as one embodiment, computing the confidence score of the target class from the first-stage RPN network of the Faster R-CNN detection model and the confidence score of the detection boxes in the second-stage output specifically includes:
inputting the rendering result image into the two-stage detection model F, the output of which consists of two parts: the class confidence and the proposal-box regression scores of the first stage, and the detection-box confidence of the second stage;
taking, for the attacked class v, the class confidences s_i^cls(v) of all proposal boxes computed in the first stage of the model and the detection-box confidences s_j^det(v) of the second stage, and setting an objective function L_det for reducing the detection accuracy of the model as follows:

L_det = Σ_{i=1}^{n} s_i^cls(v) + Σ_j s_j^det(v)
where n is the total number of proposal boxes computed by the first-stage RPN network.
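A sketch of this objective is given below; it assumes the two-stage detector has been instrumented so that, for each input image, the first-stage proposal confidences and the second-stage detection-box confidences are exposed as score matrices. Standard detector APIs usually do not return these directly, so the dictionary keys are assumptions:

```python
def detection_attack_loss(outputs, target_class):
    """Sum the confidences that both detector stages assign to the attacked class v.

    outputs : dict with assumed keys
              'proposal_scores'  - n x num_classes first-stage (RPN) class scores
              'detection_scores' - m x num_classes second-stage box scores
    target_class : index v of the class being attacked
    """
    stage_one = outputs['proposal_scores'][:, target_class].sum()
    stage_two = outputs['detection_scores'][:, target_class].sum()
    # Minimizing this sum lowers the confidence of both stages in the
    # target class, degrading detection of the camouflaged object.
    return stage_one + stage_two
```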
Optionally, as one embodiment, computing the sum of image gradients of the U-V map, weighting and adding the detection confidence scores and the gradient sum, and taking the weighted sum as the target loss function specifically includes:
computing, for each pixel of the U-V map, the corresponding variation value with the following formula:

D_{x,y} = (I_{x+1,y} − I_{x,y})² + (I_{x,y+1} − I_{x,y})²

where x is the row coordinate and y the column coordinate of the U-V map, and the smoothing loss L_tv of the U-V map is the sum of the variation values of all pixels;
weighting and combining the smoothing loss and the loss function that reduces the model detection accuracy with suitable parameters μ and λ, to obtain the loss function used for gradient backpropagation:

L = μ · L_tv + λ · L_det

performing gradient backpropagation with this loss function, optimizing the camouflage texture region on the 3D model, and mapping the optimized texture back to the initial U-V map.
The above embodiments are continued as examples.
Step S5 can be understood as follows: the initial texture is optimized by gradient backpropagation, the confidence score of the target class from the first-stage RPN network of the Faster R-CNN detection model and the confidence score of the detection boxes in the second-stage output are computed, the sum of the image gradients of the U-V map is computed at the same time, the weighted sum of the two is taken as the target loss function, the gradient of this loss function is back-propagated, and the region to be camouflaged in the initial U-V map is optimized. Put simply, the rendered fusion image carrying the initialized camouflage texture is fed into the Faster R-CNN detector, the class confidences of the first-stage output and the detection confidences of the merged second-stage detection boxes are extracted, the adversarial loss is computed according to the loss-function calculation described above and added to the smoothing loss to obtain the total loss, which evaluates both the attack effect and the smoothness of the image.
The specific process can be as follows:
The rendering result image is input into the two-stage detection model F, whose output consists of two parts: the class confidence and the proposal-box regression scores of the first stage, and the detection-box confidence of the second stage.
For the attacked class v, the class confidences s_i^cls(v) of all proposal boxes computed in the first stage of the model and the detection-box confidences s_j^det(v) of the second stage are obtained, and an objective function L_det that reduces the detection accuracy of the model is set as shown in formula 7:

L_det = Σ_{i=1}^{n} s_i^cls(v) + Σ_j s_j^det(v)    (7)
where n is the total number of proposal boxes computed by the first-stage RPN network.
The variation value corresponding to each pixel of the image is computed as shown in formula 8:

D_{x,y} = (I_{x+1,y} − I_{x,y})² + (I_{x,y+1} − I_{x,y})²    (8)

where x is the row coordinate and y the column coordinate of the image. The smoothing loss of the image is the sum of the variation values of all pixels, as shown in formula 9:

L_tv = Σ_{x,y} D_{x,y}    (9)
The smoothing loss and the loss function that reduces the model detection accuracy are weighted and combined with suitable parameters μ and λ, as shown in formula 10, to obtain the loss function used for gradient backpropagation:

L = μ · L_tv + λ · L_det    (10)

Gradient backpropagation is performed with this loss function, the camouflage texture region on the 3D model is optimized, and the optimized texture is mapped back to the initial U-V map.
Further based on the above embodiment, the specific process may be as follows:
Step S51: the U-V map is set as the tensor to be optimized and I_r is taken as input; detection is performed with the Faster R-CNN detection model, and the classification confidences of all pre-selected boxes are extracted from the first-stage RPN output and summed. The specific calculation is given by formula 16:

L_cls = Σ_{i=1}^{n} F_i^cls(I_r, t)    (16)

where F denotes the detection model, t the target class of the object, and n the number of pre-selected boxes.
Step S52: detection is performed with the Faster R-CNN detection model, and the detection confidences of the merged detection boxes are extracted from the second-stage output and summed. The specific calculation is given by formula 17:

L_obj = Σ_j F_j^det(I_r, t)    (17)
Step S53: for computing the smoothing loss, only the camouflage-texture part I_t of the image is extracted; the extraction is given by formula 18:
I_t = I_0 * (1 − M)    (18)
Step S54: the smoothing loss of the texture is computed. First, the squared gradient of each individual pixel is computed, as given by formula 19:

D_{x,y} = (I_{x+1,y} − I_{x,y})² + (I_{x,y+1} − I_{x,y})²    (19)
The D_{x,y} values over the entire image are then summed to obtain the smoothness loss of the texture, as shown in formula 20:

L_tv = Σ_{x=1}^{w} Σ_{y=1}^{h} D_{x,y}    (20)

where w and h are the width and height of the image I_t, respectively.
Step S55: weight coefficients α, β and γ are applied to the respective loss functions to obtain the overall optimization target loss function, so that during optimization the object is, as far as possible, either not detected or misclassified, while the texture remains visually smooth.
The values of α, β and γ can be 1.0, 1.0 and 0.5, respectively.
Specifically as shown in formula 21:
L_total = α · L_cls + β · L_obj + γ · L_tv    (21)
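The loss terms of formulas 16–21 can be sketched as follows, with L_cls and L_obj passed in as the confidence sums of formulas 16 and 17, the smoothness term following one common total-variation form consistent with formulas 18–20, and the weights taken from the text:

```python
def total_variation(camo_texture):
    """Smoothness loss L_tv: sum of squared horizontal and vertical
    pixel differences over the camouflage-only image I_t = I_0 * (1 - M)."""
    dx = camo_texture[..., :, 1:] - camo_texture[..., :, :-1]
    dy = camo_texture[..., 1:, :] - camo_texture[..., :-1, :]
    return (dx ** 2).sum() + (dy ** 2).sum()

def total_loss(cls_scores, obj_scores, camo_texture,
               alpha=1.0, beta=1.0, gamma=0.5):
    """Overall target loss L_total = alpha*L_cls + beta*L_obj + gamma*L_tv.

    cls_scores   : first-stage classification confidences of the pre-selected boxes
    obj_scores   : second-stage detection confidences of the merged boxes
    camo_texture : camouflage-texture part of the rendered image
    """
    l_cls = cls_scores.sum()
    l_obj = obj_scores.sum()
    l_tv = total_variation(camo_texture)
    return alpha * l_cls + beta * l_obj + gamma * l_tv
```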
According to this technical scheme, the differentiable renderer capable of gradient backpropagation is combined with the more realistic, physically based simulation renderer, so that physical realizability is ensured while the whole-vehicle camouflage is optimized, improving the camouflage effect of the object to be camouflaged. The target loss function designed in this embodiment attacks both stages of the two-stage network at the same time, so that the target becomes stealthy; on the other hand, if the stealth fails, the target is misidentified. That is, after object A is camouflaged according to this embodiment, even if it is detected by the two-stage detector Faster R-CNN, the effect of the U-V map causes it to be misidentified as object B, so that object A remains hidden.
According to the technical scheme of this embodiment, Fig. 4A shows the 3D anti-camouflage map obtained before optimization and Fig. 4B the 3D anti-camouflage map obtained after optimization. It can be seen from the figures that the optimized 3D anti-camouflage map works well, and camouflage remains effective even when the surface of the object to be camouflaged is non-planar.
With the technical scheme provided by this embodiment, the camouflage effect is as shown in Fig. 5: the camouflage and concealment effects are strong, and even if object A is detected by the two-stage detector Faster R-CNN it can be misidentified as object B, while physical realizability is ensured during whole-vehicle camouflage optimization.
The embodiment of the present invention further provides an anti-camouflage generating device 600, as shown in fig. 6, including:
The training module 601 is configured to embed a target model to be camouflaged into the autopilot simulation environment engine, and generate a target data set, where the target data set includes a test set and a training set, the training set is configured to train a 3D camouflage texture against a camouflage base map, and the test set is configured to evaluate a camouflage attack success rate;
The generating module 602 is configured to make, from the 3D model of the target to be camouflaged in mesh form, the U-V mapping corresponding to that 3D model, and to establish a U-V map of the same size as the coordinate system according to the U-V mapping, where the U-V mapping refers to the coordinate mapping relationship in the 3D-to-2D mapping process; and to determine, according to the U-V mapping and the target data set, the mask data corresponding to the camouflage region to be painted.
An embodiment of the invention also provides a vehicle, part of whose body is covered with a film containing the mask data produced by the method steps of any of the above embodiments.
An embodiment of the invention also provides an anti-camouflage device, part of whose outer surface is covered with a film containing the mask data produced by the method steps of any of the above embodiments, the mask data being able to interfere with detection by a detector.
An embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the program to implement the steps of the method according to any one of the foregoing embodiments.
It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It is to be understood that the above-described embodiments of the present invention are merely illustrative or explanatory of the principles of the present invention and are in no way limiting of the invention. Accordingly, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present invention should be included within the scope of the present invention. Furthermore, the appended claims are intended to cover all such changes and modifications that fall within the scope and boundary of the appended claims, or equivalents of such scope and boundary.

Claims (25)

1. A method of anti-camouflage generation comprising:
Embedding a target model to be camouflaged into an automatic driving simulation environment engine to generate a target data set, wherein the target data set comprises a test set and a training set, the training set is used for training 3D camouflage textures against a camouflage base map, and the test set is used for evaluating the camouflage attack success rate;
utilizing a 3D model of a target to be camouflaged in a grid form to manufacture U-V mapping corresponding to the 3D model of the target to be camouflaged, and establishing a U-V mapping with the same size as a coordinate system according to the U-V mapping, wherein the U-V mapping refers to a coordinate mapping relation in a 3D-to-2D mapping process;
And determining mask data corresponding to the camouflage area to be coated according to the U-V mapping and the target data set, wherein the mask data is used to interfere with detection by a detector.
2. The method according to claim 1, further comprising, after determining the mask data corresponding to the camouflaged area to be painted:
and printing the mask data, cutting out an area to be coated, and attaching the area to be coated to the corresponding position of the target to be camouflaged.
3. The method according to claim 1, wherein the embedding the object model to be camouflaged into the autopilot simulation environment engine generates an object data set, in particular comprising:
generating a target data set of multiple angles, multiple distances and multiple scenes by using an automatic driving simulation engine;
Wherein the angle and distance of the sensor relative to the target is recorded while the target data set is generated.
4. A method according to claim 3, wherein the generating a multi-angle, multi-distance, multi-scene target data set using an autopilot simulation engine comprises:
Embedding a target model V to be camouflaged into different automatic driving simulation environments R p, and recording visible light sensor receiving images with different visual angles and distances under different scenes s i;
Corresponding rendering results are obtained for different scenes s i and sensor positions, and the target data set is generated;
wherein i is a positive integer greater than or equal to 1 representing the sensor, and p represents the automatic driving simulation environment.
5. The method of claim 4, wherein the obtaining corresponding rendering results for the different scenes s i and the sensor positions specifically includes:
The relative coordinates of the sensor are expressed in the spherical coordinate system of the model, the sensor position being (α_i, θ_i, d_i);
According to the data of the sensor, a rendering result is obtained, wherein the rendering result is obtained by adopting the following formula:
I_pi = R_p(V, α_i, θ_i, d_i, s_i)
where (α_i, θ_i, d_i) are the coordinates of sensor i with respect to the spherical coordinate system of the model, α_i is the azimuth angle, θ_i is the polar angle, and d_i is the distance.
6. The method according to claim 1, wherein the creating the U-V mapping corresponding to the target texture by using the target 3D model in the grid form specifically includes:
Designing U-V coordinate mapping of a 3D model of the target;
Designing a coordinate mapping relation between a triangular surface point in U-V expansion of a 2D model and a 3D model data point of the target, and determining that U-V tiling of a region to be coated is not overlapped in mapping of the 2D model;
And according to the coordinate mapping relation, manufacturing a U-V mapping corresponding to the target texture.
7. The method of claim 6, wherein the creating the U-V map corresponding to the target texture according to the coordinate mapping relation specifically includes:
Designing the arrangement of points in the U-V diagram of the target, and determining the current mapping relation as the U-V mapping corresponding to the target texture when the area of the triangular surface in the U-V diagram mapped by the 2D model is approximately consistent with the area of the triangular surface of the 3D model of the target without separating the triangular surface in the U-V body of the target.
8. The method according to claim 7, wherein the area of the triangular surface in the U-V diagram mapped by the 2D model is approximately consistent with the area of the triangular surface of the 3D model of the target, specifically comprising:
Taking as the objective the reduction of the sum of squares of the differences between 1 and the ratio of the area S_k of each triangular face in the U-V map of the 2D model to the corresponding face area S′_k of the target 3D model mesh, the U-V layout is arranged in 3D software according to the following formula:

min Σ_{k=1}^{K} (S_k / S′_k − 1)²

where K is the number of triangular faces.
9. The method of claim 1, further comprising, after said creating a U-V map corresponding to the 3D model of the object to be camouflaged, before building a U-V map of the same size as the coordinate system from the U-V map:
and outputting the part to be disguised in the U-V mapping in the U-V expression form.
10. The method according to claim 1, wherein determining mask data corresponding to a camouflaged area to be painted according to the U-V map and the target data set, specifically comprises:
Setting a U-V region corresponding to the grid surface of the target to be camouflaged to be different in color from the target body;
And rendering a 2D model by using a neural network renderer at the same angle distance, and binarizing the rendering result to obtain mask data of the region to be camouflaged corresponding to the generated data set.
11. The method of claim 10, wherein the setting the U-V region corresponding to the grid surface of the object to be camouflaged to be different from the object body specifically includes:
And selecting the color C_t of the part to be coated in the U-V body C to be camouflaged as the inverse color of the target body color {C_j, j ∈ C} or as pure white, using the following formula:

C_t = (1 − r_j, 1 − g_j, 1 − b_j) or C_t = (1, 1, 1)

where r, g and b represent the color channel values of the red, green and blue channels of the image, C_j is the j-th pixel of the target camouflage U-V body C, C_t represents the region other than the target camouflage U-V body C, j denotes each pixel, and t denotes the t-th pixel.
12. The method according to claim 10, wherein the rendering the 2D model with the neural network renderer at the same angular distance comprises:
For each picture I_pi in the data set, a resulting image I_mi is rendered under dim lighting settings with the neural network renderer R_n for its angle (α_i, θ_i) and distance d_i, using the following formula:
I_mi = R_n(V, α_i, θ_i, d_i).
13. The method according to claim 10, wherein binarizing the rendering result to obtain mask data of a region to be camouflaged corresponding to the generated data set specifically comprises:
calculating the maximum values of the camouflage region t and of the non-camouflage region n in the rendering result, and taking their intermediate value c_mid to binarize the rendering result, where c_mid is obtained with the following formula:

c_mid = (c_t^max + c_n^max) / 2
forming a mask M_i corresponding to each picture I_pi in the data set, wherein the value M_i,j of each pixel of the mask M_i is determined by the value I_mi,j of the corresponding pixel of the rendering result I_mi of the original map, the mask pixel value being 1 at camouflage positions and 0 otherwise;
in determining M_i,j, the mask is calculated with the following formula:

M_i,j = 1 if I_mi,j ≥ c_mid, and M_i,j = 0 otherwise.
14. the method of claim 1, further comprising, after determining mask data corresponding to a camouflaged area to be painted from the U-V map and the target data set:
The mask data is optimized.
15. The method according to claim 14, wherein said optimizing said mask data, in particular comprises:
And rendering, with a differentiable renderer, the model to which the U-V map of the camouflage area to be coated has been applied.
16. The method of claim 15, further comprising, after rendering the model mapped to the U-V map of the camouflaged area to be painted:
And multiplying the rendered image by the mask, fusing the result with the simulation-engine data set, optimizing the initial texture with a gradient descent algorithm, back-propagating the gradient of a target loss function, and optimizing the region to be camouflaged in the initial U-V map.
17. The method according to claim 16, wherein the method for optimizing the initial texture using a gradient descent algorithm uses a target loss function, specifically comprising:
Multiplying the rendered image by the mask, and fusing the result with the simulation-engine data set to serve as the input of the two-stage target detection model;
The method for optimizing the initial texture by adopting a gradient descent algorithm is characterized in that the confidence score of the target class of the first-stage RPN network in the detection model and the confidence score of the detection boxes of the second-stage output are calculated, the sum of the image gradients of the U-V map is calculated, and the weighted sum of these is used as the target loss function.
18. The method according to claim 15, wherein the rendering, with the differentiable renderer, of the model to which the U-V map of the camouflaged area to be painted has been applied specifically comprises:
for each picture I_pi in the data set, applying the U-V map T to the 3D model with the differentiable renderer and rendering to obtain a result map, where k is the number of iterations (initially k = 0);
according to the setting (α_i, θ_i, d_i, s_i) used in rendering, fusing, by means of the mask, the image I_pi of the simulation rendering result with the result image produced by the differentiable renderer, to obtain the fused rendering result image for the texture map T_i,k.
19. The method of claim 17, wherein computing the confidence score of the target class of the first-stage RPN network in the Faster R-CNN detection model and the confidence score of the detection boxes of the second-stage output specifically comprises:
inputting the rendering result image into the two-stage detection model F, the output of which consists of two parts: the class confidence and the proposal-box regression scores of the first stage, and the detection-box confidence of the second stage;
taking, for the attacked class v, the class confidences s_i^cls(v) of all proposal boxes computed in the first stage of the model and the detection-box confidences s_j^det(v) of the second stage, and setting an objective function L_det for reducing the detection accuracy of the model as follows:

L_det = Σ_{i=1}^{n} s_i^cls(v) + Σ_j s_j^det(v)

where n is the total number of proposal boxes computed by the first-stage RPN network.
20. The method according to claim 17, wherein computing the sum of image gradients of the U-V map, weighting and adding the detection-box confidence score and the gradient sum, and taking the weighted sum as the target loss function specifically comprises:
computing, for each pixel of the U-V map, the corresponding variation value with the following formula:

D_{x,y} = (I_{x+1,y} − I_{x,y})² + (I_{x,y+1} − I_{x,y})²

where x is the row coordinate and y the column coordinate of the U-V map, and the smoothing loss of the U-V map is the sum of the variation values of all pixels;
weighting and combining the smoothing loss and the loss function that reduces the model detection accuracy with suitable parameters μ and λ, to obtain the loss function used for gradient backpropagation:

L = μ · L_tv + λ · L_det

performing gradient backpropagation with this loss function, optimizing the camouflage texture region on the 3D model, and mapping the optimized texture back to the initial U-V map.
21. An anti-camouflage generation device, comprising:
The training module is used for embedding a target model to be camouflaged into the automatic driving simulation environment engine to generate a target data set, wherein the target data set comprises a test set and a training set, the training set is used for training 3D camouflage textures against a camouflage base map, and the test set is used for evaluating the camouflage attack success rate;
The generating module is used for making, from the 3D model of the target to be camouflaged in mesh form, the U-V mapping corresponding to that 3D model, and for establishing a U-V map of the same size as the coordinate system according to the U-V mapping, wherein the U-V mapping refers to the coordinate mapping relationship in the 3D-to-2D mapping process; and for determining, according to the U-V mapping and the target data set, the mask data corresponding to the camouflage region to be coated.
22. A vehicle, wherein a part of a body of the vehicle is stuck with a film containing the mask data produced by the method steps of any one of claims 1 to 19.
23. A device against camouflage, characterized in that part of the outer surface of the device is provided with a film comprising the mask data produced by the method steps of any one of claims 1-19, said mask data being intended to interfere with the detection by a detector.
24. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1-19 when the program is executed.
25. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program implementing the steps of the method of any one of claims 1 to 19 when executed by a processor.
