CN110110722A - Region detection correction method based on deep learning model recognition results - Google Patents
Region detection correction method based on deep learning model recognition results
- Publication number
- CN110110722A (application CN201910359641.9A)
- Authority
- CN
- China
- Prior art keywords
- convolutional neural network
- deep convolutional neural network
- vehicle
- operating device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 21
- 238000013136 deep learning model Methods 0.000 title claims abstract description 12
- 238000002715 modification method Methods 0.000 title claims abstract description 12
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 39
- 230000015654 memory Effects 0.000 claims abstract description 30
- 239000003344 environmental pollutant Substances 0.000 claims description 6
- 231100000719 pollutant Toxicity 0.000 claims description 6
- 239000006185 dispersion Substances 0.000 claims description 4
- 238000009432 framing Methods 0.000 claims description 4
- 238000012217 deletion Methods 0.000 claims description 3
- 230000037430 deletion Effects 0.000 claims description 3
- 238000007689 inspection Methods 0.000 claims 1
- 238000012360 testing method Methods 0.000 abstract description 4
- 238000004040 coloring Methods 0.000 abstract description 3
- 238000013473 artificial intelligence Methods 0.000 abstract description 2
- 230000005055 memory storage Effects 0.000 abstract 1
- 238000000034 method Methods 0.000 description 6
- 210000002569 neuron Anatomy 0.000 description 5
- 238000000605 extraction Methods 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000001133 acceleration Effects 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 230000001537 neural effect Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000877 morphologic effect Effects 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a region detection correction method based on deep learning model recognition results, and relates in particular to the field of artificial intelligence. The method uses a correction system: the correction system includes an operating device, the connection end of the operating device is equipped with a deep convolutional neural network model memory, the operating device includes a computer, and a deep convolutional neural network model is stored inside the deep convolutional neural network model memory. The invention uses the texture, edge and color information in the image to locate in advance the positions where targets are likely to appear, which keeps a high recall rate while selecting fewer windows, greatly reduces the time complexity of subsequent operations, and yields candidate windows of higher quality than those of a sliding window. Better detection and recognition of target objects is thereby achieved: detection is more efficient, results are more accurate and robustness is better, giving the method important application prospects in practical computer vision applications.
Description
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to a region detection correction method based on deep learning model recognition results.
Background art
Conventional target detection generally proceeds in three stages: some candidate regions are first selected on the given image, features are then extracted from these regions, and a trained classifier finally classifies the extracted features.
Specifically, it includes the following steps (a minimal sketch of this classical pipeline is given after the list):
1. Region selection: sliding windows of different sizes frame parts of the image as candidate regions;
2. Feature extraction: visual features relevant to the candidate regions are extracted, for example the Haar features commonly used in face detection, or the HOG features commonly used in pedestrian detection and general object detection. Because of the morphological diversity of targets, variations in illumination, the diversity of backgrounds and other factors, designing a robust feature is not easy, yet the quality of the extracted features directly affects the accuracy of classification;
3. Classification: a classifier, such as the common SVM model, performs the recognition.
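As an illustration only, the classical pipeline above can be sketched in Python as follows: HOG features (scikit-image) are extracted from sliding windows over a grayscale image and scored by a linear SVM (scikit-learn). The window size, stride and the pre-trained classifier are illustrative assumptions and are not part of the present invention.

```python
from skimage.feature import hog
from sklearn.svm import LinearSVC

def sliding_windows(image, win=(64, 64), stride=16):
    """Step 1 (region selection): frame parts of a 2-D grayscale image with a fixed-size window."""
    h, w = image.shape[:2]
    for y in range(0, h - win[1] + 1, stride):
        for x in range(0, w - win[0] + 1, stride):
            yield (x, y, x + win[0], y + win[1]), image[y:y + win[1], x:x + win[0]]

def detect(image, clf: LinearSVC):
    """Steps 2-3: extract HOG features per window and score them with an SVM
    that is assumed to have been trained on HOG features of the same window size."""
    hits = []
    for box, patch in sliding_windows(image):
        feat = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        if clf.decision_function([feat])[0] > 0:   # positive score = target present
            hits.append(box)
    return hits
```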
Among traditional target detection methods, the multi-scale deformable part model (DPM) performs well in practice. DPM treats an object as a composition of several parts, for example the nose, mouth and eyes of a face, and describes the object through the relationships between the parts. In practice, however, DPM still has drawbacks: first, the sliding-window region selection strategy is untargeted, its time complexity is high and its windows are redundant; second, hand-designed features are not sufficiently robust to diverse variations.
Summary of the invention
To overcome the above drawbacks of the prior art, embodiments of the present invention provide a region detection correction method based on deep learning model recognition results. Target recognition is performed on the basis of the deep convolutional neural network model in the deep convolutional neural network model memory, the IoU between any two regions is computed, and the most suitable region, i.e. the candidate region, is selected. Compared with the prior art, and in view of the problems of sliding windows, the candidate regions use the texture, edge and color information in the image to locate in advance the positions where targets are likely to appear, ensuring a high recall rate while selecting fewer windows, greatly reducing the time complexity of subsequent operations, and yielding candidate windows of higher quality than those of a sliding window. Better detection and recognition of target objects is thereby achieved: detection is more efficient, results are more accurate and robustness is better, with important application prospects in practical computer vision applications, thus solving the problems mentioned in the background art above.
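The patent does not name the specific algorithm that turns texture, edge and color cues into candidate positions. Purely as an illustrative sketch of that idea, the snippet below derives candidate boxes from a color/texture segmentation (Felzenszwalb's method in scikit-image); the function name, parameter values and the choice of segmentation method are assumptions, not the claimed method.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def propose_regions(image, scale=100, sigma=0.5, min_size=200):
    """Return candidate boxes (x1, y1, x2, y2) derived from a color/texture segmentation."""
    labels = felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)
    boxes = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)          # pixels of one segment
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```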
To achieve the above object, the invention provides the following technical scheme: a region detection correction method based on deep learning model recognition results, comprising a correction system;
The correction system includes an operating device 1; the connection end of the operating device 1 is equipped with a deep convolutional neural network model memory 2; the operating device 1 includes a computer 3; and a deep convolutional neural network model 4 is stored inside the deep convolutional neural network model memory 2;
The method specifically includes the following steps:
S1: first, with the computer 3 in the operating device 1 as the carrier, target recognition is performed on the basis of the deep convolutional neural network model 4 in the deep convolutional neural network model memory 2; that is, the deep convolutional neural network model 4 is used to identify target objects. The target objects to be identified include the vehicle license plate number, the vehicle identification code, the on/off state of the front and rear vehicle lights, the car seat belts, the vehicle body color, the vehicle registration certificate, the motor vehicle license application form, the vehicle safety test report and the motor vehicle exhaust pollutant test report;
S2: next, for the identified objects, i.e. the vehicle license plate number, the vehicle identification code, the on/off state of the front and rear vehicle lights, the car seat belts, the vehicle body color, the vehicle registration certificate, the motor vehicle license application form, the vehicle safety test report and the motor vehicle exhaust pollutant test report, corresponding target region sets are generated one by one;
S3: then, for each generated target region set, all of its possible combinations are computed;
S4: then, the ratio of the candidate frame to the original label frame in each combination, i.e. the IoU, is computed (a minimal IoU sketch is given after this list);
S5: then, each IoU is compared with a set threshold; combinations whose IoU is below the threshold are deleted, and the procedure returns to continue computing the IoU of the remaining combinations;
S6: finally, all combinations are traversed in this way and the most suitable region is obtained and output; the positions where targets are likely to appear in the image are thereby located in advance, ensuring a high recall rate while selecting fewer windows, greatly reducing the time complexity of subsequent operations, and yielding candidate windows of higher quality than those of a sliding window.
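A minimal sketch of the IoU computation referred to in step S4 is given below, assuming regions are axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates; this is the standard intersection-over-union formula rather than code taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)          # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```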
In a preferred embodiment, the connection end of the operating device 1 is additionally provided with a CUDA GPU 5 for accelerated computation.
In a preferred embodiment, the connection end of the operating device 1 is additionally provided with a cloud memory 6 for storing the region detection correction progress in real time.
Technical effects and advantages of the invention:
1. The present invention performs target recognition based on the deep convolutional neural network model in the deep convolutional neural network model memory, computes the IoU between any two regions and selects the most suitable region, i.e. the candidate region. Compared with the prior art, and in view of the problems of sliding windows, the candidate regions use the texture, edge and color information in the image to locate in advance the positions where targets are likely to appear, ensuring a high recall rate while selecting fewer windows, greatly reducing the time complexity of subsequent operations, and yielding candidate windows of higher quality than those of a sliding window; this achieves better detection and recognition of target objects, with higher detection efficiency, more accurate detection results and better robustness, and has important application prospects in practical computer vision applications;
2. A CUDA GPU is provided so that, when the computer in the operating device acts as the carrier and performs target recognition based on the deep convolutional neural network model in the deep convolutional neural network model memory, the CUDA GPU assists the computer with accelerated computation, improving operation efficiency and optimizing resource allocation;
3. A cloud memory is provided so that, when the computer in the operating device acts as the carrier and performs target recognition based on the deep convolutional neural network model in the deep convolutional neural network model memory, the cloud memory receives in real time the operational data generated by the computer during its work and stores it; if an unexpected event causes the computer to power off or crash, the user can read the data in the cloud memory through the computer once it recovers and restore the original progress, avoiding loss of progress and providing higher data security.
Brief description of the drawings
Fig. 1 is a structural diagram of the correction system of the invention.
Fig. 2 is a flow diagram of the invention.
Reference numerals: 1 operating device, 2 deep convolutional neural network model memory, 3 computer, 4 deep convolutional neural network model, 5 CUDA GPU, 6 cloud memory.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Figs. 1 and 2, a region detection correction method based on deep learning model recognition results comprises a correction system;
The correction system includes an operating device 1; the connection end of the operating device 1 is equipped with a deep convolutional neural network model memory 2; the operating device 1 includes a computer 3; and a deep convolutional neural network model 4 is stored inside the deep convolutional neural network model memory 2;
The deep convolutional neural network model 4 is a neural network algorithm used to identify the required target objects. As a kind of neural network, it obtains a more complex network structure by stacking multiple feature extraction layers; the convolutional layers, down-sampling layers, fully connected layers and classifier together constitute the deep neural network structure;
The deep convolutional neural network model 4 specifically involves the following (an illustrative sketch is given after this list):
1. Local perception: in the spatial structure of an image, nearby pixels are closely related while distant pixels are only weakly correlated; each neuron therefore only needs to perceive a local region rather than the whole image;
2. Weight sharing: with local connections, each neuron corresponds to, say, 25 parameters, and there are 10,000 neurons in total; if the 25 parameters of these 10,000 neurons are all equal, the number of parameters becomes just 25. These 25 parameters correspond to a convolution operation and can be regarded as a way of extracting features that is independent of position in the image. In a convolutional neural network the weights and bias of the same convolution kernel are shared: the same kernel convolves the image in a fixed order, so all neurons obtained after convolution use the same kernel on different image regions and share the same connection parameters. Weight sharing therefore reduces the number of parameters of the convolutional neural network;
3. Convolution: features are extracted by convolving the image with kernels, and the convolution process itself reduces the number of parameters. The most important design choices are the kernel size, the stride and the number of kernels: more kernels extract more features but also increase the complexity of the network, making overfitting more likely; the kernel size affects the recognition capability of the network structure; and the stride determines the size of the sampled image regions and the number of features;
4. Pooling: in a convolutional neural network a pooling layer usually follows a convolutional layer and reduces the dimensionality of the feature vectors output by the convolutional layer. Pooling lowers the resolution of the image and reduces the processing dimensionality while retaining its effective information, which reduces the complexity of the subsequent convolutional layers and greatly reduces the network's sensitivity to image rotation and translation. Two pooling methods are commonly used: average pooling, which takes the mean of a local region of the image as the pooled value of that region, and max pooling, which takes the maximum value of the region as the pooled value.
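To make the four points above concrete, the following minimal PyTorch sketch stacks convolution (shared kernels, i.e. weight sharing and local perception), max and average pooling, and a fully connected classifier. The layer sizes, input resolution and number of classes are illustrative assumptions; the patent does not fix a concrete architecture.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal conv -> pool -> fully-connected -> classifier stack; sizes are assumptions."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),  # shared 5x5 kernels: weight sharing, local perception
            nn.ReLU(),
            nn.MaxPool2d(2),                             # max pooling halves the resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AvgPool2d(2),                             # average pooling variant
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 RGB input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: one forward pass on a dummy 64x64 RGB image batch.
logits = SmallCNN()(torch.randn(1, 3, 64, 64))
```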
The method specifically includes the following steps:
S1: first, with the computer 3 in the operating device 1 as the carrier, target recognition is performed on the basis of the deep convolutional neural network model 4 in the deep convolutional neural network model memory 2; that is, the deep convolutional neural network model 4 is used to identify target objects. The target objects to be identified include the vehicle license plate number, the vehicle identification code, the on/off state of the front and rear vehicle lights, the car seat belts, the vehicle body color, the vehicle registration certificate, the motor vehicle license application form, the vehicle safety test report and the motor vehicle exhaust pollutant test report;
S2: next, for the identified objects, i.e. the vehicle license plate number, the vehicle identification code, the on/off state of the front and rear vehicle lights, the car seat belts, the vehicle body color, the vehicle registration certificate, the motor vehicle license application form, the vehicle safety test report and the motor vehicle exhaust pollutant test report, corresponding target region sets are generated one by one;
S3: then, for each generated target region set, all of its possible combinations are computed;
S4: then, the ratio of the candidate frame to the original label frame in each combination, i.e. the IoU, is computed; the IoU is the intersection over union, the overlap rate between the generated candidate frame and the original label frame, that is, the ratio of their intersection to their union;
S5: then, each IoU is compared with a set threshold; combinations whose IoU is below the threshold are deleted, and the procedure returns to continue computing the IoU of the remaining combinations;
S6: finally, all combinations are traversed in this way and the most suitable region is obtained and output (a minimal sketch of steps S3 to S6 follows this list); the positions where targets are likely to appear in the image are thereby located in advance, ensuring a high recall rate while selecting fewer windows, greatly reducing the time complexity of subsequent operations, and yielding candidate windows of higher quality than those of a sliding window.
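The sketch below illustrates steps S3 to S6 under stated assumptions: region sets are lists of (x1, y1, x2, y2) boxes, "combinations" are read as pairwise combinations merged into their enclosing box, and the threshold value is arbitrary. None of these choices is prescribed by the patent.

```python
from itertools import combinations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes (same formula as above)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def merge_boxes(boxes):
    """Smallest box enclosing every box in the combination."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

def select_best_region(region_set, label_frame, threshold=0.5):
    """S3: enumerate pairwise combinations; S4: score each merged candidate by IoU
    against the original label frame; S5: drop combinations below the threshold;
    S6: return the most suitable region."""
    best, best_score = None, -1.0
    for combo in combinations(region_set, 2):
        candidate = merge_boxes(combo)
        score = iou(candidate, label_frame)
        if score < threshold:
            continue
        if score > best_score:
            best, best_score = candidate, score
    return best
```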
In this embodiment, the invention performs target recognition based on the deep convolutional neural network model 4 in the deep convolutional neural network model memory 2, computes the IoU between any two regions and selects the most suitable region, i.e. the candidate region. Compared with the prior art, and in view of the problems of sliding windows, the candidate regions use the texture, edge and color information in the image to locate in advance the positions where targets are likely to appear, ensuring a high recall rate while selecting fewer windows, greatly reducing the time complexity of subsequent operations, and yielding candidate windows of higher quality than those of a sliding window. Better detection and recognition of target objects is thereby achieved: detection is more efficient, results are more accurate, robustness is better, and the method has important application prospects in practical computer vision applications.
As shown in Fig. 1, the region detection correction method based on deep learning model recognition results further includes a CUDA GPU 5, the connection end of which is connected to the operating device 1;
The CUDA GPU 5 is a GPU that supports CUDA. CUDA is a general-purpose parallel computing architecture that enables the GPU to solve complex computational problems; it comprises the CUDA instruction set architecture and the parallel compute engines inside the GPU.
In a specific embodiment: when the computer 3 in the operating device 1, acting as the carrier, performs target recognition based on the deep convolutional neural network model 4 in the deep convolutional neural network model memory 2, the CUDA GPU 5 assists the computer 3 with accelerated computation, thereby improving operation efficiency and optimizing resource allocation.
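As a minimal illustration of the GPU-assisted computation described here (the patent does not prescribe a framework), the PyTorch snippet below moves a stand-in model and its input to a CUDA device when one is available; the model definition and input size are assumptions.

```python
import torch
import torch.nn as nn

# Run inference on a CUDA GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(                       # stand-in for the stored deep CNN model
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
).to(device).eval()                          # move the weights to the GPU

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224, device=device)  # dummy input on the same device
    scores = model(image)                                # accelerated forward pass
```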
As shown in Fig. 1, the region detection correction method based on deep learning model recognition results further includes a cloud memory 6, the connection end of which is connected to the operating device 1;
The cloud memory 6 is a server used for cloud storage. Cloud storage is a form of online storage in which data is kept on multiple virtual servers, usually hosted by a third party, rather than on dedicated servers. Data center operators provision virtualized storage resources at the back end according to customer demand and offer them as a storage resource pool, which customers can use to store files or objects on their own.
In a specific embodiment: when the computer 3 in the operating device 1, acting as the carrier, performs target recognition based on the deep convolutional neural network model 4 in the deep convolutional neural network model memory 2, the cloud memory 6 receives in real time the operational data generated by the computer 3 during its work and stores it. If an unexpected event causes the computer 3 to power off or crash, the user can read the data in the cloud memory 6 through the computer 3 once it has recovered and restore the original progress, avoiding loss of progress and providing higher data security.
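A minimal sketch of this progress-checkpointing idea follows; the JSON layout, the file path and the assumption that the path is backed by cloud-synchronized storage are illustrative only, since the patent does not specify a cloud API.

```python
import json
from pathlib import Path

# Assumed to be a mount point kept in sync with the cloud memory; not specified by the patent.
CHECKPOINT = Path("/mnt/cloud_memory/region_correction_progress.json")

def save_progress(processed_ids, results):
    """Persist the current correction progress so it survives a power-off or crash."""
    CHECKPOINT.write_text(json.dumps({"processed": processed_ids, "results": results}))

def restore_progress():
    """Read the last saved progress, or start fresh if nothing was saved yet."""
    if CHECKPOINT.exists():
        state = json.loads(CHECKPOINT.read_text())
        return state["processed"], state["results"]
    return [], {}
```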
Finally: the above are only preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (3)
1. A region detection correction method based on deep learning model recognition results, characterized in that it comprises a correction system;
the correction system includes an operating device (1), the connection end of the operating device (1) is equipped with a deep convolutional neural network model memory (2), the operating device (1) includes a computer (3), and a deep convolutional neural network model (4) is stored inside the deep convolutional neural network model memory (2);
the method specifically includes the following steps:
S1: first, with the computer (3) in the operating device (1) as the carrier, performing target recognition based on the deep convolutional neural network model (4) in the deep convolutional neural network model memory (2), i.e. using the deep convolutional neural network model (4) to identify target objects, the target objects to be identified including the vehicle license plate number, the vehicle identification code, the on/off state of the front and rear vehicle lights, the car seat belts, the vehicle body color, the vehicle registration certificate, the motor vehicle license application form, the vehicle safety test report and the motor vehicle exhaust pollutant test report;
S2: next, for the identified objects, i.e. the vehicle license plate number, the vehicle identification code, the on/off state of the front and rear vehicle lights, the car seat belts, the vehicle body color, the vehicle registration certificate, the motor vehicle license application form, the vehicle safety test report and the motor vehicle exhaust pollutant test report, generating corresponding target region sets one by one;
S3: then, for each generated target region set, computing all of its possible combinations;
S4: then, computing the ratio of the candidate frame to the original label frame in each combination, i.e. the IoU;
S5: then, comparing each IoU with a set threshold, deleting the combinations whose IoU is below the threshold, and returning to continue computing the IoU of the remaining combinations;
S6: finally, traversing all combinations in this way, obtaining and outputting the most suitable region, thereby locating in advance the positions where targets are likely to appear in the image, ensuring a high recall rate while selecting fewer windows, greatly reducing the time complexity of subsequent operations, and yielding candidate windows of higher quality than those of a sliding window.
2. The region detection correction method based on deep learning model recognition results according to claim 1, characterized in that: the connection end of the operating device (1) is additionally provided with a CUDA GPU (5) for accelerated computation.
3. The region detection correction method based on deep learning model recognition results according to claim 1, characterized in that: the connection end of the operating device (1) is additionally provided with a cloud memory (6) for storing the region detection correction progress in real time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910359641.9A CN110110722A (en) | 2019-04-30 | 2019-04-30 | Region detection correction method based on deep learning model recognition results
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910359641.9A CN110110722A (en) | 2019-04-30 | 2019-04-30 | Region detection correction method based on deep learning model recognition results
Publications (1)
Publication Number | Publication Date |
---|---|
CN110110722A true CN110110722A (en) | 2019-08-09 |
Family
ID=67487755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910359641.9A (CN110110722A, pending) | Region detection correction method based on deep learning model recognition results | 2019-04-30 | 2019-04-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110110722A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112927231A (en) * | 2021-05-12 | 2021-06-08 | 深圳市安软科技股份有限公司 | Training method of vehicle body dirt detection model, vehicle body dirt detection method and device |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127161A (en) * | 2016-06-29 | 2016-11-16 | 深圳市格视智能科技有限公司 | Fast target detection method based on cascade multilayer detector |
CN106250812A (en) * | 2016-07-15 | 2016-12-21 | 汤平 | A kind of model recognizing method based on quick R CNN deep neural network |
CN107169421A (en) * | 2017-04-20 | 2017-09-15 | 华南理工大学 | A kind of car steering scene objects detection method based on depth convolutional neural networks |
CN107292886A (en) * | 2017-08-11 | 2017-10-24 | 厦门市美亚柏科信息股份有限公司 | Object intrusion detection method and device based on mesh generation and neutral net |
CN107316058A (en) * | 2017-06-15 | 2017-11-03 | 国家新闻出版广电总局广播科学研究院 | Improve the method for target detection performance by improving target classification and positional accuracy |
CN107368845A (en) * | 2017-06-15 | 2017-11-21 | 华南理工大学 | A kind of Faster R CNN object detection methods based on optimization candidate region |
CN108564084A (en) * | 2018-05-08 | 2018-09-21 | 北京市商汤科技开发有限公司 | character detecting method, device, terminal and storage medium |
CN108597009A (en) * | 2018-04-10 | 2018-09-28 | 上海工程技术大学 | A method of objective detection is carried out based on direction angle information |
CN108764228A (en) * | 2018-05-28 | 2018-11-06 | 嘉兴善索智能科技有限公司 | Word object detection method in a kind of image |
CN108846446A (en) * | 2018-07-04 | 2018-11-20 | 国家新闻出版广电总局广播科学研究院 | The object detection method of full convolutional network is merged based on multipath dense feature |
CN109284704A (en) * | 2018-09-07 | 2019-01-29 | 中国电子科技集团公司第三十八研究所 | Complex background SAR vehicle target detection method based on CNN |
CN109460765A (en) * | 2018-09-25 | 2019-03-12 | 平安科技(深圳)有限公司 | Driving license is taken pictures recognition methods, device and the electronic equipment of image in natural scene |
CN109671060A (en) * | 2018-12-06 | 2019-04-23 | 西安电子科技大学 | Area of computer aided breast lump detection method based on selective search and CNN |
- 2019
- 2019-04-30: application CN201910359641.9A filed in China (CN); published as CN110110722A; status: Pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127161A (en) * | 2016-06-29 | 2016-11-16 | 深圳市格视智能科技有限公司 | Fast target detection method based on cascade multilayer detector |
CN106250812A (en) * | 2016-07-15 | 2016-12-21 | 汤平 | A kind of model recognizing method based on quick R CNN deep neural network |
CN107169421A (en) * | 2017-04-20 | 2017-09-15 | 华南理工大学 | A kind of car steering scene objects detection method based on depth convolutional neural networks |
CN107316058A (en) * | 2017-06-15 | 2017-11-03 | 国家新闻出版广电总局广播科学研究院 | Improve the method for target detection performance by improving target classification and positional accuracy |
CN107368845A (en) * | 2017-06-15 | 2017-11-21 | 华南理工大学 | A kind of Faster R CNN object detection methods based on optimization candidate region |
CN107292886A (en) * | 2017-08-11 | 2017-10-24 | 厦门市美亚柏科信息股份有限公司 | Object intrusion detection method and device based on mesh generation and neutral net |
CN108597009A (en) * | 2018-04-10 | 2018-09-28 | 上海工程技术大学 | A method of objective detection is carried out based on direction angle information |
CN108564084A (en) * | 2018-05-08 | 2018-09-21 | 北京市商汤科技开发有限公司 | character detecting method, device, terminal and storage medium |
CN108764228A (en) * | 2018-05-28 | 2018-11-06 | 嘉兴善索智能科技有限公司 | Word object detection method in a kind of image |
CN108846446A (en) * | 2018-07-04 | 2018-11-20 | 国家新闻出版广电总局广播科学研究院 | The object detection method of full convolutional network is merged based on multipath dense feature |
CN109284704A (en) * | 2018-09-07 | 2019-01-29 | 中国电子科技集团公司第三十八研究所 | Complex background SAR vehicle target detection method based on CNN |
CN109460765A (en) * | 2018-09-25 | 2019-03-12 | 平安科技(深圳)有限公司 | Driving license is taken pictures recognition methods, device and the electronic equipment of image in natural scene |
CN109671060A (en) * | 2018-12-06 | 2019-04-23 | 西安电子科技大学 | Area of computer aided breast lump detection method based on selective search and CNN |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112927231A (en) * | 2021-05-12 | 2021-06-08 | 深圳市安软科技股份有限公司 | Training method of vehicle body dirt detection model, vehicle body dirt detection method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Pereira et al. | A deep learning-based approach for road pothole detection in timor leste | |
Wang et al. | Orientation invariant feature embedding and spatial temporal regularization for vehicle re-identification | |
Rahmouni et al. | Distinguishing computer graphics from natural images using convolution neural networks | |
CN107423700B (en) | Method and device for verifying testimony of a witness | |
Hoang et al. | Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning | |
CN109858429B (en) | Eye ground image lesion degree identification and visualization system based on convolutional neural network | |
Ming et al. | Vehicle detection using tail light segmentation | |
CN105404886A (en) | Feature model generating method and feature model generating device | |
CN111696196B (en) | Three-dimensional face model reconstruction method and device | |
CN111832461A (en) | Non-motor vehicle riding personnel helmet wearing detection method based on video stream | |
CN112686191B (en) | Living body anti-counterfeiting method, system, terminal and medium based on three-dimensional information of human face | |
Cai et al. | Vehicle Detection Based on Deep Dual‐Vehicle Deformable Part Models | |
CN116340887A (en) | Multi-mode false news detection method and system | |
CN115408710A (en) | Image desensitization method and related device | |
CN110110722A (en) | Region detection correction method based on deep learning model recognition results | |
CN114492634A (en) | Fine-grained equipment image classification and identification method and system | |
Kim et al. | Facial landmark extraction scheme based on semantic segmentation | |
CN116664873B (en) | Image information processing method, device and storage medium | |
CN110210561B (en) | Neural network training method, target detection method and device, and storage medium | |
Joshi et al. | Real-time object detection and identification for visually challenged people using mobile platform | |
Ponsa et al. | Cascade of classifiers for vehicle detection | |
Fernandes et al. | Directed adversarial attacks on fingerprints using attributions | |
CN116797789A (en) | Scene semantic segmentation method based on attention architecture | |
CN116935249A (en) | Small target detection method for three-dimensional feature enhancement under unmanned airport scene | |
Arróspide et al. | Region-dependent vehicle classification using PCA features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2019-08-09 |