CN106095830A - Image geo-positioning system and method based on a convolutional neural network - Google Patents
Image geo-positioning system and method based on a convolutional neural network
- Publication number
- CN106095830A (application number CN201610382449.8A)
- Authority
- CN
- China
- Prior art keywords
- module
- information
- pixel
- geographical
- database
- Prior art date
- 2016-05-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06F16/783 Information retrieval of video data; retrieval characterised by metadata automatically derived from the content
- G06F16/583 Information retrieval of still image data; retrieval characterised by metadata automatically derived from the content
- G06F18/24 Pattern recognition; classification techniques
- G06N3/045 Neural networks; combinations of networks
- G06N3/08 Neural networks; learning methods
- G06V20/10 Scenes; scene-specific elements; terrestrial scenes
Abstract
The invention provides an image geo-positioning system and method based on a convolutional neural network, relating to the field of satellite positioning. The system comprises: a video upload module, a picture upload module, a picture frame extraction module, a geotag detection module, a decomposition module, a feature extraction module, an activation function, a pooling module, a fully connected module, a classifier, an analysis and judgment module, a learning module, an area database, a pixel database and a result display module. The system and method analyse images quickly, recognise images accurately, can locate video information, and have the ability to learn.
Description
Technical field
The present invention relates to the field of satellite positioning, and in particular to an image geo-positioning system and method based on a convolutional neural network.
Background art
When locating an image, if the scenery in the photograph is a well-known tourist attraction or landmark, the place can be recognised at once from the distinctive scene itself. Photographs of ordinary places are usually located by adding a geotag to the photo. For example, many Android phones now offer a "location" option in the camera's shooting menu; if the option is enabled before shooting, the photo automatically records the geographical position of the shooting location. When such a photo is later viewed on a computer, switching to "Details" shows the actual shooting position under the GPS entry. The position is normally marked with GPS latitude and longitude, although cellular base stations, Wi-Fi and similar sources can also be used. Digital cameras provide a similar positioning function, so a photo can easily be located through the geotag embedded in its Exif information. However, many photographs taken in practice, or already processed online, carry no geographical information at all, and locating such photographs is particularly troublesome.
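As background illustration only (not part of the patented method), the following minimal Python sketch shows how a photo's Exif geotag can be read with the Pillow library; it assumes a JPEG input and the standard Exif GPS fields.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_gps(path):
    """Return the GPS IFD of a JPEG as a dict of named fields, or None if the photo has no geotag."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if gps_raw is None:
        return None  # no geographical information embedded in the photo
    return {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
```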
Existing image positioning systems mainly suffer from the following defects:
1. Insufficient image recognition accuracy: most existing systems simply store a large number of samples in a database, while different images have widely varying features. Comparing samples directly therefore gives very poor matching results and very low accuracy.
2. No learning ability: existing image positioning lacks the capacity to learn during actual use. Whatever algorithm and decision method is adopted, positioning deviations inevitably occur, and if the system cannot keep improving through learning, the whole positioning system stagnates.
3. Slow image analysis: existing systems mostly adopt traditional image analysis algorithms that follow the conventional decompose-then-deep-analyse pattern. Locating a single picture is therefore very slow, and because of the limitations of those algorithms the result is often inaccurate.
4. No positioning of video information: existing image positioning systems provide no function or means for locating video information.
Summary of the invention
To address the above defects, the invention provides an image geo-positioning system and method based on a convolutional neural network. The system and method analyse images quickly, recognise images accurately, can locate video information, and have the ability to learn.
The technical solution used in the present invention is as follows:
An image geo-positioning system based on a convolutional neural network, characterised in that the system comprises: a video upload module, a picture upload module, a picture frame extraction module, a geotag detection module, a decomposition module, a feature extraction module, an activation function, a pooling module, a fully connected module, a classifier, an analysis and judgment module, a learning module, an area database, a pixel database and a result display module;
The video upload module is signal-connected to the picture frame extraction module. The picture frame extraction module is signal-connected to the video upload module and the geotag detection module. The geotag detection module is signal-connected to the picture frame extraction module, the analysis and judgment module, the picture upload module, the decomposition module and the feature extraction module. The feature extraction module is signal-connected to the geotag detection module, the activation function and the pooling module. The pooling module is signal-connected to the feature extraction module, the activation function and the fully connected module. The fully connected module is signal-connected to the classifier and the pooling module. The classifier is signal-connected to the fully connected module and the analysis and judgment module. The analysis and judgment module is signal-connected to the decomposition module, the geotag detection module, the classifier, the result display module and the learning module. The learning module is signal-connected to the analysis and judgment module, the pixel database and the area database.
The video upload module uploads video information and sends it to the picture frame extraction module. The picture upload module uploads picture information and sends it to the geotag detection module. The picture frame extraction module decodes the video information, intercepts the video's geotag, obtains the complete picture frames in the video information, and sends the geotag and the picture frames to the geotag detection module. The geotag detection module judges whether the uploaded video information and picture information carry a geotag: if a geotag is present, the picture information is sent directly to the analysis and judgment module; if no geotag is present, the picture is sent to the decomposition module and the feature extraction module.
The decomposition module decomposes the received picture information into pixel information and sends the decomposed pixel information to the analysis and judgment module. The feature extraction module performs feature extraction on the received picture and sends the extracted feature information to the activation function. The activation function connects the feature extraction module and the pooling module. The pooling module processes the received feature information, reducing the feature vectors output by the feature extraction module while improving the result. The fully connected module connects the classifier with the final feature information. The classifier performs classification according to the feature information.
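The feature extraction, activation, pooling, fully connected and classifier modules together form a conventional convolutional network. The sketch below is one plausible arrangement in PyTorch; the layer sizes, input resolution and number of candidate regions are illustrative assumptions, not values specified by the patent.

```python
import torch
import torch.nn as nn

class GeoCNN(nn.Module):
    """Illustrative convolutional network for region classification (hypothetical dimensions)."""
    def __init__(self, num_regions=30000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # feature extraction module
            nn.ReLU(),                                     # activation function
            nn.MaxPool2d(2),                               # pooling module
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 56 * 56, num_regions)  # fully connected module + classifier

    def forward(self, x):                                  # x: (batch, 3, 224, 224)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```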
The pixel database stores sample pixel information; the sample pixel information consists of the pixels obtained by decomposing 150 million picture resources that contain GPS geographical position information.
The area database stores small-area labels. The small-area labels are produced as follows: the 30,000 most densely populated regions in the world are screened out and decomposed into 30,000 square areas of varying size, and a distinctly labelled data message is attached to each square area.
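One way to read the area database described above is as one record per square area. The sketch below is hypothetical; the field names and the bounding-box representation are not specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class SmallArea:
    """One small-area label: a labelled square region described by a bounding box."""
    label: str
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        # True when a GPS position falls inside this square area
        return self.lat_min <= lat <= self.lat_max and self.lon_min <= lon <= self.lon_max
```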
The learning method of the learning module comprises the following steps (a sketch follows the list):
Step 1: compare the GPS geographical position information corresponding to the sample pixel information with the small-area labels in the area database;
Step 2: associate each matched pair of sample pixel information and small-area label;
Step 3: for sample pixel information that matches no small-area label in the area database, create a new small-area label and associate it with that information.
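A minimal sketch of the three learning steps, assuming hypothetical `find_area`, `create_area` and `associate` operations on the area database (these interfaces are illustrative, not taken from the patent):

```python
def learn_associations(sample_pixels, area_db):
    """Associate each sample's GPS position with a small-area label, creating new labels as needed."""
    for sample in sample_pixels:
        area = area_db.find_area(sample["gps"])        # step 1: compare the GPS position with area labels
        if area is None:
            area = area_db.create_area(sample["gps"])  # step 3: no match, so create a new small-area label
        area_db.associate(sample, area)                # step 2: link the sample and the small-area label
    return area_db
```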
The analysis and judgment module judges the geographical position where the picture or video was shot according to the information it receives. The method by which the analysis and judgment module makes this judgment comprises the following steps (sketched after the list):
Step 1: if a geotag sent directly by the geotag detection module is received, the shooting location of the picture or video is judged directly from the geotag information;
Step 2: if pixel information sent by the decomposition module is received, the pixel information is sent to the learning module, which derives the pixel geographical position of that pixel information from the associations between the pixel database and the area database; this position is stored temporarily;
Step 3: the image feature information sent by the classifier is likewise sent to the learning module, which derives the convolution geographical position of that feature information from the associations between the pixel database and the area database;
Step 4: the convolution geographical position is compared with the pixel geographical position; if they differ, both positions are sent as results and displayed simultaneously on the result display module;
Step 5: the user can decide on the result display module which result is accurate and feed that result back to the learning module;
Step 6: according to the correct result fed back, the learning module adjusts the associations between the pixel database and the area database.
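The decision flow of the six steps above can be sketched as follows; the `learner` and `display` objects are hypothetical placeholders for the learning module and the result display module.

```python
def judge_location(geo_tag, pixel_info, feature_info, learner, display):
    """Sketch of the analysis and judgment flow described above."""
    if geo_tag is not None:                               # step 1: an existing geotag decides directly
        return display.show(geo_tag)
    pixel_pos = learner.locate_by_pixels(pixel_info)      # step 2: pixel geographical position
    conv_pos = learner.locate_by_features(feature_info)   # step 3: convolution geographical position
    if pixel_pos != conv_pos:                             # step 4: show both results when they differ
        choice = display.show_both(pixel_pos, conv_pos)   # step 5: the user picks the accurate one
        learner.reinforce(choice)                         # step 6: adjust the database associations
        return choice
    return display.show(pixel_pos)
```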
The classification method of the classifier comprises the following steps (one possible reading of the formulas follows the list):
Step 1: set the hypothesis function;
Step 2: derive the cost function;
Step 3: use the hypothesis function to compute a probability value for each class of the feature information;
Step 4: perform classification according to the computed probabilities.
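The formulas themselves are left blank in this text. Under the assumption (not confirmed by the patent) that the classifier is a standard softmax regression over $k$ region classes with parameters $\theta_1,\dots,\theta_k$ and $m$ training samples, the three quantities would read:

$$
h_\theta(x) = \frac{1}{\sum_{j=1}^{k} e^{\theta_j^{\top} x}}
\begin{bmatrix} e^{\theta_1^{\top} x} \\ \vdots \\ e^{\theta_k^{\top} x} \end{bmatrix},
\qquad
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}
\log \frac{e^{\theta_j^{\top} x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^{\top} x^{(i)}}},
\qquad
p\bigl(y=j \mid x;\theta\bigr) = \frac{e^{\theta_j^{\top} x}}{\sum_{l=1}^{k} e^{\theta_l^{\top} x}}
$$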
With the above technical scheme, the present invention produces the following beneficial effects:
1. High image recognition accuracy: a convolutional neural network is used for image recognition while traditional image processing techniques are also applied, and the two results are matched against each other, which maximises the accuracy and precision of image recognition.
2. Learning ability: the user can evaluate every output result; according to the evaluations, the whole system learns and improves, so that accuracy and precision are higher the next time a similar situation is encountered.
3. Fast image analysis: image information that already carries a geotag can be located directly, eliminating unnecessary processing. For pictures without a geotag, a convolutional neural network is used, which is faster than traditional image processing.
4. Positioning of video information: in addition, frames can be extracted from the video information and analysed as images to derive the shooting location of the video (a frame-extraction sketch follows this list).
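A minimal sketch of the frame-extraction step, assuming OpenCV for decoding; the sampling interval is an illustrative choice, not a value given by the patent.

```python
import cv2

def extract_frames(video_path, every_n=30):
    """Decode a video and keep every n-th frame for the downstream geo-positioning pipeline."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:               # end of stream or decode failure
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```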
Brief description of the drawings
Fig. 1 shows the image geo-positioning system and method based on a convolutional neural network in an embodiment of the present invention.
Detailed description of the invention
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any appended claims and the abstract) may, unless stated otherwise, be replaced by an alternative feature serving the same, an equivalent or a similar purpose. That is, unless stated otherwise, each feature is only one example of a series of equivalent or similar features.
Embodiment 1 of the present invention provides an image geo-positioning system based on a convolutional neural network, with the system structure shown in Fig. 1:
An intelligent medical ultrasonic image processing device, characterised in that the device comprises: an image acquisition device, an image receiving device, an image classification device, a general image processing device, a first algorithm database, an analysis and judgment device, a display device, an ultrasonic image processing device and a second database;
The image acquisition device is signal-connected to the image receiving device. The image receiving device is signal-connected to the image classification device. The image classification device is signal-connected to the general image processing device. The general image processing device is signal-connected to the ultrasonic image processing device, the image classification device, the first algorithm database and the display device. The first algorithm database is signal-connected to the general image processing device and the analysis and judgment device. The analysis and judgment device is signal-connected to the display device, the first algorithm database and the second algorithm database. The ultrasonic image processing device is signal-connected to the general image processing device, the display device and the second algorithm database. The second algorithm database is signal-connected to the ultrasonic image processing device and the analysis and judgment device.
The ultrasonic image processing device performs ultrasonic image processing on images already processed by the general image processing device; it comprises an ultrasonic image processor and an ultrasonic image processing system signal-connected to the processor. The general image processing device processes all images sent to it: if the image is an ultrasonic image, the processed result is sent to the ultrasonic image processing device for further processing; if it is an ordinary image, the processed image is sent to the display device for display; it comprises a general image processor and an ordinary image processing system signal-connected to the processor.
The image acquisition device comprises an ultrasonic image acquisition device and a general image acquisition device. The image receiving device receives the image information sent from the image acquisition device and sends it to the image classification device. The image classification device classifies the image information according to the different image acquisition devices and sends the classification result to the general image processing device.
The display device displays the images processed by the general image processing device and the ultrasonic image processing device. The analysis and judgment device is used for manually judging whether the images shown on the display device are accurate and, according to the judgment result, adjusting the priority of the algorithms stored in the first algorithm database and the second algorithm database.
Embodiment 2 of the present invention provides an image geo-positioning system based on a convolutional neural network, with the system structure shown in Fig. 1:
An image geo-positioning system based on a convolutional neural network, characterised in that the system comprises: a video upload module, a picture upload module, a picture frame extraction module, a geotag detection module, a decomposition module, a feature extraction module, an activation function, a pooling module, a fully connected module, a classifier, an analysis and judgment module, a learning module, an area database, a pixel database and a result display module;
The video upload module is signal-connected to the picture frame extraction module. The picture frame extraction module is signal-connected to the video upload module and the geotag detection module. The geotag detection module is signal-connected to the picture frame extraction module, the analysis and judgment module, the picture upload module, the decomposition module and the feature extraction module. The feature extraction module is signal-connected to the geotag detection module, the activation function and the pooling module. The pooling module is signal-connected to the feature extraction module, the activation function and the fully connected module. The fully connected module is signal-connected to the classifier and the pooling module. The classifier is signal-connected to the fully connected module and the analysis and judgment module. The analysis and judgment module is signal-connected to the decomposition module, the geotag detection module, the classifier, the result display module and the learning module. The learning module is signal-connected to the analysis and judgment module, the pixel database and the area database.
The video upload module uploads video information and sends it to the picture frame extraction module. The picture upload module uploads picture information and sends it to the geotag detection module. The picture frame extraction module decodes the video information, intercepts the video's geotag, obtains the complete picture frames in the video information, and sends the geotag and the picture frames to the geotag detection module. The geotag detection module judges whether the uploaded video information and picture information carry a geotag: if a geotag is present, the picture information is sent directly to the analysis and judgment module; if no geotag is present, the picture is sent to the decomposition module and the feature extraction module.
The decomposition module decomposes the received picture information into pixel information and sends the decomposed pixel information to the analysis and judgment module. The feature extraction module performs feature extraction on the received picture and sends the extracted feature information to the activation function. The activation function connects the feature extraction module and the pooling module. The pooling module processes the received feature information, reducing the feature vectors output by the feature extraction module while improving the result. The fully connected module connects the classifier with the final feature information. The classifier performs classification according to the feature information.
The pixel database stores sample pixel information; the sample pixel information consists of the pixels obtained by decomposing 150 million picture resources that contain GPS geographical position information.
The area database stores small-area labels. The small-area labels are produced as follows: the 30,000 most densely populated regions in the world are screened out and decomposed into 30,000 square areas of varying size, and a distinctly labelled data message is attached to each square area.
The learning method of the learning module comprises the following steps:
Step 1: compare the GPS geographical position information corresponding to the sample pixel information with the small-area labels in the area database;
Step 2: associate each matched pair of sample pixel information and small-area label;
Step 3: for sample pixel information that matches no small-area label in the area database, create a new small-area label and associate it with that information.
The analysis and judgment module judges the geographical position where the picture or video was shot according to the information it receives. The method by which the analysis and judgment module makes this judgment comprises the following steps:
Step 1: if a geotag sent directly by the geotag detection module is received, the shooting location of the picture or video is judged directly from the geotag information;
Step 2: if pixel information sent by the decomposition module is received, the pixel information is sent to the learning module, which derives the pixel geographical position of that pixel information from the associations between the pixel database and the area database; this position is stored temporarily;
Step 3: the image feature information sent by the classifier is likewise sent to the learning module, which derives the convolution geographical position of that feature information from the associations between the pixel database and the area database;
Step 4: the convolution geographical position is compared with the pixel geographical position; if they differ, both positions are sent as results and displayed simultaneously on the result display module;
Step 5: the user can decide on the result display module which result is accurate and feed that result back to the learning module;
Step 6: according to the correct result fed back, the learning module adjusts the associations between the pixel database and the area database.
The classification method of the classifier comprises the following steps:
Step 1: set the hypothesis function;
Step 2: derive the cost function;
Step 3: use the hypothesis function to compute a probability value for each class of the feature information;
Step 4: perform classification according to the computed probabilities.
Embodiment 3 of the present invention provides an image geo-positioning system based on a convolutional neural network, with the system structure shown in Fig. 1:
An image geo-positioning system based on a convolutional neural network, characterised in that the system comprises: a video upload module, a picture upload module, a picture frame extraction module, a geotag detection module, a decomposition module, a feature extraction module, an activation function, a pooling module, a fully connected module, a classifier, an analysis and judgment module, a learning module, an area database, a pixel database and a result display module;
The video upload module is signal-connected to the picture frame extraction module. The picture frame extraction module is signal-connected to the video upload module and the geotag detection module. The geotag detection module is signal-connected to the picture frame extraction module, the analysis and judgment module, the picture upload module, the decomposition module and the feature extraction module. The feature extraction module is signal-connected to the geotag detection module, the activation function and the pooling module. The pooling module is signal-connected to the feature extraction module, the activation function and the fully connected module. The fully connected module is signal-connected to the classifier and the pooling module. The classifier is signal-connected to the fully connected module and the analysis and judgment module. The analysis and judgment module is signal-connected to the decomposition module, the geotag detection module, the classifier, the result display module and the learning module. The learning module is signal-connected to the analysis and judgment module, the pixel database and the area database.
The video upload module uploads video information and sends it to the picture frame extraction module. The picture upload module uploads picture information and sends it to the geotag detection module. The picture frame extraction module decodes the video information, intercepts the video's geotag, obtains the complete picture frames in the video information, and sends the geotag and the picture frames to the geotag detection module. The geotag detection module judges whether the uploaded video information and picture information carry a geotag: if a geotag is present, the picture information is sent directly to the analysis and judgment module; if no geotag is present, the picture is sent to the decomposition module and the feature extraction module.
The decomposition module decomposes the received picture information into pixel information and sends the decomposed pixel information to the analysis and judgment module. The feature extraction module performs feature extraction on the received picture and sends the extracted feature information to the activation function. The activation function connects the feature extraction module and the pooling module. The pooling module processes the received feature information, reducing the feature vectors output by the feature extraction module while improving the result. The fully connected module connects the classifier with the final feature information. The classifier performs classification according to the feature information.
The pixel database stores sample pixel information; the sample pixel information consists of the pixels obtained by decomposing 150 million picture resources that contain GPS geographical position information.
The area database stores small-area labels. The small-area labels are produced as follows: the 30,000 most densely populated regions in the world are screened out and decomposed into 30,000 square areas of varying size, and a distinctly labelled data message is attached to each square area.
The learning method of the learning module comprises the following steps:
Step 1: compare the GPS geographical position information corresponding to the sample pixel information with the small-area labels in the area database;
Step 2: associate each matched pair of sample pixel information and small-area label;
Step 3: for sample pixel information that matches no small-area label in the area database, create a new small-area label and associate it with that information.
The analysis and judgment module judges the geographical position where the picture or video was shot according to the information it receives. The method by which the analysis and judgment module makes this judgment comprises the following steps:
Step 1: if a geotag sent directly by the geotag detection module is received, the shooting location of the picture or video is judged directly from the geotag information;
Step 2: if pixel information sent by the decomposition module is received, the pixel information is sent to the learning module, which derives the pixel geographical position of that pixel information from the associations between the pixel database and the area database; this position is stored temporarily;
Step 3: the image feature information sent by the classifier is likewise sent to the learning module, which derives the convolution geographical position of that feature information from the associations between the pixel database and the area database;
Step 4: the convolution geographical position is compared with the pixel geographical position; if they differ, both positions are sent as results and displayed simultaneously on the result display module;
Step 5: the user can decide on the result display module which result is accurate and feed that result back to the learning module;
Step 6: according to the correct result fed back, the learning module adjusts the associations between the pixel database and the area database.
The classification method of the classifier comprises the following steps:
Step 1: set the hypothesis function;
Step 2: derive the cost function;
Step 3: use the hypothesis function to compute a probability value for each class of the feature information.
A convolutional neural network is used for image recognition while traditional image processing techniques are also applied, and the two results are matched against each other, which maximises the accuracy and precision of image recognition.
The user can evaluate every output result; according to the evaluations, the whole system learns and improves, so that accuracy and precision are higher the next time a similar situation is encountered.
Image information that already carries a geotag can be located directly, eliminating unnecessary processing. For pictures without a geotag, a convolutional neural network is used, which is faster than traditional image processing.
In addition to this, frames can be extracted from the video information and analysed as images to derive the shooting location of the video.
The invention is not limited to the foregoing detailed description. The invention extends to any new feature, or any new combination of features, disclosed in this specification, and to any new method or process step, or any new combination of steps, disclosed herein.
Claims (7)
1. An image geo-positioning system based on a convolutional neural network, characterised in that the system comprises: a video upload module, a picture upload module, a picture frame extraction module, a geotag detection module, a decomposition module, a feature extraction module, an activation function, a pooling module, a fully connected module, a classifier, an analysis and judgment module, a learning module, an area database, a pixel database and a result display module;
the video upload module is signal-connected to the picture frame extraction module; the picture frame extraction module is signal-connected to the video upload module and the geotag detection module; the geotag detection module is signal-connected to the picture frame extraction module, the analysis and judgment module, the picture upload module, the decomposition module and the feature extraction module; the feature extraction module is signal-connected to the geotag detection module, the activation function and the pooling module; the pooling module is signal-connected to the feature extraction module, the activation function and the fully connected module; the fully connected module is signal-connected to the classifier and the pooling module; the classifier is signal-connected to the fully connected module and the analysis and judgment module; the analysis and judgment module is signal-connected to the decomposition module, the geotag detection module, the classifier, the result display module and the learning module; the learning module is signal-connected to the analysis and judgment module, the pixel database and the area database.
2. The image geo-positioning system based on a convolutional neural network of claim 1, characterised in that the video upload module uploads video information and sends it to the picture frame extraction module; the picture upload module uploads picture information and sends it to the geotag detection module; the picture frame extraction module decodes the video information, intercepts the video's geotag, obtains the complete picture frames in the video information, and sends the geotag and the picture frames to the geotag detection module; the geotag detection module judges whether the uploaded video information and picture information carry a geotag, sends the picture information directly to the analysis and judgment module if a geotag is present, and sends the picture to the decomposition module and the feature extraction module if no geotag is present.
3. The image geo-positioning system based on a convolutional neural network of claim 2, characterised in that the decomposition module decomposes the received picture information into pixel information and sends the decomposed pixel information to the analysis and judgment module; the feature extraction module performs feature extraction on the received picture and sends the extracted feature information to the activation function; the activation function connects the feature extraction module and the pooling module; the pooling module processes the received feature information, reducing the feature vectors output by the feature extraction module while improving the result; the fully connected module connects the classifier with the final feature information; and the classifier performs classification according to the feature information.
4. The image geo-positioning system based on a convolutional neural network of claim 3, characterised in that the pixel database stores sample pixel information, the sample pixel information consisting of the pixels obtained by decomposing 150 million picture resources that contain GPS geographical position information;
the area database stores small-area labels, the small-area labels being produced by screening out the 30,000 most densely populated regions in the world, decomposing them into 30,000 square areas of varying size, and attaching a distinctly labelled data message to each square area.
5. The image geo-positioning system based on a convolutional neural network of claim 3, characterised in that the learning method of the learning module comprises the following steps:
step 1: comparing the GPS geographical position information corresponding to the sample pixel information with the small-area labels in the area database;
step 2: associating each matched pair of sample pixel information and small-area label;
step 3: for sample pixel information that matches no small-area label in the area database, creating a new small-area label and associating it with that information.
6. The image geo-positioning system based on a convolutional neural network of claim 5, characterised in that the analysis and judgment module judges the geographical position where the picture or video was shot according to the information it receives, and the method by which the analysis and judgment module makes this judgment comprises the following steps:
step 1: if a geotag sent directly by the geotag detection module is received, judging the shooting location of the picture or video directly from the geotag information;
step 2: if pixel information sent by the decomposition module is received, sending the pixel information to the learning module, which derives the pixel geographical position of that pixel information from the associations between the pixel database and the area database, and storing this position temporarily;
step 3: likewise sending the image feature information from the classifier to the learning module, which derives the convolution geographical position of that feature information from the associations between the pixel database and the area database;
step 4: comparing the convolution geographical position with the pixel geographical position and, if they differ, sending both as results to be displayed simultaneously on the result display module;
step 5: allowing the user to decide on the result display module which result is accurate and to feed that result back to the learning module;
step 6: adjusting, by the learning module, the associations between the pixel database and the area database according to the correct result fed back.
7. The image geo-positioning system based on a convolutional neural network of claim 3, 4 or 5, characterised in that the classification method of the classifier comprises the following steps:
step 1: setting the hypothesis function;
step 2: deriving the cost function;
step 3: using the hypothesis function to compute a probability value for each class of the feature information;
step 4: performing classification according to the computed probabilities.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610382449.8A CN106095830A (en) | 2016-05-31 | 2016-05-31 | A kind of image geo-positioning system based on convolutional neural networks and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610382449.8A CN106095830A (en) | 2016-05-31 | 2016-05-31 | A kind of image geo-positioning system based on convolutional neural networks and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106095830A true CN106095830A (en) | 2016-11-09 |
Family
ID=57446967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610382449.8A Withdrawn CN106095830A (en) | 2016-05-31 | 2016-05-31 | A kind of image geo-positioning system based on convolutional neural networks and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106095830A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399413A (en) * | 2017-02-04 | 2018-08-14 | 清华大学 | A kind of picture shooting region recognition and geographic positioning and device |
CN108399413B (en) * | 2017-02-04 | 2020-10-27 | 清华大学 | Picture shooting area identification and geographical positioning method and device |
CN106846278A (en) * | 2017-02-17 | 2017-06-13 | 深圳市唯特视科技有限公司 | A kind of image pixel labeling method based on depth convolutional neural networks |
CN107707927A (en) * | 2017-09-25 | 2018-02-16 | 咪咕互动娱乐有限公司 | A kind of live data method for pushing, device and storage medium |
CN107707927B (en) * | 2017-09-25 | 2021-10-26 | 咪咕互动娱乐有限公司 | Live broadcast data pushing method and device and storage medium |
CN112424769A (en) * | 2018-12-18 | 2021-02-26 | 谷歌有限责任公司 | System and method for geographic location prediction |
CN109743553A (en) * | 2019-01-26 | 2019-05-10 | 温州大学 | A kind of hidden image detection method and system based on deep learning model |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C06 | Publication |
 | PB01 | Publication |
 | C10 | Entry into substantive examination |
 | SE01 | Entry into force of request for substantive examination |
 | WW01 | Invention patent application withdrawn after publication |
 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20161109