CN109146892A - Aesthetics-based image cropping method and device - Google Patents
Aesthetics-based image cropping method and device
- Publication number
- CN109146892A (application number CN201810813038.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- salient region
- crop
- aesthetic
- bounding box
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The embodiments of the present application provide an aesthetics-based image cropping method and device, belonging to the field of computer technology. The method includes: obtaining an image to be cropped; computing a saliency map corresponding to the image to be cropped according to a saliency detection algorithm, wherein the saliency map is a grayscale saliency image corresponding to the image to be cropped; determining a salient bounding box in the saliency map by a salient region extraction algorithm; determining, in the image to be cropped, the salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped; determining, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region; and cropping the image to be cropped based on the aesthetic region bounding box to obtain a target image. With the present invention, the efficiency of determining a crop box can be improved.
Description
Technical field
This application relates to the field of computer technology, and in particular to an aesthetics-based image cropping method and device.
Background art
In addition to semantic information, an image also has aesthetic quality. An image with high aesthetic quality expresses its semantic information better and is preferred by users. However, with the popularity of digital cameras and smartphones, most images on the network are shot by users without professional photography knowledge, so their aesthetic quality is low. Obtaining images with high aesthetic quality from the images on the network has therefore become a hot research topic.
Because image composition is an important factor affecting image aesthetic quality, people usually change the composition by cropping the image and thereby improve its aesthetic quality. A common image cropping method proceeds as follows. 1. An electronic device obtains the salient bounding box of the image to be cropped, and the coordinate information of that bounding box, according to a salient bounding box acquisition algorithm and the image to be cropped. 2. Taking the salient bounding box as a reference, the electronic device sequentially generates, according to the coordinate information of the salient bounding box and a preset coordinate interval threshold, multiple candidate crop boxes containing the salient bounding box and the candidate cropping region corresponding to each candidate crop box; the electronic device then obtains a classification result for each candidate cropping region through an aesthetic quality classification network, where the classification result is a probability value between 0 and 1, and determines the maximum probability value; the candidate crop box corresponding to the maximum probability value is taken as the crop box, and the image region corresponding to the crop box is the aesthetic region. 3. The electronic device crops the image to be cropped based on the crop box and obtains an image with high aesthetic quality. Here, the electronic device includes a server or a terminal, and the aesthetic region is the region of the image with high aesthetic quality.
However, this image cropping method needs to generate thousands of candidate crop boxes and to determine the probability value of each candidate crop box one by one. For a single image to be cropped, the time needed to determine the crop box is therefore long, and the efficiency of determining the crop box is low.
Summary of the invention
The purpose of the embodiments of the present application is to provide an aesthetics-based image cropping method and device, so as to improve the efficiency of determining a crop box. The specific technical solutions are as follows.
In a first aspect, an aesthetics-based image cropping method is provided. The method includes:
obtaining an image to be cropped;
computing a saliency map corresponding to the image to be cropped according to a saliency detection algorithm, wherein the saliency map is a grayscale saliency image corresponding to the image to be cropped;
determining a salient bounding box in the saliency map by a salient region extraction algorithm;
determining, in the image to be cropped, the salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped;
determining, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region;
cropping the image to be cropped based on the aesthetic region bounding box to obtain a target image.
Optionally, determining the aesthetic region bounding box containing the salient region according to the aesthetic region recognition algorithm and the salient region includes:
obtaining first coordinate information corresponding to the salient region, wherein the first coordinate information includes the coordinates, in a preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the salient bounding box;
determining an offset ratio vector according to the salient region and the aesthetic region recognition algorithm, wherein the offset ratio vector consists of the coordinate offsets of the salient region in the up, down, left, and right directions, each expressed as a percentage of the corresponding side length of the aesthetic region bounding box;
determining second coordinate information according to the offset ratio vector and the first coordinate information, and taking the bounding box defined by the second coordinate information as the aesthetic region bounding box.
Optionally, the method further includes:
obtaining a pre-stored first image sample set, wherein the first image sample set includes multiple first image samples and the saliency map sample corresponding to each first image sample;
determining a first target parameter according to a preset first initial neural network, each first image sample, and the saliency map sample corresponding to each first image sample, wherein the first target parameter is a parameter of the first initial neural network;
determining the saliency detection algorithm according to the first target parameter.
Optionally, the method further includes:
obtaining a pre-stored second image sample set, wherein the second image sample set includes multiple second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample;
training a preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
Optionally, training the preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm includes:
obtaining, for the second image sample set, the salient region sample and the offset ratio vector sample corresponding to each second image sample;
determining a second target parameter according to the salient region sample and offset ratio vector sample of each second image sample and the preset second initial neural network, wherein the second target parameter is a parameter of the second initial neural network;
determining the aesthetic region recognition algorithm according to the second target parameter and the second initial neural network.
In a second aspect, an aesthetics-based image cropping device is provided. The device includes:
a first obtaining module, configured to obtain an image to be cropped;
a computing module, configured to compute a saliency map corresponding to the image to be cropped according to a saliency detection algorithm, wherein the saliency map is a grayscale saliency image corresponding to the image to be cropped;
a first determining module, configured to determine a salient bounding box in the saliency map by a salient region extraction algorithm;
a second determining module, configured to determine, in the image to be cropped, the salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped;
a third determining module, configured to determine, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region;
a cropping module, configured to crop the image to be cropped based on the aesthetic region bounding box to obtain a target image.
Optionally, the third determining module includes:
an obtaining submodule, configured to obtain first coordinate information corresponding to the salient region, wherein the first coordinate information includes the coordinates, in a preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the salient bounding box;
a first determining submodule, configured to determine an offset ratio vector according to the salient region and the aesthetic region recognition algorithm, wherein the offset ratio vector consists of the coordinate offsets of the salient region in the up, down, left, and right directions, each expressed as a percentage of the corresponding side length of the aesthetic region bounding box;
a second determining submodule, configured to determine second coordinate information according to the offset ratio vector and the first coordinate information, and to take the bounding box defined by the second coordinate information as the aesthetic region bounding box.
Optionally, the device further includes:
a second obtaining module, configured to obtain a pre-stored second image sample set, wherein the second image sample set includes multiple second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample;
a fourth determining module, configured to train a preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
In a third aspect, an electronic device is provided, including a processor and a machine-readable storage medium. The machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the machine-executable instructions cause the processor to implement the method steps described in the first aspect.
In a fourth aspect, a machine-readable storage medium is provided, which stores machine-executable instructions. When invoked and executed by a processor, the machine-executable instructions cause the processor to implement the method steps described in the first aspect.
With the aesthetics-based image cropping method and device provided by the embodiments of the present invention, the saliency map of the image to be cropped is first obtained from the image to be cropped and a pre-stored saliency detection algorithm; a salient bounding box is then obtained from the saliency map and a pre-stored salient region extraction algorithm, and the salient region is determined from the salient bounding box and the image to be cropped. The aesthetic region is then determined from the salient region and a pre-stored aesthetic region recognition algorithm. Finally, the image to be cropped is cropped according to the aesthetic region to obtain an image with high aesthetic quality. For a single image to be cropped, the aesthetic region corresponding to the salient region is determined directly by the aesthetic region recognition algorithm, which improves the efficiency of determining the crop box.
Of course, any product or method implementing the present application does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of an aesthetics-based image cropping method according to an embodiment of the present invention;
Fig. 2a is a schematic diagram of an image to be cropped according to an embodiment of the present invention;
Fig. 2b is a schematic diagram of the saliency map corresponding to an image to be cropped according to an embodiment of the present invention;
Fig. 2c is a schematic diagram of the salient region corresponding to an image to be cropped according to an embodiment of the present invention;
Fig. 2d is a schematic diagram of the aesthetic region bounding box corresponding to an image to be cropped according to an embodiment of the present invention;
Fig. 2e is a schematic diagram of the target image corresponding to an image to be cropped according to an embodiment of the present invention;
Fig. 3 is a flowchart of an aesthetics-based image cropping method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the coordinate system of an image to be cropped according to an embodiment of the present invention;
Fig. 5 is a flowchart of an aesthetics-based image cropping method according to an embodiment of the present invention;
Fig. 6 is a flowchart of an aesthetics-based image cropping method according to an embodiment of the present invention;
Fig. 7 is a flowchart of an aesthetics-based image cropping method according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an aesthetics-based image cropping device according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
The embodiments of the present invention provide an aesthetics-based image cropping method, which can be applied to an electronic device, such as a server of an image search website, a smartphone, or a personal computer. Based on this method, a server can crop the images uploaded by users and obtain images with high aesthetic quality while retaining the semantic information of the original images. When receiving a photographing instruction from a user, a smartphone can obtain the captured current image, compute the aesthetic region bounding box of the current image by this method, and then display the aesthetic region bounding box on the current image, so that the user can obtain an image with high aesthetic quality by selecting a preset cropping function.
For ease of understanding, related concepts in the embodiments of the present invention are briefly introduced as follows. For an image, the salient object is the object that occupies a relatively large area of the image and conveys a relatively large amount of information, that is, the object the user mainly pays attention to when viewing the image. For example, if a user photographs a building and the image also contains the trees, cars, and trash cans around the building, the salient object in the image is the building. The salient region is the region containing the salient object; it is the region of the image the user is most interested in and the region that best shows the image content. The salient region bounding box is the bounding box of the salient region.
The saliency map of an image indicates the saliency probability of every pixel in the image. In the saliency map, the saliency probability of a pixel is a value between 0 and 1; the larger the saliency probability of a pixel, the more salient the pixel is, that is, the more easily it attracts the user's attention. The saliency map is therefore the saliency image corresponding to the image. It should be noted that the saliency map of an image has the same size as the image.
The aesthetic quality of an image represents the degree of beauty the image has. The aesthetic region is the region of the image that has aesthetic quality; correspondingly, the aesthetic region bounding box is the bounding box of the aesthetic region. In general, the salient region bounding box and the aesthetic region bounding box are rectangular boxes.
As shown in Fig. 1, the specific processing flow of the method is as follows.
Step 101: obtain an image to be cropped.
In implementation, after receiving an image uploaded by a user, the electronic device can take the image as the image to be cropped and obtain the image data of the image to be cropped. The image data includes the pixels of the image and the position information of each pixel in the image, where the position information describes the relative position of a pixel within the image. Fig. 2a is a schematic diagram of an image to be cropped according to an embodiment of the present invention.
The electronic device can also store the image in a preset image library and, when a preset processing cycle is reached, obtain an image from the image library as the image to be cropped according to a preset processing order. For example, the electronic device can process the images one by one in the order of their upload times, or in descending order of their file sizes; this embodiment does not limit this.
Step 102: compute the saliency map corresponding to the image to be cropped according to a saliency detection algorithm.
In the embodiment of the present invention, the saliency map is the saliency image corresponding to the image to be cropped, and the saliency image can be a grayscale image. In the saliency map, the pixel with the maximum saliency probability is shown in white and the pixel with the minimum saliency probability is shown in black; the larger the saliency probability of a pixel, the closer to white the pixel is shown. The pixels representing the salient object in the saliency map are therefore shown close to white.
In implementation, the electronic device can process the image data of the image to be cropped with the preset saliency detection algorithm to obtain the saliency probability of each pixel in the image to be cropped. The electronic device then maps the saliency probability of each pixel into the value range of the image through a preset linear mapping algorithm, obtains the image value corresponding to each pixel, and generates the saliency map based on these image values. Fig. 2b is a schematic diagram of the saliency map corresponding to the image to be cropped according to an embodiment of the present invention; in this saliency map, the salient object is formed by the pixels shown close to white.
The specific process by which the electronic device computes the image data of the image to be cropped with the saliency detection algorithm to obtain the saliency probability of each pixel, and the specific process by which the electronic device generates the saliency map through the preset linear mapping algorithm, belong to the prior art and are not repeated here.
It should be noted that the saliency detection algorithm can be any algorithm that can convert an image into a saliency map, for example a U-Net (U-shaped network) fully convolutional network.
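As an illustrative sketch only (not part of the patent text), the linear mapping from per-pixel saliency probabilities to a grayscale saliency map described above could look as follows; the probability array is a stand-in for the output of any saliency detection network such as a U-Net-style fully convolutional network.

```python
import numpy as np

def probabilities_to_saliency_map(probs: np.ndarray) -> np.ndarray:
    """Linearly map per-pixel saliency probabilities in [0, 1] to an
    8-bit grayscale saliency image (0 = least salient, 255 = most salient)."""
    probs = np.clip(probs, 0.0, 1.0)
    return np.round(probs * 255.0).astype(np.uint8)

# A fake 4 x 4 probability map stands in for the output of a saliency network.
fake_probs = np.random.rand(4, 4)
saliency_map = probabilities_to_saliency_map(fake_probs)
print(saliency_map)
```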
Step 103: determine the salient bounding box in the saliency map by a salient region extraction algorithm.
In implementation, the electronic device can store a preset salient region extraction threshold, which is the ratio of the sum of the saliency probabilities of the pixels inside the salient bounding box to the sum of the saliency probabilities of all pixels in the image to be cropped, for example 90%.
Based on the saliency probability of each pixel in the saliency map, the electronic device determines, according to the preset salient region extraction algorithm, a bounding box that satisfies the salient region extraction threshold, and takes this bounding box as the salient bounding box. In the saliency map, the electronic device takes the position information of the pixels corresponding to the salient bounding box as the position information of the salient bounding box and stores it in a preset position information file.
It should be noted that the salient region extraction algorithm can be any algorithm that can determine the bounding box corresponding to the saliency map based on the saliency map and the salient region extraction threshold, for example a heuristic cropping algorithm. The specific process by which the electronic device obtains the salient bounding box corresponding to the saliency map according to the salient region extraction algorithm belongs to the prior art and is not repeated here.
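As a hedged illustration of one possible heuristic (the patent leaves the extraction algorithm to the prior art), a bounding box covering at least the salient region extraction threshold, e.g. 90% of the total saliency, could be found by greedily shrinking the full-image box:

```python
import numpy as np

def extract_salient_box(saliency: np.ndarray, keep: float = 0.9):
    """Greedily shrink a bounding box so that it still covers at least
    `keep` of the total saliency mass. Returns (x1, y1, x2, y2), inclusive."""
    total = saliency.sum()
    y1, y2 = 0, saliency.shape[0] - 1
    x1, x2 = 0, saliency.shape[1] - 1
    changed = True
    while changed:
        changed = False
        # Try to peel one row or column from each side in turn.
        for side in ("top", "bottom", "left", "right"):
            ny1, ny2, nx1, nx2 = y1, y2, x1, x2
            if side == "top" and ny1 < ny2:
                ny1 += 1
            elif side == "bottom" and ny2 > ny1:
                ny2 -= 1
            elif side == "left" and nx1 < nx2:
                nx1 += 1
            elif side == "right" and nx2 > nx1:
                nx2 -= 1
            else:
                continue
            if saliency[ny1:ny2 + 1, nx1:nx2 + 1].sum() >= keep * total:
                y1, y2, x1, x2 = ny1, ny2, nx1, nx2
                changed = True
    return x1, y1, x2, y2

saliency = np.zeros((100, 100))
saliency[30:60, 40:80] = 1.0          # a bright salient blob
print(extract_salient_box(saliency))   # roughly encloses the blob
```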
Step 104: determine, in the image to be cropped, the salient region corresponding to the salient bounding box.
Here, the salient region is the image region enclosed by the salient bounding box in the image to be cropped.
In implementation, since the saliency map has the same size as the image to be cropped, the electronic device can determine the position information of the salient bounding box in the image to be cropped according to its position information in the saliency map. The electronic device then takes the image region enclosed by the salient bounding box in the image to be cropped as the salient region. That is, the electronic device can take the position information of the salient bounding box in the saliency map, stored in the position information file, as the position information of the salient region in the image to be cropped.
Fig. 2c is a schematic diagram of the salient region corresponding to the image to be cropped according to an embodiment of the present invention, in which the white box is the salient bounding box and the image region enclosed by the white box is the salient region.
Step 105: determine, according to the aesthetic region recognition algorithm and the salient region, the aesthetic region bounding box containing the salient region.
In implementation, the electronic device can compute the position information of the aesthetic region bounding box through the preset aesthetic region recognition algorithm and the salient region, and then determine the aesthetic region bounding box from this position information. Fig. 2d is a schematic diagram of the aesthetic region bounding box corresponding to the image to be cropped according to an embodiment of the present invention, in which the small white box is the salient bounding box and the large white box is the aesthetic region bounding box.
It should be noted that the aesthetic region recognition algorithm includes a regression network, and the aesthetic region bounding box contains the salient region.
Step 106: crop the image to be cropped based on the aesthetic region bounding box to obtain the target image.
In implementation, the electronic device determines, in the image to be cropped and according to the position information of the aesthetic region bounding box, the image region enclosed by the aesthetic region bounding box as the region to be cropped. The electronic device then extracts the pixels contained in the region to be cropped and the position information of each pixel as the image data of the region to be cropped. Finally, the electronic device displays the target image according to this image data.
Fig. 2e is a schematic diagram of the target image obtained by cropping the image to be cropped based on the aesthetic region bounding box according to an embodiment of the present invention.
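As a minimal sketch (assuming the image to be cropped is held as a NumPy array and the aesthetic region bounding box is given as inclusive row/column pixel indices rather than the coordinate system defined later), the cropping step itself reduces to array slicing:

```python
import numpy as np

def crop_to_box(image: np.ndarray, box):
    """Crop an H x W x C image to an inclusive bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return image[y1:y2 + 1, x1:x2 + 1]

image = np.zeros((480, 640, 3), dtype=np.uint8)
target = crop_to_box(image, (100, 50, 500, 400))
print(target.shape)  # (351, 401, 3)
```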
Specifically, as shown in Fig. 3, the specific processing flow for determining the aesthetic region bounding box containing the salient region according to the aesthetic region recognition algorithm and the salient region is as follows.
Step 301: obtain the first coordinate information corresponding to the salient region.
A coordinate system of the image to be cropped is preset in the electronic device; it is a plane rectangular coordinate system xOy whose origin is an endpoint of the image to be cropped and whose coordinate unit is one pixel. The position information of a pixel therefore includes its coordinate information. The electronic device takes the number of pixels of the image to be cropped along the x-axis as the side length of the image along the x-axis and, correspondingly, takes the number of pixels along the y-axis as the side length of the image along the y-axis.
In implementation, the electronic device obtains, from the preset position information file, the position information of the salient bounding box in the saliency map and takes it as the first coordinate information corresponding to the salient region. The first coordinate information includes the coordinates, in the preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the salient bounding box; these two non-adjacent endpoints are the two endpoints of a main diagonal of the salient bounding box. According to the first coordinate information, the electronic device can determine the position of the salient region in the coordinate system of the image to be cropped.
For example, denote the first coordinate information of the salient region by S = {x_s1, x_s2, y_s1, y_s2}, where s indicates the salient region, (x_s1, y_s1) is the coordinate of one endpoint of the salient bounding box, and (x_s2, y_s2) is the coordinate of the non-adjacent endpoint. When S = {40, 60, 100, 60}, the coordinates of the pixels corresponding to the two non-adjacent endpoints of the salient bounding box are (40, 100) and (60, 60); since the salient bounding box is a rectangle, the coordinates of its other two endpoints are (40, 60) and (60, 100).
Step 302: determine the offset ratio vector according to the salient region and the aesthetic region recognition algorithm.
Here, the offset ratio vector consists of the coordinate offsets of the salient region in the up, down, left, and right directions, each expressed as a percentage of the corresponding side length of the aesthetic region bounding box. In the embodiment of the present invention, each percentage in the offset ratio vector is a positive number.
In implementation, the electronic device obtains the image data of the salient region from the image to be cropped according to the first coordinate information of the salient region. The electronic device then processes the image data of the salient region with the aesthetic region recognition algorithm, and the result is an offset ratio vector composed of four percentages. The four percentages respectively represent the coordinate offsets of the salient region in the up, down, left, and right directions as percentages of the corresponding side lengths of the aesthetic region bounding box. The up, down, left, and right directions in the coordinate system of the image to be cropped correspond to the positive y-axis, negative y-axis, negative x-axis, and positive x-axis directions, respectively.
For example, let h_a denote the side length of the aesthetic region bounding box along the y-axis and w_a its side length along the x-axis, with h_a = 300 and w_a = 400. If the first percentage in the offset ratio vector is 0.1, then the coordinate offset of the salient region in the upward direction, i.e. along the positive y-axis, is 0.1 times the side length h_a of the aesthetic region bounding box, that is, 30. The other three directions are handled analogously and are not repeated here.
Step 303: determine the second coordinate information according to the offset ratio vector and the first coordinate information, and take the bounding box defined by the second coordinate information as the aesthetic region bounding box.
In implementation, the electronic device determines the side lengths of the salient region according to the first coordinate information of the salient region, and then computes the side lengths of the aesthetic region bounding box in the corresponding directions from the percentages in the offset ratio vector and the side lengths of the salient region in those directions. The electronic device computes the coordinates of the four endpoints according to the side lengths of the aesthetic region bounding box, the offset ratio vector, and the first coordinate information, constructs a bounding box based on the coordinates of the four endpoints, and takes this bounding box as the aesthetic region bounding box.
Clearly, in the embodiment of the present invention, the aesthetic region bounding box contains the salient region bounding box. The second coordinate information includes the coordinates, in the preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the aesthetic region bounding box. Similarly, the second coordinate information of the aesthetic region bounding box can serve as the position information of the aesthetic region in the image to be cropped, and the side lengths of the aesthetic region bounding box are equal to the corresponding side lengths of the aesthetic region.
The embodiment of the present invention provides the specific process of determining the second coordinate information according to the offset ratio vector and the first coordinate information.
For example, as shown in Fig. 4, 401 denotes the image to be cropped, of size w × h, in the coordinate system xOy of the image to be cropped; 403 denotes the salient bounding box corresponding to the salient region, with w_s denoting the side length of the salient region along the x-axis and h_s its side length along the y-axis; 402 denotes the aesthetic region bounding box, with w_a denoting its side length along the x-axis and h_a its side length along the y-axis; and a denotes the aesthetic region corresponding to the aesthetic region bounding box.
The first coordinate information of the salient region is S = {x_s1, x_s2, y_s1, y_s2}, and the offset ratio vector is [Δy_t, Δy_b, Δx_t, Δx_b], where Δy_t is the coordinate offset along the positive y-axis as a percentage of h_a; similarly, Δy_b is the coordinate offset along the negative y-axis as a percentage of h_a; Δx_t is the coordinate offset along the negative x-axis as a percentage of w_a; and Δx_b is the coordinate offset along the positive x-axis as a percentage of w_a.
The electronic device determines w_s and h_s from the first coordinate information, specifically w_s = |x_s2 − x_s1| and h_s = |y_s1 − y_s2|. Then, from the offset ratio vector and the determined w_s and h_s, the electronic device computes w_a and h_a of the aesthetic region, specifically w_a = w_s / (1 − Δx_t − Δx_b) and h_a = h_s / (1 − Δy_t − Δy_b). Finally, from the offset ratio vector, the first coordinate information, and the determined w_a and h_a, the electronic device computes the second coordinate information A = {x_a1, x_a2, y_a1, y_a2} as follows: x_a1 = x_s1 − Δx_t·w_a, x_a2 = x_s2 + Δx_b·w_a, y_a1 = y_s1 + Δy_t·h_a, y_a2 = y_s2 − Δy_b·h_a.
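The computation of Steps 301 to 303 can be illustrated with the following sketch, which assumes the coordinate convention used above (y-axis pointing up, the first endpoint being the upper corner of the box); it is an illustration, not the claimed implementation.

```python
def aesthetic_box(first_coords, offsets):
    """Compute the second coordinate information (aesthetic region bounding box)
    from the first coordinate information of the salient region and the offset
    ratio vector [dy_t, dy_b, dx_t, dx_b] (fractions of the aesthetic box sides).

    Assumed convention: y-axis points up, (x_s1, y_s1) is the top endpoint and
    (x_s2, y_s2) the non-adjacent bottom endpoint of the salient bounding box."""
    x_s1, x_s2, y_s1, y_s2 = first_coords
    dy_t, dy_b, dx_t, dx_b = offsets

    w_s = abs(x_s2 - x_s1)
    h_s = abs(y_s1 - y_s2)
    w_a = w_s / (1 - dx_t - dx_b)   # aesthetic box width
    h_a = h_s / (1 - dy_t - dy_b)   # aesthetic box height

    x_a1 = x_s1 - dx_t * w_a        # expand left
    x_a2 = x_s2 + dx_b * w_a        # expand right
    y_a1 = y_s1 + dy_t * h_a        # expand up
    y_a2 = y_s2 - dy_b * h_a        # expand down
    return x_a1, x_a2, y_a1, y_a2

# Worked example reusing S = {40, 60, 100, 60} from Step 301 and an
# illustrative offset ratio vector of 0.1 in every direction.
print(aesthetic_box((40, 60, 100, 60), (0.1, 0.1, 0.1, 0.1)))
# -> (37.5, 62.5, 105.0, 55.0)
```

The divisions are well defined because each percentage in the offset ratio vector is a positive number and, per direction, their sum is below 1.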
As shown in Fig. 5, an embodiment of the present invention further provides a training method for the saliency detection algorithm, which specifically includes the following steps.
Step 501: obtain a pre-stored first image sample set.
Here, the first image sample set is stored in advance in the electronic device and includes multiple first image samples and the saliency map sample corresponding to each first image sample. The first image sample set includes the SALICON (Saliency in Context) eye-tracking data set.
In implementation, the electronic device can obtain the first image sample set when receiving a preset first training instruction. The first training instruction may include an identifier of the first image sample set, and the electronic device can obtain the first image sample set according to this identifier.
Step 502: determine the first target parameter according to the preset first initial neural network, each first image sample, and the saliency map sample corresponding to each first image sample.
Here, the first target parameter is a parameter of the first initial neural network. The first initial neural network can be any of several fully convolutional neural networks, for example a U-Net fully convolutional network or a SegNet (semantic image segmentation network) fully convolutional network.
In implementation, for the first image sample set, the electronic device inputs each first image sample and its corresponding saliency map sample into the preset first initial neural network for training, and takes the resulting parameters of the first initial neural network as the first target parameter.
Step 503: determine the saliency detection algorithm according to the first target parameter.
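The following sketch, which is not part of the patent, shows roughly what Steps 501 to 503 could look like with a generic fully convolutional network in PyTorch; the toy network, the random tensors standing in for the first image samples and saliency map samples, and the choice of loss are all assumptions.

```python
import torch
import torch.nn as nn

# Toy placeholder for the first initial neural network (e.g. a U-Net-style FCN):
# maps an N x 3 x H x W batch to N x 1 x H x W per-pixel saliency logits.
fcn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))

images = torch.rand(16, 3, 64, 64)         # stand-in first image samples
saliency_maps = torch.rand(16, 1, 64, 64)  # stand-in saliency map samples in [0, 1]

optimizer = torch.optim.Adam(fcn.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()         # per-pixel saliency probability target

for epoch in range(5):                     # Step 502: fit the parameters
    optimizer.zero_grad()
    loss = criterion(fcn(images), saliency_maps)
    loss.backward()
    optimizer.step()

first_target_parameter = fcn.state_dict()  # Step 503: the trained parameters
```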
As shown in Fig. 6, an embodiment of the present invention further provides a training method for the aesthetic region recognition algorithm, which specifically includes the following steps.
Step 601: obtain a pre-stored second image sample set.
In implementation, the second image sample set is stored in advance in the electronic device and includes multiple second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample.
The second image samples include high-quality image samples whose scores exceed 6 in the AVA data set.
The embodiment of the present invention provides a method by which the electronic device determines the second image sample set; the specific process is as follows.
For each second image sample, the electronic device can obtain the saliency map of the second image sample through the saliency detection algorithm, and obtain the salient bounding box corresponding to the saliency map, together with its coordinate information, through the salient region extraction algorithm. The electronic device then determines the salient region sample in the second image sample according to the coordinate information of the salient bounding box. The electronic device thereby obtains the salient region sample corresponding to each second image sample.
For each second image sample, the electronic device can take the coordinate information of the second image sample itself as the coordinate information of the aesthetic region bounding box corresponding to the second image sample, and then determine the offset ratio vector sample of the second image sample from the coordinate information of the aesthetic region bounding box and the coordinate information of the salient bounding box. The electronic device thereby obtains the offset ratio vector sample corresponding to each second image sample.
The electronic device determines the second image sample set from each second image sample and the salient region sample and offset ratio vector sample corresponding to each second image sample.
The electronic device can obtain the pre-stored second image sample set when receiving a preset second training instruction. The electronic device can also receive a second image sample set input by a technician.
Step 602: train the preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
In implementation, the electronic device trains the preset second initial neural network with the second image sample set as training samples and takes the trained neural network as the aesthetic region recognition algorithm. The second initial neural network can be any of several regression networks with different network structures, where the network structure includes the arrangement of the fully connected layers in the regression network and the number of neurons.
Specifically, as shown in Fig. 7, the specific process of training the preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm is as follows.
Step 701: for the second image sample set, obtain the salient region sample and the offset ratio vector sample corresponding to each second image sample.
In implementation, the electronic device obtains each second image sample contained in the second image sample set and the salient region sample and offset ratio vector sample corresponding to each second image sample.
Step 702: determine the second target parameter according to the salient region sample and offset ratio vector sample of each second image sample and the preset second initial neural network.
Here, the second target parameter is a parameter of the second initial neural network.
In implementation, for each second image sample, the electronic device inputs the salient region sample of the second image sample into the second initial neural network and obtains an offset ratio vector. The electronic device takes the offset ratio vector of a second image sample together with the offset ratio vector sample corresponding to that second image sample as one group of test data, thereby obtaining the test data of all second image samples. The electronic device then determines the neural network weights of the second initial neural network through the error back-propagation algorithm and the test data of each second image sample, and takes the obtained neural network weights as the second target parameter.
In the embodiment of the present invention, the specific process by which the electronic device computes the neural network weights through the error back-propagation algorithm and the test data corresponding to each second image sample belongs to the prior art and is not repeated here.
Step 703: determine the aesthetic region recognition algorithm according to the second target parameter and the second initial neural network.
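Again purely as an assumed sketch (the backbone, input size, and loss function are placeholders, not the patent's second initial neural network), Steps 701 to 703 could be realized as a small regression network trained by back-propagation to map a salient region sample to its 4-dimensional offset ratio vector sample:

```python
import torch
import torch.nn as nn

# Placeholder regression network: a salient region crop resized to 64 x 64 is
# mapped to the 4-dimensional offset ratio vector [dy_t, dy_b, dx_t, dx_b].
regressor = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 4), nn.Sigmoid())        # offsets are positive fractions

salient_crops = torch.rand(32, 3, 64, 64)  # stand-in salient region samples
offset_samples = torch.rand(32, 4) * 0.2   # stand-in offset ratio vector samples

optimizer = torch.optim.Adam(regressor.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for epoch in range(10):                    # Step 702: error back-propagation
    optimizer.zero_grad()
    loss = criterion(regressor(salient_crops), offset_samples)
    loss.backward()
    optimizer.step()

second_target_parameter = regressor.state_dict()  # Steps 702 and 703
```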
With the aesthetics-based image cropping method and device provided by the embodiments of the present invention, the saliency map of the image to be cropped is first obtained from the image to be cropped and a pre-stored saliency detection algorithm; a salient bounding box is then obtained from the saliency map and a pre-stored salient region extraction algorithm, and the salient region is determined from the salient bounding box and the image to be cropped. The aesthetic region is then determined from the salient region and a pre-stored aesthetic region recognition algorithm. Finally, the image to be cropped is cropped according to the aesthetic region to obtain an image with high aesthetic quality. For a single image to be cropped, the aesthetic region corresponding to the salient region is determined directly by the aesthetic region recognition algorithm, which improves the efficiency of determining the crop box.
An embodiment of the present invention further provides an aesthetics-based image cropping device. As shown in Fig. 8, the device includes:
a first obtaining module 810, configured to obtain an image to be cropped;
a computing module 820, configured to compute a saliency map corresponding to the image to be cropped according to a saliency detection algorithm, wherein the saliency map is a grayscale saliency image corresponding to the image to be cropped;
a first determining module 830, configured to determine a salient bounding box in the saliency map by a salient region extraction algorithm;
a second determining module 840, configured to determine, in the image to be cropped, the salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped;
a third determining module 850, configured to determine, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region;
a cropping module 860, configured to crop the image to be cropped based on the aesthetic region bounding box to obtain a target image.
Optionally, the third determining module includes:
an obtaining submodule, configured to obtain first coordinate information corresponding to the salient region, wherein the first coordinate information includes the coordinates, in a preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the salient bounding box;
a first determining submodule, configured to determine an offset ratio vector according to the salient region and the aesthetic region recognition algorithm, wherein the offset ratio vector consists of the coordinate offsets of the salient region in the up, down, left, and right directions, each expressed as a percentage of the corresponding side length of the aesthetic region bounding box;
a second determining submodule, configured to determine second coordinate information according to the offset ratio vector and the first coordinate information, and to take the bounding box defined by the second coordinate information as the aesthetic region bounding box.
Optionally, the device further includes:
a second obtaining module, configured to obtain a pre-stored second image sample set, wherein the second image sample set includes multiple second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample;
a fourth determining module, configured to train a preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
With the aesthetics-based image cropping method and device provided by the embodiments of the present invention, the saliency map of the image to be cropped is first obtained from the image to be cropped and a pre-stored saliency detection algorithm; a salient bounding box is then obtained from the saliency map and a pre-stored salient region extraction algorithm, and the salient region is determined from the salient bounding box and the image to be cropped. The aesthetic region is then determined from the salient region and a pre-stored aesthetic region recognition algorithm. Finally, the image to be cropped is cropped according to the aesthetic region to obtain an image with high aesthetic quality. For a single image to be cropped, the aesthetic region corresponding to the salient region is determined directly by the aesthetic region recognition algorithm, which improves the efficiency of determining the crop box.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 9, the electronic device includes a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 communicate with each other through the communication bus 904.
The memory 903 is configured to store a computer program.
The processor 901 is configured to execute the program stored in the memory 903 so as to cause the electronic device to perform the following steps:
An aesthetics-based image cropping method is provided, the method including:
obtaining an image to be cropped;
computing a saliency map corresponding to the image to be cropped according to a saliency detection algorithm, wherein the saliency map is a grayscale saliency image corresponding to the image to be cropped;
determining a salient bounding box in the saliency map by a salient region extraction algorithm;
determining, in the image to be cropped, the salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped;
determining, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region;
cropping the image to be cropped based on the aesthetic region bounding box to obtain a target image.
Optionally, determining the aesthetic region bounding box containing the salient region according to the aesthetic region recognition algorithm and the salient region includes:
obtaining first coordinate information corresponding to the salient region, wherein the first coordinate information includes the coordinates, in a preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the salient bounding box;
determining an offset ratio vector according to the salient region and the aesthetic region recognition algorithm, wherein the offset ratio vector consists of the coordinate offsets of the salient region in the up, down, left, and right directions, each expressed as a percentage of the corresponding side length of the aesthetic region bounding box;
determining second coordinate information according to the offset ratio vector and the first coordinate information, and taking the bounding box defined by the second coordinate information as the aesthetic region bounding box.
Optionally, the method further includes:
obtaining a pre-stored first image sample set, wherein the first image sample set includes multiple first image samples and the saliency map sample corresponding to each first image sample;
determining a first target parameter according to a preset first initial neural network, each first image sample, and the saliency map sample corresponding to each first image sample, wherein the first target parameter is a parameter of the first initial neural network;
determining the saliency detection algorithm according to the first target parameter.
Optionally, the method further includes:
obtaining a pre-stored second image sample set, wherein the second image sample set includes multiple second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample;
training a preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
Optionally, training the preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm includes:
obtaining, for the second image sample set, the salient region sample and the offset ratio vector sample corresponding to each second image sample;
determining a second target parameter according to the salient region sample and offset ratio vector sample of each second image sample and the preset second initial neural network, wherein the second target parameter is a parameter of the second initial neural network;
determining the aesthetic region recognition algorithm according to the second target parameter and the second initial neural network.
The machine-readable storage medium may include a RAM (Random Access Memory) and may also include an NVM (Non-Volatile Memory), for example at least one disk storage. In addition, the machine-readable storage medium may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
With the aesthetics-based image cropping method and device provided by the embodiments of the present invention, the saliency map of the image to be cropped is first obtained from the image to be cropped and a pre-stored saliency detection algorithm; a salient bounding box is then obtained from the saliency map and a pre-stored salient region extraction algorithm, and the salient region is determined from the salient bounding box and the image to be cropped. The aesthetic region is then determined from the salient region and a pre-stored aesthetic region recognition algorithm. Finally, the image to be cropped is cropped according to the aesthetic region to obtain an image with high aesthetic quality. For a single image to be cropped, the aesthetic region corresponding to the salient region is determined directly by the aesthetic region recognition algorithm, which improves the efficiency of determining the crop box.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Each embodiment in this specification is described in a related manner, and for the same or similar parts between the embodiments, reference may be made to each other; each embodiment focuses on its differences from the other embodiments. In particular, the device embodiment is described relatively simply because it is substantially similar to the method embodiment, and for related details reference may be made to the description of the method embodiment.
The above are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. An aesthetics-based image cropping method, characterized in that the method includes:
obtaining an image to be cropped;
computing a saliency map corresponding to the image to be cropped according to a saliency detection algorithm, wherein the saliency map is a grayscale saliency image corresponding to the image to be cropped;
determining a salient bounding box in the saliency map by a salient region extraction algorithm;
determining, in the image to be cropped, the salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped;
determining, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region;
cropping the image to be cropped based on the aesthetic region bounding box to obtain a target image.
2. The method according to claim 1, characterized in that determining the aesthetic region bounding box containing the salient region according to the aesthetic region recognition algorithm and the salient region includes:
obtaining first coordinate information corresponding to the salient region, wherein the first coordinate information includes the coordinates, in a preset coordinate system of the image to be cropped, of the pixels corresponding to two non-adjacent endpoints of the salient bounding box;
determining an offset ratio vector according to the salient region and the aesthetic region recognition algorithm, wherein the offset ratio vector consists of the coordinate offsets of the salient region in the up, down, left, and right directions, each expressed as a percentage of the corresponding side length of the aesthetic region bounding box;
determining second coordinate information according to the offset ratio vector and the first coordinate information, and taking the bounding box defined by the second coordinate information as the aesthetic region bounding box.
3. The method according to claim 1, characterized in that the method further includes:
obtaining a pre-stored first image sample set, wherein the first image sample set includes multiple first image samples and the saliency map sample corresponding to each first image sample;
determining a first target parameter according to a preset first initial neural network, each first image sample, and the saliency map sample corresponding to each first image sample, wherein the first target parameter is a parameter of the first initial neural network;
determining the saliency detection algorithm according to the first target parameter.
4. The method according to claim 1, characterized in that the method further includes:
obtaining a pre-stored second image sample set, wherein the second image sample set includes multiple second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample;
training a preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
5. The method according to claim 4, characterized in that training the preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm comprises:
obtaining, for the second image sample set, the salient region sample and the offset ratio vector sample corresponding to each second image sample;
determining second target parameters according to the salient region sample and the offset ratio vector sample of each second image sample and the preset second initial neural network, wherein the second target parameters are the parameters of the second initial neural network;
determining the aesthetic region recognition algorithm according to the second target parameters and the second initial neural network.
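At inference time, the second coordinate information of claim 2 can be recovered by inverting the mapping above: the offset ratio vector predicted by the trained second network expands the salient bounding box into the aesthetic region bounding box. The sketch below shows that inversion under the same assumed ratio definition; the training loop itself would mirror the one shown after claim 3, with the loss taken between predicted and stored offset ratio vectors (for example a smooth L1 or mean-squared error, which is an assumption, not a disclosure of the patent).

```python
def aesthetic_box_from_offsets(salient_box, offsets):
    """Expand a salient bounding box (x1, y1, x2, y2) into an aesthetic bounding box
    using an offset ratio vector (up, down, left, right).

    Inverse of offset_vector_sample above; the formula is an assumed reading of
    claim 2, not quoted from the patent.
    """
    x1, y1, x2, y2 = salient_box
    r_up, r_down, r_left, r_right = offsets

    # If the left/right offsets are fractions of the aesthetic-box width W, then
    # W = (x2 - x1) + (r_left + r_right) * W, i.e. W = (x2 - x1) / (1 - r_left - r_right).
    width = (x2 - x1) / (1.0 - r_left - r_right)
    height = (y2 - y1) / (1.0 - r_up - r_down)

    return (x1 - r_left * width,    # left edge of the aesthetic box
            y1 - r_up * height,     # top edge
            x2 + r_right * width,   # right edge
            y2 + r_down * height)   # bottom edge
```

Feeding the earlier example back through this function, the salient box (100, 80, 300, 240) with offsets (0.1, 0.1, 0.1, 0.1) recovers the aesthetic box (75.0, 60.0, 325.0, 260.0).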
6. An aesthetics-based image cropping device, characterized in that the device comprises:
a first obtaining module, configured to obtain an image to be cropped;
a calculation module, configured to calculate, according to a saliency detection algorithm, a saliency map corresponding to the image to be cropped, wherein the saliency map comprises a saliency image corresponding to the image to be cropped, and the saliency image is a grayscale image;
a first determining module, configured to determine a salient bounding box in the saliency map by a salient region extraction algorithm;
a second determining module, configured to determine, in the image to be cropped, a salient region corresponding to the salient bounding box, wherein the salient region is the image region enclosed by the salient bounding box in the image to be cropped;
a third determining module, configured to determine, according to an aesthetic region recognition algorithm and the salient region, an aesthetic region bounding box containing the salient region;
a cropping module, configured to crop the image to be cropped based on the aesthetic region bounding box to obtain a target image.
7. The device according to claim 6, characterized in that the third determining module comprises:
an obtaining submodule, configured to obtain first coordinate information corresponding to the salient region, wherein the first coordinate information comprises, in a preset coordinate system of the image to be cropped, the coordinates of the pixels corresponding to two non-adjacent corner points of the salient bounding box;
a first determining submodule, configured to determine an offset ratio vector according to the salient region and the aesthetic region recognition algorithm, wherein the offset ratio vector is composed of the percentages that the coordinate offsets of the salient region in the up, down, left, and right directions account for of the corresponding side lengths of the aesthetic region bounding box;
a second determining submodule, configured to determine second coordinate information according to the offset ratio vector and the first coordinate information, and to take the bounding box defined by the second coordinate information as the aesthetic region bounding box.
8. The device according to claim 6, characterized in that the device further comprises:
a second obtaining module, configured to obtain a pre-stored second image sample set, wherein the second image sample set comprises a plurality of second image samples and, for each second image sample, a corresponding salient region sample and offset ratio vector sample;
a fourth determining module, configured to train a preset second initial neural network based on the second image sample set to obtain the aesthetic region recognition algorithm.
9. An electronic device, characterized by comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor, and the machine-executable instructions cause the processor to implement the method steps of any one of claims 1-5.
10. A machine-readable storage medium, characterized in that machine-executable instructions are stored thereon, and when the machine-executable instructions are invoked and executed by a processor, they cause the processor to implement the method steps of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810813038.9A CN109146892B (en) | 2018-07-23 | 2018-07-23 | Image clipping method and device based on aesthetics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810813038.9A CN109146892B (en) | 2018-07-23 | 2018-07-23 | Image clipping method and device based on aesthetics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109146892A (en) | 2019-01-04 |
CN109146892B CN109146892B (en) | 2020-06-19 |
Family
ID=64801470
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810813038.9A Active CN109146892B (en) | 2018-07-23 | 2018-07-23 | Image clipping method and device based on aesthetics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109146892B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103544685A (en) * | 2013-10-22 | 2014-01-29 | 华南理工大学 | Method and system for beautifying composition of image based on main body adjustment |
CN105100625A (en) * | 2015-08-27 | 2015-11-25 | 华南理工大学 | Figure image auxiliary shooting method and system based on image aesthetics |
CN105528757A (en) * | 2015-12-08 | 2016-04-27 | 华南理工大学 | Content-based image aesthetic quality improvement method |
CN107146198A (en) * | 2017-04-19 | 2017-09-08 | 中国电子科技集团公司电子科学研究院 | A kind of intelligent method of cutting out of photo and device |
CN107392244A (en) * | 2017-07-18 | 2017-11-24 | 厦门大学 | The image aesthetic feeling Enhancement Method returned based on deep neural network with cascade |
Non-Patent Citations (1)
Title |
---|
Wenguan Wang, Jianbing Shen: "Deep Cropping via Attention Box Prediction and Aesthetics Assessment", 2017 IEEE International Conference on Computer Vision *
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109712164A (en) * | 2019-01-17 | 2019-05-03 | 上海携程国际旅行社有限公司 | Image intelligent cut-out method, system, equipment and storage medium |
CN109886317A (en) * | 2019-01-29 | 2019-06-14 | 中国科学院自动化研究所 | General image aesthetics appraisal procedure, system and equipment based on attention mechanism |
CN109886317B (en) * | 2019-01-29 | 2021-04-27 | 中国科学院自动化研究所 | General image aesthetic evaluation method, system and equipment based on attention mechanism |
CN111316319A (en) * | 2019-03-15 | 2020-06-19 | 深圳市大疆创新科技有限公司 | Image processing method, electronic device, and computer-readable storage medium |
WO2020186385A1 (en) * | 2019-03-15 | 2020-09-24 | 深圳市大疆创新科技有限公司 | Image processing method, electronic device, and computer-readable storage medium |
EP3951574A4 (en) * | 2019-05-09 | 2022-06-01 | Huawei Technologies Co., Ltd. | Image processing method and apparatus, and device |
CN110147833A (en) * | 2019-05-09 | 2019-08-20 | 北京迈格威科技有限公司 | Facial image processing method, apparatus, system and readable storage medium storing program for executing |
CN110456960A (en) * | 2019-05-09 | 2019-11-15 | 华为技术有限公司 | Image processing method, device and equipment |
WO2020224488A1 (en) * | 2019-05-09 | 2020-11-12 | 华为技术有限公司 | Image processing method and apparatus, and device |
US12008761B2 (en) | 2019-05-09 | 2024-06-11 | Huawei Technologies Co., Ltd. | Image processing method and apparatus, and device |
US11914850B2 (en) | 2019-06-30 | 2024-02-27 | Huawei Technologies Co., Ltd. | User profile picture generation method and electronic device |
CN110580678A (en) * | 2019-09-10 | 2019-12-17 | 北京百度网讯科技有限公司 | image processing method and device |
CN112541934B (en) * | 2019-09-20 | 2024-02-27 | 百度在线网络技术(北京)有限公司 | Image processing method and device |
CN112541934A (en) * | 2019-09-20 | 2021-03-23 | 百度在线网络技术(北京)有限公司 | Image processing method and device |
CN111199541A (en) * | 2019-12-27 | 2020-05-26 | Oppo广东移动通信有限公司 | Image quality evaluation method, image quality evaluation device, electronic device, and storage medium |
CN113066008A (en) * | 2020-01-02 | 2021-07-02 | 杭州喔影网络科技有限公司 | Jigsaw generating method and equipment |
CN111461968B (en) * | 2020-04-01 | 2023-05-23 | 抖音视界有限公司 | Picture processing method, device, electronic equipment and computer readable medium |
CN111461964A (en) * | 2020-04-01 | 2020-07-28 | 北京字节跳动网络技术有限公司 | Picture processing method and device, electronic equipment and computer readable medium |
CN111461964B (en) * | 2020-04-01 | 2023-04-25 | 抖音视界有限公司 | Picture processing method, device, electronic equipment and computer readable medium |
CN111461969B (en) * | 2020-04-01 | 2023-04-07 | 抖音视界有限公司 | Method, device, electronic equipment and computer readable medium for processing picture |
CN111461969A (en) * | 2020-04-01 | 2020-07-28 | 北京字节跳动网络技术有限公司 | Method, device, electronic equipment and computer readable medium for processing picture |
CN111461968A (en) * | 2020-04-01 | 2020-07-28 | 北京字节跳动网络技术有限公司 | Picture processing method and device, electronic equipment and computer readable medium |
US11704811B2 (en) | 2020-05-22 | 2023-07-18 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for generating background-free image, device, and medium |
EP3846122A3 (en) * | 2020-05-22 | 2021-11-24 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for generating background-free image, device, and medium |
KR20210047282A (en) * | 2020-05-22 | 2021-04-29 | 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. | Background-free image generation method and device, equipment and medium |
CN111640123A (en) * | 2020-05-22 | 2020-09-08 | 北京百度网讯科技有限公司 | Background-free image generation method, device, equipment and medium |
KR102466394B1 (en) * | 2020-05-22 | 2022-11-11 | 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. | Background-free image generation method and device, equipment and medium |
CN111640123B (en) * | 2020-05-22 | 2023-08-11 | 北京百度网讯科技有限公司 | Method, device, equipment and medium for generating background-free image |
CN111696112A (en) * | 2020-06-15 | 2020-09-22 | 携程计算机技术(上海)有限公司 | Automatic image cutting method and system, electronic equipment and storage medium |
CN111696112B (en) * | 2020-06-15 | 2023-04-07 | 携程计算机技术(上海)有限公司 | Automatic image cutting method and system, electronic equipment and storage medium |
CN111768416A (en) * | 2020-06-19 | 2020-10-13 | Oppo广东移动通信有限公司 | Photo clipping method and device |
CN111768416B (en) * | 2020-06-19 | 2024-04-19 | Oppo广东移动通信有限公司 | Photo cropping method and device |
CN112017193A (en) * | 2020-08-24 | 2020-12-01 | 杭州趣维科技有限公司 | Image cropping device and method based on visual saliency and aesthetic score |
WO2022127814A1 (en) * | 2020-12-15 | 2022-06-23 | 影石创新科技股份有限公司 | Method and apparatus for detecting salient object in image, and device and storage medium |
CN112700454B (en) * | 2020-12-28 | 2024-05-14 | 北京达佳互联信息技术有限公司 | Image cropping method and device, electronic equipment and storage medium |
CN112700454A (en) * | 2020-12-28 | 2021-04-23 | 北京达佳互联信息技术有限公司 | Image cropping method and device, electronic equipment and storage medium |
CN114911551A (en) * | 2021-02-08 | 2022-08-16 | 花瓣云科技有限公司 | Display method and electronic equipment |
CN113205522B (en) * | 2021-04-28 | 2022-05-13 | 华中科技大学 | Intelligent image clipping method and system based on antithetical domain adaptation |
CN113205522A (en) * | 2021-04-28 | 2021-08-03 | 华中科技大学 | Intelligent image clipping method and system based on antithetical domain adaptation |
CN113538460B (en) * | 2021-07-12 | 2022-04-08 | 中国科学院地质与地球物理研究所 | Shale CT image cutting method and system |
CN113538460A (en) * | 2021-07-12 | 2021-10-22 | 中国科学院地质与地球物理研究所 | Shale CT image cutting method and system |
CN113763291B (en) * | 2021-09-03 | 2023-08-29 | 深圳信息职业技术学院 | Performance evaluation method for maintaining boundary filtering algorithm, intelligent terminal and storage medium |
CN113763291A (en) * | 2021-09-03 | 2021-12-07 | 深圳信息职业技术学院 | Performance evaluation method for preserving boundary filtering algorithm, intelligent terminal and storage medium |
WO2023093683A1 (en) * | 2021-11-24 | 2023-06-01 | 北京字节跳动网络技术有限公司 | Image cropping method and apparatus, model training method and apparatus, electronic device, and medium |
CN114529715B (en) * | 2022-04-22 | 2022-07-19 | 中科南京智能技术研究院 | Image identification method and system based on edge extraction |
CN114529715A (en) * | 2022-04-22 | 2022-05-24 | 中科南京智能技术研究院 | Image identification method and system based on edge extraction |
CN117152409A (en) * | 2023-08-07 | 2023-12-01 | 中移互联网有限公司 | Image clipping method, device and equipment based on multi-mode perception modeling |
CN117152409B (en) * | 2023-08-07 | 2024-09-27 | 中移互联网有限公司 | Image clipping method, device and equipment based on multi-mode perception modeling |
Also Published As
Publication number | Publication date |
---|---|
CN109146892B (en) | 2020-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109146892A (en) | A kind of image cropping method and device based on aesthetics | |
CN110163198B (en) | Table identification reconstruction method and device and storage medium | |
CN109255352B (en) | Target detection method, device and system | |
US9235759B2 (en) | Detecting text using stroke width based text detection | |
CN110738207A (en) | character detection method for fusing character area edge information in character image | |
CN109583449A (en) | Character identifying method and Related product | |
CN110349082B (en) | Image area clipping method and device, storage medium and electronic device | |
CN109685055A (en) | Text filed detection method and device in a kind of image | |
CN107229932A (en) | A kind of recognition methods of image text and device | |
CN110163076A (en) | A kind of image processing method and relevant apparatus | |
Du et al. | Segmentation and sampling method for complex polyline generalization based on a generative adversarial network | |
CN109376659A (en) | Training method, face critical point detection method, apparatus for face key spot net detection model | |
CN111738280A (en) | Image identification method, device, equipment and readable storage medium | |
CN113112511B (en) | Method and device for correcting test paper, storage medium and electronic equipment | |
CN109858409A (en) | Manual figure conversion method, device, equipment and medium | |
CN109472193A (en) | Method for detecting human face and device | |
CN110414571A (en) | A kind of website based on Fusion Features reports an error screenshot classification method | |
CN109740585A (en) | A kind of text positioning method and device | |
CN110147833A (en) | Facial image processing method, apparatus, system and readable storage medium storing program for executing | |
CN113570540A (en) | Image tampering blind evidence obtaining method based on detection-segmentation architecture | |
CN109635755A (en) | Face extraction method, apparatus and storage medium | |
CN113011409A (en) | Image identification method and device, electronic equipment and storage medium | |
CN114283431B (en) | Text detection method based on differentiable binarization | |
CN111783561A (en) | Picture examination result correction method, electronic equipment and related products | |
CN111476308B (en) | Remote sensing image classification method and device based on priori geometric constraint and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||