CN108259770B - Image processing method, image processing device, storage medium and electronic equipment - Google Patents
- Publication number
- CN108259770B CN108259770B CN201810277626.5A CN201810277626A CN108259770B CN 108259770 B CN108259770 B CN 108259770B CN 201810277626 A CN201810277626 A CN 201810277626A CN 108259770 B CN108259770 B CN 108259770B
- Authority
- CN
- China
- Prior art keywords
- image
- preset
- target
- replacement
- position information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Embodiments of the present application disclose an image processing method, an image processing device, a storage medium, and an electronic device. The method includes: continuously acquiring multiple frames of preset images and selecting a base image from them; acquiring depth-of-field information of each person image in the base image, and obtaining a preset depth-of-field range according to that information; determining target person images beyond the preset depth-of-field range; acquiring the target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the person corresponding to the target person image has changed relative to the target position information; and, in the base image, replacing the target person image with the screenshot corresponding to the target position information in the replacement preset image, to obtain a target preset image with the screenshot replaced. Unwanted person images are thereby removed from the base image, a cleaner background is obtained, and the quality of the photo is improved.
Description
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
Users often take photos with a terminal camera, especially when travelling, photographing themselves at famous scenic spots as keepsakes. Because such spots are crowded, strangers appear in the photos and also block the scenery behind the subjects, so the resulting photos look poor.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing device, a storage medium, and an electronic device, which can remove strangers from photos and improve photo quality.
In a first aspect, an embodiment of the present application provides an image processing method, including:
continuously acquiring multiple frames of preset images, and selecting a base image from the multiple frames of preset images;
acquiring depth-of-field information of each person image in the base image, and obtaining a preset depth-of-field range according to the depth-of-field information of each person image;
determining a target person image beyond the preset depth-of-field range;
acquiring target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the person corresponding to the target person image has changed relative to the target position information;
and, in the base image, replacing the target person image with the screenshot corresponding to the target position information in the replacement preset image, to obtain a target preset image with the screenshot replaced.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a base image acquisition module, configured to continuously acquire multiple frames of preset images and select a base image from them;
a preset depth-of-field range acquisition module, configured to acquire depth-of-field information of each person image in the base image and obtain a preset depth-of-field range according to that information;
a target person image determining module, configured to determine a target person image beyond the preset depth-of-field range;
a replacement preset image obtaining module, configured to obtain target position information of the target person image in the base image, and determine, from the preset images other than the base image, a replacement preset image in which the position of the person corresponding to the target person image has changed relative to the target position information;
and a processing module, configured to replace, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, to obtain a target preset image with the screenshot replaced.
In a third aspect, a storage medium is provided, on which a computer program is stored; when the computer program runs on a computer, it causes the computer to execute the steps of the image processing method provided in the embodiments of the present application.
In a fourth aspect, an electronic device provided in the embodiments of the present application includes a processor and a memory storing a computer program, where the processor is configured to execute the steps of the image processing method provided in the embodiments of the present application by invoking the computer program.
In the embodiments of the present application, multiple frames of preset images are first continuously acquired and a base image is selected from them; depth-of-field information of each person image in the base image is then obtained, and a preset depth-of-field range is derived from that information; target person images beyond the preset depth-of-field range are then determined; the target position information of the target person image in the base image is obtained, and a replacement preset image in which the position of the person corresponding to the target person image has changed relative to the target position information is determined from the preset images other than the base image; finally, in the base image, the target person image is replaced with the screenshot corresponding to the target position information in the replacement preset image. That is, the method first identifies whether an unwanted person image exists in the base image; if so, it is determined to be the target person image and its position is obtained; a replacement preset image in which the corresponding person has moved is then found among the other preset images; finally, the screenshot at the corresponding position is extracted from the replacement preset image and substituted for the person image. The unwanted person image is thereby removed from the base image, a cleaner background is obtained, and the quality of the photo is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a scene schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present application.
Fig. 5 is a schematic view of another scene of the image processing method according to the embodiment of the present application.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an image processing circuit of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
An embodiment of the present application provides an image processing method. The execution subject of the method may be the image processing apparatus provided in the embodiments of the present application, or an electronic device integrated with that apparatus, where the apparatus may be implemented in hardware or software. The electronic device may be a smartphone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
It can be understood that the execution subject of the embodiments of the present application may be a terminal device such as a smartphone, tablet computer, or palmtop computer, or another terminal device equipped with a camera unit.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the image processing method according to an embodiment of the present application. Users often take pictures with a terminal camera. After the shooting preview interface of the terminal camera is entered, the terminal collects images and displays them on the interface for the user to preview. The collected images can be stored in a buffer queue, i.e., multiple frames of images are stored in the buffer queue. When the collected images need to be processed, the terminal can fetch the most recently collected frames from the buffer queue. For example, the terminal may fetch the 8 most recently collected frames and process them. One of the 8 frames serves as the base image; the base image contains a passer-by, and in another frame the passer-by has moved to a different position. A screenshot of the area corresponding to the passer-by's position in the base image is cropped from that other frame, and the passer-by in the base image is then replaced with the screenshot, yielding an image without the passer-by.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application, where the image processing method includes:
101, continuously acquiring multiple frames of preset images, and selecting a base image from the multiple frames of preset images.
After the shooting preview interface of the terminal camera is entered, the terminal collects images and displays them on the interface for the user to preview. The collected images can be stored in a buffer queue, i.e., multiple frames are stored there. When the collected images need to be processed, the terminal fetches the most recently collected frames from the buffer queue; for example, it may fetch the 8 most recent frames and process them.
When the photographing function of the terminal is started, live-view images are collected first: the camera continuously collects multiple frames, for example 8 frames, and stores them in the buffer queue. A base image is then selected from the frames. Specifically, the first frame may be selected as the base image, or an intermediate frame may be selected.
102, obtaining depth-of-field information of each person image in the base image, and obtaining a preset depth-of-field range according to the depth-of-field information of each person image.
After the base image is obtained, the number of person images it contains is obtained through person recognition. Specifically, the number of person images can be counted with a face recognition technique. However, some person images show only the back or a side of the face and cannot be identified by face recognition; such person images can instead be identified by figure recognition. A figure can be recognized from its overall characteristics, such as the body shape, or from local characteristics, such as head or ear features.
The depth of field is the range of distances, in front of and behind the subject and measured from the front edge of the camera lens or other imager, within which a sharp image can be obtained. The aperture, the lens, and the distance to the subject are the main factors affecting the depth of field.
After focusing is completed, a sharp image is formed within a certain range in front of and behind the focal point; this range of distances is the depth of field. There is a space of a certain length in front of the lens (in front of and behind the focal plane); when the subject lies within this space, the blur of its image on the film stays within the limits of the permissible circle of confusion. The length of this space is the depth of field.
After the person images are identified, the depth information of each person image in the base image is obtained. The depth information gives the positional relation, along the depth direction, of each local image in the base image. For example, with a person in front, a road in the middle, and a building in the distance, the depth information orders the person, the road, and the building accordingly. Likewise, the longitudinal spatial distance between individual person images can be obtained from their depth information.
After the depth information of each person image in the base image is obtained, and because the person being photographed is the focus during shooting, the depth of field at the focus can be taken as a reference value. An offset is then applied on each side of the reference value to obtain the preset depth-of-field range: reference value − offset is the minimum, and reference value + offset is the maximum.
When several people are photographed together, the depth of field at the focus of the base image is taken as the reference value. The depth information of each person image includes a depth value; each depth value is compared with the reference value to obtain a difference. Because the people are photographed together, these differences are small and close to one another, so the depth values whose differences from the reference value are small and mutually close can be collected to obtain the preset depth-of-field range.
It should be noted that the multiple frames of preset images can be collected with two cameras, so that the depth information of each person image in the base image can be obtained.
103, determining the target person image beyond the preset depth-of-field range.
After the preset depth-of-field range is obtained, the depth value in the depth information of each person image is compared with it; if the depth value falls outside the preset depth-of-field range, the person image is determined to be a target person image, i.e., a stranger to the user.
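Steps 102 and 103 can be illustrated with a minimal sketch. The function names, focus depth, offset, and per-person depth values below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of steps 102-103: build the preset depth-of-field range
# around the focus depth (reference value +/- offset), then flag person images
# whose depth value falls outside that range. All names and values are assumed.

def preset_depth_range(reference, offset):
    """Range is [reference - offset, reference + offset]."""
    return (reference - offset, reference + offset)

def target_person_images(person_depths, depth_range):
    """Return ids of person images whose depth value exceeds the preset range."""
    low, high = depth_range
    return [pid for pid, depth in person_depths.items() if depth < low or depth > high]

# Example: the subjects stand at about 2.0 m (the focus); a passer-by at
# 5.5 m exceeds the preset range and is flagged as a target person image.
depths = {"user_a": 1.9, "user_b": 2.1, "passerby": 5.5}
rng = preset_depth_range(2.0, 0.5)            # (1.5, 2.5)
print(target_person_images(depths, rng))      # ['passerby']
```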
104, acquiring target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the person corresponding to the target person image has changed relative to the target position information.
A replacement preset image is determined from the preset images other than the base image: the position of the person corresponding to the target person image is located in each of the other preset images and compared with the target position information; if the comparison shows that the position has changed, that preset image is determined to be the replacement preset image.
105, replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, to obtain the target preset image with the screenshot replaced.
First, the screenshot corresponding to the target position information is extracted from the replacement preset image; then, in the base image, the target person image is replaced with the screenshot. The unwanted person image is thereby removed from the base image, a cleaner background is obtained, and the quality of the photo is improved.
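The replacement in step 105 amounts to copying the pixels at the target position from the replacement frame into the base frame. A minimal sketch, assuming rectangular regions and NumPy arrays for the frames (the disclosure describes general coordinate point sets, not rectangles):

```python
import numpy as np

def replace_region(base, replacement, box):
    """Copy the pixels inside box = (top, left, bottom, right) from the
    replacement frame into a copy of the base frame."""
    top, left, bottom, right = box
    patched = base.copy()
    patched[top:bottom, left:right] = replacement[top:bottom, left:right]
    return patched

base = np.zeros((4, 4), dtype=np.uint8)    # base frame: passer-by pixels set to 255
base[1:3, 1:3] = 255
clean = np.zeros((4, 4), dtype=np.uint8)   # replacement frame: that area is clear
patched = replace_region(base, clean, (1, 1, 3, 3))
print(patched.max())   # 0 -> the passer-by region now shows background
```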
It should be noted that the image processing method in this embodiment may be applied to the images in the buffer queue, i.e., the screenshot replacement is performed on buffered images to obtain the target preset image; the target preset image may likewise be stored in the buffer queue, or saved to internal storage, for example in the way a photo is saved.
In some embodiments, the step of acquiring target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the person corresponding to the target person image has changed relative to the target position information, includes:
acquiring the target position information of the target person image in the base image, the target position information including a first coordinate point set;
acquiring, from the preset images other than the base image, a plurality of pieces of second position information of the person corresponding to the target person image and a plurality of second coordinate point sets corresponding to the second position information;
determining, from the plurality of second coordinate point sets, a target coordinate point set that does not intersect the first coordinate point set;
and determining, from the preset images other than the base image, the replacement preset image corresponding to the target coordinate point set.
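The selection above reduces to a disjointness test between coordinate point sets. A minimal sketch, modelling the point sets as Python sets of (x, y) tuples (an assumption about representation; the disclosure does not fix one):

```python
def pick_replacement(first_set, candidates):
    """candidates maps frame id -> second coordinate point set.
    Return the first frame whose set is disjoint from first_set,
    i.e. the person has fully moved away from the target position."""
    for frame_id, second_set in candidates.items():
        if not (first_set & second_set):   # empty intersection: position changed
            return frame_id
    return None  # every candidate still overlaps: no replacement frame found

first = {(1, 1), (1, 2)}
frames = {"frame2": {(1, 2), (2, 2)},     # still overlaps at (1, 2)
          "frame3": {(5, 5), (5, 6)}}     # disjoint: person moved away
print(pick_replacement(first, frames))    # frame3
```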
In some embodiments, after the step of acquiring, from the preset images other than the base image, the plurality of pieces of second position information of the person corresponding to the target person image and the plurality of second coordinate point sets corresponding to the second position information, the method further includes:
if the plurality of second coordinate point sets all intersect the first coordinate point set, blurring the target person image, or blurring the area beyond the preset depth-of-field range, on the base image.
In some embodiments, the step of replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image includes:
acquiring the screenshot corresponding to the target position information in the replacement preset image;
determining whether the screenshot includes a person image;
if not, replacing the target person image with the screenshot in the base image;
if so, blurring the target person image on the base image.
In some embodiments, before the step of replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, the method further includes:
acquiring, in the base image, first coordinates of a reference object and a third coordinate point set of the target position information;
acquiring second coordinates of the reference object in the replacement preset image;
shifting the third coordinate point set according to the difference between the second coordinates and the first coordinates to obtain a fourth coordinate point set;
and extracting the screenshot corresponding to the fourth coordinate point set from the replacement preset image.
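The alignment steps above can be sketched as a translation by the reference-object offset. The names and coordinates below are illustrative assumptions:

```python
def shift_points(points, ref_in_base, ref_in_replacement):
    """Shift the third coordinate point set by the difference between the
    reference object's coordinates in the two frames."""
    dx = ref_in_replacement[0] - ref_in_base[0]
    dy = ref_in_replacement[1] - ref_in_base[1]
    return {(x + dx, y + dy) for x, y in points}

third_set = {(10, 10), (10, 11)}
# Assumed example: the reference object (e.g. the top of a pillar) appears
# 2 px further right in the replacement preset image than in the base image.
fourth_set = shift_points(third_set, ref_in_base=(100, 50), ref_in_replacement=(102, 50))
print(sorted(fourth_set))   # [(12, 10), (12, 11)]
```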
In some embodiments, the step of continuously acquiring multiple frames of preset images and selecting a base image from them includes:
continuously acquiring multiple frames of preset images, and selecting the base image from them according to the eye size of each face image in the frames;
and after the step of obtaining the target preset image with the screenshot replaced, the method further includes:
determining, in the target preset image, a face image to be processed whose eye size is smaller than a first preset threshold;
determining, from the preset images other than the target preset image, a replacement face image whose eye size is larger than a second preset threshold, the replacement face image and the face image to be processed being face images of the same user;
replacing, in the target preset image, the face image to be processed with the replacement face image to obtain a target preset image after image replacement processing;
and performing image noise reduction on the target preset image after the image replacement processing to obtain a composite image.
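The eye-size variant can be sketched as threshold checks over per-user eye sizes; the function names, thresholds, and size values are illustrative assumptions only:

```python
def faces_to_process(eye_sizes, first_threshold):
    """Faces in the target preset image whose eye size is below the first threshold."""
    return [user for user, size in eye_sizes.items() if size < first_threshold]

def find_replacement_face(candidates, second_threshold):
    """First candidate frame whose eye size (same user) clears the second threshold."""
    for frame_id, size in candidates.items():
        if size > second_threshold:
            return frame_id
    return None

eye_sizes = {"user_a": 0.8, "user_b": 0.3}   # user_b blinked in this frame
print(faces_to_process(eye_sizes, 0.5))       # ['user_b']
print(find_replacement_face({"frame2": 0.4, "frame5": 0.7}, 0.6))   # frame5
```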
The optional technical solutions above can be combined in any manner to form further optional embodiments of the present application, which are not described again here.
Referring to fig. 3, fig. 3 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the flow of the image processing method may include:
and 201, continuously acquiring multiple frames of preset images, and selecting a basic image from the multiple frames of preset images.
After entering a shooting preview interface of a terminal camera, the terminal can acquire images and display the images on the interface for a user to preview. The image collected by the terminal can be stored in a buffer queue, namely, a plurality of frames of images are stored in the buffer queue. When the acquired images need to be processed to a certain extent, the terminal can acquire the recently acquired multi-frame images from the buffer queue. For example, the terminal may obtain 8 frames of recently acquired images from the buffer queue, and perform certain processing on the 8 frames of images. A base image may be selected from the plurality of frame images. The first frame image may be selected as the base image, or an intermediate frame image may be selected as the base image.
202, obtaining depth-of-field information of each person image in the base image, and obtaining a preset depth-of-field range according to the depth-of-field information of each person image.
After the base image is obtained, the number of person images it contains is obtained through person recognition. Specifically, the number of person images can be counted with a face recognition technique. However, some person images show only the back or a side of the face and cannot be identified by face recognition; such person images can instead be identified by figure recognition. A figure can be recognized from its overall characteristics, such as the body shape, or from local characteristics, such as head or ear features.
After the person images are identified, the depth information of each person image in the base image is obtained. The depth information gives the positional relation, along the depth direction, of each local image in the base image. For example, with a person in front, a road in the middle, and a building in the distance, the depth information orders the person, the road, and the building accordingly. Likewise, the longitudinal spatial distance between individual person images can be obtained from their depth information.
After the depth information of each person image in the base image is obtained, and because the person being photographed is the focus during shooting, the depth of field at the focus can be taken as a reference value. An offset is then applied on each side of the reference value to obtain the preset depth-of-field range: reference value − offset is the minimum, and reference value + offset is the maximum.
When several people are photographed together, the depth of field at the focus of the base image is taken as the reference value. The depth information of each person image includes a depth value; each depth value is compared with the reference value to obtain a difference. Because the people are photographed together, these differences are small and close to one another, so the depth values whose differences from the reference value are small and mutually close can be collected to obtain the preset depth-of-field range.
203, determining the target person image beyond the preset depth-of-field range.
After the preset depth-of-field range is obtained, the depth value in the depth information of each person image is compared with it; if the depth value falls outside the preset depth-of-field range, the person image is determined to be a target person image, i.e., a stranger to the user.
204, acquiring target position information of the target person image in the base image, the target position information including a first coordinate point set.
The target position information of the target person image in the base image is acquired; it includes a first coordinate point set. The base image is treated as a coordinate plane, where each pixel (or each group of pixels) corresponds to a coordinate point, and the region occupied by the target person image is represented by the first coordinate point set.
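The representation above can be sketched by expanding the region occupied by the target person image into a coordinate point set; the rectangular region here is an assumption made for brevity:

```python
def region_to_point_set(top, left, bottom, right):
    """Enumerate the coordinate points covered by a rectangular region,
    one point per pixel (the disclosure also allows one point per pixel group)."""
    return {(x, y) for y in range(top, bottom) for x in range(left, right)}

first_coordinate_set = region_to_point_set(0, 0, 2, 2)
print(sorted(first_coordinate_set))   # [(0, 0), (0, 1), (1, 0), (1, 1)]
```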
It should be noted that the target person image may be a partial person image, such as only the upper body, with the lower body blocked by other objects.
205, acquiring, from the preset images other than the base image, a plurality of pieces of second position information of the person corresponding to the target person image and a plurality of second coordinate point sets corresponding to the second position information.
In each of the other preset images, the second position information of the person corresponding to the target person image and the second coordinate point set corresponding to that second position information are obtained. The plurality of other preset images thus yield a plurality of pieces of second position information and a plurality of second coordinate point sets.
In some embodiments, if the plurality of second coordinate point sets all intersect the first coordinate point set, the target person image is blurred on the base image.
If every second coordinate point set intersects the first coordinate point set, the person corresponding to the target person image has not moved, or has moved only a short distance, so a better background cannot be obtained from the other preset images; the target person image is therefore blurred on the base image. This reduces the impact of strangers on the photo and highlights the person images the user wants.
In some embodiments, if the plurality of second coordinate point sets all intersect the first coordinate point set, the area beyond the preset depth-of-field range is blurred on the base image.
If every second coordinate point set intersects the first coordinate point set, the person corresponding to the target person image has not moved, or has moved only a short distance, so a better background cannot be obtained from the other preset images; the area beyond the preset depth-of-field range is therefore blurred on the base image. This reduces the impact of strangers on the photo and highlights the person images the user wants.
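The blurring fallback can be sketched crudely by flattening the target region to its mean value; a real implementation would use a proper blur kernel, which the disclosure does not specify:

```python
import numpy as np

def blur_region(img, box):
    """Replace box = (top, left, bottom, right) with its mean value --
    a crude stand-in for the unspecified blurring operation."""
    top, left, bottom, right = box
    out = img.copy()
    out[top:bottom, left:right] = img[top:bottom, left:right].mean()
    return out

img = np.zeros((4, 4), dtype=np.float64)
img[1:3, 1:3] = [[0.0, 100.0], [100.0, 200.0]]   # the stranger's region
blurred = blur_region(img, (1, 1, 3, 3))
print(blurred[1, 1], blurred[2, 2])   # 100.0 100.0
```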
And 206, determining a target coordinate point set which does not intersect with the first coordinate point set from the plurality of second coordinate point sets.
It can be understood that, if the second coordinate point set corresponding to the user's image in some other preset image does not intersect the first coordinate point set, that is, the two sets share no coordinate point, the user has changed position between that preset image and the base image, and the movement distance is large.
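As a minimal sketch of this disjointness test (steps 205 and 206), with coordinate points modelled as Python `(x, y)` tuples; the function and variable names are illustrative, not from the patent:

```python
def find_target_sets(first_set, second_sets):
    """Return the second coordinate point sets that share no coordinate
    point with the first set, i.e. frames in which the passer-by has
    moved clear of the position they occupy in the base image."""
    first = set(first_set)
    return [s for s in second_sets if first.isdisjoint(s)]

# The target person occupies these pixels in the base image (toy example):
first_set = {(10, 10), (10, 11), (11, 10)}
# In frame 1 the person barely moved (sets overlap); in frame 2 they moved away.
frame1 = {(10, 11), (12, 12)}
frame2 = {(40, 40), (41, 40)}
targets = find_target_sets(first_set, [frame1, frame2])  # only frame2 qualifies
```

Any preset image whose second coordinate point set appears in `targets` is a candidate replacement preset image for step 207.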
And 207, determining a replacement preset image corresponding to the target coordinate point set from other preset images except the basic image.
And after the target coordinate point set is obtained, determining the preset image corresponding to the target coordinate point set as a replacement preset image from other preset images except the basic image.
And 208, acquiring, in the base image, a first coordinate of a reference object and a third coordinate point set of the target position information.
In the base image, the coordinate origin of the image is first determined so that the coordinate sets of the preset images can be expressed consistently. A fixed object in the base image, such as a pillar, is selected as the reference object, and the first coordinate of the reference object, for example the coordinate of the top of the pillar, is obtained.
And 209, acquiring a second coordinate of the reference object in the replacement preset image.
In the replacement preset image, the second coordinate of the same reference object as in the base image is acquired.
And 210, shifting the third coordinate point set according to the difference value of the second coordinate and the first coordinate to obtain a fourth coordinate point set.
The second coordinate is compared with the first coordinate to obtain a difference value. If the difference is 0, the coordinate systems of the base image and the replacement preset image are consistent, and the fourth coordinate point set is simply the third coordinate point set. If the difference is not 0, the coordinate systems of the two images are inconsistent and their coordinate origins differ, so the third coordinate point set is shifted by the difference to obtain the fourth coordinate point set.
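The offset logic of steps 208 to 210 can be sketched as follows; the coordinates and names are illustrative assumptions, not values from the patent:

```python
def shift_point_set(third_set, first_coord, second_coord):
    """Shift the target-position point set by the offset between the
    reference object's coordinates in the two images. A zero offset
    returns the set unchanged, matching the difference-is-0 case."""
    dx = second_coord[0] - first_coord[0]
    dy = second_coord[1] - first_coord[1]
    return {(x + dx, y + dy) for (x, y) in third_set}

# Reference object (e.g. the top of a pillar) in the two images:
first = (100, 40)    # first coordinate, in the base image
second = (103, 42)   # second coordinate, in the replacement preset image
third_set = {(10, 10), (11, 10)}
fourth_set = shift_point_set(third_set, first, second)  # shifted by (+3, +2)
```

The screenshot of step 211 would then be cropped at the pixels listed in `fourth_set`.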
And 211, extracting a screenshot corresponding to the fourth coordinate point set in the replacement preset image.
In the replacement preset image, the screenshot corresponding to the fourth coordinate point set is extracted directly. Its position corresponds to the position of the target person image in the base image, so after replacement the edge of the screenshot fuses well with the surroundings of the target person image.
And 212, determining whether the screenshot includes a person image.
Whether the screenshot includes a person image is determined. Specifically, only the screenshot content may be detected, or the images around the screenshot may also be obtained and the screenshot plus its surroundings detected as a whole, which prevents a partial person image inside the screenshot from going unrecognized.
And 213, if not, replacing the target person image in the base image with the screenshot to obtain the target preset image after screenshot replacement.
If the screenshot does not include a person image, the target person image in the base image is replaced with the screenshot. That is, a background patch is obtained from another preset image and substituted for the target person image, yielding a preset image without the target person image.
And 214, if so, blurring the target person image in the base image to obtain the target preset image.
If the screenshot includes a person image, the background obtained by replacing the target person image with the screenshot would still contain another person image, the replaced region would differ visibly from its surroundings, and the effect would be poor; the target person image is therefore blurred in the base image, reducing its influence on the base image.
As can be seen from the above, in the embodiment of the present application, multiple preset images are acquired continuously and a base image is selected from them; the depth-of-field information of each person image in the base image is obtained, and a preset depth-of-field range is derived from it; the target person image beyond the preset depth-of-field range is determined; the target position information of the target person image in the base image is obtained, and a replacement preset image, in which the position of the user corresponding to the target person image has changed relative to the target position information, is determined from the other preset images; finally, the target person image in the base image is replaced with the screenshot corresponding to the target position information in the replacement preset image. In short, the method first identifies whether an unwanted person image exists in the base image; if so, that image is taken as the target person image and its position is obtained, a replacement preset image in which that user's position has changed is found among the other preset images, and the screenshot at the corresponding position is extracted from the replacement preset image to replace the person image. The unwanted person image is thus removed from the base image, a better background is obtained, and the effect of the photo is improved.
Referring to fig. 4, fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present disclosure, where the image processing method includes:
301, continuously acquiring multiple preset images, and selecting a basic image from the multiple preset images according to the eye size of each face image in the multiple preset images.
For example, after multiple frames of preset images are continuously acquired, the terminal may treat each frame among them as an image to be processed. The terminal can then determine one frame of base image from the images to be processed, the base image at least containing a face image that meets a preset condition. After that, the terminal may perform preset processing on the base image and output it.
For example, the preset condition may be that the eyes of a certain user in the base image are larger than the eyes of the user in other images to be processed.
For example, all the images to be processed are single-person images of the same user, denoted A, B, C, D, E, F, G, and H, and a numerical value indicating the size of the user's eyes is obtained for each image. Suppose the value for image D, 88, is the largest among them, the other frames scoring roughly in the range of 83 to 86. Since the user's eyes are opened widest in image D, the terminal may determine image D as the base image.
In one embodiment, the terminal may detect the eye size in an image as follows. For example, the terminal may first locate the eye region in the image using face and eye recognition, and then compute the area ratio of the eye region to the whole image. If the area ratio is large, the user's eyes are considered to be opened wide and the eye size large; if the area ratio is small, the eyes are considered narrowly opened and the eye size small. As another example, the terminal may count the number of pixels the eyes occupy in the vertical direction, and use that count to represent the eye size.
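A rough sketch of the two measures just described (area ratio and vertical pixel count), assuming some earlier detection step has already produced a binary eye mask; the mask and helper names are illustrative:

```python
def eye_size_ratio(eye_mask):
    """Eye openness as the fraction of pixels covered by the detected
    eye region. eye_mask is a 2D list of 0/1 values produced by an
    eye-detection step that is assumed, not shown here."""
    total = sum(len(row) for row in eye_mask)
    eye = sum(sum(row) for row in eye_mask)
    return eye / total if total else 0.0

def eye_height_px(eye_mask):
    """Alternative measure: the number of rows containing eye pixels,
    i.e. the eye's vertical extent in pixels."""
    return sum(1 for row in eye_mask if any(row))

# Toy 4x4 mask with a 2x2 eye region:
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
```

Here `eye_size_ratio(mask)` is 0.25 and `eye_height_px(mask)` is 2; either number could serve as the per-frame eye-size value the text compares across frames.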
302, obtaining depth of field information of each person image in the basic image, and obtaining a preset depth of field range according to the depth of field information of each person image.
After the base image is obtained, the number of person images it contains is obtained through person recognition. After the person images are identified, the depth-of-field information of each person image in the base image is obtained. From the depth information, the positional relation of each local image in the depth direction can be derived. For example, for a person, a road, and a building, the depth information can establish that the person is in front, the road in the middle, and the building in the distance. Similarly, the longitudinal spatial distance between the individual person images can be obtained from their depth information.
After the depth-of-field information of each person image in the base image is obtained, and because the people being photographed, such as family members, are used as the focus during shooting, the depth of field at the focus can be taken as a reference value. Offset values are then applied on both sides of the reference value to obtain the preset depth-of-field range: the minimum is (reference value - offset value) and the maximum is (reference value + offset value).
When multiple people are photographed together, the depth of field at the focus of the base image is taken as the reference value. The depth-of-field information of each person image includes a depth value, which is compared with the reference value to obtain a difference. Because the people are posing together, these differences are small and close to one another; the depth values whose differences from the reference value are small and mutually close can then be gathered to obtain the preset depth-of-field range.
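The range construction described above might be sketched like this, with the focus depth as the reference value and an assumed offset; all numbers are illustrative, not from the patent:

```python
def preset_depth_range(focus_depth, person_depths, offset):
    """Build the preset depth-of-field range around the focus depth:
    [reference - offset, reference + offset]. Person depths inside the
    range belong to the intended subjects; those outside it are the
    candidates for the target person image of step 303."""
    lo, hi = focus_depth - offset, focus_depth + offset
    subjects = [d for d in person_depths if lo <= d <= hi]
    strangers = [d for d in person_depths if d < lo or d > hi]
    return (lo, hi), subjects, strangers

# Focus on the family at depth 2.0 m; a passer-by stands at 5.5 m.
rng, subjects, strangers = preset_depth_range(2.0, [1.9, 2.1, 5.5], 0.5)
```

With these toy values the range is (1.5, 2.5), the subjects at 1.9 m and 2.1 m fall inside it, and the 5.5 m person falls outside and would be treated as the target person image.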
303, determining the image of the target person beyond the preset depth of field range.
After the preset depth-of-field range is obtained, the depth value in each person image's depth-of-field information is compared against the range; a person image whose depth value falls outside the range is determined to be a target person image, that is, a person unknown to the user.
And 304, acquiring target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the user corresponding to the target person image has changed relative to the target position information.
A replacement preset image is determined from the preset images other than the base image: the position of the user corresponding to the target person image is located in each other preset image and compared with the target position information; if the comparison shows that the user's position has changed, that preset image is determined to be the replacement preset image.
And 305, replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, to obtain the target preset image after screenshot replacement.
First, the screenshot corresponding to the target position information is extracted from the replacement preset image, and the target person image in the base image is then replaced with that screenshot. The unwanted person image is thereby removed from the base image, a better background is obtained, and the effect of the photo is improved.
And 306, determining the face image to be processed with the eye size smaller than the first preset threshold value in the target preset image.
After screenshot replacement yields the target preset image, the eye size of each face image in it is detected; a face image whose eye size is smaller than a first preset threshold is determined to be a face image to be processed. The first preset threshold may be uniform, or a separate threshold may be set per face image. For example, for face image A, the maximum eye size of that face across all images in the buffer queue is obtained, and the first preset threshold is set as a preset proportion of that maximum, for example 60%; other ratios such as 40% or 80% are also possible.
307, determining a replacement face image with the eye size larger than a second preset threshold value from other preset images except the target preset image, wherein the replacement face image and the face image to be processed are face images of the same user.
The eye size of the corresponding user is detected in the other preset images, and a face image whose eye size is larger than the second preset threshold is determined to be the replacement face image. The second preset threshold may be, for example, 80% of the maximum eye size; other ratios such as 70% or 90% are also possible.
And 308, replacing the face image to be processed with a replacement face image in the target preset image to obtain the target preset image subjected to image replacement processing.
In the target preset image, the face image to be processed is replaced with the replacement face image, so that in the image-replaced target preset image the eyes of each face are large rather than closed or narrowed, yielding an image in which no face has its eyes closed.
It should be noted that the replacement may be the replacement of the whole face image, or may be the replacement of only the eye image.
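Steps 306 to 308 can be sketched as a small planning function; the 60% and 80% ratios follow the examples in the text, while the data layout and names are assumptions:

```python
def plan_eye_replacements(eye_sizes, target_frame, r1=0.6, r2=0.8):
    """For each face, if its eye size in the target frame is below
    r1 * (its maximum over all frames), find a frame where it exceeds
    r2 * maximum and use that frame's face as the replacement.
    eye_sizes maps face -> {frame: eye size}."""
    plan = {}
    for face, sizes in eye_sizes.items():
        peak = max(sizes.values())
        if sizes[target_frame] < r1 * peak:          # first preset threshold
            candidates = {f: s for f, s in sizes.items()
                          if s > r2 * peak}          # second preset threshold
            if candidates:
                plan[face] = max(candidates, key=candidates.get)
    return plan

sizes = {"face1": {"X": 88, "Y": 40},   # eyes nearly closed in frame Y
         "face2": {"X": 80, "Y": 82}}   # eyes fine in frame Y
plan = plan_eye_replacements(sizes, target_frame="Y")
```

With these toy values only `face1` needs work, and the plan says to take its face (or just its eye region, per the note above) from frame X.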
309, performing image denoising processing on the target preset image subjected to the image replacement processing to obtain a composite image.
And performing image noise reduction processing on the target preset image subjected to the image replacement processing, such as performing mean filtering, median filtering or Gaussian filtering to perform image noise reduction.
It should be noted that the image processing method in this embodiment may be used to process the images in the buffer queue, that is, determine the basic image according to the eye size of the facial image for the multiple frames of preset images buffered in the buffer queue, remove the passerby image to obtain the target preset image, and replace the facial image with an eye-open image to obtain the final image, so as to improve the effect of the photograph. The final image may be stored as a photograph within an internal memory chip of the terminal.
In some embodiments, image noise reduction is performed on the target preset image subjected to the image replacement processing. For example, the terminal may denoise the target preset image using multi-frame noise reduction. If image D is determined as the target preset image, the terminal may perform multi-frame noise reduction on image D using 4 continuously acquired frames that include image D, for example using images C, E, and F together with image D.
In multi-frame noise reduction, the terminal may first align images C, D, E, and F and obtain the pixel values of each group of aligned pixels. If the pixel values within a group are close to one another, the terminal can compute their mean and replace the corresponding pixel value in image D with that mean. If the pixel values within a group differ significantly, the pixel value in image D may be left unadjusted.
For example, suppose the pixel P1 in image C, the pixel P2 in image D, the pixel P3 in image E, and the pixel P4 in image F form a group of mutually aligned pixels, with pixel values 101, 102, 103, and 104 respectively. The mean of these values is 102.5, so the terminal may adjust the value of pixel P2 in image D from 102 to 102.5, thereby denoising that pixel. If instead the pixel values of P1, P2, P3, and P4 are 80, 102, 83, and 90, the value of P2 may be left unadjusted (it remains 102), because the values differ too much; a large spread usually indicates a genuine content difference rather than noise.
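The per-pixel rule in this example can be sketched as follows; the closeness threshold is an assumption, since the text does not specify how "not different" is judged:

```python
def denoise_pixel(values, target_index, max_spread=5):
    """Multi-frame noise reduction for one group of aligned pixels:
    if the values are close, replace the target frame's value with
    their mean; if they differ too much (likely motion, not noise),
    keep the target frame's value. max_spread is an assumed threshold."""
    if max(values) - min(values) <= max_spread:
        return sum(values) / len(values)
    return values[target_index]

# The two cases from the text: aligned pixels from images C, D, E, F,
# with image D (index 1) as the target frame.
close = [101, 102, 103, 104]
far = [80, 102, 83, 90]
new_d1 = denoise_pixel(close, target_index=1)  # mean 102.5 replaces 102
new_d2 = denoise_pixel(far, target_index=1)    # 102 kept unchanged
```

Applied over every aligned pixel group, this reproduces the behaviour described for image D.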
Referring to fig. 5, fig. 5 is a schematic view of another scene of the image processing method according to the embodiment of the present application. In this embodiment, after entering the preview interface of the camera, the terminal may acquire one frame of image every 30 to 60 milliseconds according to the current environmental parameter, and store the acquired image in the buffer queue. The buffer queue may be a fixed-length queue, for example, the buffer queue may store 15 frames of images newly acquired by the terminal.
For example, user A opens the terminal camera to prepare to shoot a group photo of three people, Bing, Ding, and Wu. At this point the terminal can detect that the camera is acquiring an image containing a human face. In this case, the terminal may first acquire a current environmental parameter, for example the ambient light brightness.
After entering the camera's preview interface, the terminal captures one frame at regular intervals according to the currently measured ambient light brightness. Before the user presses the photographing button, the terminal may first obtain 3 captured frames from the cache queue; it can be understood that all 3 frames contain the face images of Bing, Ding, and Wu. The terminal can then detect whether the positions of the three face images within the picture are displaced across the 3 frames. For example, in this embodiment, the terminal detects that the positions are not displaced.
Then, the terminal can judge whether the terminal is currently in a dark light environment according to the acquired environmental light brightness. For example, the terminal determines that it is currently in a dim light environment.
Then, according to the obtained information (the positions of the face images of Bing, Ding, and Wu in the picture are not displaced, and the terminal is currently in a dim-light environment), the terminal determines a target frame number, which represents the number of images the terminal needs to acquire. For example, the target frame number is determined to be 6 frames.
After the target frame number is determined, when images need to be obtained from the buffer queue (for example, when user A presses the photographing button), the terminal may obtain the output value of the gyroscope sensor recorded when each frame in the buffer queue was captured. The terminal can then determine the target images from the buffer queue, namely the frames for which the gyroscope output value at capture time was within a preset range.
For example, since the output values of the gyroscope sensor are the terminal's angular velocities about three axes, the target images determined by the terminal may be those satisfying the following condition: the sum of squares of the three axial angular velocities at capture time is less than or equal to 0.12. When this condition holds, the terminal can be regarded as not shaking, or shaking only slightly, so a target image can be regarded as one captured while the terminal was held stable.
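The stability test can be sketched directly from the stated condition (sum of squared axial angular velocities at most 0.12); the function name is illustrative:

```python
def is_stable(gyro_xyz, threshold=0.12):
    """A frame counts as stable if the sum of squares of the three
    axial angular velocities at capture time does not exceed the
    threshold (0.12 in the text's example)."""
    wx, wy, wz = gyro_xyz
    return wx * wx + wy * wy + wz * wz <= threshold

stable = is_stable((0.1, 0.2, 0.1))        # 0.01 + 0.04 + 0.01 = 0.06
shaky = not is_stable((0.3, 0.2, 0.1))     # 0.09 + 0.04 + 0.01 = 0.14
```

Frames passing this test become the target images from which the image groups below are formed.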
For example, 8 frames of images in the buffer queue, namely S, T, U, V, W, X, Y, and Z, are determined as target images, and these 8 frames happen to be images that were continuously acquired by the terminal.
Since the number of target images (8 frames) is greater than the target frame number (6 frames), the terminal can detect whether the target images contain an image group of 6 frames that were acquired continuously by the terminal.
For example, since the images S, T, U, V, W, X, Y, Z happen to have been captured continuously by the terminal, the terminal can determine that three such image groups exist: a first image group S, T, U, V, W, X; a second image group T, U, V, W, X, Y; and a third image group U, V, W, X, Y, Z.
Then, the terminal may obtain the sharpness of each target image, compute the sum of the sharpness values of the images in each image group, and take the images of the group whose sharpness sum is the largest. For example, the sharpness sum of the third image group U, V, W, X, Y, Z is the largest.
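The group selection can be sketched as a sliding-window maximum over sharpness sums; the sharpness values below are illustrative, not from the text:

```python
def best_group(sharpness, group_len):
    """Slide a window of group_len consecutive frames over the target
    images and return (start index, sharpness sum) of the window whose
    sharpness sum is largest."""
    best_start, best_sum = 0, float("-inf")
    for start in range(len(sharpness) - group_len + 1):
        s = sum(sharpness[start:start + group_len])
        if s > best_sum:
            best_start, best_sum = start, s
    return best_start, best_sum

# Assumed sharpness values for the 8 target frames S..Z:
frames = "STUVWXYZ"
sharp = [5, 6, 7, 9, 9, 8, 9, 9]
start, total = best_group(sharp, 6)
group = frames[start:start + 6]  # the winning 6-frame group
```

With these toy values the third window U..Z wins (sum 51), matching the example in the text where the third image group has the largest sharpness sum.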
After the 6 frames U, V, W, X, Y, Z are obtained, the terminal can perform face and eye recognition on them and obtain the eye size of each face in each frame. For example, the values representing Bing's eye size across these images are 81, 83, 84, 86, and 85; the values representing Ding's eye size are 75, 77, 79, 78, and 77; and the values representing Wu's eye size are 84, 85, 86, 88, and 86.
Since the 6 frames are group images containing several people, the terminal selects, as the target preset image, the frame in which the most people have their eyes open at their largest.
For example, for Bing, the face images in which his eyes are at their largest appear in images X and Y. For Ding, the face image with the largest eyes appears in image X. For Wu, it appears in image Y. Since the face images with the largest eyes of two people (Bing and Wu) appear in image Y, the terminal can determine image Y as the target preset image.
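The frame choice described here, picking the frame in which the most people reach their personal maximum eye size, can be sketched as follows with illustrative numbers (not the text's exact values):

```python
def pick_target_frame(eye_sizes):
    """eye_sizes maps person -> {frame: eye size}. Count, per frame,
    how many people reach their personal maximum there, and return
    the frame with the highest count."""
    counts = {}
    for person, per_frame in eye_sizes.items():
        peak = max(per_frame.values())
        for frame, size in per_frame.items():
            if size == peak:
                counts[frame] = counts.get(frame, 0) + 1
    return max(counts, key=counts.get)

# Toy data: Bing and Wu peak in frame Y, Ding peaks in frame X.
sizes = {"Bing": {"X": 85, "Y": 86, "Z": 84},
         "Ding": {"X": 79, "Y": 75, "Z": 77},
         "Wu":   {"X": 86, "Y": 88, "Z": 85}}
frame = pick_target_frame(sizes)  # frame Y: two of the three peak there
```

The people who do not peak in the chosen frame (here Ding) are then handled by the face replacement of the next step.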
After image Y is determined as the target preset image, the terminal may replace Ding's face image in image Y with Ding's face image from image X (where Ding's eyes are the largest). It can be understood that once the face image replacement is completed, the eyes of Bing, Ding, and Wu in image Y are all at their largest across the 6 frames U, V, W, X, Y, Z.
After that, the terminal may perform multi-frame noise reduction processing on the image Y subjected to face image replacement based on the image W, X, Z, and output the image subjected to noise reduction processing to an album as a photograph.
It can be understood that, in this embodiment, image Y originally contains the face images of Bing and Wu in their wide-eyed states, and the terminal replaces Ding's face image in image Y with Ding's wide-eyed face image from image X, so that after image replacement, image Y contains wide-eyed face images of all three of Bing, Ding, and Wu. The terminal then performs noise reduction on image Y and outputs it to the album as a photo; the result is a wide-eyed group photo of Bing, Ding, and Wu, and thanks to the noise reduction its imaging effect is good.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 400 may include a base image acquisition module 401, a preset depth of field acquisition module 402, a target face image determination module 403, a replacement preset image acquisition module 404, and a processing module 405.
A basic image obtaining module 401, configured to continuously obtain multiple preset images, and select a basic image from the multiple preset images;
a preset depth-of-field range obtaining module 402, configured to obtain depth-of-field information of each person image in the basic image, and obtain a preset depth-of-field range according to the depth-of-field information of each person image;
a target face image determining module 403, configured to determine a target person image exceeding a preset depth range;
a replacement preset image obtaining module 404, configured to obtain target position information of the target person image in the base image, and to determine, from the preset images other than the base image, a replacement preset image in which the position of the user corresponding to the target person image has changed relative to the target position information;
and a processing module 405, configured to replace, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image.
Referring to fig. 7, fig. 7 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The replacement preset image obtaining module 404 includes a target position information obtaining module 4041, a second position information obtaining module 4042, a target coordinate point set obtaining module 4043, and a replacement preset image determining module 4044.
A target position information obtaining module 4041, configured to obtain target position information of a target person image in the base image, where the target position information includes a first coordinate point set;
a second position information obtaining module 4042, configured to obtain, from other preset images except the basic image, a plurality of second position information of the user corresponding to the target person image and a plurality of second coordinate point sets corresponding to the plurality of second position information;
a target coordinate point set obtaining module 4043, configured to determine, from the multiple second coordinate point sets, a target coordinate point set that does not intersect with the first coordinate point set;
the replacement preset image determining module 4044 is further configured to determine a replacement preset image corresponding to the target coordinate point set from other preset images except the base image.
In some embodiments, the processing module 405 is further configured to perform blurring on the target person image or blurring on an area beyond a preset depth of field on the base image if the plurality of second coordinate point sets intersect with the first coordinate point set.
In some embodiments, the processing module 405 is further configured to obtain the screenshot corresponding to the target position information in the replacement preset image; determine whether the screenshot includes a person image; if not, replace the target person image in the base image with the screenshot; and if so, blur the target person image in the base image.
In some embodiments, the processing module 405 is further configured to obtain the first coordinate of the reference object and the third coordinate point set of the target position information in the base image; acquiring a second coordinate of the reference object in the replacement preset image; according to the difference value of the second coordinate and the first coordinate, the third coordinate point set is shifted to obtain a fourth coordinate point set; and extracting a screenshot corresponding to the fourth coordinate point set in the replacement preset image.
In some embodiments, the basic image obtaining module 401 is further configured to continuously obtain multiple frames of preset images, and select a basic image from the multiple frames of preset images according to the eye size of each face image in the multiple frames of preset images.
The processing module 405 is further configured to determine a to-be-processed face image of which the eye size is smaller than a first preset threshold in the target preset image; determining a replacement face image with the eye size larger than a second preset threshold value from other preset images except the target preset image, wherein the replacement face image and the face image to be processed are face images of the same user; replacing a face image to be processed with a replacement face image in a target preset image to obtain a target preset image subjected to image replacement processing; and performing image noise reduction processing on the target preset image subjected to the image replacement processing to obtain a composite image.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is used to execute the steps in the image processing method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 8, fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
The mobile terminal 500 may include components such as a sensor 501, a memory 502, and a processor 503. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 8 is not intended to limit the mobile terminal, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The sensor 501 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, and the like.
The memory 502 may be used to store applications and data. Memory 502 stores applications containing executable code. The application programs may constitute various functional modules. The processor 503 executes various functional applications and data processing by running an application program stored in the memory 502.
The processor 503 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing an application program stored in the memory 502 and calling data stored in the memory 502, thereby performing overall monitoring of the mobile terminal.
In this embodiment, the processor 503 in the mobile terminal loads the executable code corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 503 runs the application programs stored in the memory 502, thereby implementing the steps:
continuously acquiring multiple preset images, and selecting a basic image from the multiple preset images;
acquiring the depth of field information of each figure image in the basic image, and obtaining a preset depth of field range according to the depth of field information of each figure image;
determining a target figure image exceeding a preset depth range;
acquiring target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the user corresponding to the target person image has changed relative to the target position information;
and in the basic image, replacing the screenshot of the corresponding target position information in the preset image with the target character image to obtain the target preset image after screenshot replacement.
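The depth-screening portion of the steps above can be expressed as a sketch. The data model (a mapping from person identifiers to depth values), the median-plus-tolerance rule, and the function names are illustrative assumptions; the embodiment does not prescribe how the preset depth-of-field range is derived from the per-person depth information.

```python
def preset_depth_range(person_depths, tolerance=1.0):
    """Derive a preset depth-of-field range from each person's depth.

    Illustrative rule only: the median depth plus/minus a fixed
    tolerance (in metres). The patent leaves the exact derivation open.
    """
    depths = sorted(person_depths.values())
    median = depths[len(depths) // 2]
    return median - tolerance, median + tolerance

def find_target_persons(person_depths, depth_range):
    """Persons whose depth falls outside the preset range, e.g. a
    passer-by far behind the group being photographed."""
    lo, hi = depth_range
    return [pid for pid, d in person_depths.items() if not lo <= d <= hi]

# Example: three subjects around 2 m and one passer-by at 8 m.
depths = {"A": 1.9, "B": 2.0, "C": 2.2, "D": 8.0}
rng = preset_depth_range(depths)
targets = find_target_persons(depths, rng)
print(targets)  # → ['D']
```

Under this toy rule, the passer-by "D" is flagged as the target person image, and the subsequent steps look for another frame in which that person has moved out of the way.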
As shown in fig. 9, the image processing circuit includes an image signal processor 640 and control logic 650. Image data captured by the imaging device 610 is first processed by the image signal processor 640, which analyzes the image data to collect image statistics that may be used to determine one or more control parameters of the imaging device 610. The imaging device 610 may include a camera having one or more lenses 611 and an image sensor 612. The image sensor 612 may include an array of color filters (e.g., Bayer filters); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the image signal processor 640. The sensor 620 may provide the raw image data to the image signal processor 640 based on the sensor 620 interface type. The sensor 620 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
The image signal processor 640 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor 640 may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The image signal processor 640 may also receive pixel data from the image memory 630. For example, raw pixel data is sent from the sensor 620 interface to the image memory 630, and the raw pixel data in the image memory 630 is then provided to the image signal processor 640 for processing. The image memory 630 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 620 interface or from the image memory 630, the image signal processor 640 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 630 for additional processing before being displayed. The image signal processor 640 receives the processed data from the image memory 630 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 670 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of the image signal processor 640 may also be sent to the image memory 630, and the display 670 may read image data from the image memory 630. In one embodiment, the image memory 630 may be configured to implement one or more frame buffers. In addition, the output of the image signal processor 640 may be sent to an encoder/decoder 660 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 670. The encoder/decoder 660 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the image signal processor 640 may be sent to the control logic 650. For example, the statistical data may include image sensor 612 statistics such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, lens 611 shading correction, and the like. The control logic 650 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the imaging device 610 based on the received statistical data. For example, the control parameters may include sensor 620 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 611 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 611 shading correction parameters.
The following are steps for implementing the image processing method provided by this embodiment using the image processing circuit of fig. 9:
continuously acquiring multiple frames of preset images, and selecting a base image from the multiple frames of preset images;
acquiring depth-of-field information of each person image in the base image, and obtaining a preset depth-of-field range according to the depth-of-field information of each person image;
determining a target person image exceeding the preset depth-of-field range;
acquiring target position information of the target person image in the base image, and determining, from the preset images other than the base image, a replacement preset image in which the position of the user corresponding to the target person image has changed relative to the target position information;
and in the base image, replacing the target person image with a screenshot corresponding to the target position information in the replacement preset image, to obtain a target preset image after screenshot replacement.
In one embodiment, when performing the steps of acquiring the target position information of the target person image in the base image and determining, from the preset images other than the base image, a replacement preset image in which the position of the user corresponding to the target person image has changed relative to the target position information, the electronic device may perform:
acquiring target position information of the target person image in the base image, wherein the target position information comprises a first coordinate point set;
acquiring, from the preset images other than the base image, a plurality of second position information of the user corresponding to the target person image and a plurality of second coordinate point sets corresponding to the plurality of second position information;
determining, from the plurality of second coordinate point sets, a target coordinate point set that does not intersect the first coordinate point set;
and determining, from the preset images other than the base image, a replacement preset image corresponding to the target coordinate point set.
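The coordinate-set logic above can be sketched as follows. Representing each region as a literal set of pixel coordinates and the `rect_points` helper are illustrative assumptions made here for clarity; a real implementation would more likely test bounding boxes for overlap.

```python
def rect_points(x0, y0, x1, y1):
    """Coordinate point set covered by an axis-aligned rectangle
    (half-open ranges, pixel granularity)."""
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

def pick_replacement_frame(first_set, candidate_sets):
    """Return the index of the first candidate frame whose second
    coordinate point set is disjoint from the first coordinate point
    set (the target region in the base image), or None if every
    candidate still overlaps it."""
    for i, second_set in enumerate(candidate_sets):
        if first_set.isdisjoint(second_set):
            return i
    return None

first = rect_points(10, 10, 20, 20)      # target position in the base image
frames = [rect_points(15, 15, 25, 25),   # person still overlaps the region
          rect_points(30, 10, 40, 20)]   # person has moved clear
print(pick_replacement_frame(first, frames))  # → 1
```

When `pick_replacement_frame` returns `None`, there is no usable replacement preset image, which corresponds to the fallback blurring case described in the next embodiment.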
In one embodiment, after performing the step of acquiring, from the preset images other than the base image, a plurality of second position information of the user corresponding to the target person image and a plurality of second coordinate point sets corresponding to the plurality of second position information, the electronic device may further perform:
if each of the second coordinate point sets intersects the first coordinate point set, blurring the target person image in the base image, or blurring the area beyond the preset depth-of-field range.
In one embodiment, when replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, the electronic device may perform:
acquiring the screenshot corresponding to the target position information in the replacement preset image;
determining whether the screenshot includes a person image;
if not, replacing the target person image with the screenshot in the base image;
and if so, blurring the target person image in the base image.
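A toy sketch of this decision step follows. The 2-D list image model, the stubbed person-detection flag, and the no-op default blur are all assumptions for illustration; the embodiment specifies neither a person detector nor a blurring method.

```python
def apply_replacement(base, region, screenshot, screenshot_has_person,
                      blur=lambda img, region: img):
    """Paste the screenshot over the target region only when it is
    person-free; otherwise fall back to blurring the target person.
    `region` is (x0, y0, x1, y1); `blur` is a stand-in callback."""
    x0, y0, x1, y1 = region
    if screenshot_has_person:
        # Someone walked through every candidate frame: blur instead.
        return blur(base, region)
    out = [row[:] for row in base]          # copy, keep the base intact
    for dy, row in enumerate(screenshot):
        out[y0 + dy][x0:x1] = row           # paste the clean patch
    return out

base = [[1, 1], [1, 9]]    # 9 marks the unwanted passer-by pixel
patch = [[1]]              # clean background taken from another frame
result = apply_replacement(base, (1, 1, 2, 2), patch, False)
print(result)  # → [[1, 1], [1, 1]]
```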
In one embodiment, before the step of replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, the electronic device may further perform:
acquiring a first coordinate of a reference object in the base image and a third coordinate point set of the target position information;
acquiring a second coordinate of the reference object in the replacement preset image;
shifting the third coordinate point set according to the difference between the second coordinate and the first coordinate, to obtain a fourth coordinate point set;
and extracting a screenshot corresponding to the fourth coordinate point set from the replacement preset image.
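The reference-object offset described above can be sketched as follows. The shift compensates for camera movement between the base image and the replacement preset image; treating coordinates as integer pixel pairs is an assumption made here for illustration.

```python
def shift_region(third_set, ref_in_base, ref_in_replacement):
    """Offset every point of the target region by the displacement of a
    shared reference object, yielding the fourth coordinate point set
    to crop from the replacement preset image."""
    dx = ref_in_replacement[0] - ref_in_base[0]
    dy = ref_in_replacement[1] - ref_in_base[1]
    return {(x + dx, y + dy) for (x, y) in third_set}

# A 2x2 target region; the reference object moved by (+3, -2) pixels
# between the two frames (camera shake between shots).
third = {(10, 10), (10, 11), (11, 10), (11, 11)}
fourth = shift_region(third, ref_in_base=(100, 50),
                      ref_in_replacement=(103, 48))
print(sorted(fourth))  # → [(13, 8), (13, 9), (14, 8), (14, 9)]
```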
In one embodiment, when performing the steps of continuously acquiring multiple frames of preset images and selecting a base image from the multiple frames of preset images, the electronic device may perform:
continuously acquiring multiple frames of preset images, and selecting a base image from the multiple frames of preset images according to the eye size of each face image in the multiple frames of preset images;
and after performing the step of obtaining the target preset image after screenshot replacement, the electronic device may perform:
determining, in the target preset image, a face image to be processed whose eye size is smaller than a first preset threshold;
determining, from the preset images other than the target preset image, a replacement face image whose eye size is larger than a second preset threshold, wherein the replacement face image and the face image to be processed are face images of the same user;
replacing, in the target preset image, the face image to be processed with the replacement face image, to obtain a target preset image subjected to image replacement processing;
and performing image noise reduction processing on the target preset image subjected to the image replacement processing to obtain a composite image.
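The eye-size screening above might be sketched as follows. The normalized eye-size scores, the two threshold values, and the per-frame dictionaries are illustrative assumptions; the embodiment leaves the eye-size metric open.

```python
FIRST_THRESHOLD = 0.3   # below this the eyes count as closed or squinting
SECOND_THRESHOLD = 0.6  # a donor face must be at least this open

def faces_to_fix(target_frame_faces):
    """Users whose eye size in the target preset image falls below the
    first preset threshold."""
    return [u for u, size in target_frame_faces.items()
            if size < FIRST_THRESHOLD]

def find_donor(user, other_frames):
    """Index of the first other preset image where this user's eye size
    exceeds the second preset threshold, or None if no frame qualifies."""
    for i, faces in enumerate(other_frames):
        if faces.get(user, 0) > SECOND_THRESHOLD:
            return i
    return None

target = {"alice": 0.7, "bob": 0.1}    # bob blinked in the target frame
others = [{"alice": 0.6, "bob": 0.2},  # bob still mid-blink here
          {"alice": 0.7, "bob": 0.8}]  # usable donor frame for bob
fix = faces_to_fix(target)
print(fix, find_donor(fix[0], others))  # → ['bob'] 1
```

After the donor face is pasted in, a final multi-frame noise-reduction pass produces the composite image, as the step above describes.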
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the detailed description of the image processing method above, which is not repeated here.
The image processing apparatus provided in the embodiment of the present application and the image processing method in the above embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be run on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute the steps in the image processing method provided in the embodiment.
It should be noted that, as will be understood by those skilled in the art, all or part of the processes of the image processing method in the embodiments of the present application may be completed by a computer program controlling the related hardware. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; its execution may include the processes of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing has described in detail an image processing method, an image processing apparatus, a storage medium, and an electronic device provided by the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (8)
1. An image processing method, comprising:
continuously acquiring multiple frames of preset images, and selecting a base image from the multiple frames of preset images;
acquiring depth-of-field information of each person image in the base image, and obtaining a preset depth-of-field range according to the depth-of-field information of each person image;
determining a target person image exceeding the preset depth-of-field range;
acquiring target position information of the target person image in the base image, wherein the target position information comprises a first coordinate point set;
acquiring, from preset images other than the base image, a plurality of second position information of a user corresponding to the target person image and a plurality of second coordinate point sets corresponding to the plurality of second position information;
determining, from the plurality of second coordinate point sets, a target coordinate point set that does not intersect the first coordinate point set;
determining, from the preset images other than the base image, a replacement preset image corresponding to the target coordinate point set;
and in the base image, replacing the target person image with a screenshot corresponding to the target position information in the replacement preset image, to obtain a target preset image after screenshot replacement.
2. The image processing method according to claim 1, wherein after the step of acquiring, from the preset images other than the base image, a plurality of second position information of the user corresponding to the target person image and a plurality of second coordinate point sets corresponding to the plurality of second position information, the method further comprises:
if each of the second coordinate point sets intersects the first coordinate point set, blurring the target person image in the base image, or blurring an area beyond the preset depth-of-field range.
3. The image processing method according to claim 1, wherein the step of replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image comprises:
acquiring the screenshot corresponding to the target position information in the replacement preset image;
determining whether the screenshot includes a person image;
if not, replacing the target person image with the screenshot in the base image;
and if so, blurring the target person image in the base image.
4. The image processing method according to claim 1, wherein before the step of replacing, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, the method further comprises:
acquiring a first coordinate of a reference object in the base image and a third coordinate point set of the target position information;
acquiring a second coordinate of the reference object in the replacement preset image;
shifting the third coordinate point set according to the difference between the second coordinate and the first coordinate, to obtain a fourth coordinate point set;
and extracting a screenshot corresponding to the fourth coordinate point set from the replacement preset image.
5. The image processing method according to claim 1, wherein the step of continuously acquiring multiple frames of preset images and selecting a base image from the multiple frames of preset images comprises:
continuously acquiring multiple frames of preset images, and selecting a base image from the multiple frames of preset images according to the eye size of each face image in the multiple frames of preset images;
and after the step of obtaining the target preset image after screenshot replacement, the method further comprises:
determining, in the target preset image, a face image to be processed whose eye size is smaller than a first preset threshold;
determining, from the preset images other than the target preset image, a replacement face image whose eye size is larger than a second preset threshold, wherein the replacement face image and the face image to be processed are face images of the same user;
replacing, in the target preset image, the face image to be processed with the replacement face image, to obtain a target preset image subjected to image replacement processing;
and performing image noise reduction processing on the target preset image subjected to the image replacement processing to obtain a composite image.
6. An image processing apparatus, comprising:
a base image acquisition module, configured to continuously acquire multiple frames of preset images and select a base image from the multiple frames of preset images;
a preset depth-of-field range acquisition module, configured to acquire depth-of-field information of each person image in the base image and obtain a preset depth-of-field range according to the depth-of-field information of each person image;
a target person image determining module, configured to determine a target person image exceeding the preset depth-of-field range;
a replacement preset image obtaining module, configured to acquire target position information of the target person image in the base image and determine, from the preset images other than the base image, a replacement preset image in which the position of the user corresponding to the target person image has changed relative to the target position information, wherein the replacement preset image obtaining module further comprises a target position information acquisition module, a second position information acquisition module, a target coordinate point set acquisition module, and a replacement preset image determining module;
the target position information acquisition module is configured to acquire target position information of the target person image in the base image, the target position information comprising a first coordinate point set;
the second position information acquisition module is configured to acquire, from the preset images other than the base image, a plurality of second position information of the user corresponding to the target person image and a plurality of second coordinate point sets corresponding to the plurality of second position information;
the target coordinate point set acquisition module is configured to determine, from the plurality of second coordinate point sets, a target coordinate point set that does not intersect the first coordinate point set;
the replacement preset image determining module is configured to determine, from the preset images other than the base image, a replacement preset image corresponding to the target coordinate point set;
and a processing module, configured to replace, in the base image, the target person image with the screenshot corresponding to the target position information in the replacement preset image, to obtain a target preset image after screenshot replacement.
7. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the image processing method according to any one of claims 1 to 5.
8. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is adapted to perform the image processing method according to any one of claims 1 to 5 by invoking the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810277626.5A CN108259770B (en) | 2018-03-30 | 2018-03-30 | Image processing method, image processing device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108259770A CN108259770A (en) | 2018-07-06 |
CN108259770B true CN108259770B (en) | 2020-06-02 |
Family
ID=62747690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810277626.5A Active CN108259770B (en) | 2018-03-30 | 2018-03-30 | Image processing method, image processing device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108259770B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179299A (en) * | 2018-11-09 | 2020-05-19 | 珠海格力电器股份有限公司 | Image processing method and device |
CN111311482B (en) * | 2018-12-12 | 2023-04-07 | Tcl科技集团股份有限公司 | Background blurring method and device, terminal equipment and storage medium |
CN110059643B (en) * | 2019-04-23 | 2021-06-15 | 王雪燕 | Method for multi-image feature comparison and preferential fusion, mobile terminal and readable storage medium |
CN110135436B (en) * | 2019-04-30 | 2020-11-27 | 中国地质大学(武汉) | Method and equipment for identifying flashing beacon light by using intelligent trolley and storage equipment |
CN112532881B (en) * | 2020-11-26 | 2022-07-05 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
CN112672048A (en) * | 2020-12-21 | 2021-04-16 | 山西方天圣华数字科技有限公司 | Image processing method based on binocular image and neural network algorithm |
CN115567783B (en) * | 2022-08-29 | 2023-10-24 | 荣耀终端有限公司 | Image processing method |
CN117789131B (en) * | 2024-02-18 | 2024-05-28 | 广东电网有限责任公司广州供电局 | Risk monitoring method, risk monitoring device, risk monitoring equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101266685A (en) * | 2007-03-14 | 2008-09-17 | 中国科学院自动化研究所 | A method for removing unrelated images based on multiple photos |
JP2014179925A (en) * | 2013-03-15 | 2014-09-25 | Canon Inc | Image processing apparatus, and control method thereof |
CN105187722A (en) * | 2015-09-15 | 2015-12-23 | 努比亚技术有限公司 | Depth-of-field adjustment method and apparatus, terminal |
CN105791685A (en) * | 2016-02-29 | 2016-07-20 | 广东欧珀移动通信有限公司 | Control method, control device and electronic device |
CN106447741A (en) * | 2016-11-30 | 2017-02-22 | 努比亚技术有限公司 | Picture automatic synthesis method and system |
CN106791119A (en) * | 2016-12-27 | 2017-05-31 | 努比亚技术有限公司 | A kind of photo processing method, device and terminal |
CN107295262A (en) * | 2017-07-28 | 2017-10-24 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer-readable storage medium |
CN107481186A (en) * | 2017-08-24 | 2017-12-15 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and computer equipment |
CN107493432A (en) * | 2017-08-31 | 2017-12-19 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110234779A1 (en) * | 2010-03-05 | 2011-09-29 | Allen Weisberg | Remotely controllable photo booth with interactive slide show and live video displays |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108259770B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
JP7015374B2 (en) | Methods for image processing using dual cameras and mobile terminals | |
CN107948519B (en) | Image processing method, device and equipment | |
CN110072052B (en) | Image processing method and device based on multi-frame image and electronic equipment | |
CN108055452B (en) | Image processing method, device and equipment | |
CN110290289B (en) | Image noise reduction method and device, electronic equipment and storage medium | |
CN109068058B (en) | Shooting control method and device in super night scene mode and electronic equipment | |
CN111402135A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
KR102266649B1 (en) | Image processing method and device | |
CN110166707B (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
CN109089046B (en) | Image noise reduction method and device, computer readable storage medium and electronic equipment | |
CN108401110B (en) | Image acquisition method and device, storage medium and electronic equipment | |
CN107704798B (en) | Image blurring method and device, computer readable storage medium and computer device | |
CN107481186B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
US20220222830A1 (en) | Subject detecting method and device, electronic device, and non-transitory computer-readable storage medium | |
US11233948B2 (en) | Exposure control method and device, and electronic device | |
CN107395991B (en) | Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment | |
CN108093158B (en) | Image blurring processing method and device, mobile device and computer readable medium | |
CN107563979B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
CN110660090A (en) | Subject detection method and apparatus, electronic device, and computer-readable storage medium | |
EP3809327A1 (en) | Subject recognition method, electronic device, and computer readable storage medium | |
CN110717871A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN113313626A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
US20200364832A1 (en) | Photographing method and apparatus | |
CN110740266B (en) | Image frame selection method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. |
GR01 | Patent grant | ||