CN107483834B - Image processing method, continuous shooting method and device and related medium product - Google Patents

Info

Publication number: CN107483834B
Application number: CN201710877073.2A
Authority: CN (China)
Prior art keywords: image, image frame, acquired, continuous shooting, reference image
Legal status: Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN107483834A (en)
Inventor: 吴鸿儒
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN107483834A
Application granted
Publication of CN107483834B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168 Feature extraction; Face representation
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio


Abstract

The embodiments of the invention disclose a continuous shooting method and a continuous shooting device. The continuous shooting method comprises the following steps: receiving a continuous shooting start instruction, acquiring a starting image frame collected by a camera that contains a face image, and storing the starting image frame as a reference image; acquiring a collected image frame that is collected by the camera and contains a face image; detecting a difference value between the face feature information in the collected image frame and the face feature information of the reference image; and, when the difference value is greater than or equal to a threshold value, setting the collected image frame as the reference image, storing it, and returning to the step of acquiring a collected image frame that is collected by the camera and contains a face image. The invention can save terminal storage resources and improve the accuracy of capturing useful image frames during continuous shooting.

Description

Image processing method, continuous shooting method and device and related medium product
Technical Field
The invention relates to the field of terminals, in particular to an image processing method, a continuous shooting method and device and a related medium product.
Background
With the continuous development of terminal technology, terminals integrate more and more functions and have gradually become an indispensable part of people's lives. Among these functions, the GIF (Graphics Interchange Format) animation mode provided by the terminal is entertaining and very popular with users. The principle of a GIF animation is simple: multiple images are stored in a single GIF file, and the stored image data are read out and displayed one by one to form the simplest kind of animation. In the prior art, the continuous shooting mode usually captures image frames at a fixed time interval as the burst images. To give the captured images continuity, the fixed interval is set very short, but the speed at which the user changes expression or pose cannot keep up with the camera's continuous shooting speed. As a result, blurred image frames, or useless image frames captured midway through a change of expression or pose, are stored during continuous shooting, wasting the storage resources of the terminal.
Disclosure of Invention
The embodiment of the invention provides a continuous shooting method and a continuous shooting device, which can save storage resources of a terminal and improve the accuracy of capturing useful image frames in the continuous shooting process.
The embodiment of the invention provides a continuous shooting method, which comprises the following steps:
receiving a continuous shooting starting instruction, acquiring a starting image frame which is acquired by a camera and contains a face image, and storing the starting image frame as a reference image;
acquiring an acquired image frame which is acquired by the camera and contains a face image;
detecting a difference value between the face feature information in the collected image frame and the face feature information of the reference image;
and when the difference value is larger than or equal to a threshold value, setting the collected image frame as a reference image and storing the reference image, and executing the step of acquiring the collected image frame which contains the face image and is collected by the camera.
Correspondingly, the embodiment of the invention provides a continuous shooting device, which comprises:
the image frame acquisition unit is used for receiving a continuous shooting starting instruction and acquiring a starting image frame which is acquired by the camera and contains a facial image;
a reference image setting unit for storing the start image frame as a reference image;
the image frame acquisition unit is also used for acquiring an acquired image frame which is acquired by the camera and contains a face image;
the human face feature detection unit is used for detecting a difference value between human face feature information in the collected image frame and human face feature information of the reference image;
the reference image setting unit is further configured to set the collected image frame as a reference image and store the reference image when the difference value is greater than or equal to a threshold value.
The embodiment of the invention also provides an image processing method, which comprises the following steps:
acquiring a starting image frame which is acquired by a camera and contains a face image, and storing the starting image frame as a reference image;
acquiring an acquired image frame which is acquired by the camera and contains a face image;
detecting a difference value between the face feature information in the collected image frame and the face feature information of the reference image;
and when the difference value is larger than or equal to a threshold value, setting the collected image frame as a reference image and storing the reference image.
An embodiment of the present invention further provides an image processing apparatus, including:
the image frame acquisition unit is used for acquiring a starting image frame which is acquired by the camera and contains a face image;
a reference image setting unit for storing the start image frame as a reference image;
the image frame acquisition unit is also used for acquiring an acquired image frame which is acquired by the camera and contains a face image;
the human face feature detection unit is used for detecting a difference value between human face feature information in the collected image frame and human face feature information of the reference image;
the reference image setting unit is further configured to set the collected image frame as a reference image and store the reference image when the difference value is greater than or equal to a threshold value.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, where the computer program includes instructions for executing any one of the methods provided in the embodiment of the present invention.
According to the embodiments of the invention, a continuous shooting start instruction can be received, a starting image frame collected by a camera and containing a face image can be obtained, and the starting image frame can be stored as a reference image; a collected image frame that is collected by the camera and contains a face image can then be acquired, and a difference value between the face feature information in the collected image frame and the face feature information of the reference image can be detected. When the difference value is greater than or equal to the threshold value, the collected image frame is set as the reference image and stored, and the step of acquiring a collected image frame containing a face image from the camera is executed again. In this way the storage resources of the terminal can be saved, the accuracy of capturing useful image frames during continuous shooting can be improved, and the user experience is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below obviously illustrate only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a continuous shooting method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another continuous shooting method provided in the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a continuous shooting apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image frame acquiring unit according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a face feature detection unit according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In specific implementations, the terminals mentioned in the embodiments of the present invention include, but are not limited to, electronic equipment such as smartphones (e.g., Android and iOS phones), tablet computers, notebook computers, palmtop computers, and digital cameras. The methods described in the embodiments of the invention are performed by code stored in the memory of the terminal and executed on a computer system based on the von Neumann architecture.
A continuous shooting method and apparatus provided by the embodiment of the present invention will be described in detail below with reference to fig. 1 to 5.
Referring to fig. 1, which is a schematic flow chart of a continuous shooting method according to an embodiment of the present invention, the continuous shooting method shown in the figure may include the following steps:
s101, receiving a continuous shooting starting instruction, acquiring a starting image frame which is acquired by a camera and contains a face image, and storing the starting image frame as a reference image.
In a specific implementation, the camera is started first and the continuous shooting mode is selected in the terminal's settings menu; the camera performs auto-focusing after entering the viewfinder mode, and the user inputs a shooting instruction once posed, which is treated as the continuous shooting start instruction. The shooting instruction can be input in various ways: it can be triggered by the user tapping the shutter key, or triggered automatically by the terminal when a delayed-shooting timer expires. The main application scenario of the continuous shooting method in this embodiment is capturing burst images containing a person's face, so when the terminal receives the continuous shooting instruction, the camera collects a starting image frame containing the face image, and this starting image frame is stored as the reference image, i.e., the first image of the burst.
And S102, acquiring an acquired image frame which is acquired by the camera and contains a face image.
After the starting image frame captured in step S101 has been stored as the reference image, the camera continues to frame and auto-focus in order to collect further image frames containing the face image, which are candidates for the remaining images of the burst.
S103, detecting a difference value between the face feature information in the collected image frame and the face feature information of the reference image.
In a specific implementation, this step may be based on face recognition technology. First, face detection is performed on the collected image frame and the reference image (determining whether a face exists in a complex background and segmenting it from the background) to obtain the face image contained in each. The face images are then sampled to extract face feature information, and finally the difference value between the face feature information of the collected image frame and that of the reference image is detected. Face detection methods include, but are not limited to: the reference template method, the face rule method, the sample learning method, the skin color model method, and the eigenface method. Face feature extraction generally uses either a feature vector method or a face pattern template method. The feature vector method determines the size, position, distance, and other attributes of facial contour features such as the irises, nose wings, and mouth corners, and then computes geometric feature quantities (Euclidean distance, curvature, angle) or algebraic feature quantities (matrix eigenvectors) for each feature point to describe the face in the image frame.
In this embodiment, coordinate information of facial feature points (such as the mouth corners, nose wings, eye corners, eye contour, nose tip, and the head and tail of the eyebrows) can be obtained for the face in both the collected image frame and the reference image. The coordinate difference between each facial feature point in the collected image frame and the corresponding feature point in the reference image is then detected to determine whether the face in the collected frame has changed in expression or position. For example, a change in the distance between the eyebrows can indicate a frown, and a change in the distance between the nose tip and an eye corner can indicate other expression changes.
For example, suppose the threshold for the difference between the horizontal and vertical coordinates of mouth-corner feature points A and B is set to 0.5. After face detection and feature extraction, the coordinates of mouth corners A and B in the collected image frame are (0.5, 1.5) and (2.5, 1.5), while in the reference image they are (1, 1) and (2, 1). The differences in both the horizontal and vertical coordinates of A and B between the two images are therefore 0.5, equal to the preset threshold, so step S104 is executed and the collected image frame is set as the reference image and stored. The difference between the abscissas of A and B represents the width of the mouth, and the difference between their ordinates represents the height difference of the mouth corners. Relative to the reference image, the mouth in the collected frame is wider and its corners are raised, so the mouth can be judged to be smiling, and the face in the collected image frame is judged to have changed expression relative to the reference image.
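The mouth-corner arithmetic in the example above can be checked with a short sketch. The point names, coordinates, and the 0.5 threshold come from the example itself; the helper function is purely illustrative and not part of any real face-recognition API.

```python
def feature_point_diff(p_captured, p_reference):
    """Per-axis absolute coordinate differences for one feature point."""
    return (abs(p_captured[0] - p_reference[0]),
            abs(p_captured[1] - p_reference[1]))

THRESHOLD = 0.5

# Mouth-corner points A and B in the collected frame and the reference image.
captured = {"A": (0.5, 1.5), "B": (2.5, 1.5)}
reference = {"A": (1.0, 1.0), "B": (2.0, 1.0)}

diffs = {name: feature_point_diff(captured[name], reference[name])
         for name in captured}
# Every per-axis difference is 0.5, i.e. equal to the threshold, so the
# collected frame would be stored as the new reference image (step S104).
changed = any(dx >= THRESHOLD or dy >= THRESHOLD
              for dx, dy in diffs.values())

# Mouth width (difference of the two abscissas) grows from 1.0 to 2.0,
# consistent with the "smile" interpretation in the text.
mouth_len_captured = captured["B"][0] - captured["A"][0]
mouth_len_reference = reference["B"][0] - reference["A"][0]
```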
And S104, when the difference value is larger than or equal to a threshold value, setting the collected image frame as a reference image and storing the reference image, and executing the step of acquiring the collected image frame which contains the face image and is collected by the camera.
In this embodiment, after the collected image frame is set as the reference image and stored, if the burst is not yet complete, steps S102 to S104 are executed in a loop until continuous shooting finishes.
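The S101 to S104 loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `capture_frame`, `extract_features`, and `feature_difference` are hypothetical stand-ins for the camera and face-recognition components.

```python
def continuous_shoot(capture_frame, extract_features, feature_difference,
                     threshold, burst_count):
    """Sketch of the S101-S104 loop: keep a frame only when its face
    features differ from the current reference by at least `threshold`."""
    reference = capture_frame()                     # S101: starting frame
    stored = [reference]                            # first burst image
    while len(stored) < burst_count:                # first-threshold check
        frame = capture_frame()                     # S102
        diff = feature_difference(extract_features(frame),
                                  extract_features(reference))  # S103
        if diff >= threshold:                       # S104: changed enough
            reference = frame
            stored.append(frame)
    return stored

# Toy stand-ins: a "frame" is just a number and the feature difference is
# the absolute distance, purely to exercise the loop logic.
frames = iter([0.0, 0.1, 0.2, 0.6, 0.7, 1.3])
burst = continuous_shoot(lambda: next(frames), lambda f: f,
                         lambda a, b: abs(a - b), 0.5, 3)
```

With these toy inputs, the intermediate frames 0.1, 0.2, and 0.7 differ from the current reference by less than 0.5 and are discarded, which is exactly how the method skips useless mid-transition frames.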
In another optional embodiment, after performing the step of setting the collected image frame as a reference image and storing it, the method further comprises: judging whether the number of image frames stored as reference images is greater than or equal to a first threshold; if so, ending the continuous shooting; otherwise, executing the step of acquiring a collected image frame that is collected by the camera and contains a face image.
In a specific implementation, the number of images to be taken in the burst can be designated as the first threshold when the camera is started and the continuous shooting mode is selected. The terminal usually offers 4-, 6-, 8-, or 16-shot burst options for the user to choose from, and the number can also be customized after selecting the mode. Each time an image is taken during the burst, the terminal checks whether the number of images has reached the preset first threshold. If so, continuous shooting ends; if not, steps S102 to S104 continue to loop until the burst is complete.
In a further optional embodiment, after performing the step of receiving a continuous shooting start instruction, the method further comprises: acquiring a start timestamp of the continuous shooting start instruction.
Correspondingly, the step of acquiring a collected image frame that is collected by the camera and contains a face image further comprises: acquiring a current timestamp, calculating the continuous shooting duration from the current timestamp and the start timestamp, and judging whether the duration is greater than or equal to a second threshold; if so, ending the continuous shooting; otherwise, executing the step of acquiring a collected image frame that is collected by the camera and contains a face image.
In a specific implementation, the continuous shooting duration required for the burst is set in advance as the second threshold when the continuous shooting mode is selected, and the terminal starts timing after receiving the continuous shooting start instruction. Each time an image is taken during the burst, the terminal checks whether the shooting duration has reached the preset second threshold. If so, continuous shooting ends; if not, steps S102 to S104 continue to loop until the duration reaches the preset threshold and the burst is complete.
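The second-threshold check (continuous shooting duration = current timestamp minus start timestamp) can be sketched as follows; `burst_duration_elapsed` and its injected clock are hypothetical names, not part of any terminal API.

```python
import time

def burst_duration_elapsed(start_timestamp, second_threshold,
                           now=time.monotonic):
    """Compare the continuous shooting duration (current timestamp minus
    start timestamp) with the preset second threshold; True means the
    burst should end. `now` is injectable so the check is testable."""
    return now() - start_timestamp >= second_threshold
```

In the loop of steps S102 to S104, this predicate would be evaluated once per stored image, ending the burst as soon as it returns True.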
In yet another optional embodiment, after performing the step of setting the acquired image frame as a reference image and storing, the method further comprises: and receiving a dynamic image generation instruction, acquiring an image frame which is used as a reference image and is stored, and generating a dynamic image according to the image frame which is used as the reference image and is stored.
In a specific implementation, after continuous shooting ends, the terminal can assemble the burst images into a GIF animation either directly or on the user's request, which makes the continuous shooting mode more entertaining.
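A minimal sketch of assembling the stored reference-image frames into a GIF animation, assuming the Pillow imaging library is available; the function name and parameters are illustrative, not the terminal's actual implementation.

```python
import io
from PIL import Image

def frames_to_gif(frames, out_file, frame_ms=200):
    """Write the stored burst frames (PIL Images) to `out_file` as an
    animated GIF; `frame_ms` is the per-frame display time in ms and
    loop=0 makes the animation repeat forever."""
    first, *rest = [f.convert("P") for f in frames]  # GIF uses a palette
    first.save(out_file, format="GIF", save_all=True,
               append_images=rest, duration=frame_ms, loop=0)
```

Because only frames with a sufficient face-feature change were stored, the resulting GIF shows distinct expressions rather than near-duplicate frames.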
In yet another alternative embodiment, step S102 may include the steps of:
11) acquiring a plurality of cache image frames which are acquired by the camera and contain facial images;
12) calculating a focusing evaluation value of each cache image frame in the plurality of cache image frames;
13) and selecting the cache image frame with the largest focusing evaluation value from the plurality of cache image frames as the acquisition image frame.
After the terminal acquires the first burst image, it collects several buffered image frames containing the face image through the camera and simultaneously computes a focus evaluation value for the focus area of each frame. The focus evaluation value is a sharpness reference value for the focus area. The buffered frame with the largest focus evaluation value is then selected as the collected image frame, so a relatively sharp frame is obtained, which increases the clarity of the burst images and provides a degree of anti-shake.
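The selection in steps 11) to 13) can be sketched as follows. The patent does not fix a specific focus-evaluation formula, so the variance-of-Laplacian sharpness measure used here is an assumption, one common choice of sharpness reference value; the image is a plain list of grayscale rows for illustration.

```python
def focus_evaluation(gray):
    """Hypothetical focus evaluation value: variance of a 4-neighbour
    Laplacian over a grayscale image (list of rows, at least 3x3).
    Higher variance generally means sharper edges in the focus area."""
    h, w = len(gray), len(gray[0])
    responses = [4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
                 - gray[y][x - 1] - gray[y][x + 1]
                 for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def pick_sharpest(buffered_frames):
    """Step 13): choose the buffered frame with the largest focus
    evaluation value as the collected image frame."""
    return max(buffered_frames, key=focus_evaluation)

# A flat (defocused-looking) patch scores 0; a high-contrast checkerboard
# scores high, so pick_sharpest selects it.
flat = [[128] * 4 for _ in range(4)]
checker = [[(x + y) % 2 * 255 for x in range(4)] for y in range(4)]
```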
According to the embodiments of the invention, a continuous shooting start instruction can be received, a starting image frame collected by a camera and containing a face image can be obtained, and the starting image frame can be stored as a reference image; a collected image frame that is collected by the camera and contains a face image can then be acquired, and a difference value between the face feature information in the collected image frame and the face feature information of the reference image can be detected. When the difference value is greater than or equal to the threshold value, the collected image frame is set as the reference image and stored, and the step of acquiring a collected image frame containing a face image from the camera is executed again. In this way the storage resources of the terminal can be saved, the accuracy of capturing useful image frames during continuous shooting can be improved, and the user experience is improved.
Referring to fig. 2, which is a schematic flow chart of another continuous shooting method provided in the embodiment of the present invention, the continuous shooting method shown in the figure may include the following steps:
s201, receiving a continuous shooting starting instruction, acquiring a starting image frame which is acquired by a camera and contains a face image, and storing the starting image frame as a reference image.
And S202, acquiring an acquired image frame which is acquired by the camera and contains a face image.
S203, respectively obtaining coordinate values of the human face features in the collected image frame and the reference image.
In a specific implementation, this step may be based on face recognition technology. Face detection is first performed on the collected image frame and the reference image (determining whether a face exists in a complex background and segmenting it from the background) to obtain the face image contained in each, and the face images are then sampled to obtain the coordinate values of the face feature points.
And S204, judging whether the difference value between the coordinate values of the face features in the reference image and the coordinate values of the face features in the collected image frame is greater than or equal to a threshold value.
If the difference between the coordinate values of the face features in the reference image and those in the collected image frame is greater than or equal to the threshold, the process continues to step S205. For example, suppose the threshold for the difference between the horizontal and vertical coordinates of mouth-corner feature points A and B is set to 0.5. After face detection and feature extraction, the coordinates of A and B in the collected image frame are (0.5, 1.5) and (2.5, 1.5), while in the reference image they are (1, 1) and (2, 1); the differences in both the horizontal and vertical coordinates of A and B between the two images are therefore 0.5, equal to the preset threshold. Optionally, if the difference between the coordinate values is smaller than the threshold, step S202 is executed again.
And S205, setting the collected image frame as a reference image and storing the reference image.
S206, judging whether the number of the image frames which are taken as the reference images and stored is larger than or equal to a first threshold value.
In a specific implementation, the number of images to be taken in the burst can be designated as the first threshold when the camera is started and the continuous shooting mode is selected. The terminal usually offers 4-, 6-, 8-, or 16-shot burst options for the user to choose from, and the number can also be customized after selecting the mode. Each time an image is taken during the burst, the terminal checks whether the number of images has reached the preset first threshold. If so, continuous shooting ends; if not, steps S202 to S206 are repeated until the burst is complete.
And S207, ending the continuous shooting.
In the embodiment of the invention, when a continuous shooting start instruction is received, a starting image frame collected by a camera and containing a face image can be obtained and stored as a reference image, and a collected image frame containing a face image can also be obtained from the camera. The coordinate values of the face features in the collected image frame and in the reference image are then obtained. When the difference between the coordinate values of the face features in the reference image and those in the collected image frame is judged to be greater than or equal to the threshold, the collected image frame is set as the reference image and stored; it is then judged whether the number of image frames stored as reference images is greater than or equal to the first threshold, and if so, continuous shooting ends. In this way the storage resources of the terminal can be saved, the accuracy of capturing useful image frames during continuous shooting can be improved, and the user experience is improved.
Referring to fig. 3, a schematic structural diagram of a continuous shooting device according to an embodiment of the present invention is shown, where the continuous shooting device at least includes: an image frame acquisition unit 301, a reference image setting unit 302, and a face feature detection unit 303.
The image frame acquiring unit 301 is configured to receive a continuous shooting start instruction, and acquire a start image frame containing a facial image acquired by a camera.
In a specific implementation, the camera is started first and the continuous shooting mode is selected in the terminal's settings menu; the camera performs auto-focusing after entering the viewfinder mode, and the user inputs a shooting instruction once posed, which is treated as the continuous shooting start instruction. The shooting instruction can be input in various ways: it can be triggered by the user tapping the shutter key, or triggered automatically by the terminal when a delayed-shooting timer expires. The main application scenario of the continuous shooting method in this embodiment is capturing burst images containing a person's face, so when the image frame acquisition unit 301 receives the continuous shooting instruction, it collects a starting image frame containing the face image through the camera, and the reference image setting unit 302 stores this starting image frame as the reference image, i.e., the first image of the burst.
A reference image setting unit 302, configured to take the starting image frame as a reference image and store the reference image.
The image frame acquiring unit 301 is further configured to acquire an image frame including a facial image acquired by the camera.
After the reference image setting unit 302 sets the starting image frame as the reference image, the camera continues framing and automatic focusing to acquire captured image frames containing the facial image, which serve as candidates for the remaining images in the continuous shooting process.
A face feature detection unit 303, configured to detect a difference value between the face feature information in the acquired image frame and the face feature information of the reference image.
In a specific implementation, the face feature detection unit 303 may perform face detection on the captured image frame and the reference image (determining whether a facial image exists in a complex background and segmenting it from the background) to obtain the facial images they contain, then sample those facial images to extract facial feature information, and finally detect the difference value between the facial feature information in the captured image frame and that in the reference image based on face recognition technology. Face detection methods include, but are not limited to: the reference template method, the face rule method, the sample learning method, the skin color model method, and the characteristic sub-face (eigenface) method. Facial feature information is generally extracted by a feature vector method or a face pattern template method. In the feature vector method, attributes such as the size, position, and spacing of facial contour features (e.g., the irises of the eyes, the nasal alae, and the mouth corners) are determined first, and then geometric feature quantities (Euclidean distance, curvature, angle) or algebraic feature quantities (matrix eigenvectors) of each feature point are calculated to describe the facial features of the facial image in the image frame.
In this embodiment, coordinate information of facial feature points (such as the mouth corners, nasal alae, eye corners, eye contour, nose tip, eyebrow head, and eyebrow tail) of the facial images in the captured image frame and the reference image may be obtained, and the coordinate difference between each facial feature point in the captured image frame and the corresponding feature point in the reference image may then be detected to judge whether the facial image in the captured image frame exhibits an expression or position change. For example, a change in the distance between the eyebrows can indicate whether the face is frowning, and a change in the distance between the nose tip and the eye corners can likewise indicate an expression change.
For example, suppose the threshold for the difference between the horizontal and vertical coordinates of mouth-corner feature points A and B is set to 0.5. After face detection and facial feature extraction are performed on the captured image frame and the reference image, the coordinates of mouth-corner points A and B in the captured image frame are (0.5, 1.5) and (2.5, 1.5), while in the reference image they are (1, 1) and (2, 1), respectively. The detected coordinate differences for points A and B between the captured image frame and the reference image are therefore 0.5 each, which equals the preset threshold, so the reference image setting unit 302 sets the captured image frame as the reference image. The difference between the abscissas of mouth-corner points A and B represents the length of the mouth, and the difference between their ordinates represents the height difference of the mouth corners. Relative to the mouth of the facial image in the reference image, the mouth in the captured image frame is longer and its corners are raised, so the mouth in the captured image frame can be judged to be in a smiling state, and the facial image in the captured image frame is judged to have changed expression relative to the reference image.
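The coordinate-difference check in the worked example above can be sketched as follows. The landmark names and the helper function are illustrative assumptions, not taken from the patent's actual implementation; only the threshold of 0.5 and the example coordinates come from the text.

```python
THRESHOLD = 0.5  # preset per-coordinate difference threshold from the example

def expression_changed(reference_landmarks, frame_landmarks, threshold=THRESHOLD):
    """Return True if any feature point moved by >= threshold in x or y."""
    for name, (rx, ry) in reference_landmarks.items():
        fx, fy = frame_landmarks[name]
        if abs(fx - rx) >= threshold or abs(fy - ry) >= threshold:
            return True
    return False

# Mouth-corner points A and B from the worked example:
reference = {"mouth_corner_A": (1.0, 1.0), "mouth_corner_B": (2.0, 1.0)}
captured = {"mouth_corner_A": (0.5, 1.5), "mouth_corner_B": (2.5, 1.5)}

print(expression_changed(reference, captured))  # True: both points moved by 0.5
```

Since both coordinate differences equal the threshold, the captured frame would be stored as the new reference image.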
The reference image setting unit 302 is further configured to set the acquired image frame as a reference image and store the reference image when the disparity value is greater than or equal to a threshold.
In this embodiment, after the reference image setting unit 302 sets the captured image frame as the reference image and stores it, if the images required for this continuous shooting have not all been acquired, the image frame acquisition unit 301, the reference image setting unit 302, and the face feature detection unit 303 continue acquiring continuous shooting images until the continuous shooting is completed.
In another optional embodiment, the continuous shooting apparatus further includes: an image number judging unit 304, configured to judge whether the number of the image frames serving as the reference image and stored in the reference image setting unit is greater than or equal to a first threshold after the reference image setting unit sets and stores the acquired image frames as the reference image, and if so, end the continuous shooting; otherwise, the image frame acquiring unit 301 acquires an acquired image frame containing a facial image acquired by the camera.
In a specific implementation, the number of images to be taken in this continuous shooting can be designated as the first threshold when the camera is started and the continuous shooting mode is selected. The terminal typically offers options of 4, 6, 8, or 16 shots per burst, and the number of images can also be customized after the continuous shooting mode is selected. Each time an image is taken during continuous shooting, the image number judging unit 304 judges whether the number of images taken has reached the preset first threshold; if so, the continuous shooting ends; if not, the image frame acquisition unit 301, the reference image setting unit 302, and the face feature detection unit 303 continue shooting continuous shooting images until the continuous shooting is completed.
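The count-based stopping rule described above can be sketched as a loop. `capture_frame` and `differs_enough` are hypothetical stand-ins for the camera and the face-feature comparison described elsewhere in this embodiment:

```python
def burst_capture(capture_frame, differs_enough, first_threshold):
    """Keep frames that differ enough from the current reference image;
    stop once the number of stored reference images reaches the threshold."""
    reference = capture_frame()  # the starting image frame
    stored = [reference]
    while len(stored) < first_threshold:
        frame = capture_frame()
        if differs_enough(reference, frame):
            reference = frame  # the frame becomes the new reference image
            stored.append(frame)
    return stored

# Toy usage: frames are integers; "differs enough" means a gap of >= 2.
frames = iter(range(100))
shots = burst_capture(lambda: next(frames), lambda r, f: f - r >= 2, 4)
print(shots)  # [0, 2, 4, 6]
```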
In still another optional embodiment, the continuous shooting apparatus further comprises a timestamp acquiring unit 305, configured to acquire a start timestamp of the continuous shooting start instruction after the image frame acquiring unit receives the continuous shooting start instruction.
The timestamp obtaining unit 305 is further configured to obtain a current timestamp.
Correspondingly, the continuous shooting device further comprises: a continuous shooting duration judging unit 306, configured to calculate a continuous shooting duration according to the current timestamp and the start timestamp, judge whether the continuous shooting duration is greater than or equal to a second threshold, and if so, end continuous shooting; otherwise, the image frame acquiring unit 301 acquires an acquired image frame containing a facial image acquired by the camera.
In a specific implementation, when the continuous shooting mode is selected, the shooting duration required for this burst can be set as the second threshold. After the terminal receives the continuous shooting start instruction, the timestamp acquisition unit 305 records the start time and begins timing. Each time an image is taken during continuous shooting, the timestamp acquisition unit 305 obtains the current time to calculate the elapsed continuous shooting duration, and the continuous shooting duration judging unit 306 judges whether it has reached the preset second threshold; if so, the continuous shooting ends; if not, the image frame acquisition unit 301, the reference image setting unit 302, and the face feature detection unit 303 continue shooting until the continuous shooting duration reaches the preset threshold and the continuous shooting is completed.
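The duration-based stopping rule can be sketched as below. The clock is passed in as a parameter so the loop can be exercised deterministically; all names are illustrative assumptions:

```python
def burst_for_duration(capture_one, second_threshold, now):
    """Capture frames until now() - start timestamp reaches the threshold."""
    start_ts = now()  # start timestamp of the continuous shooting instruction
    shots = []
    while now() - start_ts < second_threshold:  # elapsed continuous shooting duration
        shots.append(capture_one())
    return shots

# Deterministic usage with a fake clock that advances one "second" per call:
ticks = iter(range(100))
result = burst_for_duration(lambda: "frame", 3, lambda: next(ticks))
print(len(result))  # 2: the clock reads 0 at start, then 1 and 2 pass the check
```

In a real terminal, `now` would be a monotonic clock such as Python's `time.monotonic`.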
In a further optional embodiment, the continuous shooting apparatus further comprises: and a dynamic image synthesizing unit 307, configured to receive a dynamic image generation instruction after the reference image setting unit sets the acquired image frame as a reference image and stores the reference image, acquire an image frame which is a reference image and stored, and generate a dynamic image according to the image frame which is the reference image and stored.
After the continuous shooting is finished, the dynamic image synthesis unit 307 may, either automatically or at the user's request, assemble the continuous shooting images into a GIF animation, adding interest to the continuous shooting mode.
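One way to assemble the stored reference frames into a GIF is shown below, using the Pillow library as an illustrative choice; the patent does not specify any particular image library:

```python
from io import BytesIO
from PIL import Image  # Pillow; an illustrative choice, not mandated by the patent

def frames_to_gif(frames, out, ms_per_frame=200):
    """Write a sequence of PIL images as an animated GIF to a file or buffer."""
    palettized = [f.convert("P") for f in frames]  # GIF frames use palette mode
    palettized[0].save(
        out,
        format="GIF",
        save_all=True,                 # write every frame, not just the first
        append_images=palettized[1:],  # the remaining burst frames
        duration=ms_per_frame,         # display time per frame, in milliseconds
        loop=0,                        # loop forever
    )

# Usage: three solid-colour frames assembled into an in-memory GIF.
frames = [Image.new("RGB", (8, 8), c) for c in ("red", "green", "blue")]
buf = BytesIO()
frames_to_gif(frames, buf)
buf.seek(0)
print(Image.open(buf).n_frames)  # 3
```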
Referring to fig. 4, which is a schematic structural diagram of an image frame acquisition unit according to an embodiment of the present invention, the image frame acquisition unit 301 may include: a buffered image frame acquisition subunit 3101, a focus evaluation value calculating subunit 3102, and a captured image frame selecting subunit 3103.
A buffered image frame acquiring subunit 3101, configured to acquire multiple buffered image frames that contain facial images and are acquired by the camera.
A focus evaluation value calculating subunit 3102, configured to calculate a focus evaluation value of each of the plurality of buffered image frames.
The collected image frame selecting subunit 3103 is configured to select a cache image frame with a largest focusing evaluation value from the multiple cache image frames as the collected image frame.
After the first continuous shooting image is obtained, the buffered image frame acquisition subunit 3101 acquires a plurality of buffered image frames containing facial images through the camera, and the focus evaluation value calculating subunit 3102 calculates the focus evaluation value of the focus area in each buffered image frame, where the focus evaluation value is a sharpness reference value (such as a sharpness score) of the focus area. The captured image frame selecting subunit 3103 then selects the buffered image frame with the largest focus evaluation value as the captured image frame. A relatively sharp captured image frame is thereby obtained, increasing the sharpness of the continuous shooting images and providing a degree of anti-shake.
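The patent only says the focus evaluation value is a sharpness reference value; variance of the Laplacian is one common such measure, and the sketch below uses it as an assumption:

```python
import numpy as np

def focus_value(frame):
    """Focus evaluation value: variance of a 4-neighbour Laplacian
    over a 2-D grayscale array (higher means sharper)."""
    g = frame.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def sharpest(frames):
    """Select the buffered frame with the largest focus evaluation value."""
    return max(frames, key=focus_value)

# Usage: a flat (defocused) frame versus a high-contrast checkerboard.
flat = np.full((8, 8), 128)
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255
print(sharpest([flat, checker]) is checker)  # True
```

In practice the measure would be computed only over the focus area rather than the whole frame, as the passage above describes.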
Referring to fig. 5, which is a schematic structural diagram of a face feature detection unit according to an embodiment of the present invention, the face feature detection unit 303 may include: a facial feature acquisition subunit 3301 and a face feature judgment subunit 3302.
And the face feature acquiring subunit 3301 is configured to acquire coordinate values of the face features in the acquired image frames.
In a specific implementation, the facial feature acquisition subunit 3301 may, based on face recognition technology, first perform face detection on the captured image frame (determining whether a facial image exists in a complex background and segmenting it from the background) to obtain the facial image it contains, and then sample the facial image to obtain the coordinate values of its facial feature points.
The face feature obtaining subunit 3301 is further configured to obtain coordinate values of a face feature in the reference image.
A face feature judgment subunit 3302, configured to judge whether the difference between the coordinate values of the facial features in the reference image and those in the captured image frame is greater than or equal to the threshold; if so, the reference image setting unit 302 sets the captured image frame as the reference image and stores it; if not, the image frame acquisition unit 301 acquires another captured image frame containing a facial image from the camera.
For example, suppose the threshold for the difference between the horizontal and vertical coordinates of mouth-corner feature points A and B is set to 0.5. The coordinates of points A and B in the captured image frame obtained by the facial feature acquisition subunit 3301 are (0.5, 1.5) and (2.5, 1.5), and their coordinates in the reference image are (1, 1) and (2, 1), respectively. The face feature judgment subunit 3302 thus detects that the coordinate differences for points A and B between the captured image frame and the reference image are 0.5 each, which equals the preset threshold, so at this point the reference image setting unit 302 sets the captured image frame as the reference image and stores it.
The continuous shooting method and apparatus disclosed in the embodiments of the present invention have been described in detail above. The above disclosure covers only preferred embodiments of the present invention and should not be construed as limiting its scope.

Claims (17)

1. An image processing method, comprising:
acquiring a starting image frame which is acquired by a camera and contains a face image, and storing the starting image frame as a reference image;
acquiring an acquired image frame which is acquired by the camera and contains a face image;
detecting a difference value between the face feature information in the collected image frame and the face feature information of the reference image; when the difference value is larger than or equal to a threshold value, setting the collected image frame as a reference image and storing the reference image;
wherein the acquiring of the image frame containing the facial image acquired by the camera comprises:
acquiring a plurality of cache image frames which are acquired by the camera and contain facial images; calculating a focusing evaluation value of each cache image frame in the plurality of cache image frames, wherein the focusing evaluation value is a definition reference value of a focusing area; selecting the cache image frame with the largest focusing evaluation value from the plurality of cache image frames as the collected image frame;
wherein detecting a difference value between the facial feature information in the captured image frame and the facial feature information of the reference image comprises: acquiring coordinate information of facial feature points of the facial images in the captured image frame and the reference image, and respectively detecting the coordinate difference between each facial feature point in the captured image frame and the corresponding facial feature point in the reference image to judge whether the facial image in the captured image frame has an expression or position change, wherein a change in the distance between the eyebrows can indicate whether the face is frowning, and a change in the distance between the nose tip and the eye corners can likewise indicate an expression change; wherein the difference between the abscissas of mouth-corner points A and B represents the length of the mouth, and the difference between their ordinates represents the height difference of the mouth corners; when, relative to the mouth of the facial image in the reference image, the length of the mouth in the facial image in the captured image frame increases and the mouth corners are raised, the mouth in the captured image frame is judged to have changed to a smiling state, and the facial image in the captured image frame is judged to have changed expression relative to the reference image.
2. The method of claim 1, wherein said obtaining a starting image frame captured by a camera and containing a facial image further comprises: receiving a continuous shooting starting instruction;
wherein the method further comprises: and when the difference value is larger than or equal to a threshold value, executing the step of acquiring the collected image frame which is collected by the camera and contains the face image.
3. The method of claim 2, wherein after the setting and storing the captured image frame as a reference image, further comprising:
judging whether the number of the image frames which are taken as the reference images and stored is larger than or equal to a first threshold value or not, if so, ending the continuous shooting; otherwise, executing the step of acquiring the collected image frame containing the face image collected by the camera.
4. The method according to claim 2, wherein after the step of receiving the continuous shooting start command, the method further comprises:
acquiring a starting timestamp of the continuous shooting starting instruction;
the step of obtaining a captured image frame containing a facial image captured by the camera further comprises:
acquiring a current timestamp, calculating continuous shooting time length according to the current timestamp and the initial timestamp, judging whether the continuous shooting time length is greater than or equal to a second threshold value or not, and if so, ending the continuous shooting; otherwise, executing the step of acquiring the collected image frame containing the face image collected by the camera.
5. The method according to claim 3, wherein after the step of receiving the continuous shooting start command, the method further comprises:
acquiring a starting timestamp of the continuous shooting starting instruction;
the step of obtaining a captured image frame containing a facial image captured by the camera further comprises:
acquiring a current timestamp, calculating continuous shooting time length according to the current timestamp and the initial timestamp, judging whether the continuous shooting time length is greater than or equal to a second threshold value or not, and if so, ending the continuous shooting; otherwise, executing the step of acquiring the collected image frame containing the face image collected by the camera.
6. The method of any of claims 1 to 5, wherein the step of setting the acquired image frame as a reference image and storing further comprises:
and receiving a dynamic image generation instruction, acquiring an image frame which is used as a reference image and is stored, and generating a dynamic image according to the image frame which is used as the reference image and is stored.
7. The method according to any one of claims 1 to 5, wherein the detecting the difference value of the facial feature information of the acquired image frame and the reference image comprises:
acquiring coordinate values of the face features in the collected image frames;
acquiring coordinate values of the face features in the reference image;
judging whether the difference value between the coordinate values of the face features in the reference image and the coordinate values of the face features in the collected image frame is greater than or equal to the threshold value, if so, setting the collected image frame as a reference image and storing the reference image; and if not, executing the step of acquiring the acquired image frame containing the facial image acquired by the camera.
8. The method of claim 6, wherein the detecting the difference value of the facial feature information of the acquired image frame and the reference image comprises:
acquiring coordinate values of the face features in the collected image frames;
acquiring coordinate values of the face features in the reference image;
judging whether the difference value between the coordinate values of the face features in the reference image and the coordinate values of the face features in the collected image frame is greater than or equal to the threshold value, if so, setting the collected image frame as a reference image and storing the reference image; and if not, executing the step of acquiring the acquired image frame containing the facial image acquired by the camera.
9. An image processing apparatus characterized by comprising:
the image frame acquisition unit is used for acquiring a starting image frame which is acquired by the camera and contains a face image;
a reference image setting unit for storing the start image frame as a reference image;
the image frame acquisition unit is also used for acquiring an acquired image frame which is acquired by the camera and contains a face image;
the human face feature detection unit is used for detecting a difference value between human face feature information in the collected image frame and human face feature information of the reference image;
the reference image setting unit is further used for setting the collected image frame as a reference image and storing the reference image when the difference value is larger than or equal to a threshold value;
the image frame acquisition unit includes:
the cached image frame acquisition subunit is used for acquiring a plurality of cached image frames which are acquired by the camera and contain facial images;
the focus evaluation value calculating subunit is used for calculating the focus evaluation value of each buffered image frame among the plurality of buffered image frames, the focus evaluation value being a sharpness reference value of a focus area;
the acquisition image frame selection subunit is used for selecting the cache image frame with the largest focusing evaluation value from the plurality of cache image frames as the acquisition image frame;
wherein detecting a difference value between the facial feature information in the captured image frame and the facial feature information of the reference image comprises: acquiring coordinate information of facial feature points of the facial images in the captured image frame and the reference image, and respectively detecting the coordinate difference between each facial feature point in the captured image frame and the corresponding facial feature point in the reference image to judge whether the facial image in the captured image frame has an expression or position change, wherein a change in the distance between the eyebrows can indicate whether the face is frowning, and a change in the distance between the nose tip and the eye corners can likewise indicate an expression change; wherein the difference between the abscissas of mouth-corner points A and B represents the length of the mouth, and the difference between their ordinates represents the height difference of the mouth corners; when, relative to the mouth of the facial image in the reference image, the length of the mouth in the facial image in the captured image frame increases and the mouth corners are raised, the mouth in the captured image frame is judged to have changed to a smiling state, and the facial image in the captured image frame is judged to have changed expression relative to the reference image.
10. The apparatus of claim 9,
the image frame acquisition unit is specifically used for acquiring a starting image frame which is acquired by a camera and contains a facial image after receiving a continuous shooting starting instruction;
the image frame acquisition unit is further configured to: and when the difference value is larger than or equal to a threshold value, executing the step of acquiring the collected image frame which is collected by the camera and contains the face image.
11. The apparatus of claim 10, further comprising:
the image quantity judging unit is used for judging whether the quantity of the image frames which are used as reference images and stored is larger than or equal to a first threshold value or not after the reference image setting unit sets the collected image frames as the reference images and stores the reference images, and if so, the continuous shooting is finished; otherwise, the image frame acquiring unit acquires an acquired image frame containing the facial image acquired by the camera.
12. The apparatus of claim 10, further comprising:
the time stamp obtaining unit is used for obtaining a starting time stamp of the continuous shooting starting instruction after the image frame obtaining unit receives the continuous shooting starting instruction;
the timestamp obtaining unit is further used for obtaining a current timestamp;
a continuous shooting duration judging unit, configured to calculate a continuous shooting duration according to the current timestamp and the start timestamp, judge whether the continuous shooting duration is greater than or equal to a second threshold, and if so, end the continuous shooting; otherwise, the image frame acquiring unit acquires an acquired image frame containing the facial image acquired by the camera.
13. The apparatus of claim 11, further comprising:
the time stamp obtaining unit is used for obtaining a starting time stamp of the continuous shooting starting instruction after the image frame obtaining unit receives the continuous shooting starting instruction;
the timestamp obtaining unit is further used for obtaining a current timestamp;
a continuous shooting duration judging unit, configured to calculate a continuous shooting duration according to the current timestamp and the start timestamp, judge whether the continuous shooting duration is greater than or equal to a second threshold, and if so, end the continuous shooting; otherwise, the image frame acquiring unit acquires an acquired image frame containing the facial image acquired by the camera.
14. The apparatus of any one of claims 9 to 13, further comprising:
and the dynamic image synthesis unit is used for receiving a dynamic image generation instruction after the reference image setting unit sets the acquired image frame as a reference image and stores the reference image, acquiring the image frame which is used as the reference image and stored, and generating a dynamic image according to the image frame which is used as the reference image and stored.
15. The apparatus according to any one of claims 9 to 13, wherein the face feature detection unit comprises:
the face feature acquisition subunit is used for acquiring coordinate values of the face features in the acquired image frames;
the face feature acquisition subunit is further configured to acquire coordinate values of face features in the reference image;
a face feature judgment subunit, configured to judge whether a difference between coordinate values of a face feature in the reference image and coordinate values of the face feature in the acquired image frame is greater than or equal to the threshold, and if so, the reference image setting unit sets the acquired image frame as a reference image and stores the reference image; if not, the image frame acquisition unit acquires the acquired image frame which contains the facial image and is acquired by the camera.
16. The apparatus of claim 14, wherein the face feature detection unit comprises: the face feature acquisition subunit is used for acquiring coordinate values of the face features in the acquired image frames;
the face feature acquisition subunit is further configured to acquire coordinate values of face features in the reference image;
a face feature judgment subunit, configured to judge whether a difference between coordinate values of a face feature in the reference image and coordinate values of the face feature in the acquired image frame is greater than or equal to the threshold, and if so, the reference image setting unit sets the acquired image frame as a reference image and stores the reference image; if not, the image frame acquisition unit acquires the acquired image frame which contains the facial image and is acquired by the camera.
17. A computer-readable storage medium, characterized in that it stores a computer program comprising instructions for carrying out the method of any one of claims 1 to 8.
CN201710877073.2A 2015-02-04 2015-02-04 Image processing method, continuous shooting method and device and related medium product Expired - Fee Related CN107483834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710877073.2A CN107483834B (en) 2015-02-04 2015-02-04 Image processing method, continuous shooting method and device and related medium product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710877073.2A CN107483834B (en) 2015-02-04 2015-02-04 Image processing method, continuous shooting method and device and related medium product
CN201510058180.3A CN104683692B (en) 2015-02-04 2015-02-04 A kind of continuous shooting method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201510058180.3A Division CN104683692B (en) 2015-02-04 2015-02-04 A kind of continuous shooting method and device

Publications (2)

Publication Number Publication Date
CN107483834A CN107483834A (en) 2017-12-15
CN107483834B true CN107483834B (en) 2020-01-14

Family

ID=53318197

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710877073.2A Expired - Fee Related CN107483834B (en) 2015-02-04 2015-02-04 Image processing method, continuous shooting method and device and related medium product
CN201510058180.3A Expired - Fee Related CN104683692B (en) 2015-02-04 2015-02-04 A kind of continuous shooting method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201510058180.3A Expired - Fee Related CN104683692B (en) 2015-02-04 2015-02-04 A kind of continuous shooting method and device

Country Status (1)

Country Link
CN (2) CN107483834B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323483B (en) * 2015-10-27 2018-06-29 广东欧珀移动通信有限公司 GIF picture shootings and playback method and GIF picture shootings and play system
CN105554373A (en) * 2015-11-20 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Photographing processing method and device and terminal
CN106303235A (en) * 2016-08-11 2017-01-04 广东小天才科技有限公司 Photographing processing method and device
CN106303234A (en) * 2016-08-11 2017-01-04 广东小天才科技有限公司 photographing processing method and device
CN107240143A (en) * 2017-05-09 2017-10-10 北京小米移动软件有限公司 Bag generation method of expressing one's feelings and device
CN107368777A (en) * 2017-06-02 2017-11-21 广州视源电子科技股份有限公司 Smile action detection method and device and living body identification method and system
US11191341B2 (en) * 2018-01-11 2021-12-07 Casio Computer Co., Ltd. Notification device, notification method, and storage medium having program stored therein
CN108401110B (en) * 2018-03-18 2020-09-08 Oppo广东移动通信有限公司 Image acquisition method and device, storage medium and electronic equipment
CN109389086B (en) * 2018-10-09 2021-03-05 北京科技大学 Method and system for detecting unmanned aerial vehicle image target
CN109447006A (en) * 2018-11-01 2019-03-08 北京旷视科技有限公司 Image processing method, device, equipment and storage medium
CN109659006B (en) * 2018-12-10 2021-03-23 深圳先进技术研究院 Facial muscle training method and device and electronic equipment
WO2020155052A1 (en) * 2019-01-31 2020-08-06 华为技术有限公司 Method for selecting images based on continuous shooting and electronic device
CN110769150A (en) * 2019-09-23 2020-02-07 珠海格力电器股份有限公司 Photographing method, device, terminal and computer readable medium
TWI777126B (en) * 2020-01-22 2022-09-11 中國醫藥大學 Method of facial characteristic angle measurement and device thereof
CN111669504B (en) * 2020-06-29 2021-11-05 维沃移动通信有限公司 Image shooting method and device and electronic equipment
CN113239220A (en) * 2021-05-26 2021-08-10 Oppo广东移动通信有限公司 Image recommendation method and device, terminal and readable storage medium
CN114245017B (en) * 2021-12-21 2024-09-27 维沃移动通信有限公司 Shooting method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731859A (en) * 2005-09-09 2006-02-08 北京中星微电子有限公司 Video compression method and video system using the method
DE102009049528A1 (en) * 2009-10-15 2011-04-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for detecting facial movement e.g. facial expression, of face in image sequence, involves calculating difference between positions of reference points in respective images, and detecting movement of face based on difference
CN103152524A (en) * 2013-03-05 2013-06-12 东莞宇龙通信科技有限公司 Shooting device and continuous shooting method thereof
CN103491299A (en) * 2013-09-17 2014-01-01 宇龙计算机通信科技(深圳)有限公司 Photographic processing method and device
CN103903213A (en) * 2012-12-24 2014-07-02 联想(北京)有限公司 Shooting method and electronic device
CN104185981A * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal for selecting image from continuously captured images

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005275916A (en) * 2004-03-25 2005-10-06 Toyota Motor Corp Image recognition device and image recognition method
JP2005323227A (en) * 2004-05-11 2005-11-17 Fuji Photo Film Co Ltd Device, method, and program for picking up image
FR2875322B1 (en) * 2004-09-14 2007-03-02 Atmel Grenoble Soc Par Actions METHOD FOR AIDING FACE RECOGNITION
KR101354899B1 (en) * 2007-08-29 2014-01-27 삼성전자주식회사 Method for photographing panorama picture
JP2009081636A (en) * 2007-09-26 2009-04-16 Casio Comput Co Ltd Image recording apparatus and photographing method
KR101398475B1 (en) * 2007-11-21 2014-05-26 삼성전자주식회사 Apparatus for processing digital image and method for controlling thereof
CN101290539A (en) * 2008-06-12 2008-10-22 北京中星微电子有限公司 Electronic equipment usage situation judgement method and system
JP5247356B2 (en) * 2008-10-29 2013-07-24 キヤノン株式会社 Information processing apparatus and control method thereof
WO2010070820A1 (en) * 2008-12-17 2010-06-24 パナソニック株式会社 Image communication device and image communication method
JP2011211628A (en) * 2010-03-30 2011-10-20 Sony Corp Image processing device and method, and program
CN103020580B * 2011-09-23 2015-10-28 无锡中星微电子有限公司 Fast face detection method
CN103856617A (en) * 2012-12-03 2014-06-11 联想(北京)有限公司 Photographing method and user terminal
TWI496109B (en) * 2013-07-12 2015-08-11 Vivotek Inc Image processor and image merging method thereof
CN103685948A (en) * 2013-12-04 2014-03-26 乐视致新电子科技(天津)有限公司 Shooting method and device

Also Published As

Publication number Publication date
CN104683692A (en) 2015-06-03
CN107483834A (en) 2017-12-15
CN104683692B (en) 2017-10-17

Similar Documents

Publication Publication Date Title
CN107483834B (en) Image processing method, continuous shooting method and device and related medium product
CN109819342B (en) Barrage content control method and device, computer equipment and storage medium
KR101706365B1 (en) Image segmentation method and image segmentation device
CN108525305B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2019137131A1 (en) Image processing method, apparatus, storage medium, and electronic device
CN106161939B (en) Photo shooting method and terminal
CN110956691B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN104519263B Image acquisition method and electronic equipment
EP3308536B1 (en) Determination of exposure time for an image frame
CN108712603B (en) Image processing method and mobile terminal
CN103685940A (en) Method for recognizing shot photos by facial expressions
CN103079034A (en) Perception shooting method and system
CN106161962B Image processing method and terminal
US20170161553A1 (en) Method and electronic device for capturing photo
WO2019011073A1 (en) Human face live detection method and related product
CN106331497B Image processing method and terminal
CN106127167A Method, device and mobile terminal for recognizing target object in augmented reality
CN104883505A (en) Electronic equipment and photographing control method therefor
CN104219444A (en) Method and device for processing video shooting
CN109257537B (en) Photographing method and device based on intelligent pen, intelligent pen and storage medium
JP5949030B2 (en) Image generating apparatus, image generating method, and program
CN104735357A (en) Automatic picture shooting method and device
CN112036311A (en) Image processing method and device based on eye state detection and storage medium
CN107357424B (en) Gesture operation recognition method and device and computer readable storage medium
KR20160046399A Method and apparatus for generating texture map, and database generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200114