CN110097622B - Method and device for rendering image, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN110097622B
CN110097622B (application CN201910331282.6A)
Authority
CN
China
Prior art keywords
parameter
image
target object
determining
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910331282.6A
Other languages
Chinese (zh)
Other versions
CN110097622A (en)
Inventor
李润祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910331282.6A
Publication of CN110097622A
Priority to PCT/CN2020/074443 (WO2020215854A1)
Application granted
Publication of CN110097622B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium. The method of rendering an image comprises: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter. With this technical scheme, the first parameter of the target object can be corrected according to another parameter of the same target object in the image, and the target object is rendered according to the corrected first parameter, making the rendering manner more flexible.

Description

Method and device for rendering image, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of information processing, and in particular, to a method and an apparatus for rendering an image, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, the range of applications for intelligent terminals has expanded greatly; for example, images and videos can be captured with an intelligent terminal.
Meanwhile, intelligent terminals also have strong data-processing capabilities. For example, when an intelligent terminal is used to photograph a target object, the captured image can be processed in real time by an image segmentation algorithm to identify the target object in it. Taking a human-body image segmentation algorithm applied to video as an example, a computer device such as an intelligent terminal can process each frame of the video in real time and accurately identify the outline of a person in the image and each key point of that person, so that the positions of the person's face, right hand, and so on can be determined; such identification can be accurate to the pixel level.
In the prior art, a person identified in an image can also be "beautified"; for example, the person is rendered with preset rendering parameters to achieve a beautifying effect. As an example, a target width parameter may be preset, and when the face of a person in the image is round, the face is rendered according to the target width parameter to achieve a "face-thinning" effect. However, for a face with a large distance between the eyes, performing the same "face-thinning" rendering according to that target width parameter may fail to beautify the face, or may even have the opposite effect. The prior-art approach of rendering a person according to preset rendering parameters is therefore not flexible enough, because it does not take the differences between individual faces into account.
Disclosure of Invention
Embodiments of the present disclosure provide a method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium, which can correct a first parameter of a target object according to another parameter of the target object in the image and render the target object according to the corrected first parameter, so that the rendering manner is more flexible.
In a first aspect, an embodiment of the present disclosure provides a method for rendering an image, including: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter.
Further, determining the second parameter of the target object in the image comprises: determining the second parameter corresponding to the first parameter according to a preset first correspondence.
Further, determining the first parameter of the target object in the image comprises: determining the first parameter corresponding to the second parameter according to a preset second correspondence.
Further, correcting the first parameter according to the second parameter comprises: determining a correction rule associated with the first parameter according to the second parameter; and correcting the first parameter according to the correction rule.
Further, the correction rule comprises a value range of the first parameter, and correcting the first parameter according to the correction rule comprises: correcting the first parameter according to the value range.
Further, before correcting the first parameter according to the second parameter, the method further comprises: determining a target parameter corresponding to the first parameter. Correcting the first parameter according to the value range then comprises: in the case that the target parameter belongs to the value range, determining the target parameter as the corrected first parameter.
Further, correcting the first parameter according to the value range comprises: in the case that the target parameter does not belong to the value range, correcting the first parameter according to a boundary value of the value range and the target parameter.
Further, the correction rule comprises a correction type corresponding to the first parameter, and correcting the first parameter according to the correction rule comprises: correcting the first parameter according to the correction type.
In a second aspect, an embodiment of the present disclosure provides an apparatus for rendering an image, including: an image acquisition module for acquiring an image; a first parameter determination module for determining a first parameter of a target object in the image; a second parameter determination module for determining a second parameter of the target object in the image; a correction module for correcting the first parameter according to the second parameter; and a rendering module for rendering the target object in the image according to the corrected first parameter.
Further, the second parameter determination module is further configured to determine the second parameter corresponding to the first parameter according to a preset first correspondence.
Further, the first parameter determination module is further configured to determine the first parameter corresponding to the second parameter according to a preset second correspondence.
Further, the correction module is further configured to determine a correction rule associated with the first parameter according to the second parameter, and to correct the first parameter according to the correction rule.
Further, the correction rule comprises a value range of the first parameter, and the correction module is further configured to correct the first parameter according to the value range.
Further, the correction module is further configured to determine a target parameter corresponding to the first parameter and, in the case that the target parameter belongs to the value range, determine the target parameter as the corrected first parameter.
Further, the correction module is further configured to, in the case that the target parameter does not belong to the value range, correct the first parameter according to a boundary value of the value range and the target parameter.
Further, the correction rule comprises a correction type corresponding to the first parameter, and the correction module is further configured to correct the first parameter according to the correction type.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a memory for storing computer-readable instructions; and one or more processors for executing the computer-readable instructions, such that the processors, when executing the instructions, implement any of the methods of rendering an image of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of rendering an image according to any one of the first aspect.
The present disclosure thus provides a method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium. The method comprises: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter. In this way, the first parameter of the target object can be corrected according to another parameter of the target object in the image, and the target object can be rendered according to the corrected first parameter, making the rendering manner more flexible.
The foregoing is a summary of the present disclosure. To make the technical means of the present disclosure more clearly understandable, embodiments are described in detail below; the disclosure may, however, be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure and of the prior art more clearly, the drawings needed in describing them are briefly introduced below. The drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a first embodiment of a method for rendering an image according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a second embodiment of a method for rendering an image according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an embodiment of an apparatus for rendering an image according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. The described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details described herein without departing from the spirit of the disclosure. The features in the following embodiments and examples may be combined with each other provided they do not conflict. All other embodiments that a person skilled in the art can derive from the embodiments disclosed herein without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be further noted that the drawings provided in the following embodiments illustrate only the basic idea of the present disclosure. They show only the components related to the disclosure and are not drawn according to the number, shape, and size of the components in an actual implementation; in practice, the type, number, and proportion of components may vary, and the component layout may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of a first embodiment of a method for rendering an image according to an embodiment of the present disclosure. The method of this embodiment may be executed by an apparatus for rendering an image, and the apparatus may be implemented as software, hardware, or a combination of the two. For example, the apparatus for rendering an image may comprise a computer device (e.g., an intelligent terminal), in which case the method of this embodiment is executed by that computer device.
As shown in fig. 1, a method of rendering an image according to an embodiment of the present disclosure includes the following steps:
step S101, acquiring an image;
In step S101, the apparatus for rendering an image acquires an image so as to carry out the method through the current and/or subsequent steps. The apparatus may include a photographing device (e.g., a camera), in which case the image acquired in step S101 includes an image captured by that device. Alternatively, the apparatus may not include a photographing device but be communicatively connected to one, in which case acquiring the image in step S101 includes receiving an image captured by the photographing device over the communication connection. The apparatus may also acquire an image from a preset storage location.
Those skilled in the art will appreciate that a video is composed of a series of image frames, each of which may also be referred to as an image; thus acquiring an image in step S101 includes acquiring an image from a video.
Step S102, determining a first parameter of a target object in the image;
Optionally, the target object includes a human object, or a key part of a human body such as a face object, a facial-feature object, a trunk object, or an arm object. As described in the background, computer devices have powerful data-processing capabilities: they can identify the pixel region and/or key points of a target object in an image through an image segmentation algorithm, and can also identify key points through other key-point localization techniques. The apparatus for rendering an image in the embodiment of the present disclosure can therefore determine the target object in the image and/or the first parameter of the target object based on an image segmentation algorithm and/or a key-point localization technique.
As will be appreciated by those skilled in the art, an image is composed of pixels, and each pixel may be characterized by a position parameter and a color parameter. An image segmentation algorithm may thus determine the pixel region of a target object based on the position and/or color parameters of the image's pixels, and a key-point localization technique may match preset key-point features (e.g., color features and/or shape features) against those parameters to determine the key points of the target object. A typical way to characterize a pixel is the five-tuple (x, y, r, g, b), where the coordinates x and y serve as the position parameter and the color components r, g, and b are the pixel's values in RGB space; the pixel's color is obtained by superimposing r, g, and b. Of course, the position and color parameters may also be expressed in other ways, for example the position parameter in polar or UV coordinates and the color parameter in Lab or CMY space; the present disclosure is not limited in this respect.
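The five-tuple pixel representation described above can be sketched as follows. This is a minimal illustration only; the `Pixel` type and the `color_at` helper are hypothetical names introduced for the example, not part of the disclosed apparatus.

```python
from typing import NamedTuple, Optional, Tuple


class Pixel(NamedTuple):
    """The five-tuple (x, y, r, g, b): position parameter plus RGB color parameter."""
    x: int
    y: int
    r: int
    g: int
    b: int


# A tiny two-pixel "image" represented as a list of five-tuples.
image = [Pixel(0, 0, 255, 0, 0), Pixel(1, 0, 0, 255, 0)]


def color_at(image, x, y) -> Optional[Tuple[int, int, int]]:
    """Look up the color parameter (r, g, b) of the pixel at position (x, y)."""
    for p in image:
        if (p.x, p.y) == (x, y):
            return (p.r, p.g, p.b)
    return None  # no pixel at that position
```

A segmentation or key-point algorithm would then operate on these position and color parameters, e.g. `color_at(image, 0, 0)` yields the color of the top-left pixel.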
As an example, the target object and/or its first parameter are determined in the image by an image segmentation algorithm. A common image segmentation algorithm divides the image into regions according to the similarity or homogeneity of the color parameters of its pixels and then, by region merging, determines the pixels contained in the merged regions as the pixel region of the target object; the key points of the target object and other first parameters can then be determined from that pixel region. Another approach determines a basic region of the target object from its color and/or shape features, searches outward from that region for the contour of the target object based on discontinuities and abrupt changes in color parameters, and extends spatially from the contour position; that is, image segmentation is performed according to the feature points, lines, and surfaces of the image to determine the contour of the target object, the region inside the contour being the target object's pixel region, from which the key points and other first parameters can again be determined. Of course, other image segmentation algorithms may also be used; the embodiments of the present disclosure do not limit the choice, and any existing or future image segmentation algorithm may be used to determine the target object in the image and/or its first parameter.
As yet another example, the target object and/or its first parameter are determined by a key-point localization technique based on the color and/or shape features of the target object. For example, if the target object is a face object, its contour key points may be characterized by color and/or shape features, and feature extraction is then performed in the image, matching these features against the position and/or color parameters of the image's pixels, to locate the contour key points. Since key points occupy only a very small area in an image (typically a few to tens of pixels), the area occupied by the corresponding color and/or shape features is also very limited and local. Two types of feature extraction are currently used: (1) extracting one-dimensional range image features perpendicular to the contour; and (2) extracting two-dimensional range image features from a square neighborhood of the key point. Each can be implemented in various ways, such as ASM and AAM methods, statistical energy-function methods, regression analysis, deep learning, classifier methods, and batch extraction, and the embodiments of the present disclosure are not limited to any particular one. After the contour key points of the face object are identified, the contour of the target object can be found based on the key points and on discontinuities and abrupt changes in the target object's color parameters, or other first parameters of the target object can be determined from the contour key points.
In the embodiment of the present disclosure, optionally, the first parameter of the target object includes, but is not limited to, one or more of the following: a color parameter, position parameter, length parameter, width parameter, shape parameter, scale parameter, type parameter, expression parameter, pose parameter, or other parameter. The first parameter may be calculated, characterized, or determined from the position and/or color parameters of the pixels in the target object's pixel region. As an example, the first parameter includes a length parameter of an eye object, such as the length from the outer to the inner canthus of the left or right eye (if the length parameter is characterized in pixels, it is the number of pixels between the outer and inner canthus). As another example, the first parameter includes a scale parameter of an eye object, such as the ratio between the outer-to-inner-canthus length of the left or right eye and the width of the corresponding face. As yet another example, the first parameter includes a color parameter of a face object, such as the average of the color parameters of all pixels in the face object's pixel region (with RGB channels, the face color is (r, g, b), where r is the sum of the r-channel values of all pixels in the region divided by the number of pixels, and g and b are computed likewise).
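The average-color computation described for the face object's color parameter can be sketched as below. This is an illustrative Python fragment; the sample region values are made up for the example.

```python
def average_color(pixels):
    """Average the r, g, and b channels over all pixels in a region,
    as described for the face object's color parameter: each channel is
    the sum of that channel's values divided by the number of pixels."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (r, g, b)


# Hypothetical (r, g, b) values for three pixels of a face region.
region = [(200, 150, 120), (210, 160, 130), (190, 140, 110)]
# average_color(region) → (200.0, 150.0, 120.0)
```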
As another embodiment, the first parameter of the target object includes a type parameter of a face object, with types such as round face, sharp face, and standard face. The type may be determined from the ratio of the face width at the line extending transversely through the mouth corners to the face width at the cheekbones: for example, when the ratio is less than 0.6 the face is classified as a sharp face, when it is greater than 0.8 it is classified as a round face, and otherwise it is classified as a standard face. The embodiment of the present disclosure does not limit the form or content of the first parameter of the target object; it includes any parameter that can characterize the target object.
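The face-type classification by width ratio can be sketched as follows. The thresholds 0.6 and 0.8 are the example values given above; the function and parameter names are hypothetical.

```python
def classify_face_type(mouth_line_width, cheekbone_width,
                       sharp_threshold=0.6, round_threshold=0.8):
    """Classify a face object's type parameter from the ratio of the face
    width at the mouth-corner line to the face width at the cheekbones."""
    ratio = mouth_line_width / cheekbone_width
    if ratio < sharp_threshold:
        return "sharp"
    if ratio > round_threshold:
        return "round"
    return "standard"


# E.g. a face 50 px wide at the mouth line and 100 px wide at the
# cheekbones has ratio 0.5 < 0.6, so it is classified as "sharp".
```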
Step S103, determining a second parameter of the target object in the image;
In the embodiment of the present disclosure, optionally, the second parameter of the target object likewise includes, but is not limited to, one or more of the following: a color parameter, position parameter, length parameter, width parameter, shape parameter, scale parameter, type parameter, expression parameter, pose parameter, or other parameter. For how to determine the target object and/or its second parameter from the image, refer to the corresponding description of determining the first parameter in step S102, which is not repeated here. Optionally, the second parameter is different from the first parameter.
It should be noted that, although the steps in the embodiment of the present disclosure are numbered, the numbering does not imply execution order. Taking steps S102 and S103 as an example, step S102 may be executed before or after step S103, or the two steps may be executed simultaneously.
In an alternative embodiment, step S103 (determining a second parameter of the target object in the image) comprises: determining the second parameter corresponding to the first parameter according to a preset first correspondence. For example, step S102 is performed first; if the first parameter of the target object includes a color parameter of a human face object, and the first correspondence indicates that the second parameter corresponding to the color parameter includes a face type parameter, then the face type parameter of the face object is determined in step S103. The first correspondence may be realized by storing a correspondence table: after the first parameter is determined in step S102, the corresponding second parameter is found by querying the table.
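The correspondence-table lookup described above can be sketched as a simple dictionary query. This is illustrative Python only; the table contents and parameter names are hypothetical examples, not the disclosed correspondence.

```python
# Hypothetical first correspondence: for each kind of first parameter,
# which second parameter should be determined for the target object.
FIRST_CORRESPONDENCE = {
    "face_color": "face_type",
    "eyebrow_shape": "face_type",
    "eye_length": "face_width",
}


def second_parameter_for(first_parameter_name):
    """Query the stored correspondence table to find the second parameter
    corresponding to a given first parameter (None if no entry exists)."""
    return FIRST_CORRESPONDENCE.get(first_parameter_name)
```

The preset second correspondence of the next embodiment could be implemented the same way, with the roles of the two parameters swapped.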
In yet another alternative embodiment, step S102 (determining a first parameter of the target object in the image) comprises: determining the first parameter corresponding to the second parameter according to a preset second correspondence. As mentioned above, step S102 may be performed after step S103. After the second parameter of the target object is determined in step S103, if the second parameter includes a hairstyle parameter of a human object and the second correspondence indicates that the first parameter corresponding to the hairstyle parameter includes a face parameter, then the face parameter of the human object is determined in step S102. The second correspondence may be implemented in the same way as the first correspondence, which is not repeated here.
Step S104, correcting the first parameter according to the second parameter;
After the first and second parameters of the target object in the image are determined through steps S102 and S103, the first parameter is corrected according to the second parameter in step S104. As described in the background, when performing a "face-thinning" beautification on a human face, the face width parameter of the face object in the image is often corrected toward a preset target width parameter; this does not account for the fact that different individuals have different features, so correcting different individuals toward one uniform target parameter may not yield a good result. In step S104, the first parameter is therefore corrected according to the second parameter of the target object, so that the correction is adapted to the features of the target object in the image and a better correction effect is obtained.
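The overall flow of steps S101 through S104 plus rendering can be sketched as the following outline, in which every callable is a placeholder for the detection, correction, and rendering operations described in this disclosure; the function names are illustrative only.

```python
def render_image(image, detect_first, detect_second, correct, render):
    """Sketch of the five-step method: acquire an image, determine the
    first and second parameters of the target object (steps S102/S103,
    in either order), correct the first parameter according to the
    second (step S104), then render the target object with the
    corrected first parameter."""
    first = detect_first(image)        # step S102
    second = detect_second(image)      # step S103
    corrected = correct(first, second) # step S104
    return render(image, corrected)    # rendering step


# Toy usage: widen a parameter by 1 when the face type is "round".
result = render_image(
    "img",
    detect_first=lambda im: 10,
    detect_second=lambda im: "round",
    correct=lambda f, s: f + 1 if s == "round" else f,
    render=lambda im, p: (im, p),
)
```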
In an alternative embodiment, correcting the first parameter according to the second parameter includes: determining a correction rule associated with the first parameter according to the second parameter, and correcting the first parameter according to the correction rule. The correction rule may be preset. As an example, the first parameter of the target object includes an eyebrow-shape parameter of a face object and the second parameter includes a face-shape parameter of the face object. If the face shape determined in step S103 is a round face, and a round face suits flat and/or thick eyebrows, the preset correction rule may include correcting the eyebrow-shape parameter of the target object toward flat and/or thick eyebrows. Optionally, correction rules may be stored in advance in a correspondence table; after the second parameter is determined, the correction rule associated with the first parameter is found by looking up the table.
Optionally, the modification rule includes a value range of the first parameter, and correcting the first parameter according to the modification rule includes correcting the first parameter according to that value range. Continuing the example above, the first parameter includes an eyebrow shape parameter of the facial object and the second parameter includes a face shape parameter. Since a round face suits a thick eyebrow, the modification rule includes a width range for the corrected eyebrow shape, for example a minimum width and a maximum width. When the eyebrow shape parameter is corrected according to the rule, the corrected eyebrow width is guaranteed to be greater than or equal to the minimum width and less than or equal to the maximum width; for example, the eyebrow width may be corrected to the minimum width, the maximum width, or an intermediate value of the range.
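Correcting a parameter into a value range is, at its simplest, a clamp. A minimal sketch (the 5–9 pixel eyebrow-width range below is an assumed example, not a value from the disclosure):

```python
def clamp_to_range(value, minimum, maximum):
    """Correct a first parameter so it falls inside the rule's value range:
    values below the range become the minimum, values above become the maximum,
    and values already inside the range are kept unchanged."""
    return max(minimum, min(value, maximum))


# An eyebrow width of 3 px corrected into an assumed [5, 9] px range:
corrected_width = clamp_to_range(3, 5, 9)  # -> 5
```

Other policies the text allows (always using the minimum, the maximum, or a midpoint of the range) would replace the `max`/`min` expression accordingly.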
Optionally, before correcting the first parameter according to the second parameter, the method further includes determining a target parameter corresponding to the first parameter. Correcting the first parameter according to the value range then includes: determining the target parameter as the corrected first parameter when the target parameter belongs to the value range; and, when the target parameter does not belong to the value range, correcting the first parameter according to the target parameter and a boundary value of the value range. In embodiments of the present disclosure, the target parameter corresponding to the first parameter may be, for example, a color parameter, a position parameter, a length parameter, a width parameter, a shape parameter, a scale parameter, a type parameter, an expression parameter, a posture parameter, or another parameter. The target parameter may be preset, for example determined through comparison and analysis of a large number of images, so that correcting the first parameter toward the target parameter is likely to produce a good correction effect.
As an example, the first parameter of the target object includes a face length parameter of a human object; for instance, the distance of 60 pixels between the chin key point and the top-of-forehead key point among the key points of the face contour is determined as the face length parameter. The second parameter includes a height parameter of the human object, for example 500 pixels. The modification rule determined according to the height parameter specifies that the ratio of the face length parameter to the height parameter lies in [0.125, 0.2]; from this rule, the value range of the face length parameter is 62.5 to 100 pixels (equivalently, the rule may state this range directly). The first parameter can then be corrected into this range; for example, the 60 pixels may be corrected into the range by a suitable algorithm or rule, such as setting the face length directly to 62.5 pixels or 100 pixels. If the method further determines a target parameter corresponding to the face length parameter, say 65 pixels (for example, with a face width parameter of 50 pixels, a preset rule may set the target parameter to 1.3 times the face width, i.e., 65 pixels), then since 65 pixels belongs to the range of 62.5 to 100 pixels, 65 pixels is determined as the corrected first parameter. If instead the target parameter is 60 pixels (for example, 1.2 times the 50-pixel face width), then since 60 pixels does not belong to the range of 62.5 to 100 pixels, the face length parameter is corrected according to the target parameter and a boundary value of the range, for example to the average of the target parameter and one boundary value (62.5 pixels and/or 100 pixels).
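The worked numeric example above can be sketched end to end. The averaging-with-the-nearer-boundary choice is one of the correction options the text allows, not the only one:

```python
def correct_face_length(face_length, height, ratio_range, target):
    """Correct a face length parameter using a ratio-based modification rule.

    ratio_range -- allowed range of face_length / height, e.g. (0.125, 0.2)
    target      -- target parameter corresponding to the first parameter
    """
    lo = ratio_range[0] * height  # minimum allowed face length in pixels
    hi = ratio_range[1] * height  # maximum allowed face length in pixels
    if lo <= target <= hi:
        # Target parameter belongs to the value range: use it directly.
        return target
    # Otherwise correct according to the target and a boundary value,
    # here the average of the target and the nearer boundary.
    boundary = lo if target < lo else hi
    return (target + boundary) / 2


# Height 500 px, ratio range [0.125, 0.2] -> face length range [62.5, 100] px.
correct_face_length(60, 500, (0.125, 0.2), 65)  # 65 is in range -> 65
correct_face_length(60, 500, (0.125, 0.2), 60)  # out of range -> (60 + 62.5) / 2 = 61.25
```

The original 60-pixel `face_length` is only needed by correction policies that blend the measured value with the rule; this sketch follows the target-parameter branch of the text.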
Optionally, the modification rule includes a modification type corresponding to the first parameter, and correcting the first parameter according to the modification rule includes correcting the first parameter according to that type. As an example, the first parameter determined in step S102 includes an eye parameter of a facial object, such as a contour parameter of the eye, a canthus position parameter, an eye length parameter, an eye shadow color parameter, and/or a widest-width parameter of the eye. The second parameter determined in step S103 includes a face shape parameter, for example a pointed face, which suits a "danfeng" (phoenix) eye type; that is, the modification type included in the rule and corresponding to the eye parameter is the "danfeng eye" type. In step S104, the eye parameter is then corrected according to this type: for example, according to the requirements of the "danfeng eye" type, the outer canthus position parameter is corrected to be higher than the inner canthus, and the ratio of the eye length parameter to the face width is corrected to reach a preset ratio.
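A type-based correction might look like the sketch below. Everything here is an assumed illustration: the field names, the preset eye-length ratio, the 2-pixel offset, and the downward-growing image y-axis are not specified by the disclosure.

```python
TARGET_EYE_RATIO = 0.3  # assumed preset ratio of eye length to face width


def apply_danfeng_eye(eye_params, face_width):
    """Sketch of correcting eye parameters toward a 'danfeng eye' type.

    Assumes image coordinates where y grows downward, so "outer canthus
    higher than inner canthus" means a smaller y value for the outer canthus.
    """
    corrected = dict(eye_params)
    if corrected["outer_canthus_y"] >= corrected["inner_canthus_y"]:
        # Raise the outer canthus slightly above the inner canthus.
        corrected["outer_canthus_y"] = corrected["inner_canthus_y"] - 2
    # Lengthen the eye until the eye-length / face-width ratio reaches the preset ratio.
    corrected["eye_length"] = max(corrected["eye_length"], TARGET_EYE_RATIO * face_width)
    return corrected


apply_danfeng_eye(
    {"outer_canthus_y": 120, "inner_canthus_y": 118, "eye_length": 40},
    face_width=150,
)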
Step S105, rendering the target object in the image according to the corrected first parameter.
The first parameter is corrected in step S104, so in step S105 the target object in the image can be rendered according to the corrected first parameter to implement image processing functions such as beautification. When rendering the target object in step S105, any existing or future image processing technique may be used, for example creating a vector representation of the image through color space conversion, smoothing the image, or changing the position parameters and/or color parameters of pixels in the region of the human object according to the type and content of the first parameter; details are not repeated here.
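As a purely illustrative stand-in for the position-parameter changes mentioned above, the sketch below resamples a horizontal region of one pixel row to a corrected width with nearest-neighbor sampling, in the spirit of a "face thinning" adjustment. A real renderer would warp smoothly across rows and blend region edges; none of this is prescribed by the disclosure.

```python
def rescale_row_region(row, start, end, new_width):
    """Nearest-neighbor resample of row[start:end] to new_width pixels,
    leaving the pixels outside the region untouched."""
    region = row[start:end]
    old_width = len(region)
    # Map each destination index back to a source index in the region.
    resampled = [region[i * old_width // new_width] for i in range(new_width)]
    return row[:start] + resampled + row[end:]


# Shrink a 4-pixel region to 2 pixels (a "face thinning"-style adjustment):
rescale_row_region([0, 1, 2, 3, 4, 5], 1, 5, 2)  # -> [0, 1, 3, 5]
```

Applying the same resampling to every row intersecting the target object's pixel region, with the corrected first parameter supplying `new_width`, gives the overall effect.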
According to the technical solution of the embodiments of the present disclosure, after the first parameter and the second parameter of the target object in the image are determined, the first parameter can be corrected according to the second parameter, and the target object rendered based on the corrected first parameter. In other words, the first parameter of a target object is corrected according to other features of that same object before rendering, so different target objects can be rendered with distinct, more appropriate first parameters, making the rendering more flexible.
Fig. 2 is a flowchart of a second embodiment of the method for rendering an image according to an embodiment of the present disclosure. In this embodiment, after the target object in the image is rendered according to the corrected first parameter in step S105, the method further includes step S201: displaying the image and/or storing the image. Since step S105 implements the rendering of the target object, for example beautification of a human object in an image captured by a capturing device, the beautified image can be displayed and/or stored in step S201, so that the user can instantly view the rendered result and persist the rendered image.
Fig. 3 is a schematic structural diagram of an embodiment of an apparatus 300 for rendering an image according to an embodiment of the present disclosure. As shown in Fig. 3, the apparatus 300 includes an image acquisition module 301, a first parameter determination module 302, a second parameter determination module 303, a correction module 304, and a rendering module 305. The image acquisition module 301 is configured to acquire an image; the first parameter determination module 302 is configured to determine a first parameter of a target object in the image; the second parameter determination module 303 is configured to determine a second parameter of the target object in the image; the correction module 304 is configured to correct the first parameter according to the second parameter; and the rendering module 305 is configured to render the target object in the image according to the corrected first parameter.
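The module structure of Fig. 3 can be mirrored as a simple composition of callables. The class, names, and the trivial stand-in functions wired in below are illustrative assumptions, not the apparatus implementation:

```python
class RenderingApparatus:
    """Skeleton mirroring the modules of Fig. 3; real modules would hold
    detection models and rendering code instead of plain callables."""

    def __init__(self, acquire, first_param, second_param, correct, render):
        self.acquire = acquire            # image acquisition module 301
        self.first_param = first_param    # first parameter determination module 302
        self.second_param = second_param  # second parameter determination module 303
        self.correct = correct            # correction module 304
        self.render = render              # rendering module 305

    def run(self):
        image = self.acquire()
        p1 = self.first_param(image)
        p2 = self.second_param(image)
        corrected = self.correct(p1, p2)
        return self.render(image, corrected)


# Wiring with stand-in callables (face length 60 px, height 500 px,
# ratio range [0.125, 0.2] as in the earlier example):
app = RenderingApparatus(
    acquire=lambda: "image",
    first_param=lambda img: 60,
    second_param=lambda img: 500,
    correct=lambda p1, p2: min(max(p1, 0.125 * p2), 0.2 * p2),
    render=lambda img, p: (img, p),
)
app.run()  # -> ("image", 62.5)
```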
In an optional embodiment, the apparatus for rendering an image further includes a display module 306 and/or a storage module 307, where the display module 306 is configured to display the image and the storage module 307 is configured to store the image.
The apparatus shown in Fig. 3 may perform the methods of the embodiments shown in Fig. 1 and/or Fig. 2. For parts not described in detail in this embodiment, and for the implementation process and technical effects of the solution, refer to the related descriptions of the embodiments shown in Fig. 1 and/or Fig. 2; they are not repeated here.
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus or a communication line 404. An input/output (I/O) interface 405 is also connected to the bus or communication line 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method of rendering an image in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (6)

1. A method of rendering an image, comprising:
acquiring an image;
determining a first parameter of a target object in the image by a position parameter of a pixel in a pixel region of the target object in the image;
determining a second parameter of a target object in the image by a position parameter of a pixel in a pixel region of the target object in the image;
determining a modification rule associated with the first parameter according to the second parameter; wherein the correction rule comprises a ratio value range of the first parameter and the second parameter;
determining the value range of the first parameter according to the ratio value range of the first parameter and the second parameter;
determining a target parameter corresponding to the first parameter;
determining the target parameter as the corrected first parameter under the condition that the target parameter belongs to the value range of the first parameter;
determining an average value of the target parameter and a boundary value of the value range of the first parameter as the corrected first parameter under the condition that the target parameter does not belong to the value range of the first parameter;
rendering the target object in the image according to the modified first parameter.
2. The method of rendering an image of claim 1, wherein determining the second parameter of the target object in the image comprises:
and determining the second parameter corresponding to the first parameter according to a preset first corresponding relation.
3. The method of rendering an image of claim 1, wherein determining a first parameter of a target object in the image comprises:
and determining the first parameter corresponding to the second parameter according to a preset second corresponding relation.
4. An apparatus for rendering an image, comprising:
the image acquisition module is used for acquiring an image;
a first parameter determination module for determining a first parameter of a target object in the image by a position parameter of a pixel in a pixel region of the target object in the image;
a second parameter determination module for determining a second parameter of a target object in the image by a position parameter of a pixel in a pixel region of the target object in the image;
the correction module is used for determining a correction rule associated with the first parameter according to the second parameter; wherein the correction rule comprises a ratio value range of the first parameter and the second parameter; determining the value range of the first parameter according to the ratio value range of the first parameter and the second parameter; determining a target parameter corresponding to the first parameter; determining the target parameter as the corrected first parameter under the condition that the target parameter belongs to the value range of the first parameter; and determining an average value of the target parameter and a boundary value of the value range of the first parameter as the corrected first parameter under the condition that the target parameter does not belong to the value range of the first parameter;
and the rendering module is used for rendering the target object in the image according to the corrected first parameter.
5. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executed implements a method of rendering an image according to any of claims 1-3.
6. A non-transitory computer readable storage medium storing computer readable instructions which, when executed by a computer, cause the computer to perform the method of rendering an image of any one of claims 1-3.
CN201910331282.6A 2019-04-23 2019-04-23 Method and device for rendering image, electronic equipment and computer readable storage medium Active CN110097622B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910331282.6A CN110097622B (en) 2019-04-23 2019-04-23 Method and device for rendering image, electronic equipment and computer readable storage medium
PCT/CN2020/074443 WO2020215854A1 (en) 2019-04-23 2020-02-06 Method and apparatus for rendering image, electronic device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910331282.6A CN110097622B (en) 2019-04-23 2019-04-23 Method and device for rendering image, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110097622A CN110097622A (en) 2019-08-06
CN110097622B true CN110097622B (en) 2022-02-25

Family

ID=67445687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910331282.6A Active CN110097622B (en) 2019-04-23 2019-04-23 Method and device for rendering image, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110097622B (en)
WO (1) WO2020215854A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097622B (en) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103605975A (en) * 2013-11-28 2014-02-26 小米科技有限责任公司 Image processing method and device and terminal device
CN104715236A (en) * 2015-03-06 2015-06-17 广东欧珀移动通信有限公司 Face beautifying photographing method and device

Family Cites Families (17)

Publication number Priority date Publication date Assignee Title
JP2004062651A (en) * 2002-07-30 2004-02-26 Canon Inc Image processor, image processing method, its recording medium and its program
WO2004110264A1 (en) * 2003-06-11 2004-12-23 Kose Corporation Skin evaluation method and image simulation method
KR20100056270A (en) * 2008-11-19 2010-05-27 삼성전자주식회사 Digital image signal processing method for color correction and digital image signal processing apparatus for applying the method
JP2013179464A (en) * 2012-02-28 2013-09-09 Nikon Corp Electronic camera
CN105279487B (en) * 2015-10-15 2022-03-15 Oppo广东移动通信有限公司 Method and system for screening beauty tools
CN106169172A (en) * 2016-07-08 2016-11-30 深圳天珑无线科技有限公司 A kind of method and system of image procossing
CN108229278B (en) * 2017-04-14 2020-11-17 深圳市商汤科技有限公司 Face image processing method and device and electronic equipment
CN109419140A (en) * 2017-08-31 2019-03-05 丽宝大数据股份有限公司 Recommend eyebrow shape display methods and electronic device
CN107680033B (en) * 2017-09-08 2021-02-19 北京小米移动软件有限公司 Picture processing method and device
CN107886484B (en) * 2017-11-30 2020-01-10 Oppo广东移动通信有限公司 Beautifying method, beautifying device, computer-readable storage medium and electronic equipment
CN108665521B (en) * 2018-05-16 2020-06-02 京东方科技集团股份有限公司 Image rendering method, device, system, computer readable storage medium and equipment
CN108734126B (en) * 2018-05-21 2020-11-13 深圳市梦网科技发展有限公司 Beautifying method, beautifying device and terminal equipment
CN108876732A (en) * 2018-05-25 2018-11-23 北京小米移动软件有限公司 Face U.S. face method and device
CN108765352B (en) * 2018-06-01 2021-07-16 联想(北京)有限公司 Image processing method and electronic device
CN108921856B (en) * 2018-06-14 2022-02-08 北京微播视界科技有限公司 Image cropping method and device, electronic equipment and computer readable storage medium
CN109584151B (en) * 2018-11-30 2022-12-13 腾讯科技(深圳)有限公司 Face beautifying method, device, terminal and storage medium
CN110097622B (en) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN110097622A (en) 2019-08-06
WO2020215854A1 (en) 2020-10-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.