CN108495032B - Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN108495032B
CN108495032B (application CN201810254539.8A)
Authority
CN
China
Prior art keywords
image, distance, shooting, virtual model, projection
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810254539.8A
Other languages
Chinese (zh)
Other versions
CN108495032A (en)
Inventor
蓝和
谭筱
王健
邹奎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810254539.8A
Publication of CN108495032A
Application granted
Publication of CN108495032B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing device, a storage medium and electronic equipment. The image processing method is applied to a first electronic device and comprises the following steps: starting an image shooting function and acquiring a currently shot image; in the process of acquiring the currently shot image, receiving a three-dimensional image sent by a second electronic device, wherein the three-dimensional image is obtained by the second electronic device shooting a target shooting object; generating a virtual model of the target shooting object according to the three-dimensional image; and projecting the virtual model onto the currently shot image to generate a composite image, which is displayed in the preview frame. A group photo of users in different places is thus obtained without post-production matting, so the method is simple and highly flexible.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the development of terminal technology, the functions a terminal can support have become increasingly powerful. For example, a terminal equipped with a camera can support a photographing function.
In many scenarios, a user takes pictures with the terminal's shooting function. For example, when traveling or meeting a friend, the user can record the scene through the terminal's shooting function; the terminal stores the shot image in an album, so that when the user wants to recall the occasion, the image can be viewed from the album. However, there is no good way for users in different places to take a group photo together: the usual approach is to photograph the two people separately and then composite them into one image in post-production by matting.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and electronic equipment, which can better complete the co-shooting of users at different places and have good shooting effect.
The embodiment of the application provides an image processing method, which is applied to first electronic equipment and comprises the following steps:
starting an image shooting function and acquiring a current shooting image;
receiving a three-dimensional image sent by second electronic equipment in the shooting process of a current shot image, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic equipment;
generating a virtual model of the target shooting object according to the three-dimensional image;
projecting the virtual model on a currently captured image to generate a composite image, and displaying the composite image in a preview frame.
An embodiment of the present application further provides an image processing apparatus, applied to a first electronic device, including:
the acquisition module is used for starting an image shooting function and acquiring a current shot image;
the receiving module is used for receiving a three-dimensional image sent by second electronic equipment in the process of acquiring a current shot image, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic equipment;
the generating module is used for generating a virtual model of the target shooting object according to the three-dimensional image;
and the projection module is used for projecting the virtual model on the current shot image to generate a composite image and displaying the composite image in a preview frame.
The embodiment of the application also provides a storage medium, wherein a plurality of instructions are stored in the storage medium, and the instructions are suitable for being loaded by a processor to execute any one of the image processing methods.
An embodiment of the present application further provides an electronic device, which includes a processor and a memory, where the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is used in any one of the steps of the image processing method described above.
The image processing method, device, storage medium and electronic equipment are applied to a first electronic device. An image shooting function is started and a currently shot image is acquired. In the process of acquiring the currently shot image, a three-dimensional image sent by a second electronic device is received, the three-dimensional image being obtained by the second electronic device shooting a target shooting object. A virtual model of the target shooting object is generated from the three-dimensional image and projected onto the currently shot image to generate a composite image, which is displayed in the preview frame. A group photo of users in different places is thus obtained without post-production matting.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of an image processing system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic view of a preview box in the first electronic device according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a projection area provided in an embodiment of the present application.
Fig. 6 is a schematic diagram of the compositing process of a person C projected in front of the cow according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a person C projected behind the cow according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a projection module 40 according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image processing method, an image processing device, a storage medium and electronic equipment.
Referring to fig. 1, fig. 1 provides an image processing system, where the image processing system may include any one of the image processing apparatuses provided in the embodiments of the present application, and the image processing apparatus may be integrated in a first electronic device and a second electronic device, and the first electronic device and the second electronic device may include a device with a shooting function, such as a smartphone and a tablet computer.
The first electronic device can start an image shooting function, obtain a current shooting image, receive a three-dimensional image sent by a second electronic device in the process of obtaining the current shooting image, wherein the three-dimensional image is obtained by shooting a target shooting object by the second electronic device, generate a virtual model of the target shooting object according to the three-dimensional image, project the virtual model on the current shooting image to generate a composite image, and display the composite image in a preview frame.
For example, in fig. 1, the first electronic device and the second electronic device are both smartphones, each having a camera and a display screen: the camera captures images, the display screen shows the composite image, and the camera of the second electronic device is a dual camera. Specifically, when the user of the first electronic device clicks a "group photo" key on the display interface, a camera start instruction is generated and transmitted simultaneously to the first device's own camera and to the dual camera of the second electronic device, controlling both to shoot images. The image shot by the first electronic device serves as the real-scene image, while the three-dimensional image shot by the second electronic device is used by the first electronic device to generate a virtual model; the first electronic device then projects the generated virtual model into the real-scene image, achieving a group photo between a real scene and a remote real subject.
As shown in fig. 2, fig. 2 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and the specific flowchart is as follows:
101. and starting an image shooting function and acquiring a current shot image.
In this embodiment, the user may trigger the first electronic device to shoot an image by clicking a key, such as a "group photo" key, or may start the remote group-photo function by voice or a designated gesture.
102. And in the process of acquiring the current shot image, receiving a three-dimensional image sent by second electronic equipment, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic equipment.
In this embodiment, the first electronic device is the master device and the second electronic device is the slave device. When a remote group photo is needed, the master and slave devices establish a communication connection in advance, and the shooting function is then started from the master device; at that point the slave device also shoots at the same time and transmits the three-dimensional image it shoots to the master device in real time. The three-dimensional image is shot by the slave device's dual cameras: for example, the two cameras shoot the target shooting object simultaneously to obtain two images, the depth-of-field information of the target shooting object is calculated from the fixed distance between the two cameras and the gray-level difference between the two images, and the three-dimensional image is generated from that depth-of-field information, as sketched below.
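The patent stops at this level of detail. As a minimal sketch of the standard stereo relation it alludes to, assuming the gray-level matching has already produced a per-pixel disparity map (all names and numbers below are illustrative, not taken from the patent):

```python
import numpy as np

def depth_from_stereo(disparity_px, baseline_m, focal_px):
    """Pinhole stereo relation: depth = focal length * baseline / disparity.

    disparity_px : per-pixel horizontal shift between the two views,
                   found by matching gray levels between the images
    baseline_m   : fixed distance between the two cameras
    focal_px     : focal length expressed in pixels
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(d, np.inf)     # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# A 5 cm camera spacing and a 1400 px focal length place a subject with
# 35 px of disparity at 1400 * 0.05 / 35 = 2.0 m.
print(depth_from_stereo(np.array([35.0, 14.0]), 0.05, 1400.0))  # [2. 5.]
```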
103. And generating a virtual model of the target shooting object according to the three-dimensional image.
In this embodiment, the corresponding virtual model may be generated from the texture information and depth information contained in the three-dimensional image. Note that this operation may be performed by either the first electronic device or the second electronic device, which is not limited here. Since a single three-dimensional image can only yield a model of the side visible from one shooting angle, generating a complete virtual model of the target shooting object requires three-dimensional images shot from multiple angles, such as front and rear, or left and right, which are then merged into the complete virtual model. A sketch of the per-view step follows.
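The patent does not spell out how texture and depth become a model. One plausible minimal realization, offered here only as an assumption, back-projects each view's depth map into a colored point set; the camera intrinsics fx, fy, cx, cy are inputs the patent never mentions:

```python
import numpy as np

def depth_to_colored_points(depth, rgb, fx, fy, cx, cy):
    """Back-project one RGB-D view into a colored 3-D point set.

    A single view covers only the surface visible from its shooting
    angle; point sets from several angles (front/rear or left/right)
    would be registered and merged into the complete virtual model.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx                        # pinhole back-projection
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)                      # texture carried per point
    keep = np.isfinite(depth).reshape(-1)            # drop pixels with no depth
    return points[keep], colors[keep]
```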
104. The virtual model is projected on the currently captured image to generate a composite image, and the composite image is displayed in a preview frame.
In this embodiment, the user may set the projection position and projection view angle, for example designating them by voice, touch or gesture, after which the virtual model is projected to that position at that view angle. Further, to increase fidelity and avoid the jarring effect of a virtual model whose projected size differs greatly from the size of the real-scene figures, step 104 may further include:
1-1, detecting the shooting distance of at least one shot object in the current shot image.
In this embodiment, the current captured image may also be a three-dimensional image, that is, the camera in the first electronic device may also be a dual camera, and the shooting distance (that is, the depth of field) of the object to be captured may be calculated by using the dual cameras.
1-2, determining the projection point of the virtual model on the current shot image.
In this embodiment, the currently-captured image may be displayed in the preview frame in real time, and the user may select a desired projection point in the preview frame by clicking or the like.
And 1-3, projecting the virtual model on the current shot image according to the shooting distance and the projection point.
For example, step 1-3 may specifically include:
1-3-1, determining a target photographic object from the at least one photographed object, and determining a photographing distance of the target photographic object as a first distance.
In this embodiment, the shot object may be a person, an animal, or an inanimate object. With a single shot object, its shooting distance can be used directly as the first distance. With multiple shot objects, whose distances from the first electronic device usually differ, the user may freely select the shooting distance of one of them as the projection distance used to scale the virtual model; the selection may be made by clicking, voice, gesture, or the like.
1-3-2, adjusting the size of the virtual model according to the first distance.
In this embodiment, an adjustment ratio corresponding to the first distance may be obtained and used to scale the virtual model. The adjustment ratio is preset by the user: for example, the user associates various shooting distances with corresponding adjustment ratios in a local library in advance, where each ratio may be determined from the real size of an object and its display size in the image. A sketch of this lookup follows.
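A minimal sketch of that lookup-and-scale step, assuming a nearest-key table; the distance and ratio values below are invented for illustration, since the patent leaves them to the user:

```python
import numpy as np

# Hypothetical user-stored table: shooting distance (m) -> adjustment ratio.
ADJUST_RATIOS = {1.0: 1.00, 2.0: 0.55, 4.0: 0.30}

def adjustment_ratio(first_distance_m):
    # Use the stored entry whose distance key is nearest the first distance.
    nearest = min(ADJUST_RATIOS, key=lambda d: abs(d - first_distance_m))
    return ADJUST_RATIOS[nearest]

def scale_virtual_model(vertices, first_distance_m):
    """Uniformly scale the model's N x 3 vertex array by the looked-up ratio."""
    return np.asarray(vertices) * adjustment_ratio(first_distance_m)

print(scale_virtual_model([[0.0, 1.8, 0.0]], 2.0))   # [[0.   0.99 0.  ]]
```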
And 1-3-3, projecting the adjusted virtual model by taking the projection point as a projection center.
In this embodiment, consider that when two people standing one behind the other are photographed in a real environment, the person in front usually blocks part or even all of the person behind. To increase fidelity, this occlusion should also be handled when the virtual model is projected; that is, step 1-3-3 may specifically include:
determining the projection area of the adjusted virtual model on the current shot image by taking the projection point as a projection center;
detecting whether a photographed object overlapping with the projection area exists;
if such a shot object exists, taking the shooting distance of the overlapping shot object as a second distance, and acquiring the overlapping area;
determining a displayable region in the projection region according to the first distance, the second distance and the overlapping region;
and projecting the virtual model corresponding to the displayable area.
In this embodiment, the projection point may serve as the projection center, or it may be mapped to another point of the virtual model, such as its highest or lowest point. The overlapping region is the region where the shot object and the virtual model overlap when displayed, and what is shown there is determined by the shooting distances of the shot object and the virtual model; the displayable region is the region in which the virtual model can be displayed normally.
For example, the step of "determining a displayable region from the projection region according to the first distance, the second distance, and the overlapping region" may specifically include:
when the first distance is greater than the second distance, taking the area of the projection area other than the overlapping area as the displayable area;
and when the first distance is not greater than the second distance, taking the entire projection area as the displayable area.
In this embodiment, when the shot object and the virtual model overlap in display, which of them is in front and which is behind is determined by their shooting distances; the one behind is partially shielded by the one in front, and the shielded part is exactly their overlapping area on the image. A sketch of this rule follows.
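A minimal sketch of this front/behind rule, with the regions rendered as boolean pixel masks; the mask representation is an assumption, since the patent does not prescribe one:

```python
import numpy as np

def displayable_mask(projection_mask, overlap_mask,
                     first_distance, second_distance):
    """Resolve occlusion between the virtual model and a real shot object.

    projection_mask : bool H x W, pixels the adjusted model projects onto
    overlap_mask    : bool H x W, its intersection with the shot object
    If the model is farther away (first distance greater), the subject in
    front shields the overlap; otherwise the whole projection area shows.
    """
    if first_distance > second_distance:
        return projection_mask & ~overlap_mask
    return projection_mask

def composite(scene_rgb, model_rgb, mask):
    """Paint the model's pixels over the real-scene image."""
    out = scene_rgb.copy()
    out[mask] = model_rgb[mask]
    return out
```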
It is easy to understand that the composite image may be displayed not only in the preview frame of the first electronic device, but also in the preview frame of the second electronic device, that is, after the step 104, the image processing method may further include:
generating a display instruction carrying the synthetic image, wherein the display instruction is used for indicating to display the synthetic image;
and sending the display instruction to the second electronic equipment.
In this embodiment, until the remote group-photo session ends, for example while the user of the first electronic device has not yet clicked the shooting confirmation key, the second electronic device transmits its shot images to the first electronic device in real time, and likewise receives and displays the composite image returned by the first electronic device in real time, so that both users can view the group photo at the same time. A sketch of packaging such a display instruction follows.
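As a rough sketch of what a display instruction carrying the composite image could look like on the wire; the patent specifies no protocol, so the length-prefixed JSON-over-TCP format below is entirely an assumption:

```python
import base64
import json
import socket

def send_display_instruction(composite_jpeg, host, port):
    """Package the composite image in a display instruction and push it
    to the second electronic device (hypothetical wire format)."""
    message = json.dumps({
        "type": "display",  # tells the receiver to show the carried image
        "image": base64.b64encode(composite_jpeg).decode("ascii"),
    }).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(len(message).to_bytes(4, "big"))   # 4-byte length prefix
        conn.sendall(message)
```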
As can be seen from the above, the image processing method provided in this embodiment is applied to a first electronic device. An image shooting function is started and a currently shot image is acquired; while it is being acquired, a three-dimensional image sent by a second electronic device is received, the three-dimensional image being obtained by the second electronic device shooting a target shooting object. A virtual model of the target shooting object is generated from the three-dimensional image, projected onto the currently shot image to form a composite image, and the composite image is displayed in the preview frame. A group photo of users in different places is thus obtained without post-production matting.
This embodiment is described from the perspective of an image processing apparatus. Specifically, the image processing apparatus is integrated in a first electronic device and a second electronic device, and the cameras of both devices are dual cameras; this case is described in detail below.
Referring to fig. 3, a specific flow of an image processing method may be as follows:
201. the first electronic device starts an image shooting function and obtains a current shooting image.
202. The method comprises the steps that a first electronic device receives a three-dimensional image sent by a second electronic device in the process of acquiring a current shot image, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic device.
For example, referring to fig. 1, when user A of the first electronic device clicks a "group photo" button on the display interface, a group-photo instruction is generated and transmitted simultaneously to user A's own camera and to the camera of the second electronic device, triggering both to shoot images; the second electronic device then transmits the three-dimensional image it shoots to the first electronic device.
203. The first electronic device generates a virtual model of the target photographic subject from the three-dimensional image.
For example, the first electronic device generates a corresponding virtual model according to texture information and depth information included in the three-dimensional image.
204. The first electronic equipment detects the shooting distance of at least one shot object in the current shot image and determines the projection point of the virtual model on the current shot image.
For example, referring to fig. 4, the currently shot image may contain three shot objects: a cow, a person A, and a person B, and the virtual model may be a three-dimensional model of a person C. Specifically, while the shot image is displayed in the preview frame, the user may click in the preview frame, and the image position corresponding to the click is used as the projection point.
205. The first electronic equipment determines a target shooting object from the at least one shot object, determines the shooting distance of the target shooting object as a first distance, and then adjusts the size of the virtual model according to the first distance.
206. And the first electronic equipment determines the projection area of the adjusted virtual model on the current shot image by taking the projection point as a projection center.
For example, the first electronic device determines an adjustment ratio according to the shooting distance of the target shooting object and scales the three-dimensional model of person C by that ratio. Specifically, the user may select the target shooting object by clicking, voice, gesture, or the like, and may also set a projection direction (the default is front projection); when projecting with the projection point as the projection center, the projection area is then determined in combination with the projection direction.
207. The first electronic device detects whether there is a subject overlapping the projection area, and if so, executes step 208 described below, and if not, sets the projection area as a displayable area.
For example, referring to fig. 5, the projection area is indicated by a dashed line: when the projection point is M1, the display position of the cow clearly overlaps the projection area, and when the projection point is M2, no shot object overlaps the projection area.
208. The first electronic device takes the shooting distance of the overlapping shot object as a second distance and acquires the overlapping area. When the first distance is greater than the second distance, the area of the projection area other than the overlapping area is taken as the displayable area; when the first distance is not greater than the second distance, the entire projection area is taken as the displayable area.
209. The first electronic device projects the virtual model corresponding to the displayable region to generate a composite image.
For example, if the user wants to project the virtual model of person C in front of the cow at projection point M1, the shooting distance of B may be used as the first distance, and the resulting composite image is shown in fig. 6; if the user wants to project person C behind the cow, the shooting distance of A may be used as the first distance, and the resulting composite image is shown in fig. 7.
210. The first electronic device displays the composite image in a preview frame, generates a display instruction carrying the composite image, and then sends the display instruction to the second electronic device, wherein the display instruction is used for indicating the composite image to be displayed.
For example, after the composite image is generated and displayed, when the user clicks the "OK" button in the preview frame, the first electronic device may save the composite image to an album and, at the same time, send it to the second electronic device for display.
As can be seen from the above, the image processing method provided by this embodiment is applied to a first electronic device and a second electronic device. The first electronic device starts an image shooting function and acquires a currently shot image; while acquiring it, the first electronic device receives a three-dimensional image sent by the second electronic device, obtained by the second electronic device shooting a target shooting object. A corresponding virtual model is generated from the texture information and depth information contained in the three-dimensional image. The first electronic device then detects the shooting distance of at least one shot object in the currently shot image and determines the projection point of the virtual model on that image. A target shooting object is determined from the at least one shot object, its shooting distance is taken as a first distance, and the size of the virtual model is adjusted accordingly. With the projection point as the projection center, the projection area of the adjusted virtual model on the currently shot image is determined, and it is detected whether any shot object overlaps the projection area. If so, the shooting distance of the overlapping shot object is taken as a second distance and the overlapping area is acquired; when the first distance is greater than the second distance, the area of the projection area other than the overlapping area is taken as the displayable area, and when the first distance is not greater than the second distance, the entire projection area is taken as the displayable area. Finally, the composite image is displayed in the preview frame, and a display instruction carrying the composite image is generated and sent to the second electronic device, instructing it to display the composite image. A group photo of users in different places is thus obtained without post-production matting.
Based on the method described in the foregoing embodiments, this embodiment provides a further description from the perspective of an image processing apparatus, which may be implemented as a stand-alone entity or integrated in an electronic device such as a terminal, where the terminal may include a mobile phone, a tablet computer, and the like.
Referring to fig. 8, fig. 8 specifically illustrates an image processing apparatus provided in an embodiment of the present application, which is applied to a first electronic device, and the image processing apparatus may include: the system comprises a starting module 10, a receiving module 20, a generating module 30 and a projecting module 40, wherein:
(1) start module 10
And the starting module 10 is used for starting the image shooting function and acquiring the current shot image.
In this embodiment, the user may trigger the first electronic device to shoot an image by clicking a key, such as a "group photo" key, or may start the remote group-photo function by voice or a designated gesture.
(2) Receiving module 20
The receiving module 20 is configured to receive a three-dimensional image sent by a second electronic device in a process of acquiring a current captured image, where the three-dimensional image is obtained by capturing a target captured object by the second electronic device.
In this embodiment, the first electronic device is the master device and the second electronic device is the slave device. When a remote group photo is needed, the master and slave devices establish a communication connection in advance, and the shooting function is then started from the master device; at that point the slave device also shoots at the same time and transmits the three-dimensional image it shoots to the master device in real time. The three-dimensional image is shot by the slave device's dual cameras: for example, the two cameras shoot the target shooting object simultaneously to obtain two images, the depth-of-field information of the target shooting object is calculated from the fixed distance between the two cameras and the gray-level difference between the two images, and the three-dimensional image is generated from that depth-of-field information.
(3) Generation module 30
And a generating module 30, configured to generate a virtual model of the target object according to the three-dimensional image.
In this embodiment, the corresponding virtual model may be generated from the texture information and depth information contained in the three-dimensional image. Note that this operation may be performed by either the first electronic device or the second electronic device, which is not limited here. Since a single three-dimensional image can only yield a model of the side visible from one shooting angle, generating a complete virtual model of the target shooting object requires three-dimensional images shot from multiple angles, such as front and rear, or left and right, which are then merged into the complete virtual model.
(4) Projection module 40
And a projection module 40 for projecting the virtual model on the current captured image to generate a composite image and displaying the composite image in the preview frame.
In this embodiment, the user may set the projection position and projection view angle, for example designating them by voice, touch or gesture, after which the virtual model is projected to that position at that view angle. Further, to increase fidelity and avoid the jarring effect of a virtual model whose projected size differs greatly from the size of the real-scene figures, referring to fig. 9, the projection module 40 may specifically include:
and a detection sub-module 41 for detecting a shooting distance of at least one object in the current shot image.
In this embodiment, the current captured image may also be a three-dimensional image, that is, the camera in the first electronic device may also be a dual camera, and the shooting distance (that is, the depth of field) of the object to be captured may be calculated by using the dual cameras.
A determination submodule 42 for determining a projection point of the virtual model on the currently captured image;
in this embodiment, the currently-captured image may be displayed in the preview frame in real time, and the user may select a desired projection point in the preview frame by clicking or the like.
And a projection sub-module 43 for projecting the virtual model on the currently captured image according to the capturing distance and the projection point.
For example, the projection submodule 43 may be specifically configured to:
1-3-1, determining a target photographic object from the at least one photographed object, and determining a photographing distance of the target photographic object as a first distance.
In this embodiment, the shot object may be a person, an animal, or an inanimate object. With a single shot object, its shooting distance can be used directly as the first distance. With multiple shot objects, whose distances from the first electronic device usually differ, the user may freely select the shooting distance of one of them as the projection distance used to scale the virtual model; the selection may be made by clicking, voice, gesture, or the like.
1-3-2, adjusting the size of the virtual model according to the first distance.
In this embodiment, an adjustment ratio corresponding to the first distance may be obtained and used to scale the virtual model. The adjustment ratio is preset by the user: for example, the user associates various shooting distances with corresponding adjustment ratios in a local library in advance, where each ratio may be determined from the real size of an object and its display size in the image.
And 1-3-3, projecting the adjusted virtual model by taking the projection point as a projection center.
In this embodiment, consider that when two people standing one behind the other are photographed in a real environment, the person in front usually blocks part or even all of the person behind. To increase fidelity, this occlusion should also be handled when the virtual model is projected; that is, the projection submodule 43 may be used for:
determining the projection area of the adjusted virtual model on the current shot image by taking the projection point as a projection center;
detecting whether a photographed object overlapping with the projection area exists;
if such a shot object exists, taking the shooting distance of the overlapping shot object as a second distance, and acquiring the overlapping area;
determining a displayable region from the projection region according to the first distance, the second distance and the overlapping region;
and projecting the virtual model corresponding to the displayable area.
In this embodiment, the projection point may serve as the projection center, or it may be mapped to another point of the virtual model, such as its highest or lowest point. The overlapping region is the region where the shot object and the virtual model overlap when displayed, and what is shown there is determined by the shooting distances of the shot object and the virtual model; the displayable region is the region in which the virtual model can be displayed normally.
Further, the projection sub-module 43 may be configured to:
when the first distance is greater than the second distance, taking the area of the projection area other than the overlapping area as the displayable area;
and when the first distance is not greater than the second distance, taking the entire projection area as the displayable area.
In this embodiment, when the shot object and the virtual model overlap in display, which of them is in front and which is behind is determined by their shooting distances; the one behind is partially shielded by the one in front, and the shielded part is exactly their overlapping area on the image.
It is easy to understand that the composite image can be displayed not only in the preview frame of the first electronic device, but also in the preview frame of the second electronic device, that is, the projection module 40 can be further configured to:
generating a display instruction carrying the synthetic image, wherein the display instruction is used for indicating to display the synthetic image;
and sending the display instruction to the second electronic equipment.
In this embodiment, until the remote group-photo session ends, for example while the user of the first electronic device has not yet clicked the shooting confirmation key, the second electronic device transmits its shot images to the first electronic device in real time, and likewise receives and displays the composite image returned by the first electronic device in real time, so that both users can view the group photo at the same time.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, the image processing apparatus provided in this embodiment is applied to a first electronic device. The starting module 10 starts the image shooting function and acquires a currently shot image; during its acquisition, the receiving module 20 receives a three-dimensional image sent by the second electronic device, obtained by the second electronic device shooting a target shooting object. The generating module 30 generates a virtual model of the target shooting object from the three-dimensional image, and the projection module 40 projects the virtual model onto the currently shot image to generate a composite image, which is displayed in the preview frame. A group photo of users in different places is thus obtained without post-production matting; the method is simple, and because the virtual model is generated from a real user image, the model's fidelity and the group-photo effect are good.
In addition, the embodiment of the application also provides electronic equipment which can be equipment such as a smart phone and a tablet computer. As shown in fig. 10, the electronic device 900 includes a processor 901, a memory 902, a display 903, and a control circuit 904. The processor 901 is electrically connected to the memory 902, the display 903, and the control circuit 904.
The processor 901 is the control center of the electronic device 900. It connects the various parts of the entire electronic device through interfaces and lines, and performs overall monitoring of the device by running or loading applications stored in the memory 902 and calling data stored in the memory 902, thereby executing the device's functions and processing data.
In this embodiment, the processor 901 in the electronic device 900 loads instructions corresponding to processes of one or more application programs into the memory 902 according to the following steps, and the processor 901 runs the application programs stored in the memory 902, so as to implement various functions:
starting an image shooting function and acquiring a current shooting image;
in the process of acquiring the current shot image, receiving a three-dimensional image sent by second electronic equipment, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic equipment;
generating a virtual model of the target shooting object according to the three-dimensional image;
the virtual model is projected on the currently captured image to generate a composite image, and the composite image is displayed in a preview frame.
Memory 902 may be used to store applications and data. The memory 902 stores applications containing instructions executable in the processor. The application programs may constitute various functional modules. The processor 901 executes various functional applications and data processing by running an application program stored in the memory 902.
The display 903 may be used to display information input by or provided to the user as well as various graphical user interfaces of the terminal, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 904 is electrically connected to the display 903, and is configured to control the display 903 to display information.
In some embodiments, as shown in fig. 10, the electronic device 900 further comprises: a radio frequency circuit 905, an input unit 906, an audio circuit 907, a sensor 908, and a power supply 909. The processor 901 is electrically connected to the rf circuit 905, the input unit 906, the audio circuit 907, the sensor 908, and the power source 909.
The radio frequency circuit 905 is configured to transmit and receive radio frequency signals, so as to establish a wireless connection with network equipment or other electronic devices and exchange signals with them.
The input unit 906 may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 906 may include a fingerprint recognition module.
The audio circuit 907 may provide an audio interface between the user and the terminal through a speaker, microphone, or the like.
The electronic device 900 may also include at least one sensor 908, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
The power supply 909 is used to supply power to the various components of the electronic device 900. In some embodiments, the power source 909 may be logically connected to the processor 901 through a power management system, so that functions of managing charging, discharging, and power consumption management are realized through the power management system.
Although not shown in fig. 10, the electronic device 900 may further include a camera, a bluetooth module, etc., which are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by associated hardware controlled by instructions, which may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present application provides a storage medium in which a plurality of instructions are stored, the instructions being loadable by a processor to execute the steps in any one of the image processing methods provided by the embodiments of the present application.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image processing method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any such image processing method, as detailed in the foregoing embodiments and not repeated here.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
In summary, although the present application has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present application, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application, so that the scope of the present application shall be determined by the appended claims.

Claims (8)

1. An image processing method applied to a first electronic device is characterized by comprising the following steps:
starting an image shooting function and acquiring a current shooting image;
in the process of acquiring a current shot image, receiving a three-dimensional image sent by second electronic equipment, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic equipment;
generating a virtual model of the target shooting object according to the three-dimensional image;
projecting the virtual model on a currently shot image to generate a composite image, and displaying the composite image in a preview frame; wherein projecting the virtual model on a currently captured image comprises: detecting the shooting distance of at least one shot object in the current shot image; determining a projection point of the virtual model on a current shot image; projecting the virtual model on a current shot image according to the shooting distance and the projection point; the shooting distance of the subject refers to the depth of field of the subject.
2. The image processing method according to claim 1, wherein the projecting the virtual model on the current captured image according to the capturing distance and the projection point comprises:
determining a target photographic object from the at least one photographed object, and determining a photographic distance of the target photographic object as a first distance; the shooting distance of the target shooting object refers to the depth of field of the target shooting object;
adjusting the size of the virtual model according to the first distance;
and projecting the adjusted virtual model by taking the projection point as a projection center.
3. The image processing method according to claim 2, wherein the projecting the adjusted virtual model with the projection point as a projection center includes:
determining a projection area of the adjusted virtual model on the current shot image by taking the projection point as a projection center;
detecting whether a photographed object overlapping with the projection area exists or not;
if such a shot object exists, taking the shooting distance of the overlapping shot object as a second distance, and acquiring an overlapping area; the shooting distance of the shot object refers to the depth of field of the shot object;
determining a displayable region from the projection region according to the first distance, the second distance and the overlapping region;
and projecting the virtual model corresponding to the displayable area.
4. The image processing method according to claim 3, wherein the determining a displayable region of the projection region from the first distance, the second distance, and the overlap region comprises:
when the first distance is greater than the second distance, taking the area of the projection area other than the overlapping area as the displayable area;
and when the first distance is not greater than the second distance, taking the entire projection area as the displayable area.
5. An image processing apparatus applied to a first electronic device, comprising:
the acquisition module is used for starting an image shooting function and acquiring a current shot image;
the receiving module is used for receiving a three-dimensional image sent by second electronic equipment in the process of acquiring a current shot image, wherein the three-dimensional image is obtained by shooting a target shot object by the second electronic equipment;
the generating module is used for generating a virtual model of the target shooting object according to the three-dimensional image;
the projection module is used for projecting the virtual model on a current shot image to generate a composite image and displaying the composite image in a preview frame;
wherein the projection module comprises:
the detection submodule is used for detecting the shooting distance of at least one shot object in the current shot image; the shooting distance of the shot object refers to the depth of field of the shot object;
the determining submodule is used for determining a projection point of the virtual model on a current shot image;
and the projection submodule is used for projecting the virtual model on the current shot image according to the shooting distance and the projection point.
6. The image processing apparatus according to claim 5, wherein the projection sub-module is specifically configured to:
determining a target photographic object from the at least one photographed object, and determining a photographic distance of the target photographic object as a first distance; the shooting distance of the target shooting object refers to the depth of field of the target shooting object;
adjusting the size of the virtual model according to the first distance;
and projecting the adjusted virtual model by taking the projection point as a projection center.
7. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the image processing method of any of claims 1 to 4.
8. An electronic device comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, the processor being configured to perform the steps of the image processing method according to any one of claims 1 to 4.
CN201810254539.8A 2018-03-26 2018-03-26 Image processing method, image processing device, storage medium and electronic equipment Expired - Fee Related CN108495032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810254539.8A CN108495032B (en) 2018-03-26 2018-03-26 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810254539.8A CN108495032B (en) 2018-03-26 2018-03-26 Image processing method, image processing device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108495032A CN108495032A (en) 2018-09-04
CN108495032B 2020-08-04

Family

ID=63337879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810254539.8A Expired - Fee Related CN108495032B (en) 2018-03-26 2018-03-26 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108495032B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944109B (en) * 2018-09-21 2022-01-14 华为技术有限公司 Photographing method, device and equipment
CN110060355B (en) * 2019-04-29 2023-05-23 北京小米移动软件有限公司 Interface display method, device, equipment and storage medium
CN112511815B (en) * 2019-12-05 2022-01-21 中兴通讯股份有限公司 Image or video generation method and device
CN113012042B (en) * 2019-12-20 2023-01-20 海信集团有限公司 Display device, virtual photo generation method, and storage medium
CN113436301B (en) * 2020-03-20 2024-04-09 华为技术有限公司 Method and device for generating anthropomorphic 3D model
CN111401459A (en) * 2020-03-24 2020-07-10 谷元(上海)文化科技有限责任公司 Animation figure form change vision capture system
CN111901518B (en) * 2020-06-23 2022-05-17 维沃移动通信有限公司 Display method and device and electronic equipment
CN112381949B (en) * 2020-11-03 2024-11-01 恒信东方文化股份有限公司 Virtual three-dimensional image creation method and system thereof
CN112887601B (en) * 2021-01-26 2022-09-16 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN113805701A (en) * 2021-09-16 2021-12-17 北京百度网讯科技有限公司 Method for determining virtual image display range, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1687970A (en) * 2005-04-27 2005-10-26 蔡涛 Interactive controlling method for selecting 3-D image body reconstructive partial body
CN103308452A (en) * 2013-05-27 2013-09-18 中国科学院自动化研究所 Optical projection tomography image capturing method based on depth-of-field fusion
CN105187709A (en) * 2015-07-28 2015-12-23 努比亚技术有限公司 Remote photography implementing method and terminal
CN106296574A (en) * 2016-08-02 2017-01-04 乐视控股(北京)有限公司 3-d photographs generates method and apparatus

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002083285A (en) * 2000-07-07 2002-03-22 Matsushita Electric Ind Co Ltd Image compositing device and image compositing method
JP5235798B2 (en) * 2009-06-22 2013-07-10 富士フイルム株式会社 Imaging apparatus and control method thereof
CN101605211B (en) * 2009-07-23 2011-01-05 杭州镭星科技有限公司 Method for seamlessly composing virtual three-dimensional building and real-scene video of real environment
CN102111561A (en) * 2009-12-25 2011-06-29 新奥特(北京)视频技术有限公司 Three-dimensional model projection method for simulating real scenes and device adopting same
CN102110299A (en) * 2009-12-25 2011-06-29 新奥特(北京)视频技术有限公司 Method and device for projecting application distortion in three-dimensional model
US9269219B2 (en) * 2010-11-15 2016-02-23 Bally Gaming, Inc. System and method for augmented reality with complex augmented reality video image tags
CN102831401B (en) * 2012-08-03 2016-01-13 樊晓东 To following the tracks of without specific markers target object, three-dimensional overlay and mutual method and system
CN103167232B (en) * 2012-10-26 2016-04-20 苏州比特速浪电子科技有限公司 Camera head, image synthesizer and image processing method
CN104143212A (en) * 2014-07-02 2014-11-12 惠州Tcl移动通信有限公司 Reality augmenting method and system based on wearable device
CN107092774B (en) * 2017-03-10 2020-03-13 昆山华大智造云影医疗科技有限公司 Method and device for providing reference information
CN107451953A (en) * 2017-08-07 2017-12-08 珠海格力电器股份有限公司 Group photo generation method and device and electronic equipment


Also Published As

Publication number Publication date
CN108495032A (en) 2018-09-04

Similar Documents

Publication Publication Date Title
CN108495032B (en) Image processing method, image processing device, storage medium and electronic equipment
RU2715797C1 (en) Method and apparatus for synthesis of virtual reality objects
CN109246466B (en) Video playing method and device and electronic equipment
CN108632543B (en) Image display method, image display device, storage medium and electronic equipment
CN111065001B (en) Video production method, device, equipment and storage medium
CN105554372B (en) Shooting method and device
CN110865754A (en) Information display method and device and terminal
KR20170040385A (en) Method and terminal for acquiring panoramic image
CN109803165A (en) Method, apparatus, terminal and the storage medium of video processing
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111586431B (en) Method, device and equipment for live broadcast processing and storage medium
CN110196673B (en) Picture interaction method, device, terminal and storage medium
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN108924422B (en) Panoramic photographing method and mobile terminal
CN113763228A (en) Image processing method, image processing device, electronic equipment and storage medium
CN109618192B (en) Method, device, system and storage medium for playing video
WO2022237839A1 (en) Photographing method and apparatus, and electronic device
CN114009003A (en) Image acquisition method, device, equipment and storage medium
CN111447365B (en) Shooting method and electronic equipment
CN110798621A (en) Image processing method and electronic equipment
CN110086998B (en) Shooting method and terminal
CN108881721A (en) A kind of display methods and terminal
CN113485596A (en) Virtual model processing method and device, electronic equipment and storage medium
CN109005337A (en) A kind of photographic method and terminal
KR102557592B1 (en) Method and apparatus for displaying an image, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200804