CN105678696B - Information processing method and electronic device - Google Patents

Information processing method and electronic device

Info

Publication number
CN105678696B
Authority
CN
China
Prior art keywords
picture
foreground image
unit
depth information
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410670018.2A
Other languages
Chinese (zh)
Other versions
CN105678696A (en)
Inventor
严琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410670018.2A priority Critical patent/CN105678696B/en
Priority to US14/658,756 priority patent/US9607394B2/en
Publication of CN105678696A publication Critical patent/CN105678696A/en
Application granted granted Critical
Publication of CN105678696B publication Critical patent/CN105678696B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method, which includes: acquiring a first operation for selecting a foreground image from a first picture; determining, based on the first operation, the foreground image located in the first picture; acquiring state information of the foreground image; acquiring a second operation for placing the foreground image in a second picture; determining, based on the second operation, the second picture as a background; acquiring state information of the second picture; determining, according to the state information of the foreground image and the state information of the second picture, a target size to be occupied by the foreground image in the second picture; scaling the foreground image to the target size; and displaying the foreground image in the second picture at the target size for presentation to the user. The invention also discloses an electronic device.

Description

Information processing method and electronic device
Technical Field
The present invention relates to electronic technologies, and in particular, to an information processing method and an electronic device.
Background
Background replacement is a common function in image editing applications. Background replacement is the following process: an object O in one picture A is placed as a foreground into another picture B, where the object O is referred to as a foreground object or a foreground image. During background replacement, the foreground object O typically needs to be scaled, generally for two reasons: first, scaling matches the visual scale of the foreground object O to that of picture B; second, after the foreground object O is moved into picture B, it needs to be scaled to fit the new background (i.e., picture B). In existing applications this scaling is performed manually, which makes the background replacement function difficult to learn and hard for non-professional editors to master.
Disclosure of Invention
In view of this, embodiments of the present invention provide an information processing method and an electronic device that solve at least one problem in the prior art by scaling the foreground image automatically, thereby improving the user experience.
The technical solutions of the embodiments of the present invention are implemented as follows:
in a first aspect, an embodiment of the present invention provides an information processing method, where the method includes:
acquiring a first operation, wherein the first operation is used for selecting a foreground image from a first picture;
determining a foreground image located in a first picture based on the first operation;
acquiring state information of the foreground image;
acquiring a second operation, wherein the second operation is used for placing the foreground image in a second picture;
determining a second picture as a background based on the second operation;
acquiring state information of the second picture;
determining the target size occupied by the foreground image in the second picture according to the state information of the foreground image and the state information of the second picture;
scaling the foreground image to the target size;
and displaying the foreground image in the second picture in the target size for presentation to a user.
In a second aspect, an embodiment of the present invention provides an electronic device, which includes a first obtaining unit, a first determining unit, a second obtaining unit, a third obtaining unit, a second determining unit, a fourth obtaining unit, a third determining unit, a scaling unit, and a display unit, wherein:
the first obtaining unit is used for obtaining a first operation, and the first operation is used for selecting a foreground image from a first picture;
the first determining unit is used for determining a foreground image in a first picture based on the first operation;
the second acquiring unit is used for acquiring the state information of the foreground image;
the third obtaining unit is configured to obtain a second operation, where the second operation is used to place the foreground image in a second picture;
the second determining unit is configured to determine a second picture as a background based on the second operation;
the fourth obtaining unit is configured to obtain state information of the second picture;
the third determining unit is used for determining the target size occupied by the foreground image in the second picture according to the state information of the foreground image and the state information of the second picture;
the scaling unit is used for scaling the foreground image into the target size;
and the display unit is used for displaying the foreground image in the second picture in the target size and presenting the foreground image to a user.
The embodiments of the invention provide an information processing method and an electronic device, wherein the information processing method includes: acquiring a first operation for selecting a foreground image from a first picture; determining, based on the first operation, the foreground image located in the first picture; acquiring state information of the foreground image; acquiring a second operation for placing the foreground image in a second picture; determining, based on the second operation, the second picture as a background; acquiring state information of the second picture; determining, according to the state information of the foreground image and the state information of the second picture, a target size to be occupied by the foreground image in the second picture; scaling the foreground image to the target size; and displaying the foreground image in the second picture at the target size for presentation to the user. In this way, the foreground image can be scaled automatically, which improves the user experience.
Drawings
FIG. 1-1 is a schematic flow chart illustrating an implementation of an information processing method according to an embodiment of the present invention;
FIGS. 1-2 to 1-5 are schematic diagrams illustrating a scenario in which the first operation and the second operation are consecutive actions according to the first embodiment of the present invention;
FIGS. 1-6 to 1-10 are schematic diagrams illustrating a scenario in which the first operation and the second operation are two separate actions according to the first embodiment of the present invention;
FIG. 2 is a schematic flow chart of an implementation of an information processing method according to the second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to the third embodiment of the present invention.
Detailed Description
The embodiments of the present invention provide an information processing method that belongs to the field of image processing and is applied to an electronic device. The electronic device includes terminals such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a navigator, a personal digital assistant (PDA) and an electronic reader, and the electronic device can run application programs (hereinafter referred to as applications), which at least include an application for editing pictures. The electronic device of the embodiments of the present invention may further include an image acquisition unit such as a camera; the camera may have the imaging function of an ordinary camera and may additionally be configured to acquire depth information of the photographed object. In a specific implementation, the camera may be an array camera, which is composed of a plurality of optical systems and a plurality of image sensors. An array camera can acquire two or more pictures at the same time, synthesize the data into one image through an algorithm, and estimate depth information by means of the algorithm. The multiple sensors in the array camera also help to create various special effects: for example, any object in the scene can be selected for focusing, any object can be refocused while or after taking a picture, and the array camera can also focus on a plurality of objects at the same time.
The information processing method provided by the embodiments of the present invention may be implemented by a processor in the electronic device calling program code, or may be embodied as an application program: the manufacturer of the electronic device may preset it in the electronic device for the user, or the user may download the application, such as a picture editing application, from an application store. In either case, the information processing method may be stored in a computer storage medium in the form of program code. It should be noted that, in addition to the processor and the storage medium, the electronic device may include the array camera and the like described above.
The technical solution of the present invention is further elaborated below with reference to the drawings and the specific embodiments.
Fig. 1-1 is a schematic flow chart of an implementation of an information processing method according to an embodiment of the present invention, as shown in fig. 1-1, the method includes:
step 101, acquiring a first operation, wherein the first operation is used for selecting a foreground image from a first picture; determining a foreground image located in a first picture based on the first operation;
Here, the first operation may be a preset action. In implementing the method, the first operation may be a preset action in a picture editing application: for example, the picture editing application is started, the first picture is opened with the picture editing application, and an object is then selected from the first picture with a tool provided by the picture editing application, such as a selection tool or a cropping tool, and is taken as the foreground image. A simpler operation mode may also be provided: for example, if the display unit of the electronic device is a touch display screen, the user may long-press an object in the first picture to take it as the foreground object. Here, a long press means that the operation body stays on the object for longer than a certain threshold; for example, staying for more than 2 seconds is regarded as a long-press operation. The operation body may be a finger or a stylus, and a long press with a finger may be a single-finger long press or a two-finger long press.
Here, the first picture is composed of pixels, and each pixel may include the following information, for example: color information and depth information. The color information may be represented by gray scale information or color information, where the gray scale information represents only brightness of color and the color information represents saturation, hue and brightness.
Step 102, acquiring state information of the foreground image;
Here, the state information includes at least the depth information of an object and the size the object occupies in the picture, i.e., the size of the area occupied by the object; in general, this size may be expressed as a number of pixels. The state information of the foreground image accordingly includes the depth information of the foreground image and the size occupied by the foreground image. Those skilled in the art can obtain the state information of the foreground image by using various existing techniques, which are not described in detail here.
Step 103, acquiring a second operation, wherein the second operation is used for placing the foreground image in a second picture; determining a second picture as a background based on the second operation;
here, the first picture is different from the second picture, and the difference mainly means that the background of the first picture is different from that of the second picture.
Step 104, acquiring state information of the second picture;
Here, the state information of the second picture includes the depth information of each object in the second picture. When the second picture includes a plurality of objects, the depth information of the objects may be the same or may be different. An object in the second picture may be a part of a person, such as a face or a limb; of course, it may also be scenery such as a big tree, or a still object such as a table or a wall.
Step 105, determining a target size occupied by the foreground image in the second picture according to the state information of the foreground image and the state information of the second picture;
step 106, scaling the foreground image to the target size; and displaying the foreground image in the second picture in the target size for presentation to a user.
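As an illustration only, the following Python sketch combines steps 102, 104 and 105; the per-pixel depth maps, the Boolean foreground mask, the use of the mean foreground depth and the concrete size rule (the depth-ratio relation detailed in the second embodiment) are assumptions made for this sketch, not requirements of the method.

```python
import numpy as np

def automatic_foreground_size(first_depth_map, foreground_mask,
                              second_depth_map, target_position):
    """Sketch of steps 102, 104 and 105 under the assumed data layout.

    first_depth_map / second_depth_map -- H x W arrays of per-pixel depth
    foreground_mask  -- boolean H x W mask of the foreground chosen by the first operation
    target_position  -- (row, col) in the second picture chosen by the second operation
    """
    # Step 102: state information of the foreground image -- the size it
    # occupies (number of pixels) and its depth in the first picture.
    first_size = int(foreground_mask.sum())
    first_depth = float(first_depth_map[foreground_mask].mean())

    # Step 104: state information of the second picture -- the depth at the
    # position where the foreground image is to be placed.
    second_depth = float(second_depth_map[target_position])

    # Step 105: target size occupied by the foreground image in the second
    # picture, here using the depth-ratio rule of the second embodiment.
    return first_size * first_depth / second_depth
```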
In an embodiment of the present invention, when the display unit of the electronic device is a touch display screen, the obtaining of the first operation may include step A1 and step A2, wherein:
step A1, detecting whether touch operation meeting a first predetermined condition occurs;
step A2, when the operation satisfies the first predetermined condition, determining to acquire a first operation.
Here, whether the touch operation satisfies the first predetermined condition may be determined by, but is not limited to, the following forms: whether the touch operation is a single-tap touch operation; or whether the touch operation is a double-tap touch operation.
Here, when the display unit of the electronic device is a non-touch display screen, that is, the electronic device adopts a conventional key input method, the obtaining of the first operation includes steps B1 and B2, where:
step B1, detecting whether a key operation satisfying a second predetermined condition occurs;
and step B2, when the key operation satisfies the second predetermined condition, determining to acquire the first operation.
Here, whether the key operation satisfies the second predetermined condition may be determined by, but is not limited to, the following forms: whether the key operation is a long press of a certain key, for example, pressing a function key such as the HOME key for more than 2 seconds; or whether the key operation is a combination of certain keys, such as a combination of a function key such as the HOME key and a volume key.
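As a rough illustration of steps A1-A2 and B1-B2, the sketch below classifies an input event as the first operation; the event structure, the key names and the 2-second threshold are assumptions used only for this example.

```python
LONG_PRESS_SECONDS = 2.0  # example threshold taken from the description above

def is_first_operation(event, has_touch_screen):
    """Return True if the input event counts as acquiring the first operation.

    event -- dict with 'type' ('tap', 'double_tap' or 'key') and, for key
             events, 'keys' (a set of key names) and 'duration' in seconds.
    """
    if has_touch_screen:
        # Steps A1-A2: a touch operation satisfying the first predetermined
        # condition, e.g. a single-tap or a double-tap touch operation.
        return event['type'] in ('tap', 'double_tap')

    # Steps B1-B2: a key operation satisfying the second predetermined
    # condition, e.g. long-pressing the HOME key or pressing the HOME key
    # together with a volume key.
    if event['type'] != 'key':
        return False
    long_home_press = (event['keys'] == {'HOME'}
                       and event['duration'] >= LONG_PRESS_SECONDS)
    home_plus_volume = 'HOME' in event['keys'] and (
        'VOLUME_UP' in event['keys'] or 'VOLUME_DOWN' in event['keys'])
    return long_home_press or home_plus_volume
```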
In the embodiment of the present invention, the first operation and the second operation are consecutive actions; or, the first operation and the second operation are two actions separated by a first time period. When the first operation and the second operation are two actions separated by a first time period, the manner of acquiring the second operation is similar to the manner of acquiring the first operation, and is therefore not described again.
The following describes a scenario in which the technical solution provided by the embodiment of the present invention can be used, with the first operation and the second operation being consecutive actions: starting a picture editing application, as shown in fig. 1-2, opening two pictures 11 and 14 by using the picture editing application, wherein the first picture 11 comprises a book 13, and a pen 12 is placed on the book 13; the second picture 14 comprises a table 15 on which a stack of books 16 is placed. Then, as shown in fig. 1-3, the user holds down the pen 12 in the first picture 11, i.e. selects the pen 12 as the foreground image, and then drags to drop the pen 12 on the table 15 of the second picture 14. As shown in fig. 1-4, the user releases the pen 12 in the appropriate position; as shown in fig. 1-5, the pen 12 is reduced in size to the pen 17, and the user can perceive that the application has reduced the size of the pen 12 in the original first picture to the size of the pen 17 to better match the background in the second picture. It should be noted that, during the process in which the user presses the pen 12 for a long time, the application program may obtain the state information of the pen 12, and when the user releases the pen 12, the application program may also obtain the state information of the second picture 14, so that the application program may determine the target size occupied by the pen 12 in the second picture 14 according to the state information of the pen 12 and the state information of the second picture 14; the pen 12 is then scaled to the target size (i.e. the size of the pen 17); the pen 12 is finally displayed in the second picture 14 in the size of the pen 17 for presentation to the user. In this embodiment, the user performs the first operation and the second operation by a series of operations of long-pressing the selected foreground image and dragging it to the target position.
The following describes a scenario in which the technical solution provided by the embodiment of the present invention can be used, with the first operation and the second operation as two actions with a first time period in between: starting a picture editing application, as shown in fig. 1-6, opening a first picture 11 by using the picture editing application, wherein the first picture 11 includes a book 13, and a pen 12 is placed on the book 13. Then, as shown in fig. 1-7, the user selects the pen 12 in the first picture 11 (the pen 12 in fig. 1-7 is shown in dashed lines to show that the pen 12 is selected by the user) by means of a selection tool on the application, i.e. selects the pen 12 as the foreground image. Then, as shown in fig. 1-8, opening a second picture 14 by using the picture editing application, wherein the second picture 14 includes a table 15 on which a stack of books 16 is placed; the user then places the originally selected pen 12 on the table 15 of the second picture 14 by selecting the appropriate position with a mouse or hand, as shown in fig. 1-9. As shown in fig. 1-10, the pen 12 is reduced in size to the pen 17, and the user can perceive that the application has reduced the size of the pen 12 in the original first picture to the size of the pen 17 to better match the background in the second picture. It should be noted that, in the process of selecting the pen 12 by the user, the application program may obtain the state information of the pen 12, and when the user releases the pen 12, the application program may also obtain the state information of the second picture 14, so that the application program may determine the target size occupied by the pen 12 in the second picture 14 according to the state information of the pen 12 and the state information of the second picture 14; the pen 12 is then scaled to the target size (i.e. the size of the pen 17); the pen 12 is finally displayed in the second picture 14 in the size of the pen 17 for presentation to the user. In this embodiment, the selection of the foreground image by the user and the determination of the second picture are performed by two actions with a certain time interval.
The embodiment of the invention provides an information processing method and an electronic device, wherein the information processing method includes: acquiring a first operation and determining, based on the first operation, a foreground image located in a first picture; acquiring state information of the foreground image; acquiring a second operation and determining, based on the second operation, a second picture as a background; acquiring state information of the second picture; determining, according to the state information of the foreground image and the state information of the second picture, the target size occupied by the foreground image in the second picture; scaling the foreground image to the target size; and displaying the foreground image in the second picture at the target size for presentation to the user. In this way, the foreground image can be scaled automatically, which improves the user experience.
Embodiment Two
Based on the first embodiment, an embodiment of the present invention provides an information processing method applied to an electronic device. FIG. 2 is a schematic flow chart illustrating an implementation of the information processing method of the second embodiment. As shown in FIG. 2, the method includes:
step 201, acquiring a first operation, wherein the first operation is used for selecting a foreground image from a first picture; determining a foreground image located in a first picture based on the first operation;
Here, the first operation may be a preset action. In implementing the method, the first operation may be a preset action in a picture editing application: for example, the picture editing application is started, the first picture is opened with the picture editing application, and an object is then selected from the first picture with a tool provided by the picture editing application, such as a selection tool or a cropping tool, and is taken as the foreground image. A simpler operation mode may also be provided: for example, if the display unit of the electronic device is a touch display screen, the user may long-press an object in the first picture to take it as the foreground object. Here, a long press means that the operation body stays on the object for longer than a certain threshold; for example, staying for more than 2 seconds is regarded as a long-press operation. The operation body may be a finger or a stylus, and a long press with a finger may be a single-finger long press or a two-finger long press.
Here, the first picture is composed of pixels, and each pixel may include the following information, for example: color information and depth information. The color information may be represented by gray scale information or color information, where the gray scale information represents only brightness of color and the color information represents saturation, hue and brightness.
Step 202, acquiring a first size occupied by the foreground image in the first picture and first depth information of the foreground image in the first picture;
step 203, acquiring a second operation, wherein the second operation is used for placing the foreground image in a second picture; determining a second picture as a background based on the second operation;
here, the first picture is different from the second picture, and the difference mainly means that the background of the first picture is different from that of the second picture.
Step 204, acquiring second depth information of the second picture;
step 205, determining a target size occupied by the foreground image in the second picture according to the first size, the first depth information and the second depth information;
step 206, scaling the foreground image to the target size; and displaying the foreground image in the second picture in the target size for presentation to a user.
In this embodiment of the present invention, in step 205, the target size C' occupied by the foreground image in the second picture is determined according to the first size, the first depth information and the second depth information, and specifically may be calculated by the following formula (1):
C' = C × d' / d (1)
In formula (1), C' is the target size, C is the first size, d' is the first depth information, and d is the second depth information.
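To illustrate formula (1), a minimal sketch with assumed example values follows; the function name and the concrete numbers are assumptions made only for the illustration.

```python
def target_size_formula_1(first_size, first_depth, second_depth):
    """Formula (1): C' = C * d' / d."""
    return first_size * first_depth / second_depth

# Assumed example: a foreground occupying 10,000 pixels at a depth of 1.0 m,
# placed at a position in the second picture whose depth is 2.0 m, occupies
# about 5,000 pixels -- the farther it is placed, the smaller it becomes.
print(target_size_formula_1(10_000, 1.0, 2.0))  # 5000.0
```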
In this embodiment of the present invention, the obtaining a first size occupied by the foreground image in the first picture includes:
acquiring the total number of pixels included in the foreground image, specifically including:
judging whether the pixels in the first picture are in the foreground object one by one to obtain a first judgment result;
when the first judgment result shows that the pixel is in the foreground object, adding 1 to the number of pixels included in the foreground object;
and when the first judgment result shows that the pixel is not in the foreground object, judging whether the next pixel in the first picture is in the foreground object.
In the method provided above for obtaining the total number of pixels included in the foreground image, each pixel in the first picture is checked one by one to determine whether it is in the foreground image: if the pixel is determined to be in the foreground image, the total number of pixels of the foreground image is increased by 1; otherwise, the next pixel in the first picture is checked. The total number of pixels of the foreground image obtained after the last pixel has been checked is C.
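The pixel-by-pixel check described above can be sketched as follows; the Boolean-mask representation of the foreground selection is an assumption made for the example.

```python
def count_foreground_pixels(foreground_mask):
    """Return the first size C: the total number of pixels in the foreground image.

    foreground_mask -- 2-D list (rows of booleans), True where the pixel
                       belongs to the foreground object.
    """
    total = 0
    for row in foreground_mask:
        for in_foreground in row:        # judge the pixels one by one
            if in_foreground:            # first judgment result: in the foreground object
                total += 1               # add 1 to the number of foreground pixels
            # otherwise simply continue with the next pixel
    return total

# Example: a 3 x 4 mask in which 5 pixels belong to the foreground object.
mask = [[False, True,  True,  False],
        [False, True,  True,  False],
        [False, False, True,  False]]
print(count_foreground_pixels(mask))  # 5
```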
In the embodiment of the invention, the state information further comprises a focal length of an image acquisition unit for shooting pictures;
correspondingly, step 205, determining a target size occupied by the foreground image in the second picture according to the first size, the first depth information and the second depth information, includes:
determining the target size occupied by the foreground image in the second picture according to the first size, the first depth information, the second depth information, the focal length of the image acquisition unit used for shooting the first picture, and the focal length of the image acquisition unit used for shooting the second picture, which specifically may be calculated by the following formula (2):
C' = C × (f' × d') / (f × d) (2)
In formula (2), C' is the target size, C is the first size, d' is the first depth information, d is the second depth information, f' is the focal length of the image acquisition unit used for shooting the second picture, and f is the focal length of the image acquisition unit used for shooting the first picture, wherein the image acquisition unit includes a camera. The focal length is a fixed internal parameter of the image acquisition unit and can be obtained by a standard pattern calibration method.
With respect to formula (2), when the first picture and the second picture are taken by the same image acquisition unit, f = f', and formula (2) may be transformed into formula (3):
C' = C × d' / d (3)
the first depth information and the second depth information in the formula (1) and the formula (2) can be obtained from corresponding image acquisition units, and each pixel point has corresponding depth information.
In the embodiment of the present invention, the above formula (1) and formula (2) are derived through the following steps; how formula (2) is derived is described below:
according to the imaging model of the image acquisition unit, the size of the foreground object in a picture satisfies C = f × S / (d' − f) for the first picture and C' = f' × S / (d − f') for the second picture, where S is the actual size of the foreground object; dividing the two expressions gives C' = C × [f' × (d' − f)] / [f × (d − f')]. As is well known, the focal length f of the image acquisition unit used to take the first picture is much smaller than d', and the focal length f' of the image acquisition unit used to take the second picture is much smaller than d, so that d' − f ≈ d' and d − f' ≈ d, from which formula (2) can be derived: C' = C × (f' × d') / (f × d). When the two pictures are taken by the same image acquisition unit, f = f', and formula (1) can be derived.
Embodiment Three
Based on the foregoing method embodiments, an embodiment of the present invention provides an electronic device, where the functions implemented by the units included in the electronic device described below may be implemented by a processor in the electronic device calling program code. FIG. 3 is a schematic diagram of the composition structure of the electronic device of the third embodiment. As shown in FIG. 3, the electronic device 300 includes a first obtaining unit 301, a first determining unit 302, a second obtaining unit 303, a third obtaining unit 304, a second determining unit 305, a fourth obtaining unit 306, a third determining unit 307, a scaling unit 308, and a display unit 309, wherein:
the first obtaining unit 301 is configured to obtain a first operation, where the first operation is used to select a foreground image from a first picture;
the first determining unit 302 is configured to determine, based on the first operation, a foreground image located in a first picture;
the second obtaining unit 303 is configured to obtain state information of the foreground image;
the third obtaining unit 304 is configured to obtain a second operation, where the second operation is used to place the foreground image in a second picture;
the second determining unit 305 is configured to determine a second picture as a background based on the second operation;
the fourth obtaining unit 306 is configured to obtain state information of the second picture;
the third determining unit 307 is configured to determine, according to the state information of the foreground image and the state information of the second picture, a target size occupied by the foreground image in the second picture;
the scaling unit 308 is configured to scale the foreground image to the target size;
the display unit 309 is configured to display the foreground image in the second picture with the target size, and present the foreground image to a user.
Here, the first operation may be a preset action. In implementing the method, the first operation may be a preset action in a picture editing application: for example, the picture editing application is started, the first picture is opened with the picture editing application, and an object is then selected from the first picture with a tool provided by the picture editing application, such as a selection tool or a cropping tool, and is taken as the foreground image. A simpler operation mode may also be provided: for example, if the display unit of the electronic device is a touch display screen, the user may long-press an object in the first picture to take it as the foreground object. Here, a long press means that the operation body stays on the object for longer than a certain threshold; for example, staying for more than 2 seconds is regarded as a long-press operation. The operation body may be a finger or a stylus, and a long press with a finger may be a single-finger long press or a two-finger long press.
Here, the first picture is composed of pixels, and each pixel may include the following information, for example: color information and depth information. The color information may be represented by gray scale information or color information, where the gray scale information represents only brightness of color and the color information represents saturation, hue and brightness.
Here, the state information includes at least the depth information of an object and the size the object occupies in the picture, i.e., the size of the area occupied by the object; in general, this size may be expressed as a number of pixels. The state information of the foreground image accordingly includes the depth information of the foreground image and the size occupied by the foreground image. Those skilled in the art can obtain the state information of the foreground image by using various existing techniques, which are not described in detail here.
Here, the first picture is different from the second picture, and the difference mainly means that the background of the first picture is different from that of the second picture.
Here, the state information of the second picture includes the depth information of each object in the second picture. When the second picture includes a plurality of objects, the depth information of the objects may be the same or may be different. An object in the second picture may be a part of a person, such as a face or a limb; of course, it may also be scenery such as a big tree, or a still object such as a table or a wall.
In the embodiment of the present invention, the first operation and the second operation are consecutive actions; or, the first operation and the second operation are two actions separated by a first time period. When the first operation and the second operation are two actions separated by a first time period, the manner of acquiring the second operation is similar to the manner of acquiring the first operation, and is therefore not described again.
The following describes a scenario in which the technical solution provided by the embodiment of the present invention can be used, with the first operation and the second operation being consecutive actions: starting a picture editing application, as shown in fig. 1-2, opening two pictures 11 and 14 by using the picture editing application, wherein the first picture 11 comprises a book 13, and a pen 12 is placed on the book 13; the second picture 14 comprises a table 15 on which a stack of books 16 is placed. Then, as shown in fig. 1-3, the user holds down the pen 12 in the first picture 11, i.e. selects the pen 12 as the foreground image, and then drags to drop the pen 12 on the table 15 of the second picture 14. As shown in fig. 1-4, the user releases the pen 12 in the appropriate position; as shown in fig. 1-5, the pen 12 is reduced in size to the pen 17, and the user can perceive that the application has reduced the size of the pen 12 in the original first picture to the size of the pen 17 to better match the background in the second picture. It should be noted that, during the process in which the user presses the pen 12 for a long time, the application program may obtain the state information of the pen 12, and when the user releases the pen 12, the application program may also obtain the state information of the second picture 14, so that the application program may determine the target size occupied by the pen 12 in the second picture 14 according to the state information of the pen 12 and the state information of the second picture 14; the pen 12 is then scaled to the target size (i.e. the size of the pen 17); the pen 12 is finally displayed in the second picture 14 in the size of the pen 17 for presentation to the user. In this embodiment, the user performs the first operation and the second operation by a series of operations of long-pressing the selected foreground image and dragging it to the target position.
The following describes a scenario in which the technical solution provided by the embodiment of the present invention can be used, with the first operation and the second operation as two actions with a first time period in between: starting a picture editing application, as shown in fig. 1-6, opening a first picture 11 by using the picture editing application, wherein the first picture 11 includes a book 13, and a pen 12 is placed on the book 13. Then, as shown in fig. 1-7, the user selects the pen 12 in the first picture 11 (the pen 12 in fig. 1-7 is shown in dashed lines to show that the pen 12 is selected by the user) by means of a selection tool on the application, i.e. selects the pen 12 as the foreground image. Then, as shown in fig. 1-8, opening a second picture 14 by using the picture editing application, wherein the second picture 14 includes a table 15 on which a stack of books 16 is placed; the user then places the originally selected pen 12 on the table 15 of the second picture 14 by selecting the appropriate position with a mouse or hand, as shown in fig. 1-9. As shown in fig. 1-10, the pen 12 is reduced in size to the pen 17, and the user may perceive that the application has reduced the size of the pen 12 in the original first picture to the size of the pen 17 to better match the background in the second picture. It should be noted that, in the process of selecting the pen 12 by the user, the application program may obtain the state information of the pen 12, and when the user releases the pen 12, the application program may also obtain the state information of the second picture 14, so that the application program may determine the target size occupied by the pen 12 in the second picture 14 according to the state information of the pen 12 and the state information of the second picture 14; the pen 12 is then scaled to the target size (i.e., the size of the pen 17); the pen 12 is finally displayed in the second picture 14 in the size of the pen 17 for presentation to the user. In this embodiment, the selection of the foreground image by the user and the determination of the second picture are performed by two actions with a certain time interval.
The embodiment of the invention provides an information processing method and an electronic device, wherein: the first obtaining unit 301 acquires a first operation, and the first determining unit 302 determines, based on the first operation, a foreground image located in a first picture; the second obtaining unit 303 acquires state information of the foreground image; the third obtaining unit 304 acquires a second operation, based on which the second determining unit 305 determines a second picture as a background; the fourth obtaining unit 306 obtains state information of the second picture; the third determining unit 307 determines, according to the state information of the foreground image and the state information of the second picture, the target size occupied by the foreground image in the second picture; the scaling unit 308 scales the foreground image to the target size; and the display unit 309 displays the foreground image in the second picture at the target size and presents it to the user. In this way, the foreground image can be scaled automatically, which improves the user experience.
Embodiment Four
Based on the third embodiment, an embodiment of the present invention provides an electronic device, where the functions implemented by the units included in the electronic device described below may be implemented by a processor in the electronic device calling program code. The electronic device includes a first obtaining unit, a first determining unit, a second acquiring unit, a third obtaining unit, a second determining unit, a fourth obtaining unit, a third determining unit, a scaling unit and a display unit, wherein:
the first obtaining unit is used for obtaining a first operation, and the first operation is used for selecting a foreground image from a first picture;
Here, the first operation may be a preset action. In implementing the method, the first operation may be a preset action in a picture editing application: for example, the picture editing application is started, the first picture is opened with the picture editing application, and an object is then selected from the first picture with a tool provided by the picture editing application, such as a selection tool or a cropping tool, and is taken as the foreground image. A simpler operation mode may also be provided: for example, if the display unit of the electronic device is a touch display screen, the user may long-press an object in the first picture to take it as the foreground object. Here, a long press means that the operation body stays on the object for longer than a certain threshold; for example, staying for more than 2 seconds is regarded as a long-press operation. The operation body may be a finger or a stylus, and a long press with a finger may be a single-finger long press or a two-finger long press.
The first determining unit is used for determining a foreground image in a first picture based on the first operation;
here, the first picture is composed of pixels, and each pixel may include the following information, for example: color information and depth information. The color information may be represented by gray scale information or color information, where the gray scale information represents only brightness of color and the color information represents saturation, hue and brightness.
The second acquiring unit is used for acquiring a first size occupied by the foreground image in the first picture and first depth information of the foreground image in the first picture;
the third obtaining unit is configured to obtain a second operation, where the second operation is used to place the foreground image in a second picture;
the second determining unit is configured to determine a second picture as a background based on the second operation;
here, the first picture is different from the second picture, and the difference mainly means that the background of the first picture is different from that of the second picture.
The fourth obtaining unit is configured to obtain second depth information of the second picture;
the third determining unit is configured to determine a target size occupied by the foreground image in the second picture according to the first size, the first depth information, and the second depth information;
the scaling unit is used for scaling the foreground image into the target size;
and the display unit is used for displaying the foreground image in the second picture in the target size and presenting the foreground image to a user.
In this embodiment of the present invention, the third determining unit determines, according to the first size, the first depth information and the second depth information, the target size C' occupied by the foreground image in the second picture, which specifically may be calculated by the following formula (1):
C' = C × d' / d (1)
In formula (1), C' is the target size, C is the first size, d' is the first depth information, and d is the second depth information.
In this embodiment of the present invention, the acquiring, by the second acquiring unit, the first size occupied by the foreground image in the first picture includes:
acquiring the total number of pixels included in the foreground image, specifically including:
judging whether the pixels in the first picture are in the foreground object one by one to obtain a first judgment result;
when the first judgment result shows that the pixel is in the foreground object, adding 1 to the number of pixels included in the foreground object;
and when the first judgment result shows that the pixel is not in the foreground object, judging whether the next pixel in the first picture is in the foreground object.
In the method provided above for obtaining the total number of pixels included in the foreground image, each pixel in the first picture is checked one by one to determine whether it is in the foreground image: if the pixel is determined to be in the foreground image, the total number of pixels of the foreground image is increased by 1; otherwise, the next pixel in the first picture is checked. The total number of pixels of the foreground image obtained after the last pixel has been checked is C.
In the embodiment of the invention, the state information further comprises a focal length of an image acquisition unit for shooting pictures;
correspondingly, the determining, by the third determining unit, of the target size occupied by the foreground image in the second picture according to the first size, the first depth information and the second depth information includes:
determining the target size occupied by the foreground image in the second picture according to the first size, the first depth information, the second depth information, the focal length of the image acquisition unit used for shooting the first picture, and the focal length of the image acquisition unit used for shooting the second picture, which specifically may be calculated by the following formula (2):
C' = C × (f' × d') / (f × d) (2)
In formula (2), C' is the target size, C is the first size, d' is the first depth information, d is the second depth information, f' is the focal length of the image acquisition unit used for shooting the second picture, and f is the focal length of the image acquisition unit used for shooting the first picture, wherein the image acquisition unit includes a camera. The focal length is a fixed internal parameter of the image acquisition unit and can be obtained by a standard pattern calibration method.
With respect to formula (2), when the first picture and the second picture are taken by the same image acquisition unit, f = f', and formula (2) may be transformed into formula (3):
C' = C × d' / d (3)
the first depth information and the second depth information in the formula (1) and the formula (2) can be obtained from corresponding image acquisition units, and each pixel point has corresponding depth information.
In the embodiment of the present invention, the above formula (1) and formula (2) are derived through the following steps; how formula (2) is derived is described below:
according to the imaging model of the image acquisition unit, the size of the foreground object in a picture satisfies C = f × S / (d' − f) for the first picture and C' = f' × S / (d − f') for the second picture, where S is the actual size of the foreground object; dividing the two expressions gives C' = C × [f' × (d' − f)] / [f × (d − f')]. As is well known, the focal length f of the image acquisition unit used to take the first picture is much smaller than d', and the focal length f' of the image acquisition unit used to take the second picture is much smaller than d, so that d' − f ≈ d' and d − f' ≈ d, from which formula (2) can be derived: C' = C × (f' × d') / (f × d). When the two pictures are taken by the same image acquisition unit, f = f', and formula (1) can be derived.
in the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An information processing method, characterized in that the method comprises:
acquiring a first operation, wherein the first operation is used for selecting a foreground image from a first picture;
determining a foreground image located in a first picture based on the first operation;
acquiring state information of the foreground image; wherein the state information of the foreground image comprises: a first size occupied by the foreground image in the first picture and first depth information of the foreground image in the first picture;
acquiring a second operation, wherein the second operation is used for placing the foreground image in a second picture;
determining a second picture as a background based on the second operation;
acquiring state information of the second picture; wherein the state information of the second picture comprises: second depth information of the second picture;
determining the target size occupied by the foreground image in the second picture according to the state information of the foreground image and the state information of the second picture; wherein the target size is proportional to the first size;
scaling the foreground image to the target size;
and displaying the foreground image in the second picture in the target size for presentation to a user.
2. The method of claim 1, wherein the first operation and the second operation are consecutive actions; or,
the first operation and the second operation are two actions separated by a first time period.
3. The method of claim 1, wherein the state information comprises depth information of the object in the picture and a size occupied by the object in the picture;
correspondingly, acquiring the state information of the foreground image comprises the following steps:
acquiring a first size occupied by the foreground image in the first picture and first depth information of the foreground image in the first picture;
correspondingly, acquiring the state information of the second picture, including:
acquiring second depth information of the second picture;
correspondingly, determining the target size occupied by the foreground image in the second picture according to the state information of the foreground image and the state information of the second picture comprises:
and determining a target size occupied by the foreground image in the second picture according to the first size, the first depth information and the second depth information.
4. The method of claim 3, wherein the state information further comprises a focal length of an image acquisition unit used to take the picture;
correspondingly, determining a target size occupied by the foreground image in the second picture according to the first size, the first depth information and the second depth information includes:
and determining the target size occupied by the foreground image in the second picture according to the first size, the first depth information, the second depth information, the focal length of an image acquisition unit for shooting the first picture and the focal length of an image acquisition unit for shooting the second picture.
5. The method of claim 3, wherein the obtaining a first size occupied by the foreground image in the first picture comprises:
and acquiring the total number of pixels included in the foreground image.
6. An electronic device, comprising a first acquisition unit, a first determination unit, a second acquisition unit, a third acquisition unit, a second determination unit, a fourth acquisition unit, a third determination unit, a scaling unit, and a display unit, wherein:
the first obtaining unit is used for obtaining a first operation, and the first operation is used for selecting a foreground image from a first picture;
the first determining unit is used for determining a foreground image in a first picture based on the first operation;
the second acquiring unit is used for acquiring the state information of the foreground image; wherein the state information of the foreground image comprises: a first size occupied by the foreground image in the first picture and first depth information of the foreground image in the first picture;
the third obtaining unit is configured to obtain a second operation, where the second operation is used to place the foreground image in a second picture;
the second determining unit is configured to determine a second picture as a background based on the second operation;
the fourth obtaining unit is configured to obtain state information of the second picture; wherein the state information of the second picture comprises: second depth information of the second picture;
the third determining unit is used for determining the target size occupied by the foreground image in the second picture according to the state information of the foreground image and the state information of the second picture; wherein the target size is proportional to the first size;
the scaling unit is used for scaling the foreground image into the target size;
and the display unit is used for displaying the foreground image in the second picture in the target size and presenting the foreground image to a user.
7. The electronic device of claim 6, wherein the first operation and the second operation are consecutive actions; or,
the first operation and the second operation are two actions separated by a first time period.
8. The electronic device according to claim 6 or 7, wherein the state information includes depth information of the object in the picture and a size occupied by the object in the picture;
correspondingly, the second acquiring unit comprises a first acquiring module and a second acquiring module, wherein the first acquiring module is used for acquiring a first size occupied by the foreground image in the first picture;
the second obtaining module is configured to obtain first depth information of the foreground image in the first picture;
correspondingly, the fourth obtaining unit is configured to obtain second depth information of the second picture;
correspondingly, the third determining unit is configured to determine a target size occupied by the foreground image in the second picture according to the first size, the first depth information, and the second depth information.
9. The electronic device of claim 8, wherein the state information further comprises a focal length of an image acquisition unit used to take a picture;
correspondingly, the third determining unit is configured to determine, according to the first size, the first depth information, the second depth information, a focal length of an image acquisition unit used for shooting the first picture, and a focal length of an image acquisition unit used for shooting the second picture, a target size occupied by the foreground image in the second picture.
10. The electronic device of claim 8, wherein the first obtaining module is configured to obtain a total number of pixels included in the foreground image.
CN201410670018.2A 2014-11-20 2014-11-20 A kind of information processing method and electronic equipment Active CN105678696B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410670018.2A CN105678696B (en) 2014-11-20 2014-11-20 A kind of information processing method and electronic equipment
US14/658,756 US9607394B2 (en) 2014-11-20 2015-03-16 Information processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410670018.2A CN105678696B (en) 2014-11-20 2014-11-20 A kind of information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105678696A CN105678696A (en) 2016-06-15
CN105678696B true CN105678696B (en) 2019-03-29

Family

ID=56958044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410670018.2A Active CN105678696B (en) 2014-11-20 2014-11-20 A kind of information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105678696B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375662B (en) * 2016-09-22 2019-04-12 宇龙计算机通信科技(深圳)有限公司 A kind of image pickup method based on dual camera, device and mobile terminal
CN108270978B (en) * 2016-12-30 2021-08-13 纳恩博(北京)科技有限公司 Image processing method and device
CN112689196B (en) * 2021-03-09 2021-06-11 北京世纪好未来教育科技有限公司 Interactive video playing method, player, equipment and storage medium
CN113507575B (en) * 2021-09-08 2021-11-26 上海英立视电子有限公司 Human body self-photographing lens generation method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101305401A (en) * 2005-11-14 2008-11-12 微软公司 Stereo video for gaming
CN101853498A (en) * 2009-03-31 2010-10-06 华为技术有限公司 Image synthetizing method and image processing device
CN102932541A (en) * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN103024271A (en) * 2012-12-14 2013-04-03 广东欧珀移动通信有限公司 Method photographing on electronic device and electronic device adopting method
CN103442181A (en) * 2013-09-06 2013-12-11 深圳市中兴移动通信有限公司 Image processing method and image processing device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720283B2 (en) * 2005-12-09 2010-05-18 Microsoft Corporation Background removal in a live video
US8626930B2 (en) * 2007-03-15 2014-01-07 Apple Inc. Multimedia content filtering
US9203918B2 (en) * 2007-03-15 2015-12-01 Nokia Technologies Oy Pulling information from information sources via refer requests
CN102855459B (en) * 2011-06-30 2015-11-25 株式会社理光 For the method and system of the detection validation of particular prospect object

Also Published As

Publication number Publication date
CN105678696A (en) 2016-06-15

Similar Documents

Publication Publication Date Title
CN110933296B (en) Apparatus and method for providing content aware photo filter
US9607394B2 (en) Information processing method and electronic device
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
EP3125135A1 (en) Picture processing method and device
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
CN107613202B (en) Shooting method and mobile terminal
CN107532881B (en) Measurement method and terminal
CN106454086B (en) Image processing method and mobile terminal
EP3822757A1 (en) Method and apparatus for setting background of ui control
CN105678696B (en) A kind of information processing method and electronic equipment
CN112954212B (en) Video generation method, device and equipment
US10216381B2 (en) Image capture
CN113709368A (en) Image display method, device and equipment
CN107426490A (en) A kind of photographic method and terminal
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
KR20130103217A (en) Apparatus for providing blurred image and method for providing blurred image
CN112188108A (en) Photographing method, terminal, and computer-readable storage medium
CN105608668B (en) A kind of information processing method and device
CN109089040B (en) Image processing method, image processing device and terminal equipment
JP6397508B2 (en) Method and apparatus for generating a personal input panel
CN110136233B (en) Method, terminal and storage medium for generating nail effect map
CN103826061B (en) Information processing method and electronic device
CN104978566A (en) Picture processing method and terminal
CN104298442A (en) Information processing method and electronic device
CN104866163B (en) Image display method, device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant