CN117714849A - Image shooting method and related equipment
Image shooting method and related equipment
- Publication number
- CN117714849A (application number CN202311120538.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- spliced
- preview
- stitched
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N23/632 — Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
- H04N1/3876 — Recombination of partial images to recreate the original image
- H04N23/62 — Control of parameters via user interfaces
- H04N23/69 — Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
- H04N23/951 — Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
Abstract
An image shooting method and related equipment are provided. By implementing the image shooting method of the embodiments of this application, an electronic device can change the orientation of a camera to capture N frames of images to be stitched, stitch those N frames, and display the stitched preview image in a stitching preview window. The stitched target image thus retains the image resolution of the individual frames while presenting more scenery than any single captured frame, and a visual preview of the stitching is provided during the shooting process.
Description
Technical Field
The present disclosure relates to the field of terminals and communications technologies, and in particular, to an image capturing method and related devices.
Background
As technology develops, the shooting capabilities of electronic devices such as mobile phones continue to improve, and a single device may carry multiple cameras, for example a wide-angle camera and a telephoto camera. An image taken with the telephoto camera has a higher image resolution than one taken with the wide-angle camera, but presents less of the scene.
In the photographing function, image capture is accompanied by a preview display of the image.
How to give the captured image the resolution of a telephoto shot while presenting the scenery of a wide-angle shot, or even more scenery, and how to provide an effective preview of the captured image during this process, is a direction worth researching.
Disclosure of Invention
This application provides an image shooting method and related equipment. Multiple frames of images are shot with cameras provided in an electronic device: the orientation of one camera is changed to shoot the multiple frames, and the shot images are stitched to obtain a target image. This ensures that the target image retains the image resolution of the individual frames while presenting more scenery than any single captured frame, and the stitching is effectively previewed based on the images shot by the two cameras.
In a first aspect, an image capturing method is provided, applied to an electronic device including a first camera and a second camera, where a field angle of the first camera is larger than a field angle of the second camera, the image capturing method including:
The electronic device enters a camera preview interface. In response to a photographing trigger operation, it acquires a first image with the first camera; also in response to the photographing trigger operation, it changes the orientation of the second camera, acquires N frames of images to be stitched with the second camera, and performs image stitching on the N frames to obtain a target image. N is an integer greater than 2.
The camera preview interface includes a stitching preview window. A stitched preview image is displayed in the stitching preview window; the stitched preview image corresponds to the target image, the images to be stitched, or the first image.
The correspondence between the stitched preview image and the target image includes, but is not limited to, using the target image itself as the stitched preview image, or cropping or otherwise processing the target image to obtain the stitched preview image.
The correspondence between the stitched preview image and the images to be stitched includes, but is not limited to, using an image to be stitched as the stitched preview image, using a stitched image generated during frame-by-frame stitching of the images to be stitched as the stitched preview image, or cropping or otherwise processing an image to be stitched to obtain the stitched preview image.
The correspondence between the stitched preview image and the first image includes, but is not limited to, using the first image as the stitched preview image, or cropping or otherwise processing the first image to obtain the stitched preview image.
The stitched preview image corresponds to the image stitching process: it can be displayed in the stitching preview window while the images are being stitched into the target image.
When image shooting is performed, cameras with different field angles in the electronic device differ in picture resolution and in the amount of scenery presented. In this case, a camera with a rotatable lens is provided in the electronic device, and multiple frames are shot by changing the camera's orientation; each captured frame has a certain resolution and presents certain scenery. The frames are then stitched, so that the stitched target image retains the image resolution of the individual frames while presenting more scenery than any single captured frame.
Meanwhile, a visual preview of the image stitching is realized during shooting: using either an image obtained in the stitching processing of the rotatable camera or an image shot by the camera with the larger field angle, an equivalent preview of the stitched result is achieved, providing a preview display of the shooting effect of the image stitching mode.
With reference to the first aspect, in certain implementations of the first aspect, the camera preview interface includes an image preview area in which the stitched preview window is located.
After entering the camera preview interface, the electronic device acquires a camera preview image with the first camera and displays it in the image preview area. The first image corresponds to the camera preview image displayed in the image preview area at the trigger moment of the photographing trigger operation.
Thus, the stitching preview window forms a small preview window and the image preview area forms a large preview window; together they realize the picture preview display of the scenery currently to be shot.
With reference to the first aspect, in some implementations of the first aspect, changing the orientation of the second camera in response to a photographing trigger operation and acquiring N frames of images to be stitched with the second camera includes:
in response to the photographing trigger operation, driving a motor to change the orientation of the second camera along a preset lens-movement trajectory, and acquiring the N frames of images to be stitched at N photographing points of the second camera on that trajectory.
Thus, each time the second camera in the electronic device moves to a photographing point along the lens-movement trajectory, one frame to be stitched is captured, and N frames are ultimately obtained at the N photographing points on the trajectory.
With reference to the first aspect, in certain implementations of the first aspect, the lens-movement trajectory needs to be determined in advance.
In this case, the image capturing method further includes:
selecting an inner photographing start point and an outer photographing end point within the field angle of the first camera; determining, based on the start and end points, a photographing movement order among the N−2 intermediate photographing points within the field angle; determining a target movement trajectory from the start point, through the N−2 intermediate points in that order, to the end point; and setting the target movement trajectory as the lens-movement trajectory.
This point selection ensures that the lens-movement trajectory starts as close as possible to the center of the lens's movable range, saving the time the second camera needs to move from the center to the start point, realizing fast framing, reducing unnecessary travel, and improving the stability of the captured picture. A sketch of one possible ordering follows.
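As a minimal illustration, the following Python sketch orders the intermediate photographing points with a greedy nearest-neighbor rule from an inner start point to an outer end point. The patent does not specify the ordering rule, so the greedy heuristic and the example point grid are illustrative assumptions.

```python
import math

def plan_trajectory(points, start, end):
    """Order the N-2 intermediate photographing points between an inner
    start point and an outer end point with a greedy nearest-neighbor
    rule; `points` are (x, y) lens-orientation targets."""
    remaining = [p for p in points if p not in (start, end)]
    path = [start]
    while remaining:
        current = path[-1]
        nearest = min(remaining, key=lambda p: math.dist(current, p))
        path.append(nearest)
        remaining.remove(nearest)
    path.append(end)
    return path

# Example: a 2x3 grid of photographing points (N = 6), centered at (1, 0.5).
pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
center = (1, 0.5)
start = min(pts, key=lambda p: math.dist(p, center))  # innermost point
end = max(pts, key=lambda p: math.dist(p, center))    # an outer point
print(plan_trajectory(pts, start, end))
```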
With reference to the first aspect, in some implementations of the first aspect, selecting an inner photographing start point and an outer photographing end point within the field angle of the first camera includes:
selecting, within the field angle of the first camera, a first point located toward the inside as the photographing start point, and selecting a second point that lies outward relative to the first point, with an offset smaller than a threshold, as the photographing end point.
This keeps the photographing end point as close to an inner point as possible, reducing the time the camera needs to move from the end point back to the center of the lens's movable range, realizing fast framing, further reducing unnecessary travel, and improving the stability of the captured picture.
With reference to the first aspect, in certain implementation manners of the first aspect, displaying the stitched preview image in the stitching preview window includes:
cropping the first image to obtain a second image, and displaying the second image as the stitched preview image in the stitching preview window.
Cropping the image shot by the large-field-angle first camera yields an equivalent preview image corresponding to the stitched target image, enabling fast preview display of the multi-scenery, high-resolution image that stitching will produce.
With reference to the first aspect, in some implementations of the first aspect, cropping the first image to obtain the second image includes:
cropping from the first image a second image consistent with the maximum field angle covered by the second camera under orientation adjustment.
Cropping the image shot by the large-field-angle first camera down to the maximum field angle of the rotatable telephoto camera ensures equivalence between the cropped preview image and the target image obtained by stitching the N frames.
With reference to the first aspect, in certain implementation manners of the first aspect, cropping from the first image a second image consistent with the maximum field angle covered by the second camera under orientation adjustment includes:
acquiring the zoom magnification of the equivalent focal length, corresponding to the maximum field angle covered by the second camera under orientation adjustment, relative to the first focal length of the first camera; determining crop-region coordinates within the first image based on the first image's width, height, and the zoom magnification; and cropping the first image at those coordinates to obtain the second image.
In this way, a picture with the same field angle as the second camera's stitched result can be cropped out of the first camera's captured picture for preview display.
With reference to the first aspect, in certain implementation manners of the first aspect, displaying the stitched preview image in the stitching preview window includes:
displaying the target image as the stitched preview image in the stitching preview window;
or, cropping the target image to obtain a third image, and displaying the third image as the stitched preview image in the stitching preview window;
or, during the stitching of the N frames into the target image, displaying each intermediate stitched image as the stitched preview image in the stitching preview window.
This realizes either a fast preview display based on the final stitching result or a dynamic preview display based on the stitching process.
With reference to the first aspect, in some implementations of the first aspect, a display grid is provided in the stitching preview window. The display grid includes N grid cells, each corresponding to one photographing point on the lens-movement trajectory of the second camera; at each photographing point one frame to be stitched is acquired with the second camera.
In this case, displaying each intermediate stitched image in the stitching preview window as the stitched preview image, during the stitching of the N frames into the target image, includes:
determining, from the display grid, at least one target grid cell corresponding to each stitched image obtained as the frames are stitched, and successively overlaying and displaying the stitched images on those target grid cells in the stitching preview window.
In this process, the lens-movement trajectory is integrated into the graphical preview; the dynamic acquisition of images along the trajectory is combined with the dynamic stitched-preview display, which improves how faithfully the preview simulates the acquire-and-stitch process while realizing the dynamic preview display of the stitching. A sketch of one possible grid mapping follows.
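As an illustration of this grid-based progressive display, the following Python sketch maps stitched frames to grid cells. It assumes a row-major layout of the N photographing points in a grid with `cols` columns; the preview-window object and its update_cell method are hypothetical placeholders, not an interface from the patent.

```python
# A minimal sketch of the display grid described above; the grid layout
# and the update_cell call are illustrative assumptions.
def cells_covered(frame_idx, cols):
    """Grid cells covered once frames 0..frame_idx have been stitched."""
    return [(i // cols, i % cols) for i in range(frame_idx + 1)]

def show_stitch_progress(preview_window, stitched_image, frame_idx, cols):
    # Overlay the latest stitched image on every cell stitched so far.
    for row, col in cells_covered(frame_idx, cols):
        preview_window.update_cell(row, col, stitched_image)  # hypothetical UI call
```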
With reference to the first aspect, in some implementations of the first aspect, performing image stitching on the N frames of images to be stitched to obtain the target image includes:
after the first frame to be stitched is acquired with the second camera, stitching each subsequently acquired frame onto the result obtained so far to produce a stitched image, until the Nth frame is acquired with the second camera and stitched, yielding the target image.
Stitching the N frames frame by frame, with a stitched image generated as each new frame is stitched until the target image is finally obtained, implements the dynamic preview display of the stitching. A sketch of this loop follows.
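As a minimal illustration of the frame-by-frame loop, here is a Python sketch under the assumption that a pairwise registration-and-fusion step `stitch_pair` is available (one possible realization is sketched after the registration description below).

```python
# A minimal sketch of incremental stitching; `stitch_pair` is assumed.
def stitch_incrementally(frames, stitch_pair):
    result = frames[0]                 # first frame to be stitched
    previews = []
    for frame in frames[1:]:           # frames 2..N in acquisition order
        result = stitch_pair(result, frame)
        previews.append(result)        # intermediate stitched preview image
    return result, previews            # target image + preview sequence
```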
With reference to the first aspect, in some implementations of the first aspect, stitching each frame acquired with the second camera to obtain a stitched image includes:
when a first image to be stitched is acquired with the second camera, obtaining the center-position coordinates of the first image to be stitched and of a second image to be stitched; determining the image stitching region between the two based on those center coordinates; detecting image feature points in the stitching region; determining the M most similar feature-point pairs among them; computing an affine transformation matrix for the first image to be stitched from the M feature-point pairs; affine-transforming the first image to be stitched by that matrix to obtain a third image to be stitched; and fusing the third image with the second image to obtain the stitched image. M is an integer greater than or equal to 3.
The second image to be stitched is either the first frame to be stitched or the stitched image obtained so far.
In this process, each stitching step introduces the center-position coordinates of the two frames being stitched and performs image registration based on them; feature-point detection, matching, and affine transformation are then carried out on that basis to realize fusion stitching, ensuring the accuracy of the stitching processing.
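To make the registration step concrete, here is a minimal Python/OpenCV sketch of stitching one new frame onto the result so far. It is a sketch under stated assumptions, not the patent's implementation: the patent derives the stitching region from the two frames' center coordinates, which is simplified here to a fixed left/right overlap strip, and the ORB features, match count, and fusion rule are illustrative choices.

```python
import cv2
import numpy as np

def stitch_pair(base, new, overlap_frac=0.25, m_pairs=30):
    """Register `new` onto `base` and fuse them. The overlap is assumed
    to be the right edge of `base` / left edge of `new`."""
    h_new, w_new = new.shape[:2]
    h_base, w_base = base.shape[:2]

    # Restrict feature detection to the assumed stitching region.
    mask_new = np.zeros((h_new, w_new), np.uint8)
    mask_new[:, : int(w_new * overlap_frac)] = 255
    mask_base = np.zeros((h_base, w_base), np.uint8)
    mask_base[:, int(w_base * (1 - overlap_frac)):] = 255

    orb = cv2.ORB_create()
    kp_new, des_new = orb.detectAndCompute(new, mask_new)
    kp_base, des_base = orb.detectAndCompute(base, mask_base)

    # Keep the M most similar feature-point pairs (M >= 3).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_new, des_base),
                     key=lambda m: m.distance)[:max(m_pairs, 3)]
    src = np.float32([kp_new[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_base[m.trainIdx].pt for m in matches])

    # Affine transform mapping `new` into the coordinate frame of `base`.
    affine, _ = cv2.estimateAffinePartial2D(src, dst)

    # Warp onto a widened canvas, then fuse: keep `base` pixels where
    # present, fill the rest from the warped new frame.
    canvas = cv2.warpAffine(new, affine, (w_base + w_new, h_base))
    canvas[:h_base, :w_base] = np.where(base > 0, base,
                                        canvas[:h_base, :w_base])
    return canvas
```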
In a second aspect, the present application provides an image capturing apparatus including a first camera and a second camera, the angle of view of the first camera being greater than the angle of view of the second camera, the apparatus comprising:
the mode triggering module is used for entering a camera preview interface, and the camera preview interface comprises a spliced preview window;
the first shooting module is used for responding to shooting triggering operation and acquiring a first image by using a first camera;
the second shooting module is used for responding to shooting triggering operation, changing the direction of a second camera, acquiring N frames of images to be spliced by using the second camera, wherein N is an integer larger than 2;
the image processing module is used for carrying out image stitching on the N frames of images to be stitched to obtain a target image;
the image preview module is used for displaying the stitched preview image in the stitching preview window, where the stitched preview image corresponds to the target image, the images to be stitched, or the first image.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing any one of the methods of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program capable of implementing any one of the methods of the first aspect when the computer program is executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program capable of implementing any one of the methods of the first aspect when the computer program is executed by a processor.
It will be appreciated that the image capturing apparatus provided in the second aspect, the electronic device provided in the third aspect, the computer-readable storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all configured to perform any one of the methods provided in the first aspect of this application. Their beneficial effects can therefore be found in the description of the corresponding method and are not repeated here.
Drawings
- FIGS. 1a-1c illustrate an exemplary set of views of a scene taken with a wide-angle camera;
- FIGS. 2a-2c illustrate a set of schematic diagrams of an electronic device capturing images with a telephoto camera;
- FIGS. 3a-3d illustrate a set of camera preview display schematics in an electronic device;
- FIG. 3e shows a camera parameter relationship diagram for a telephoto camera;
- FIG. 4 is a schematic flowchart of the shooting method involved in the first scene;
- FIGS. 5a-5e are a set of schematic diagrams of acquiring N frames of images to be stitched and performing image stitching preview in the second scene;
- FIG. 6 is a schematic flowchart of the shooting method involved in the second scenario;
- FIG. 7 is a schematic flowchart of frame-by-frame stitching of N frames of images to be stitched;
- FIG. 8 is a schematic flowchart of one cycle of image stitching;
- FIGS. 8a-8f are a set of schematic diagrams of determining a lens-movement trajectory and performing image stitching preview in combination with that trajectory in the third scenario;
- FIG. 9 is a schematic flowchart of the photographing method involved in the third scenario;
- FIG. 10 is a schematic diagram of an electronic device control apparatus according to an embodiment of this application;
- FIG. 11 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. In the description of the embodiments, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "plural" means two or more.
It should be understood that the terms first, second, and the like in the description and in the claims and drawings of the present application are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
The term "user interface (UI)" in the following embodiments of this application refers to a media interface for interaction and information exchange between an application program or operating system and a user; it converts between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize. A common presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface related to computer operations that is displayed graphically. It may be a visual interface element such as text, an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, or a widget displayed on the display of the electronic device.
Multiple cameras can be arranged in the same electronic device according to its functions, enabling the photographing functions supported by the different cameras.
The electronic device may be, for example, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or a dedicated camera (e.g., a single-lens reflex camera or a compact camera) with an image capturing function.
The camera is, for example, a normal (standard-angle) camera, a wide-angle camera, an ultra-wide-angle camera, or a telephoto camera, and the photographing function is, for example, a normal photographing function, a wide-angle photographing function, or the photographing-stitching function provided later in the embodiments of this application. Different photographing functions may be configured in the camera application in the form of photographing modes. Different image shooting requirements call for different cameras to achieve the corresponding shooting effects.
When a distant scene is shot, the zoom magnification is usually increased; the electronic device then switches to the telephoto camera to shoot. Although the scenery in the picture is clearer, less scenery is captured.
If the wide-angle camera is used to shoot from the same viewpoint, more scenery is captured, but the sharpness after enlargement is inferior to that of the telephoto camera.
In this application, the electronic device includes two cameras with different field angles. The field angle of the first camera is larger than that of the second camera; the first camera is a wide-angle or ultra-wide-angle camera, and the second camera is a normal or telephoto camera.
In one approach, the electronic device may take a photograph with the first camera in order to render more content in the captured image. The first camera may be a wide angle camera or an ultra wide angle camera.
Figs. 1a-1c illustrate an exemplary set of views of a scene taken with a wide-angle camera. As shown in fig. 1a, the user interface 10 is the main interface of an electronic device. The electronic device may start the camera function in response to a function start operation (e.g., a click or slide operation) by the user on the camera application 101.
As shown in fig. 1b, assume the electronic device is at a first position. For the scenery in region 101, the telephoto camera can capture only part of the scene, so the electronic device may capture the entire scene with the wide-angle camera, whose field angle is larger than the telephoto camera's. In response to the user's photographing trigger operation (e.g., a click or slide operation), the electronic device performs the photographing operation to obtain image A.
To view the captured image, the electronic device may display the user interface 11 shown in fig. 1c.
In the user interface 11, an image B may be displayed; image B may be image A itself, or an image obtained by processing image A, to realize viewing of the captured image. The image resolution of image B is low, so image B is unclear. For example, a partial image of image B may be displayed in region 102; after being enlarged X times, as shown in region 103, it appears unclear.
Thus, although the wide-angle camera can shoot more scenes with a larger field angle, the resolution of the shot image is lower, and the problem of unclear image can occur.
In this application, in order for the shot image to present more content while retaining a higher image resolution, the electronic device may shoot from the first position with the second camera, whose orientation is adjustable, obtain multiple frames of the scenery in region 101, and stitch them into the target image. The second camera is a rotatable camera: in hardware, a motor can rotate the lens to modulate its orientation, relaxing the second camera's field-angle limitation and enlarging the shooting coverage.
The first and second cameras differ in field angle: the first camera's field angle is larger than the second camera's own field angle, and the images the two cameras shoot trade off image resolution against the amount of scenery they can present.
In this scenario, the embodiments of this application add a photo-stitching mode to the camera function. In this mode, the second camera acquires multiple frames and stitches them, replacing the first camera's shot with an image that presents more scenery yet has higher overall image resolution.
Optionally, the first camera is a wide-angle or ultra-wide-angle camera, and correspondingly the second camera is a telephoto camera. In that case, when a distant view is photographed, the image shot by the second camera is clearer than the aforementioned image A.
In that configuration, in the photo-stitching mode added to the camera function, the telephoto camera acquires multiple frames and stitches them, replacing the wide-angle camera's shot with an image that presents more scenery yet has higher overall image resolution; this mode may be called the tele-stitching mode. Of course, when the first and second cameras use other camera types, the mode may be given a corresponding different name, which is not limited here.
The user interface 20 in fig. 2a is the display interface of the function setting items in the camera application, in which a tele-stitching control 200 corresponding to the "tele-stitching" mode may be provided; in response to a user operation (e.g., a click) on the tele-stitching control 200, the electronic device may enter the tele-stitching mode.
Figs. 2b and 2c show a set of schematic diagrams of an electronic device capturing an image with a telephoto camera.
Referring to the schematic diagram in fig. 1b, with the electronic device unchanged at the first position, in response to the user's photographing trigger operation (e.g., a click or slide operation), the electronic device changes the orientation of the telephoto camera, as in fig. 2b, so that it first photographs the scenery in region 201 and then photographs the scenery in regions 202, 203, and 204 in sequence, obtaining 4 frames of images to be stitched in total; the electronic device may then stitch these 4 frames to obtain image C.
As shown in fig. 2c, the image displayed in the user interface 21 may be an image D; image D may be image C itself, or an image obtained by processing image C, to realize viewing of the captured image. The scenery in region 101 can be displayed in image D at a higher resolution than in image B. For example, a partial image of image D may be displayed in region 211; after being enlarged X times, as shown in region 212, no unclarity arises, in contrast to the image in the aforementioned region 103.
It will be appreciated that if the first camera is an ultra wide angle camera, the second camera may also be a wide angle camera.
Therefore, the electronic device can use the small-field-angle camera to cover a larger range of scenery, so that more scenery is presented in the image while the image resolution is not reduced and the image remains clear.
This application provides an image shooting method. In scenarios where cameras with different field angles differ in picture resolution and in the amount of scenery presented, a rotatable camera is provided in the electronic device and multiple frames are shot by changing its orientation; each captured frame has a certain resolution and presents certain scenery. The frames are then stitched, so that the stitched target image retains the image resolution of the individual frames while presenting more scenery than any single captured frame.
Meanwhile, a visual preview of the image stitching is realized during shooting: using either an image obtained in the stitching processing of the rotatable camera or an image shot by the camera with the larger field angle, an equivalent preview of the stitched result is achieved, providing a preview display of the shooting effect of the image stitching mode.
In one scheme, in order to enable the image stitching operation to have a visual display effect, a stitching preview window is arranged in a camera preview interface of a camera function, and a preview image of image stitching is displayed in the stitching preview window.
Figs. 3a and 3b show schematic views of a camera preview interface in an electronic device calling up the stitching preview window.
As shown in fig. 3a, the user interface 30 is a camera preview interface. The electronic device starts the camera function and enters the camera preview interface. After entering the preview interface in the camera application, the first camera is started and begins collecting image frames, and the preview box 301 of the camera preview interface displays the frames collected by the first camera. When the tele-stitching mode is triggered in the camera application, then in response to a photographing trigger operation on the photographing control 302 in the camera preview interface, the second camera begins collecting frames at the same viewpoint; as shown in the user interface 31 in fig. 3b, the electronic device displays a stitching preview window 311 in the camera preview interface, synchronously changes the lens orientation of the second camera, and shoots N images with the second camera.
In the user interface 31 of fig. 3b, the stitched preview window 311 may be presented in the form of a preview pane at the camera preview interface.
The preview box 301 forms a large preview window, through which the frames collected by the first camera are synchronously displayed in the camera preview interface. Optionally, the stitching preview window 311 is displayed inside the large preview window formed by the preview box 301. Thus, the small stitching preview window and the large wide-angle preview window together realize the picture preview display of the scenery currently to be shot.
The stitching preview window 311 displays a stitched preview image corresponding to the stitching of the second camera's images. The stitched preview display may be based on the image with more scenery shot by the first camera, on the target image generated after stitching the N frames shot by the rotated second camera, or on the intermediate stitched images produced while the N frames are being stitched into the target image.
In implementation, the first camera may be a wide-angle or ultra-wide-angle camera and the second camera a telephoto camera. Alternatively, the first camera may be an ultra-wide-angle camera and the second camera a wide-angle camera; or the first camera may be a normal camera and the second camera a telephoto camera.
The following describes in detail the visual preview display of image stitching in the image shooting method of the embodiments of this application, taking the first camera as a wide-angle camera and the second camera as a telephoto camera, in combination with three specific application scenarios.
First scene: preview display based on the first image shot by the first camera.
The user interfaces involved in this process may be as described above with reference to the user interfaces and images of figs. 1a, 2b, and 3a-3d.
As shown in fig. 1a, the user interface 10 is the main interface of the electronic device. The electronic device may start the camera function in response to the user's function start operation on the camera application 101 and enter the camera preview interface; see the user interface 30 in fig. 3a.
After entering the camera preview interface, the electronic device acquires an image by using the wide-angle camera, and the image acquired by the wide-angle camera is displayed in the current camera preview interface, so that preview display of the image acquired by the current camera is realized.
Within the camera function, the user can select the tele-stitching control 200 from the function setting display interface shown in fig. 2a based on the current photographing requirement, triggering the electronic device to enter the tele-stitching mode. Alternatively, the user may adjust the zoom magnification to a set value, for example 3.5×, whereupon the electronic device enters the tele-stitching mode of the camera function.
In response to the user's photographing trigger operation on the photographing control 302 in the camera preview interface, the electronic device enters the user interface 31 shown in fig. 3b; it may continue to acquire camera preview images with the wide-angle camera, display them in the preview box 301, and call up the stitching preview window 311. In response to the photographing trigger operation, the electronic device performs the photographing operation.
The photographing operation includes two parts. One part captures, at the trigger moment of the photographing trigger operation, the image frame collected by the wide-angle camera and shown in the preview box 301, obtaining image A (i.e., the first image obtained by the first camera). The other part adjusts the lens orientation of the telephoto camera, acquires N frames of images to be stitched, and stitches the N frames to obtain the target image.
While the N frames to be stitched are being acquired, the stitched preview image is synchronously displayed in the stitching preview window 311; here the stitched preview image is obtained from the image shot by the wide-angle camera.
During the stitched preview, image A may be used directly as the stitched preview image, or image A may be cropped or further processed and then used as the stitched preview image displayed in the stitching preview window 311, realizing fast preview display of the multi-scenery, high-resolution image that stitching will produce.
After the N frames to be stitched are obtained, they are stitched (possibly with further processing) to obtain the target image, which is stored. N is a positive integer greater than or equal to 2, generally an even number; its value is determined by the field angle of the wide-angle camera and the field angle of the telephoto camera.
In one preview process, as shown in fig. 3c, considering that the wide-angle lens's imaging has black borders, the shot image A needs to be edge-cropped to a reasonable size to remove the black borders, yielding the stitched preview image.
Alternatively, in another preview process, considering the field-angle difference between the wide-angle and telephoto cameras together with the maximum rotatable angle range of the telephoto lens, there may be a difference in field-angle coverage between the wide-angle camera and the rotatable telephoto camera; an image consistent with the maximum field angle coverable by the telephoto camera under orientation adjustment then needs to be cropped from image A to obtain the stitched preview image.
As shown in figs. 1b, 2b, and 3d, the coverage of the wide-angle camera alone is the range of region 101, and the coverage of the telephoto camera alone is the range of region 201. In this example, the lens orientation of the telephoto camera can be rotationally adjusted toward region 201, region 202, region 203, and region 204 respectively. If the rotation range spanning these 4 orientations is taken as the lens's maximum rotatable angle range, the field-angle coverage of the rotatable telephoto camera over that range is the region formed jointly by regions 201, 202, 203, and 204, namely the range of region 301, as shown in fig. 3d.
Here, the image corresponding to the maximum field angle coverable by the rotatable telephoto camera under orientation adjustment is the target image obtained by stitching the N frames; the target image corresponds to the lens's field-angle coverage region 301 over the maximum rotatable range.
As shown in fig. 3d, the wide-angle camera captures image A corresponding to region 101; region 302 within image A is the crop region, and it corresponds to the field-angle coverage region 301 of the rotatable telephoto camera. Cropping image A down to the maximum field angle of the rotatable telephoto camera according to this crop region yields the equivalent preview image (i.e., the stitched preview image) corresponding to the target image obtained by stitching the N frames, realizing fast preview display of the multi-scenery, high-resolution image that stitching will produce.
To compensate for the field-angle coverage difference between the wide-angle camera and the rotatable telephoto camera, the zoom magnification between the images of the two cameras (i.e., the corresponding Zoom value) must be calibrated in advance, based on the cameras actually equipped in the electronic device, so that an image consistent with the maximum field angle coverable by the rotatable telephoto camera under orientation adjustment can be cropped from image A.
Therefore, before the tele-stitching mode is configured, the zoom magnification adapted to the tele-stitching mode must be determined.
Specifically, the field angle of the rotatable telephoto camera when the lens is rotated to its maximum angle is measured first, and the equivalent focal length f at that field angle is computed from it together with the diagonal length d of the image sensor in the telephoto camera, as shown in fig. 3e:
θ = angle / 2
f = (d / 2) / tan θ
Further, the ratio of the equivalent focal length f to the telephoto camera's own focal length f₀ is calculated:
zoom = f / f₀
This yields the zoom magnification of the field angle of the rotatable telephoto camera relative to the field angle of the wide-angle camera, namely the zoom value.
Subsequently, based on the zoom magnification and the width and height of image A, the start and end coordinates of the crop region are determined within image A as shown in fig. 3d. The crop start point Q has coordinates ((width − width/zoom)/2, (height − height/zoom)/2) and the crop end point P has coordinates ((width + width/zoom)/2, (height + height/zoom)/2).
Based on these coordinates, a picture with the same field angle as the telephoto camera's stitched image is cropped out of the wide-angle camera's captured picture for preview-effect display.
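As a numeric illustration of this calibration, the following Python sketch computes the equivalent focal length, the zoom value, and the crop coordinates Q and P. The sensor diagonal, maximum field angle, and reference focal length are assumed example values, not figures from the patent.

```python
import math

# Illustrative assumed values, not figures from the patent.
d = 9.2        # image sensor diagonal, mm (assumed)
angle = 30.0   # field angle at maximum lens rotation, degrees (assumed)
f0 = 6.0       # reference focal length, mm (assumed)

theta = math.radians(angle / 2)
f = (d / 2) / math.tan(theta)   # equivalent focal length at max rotation
zoom = f / f0                   # zoom value used for cropping

def crop_region(width, height, zoom):
    """Crop start point Q and end point P within image A."""
    q = ((width - width / zoom) / 2, (height - height / zoom) / 2)
    p = ((width + width / zoom) / 2, (height + height / zoom) / 2)
    return q, p

print(crop_region(4000, 3000, zoom))  # e.g., for a 4000x3000 image A
```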
In this application, the first camera may also be the main camera; the main-camera image is cropped to match the maximum field angle of the rotatable telephoto camera and then sent for display, realizing main-camera shooting and display.
In conjunction with the above description of the first scene, the process by which the electronic device acquires the target image and previews the image using the shooting method of the embodiments of this application is described in detail below.
Fig. 4 is a schematic flow chart of a photographing method involved in the first scene.
In the first scenario, the process of the electronic device acquiring the target image and the image preview by using the shooting method in the embodiment of the present application may refer to the following descriptions of steps 101 to 107.
Step 101, the electronic device enters a camera preview interface.
The camera preview interface includes an image preview area therein. The image preview area is used for previewing and displaying the image frames acquired by the camera.
After entering the camera preview interface, the electronic device captures the scene at 1× zoom magnification by default; at this magnification it can use the wide-angle camera to capture images. The user can set the zoom magnification used when capturing, for example changing it from 1× to 3×.
At different zoom magnifications, the electronic device may use different cameras to acquire and display images. For example, when the zoom magnification is greater than 3.5×, the electronic device uses the wide-angle camera and/or the telephoto camera; when it is less than 1.0×, the ultra-wide-angle camera is used; in other cases only the wide-angle camera is used. It should be understood that the effect of zoom magnification on which camera is used is only illustrated by example here, and different electronic devices may be configured differently.
After entering the camera preview interface, the electronic device may collect an image based on the wide-angle camera in the default state, and perform preview display in an image preview area in the camera preview interface.
Alternatively, a stitching preview window can be provided in the camera preview interface: when the zoom magnification is greater than 3.5×, the electronic device shoots N images with the telephoto camera and stitches them, and the stitching is previewed in the stitching preview window.
Step 102, acquiring a camera preview image by using a first camera.
The electronic equipment comprises a first camera and a second camera, wherein the field angle of the first camera is larger than that of the second camera.
Here, the first camera may be a wide-angle or ultra-wide-angle camera and the second camera a telephoto camera. Alternatively, the first camera may be an ultra-wide-angle camera and the second camera a wide-angle camera; or the first camera may be a normal camera and the second camera a telephoto camera.
Step 103, displaying the preview image in the image preview area.
The electronic device collects images with the first camera; two or more frames form an image sequence. After preprocessing the collected images, the electronic device may send them to the display screen for preview in the image preview area of the camera preview interface. One exemplary user interface involved in previewing is the user interface 30 illustrated in fig. 3a, described above.
Step 104, responding to a photographing triggering operation, and acquiring a first image by using a first camera.
Illustratively, the first image corresponds to the preview image displayed in the image preview area at the trigger moment of the photographing trigger operation.
Specifically, the first image is a frame of image in the image sequence acquired by the first camera.
The photographing triggering operation may be an operation (e.g., a clicking operation) on the photographing control 302 in the user interface 30 shown in fig. 3a described above. When the electronic device detects a photographing triggering operation of the user on the photographing control 302, one frame of image acquired at the operation triggering moment in the image sequence is taken as a first image.
Step 105, in response to the photographing trigger operation, changing the orientation of the second camera and acquiring N frames of images to be stitched with the second camera.
Wherein N is an integer greater than 2.
Here, the second camera is a rotatable camera. After the electronic device detects the shooting trigger operation of the user on the shooting control 302 in fig. 3a, the electronic device may control a driving device such as a motor to adjust the lens orientation of the second camera, so as to expand the coverage range of the image acquired by the lens of the second camera as much as possible, and expand the angle of view that can be reached when the camera shoots the image.
In the process of capturing images by changing the orientation of the second camera, as shown in fig. 2b, each time the lens orientation changes, the area the camera can capture changes, and the camera captures one image of the current area. After N changes (4 changes in fig. 2b), N images to be stitched are captured (4 images to be stitched in fig. 2b).
Step 106, performing image stitching on the N frames of images to be stitched to obtain a target image.
After N images to be spliced are obtained, the N images to be spliced can be spliced to obtain a target image.
Thus, the angle of view corresponding to the obtained target image is larger than the angle of view of any one of the N frames of images to be stitched. The finally stitched target image can present more scenery while the image resolution is not reduced, so the image remains clear.
When performing image stitching, if image areas with overlapping content exist between the N frames of images to be stitched, feature deduplication is performed on those areas to realize stitching fusion. If no overlapping areas exist between the N frames, pixel feature matching points between different images can be determined from the edge content of each frame, and stitching fusion is realized based on those matching points.
Step 107, cropping, from the first image, a second image consistent with the maximum angle of view covered by the second camera under orientation adjustment, and displaying the second image as a stitched preview image in the stitched preview window.
Wherein, the setting position of the spliced preview window in the camera preview interface can be set according to the use habit of a user or according to aesthetic preference. The stitched preview window is located in the image preview region. As shown in fig. 3b, a preview box may be provided in the camera preview interface to form an image preview area, and a stitched preview window 311 is provided in the preview box 301.
Here, the first image needs to be cropped to obtain the second image.
As shown in figs. 3d and 3e, to crop from the first image a second image consistent with the maximum angle of view covered by the second camera under orientation adjustment, the angle of view corresponding to the capture range when the tele camera rotates to its maximum angle is measured; on that basis, the zoom magnification of the tele camera relative to the wide camera is calculated, i.e., the zoom magnification of the equivalent focal length of the maximum covered angle of view relative to the first focal length of the first camera.
Then, based on this zoom magnification and the width and height of the first image, cropping coordinates are determined in the first image, the first image is cropped based on those coordinates to obtain the second image, and the second image is previewed in place of the stitched target image.
Wherein optionally the image content in the target image is the same as the image content in the second image.
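For illustration only, the crop described above can be sketched as follows. This is a minimal sketch, assuming the maximum tele coverage is expressed as a single zoom ratio relative to the wide camera and that both cameras approximately share an optical center so the crop is centered; the function name and parameters are hypothetical, not details from the embodiment:

```python
def crop_to_tele_coverage(wide_img, zoom_ratio):
    """Crop the wide-camera frame (a NumPy image array) to the region
    matching the maximum angle of view the rotatable tele camera can
    cover. zoom_ratio is the equivalent-focal-length ratio of that
    coverage relative to the wide camera, e.g. 3.5."""
    h, w = wide_img.shape[:2]
    crop_w, crop_h = int(w / zoom_ratio), int(h / zoom_ratio)
    x0 = (w - crop_w) // 2   # centered crop: assumes the two cameras
    y0 = (h - crop_h) // 2   # share (approximately) one optical center
    return wide_img[y0:y0 + crop_h, x0:x0 + crop_w]
```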
In this process, the stitched preview image corresponds to the first image and is displayed in the stitched preview window. The image shot by the wide-angle camera thus serves as the basis for the stitching preview in the tele stitching mode, realizing a quick preview of the multi-scene, high-resolution image that can be formed after stitching.
The second scenario: previewing the image stitching based on the images shot by the second camera.
This scenario includes two implementations. One previews the target image finally obtained by stitching the N images to be stitched, that is, previews the stitching result (the target image). The other previews the different stitched images produced during the stitching of the N images, that is, previews the stitching process of the target image. In this way, a correspondence is established between the stitched preview image and the target image, or between the stitched images and the target image.
The electronic device achieves preview display of image stitching based on the photographed images of the second camera under the two different embodiments.
Reference is made to the schematic illustration of the user interface 30 shown in fig. 3a described above. After the electronic device enters the "tele stitching mode" of the camera function and obtains a camera preview image with the first camera at 3.5× zoom magnification, it may continue to obtain the camera preview image with the wide-angle camera and display the preview image in the preview box 301 of the user interface.
In response to the user's photographing trigger operation on the photographing control 302 in the camera preview interface, and referring to the schematic diagram of the user interface 31 shown in fig. 3b, the electronic device calls out the stitched preview window 311 and synchronously executes the photographing operation. Specifically, the lens orientation of the tele camera is adjusted, N frames of images to be stitched are obtained with the tele camera, and the N frames are then stitched, superimposed, or otherwise processed to obtain the target image.
Fig. 5 a-5 d are a set of schematic diagrams of acquiring N frames of images to be stitched in a second scene.
As shown in fig. 5a, assuming the electronic device determines that N is 4 according to the field angles of the wide-angle camera and the tele camera, it may move the motor 4 times along a preset track, changing the orientation of the second camera each time, to acquire 4 frames of images to be stitched: first the scenery in the area 501, then in turn the areas 502, 503 and 504. Between two consecutive motor movements, the scenes shot by the second camera overlap, so two consecutive frames to be stitched share an overlapping area. For example, the area 505 is the overlapping area when the electronic device photographs the areas 501 and 502. The overlapping area may be 15%-25% of the image, for example 20%.
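The choice of N from the two field angles and the overlap ratio can be sketched as follows; a toy estimate assuming coverage is planned per axis with a fixed fractional overlap (the names and the formula are illustrative, not taken from the embodiment):

```python
import math

def frames_needed(wide_fov_deg, tele_fov_deg, overlap=0.2):
    """Frames needed along one axis so that stitched tele frames cover
    the wide camera's field angle, with neighbouring frames sharing
    `overlap` of their extent (e.g. 0.15-0.25 as in the text)."""
    new_per_frame = tele_fov_deg * (1.0 - overlap)  # fresh coverage per extra frame
    extra = max(0.0, wide_fov_deg - tele_fov_deg)
    return 1 + math.ceil(extra / new_per_frame)

# For a two-dimensional sweep, N is the product of the per-axis counts.
```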
The following takes the acquisition of the first and second frames of images to be stitched as an example:
as shown in fig. 5b, assume the initial orientation of the second camera lets it capture the scene in the area 506. The motor moves for the first time to change the orientation of the second camera so that it can shoot the scenery in the area 501, yielding the first frame of image to be stitched. Then the second movement of the motor changes the orientation of the second camera again.
As shown in fig. 5c, the second camera may capture the scenery in the capturing area 502, to obtain a second frame of image to be stitched. At this time, the overlapping portions in the region 501 and the region 502 may be as shown in the region 505 shown in fig. 5 c.
The process of the electronic device obtaining the third frame to-be-stitched image and the fourth frame to-be-stitched image may refer to the foregoing descriptions of fig. 5a to 5c, which are not repeated here.
Fig. 5d is a schematic diagram of the electronic device acquiring the 4 frames of images to be processed, for example images 1-4. The region 507 is the overlapping region between image 1 and image 2; the region 508 is the overlapping region between image 2 and image 3; the region 509 is the overlapping region between image 3 and image 4.
And then, on the basis of the shot images of the second camera, image splicing is carried out for image preview display.
In one embodiment, the electronic device may stitch the 4 frames of images to be processed into one frame using their overlapping areas as the target image, or further process that frame to obtain the target image. The resulting target image may be seen in the image D shown in the user interface 21 of fig. 2c described above.
The image stitching process involves the N images to be stitched and the target image obtained by stitching them.
Therefore, during the stitching preview, the stitched target image can be displayed directly in the stitched preview window as the stitched preview image, realizing a quick preview of the multi-scene, high-resolution image formed after stitching; or the target image can first be further cropped or otherwise processed and then displayed as the stitched preview image, meeting the display requirements of the preview while still realizing the quick preview.
Referring to fig. 3b, the stitched preview image, i.e., the image after stitching and additional processing such as cropping, is displayed in the preview pane formed in the stitched preview window 311.
Or, in another embodiment, during the stitching of the N frames to obtain the target image, each newly stitched pair produces a corresponding stitched image, and that stitched image is displayed in the stitched preview window, realizing a dynamic display of the stitching process of the target image and a visual, dynamic presentation of the image stitching.
In this implementation, the N images to be stitched shot by the tele camera are stitched one by one. The one-by-one stitching may occur during the shooting of the N images, i.e., shooting and stitching frame by frame, or it may be performed sequentially after all N images have been shot.
Fig. 5e is a schematic diagram of stitching the N frames of images to be stitched frame by frame and previewing them in the stitched preview window.
As shown in connection with fig. 5d, the electronic device acquires 4 frames of images to be processed, which may be images 1-4.
In fig. 5e, the window 60 is a schematic view of the stitched preview window. After the first frame of image to be stitched (image 1) is acquired, image 1, or a processed version of it, is displayed in the window 60 as preview image 1. The next frame is then acquired: after image 2 is acquired, image 2 and image 1 are stitched to obtain preview image 2, which is displayed in the window 60, replacing preview image 1. Image 3 is then acquired and stitched with preview image 2 to obtain preview image 3, which is displayed in the window 60, replacing preview image 2. Finally, the fourth frame is acquired; image 4 and preview image 3 are stitched to obtain preview image 4, which is displayed in the window 60, replacing preview image 3. Since image 4 is the last frame to be stitched, preview image 4 is the preview corresponding to the target image formed after all the images to be stitched have been stitched.
In this process, each time an image to be stitched is acquired, the generated stitched image is output and displayed in the stitched preview window in real time, effectively showing the dynamic stitching of the N frames as they are acquired and realizing a visual, dynamic display of the stitching process that yields the target image.
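The frame-by-frame capture, stitch and preview loop of fig. 5e can be sketched as follows. All of `camera.capture()`, `window.show()` and `stitch()` are hypothetical stand-ins for the device-side capture, display and stitching routines:

```python
def capture_and_preview(camera, window, n, stitch):
    """Frame-by-frame stitching with live preview, as in fig. 5e."""
    preview = camera.capture()            # image 1 becomes preview image 1
    window.show(preview)
    for _ in range(n - 1):
        frame = camera.capture()          # next image to be stitched
        preview = stitch(preview, frame)  # fold it into the running result
        window.show(preview)              # replaces the previous preview
    return preview                        # equals the final target image
```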
Through the above embodiments, when realizing the preview display of image stitching, different correspondences can be established between the stitched preview images and the target image generated by stitching, and the preview display based on the images shot by the second camera is realized in different embodiments.
The process of the electronic device for acquiring the target image and the image preview by using the shooting method in the embodiment of the application is described in detail below in conjunction with the above description of the second scene.
Fig. 6 is a schematic flowchart of a photographing method involved in the second scene.
In the second scenario, the process of the electronic device acquiring the target image and the image preview by using the shooting method in the embodiment of the present application may refer to the following descriptions of steps 201 to 209.
In step 201, the electronic device enters a camera preview interface.
The camera preview interface includes a stitched preview window.
Step 202, acquiring a camera preview image by using a first camera.
The camera preview interface includes an image preview area.
In step 203, the preview image is displayed in the image preview area.
In step 204, a first image is acquired by using a first camera in response to a photographing trigger operation.
The first image corresponds to the preview image displayed in the image preview area at the trigger time of the photographing trigger operation.
Step 205, in response to the photographing triggering operation, changing the direction of the second camera, and acquiring N frames of images to be spliced by using the second camera.
Wherein N is an integer greater than 2.
The implementation process of the steps 201 to 205 may refer to the implementation process described in the steps 101 to 105 in the first scenario, and refer to the implementation process related to fig. 5a to 5e for acquiring the N frames of images to be stitched, which are not described herein again.
Different from the first scenario, where the stitched image is previewed equivalently based on the image shot by the wide-angle camera, in the second scenario, after the N frames of images to be stitched are obtained, the stitching preview is displayed based on the N images shot by the tele camera and the finally stitched target image.
In one embodiment, the image stitching preview may be performed for the image stitching process based on the N frames of images to be stitched acquired by the second camera. Step 206 is specifically performed.
Step 206, stitching each acquired frame of image to be stitched to obtain a stitched image, and displaying the current stitched image as the stitched preview image in the stitched preview window.
Here, the N frames of images to be stitched are stitched frame by frame; each time one frame is stitched in, one stitched image is generated, and each generated stitched image is displayed in the stitched preview window. Thus, a dynamic stitching preview accompanies the stitching process. A preview presentation of a particular image stitching may be seen in fig. 5e.
In another embodiment, the stitching preview can be performed on the processing result of stitching the N frames acquired by the second camera, i.e., on the finally stitched target image. Specifically, step 207 and step 208 are performed, or step 207 and step 209 are performed.
Step 207, obtaining a target image after completing image stitching based on the N frames of images to be stitched.
The target image is an image obtained after the N frames of images to be spliced are spliced.
The target image can be generated from the obtained N frames of images to be stitched by performing feature matching and stitching on all of them together. Alternatively, the stitching may be performed frame by frame as referred to in step 206.
Fig. 7 is a schematic flow chart of frame-by-frame stitching of N frames of images to be stitched.
In the frame-by-frame stitching approach, the image stitching process may refer to the following description of steps 301-304 in fig. 7.
Step 301, acquiring a first frame of image to be spliced by using a second camera.
The electronic equipment moves the motor to change the direction of the second camera, and the first frame of images to be spliced are acquired through the second camera.
The electronic device moves the motor from its initial position to the first position; moving the motor changes the orientation of the second camera and thus its shooting range, so the scenery shot by the second camera changes. The scene corresponding to the first frame of image to be stitched may be as shown in fig. 5b. The initial position of the motor may be at the center: for example, the area 506 in fig. 5b is the shooting range of the second camera when the motor is at the center, and the area 501 is its shooting range after the motor moves for the first time.
After the first frame of images to be stitched is acquired with the second camera, step 302 is performed.
In step 302, the electronic device moves the motor to change the orientation of the second camera and acquires the next frame of image to be stitched; each time one frame is acquired with the second camera, it is stitched in to obtain a stitched image.
In this process, after the first frame of image to be stitched is acquired, the electronic device changes the orientation of the second camera by moving the motor and continues to acquire the next frame. For each acquired frame, one stitching pass is performed to obtain a stitched image, which is then stitched with the next acquired frame. This loop continues, joining two frames at a time, until the N frames have all been stitched and the target image is finally obtained.
An example of this process may be referred to the description content of acquiring N frames of images to be stitched associated with fig. 5a to 5d in the aforementioned second scenario, which is not described herein.
In addition, in some embodiments, after the electronic device shoots the first and second frames with the second camera, one serves as the first image to be stitched and the other as the second image to be stitched; stitching them yields a first stitched image. That stitched image then serves as the second image to be stitched in the next stitching cycle, the next acquired frame serves as the first image to be stitched, and one stitching pass of the two is performed, yielding an updated stitched image. This process loops until the N frames are stitched and the target image is obtained.
To ensure the stitching quality of each frame to be stitched, the center position coordinates of the two images being stitched can be introduced in each stitching cycle.
A one-time loop process for performing image stitching is described below with reference to steps 401-406, as shown in fig. 8.
Step 401, after the first frame of image to be stitched has been obtained with the second camera, each time a first image to be stitched is obtained with the second camera, acquiring the center position coordinates of the first image to be stitched and of the second image to be stitched.
Here, in the process of acquiring N frames of images to be stitched by using the second camera, the electronic device may first acquire a first frame of images to be stitched. After that, a cyclic image stitching process is performed.
As a one-time loop processing procedure, image stitching may be performed between the first frame image to be stitched and the second frame image to be stitched. At this time, the first image to be stitched is a second frame image to be stitched obtained by using the second camera, and the second image to be stitched is the first frame image to be stitched.
As another cyclic processing procedure, image stitching may be performed between a new frame of image to be stitched and a stitched image obtained by stitching. At this time, the first image to be stitched is the new image to be stitched acquired by the second camera, and the second image to be stitched is the stitched image obtained by stitching.
Since the N images to be stitched are all shot by the same camera (the second camera) and the stitching is performed among them, the center position coordinates of the different images to be stitched can be determined in the camera coordinate system of the second camera.
Referring to fig. 5a to 5d, in fig. 5a, when the second camera is rotated to change the lens orientation, the center points in the areas 501 to 504 respectively correspond to the photographing center points of the second camera.
Then, in the images 1 to 4 obtained in fig. 5d, the center positions of the respective images correspond to the aforementioned photographing center points when the second camera photographs the different areas.
During stitching, the center position coordinates of the two images are introduced at each stitching pass, image registration is performed according to these coordinates, and the relative spatial relationship between the two images to be stitched is thereby indicated, ensuring the accuracy of the stitching.
Step 402, determining the image stitching region between the first image to be stitched and the second image to be stitched based on the relative center position coordinates.
Taking the areas 501 and 502 as an example: the overlapping area 505 exists between them, and the corresponding overlapping area 507 exists between images 1 and 2. When stitching images 1 and 2, the two images are first registered based on their respective center position coordinates; combined with the image sizes, the overlapping region between them is determined, and that overlapping region is taken as the stitching region whose content needs to be integrated.
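Determining the stitching region from the center coordinates and the image sizes can be sketched as follows; a minimal sketch assuming both frames have the same size and are axis-aligned in the second camera's coordinate system:

```python
def overlap_region(c1, c2, w, h):
    """Overlap rectangle of two w-by-h frames whose center positions are
    c1 and c2 (pixel coordinates in a shared camera coordinate system).
    Returns (x0, y0, x1, y1), or None if the frames do not overlap."""
    x0 = max(c1[0], c2[0]) - w / 2
    y0 = max(c1[1], c2[1]) - h / 2
    x1 = min(c1[0], c2[0]) + w / 2
    y1 = min(c1[1], c2[1]) + h / 2
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None
```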
In step 403, image feature points are detected from the image stitching region.
The feature points are points where the gray values change sharply in the first and second images to be stitched, or pixels with large curvature on the image edges; they have distinctive characteristics and reflect the essential features of the images. The electronic device may calculate the feature points in the first and second images to be stitched using the scale-invariant feature transform (SIFT) algorithm or the speeded-up robust features (SURF) algorithm.
Step 404, determining M pairs of feature points which are most similar from the image feature points.
M is a positive integer greater than or equal to 3; for example, M may be 3. Matching the feature points amounts to matching the images. The electronic device can calculate the Euclidean distance between feature points of the first and second images to be stitched with a K-nearest-neighbor algorithm; the smaller the Euclidean distance between a feature point of the second image and a feature point of the first image, the more similar the two feature points are.
In some embodiments, the electronic device may determine the 3 pairs of feature points with the smallest Euclidean distances. In other embodiments, the electronic device may set a preset threshold, stop once 3 pairs of feature points with Euclidean distances smaller than the threshold have been found, and regard those 3 pairs as the most similar 3 pairs.
It should be understood that, besides calculating the Euclidean distance between feature points with the K-nearest-neighbor algorithm, other ways of determining the most similar M pairs of feature points may exist, which is not limited in the embodiments of the present application.
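Steps 403 and 404 can be sketched with OpenCV as follows. Brute-force L2 (Euclidean) matching stands in here for the K-nearest-neighbor computation described above; this is an illustrative sketch, not the embodiment's implementation:

```python
import cv2

def most_similar_pairs(img1, img2, m=3):
    """Detect SIFT feature points in two grayscale frames and return the
    m point pairs with the smallest Euclidean descriptor distance
    (steps 403-404). Returns a list of ((x1, y1), (x2, y2)) tuples."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)   # NORM_L2 = Euclidean distance
    matches = sorted(matcher.match(des1, des2), key=lambda d: d.distance)
    return [(kp1[d.queryIdx].pt, kp2[d.trainIdx].pt) for d in matches[:m]]
```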
Step 405, calculating an affine transformation matrix of the first image to be spliced according to the M pairs of feature points.
The affine transformation matrix is used to apply one linear transformation and one translation to the first image to be stitched, transforming it into the vector space where the second image to be stitched is located.
Step 406, performing affine transformation on the first image to be stitched according to the affine transformation matrix to obtain a third image to be stitched, and fusing the third image to be stitched with the second image to be stitched to obtain a stitched image.
The stitched image is then formed as the second image to be stitched in the next cycle, or as the final target image.
Specifically, the electronic device performs affine transformation on the first image to be stitched according to the affine transformation matrix to obtain the third image to be stitched; the third image to be stitched lies in the same vector space as the second image to be stitched. The electronic device then performs Laplacian fusion on the third image to be stitched and the second image to be stitched to obtain the stitched image, which is used as the second image to be stitched in the next cycle.
It should be understood that the electronic device may fuse the second image to be stitched and the third image to be stitched in ways other than the Laplacian fusion algorithm. The embodiments of the present application are not limited in this regard.
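Steps 405 and 406 can be sketched as follows. A simple per-pixel maximum stands in for the Laplacian pyramid fusion, and the output canvas is limited to the second image's size; both are simplifications of this sketch, not the embodiment:

```python
import cv2
import numpy as np

def stitch_pair(img_first, img_second, pairs):
    """Estimate the affine matrix from M >= 3 matched point pairs
    (step 405), warp the first image into the second image's vector
    space to get the third image (step 406), then fuse the two."""
    src = np.float32([p for p, _ in pairs])   # points in the first image
    dst = np.float32([q for _, q in pairs])   # matching points in the second
    A, _ = cv2.estimateAffine2D(src, dst)     # 2x3: linear part + translation
    h, w = img_second.shape[:2]
    third = cv2.warpAffine(img_first, A, (w, h))  # the third image to be stitched
    return np.maximum(img_second, third)      # placeholder for Laplacian fusion
```

For exactly 3 pairs, cv2.getAffineTransform gives the exact solution; with more pairs, cv2.estimateAffine2D fits the matrix robustly.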
It should be understood that in other embodiments the N frames may be stitched in ways other than the above. For example, the N frames may be divided into two groups, each group stitched in the above manner to obtain two frames, and the two frames then stitched to obtain the target image. The way the electronic device stitches to obtain the target image is not limited.
After obtaining a frame of image to be stitched by using the second camera, the current frame of image to be stitched is stitched to obtain a stitched image, and then step 303 is continuously executed.
Step 303, determining whether the acquired image to be stitched has reached N frames.
I.e. judging whether the second camera acquires the image to be spliced of the nth frame, if so, executing step 304 to splice to obtain the final target image.
Otherwise, it is necessary to enter the loop of image stitching, and the process returns to step 302. Step 302 may loop through N-1 times until the nth frame of images to be stitched is obtained.
Step 304, stitching the N frames of images to be stitched to obtain the target image.
After the electronic device obtains the first frame of image to be stitched with the second camera, each subsequently obtained frame is stitched in to produce a stitched image, until the Nth frame is obtained and stitched in to yield the target image, completing the frame-by-frame stitching of all N frames of images to be stitched.
The above process realizes stitching the target image from the N frames of images to be stitched, completing step 207.
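The control flow of steps 301-304 can be sketched as a driver loop; `motor.move_next()`, `camera.capture()` and `stitch()` are hypothetical stand-ins for the device-side routines:

```python
def capture_target_image(motor, camera, n, stitch):
    """Steps 301-304: move the motor, capture a frame, fold it into the
    running stitched image, and repeat until N frames are stitched."""
    motor.move_next()               # initial position -> first photo point
    stitched = camera.capture()     # step 301: first frame to be stitched
    for _ in range(n - 1):          # step 302 loops N-1 times (step 303)
        motor.move_next()           # change the second camera's orientation
        stitched = stitch(camera.capture(), stitched)
    return stitched                 # step 304: the target image
```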
After step 207 is performed, in one embodiment, step 208 may be performed.
Step 208, displaying the target image as the stitched preview image in the stitched preview window.
Alternatively, after step 207 is performed, in another embodiment, step 209 may be performed.
Step 209, cropping the target image to obtain a third image, and displaying the third image as the stitched preview image in the stitched preview window.
In these two embodiments, the target image itself, or the target image after cropping and processing, is used as the stitched preview image and displayed in the stitched preview window; the quick preview of the multi-scene, high-resolution image formed after stitching is realized with different image processing approaches.
Third scenario: previewing the image stitching based on the images shot by the second camera, in combination with the mirror track.
The second camera arranged in the electronic device is rotatable, so for the process of acquiring the N images to be stitched with the second camera, a mirror track, i.e., the movement track of the lens during image acquisition, can be set.
The electronic equipment controls the second camera to rotate in a certain path or paths according to the set mirror track, and adjusts the orientation of the lens.
The third scenario builds on the second implementation of the second scenario, i.e., the preview of the stitching process: the mirror track is integrated into the image preview, combining the camera movement of dynamic image acquisition with the dynamic stitching preview. This improves the realism of the acquisition-and-stitching preview while realizing the dynamic preview of image stitching.
The electronic device first needs to acquire the N frames of images to be stitched with the second camera in combination with the mirror track.
Reference is made to the schematic illustration of the user interface 30 shown in fig. 3a described above. After the electronic device enters the "tele stitching mode" of the camera function, the first camera may continue to be used to acquire the camera preview image, displayed in the preview box 301 of the user interface. In response to the user's photographing trigger operation on the photographing control 302, and referring to the schematic diagram of the user interface 31 shown in fig. 3b, the electronic device calls out the stitched preview window 311 and synchronously executes the photographing operation: the driving motor changes the orientation of the second camera according to the set mirror track, and N frames of images to be stitched are obtained at the N photographing points of the second camera in the mirror track.
The mirror track includes a plurality of photographing points; their number equals the number of images to be stitched that need to be shot. At each photographing point, one frame of image to be stitched is acquired with the second camera, so if N frames need to be shot, N photographing points are set in the mirror track.
With the position of the electronic device unchanged, the driving motor drives the second camera to move among the N photographing points along the preset track. Each movement changes the orientation of the second camera, moving the lens and thus changing the scenery the second camera can shoot. Once the reoriented second camera is located at a set photographing point, it can shoot the scene it now faces at that point.
Therefore, each time the second camera moves to a photographing point along the mirror track, one frame of image to be processed is shot, and finally N frames of images to be processed are obtained at the N photographing points of the mirror track.
Referring to fig. 2b, a mirror track may be shown by the arrow, and this track includes 4 photographing points. The second camera can first shoot the scene in the area 201 according to the 4 photographing points of the mirror track, then shoot the areas 202, 203 and 204 in turn to obtain 4 frames of images to be stitched, which the electronic device can then stitch.
In the "long Jiao Pinjie mode", the mirror trajectory may be designed and generated in advance in the mode or generated based on the user's current shooting needs setting.
Figs. 8a-8f are a set of schematic diagrams illustrating, in the third scenario, the determination of the mirror track and the stitching preview combined with the mirror track.
The first camera in the electronic device has a larger field of view than the second camera. In order to ensure that the image captured by the second camera can cover the view angle range of the first camera as much as possible, the view angle shooting coverage of the first camera with a larger view angle can be utilized to formulate the mirror track of the second camera.
Here, the mirror trajectory of the second camera may be determined based on the field angle of the first camera.
As shown in figs. 8a-8c, the electronic device acquires the camera preview image with the first camera, whose field angle is a fixed built-in parameter; area division can therefore be performed based on the coverage of the first camera's field angle, and the mirror track of the second camera planned on the divided areas.
The number of divided areas equals the number of images to be stitched shot by the second camera. Assuming the electronic device determines that N is 9 according to the field angles of the wide-angle and tele cameras, 9 areas are divided from the coverage of the first camera's field angle. As shown in figs. 8a-8c, that coverage is divided into the areas 801-809, each covering different scene content. Based on the 9 divided areas, the corresponding points 1-9 under the first camera's field angle can be planned, 9 photographing points in total. The photographing points may be preview-displayed in the stitched preview region 810. In figs. 8a-8c, the photographing points 1-9 are shown in the stitched preview region 810, and different mirror movement routes are planned between them.
And a moving track is planned among the 9 photographing points, so that the electronic equipment controls the second camera to move to the corresponding photographing point in the moving track, and the scenery photographing of the area 801-809 is realized on the 9 photographing points, so that the coverage range of the view angle of the first camera is covered as much as possible.
For the movement order between photographing points, as shown in fig. 8a, the photographing point 1 corresponding to the upper-left area 801 may be selected as the photographing start point A, and the photographing point 9 corresponding to the lower-right area 809 as the photographing end point B, giving the movement track from point 1 to point 9: point 1 - point 2 - point 3 - point 6 - point 5 - point 4 - point 7 - point 8 - point 9, which forms a mirror track.
Alternatively, a moving track from the photographing point 3 corresponding to the upper right region 803 to the photographing point 7 corresponding to the lower left region 807 of the photographing region may be selected, so as to form a mirror track, which is not described herein.
In the electronic equipment, the rotatable second camera is positioned at the center position of the movable range before shooting and after shooting. In the process of rotating the lens to shoot, the motor drives the camera to start from the central position and finally returns to the central position.
In order to ensure the rationality of the movement of the lens of the second camera, the 9 areas covered by the view angle of the first camera can be referred to, and the photographing start point position and the photographing end point position which are located inside and outside are selected from the photographing point positions 1 to 9.
The photographing starting point position inside is the center position of the movable range of the second camera, or the point position which is closer to the center position of the movable range of the second camera. The outer photographing ending point is a point far from the center of the movable range of the second camera relative to the photographing starting point.
This point selection ensures the mirror track starts as close as possible to the center of the lens's movable range, saving the time for the movable second camera to travel from the center to the photographing start point, enabling fast image output, reducing unnecessary movement distance and improving the stability of the shot picture.
In addition, in order to further ensure the rationality of the movement of the second camera lens, when the outer photographing end point is selected, the distance between the photographing end point and the photographing start point can be considered, and the distance between the selected photographing end point and the photographing start point is required to be smaller than a threshold value.
This ensures that the photographing end point is as inner a point as possible, reducing the time for the camera to travel from the end point back to the center of the lens's movable range, enabling fast image output, further reducing unnecessary movement distance and improving the stability of the shot picture.
In fig. 8b, the inner photographing point 5 is selected as the photographing start point A and the outer photographing point 1 as the photographing end point B, and the movement order among the remaining 7 intermediate points is planned, giving the movement track from point 5 to point 1: point 5 - point 2 - point 3 - point 6 - point 9 - point 8 - point 7 - point 4 - point 1, which forms a mirror track.
In fig. 8c, the inner photographing point 5 is selected as the photographing start point A and the outer photographing point 2 as the photographing end point B, and the movement order among the remaining 7 intermediate points is planned, giving the movement track from point 5 to point 2: point 5 - point 3 - point 6 - point 9 - point 8 - point 7 - point 4 - point 1 - point 2, which forms a mirror track.
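The example tracks and the benefit of starting at the inner point can be illustrated with a toy travel metric over the 3x3 grid of photographing points (point 1 top-left through point 9 bottom-right); the Manhattan-distance metric and the coordinates are assumptions of this sketch, not measurements from the embodiment:

```python
# Grid coordinates of photographing points 1..9 (point 5 is the center).
COORDS = {p: ((p - 1) % 3, (p - 1) // 3) for p in range(1, 10)}

TRACK_FIG_8A = [1, 2, 3, 6, 5, 4, 7, 8, 9]   # start A = 1, end B = 9
TRACK_FIG_8B = [5, 2, 3, 6, 9, 8, 7, 4, 1]   # start A = 5 (inner), end B = 1

def travel(track, home=5):
    """Total motor travel: home -> track -> home, Manhattan distance."""
    path = [home] + track + [home]
    return sum(abs(COORDS[a][0] - COORDS[b][0]) +
               abs(COORDS[a][1] - COORDS[b][1])
               for a, b in zip(path, path[1:]))

# Under this toy metric, travel(TRACK_FIG_8B) = 10 < travel(TRACK_FIG_8A) = 12:
# starting at the center point removes the initial approach movement.
```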
After the electronic device enters the "tele stitching mode" of the camera function and responds to the user's photographing trigger operation, the driving motor changes the orientation of the second camera according to the set mirror track, and N frames of images to be stitched are obtained at the N photographing points of the second camera in the mirror track. During this acquisition, referring to the schematic diagram of the user interface 31 shown in fig. 3b, the electronic device calls out the stitched preview window 311 and performs the preview display of the image stitching.
In this embodiment, a display grid may be set in a stitched preview window of a camera preview interface. The display grid may be in the form of a checkerboard or in the form of a diamond-shaped grid.
The number of grid positions in the display grid must match the number of images to be stitched to be acquired by the second camera, so that the stitched image formed from the images to be stitched is previewed in the corresponding grid positions. Each grid position corresponds to one photographing point in the second camera's mirror track, so that when the stitched image is previewed, the stitched image, the photographing points and the mirror track are displayed in association.
On the basis of the mirror trajectory shown in fig. 8b, a preview display of the stitched image in the display pane is shown in fig. 8 d-8 f.
In figs. 8d-8f, the display grid is shown in the stitched preview window 810 and contains 9 grid positions, each corresponding to a photographing point in the second camera's mirror track. The grid position 5 corresponds to the photographing start point A in the mirror track, the grid position 2 to the second photographing point, the grid position 3 to the third photographing point, and the grid position 1 to the photographing end point B; the correspondences of the other grid positions to photographing points can be seen in the figures and are not listed one by one.
In the process that the electronic device obtains N frames of images to be spliced by using N photographing points in the lens moving track through the second camera, the images need to be previewed on the corresponding grid positions in the splicing preview window 810.
In fig. 8d, the electronic device controls the second camera to shoot the area 805 at the photographing start point of the mirror track, obtaining the first frame of image to be stitched. The photographing start point A of the area 805 corresponds to the grid position 5 in the stitched preview window 810, so the first frame of image to be stitched is displayed in the grid position 5 of the stitched preview window 810.
Subsequently, in fig. 8e, the electronic device controls the second camera to move along the mirror track from the photographing start point to the second photographing point and shoot the area 802, obtaining the second frame of image to be stitched; the first and second frames are stitched to obtain the first stitched image. The photographing point of the area 802 corresponds to the grid position 2 in the stitched preview window 810, so the first stitched image corresponds to two grid positions, positions 5 and 2 (to show the preview effect, the grid numbers are not shown in fig. 8e; they can be seen in fig. 8d). The first frame previously displayed in the grid position 5 is therefore overlaid with the current first stitched image, which is displayed over the positions 5 and 2 in the stitched preview window 810.
Further, in fig. 8f, the electronic device controls the second camera to continue moving along the mirror track from the second photographing point to the third and shoot the area 803, obtaining the third frame of image to be stitched; it is stitched with the first stitched image to obtain the second stitched image. The photographing point of the area 803 corresponds to the grid position 3 in the stitched preview window 810, so the second stitched image corresponds to three grid positions, positions 5, 2 and 3 (to show the preview effect, the grid numbers are not shown in fig. 8f; they can be seen in fig. 8d). The first stitched image previously displayed in the grid positions 5 and 2 is therefore overlaid with the current second stitched image, which is displayed over the positions 5, 2 and 3 in the stitched preview window 810.
The electronic device then controls the second camera to continue moving along the mirror track, adjusting the photographing points and shooting the images to be stitched, and displays each resulting stitched image over the corresponding grid positions of the display grid in the stitched preview window, following the process described above, until the stitching and preview of the N frames of images to be stitched are complete.
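The grid-position preview update of figs. 8d-8f can be sketched as follows; `window.overlay()` and the `point_to_cell` mapping are hypothetical stand-ins for the display routine and for the correspondence between photographing points and grid positions:

```python
def update_grid_preview(window, stitched_img, captured_points, point_to_cell):
    """Display the current stitched image over every grid position whose
    photographing point has been captured so far (figs. 8d-8f)."""
    cells = [point_to_cell[p] for p in captured_points]
    window.overlay(stitched_img, cells)   # covers the previous preview

# e.g. after the third frame in fig. 8f:
# update_grid_preview(window, second_stitched_image, [5, 2, 3], point_to_cell)
```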
Fig. 9 is a schematic flowchart of a photographing method involved in a third scenario.
In the third scenario, the electronic device may refer to the following description of steps 501 to 506 in the process of acquiring the target image and the image preview by using the photographing method in the embodiment of the present application.
Step 501, selecting an inner photographing start point and an outer photographing end point from the view angle of the first camera.
The photographing starting point position inside is the center position of the movable range of the second camera, or the point position which is closer to the center position of the movable range of the second camera. The outer photographing ending point is a point far from the center of the movable range of the second camera relative to the photographing starting point.
This point selection ensures the mirror track starts as close as possible to the center of the lens's movable range, saving the time for the movable second camera to travel from the center to the photographing start point, enabling fast image output, reducing unnecessary movement distance and improving the stability of the shot picture.
In order to further ensure the rationality of the movement of the second camera lens, when the photographing end point is selected to be located outside, the distance between the photographing end point and the photographing start point can be considered.
A first point located inside the field angle of the first camera is selected as the photographing start point, and a second point located outside relative to the first point, whose distance to the first point is smaller than a threshold, is selected as the photographing end point, so that the distance between the selected end point and start point is smaller than the threshold.
This ensures that the photographing end point is as inner a point as possible, reducing the time for the camera to travel from the end point back to the center of the lens's movable range, enabling fast image output, further reducing unnecessary movement distance and improving the stability of the shot picture.
Step 502, determining a photographing moving sequence among N-2 middle photographing points in the field angle based on the photographing start point and the photographing end point.
Step 503, determining a target movement track from the photographing start point to the photographing end point through the N-2 middle photographing points according to the photographing movement sequence as a set mirror track.
In this way, the mirror track of the second camera is formulated using the shooting coverage of the first camera's larger field angle, ensuring both the image acquisition of the second camera and the effect of the tele stitched image.
In step 504, in response to the photographing triggering operation, the driving motor changes the direction of the second camera according to the set moving mirror track, and N frames of images to be spliced are acquired by using N photographing points of the second camera in the moving mirror track.
The mirror track includes a plurality of photographing points; if N frames of images to be stitched need to be shot, N photographing points are set in the mirror track.
After the determination of the mirror track is completed, the second camera needs to shoot N frames of images to be spliced according to the mirror track.
The electronic device drives the motor to move the second camera among the N photographing points along the preset track; each movement of the second camera by the motor changes the scenery it can shoot. Therefore, each time the second camera moves to a photographing point along the mirror track, one frame of image to be processed is shot, and finally N frames are obtained at the N photographing points of the mirror track.
Step 505, determining, from the display grid set in the stitched preview window, at least one target grid position corresponding to the stitched image produced each time a frame is stitched in during the stitching of the N frames of images to be stitched into the target image.
The stitched preview window is provided with a display grid containing N grid positions. Each grid position corresponds to one photographing point in the second camera's mirror track, and at each photographing point one frame of image to be stitched is acquired with the second camera.
Step 506, sequentially overlaying and displaying the stitched image on the at least one target grid position in the stitched preview window.
During the stitching of the N frames of images to be stitched into the target image, each newly stitched frame produces a stitched image, which is displayed as the stitched preview image in the stitched preview window, realizing a dynamic preview of the image stitching process.
In this process, the mirror track is integrated into the image preview, and the camera movement of dynamic image acquisition is combined with the dynamic stitching preview, improving the realism of the acquisition-and-stitching preview while realizing the dynamic preview of image stitching.
In the embodiments of the present application, the image shooting and image preview processes under the different scenarios have common points as well as differences; the related descriptions of these parts can be combined with one another, and the differences in scenario description do not create barriers to combining the technical solutions.
Fig. 10 is a schematic diagram of an image capturing apparatus according to an embodiment of the present application. As shown in fig. 10, the apparatus 2000 includes a mode triggering module 2001, a first shooting module 2002, a second shooting module 2003, an image processing module 2004, and an image preview module 2005. The apparatus 2000 may be integrated in an electronic device such as a mobile phone, a tablet computer, and an intelligent wearable device.
The apparatus 2000 can be used to perform any of the above image capturing methods.
In one implementation, the apparatus 2000 may further include a storage unit for storing data of images, identified image features, and the like. The memory unit may be integrated in any one of the above units, or may be a unit independent of all the above units.
Fig. 11 is a schematic hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 11, the electronic device 900 may include a processor 910, an external memory interface 920, an internal memory 921, a universal serial bus (universal serial bus, USB) interface 930, a charge management module 940, a power management module 941, a battery 942, an antenna 1, an antenna 2, a mobile communication module 950, a wireless communication module 960, an audio module 970, a speaker 970A, a receiver 970B, a microphone 970C, an earphone interface 970D, a sensor module 980, keys 990, a motor 991, an indicator 992, a camera 993, a display screen 994, and a subscriber identity module (subscriber identification module, SIM) card interface 995, etc. The sensor module 980 may include, among other things, a pressure sensor 980A, a gyroscope sensor 980B, a barometric sensor 980C, a magnetic sensor 980D, an acceleration sensor 980E, a distance sensor 980F, a proximity sensor 980G, a fingerprint sensor 980H, a temperature sensor 980J, a touch sensor 980K, an ambient sensor 980L, a bone conduction sensor 980M, and the like.
It should be understood that the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 900. In other embodiments of the present application, electronic device 900 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Illustratively, the processor 910 shown in FIG. 11 may include one or more processing units, such as: the processor 910 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 900. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 910 for storing instructions and data. In some embodiments, the memory in the processor 910 is a cache. The memory may hold instructions or data that the processor 910 has just used or uses cyclically. If the processor 910 needs to use the instruction or data again, it can be called directly from this memory, which avoids repeated accesses and reduces the waiting time of the processor 910, thereby improving system efficiency.
In some embodiments, processor 910 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
In some embodiments, the I2C interface is a bi-directional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). The processor 910 may include multiple sets of I2C buses. The processor 910 may be coupled to the touch sensor 980K, a charger, a flash, the camera 993, etc., through different I2C bus interfaces. For example, the processor 910 may be coupled to the touch sensor 980K through an I2C interface, so that the processor 910 communicates with the touch sensor 980K through the I2C bus interface to implement the touch function of the electronic device 900.
In some embodiments, the I2S interface may be used for audio communication. The processor 910 may include multiple sets of I2S buses. The processor 910 may be coupled to the audio module 970 by an I2S bus to enable communication between the processor 910 and the audio module 970.
In some embodiments, the audio module 970 may communicate audio signals to the wireless communication module 960 through an I2S interface to implement a function of answering a phone call through a bluetooth headset.
In some embodiments, the PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. The audio module 970 and the wireless communication module 960 may be coupled through a PCM bus interface.
In some embodiments, the audio module 970 may also communicate audio signals to the wireless communication module 960 through a PCM interface to enable answering a call through a bluetooth headset. It should be appreciated that both the I2S interface and the PCM interface may be used for audio communication.
In some embodiments, the UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. UART interfaces are typically used to connect the processor 910 with the wireless communication module 960. For example, the processor 910 communicates with a bluetooth module in the wireless communication module 960 through a UART interface to implement bluetooth functions. In some embodiments, the audio module 970 may communicate audio signals to the wireless communication module 960 through a UART interface to implement a function of playing music through a bluetooth headset.
In some embodiments, a MIPI interface may be used to connect processor 910 with peripheral devices such as display 994, camera 993, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. The processor 910 and the camera 993 communicate through the CSI interface to implement the photographing function of the electronic device 900. Processor 910 and display 994 communicate via a DSI interface to implement the display functions of electronic device 900.
In some embodiments, the GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. GPIO interfaces may be used to connect processor 910 with camera 993, display 994, wireless communication module 960, audio module 970, sensor module 980, and so forth. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
Illustratively, the USB interface 930 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 930 may be used to connect a charger to charge the electronic device 900, or to transfer data between the electronic device 900 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationships between the modules illustrated in the embodiments of the present application are merely illustrative, and do not limit the structure of the electronic device 900. In other embodiments of the present application, the electronic device 900 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 940 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 940 may receive a charging input of the wired charger through the USB interface 930. In some wireless charging embodiments, the charge management module 940 may receive wireless charging input through a wireless charging coil of the electronic device 900. The charging management module 940 may also provide power to the electronic device through the power management module 941 while charging the battery 942.
The power management module 941 is used to connect the battery 942, the charge management module 940 and the processor 910. The power management module 941 receives input from the battery 942 and/or the charge management module 940 and provides power to the processor 910, the internal memory 921, the external memory, the display 994, the camera 993, the wireless communication module 960, and the like. The power management module 941 may also be used to monitor battery capacity, battery cycle times, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 941 may also be provided in the processor 910. In other embodiments, the power management module 941 and the charge management module 940 may be disposed in the same device.
The wireless communication function of the electronic device 900 may be implemented by the antenna 1, the antenna 2, the mobile communication module 950, the wireless communication module 960, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 900 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 950 may provide a solution for wireless communication applied to the electronic device 900, such as at least one of the following: a second generation (2G) mobile communication solution, a third generation (3G) mobile communication solution, a fourth generation (4G) mobile communication solution, and a fifth generation (5G) mobile communication solution. The mobile communication module 950 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), and the like. The mobile communication module 950 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and then transmit them to the modem processor for demodulation. The mobile communication module 950 may also amplify the signal modulated by the modem processor, which is then converted into electromagnetic waves and radiated through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 950 may be disposed in the processor 910. In some embodiments, at least some of the functional modules of the mobile communication module 950 may be disposed in the same device as at least some of the modules of the processor 910.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 970A, receiver 970B, etc.), or displays images or video through display 994. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communications module 950 or other functional modules, independent of the processor 910.
The wireless communication module 960 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 900. The wireless communication module 960 may be one or more devices that integrate at least one communication processing module. The wireless communication module 960 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 910. The wireless communication module 960 may also receive a signal to be transmitted from the processor 910, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 of electronic device 900 is coupled to mobile communication module 950 and antenna 2 of electronic device 900 is coupled to wireless communication module 960 so that electronic device 900 may communicate with networks and other electronic devices via wireless communication techniques. The wireless communication technology may include at least one of the following communication technologies: global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, IR technologies. The GNSS may include at least one of the following positioning techniques: global satellite positioning system (global positioning system, GPS), global navigation satellite system (global navigation satellite system, GLONASS), beidou satellite navigation system (beidou navigation satellite system, BDS), quasi zenith satellite system (quasi-zenith satellite system, QZSS), satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 900 implements display functionality via a GPU, a display 994, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 994 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 910 may include one or more GPUs that execute program instructions to generate or change display information.
The display 994 is used to display images, videos, and the like. The display 994 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 900 may include 1 or N displays 994, N being a positive integer greater than 1.
The electronic device 900 may implement shooting functions through an ISP, a camera 993, a video codec, a GPU, a display 994, an application processor, and the like.
The ISP is used to process the data fed back by the camera 993. For example, when photographing, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and passes it to the ISP for processing, where it is converted into an image visible to the naked eye. The ISP can also optimize the image's noise, brightness, and skin tone, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 993.
The camera 993 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device 900 may include 1 or N cameras 993, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device 900 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 900 may support one or more video codecs, so that it can play or record video in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor that processes input information quickly by drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, and can also learn continuously. Applications involving intelligent cognition of the electronic device 900, such as image recognition, face recognition, speech recognition, and text understanding, may be implemented through the NPU.
The external memory interface 920 may be used to connect an external memory card, such as a Secure Digital (SD) card, to expand the storage capability of the electronic device 900. The external memory card communicates with the processor 910 through the external memory interface 920 to implement data storage functions, for example storing files such as music and videos in the external memory card.
The internal memory 921 may be used to store computer-executable program code, which includes instructions. The processor 910 executes the various functional applications and data processing of the electronic device 900 by running the instructions stored in the internal memory 921. The internal memory 921 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (for example, a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 900 (for example, audio data or a phonebook), and the like. In addition, the internal memory 921 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS).
Electronic device 900 may implement audio functionality through audio module 970, speaker 970A, receiver 970B, microphone 970C, headphone interface 970D, and application processors, among others. Such as music playing, recording, etc.
The audio module 970 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 970 may also be used to encode and decode audio signals. In some embodiments, the audio module 970 may be disposed in the processor 910, or some of its functional modules may be disposed in the processor 910.
The speaker 970A, also referred to as a "loudspeaker," is configured to convert an audio electrical signal into a sound signal. The electronic device 900 may play music or conduct a hands-free call through the speaker 970A.
The receiver 970B, also referred to as an "earpiece," is used to convert an audio electrical signal into a sound signal. When the electronic device 900 answers a telephone call or a voice message, the voice can be heard by placing the receiver 970B close to the ear.
The microphone 970C, also referred to as a "mic" or "mike," is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 970C to input the sound signal. The electronic device 900 may be provided with at least one microphone 970C. In other embodiments, the electronic device 900 may be provided with two microphones 970C, which can also implement noise reduction in addition to collecting sound signals. In other embodiments, the electronic device 900 may be provided with three, four, or more microphones 970C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 970D is used to connect a wired earphone. The earphone interface 970D may be the USB interface 930, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 980A is configured to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 980A may be disposed on the display 994. There are many kinds of pressure sensors 980A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates carrying conductive material; when a force acts on the pressure sensor 980A, the capacitance between the electrodes changes, and the electronic device 900 determines the pressure intensity from the change in capacitance. When a touch operation acts on the display 994, the electronic device 900 detects the intensity of the touch operation through the pressure sensor 980A, and may also calculate the touch position from its detection signal. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
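As a concrete illustration of this threshold dispatch, the following minimal sketch maps pressure intensity to the two instructions described above; the threshold value and instruction names are hypothetical, not taken from the embodiment:

```python
# Minimal sketch (hypothetical names): dispatching touch operations on the
# short message application icon by touch pressure intensity.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure scale 0..1

def handle_sms_icon_touch(pressure: float) -> str:
    """Return the instruction to execute for a touch on the SMS icon."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"    # light press: view the short message
    return "create_sms"      # firm press: create a new short message
```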
The gyroscope sensor 980B may be used to determine the motion posture of the electronic device 900. In some embodiments, the angular velocities of the electronic device 900 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 980B. The gyroscope sensor 980B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyroscope sensor 980B detects the shake angle of the electronic device 900 and calculates, from that angle, the distance the lens module needs to compensate, so that the lens counteracts the shake of the electronic device 900 through reverse motion, realizing anti-shake. The gyroscope sensor 980B can also be used in navigation and motion-sensing game scenarios.
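A minimal sketch of this compensation step follows; the small-angle model d = f·tan(θ) is an assumption used for illustration, not a formula given in the embodiment:

```python
import math

# Convert the shake angle reported by the gyroscope into the lens
# displacement needed to cancel it (assumed pinhole/small-angle model).
def lens_compensation_mm(focal_length_mm: float, shake_angle_deg: float) -> float:
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# e.g. a 6 mm lens shaken by 0.5 degrees needs ~0.052 mm of reverse motion
offset = lens_compensation_mm(6.0, 0.5)
```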
The air pressure sensor 980C is for measuring air pressure. In some embodiments, the electronic device 900 calculates altitude from barometric pressure values measured by the barometric pressure sensor 980C, aiding in positioning and navigation.
The magnetic sensor 980D includes a Hall sensor. The electronic device 900 may use the magnetic sensor 980D to detect the opening and closing of a flip holster. In some embodiments, when the electronic device 900 is a flip phone, it may detect the opening and closing of the flip according to the magnetic sensor 980D, and set features such as automatic unlocking upon flip opening according to the detected open/closed state of the holster or of the flip.
The acceleration sensor 980E can detect the magnitude of the acceleration of the electronic device 900 in various directions (typically along three axes), and can detect the magnitude and direction of gravity when the electronic device 900 is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and similar applications.
The distance sensor 980F is used to measure distance. The electronic device 900 may measure distance by infrared or laser. In some embodiments, the electronic device 900 may range using the distance sensor 980F to achieve quick focus.
The proximity light sensor 980G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device 900 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 900; when insufficient reflected light is detected, the electronic device 900 may determine that there is no object nearby. The electronic device 900 may use the proximity light sensor 980G to detect that the user is holding the electronic device 900 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 980G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 980L is for sensing ambient light level. The electronic device 900 may adaptively adjust the brightness of the display 994 based on the perceived ambient light level. The ambient light sensor 980L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 980L can also cooperate with proximity light sensor 980G to detect whether electronic device 900 is in a pocket to prevent false touches.
The fingerprint sensor 980H is used to collect fingerprints. The electronic device 900 may use the collected fingerprint features to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 980J is used to detect temperature. In some embodiments, the electronic device 900 executes a temperature processing strategy using the temperature detected by the temperature sensor 980J. For example, when the reported temperature exceeds a threshold, the electronic device 900 reduces the performance of a processor located near the temperature sensor 980J so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 900 heats the battery 942 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 900 boosts the output voltage of the battery 942 to avoid an abnormal shutdown caused by low temperature.
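The three-threshold strategy can be sketched as follows; the numeric thresholds and action names are illustrative assumptions, since the embodiment does not specify values:

```python
# Minimal sketch of the temperature processing strategy described above.
HIGH_TEMP_C = 45.0      # throttle the nearby processor above this (assumed)
LOW_TEMP_C = 0.0        # heat the battery below this (assumed)
CRITICAL_LOW_C = -10.0  # boost battery output voltage below this (assumed)

def apply_thermal_policy(temp_c: float) -> list[str]:
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_nearby_processor")  # thermal protection
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")               # avoid cold shutdown
    if temp_c < CRITICAL_LOW_C:
        actions.append("boost_battery_output")       # avoid abnormal shutdown
    return actions
```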
Touch sensor 980K, also referred to as a "touch panel". The touch sensor 980K may be disposed on the display 994, and the touch sensor 980K and the display 994 form a touch screen, which is also referred to as a "touch screen". The touch sensor 980K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 994. In other embodiments, the touch sensor 980K may be disposed on a surface of the electronic device 900 other than where the display 994 is located.
The bone conduction sensor 980M may acquire vibration signals. In some embodiments, the bone conduction sensor 980M may acquire the vibration signal of the vibrating bone block of the human voice part, and may also contact the human pulse to receive the blood pressure beat signal. In some embodiments, the bone conduction sensor 980M may also be provided in an earphone, combined into a bone conduction earphone. The audio module 970 may parse out a voice signal based on the vibration signal of the vibrating bone block of the voice part acquired by the bone conduction sensor 980M, realizing a voice function; the application processor may parse out heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 980M, realizing a heart rate detection function.
The keys 990 include a power key, volume keys, and the like. The keys 990 may be mechanical keys or touch keys. The electronic device 900 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 900.
The motor 991 can generate vibration alerts. The motor 991 may be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touch operations acting on different areas of the display 994 may also correspond to different vibration feedback effects. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 992 may be an indicator light, which may be used to indicate a state of charge, a change in charge, an indication message, a missed call, a notification, or the like.
The SIM card interface 995 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 995 or removed from it to contact or separate from the electronic device 900. The electronic device 900 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 995 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 995 simultaneously; the cards may be of the same type or of different types. The SIM card interface 995 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 900 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 900 employs an eSIM, i.e., an embedded SIM card, which can be embedded in the electronic device 900 and cannot be separated from it.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the application also provides electronic equipment, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing steps of any of the methods described above when the computer program is executed.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium and which, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/electronic apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when," "once," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, as "upon determining," "in response to determining," "upon detecting the [described condition or event]," or "in response to detecting the [described condition or event]."
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (15)
1. An image capturing method applied to an electronic device including a first camera and a second camera, wherein a field angle of the first camera is larger than a field angle of the second camera, the method comprising:
the electronic equipment enters a camera preview interface, wherein the camera preview interface comprises a spliced preview window;
responding to a photographing triggering operation, and acquiring a first image by using the first camera;
responding to the photographing triggering operation, changing the direction of the second camera, and acquiring N frames of images to be spliced by using the second camera, wherein N is an integer greater than 2;
performing image stitching on the N frames of images to be stitched to obtain a target image;
and displaying a spliced preview image in the spliced preview window, wherein the spliced preview image corresponds to the target image, the image to be spliced or the first image.
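To orient the reader, here is a minimal sketch of the overall flow claimed above; all object and method names are hypothetical, and `stitch_fn` stands in for the stitching detailed in claim 12:

```python
# Minimal orchestration sketch of claim 1 (hypothetical interfaces):
# a wide camera grabs one reference frame, a movable tele camera sweeps
# N photographing points, and the frames are stitched into the target image.
def capture_stitched(wide_cam, tele_cam, track, stitch_fn):
    first_image = wide_cam.capture()        # first camera, wider field angle
    frames = []
    for point in track:                     # N photographing points, N > 2
        tele_cam.set_orientation(point)     # change the second camera's direction
        frames.append(tele_cam.capture())   # one frame to be stitched per point
    target_image = stitch_fn(frames)        # image stitching -> target image
    return first_image, target_image        # either may feed the preview window
```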
2. The method of claim 1, wherein the camera preview interface includes an image preview area;
after the electronic equipment enters a camera preview interface, acquiring a camera preview image by using the first camera;
displaying the camera preview image in the image preview area;
The first image corresponds to the camera preview image displayed in the image preview area at the trigger time of the photographing triggering operation; and the stitched preview window is located in the image preview area.
3. The method of claim 1, wherein the changing the orientation of the second camera in response to the photographing trigger operation, and the acquiring N frames of images to be stitched with the second camera, comprises:
and responding to the photographing triggering operation, driving a motor to change the direction of the second camera according to a set mirror-moving track, and acquiring the N frames of images to be stitched with the second camera at N photographing points in the mirror-moving track.
4. A method according to claim 3, characterized in that the method further comprises:
selecting an inner photographing start point position and an outer photographing end point position from within the field angle of the first camera;
determining a photographing movement sequence among N-2 intermediate photographing point positions within the field angle based on the photographing start point position and the photographing end point position;
and determining, according to the photographing movement sequence, a target movement track that runs from the photographing start point position through the N-2 intermediate photographing point positions to the photographing end point position, and taking the target movement track reaching the photographing end point position as the set mirror-moving track.
5. The method of claim 4, wherein selecting an inner photographing start point position and an outer photographing end point position from within the field angle of the first camera comprises:
selecting, from the field angle of the first camera, a first point position located on the inner side of the field angle as the photographing start point position;
and selecting, from the field angle of the first camera, a second point position whose distance from the edge of the field angle is smaller than a threshold and which lies outward relative to the first point position, as the photographing end point position.
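By way of illustration only, the following sketch generates such a track; the outward-spiral layout and the 95% edge margin are assumptions, since the claims fix only the inner start point, the outer end point, and the movement sequence through the N-2 intermediate points:

```python
import math

# Build a mirror-moving track of n photographing points inside the wide
# camera's field of view: inner start point, N-2 intermediate points
# ordered outward along an assumed spiral, outer end point near the edge.
def build_mirror_track(n: int, fov_radius: float = 1.0):
    start = (0.0, 0.0)                  # inner photographing start point
    end = (fov_radius * 0.95, 0.0)      # outer end point, near the FOV edge
    track = [start]
    for i in range(1, n - 1):           # the N-2 intermediate points
        t = i / (n - 1)
        r = t * fov_radius * 0.95       # move gradually outward
        a = 2.0 * math.pi * t * 2       # two spiral turns (assumption)
        track.append((r * math.cos(a), r * math.sin(a)))
    track.append(end)
    return track
```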
6. The method of any of claims 1 to 5, wherein displaying the stitched preview image in the stitched preview window comprises:
cutting the first image to obtain a second image;
and displaying the second image as the spliced preview image in the spliced preview window.
7. The method of claim 6, wherein cropping the first image to obtain a second image comprises:
and cropping, from the first image, the second image consistent with the maximum field angle covered by the second camera under orientation adjustment.
8. The method of claim 7, wherein cropping the second image from the first image that is consistent with the maximum field angle covered by the second camera orientation adjustment comprises:
acquiring the zoom magnification of the equivalent focal length, corresponding to the maximum field angle covered by the second camera under orientation adjustment, relative to the first focal length of the first camera;
determining clipping region coordinates from the first image based on a width-height dimension of the first image and the zoom magnification;
and clipping the first image based on the clipping region coordinates to obtain the second image.
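As a numerical illustration of this computation, the following sketch derives the clipping region coordinates from the width-height dimensions and the zoom magnification; centering the crop is an assumption, since the claim fixes only the inputs:

```python
# Minimal sketch of claim 8's crop: the clipping region is a window of the
# first (wide) image sized by the zoom magnification between the cameras.
def crop_region(width: int, height: int, zoom: float):
    crop_w, crop_h = int(width / zoom), int(height / zoom)
    x0 = (width - crop_w) // 2          # assumed centered placement
    y0 = (height - crop_h) // 2
    return x0, y0, x0 + crop_w, y0 + crop_h  # clipping region coordinates

# e.g. a 4000x3000 wide frame at 3x equivalent zoom -> 1333x1000 center crop
x0, y0, x1, y1 = crop_region(4000, 3000, 3.0)
```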
9. The method of any of claims 1 to 5, wherein displaying the stitched preview image in the stitched preview window comprises:
displaying the target image as the spliced preview image in the spliced preview window; or,
cutting the target image to obtain a third image, and displaying the third image as the spliced preview image in the spliced preview window; or,
and in the process of performing image stitching on the N frames of images to be stitched to obtain the target image, displaying, in the stitched preview window as the stitched preview image, the stitched image obtained each time one frame of the images to be stitched is stitched.
10. The method of claim 9, wherein a display grid is arranged in the stitched preview window, the display grid includes N grid positions, each grid position corresponds to one photographing point in the mirror-moving track of the second camera, and one photographing point is used to acquire one frame of the image to be stitched with the second camera;
wherein, in the process of performing image stitching on the N frames of images to be stitched to obtain the target image, displaying, in the stitched preview window as the stitched preview image, the stitched image obtained each time one frame of the images to be stitched is stitched comprises:
determining at least one target grid position corresponding to the stitched image obtained each time one frame of the images to be stitched is stitched in the process of performing image stitching on the N frames of images to be stitched to obtain the target image;
and sequentially overlaying and displaying the stitched image on the at least one target grid position in the stitched preview window.
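As a data-structure illustration, a dict-based grid keyed by photographing-point index is one way to model this; the mapping is an assumption, since the claim only fixes that each photographing point has a grid position and that stitched results are overlaid in sequence:

```python
# Hypothetical minimal model of the claim-10 preview grid: grid position i
# corresponds to photographing point i in the mirror-moving track.
def update_preview_grid(grid: dict, target_positions, stitched_image):
    """Overlay the latest stitched image on each target grid position."""
    for pos in target_positions:       # grid positions covered so far
        grid[pos] = stitched_image     # overlay-display on the target position
    return grid

# e.g. after stitching frame 3, positions 0..2 show the current stitched image
preview = update_preview_grid({}, range(3), "stitched_image_3")
```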
11. The method according to claim 1, wherein the performing image stitching on the N frames of images to be stitched to obtain a target image includes:
after the first frame of the images to be stitched is acquired with the second camera, stitching one frame of the images to be stitched each time a further frame is acquired with the second camera, to obtain a stitched image;
and continuing until the Nth frame of the images to be stitched is acquired with the second camera and stitched, obtaining the target image.
12. The method according to claim 11, wherein each time a frame of the image to be stitched is acquired by using the second camera, stitching a frame of the image to be stitched to obtain a stitched image, including:
when a first image to be stitched is acquired with the second camera, acquiring the center position coordinates of the first image to be stitched and of a second image to be stitched, where the second image to be stitched is the first frame of the images to be stitched or a stitched image obtained by previous stitching;
determining an image stitching region between the first image to be stitched and the second image to be stitched based on the central position coordinates;
detecting image feature points in the image stitching region;
determining, from the image feature points, the M most similar pairs of feature points, where M is an integer greater than or equal to 3;
calculating an affine transformation matrix of the first image to be spliced according to the M pairs of characteristic points;
carrying out affine transformation on the first image to be spliced according to the affine transformation matrix to obtain a third image to be spliced;
and fusing the third image to be spliced with the second image to be spliced to obtain the spliced image.
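For concreteness, the following sketch implements these steps with OpenCV. ORB features, brute-force Hamming matching, and overwrite fusion are assumptions (the claim does not fix the detector, matcher, or fusion rule), and for brevity the sketch detects features over the whole images rather than restricting detection to the overlap region derived from the center coordinates:

```python
import cv2
import numpy as np

def stitch_pair(new_img, base_img, m_pairs=3):
    """Stitch one new frame onto the current stitched image (BGR inputs)."""
    g1 = cv2.cvtColor(new_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(base_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(g1, None)   # feature points, new frame
    k2, d2 = orb.detectAndCompute(g2, None)   # feature points, base image
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    best = matches[:max(m_pairs, 3)]          # M >= 3 most similar pairs
    src = np.float32([k1[m.queryIdx].pt for m in best])
    dst = np.float32([k2[m.trainIdx].pt for m in best])
    affine, _ = cv2.estimateAffine2D(src, dst)  # affine transformation matrix
    h, w = base_img.shape[:2]
    warped = cv2.warpAffine(new_img, affine, (w, h))  # "third image to be stitched"
    mask = warped.sum(axis=2) > 0             # naive overwrite fusion (assumption)
    fused = base_img.copy()
    fused[mask] = warped[mask]
    return fused
```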
13. An image capturing device comprising a first camera and a second camera, wherein the angle of view of the first camera is greater than the angle of view of the second camera, the device comprising:
the mode triggering module is used for entering a camera preview interface, wherein the camera preview interface comprises a spliced preview window;
the first shooting module is used for responding to a photographing triggering operation and acquiring a first image by using the first camera;
the second shooting module is used for responding to the shooting triggering operation, changing the direction of the second camera and acquiring N frames of images to be spliced by using the second camera, wherein N is an integer greater than 2;
the image processing module is used for carrying out image stitching on the N frames of images to be stitched to obtain a target image;
and the image preview module is used for displaying a spliced preview image in the spliced preview window, wherein the spliced preview image corresponds to the target image, the image to be spliced or the first image.
14. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 12 when executing the computer program.
15. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method of any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311120538.1A CN117714849A (en) | 2023-08-31 | 2023-08-31 | Image shooting method and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117714849A true CN117714849A (en) | 2024-03-15 |
Family
ID=90146762
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311120538.1A Pending CN117714849A (en) | 2023-08-31 | 2023-08-31 | Image shooting method and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117714849A (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140184850A1 (en) * | 2012-12-31 | 2014-07-03 | Texas Instruments Incorporated | System and method for generating 360 degree video recording using mvc |
CN111010510A (en) * | 2019-12-10 | 2020-04-14 | 维沃移动通信有限公司 | Shooting control method and device and electronic equipment |
CN113542574A (en) * | 2020-04-15 | 2021-10-22 | 华为技术有限公司 | Shooting preview method under zooming, terminal, storage medium and electronic equipment |
WO2021238317A1 (en) * | 2020-05-29 | 2021-12-02 | 华为技术有限公司 | Panoramic image capture method and device |
CN114071010A (en) * | 2020-07-30 | 2022-02-18 | 华为技术有限公司 | Shooting method and equipment |
CN114071009A (en) * | 2020-07-31 | 2022-02-18 | 华为技术有限公司 | Shooting method and equipment |
CN113556461A (en) * | 2020-09-29 | 2021-10-26 | 华为技术有限公司 | Image processing method and related device |
WO2022068537A1 (en) * | 2020-09-29 | 2022-04-07 | 华为技术有限公司 | Image processing method and related apparatus |
CN115379112A (en) * | 2020-09-29 | 2022-11-22 | 华为技术有限公司 | Image processing method and related device |
CN115914860A (en) * | 2021-08-03 | 2023-04-04 | 荣耀终端有限公司 | Shooting method and electronic equipment |
CN116208846A (en) * | 2021-11-29 | 2023-06-02 | 中兴通讯股份有限公司 | Shooting preview method, image fusion method, electronic device and storage medium |
CN114710618A (en) * | 2022-03-23 | 2022-07-05 | 三星(中国)半导体有限公司 | Method and device for previewing spliced image and electronic equipment |
CN116095464A (en) * | 2022-07-15 | 2023-05-09 | 荣耀终端有限公司 | Terminal shooting method and terminal equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112333380B (en) | Shooting method and equipment | |
US11831977B2 (en) | Photographing and processing method and electronic device | |
CN110072070B (en) | Multi-channel video recording method, equipment and medium | |
CN113905179B (en) | Method for switching cameras by terminal and terminal | |
WO2020073959A1 (en) | Image capturing method, and electronic device | |
CN113489894B (en) | Shooting method and terminal in long-focus scene | |
CN113132620A (en) | Image shooting method and related device | |
CN114205515B (en) | Anti-shake processing method for video and electronic equipment | |
CN113596316B (en) | Photographing method and electronic equipment | |
CN113364969A (en) | Imaging method of non-line-of-sight object and electronic equipment | |
CN114302063B (en) | Shooting method and equipment | |
CN115150543B (en) | Shooting method, shooting device, electronic equipment and readable storage medium | |
CN116095464A (en) | Terminal shooting method and terminal equipment | |
CN117714849A (en) | Image shooting method and related equipment | |
CN116782023A (en) | Shooting method and electronic equipment | |
CN116709018B (en) | Zoom bar segmentation method and electronic equipment | |
US12149817B2 (en) | Image processing method and apparatus | |
CN115696067B (en) | Image processing method for terminal, terminal device and computer readable storage medium | |
CN115209027B (en) | Camera focusing method and electronic equipment | |
CN118524286A (en) | Focusing method and focusing device | |
CN116225276A (en) | Display screen window switching method and electronic equipment | |
CN116582743A (en) | Shooting method, electronic equipment and medium | |
CN116939123A (en) | Photographing method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |