CN106506958B - Method for shooting by adopting mobile terminal and mobile terminal - Google Patents
- Publication number
- CN106506958B (application CN201611025799.5A)
- Authority
- CN
- China
- Prior art keywords
- camera
- information
- position information
- shooting
- image
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/65—Control of camera operation in relation to power supply
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6812—Motion detection based on additional sensors, e.g. acceleration sensors
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Telephone Function (AREA)
Abstract
The embodiment of the invention provides a method for shooting with a mobile terminal, and a corresponding mobile terminal, where the mobile terminal includes at least two cameras. The method comprises the following steps: starting a first camera and a second camera; acquiring the depth-of-field information of the photographed subject in the viewfinder frame; judging whether the movement information of the photographed subject is smaller than a preset amplitude threshold; when the movement information is smaller than the preset amplitude threshold, closing the second camera; and shooting an image with the first camera. In the embodiment of the invention, the second camera is adjusted with several shooting parameters, so that the depth-of-field information is determined effectively; at the same time, under certain conditions the user's hand shake is prevented from affecting the shot, improving the shooting effect and picture quality and enhancing the user experience. Finally, after the depth-of-field information is determined or the subject is found not to move, the second camera is closed and the first camera alone shoots, reducing power consumption and saving energy.
Description
Technical Field
The present invention relates to the field of mobile terminal technology, and in particular, to a method for shooting with a mobile terminal and a mobile terminal.
Background
With the continuous development of science and technology, electronic products have become increasingly varied, and mobile terminals with shooting functions are more and more popular. Users can shoot with a mobile terminal anytime and anywhere, and send the resulting images to relatives and friends in the form of multimedia messages and the like; the shot is taken on the spot, which is convenient and fast.
Mobile terminals such as smartphones are no longer simple communication tools, but devices integrating leisure, entertainment, communication and other functions. Meanwhile, users' requirements for shooting quality keep rising. Many mobile terminals on the market use dual cameras; such terminals can greatly improve the shooting effect and provide many other shooting functions, such as 3D depth-of-field mapping, obstacle detection, and higher-quality images. However, the camera module is one of the more power-hungry modules of a mobile terminal, especially when two sets of camera modules are installed.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present invention provide a shooting method and a mobile terminal, so as to solve the problem of power consumption in shooting an image by using a dual-camera module or a multi-camera module in the prior art.
In order to solve the above problem, an embodiment of the present invention discloses a method for shooting by using a mobile terminal, where the mobile terminal includes at least two cameras, and the method includes:
starting a first camera and a second camera;
acquiring the depth of field information of a shot object in a view frame;
judging whether the movement information of the shot subject is smaller than a preset amplitude threshold value or not;
when the movement information is smaller than a preset amplitude threshold value, closing the second camera;
and shooting an image by adopting the first camera.
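The claimed steps can be illustrated with a minimal sketch. The patent names the steps but specifies no API, so all identifiers below (`Camera`, `shoot`, the parameter names) are assumptions for illustration only:

```python
# Hypothetical sketch of the claimed shooting flow; every name here is
# an assumption, since the patent does not define a software interface.

class Camera:
    def __init__(self, name):
        self.name = name
        self.is_on = False

    def open(self):
        self.is_on = True

    def close(self):
        self.is_on = False


def shoot(first, second, movement_info, amplitude_threshold):
    """Return True if the image is shot with the second camera already off."""
    first.open()   # step 1: start both cameras
    second.open()
    # step 2 (omitted): acquire depth-of-field information of the subject
    # in the viewfinder, assisted by the second camera
    if movement_info < amplitude_threshold:  # steps 3-4
        second.close()  # subject effectively still: close second camera
    return first.is_on and not second.is_on  # step 5: shoot with first camera


first, second = Camera("first"), Camera("second")
print(shoot(first, second, movement_info=0.03, amplitude_threshold=0.3))  # True
```

When the movement information is at or above the threshold, the second camera stays on and the sketch reports that single-camera shooting did not occur.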
The embodiment of the invention also discloses a mobile terminal including at least two cameras, the terminal comprising:
the camera opening module is used for opening the first camera and the second camera;
the depth-of-field information acquisition module is used for acquiring the depth-of-field information of the photographed subject in the viewfinder frame;
the movement information judging module is used for judging whether the movement information of the photographed subject is smaller than a preset amplitude threshold;
the second camera closing module is used for closing the second camera when the movement information is smaller than a preset amplitude threshold value;
and the first camera shooting module is used for shooting images by adopting the first camera.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the first camera and the second camera are started, the depth-of-field information of the photographed subject in the viewfinder frame is acquired, and it is judged whether the movement information of the photographed subject is smaller than a preset amplitude threshold; when the movement information is smaller than the preset amplitude threshold, the second camera is closed and the first camera shoots the image. Finally, after the depth-of-field information is determined or the subject is found not to move, the second camera is closed and shooting proceeds with the first camera alone, reducing power consumption and saving energy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a flowchart illustrating the steps of a first embodiment of a method for photographing with a mobile terminal according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a second embodiment of a method for photographing with a mobile terminal according to an embodiment of the present invention;
fig. 3 is a block diagram of a mobile terminal according to a third embodiment of the apparatus in the embodiment of the present invention;
fig. 4 is a block diagram of a mobile terminal according to a fourth embodiment of the apparatus in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a mobile terminal according to a fifth embodiment of the apparatus in the embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the embodiments of the present invention more clearly apparent, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Method embodiment one
Referring to fig. 1, a flowchart of a first step of a method for shooting by using a mobile terminal according to an embodiment of the present invention is shown, where the mobile terminal includes at least two cameras, and the method specifically includes the following steps:
in the embodiment of the present invention, the mobile terminal may include an intelligent device with at least two cameras, and the mobile terminal is adopted to perform shooting, and first receives a camera opening request of a user, where a camera opening request triggered by the user may be a click event of touching a screen, a press event of pressing a physical key, or other ways to open the camera.
Further, after the mobile terminal receives a camera opening request of a user, the first camera and the second camera are opened according to the request, it should be noted that the order of opening the cameras may be simultaneously opened or sequentially opened according to a certain order, and the embodiment of the present invention does not limit this.
102, acquiring depth of field information of a shot object in a view frame;
after the first camera and the second camera are started, and after the user adjusts the shooting parameters for the second camera, the mobile terminal acquires the depth-of-field information of the photographed subject in the viewfinder frame; that is, the user has completed the focusing operation, photometric operation, focus-adjusting operation, and so on for the object to be photographed. The shooting parameters may include focal length information and/or aperture information and/or shutter time information and/or gain information. Specifically, the focal length information refers to the distance from the center of the lens to the imaging plane of the CCD (Charge-Coupled Device) in a camera of the mobile terminal, and may be set within the focal length limits of the mobile terminal. The aperture is a device, usually inside the lens, that controls the amount of light reaching the CCD through the lens; aperture information is generally expressed as an F value, and the larger the F value, the smaller the amount of light admitted. The shutter is the device in a camera that controls how long light illuminates the CCD; the shutter time information may be the illumination time of the CCD, e.g., 1/250 second or 1/60 second, which determines the brightness of the photograph. The gain information refers to a process parameter used to compensate the image.
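As a rough illustration of how the shooting parameters enumerated here might be grouped in software (the field names and sample values are assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class ShootingParams:
    """Hypothetical container for the shooting parameters named in the text."""
    focal_length_mm: float  # distance from lens center to the CCD imaging plane
    aperture_f: float       # F value; a larger F value admits less light
    shutter_s: float        # time light illuminates the CCD, e.g. 1/250 s
    gain: float             # process parameter used to compensate the image

params = ShootingParams(focal_length_mm=4.2, aperture_f=2.2,
                        shutter_s=1 / 250, gain=1.0)
# A larger F value means less light entering: F/8 admits less than F/2.2.
print(params.aperture_f < 8.0)  # True
```

The patent also allows further parameters such as white balance information; a real implementation would extend the container accordingly.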
It should be noted that, in the embodiment of the present invention, the first camera may be configured to capture the image, while the second camera may be configured to receive the shooting parameters and thereby determine the depth-of-field information; the two cameras thus serve different functions. The depth-of-field information may refer to the range of distances in front of and behind the photographed subject within which the lens of the mobile terminal, or another type of imager (such as a professional camera), can obtain a clear image. In the embodiment of the present invention, the second camera may be configured to determine the depth-of-field information, that is, the distance at which a clear image of the identified subject can be obtained. It should also be noted that the shooting parameters listed above are merely examples; they may further include parameters such as white balance information, which is not limited in the embodiment of the present invention.
103, judging whether the movement information of the shot subject is smaller than a preset amplitude threshold value;
in practical application of the embodiment of the invention, the mobile terminal uses the first camera to collect a first feature image and a second feature image at a specific time interval. The first feature image includes first position information of the photographed subject, and the second feature image includes second position information of the photographed subject. It is then judged whether the first position information is consistent with the second position information; when they are inconsistent, the difference between them is determined to be the movement information, and it is judged whether the movement information is smaller than a preset amplitude threshold. In other words, the first camera acquires the position of the photographed subject at different moments: it shoots two images containing the subject at different times, the subject is identified through image recognition, the subject's position in each image is obtained, and the change in position between the two images is calculated. When the change amplitude (i.e., the movement information) is larger than the preset amplitude threshold, the subject is considered to have moved, and the depth-of-field information is determined again according to the user's shooting parameters. It should be noted that the preset amplitude threshold prevents the user's hand shake from affecting the photo; further, the first camera is configured with the shooting parameters of the second camera.
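A minimal sketch of this movement check, assuming the subject's positions have already been extracted from the two feature images by image recognition (the coordinates, the 0.1 s interval, and the threshold value are illustrative assumptions):

```python
def movement_info(pos1, pos2):
    """Per-axis absolute difference between two detected subject positions."""
    return tuple(round(abs(b - a), 6) for a, b in zip(pos1, pos2))

def subject_moved(pos1, pos2, threshold):
    """True if the movement on any axis reaches the preset amplitude threshold."""
    if pos1 == pos2:
        return False  # positions consistent: no movement detected
    return any(d >= threshold for d in movement_info(pos1, pos2))

# Positions of the subject in two feature images captured 0.1 s apart
# (illustrative values, matching the worked example later in the text):
p1, p2 = (2.32, 1.45), (2.32, 1.48)
print(movement_info(p1, p2))       # (0.0, 0.03)
print(subject_moved(p1, p2, 0.3))  # False: the second camera may be closed
```

When `subject_moved` returns True, the flow in the text instead re-determines the depth-of-field information from the user's shooting parameters.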
104, when the movement information is smaller than the preset amplitude threshold, closing the second camera;
specifically, when the detected movement information is smaller than the preset amplitude threshold, it can be considered that the photographed subject has not moved, or that the user's shake is not enough to affect the depth-of-field information of the image, so the second camera is automatically turned off.
And 105, shooting an image by using the first camera.
Specifically, in the embodiment of the invention, after the second camera is automatically closed, a click event on a screen button by the user is received, and in response to the click event the first camera captures an image. It should be noted that the way in which the user triggers the capture may include a click event touching the screen, a press event on a physical key, or an automatic timed capture, which is not limited in this embodiment of the present invention.
In the embodiment of the invention, the first camera and the second camera are started; the depth-of-field information of the photographed subject in the viewfinder frame is acquired; it is judged whether the movement information of the photographed subject is smaller than a preset amplitude threshold; when it is, the second camera is closed and an image is shot with the first camera. Because the second camera is adjusted with several shooting parameters, the depth-of-field information is determined effectively; the preset amplitude threshold further allows an accurate judgment of whether the photographed subject has moved, while preventing the user's shake from affecting the shot under certain conditions, improving the shooting effect and picture quality and enhancing the user experience. Finally, after the depth-of-field information is determined or the subject is found not to move, the second camera is closed and shooting proceeds with the first camera alone, reducing power consumption and saving energy.
Method embodiment two
Referring to fig. 2, a flowchart illustrating steps of a second embodiment of a method for shooting with a mobile terminal according to an embodiment of the present invention is shown, where the method specifically includes the following steps:
to shoot with the mobile terminal, a camera opening request of the user is first received; the request may be triggered by a click event touching the screen, a press event on a physical key, or a preset trigger (such as shaking the device or a specific swipe on the screen).
Specifically, after the mobile terminal receives a camera opening request of a user, the first camera and the second camera are opened according to the request. It should be noted that, in the embodiment of the present invention, the first camera may be used for shooting, and the second camera may be used for determining depth information.
in the embodiment of the invention, after the first camera and the second camera are started, and after the user adjusts the shooting parameters for the second camera, the mobile terminal acquires the depth-of-field information of the photographed subject in the viewfinder frame; that is, the user has completed the focusing operation, photometric operation, focus-adjusting operation, and so on for the object to be photographed. The shooting parameters may include focal length information and/or aperture information and/or shutter time information and/or gain information. It should be noted that the automatically adjusted shooting parameters for the second camera may include auto focus, auto exposure, and auto white balance. In the embodiment of the present invention, the second camera may be configured to determine the depth-of-field information, that is, the distance at which a clear image of the identified subject can be obtained.
specifically, the first camera captures the first feature image and the second feature image at a specific time interval. The interval may be 0.1 s or 0.5 s, or any time value set according to the actual situation, which is not specifically limited in the embodiment of the present invention. Through image recognition, the photographed subject in the first and second feature images can be identified and the corresponding first and second position information recorded by the processor and the memory, without output or preview.
further, the first position information is compared with the second position information to judge whether they are consistent; when the first position information is consistent with the second position information, it indicates that the photographed subject has not moved, or that the user's shake is not enough to affect the determination of the depth-of-field information.
specifically, the difference between the first position information and the second position information is determined to be the movement information. For example, suppose a rectangular plane coordinate system is established with the length and width directions of the image captured by the first camera as its axes, the first position information in the first feature image is the coordinate (2.32, 1.45), and the second position information in the second feature image is the coordinate (2.32, 1.48). The subject has then moved 0.03 coordinate points in the width direction; this difference of 0.03 in the width direction is determined to be the movement information.
it should be noted that the preset amplitude threshold may be any value along the width or length direction of the rectangular plane coordinate system established with the length and width directions of the image captured by the first camera as its axes; for example, the threshold may be set to 0.3 coordinate points in the width direction or 0.4 coordinate points in the length direction, where a coordinate point serves as the unit of length. Of course, the preset amplitude threshold may also be any value set according to the actual situation, which is not limited in this embodiment of the present invention.
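The per-axis threshold described here can be expressed as a simple check. The 0.3 and 0.4 values come from the example in the text; the dictionary layout and function name are assumptions:

```python
# Preset amplitude thresholds per axis, in coordinate points
# (0.3 in the width direction, 0.4 in the length direction, per the text).
THRESHOLD = {"width": 0.3, "length": 0.4}

def below_threshold(move):
    """move: per-axis movement dict; True if every axis is under its threshold."""
    return all(move.get(axis, 0.0) < limit for axis, limit in THRESHOLD.items())

print(below_threshold({"width": 0.03, "length": 0.0}))  # True: close second camera
print(below_threshold({"width": 0.35, "length": 0.0}))  # False: keep both cameras on
```

Any thresholding scheme would do; per-axis limits simply match the width/length example given above.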
In a preferred embodiment of the present invention, before the step of turning off the second camera when the movement information is smaller than a preset amplitude threshold, the method further includes: and setting the first camera by adopting the shooting parameters of the second camera.
when the detected movement information is smaller than the preset amplitude threshold, that is, when the photographed subject has not moved or the user's shake is not enough to affect the depth-of-field information of the image, the second camera is automatically turned off.
And step 208, shooting an image by using the first camera.
In a preferred embodiment of the present invention, the step of capturing the image by using the first camera includes: receiving a click event aiming at a screen button of a user; and aiming at the click event, the first camera shoots an image.
In the embodiment of the invention, the first camera acquires a first feature image and a second feature image at a specific time interval, and it is judged whether the first position information is consistent with the second position information. When they are inconsistent, the difference between them is determined to be the movement information, and it is judged whether the movement information is smaller than a preset amplitude threshold; when it is, the second camera is closed and the first camera shoots the image. By judging whether the movement amplitude of the photographed subject exceeds a certain threshold, changes in the depth-of-field information caused by the user's shake or the subject's movement are effectively prevented, the quality of the image is improved, the user experience is greatly improved, power consumption is reduced, and resources are saved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Device embodiment three
Fig. 3 is a block diagram of a mobile terminal of one embodiment of the present invention. The mobile terminal 300 shown in fig. 3 includes a camera opening module 301, a depth-of-field information acquisition module 302, a movement information judgment module 303, a second camera closing module 304, and a first camera shooting module 305.
The camera opening module 301 is used for opening the first camera and the second camera;
a depth information acquiring module 302, configured to acquire depth information of a subject captured in the view finder;
a movement information judgment module 303, configured to judge whether the movement information of the photographed subject is smaller than a preset amplitude threshold;
a second camera closing module 304, configured to close the second camera when the movement information is smaller than a preset amplitude threshold;
a first camera shooting module 305, configured to shoot an image with the first camera.
Optionally, the movement information judgment module 303 includes:
the image shooting submodule is used for acquiring a first characteristic image and a second characteristic image at a specific time interval by adopting the first camera; wherein the first feature image includes first position information of the subject, and the second feature image includes second position information of the subject;
the position information judgment submodule is used for judging whether the first position information is consistent with the second position information;
the movement information determination submodule is used for determining that the difference between the first position information and the second position information is the movement information when the first position information is inconsistent with the second position information;
and the preset amplitude threshold judgment submodule is used for judging whether the movement information is smaller than a preset amplitude threshold.
Optionally, the terminal further includes, connected to the second camera closing module 304:
a first camera setting module, used for setting the first camera with the shooting parameters of the second camera.
Optionally, the shooting parameters include focal length information and/or aperture information and/or shutter time information and/or gain information.
Optionally, the first camera shooting module 305 includes:
the click event receiving submodule is used for receiving a click event aiming at a screen key of a user;
and the image shooting submodule is used for shooting an image by the first camera aiming at the click event.
Device embodiment four
Fig. 4 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 400 shown in fig. 4 includes: at least one processor 401, a memory 402, at least one network interface 404, other user interfaces 403, and a photographing component 406. The various components in the mobile terminal 400 are coupled together by a bus system 405. It is understood that the bus system 405 enables connection and communication between these components. In addition to a data bus, the bus system 405 includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are labeled as the bus system 405 in fig. 4. The photographing component 406 includes the first camera and the second camera.
The user interface 403 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It will be appreciated that the memory 402 in embodiments of the invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which functions as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 402 of the systems and methods described in this embodiment of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 402 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 4021 and application programs 4022.
The operating system 4021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is configured to implement various basic services and process hardware-based tasks. The application 4022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. A program for implementing the method according to the embodiment of the present invention may be included in the application 4022.
In the embodiment of the present invention, by calling a program or instructions stored in the memory 402 (specifically, a program or instructions stored in the application 4022), the processor 401 is configured to: start the first camera and the second camera; judge whether shooting parameter adjusting information of the user for the second camera is received; acquire the depth-of-field information of the photographed subject in the viewfinder frame; after the shooting parameter adjusting information is received, judge whether the movement information of the subject photographed by the first camera is smaller than a preset amplitude threshold; when the movement information is smaller than the preset amplitude threshold, close the second camera; and shoot an image with the first camera.
The method disclosed in the above embodiments of the present invention may be applied to the processor 401 or implemented by the processor 401. The processor 401 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 401. The processor 401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 401 is further configured to acquire a first feature image and a second feature image at a specific time interval using the first camera, where the first feature image includes first position information of the subject and the second feature image includes second position information of the subject.
Optionally, the processor 401 is further configured to determine whether the first position information is consistent with the second position information.
Optionally, the processor 401 is further configured to, when the first position information is inconsistent with the second position information, take the difference between the first position information and the second position information as the movement information.
Optionally, the processor 401 is further configured to determine whether the movement information is smaller than a preset amplitude threshold.
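The four optional steps above reduce to a per-axis position comparison. A minimal sketch follows; the names `Position`, `movement_info`, and `below_threshold` are illustrative and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float  # along the image width direction
    y: float  # along the image length direction

def movement_info(first: Position, second: Position):
    """If the two positions are inconsistent, their difference is the
    movement information; consistent positions mean no movement."""
    if first == second:
        return None
    return Position(second.x - first.x, second.y - first.y)

def below_threshold(move, threshold: Position) -> bool:
    """Check the movement against the preset per-axis amplitude threshold."""
    if move is None:  # positions were consistent: no movement at all
        return True
    return abs(move.x) < threshold.x and abs(move.y) < threshold.y
```

For example, positions (10, 20) and (12, 21) give movement information (2, 1), which passes a (5, 5) threshold, so the second camera could be turned off.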
Optionally, the processor 401 is further configured to configure the first camera with the shooting parameters of the second camera.
Optionally, the shooting parameters include focal length information, aperture information, shutter time information, and/or gain information.
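The parameter handover in the two steps above can be sketched as follows. This is a hedged illustration under assumed names (`ShootingParams`, `Camera`, `configure_from`); the patent does not specify an API.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class ShootingParams:
    focal_length_mm: float  # focal length information
    aperture_f: float       # aperture information (f-number)
    shutter_s: float        # shutter time information, in seconds
    gain: float             # gain information

@dataclass
class Camera:
    name: str
    params: Optional[ShootingParams] = None

def configure_from(first: Camera, second: Camera) -> None:
    """Set the first camera using the second camera's shooting parameters,
    so exposure does not change when the second camera is turned off."""
    assert second.params is not None, "second camera must be configured"
    first.params = replace(second.params)  # independent copy of the params
```

Copying (rather than sharing) the parameter object means the first camera keeps its configuration even after the second camera is closed and its state discarded.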
Optionally, the processor 401 is further configured to receive a user's click event on a screen button.
Optionally, the processor 401 is further configured to capture an image with the first camera in response to the click event.
The mobile terminal 400 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not repeated here.
In the embodiment of the invention, the first camera and the second camera are turned on; the depth of field information of the photographed subject in the view frame is acquired; whether the movement information of the photographed subject is smaller than a preset amplitude threshold is determined; when the movement information is smaller than the preset amplitude threshold, the second camera is turned off; and an image is captured with the first camera. In the embodiment of the invention, the second camera is adjusted with a plurality of shooting parameters so that the depth of field information is determined effectively; the preset amplitude threshold then makes it possible to judge accurately whether the photographed subject has moved, while also preventing user shake from degrading the shot under certain conditions, which improves the shooting effect and picture quality and enhances the user experience. Finally, after the depth of field information is determined or the subject is found not to have moved, the second camera is turned off and the first camera alone is used for shooting, which reduces power consumption and saves energy.
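The overall flow summarised above can be sketched as a short control function. This is an illustrative sketch only; the `Cam` class and `capture` function are hypothetical stand-ins, not the patent's implementation.

```python
class Cam:
    """Hypothetical stand-in for a phone camera module."""
    def __init__(self, name):
        self.name = name
        self.is_open = False
        self.params = None  # shooting parameters (focal length, aperture, ...)

    def open(self):
        self.is_open = True

    def close(self):
        self.is_open = False

    def shoot(self):
        return f"image from {self.name}"

def capture(first, second, movement, threshold):
    """Open both cameras; when the subject's movement is below the preset
    amplitude threshold, hand the second camera's tuned shooting parameters
    to the first camera, turn the second camera off to save power, and
    shoot with the first camera."""
    first.open()
    second.open()
    if movement < threshold:
        first.params = second.params  # reuse the second camera's parameters
        second.close()                # the power-saving step
    return first.shoot()
```

In the power-saving path, only the first camera is open at shutter time, yet it shoots with the parameters the second camera had already tuned.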
Device Embodiment V
Fig. 5 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 500 in Fig. 5 may be a mobile phone, a tablet computer, a personal digital assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 500 in Fig. 5 includes a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a processor 560, an audio circuit 570, a Wi-Fi (wireless fidelity) module 580, a power supply 590, and a photographing component 5110.
The input unit 530 may be used to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 500. Specifically, in the embodiment of the present invention, the input unit 530 may include a touch panel 531. The touch panel 531, also called a touch screen, can collect a user's touch operations on or near it (for example, operations performed on the touch panel 531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 560, and can also receive and execute commands sent by the processor 560. The touch panel 531 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. In addition to the touch panel 531, the input unit 530 may further include other input devices 532, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 540 may be used to display information input by the user or provided to the user, as well as the various menu interfaces of the mobile terminal 500. The display unit 540 may include a display panel 541; optionally, the display panel 541 may be configured in the form of an LCD, an organic light-emitting diode (OLED) display, or the like.
It should be noted that the touch panel 531 may cover the display panel 541 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is passed to the processor 560 to determine the type of the touch event, and the processor 560 then provides the corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of the two display areas is not limited; they may be arranged one above the other, side by side, or in any other manner that distinguishes the two areas. The application interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an application's icon and/or a widget desktop control. The application interface display area may also be an empty interface that does not contain any content. The common control display area is used to display frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons like the phone book icon.
The photographing component 5110 includes at least a first camera and a second camera.
The processor 560 is the control center of the mobile terminal 500. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the mobile terminal 500 and processes data by running or executing software programs and/or modules stored in the first memory 521 and calling data stored in the second memory 522, thereby monitoring the mobile terminal 500 as a whole. Optionally, the processor 560 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 521 and/or the data stored in the second memory 522, the processor 560 is configured to: turn on the first camera and the second camera; acquire the depth of field information of the photographed subject in the view frame; determine whether the movement information of the photographed subject is smaller than a preset amplitude threshold; turn off the second camera when the movement information is smaller than the preset amplitude threshold; and capture an image with the first camera.
Optionally, the processor 560 is further configured to acquire a first feature image and a second feature image at a specific time interval using the first camera, where the first feature image includes first position information of the subject and the second feature image includes second position information of the subject.
Optionally, the processor 560 is further configured to determine whether the first position information is consistent with the second position information.
Optionally, the processor 560 is further configured to, when the first position information is inconsistent with the second position information, take the difference between the first position information and the second position information as the movement information.
Optionally, the processor 560 is further configured to determine whether the movement information is smaller than the preset amplitude threshold.
Optionally, the processor 560 is further configured to configure the first camera with the shooting parameters of the second camera.
Optionally, the shooting parameters include focal length information, aperture information, shutter time information, and/or gain information.
Optionally, the processor 560 is further configured to receive a user's click event on a screen button.
Optionally, the processor 560 is further configured to capture an image with the first camera in response to the click event.
It can be seen that, in the embodiment of the present invention, the first feature image and the second feature image acquired by the first camera at a specific time interval are used to determine whether the first position information is consistent with the second position information. When they are inconsistent, the difference between the first position information and the second position information is taken as the movement information, and it is determined whether the movement information is smaller than the preset amplitude threshold. When the movement information is smaller than the preset amplitude threshold, the second camera is turned off and the first camera is used to capture the image. By judging whether the movement amplitude of the subject exceeds a certain threshold, variation of the depth of field information caused by user shake or subject movement is effectively prevented, which improves image quality, greatly improves the user experience, reduces power consumption, and saves resources.
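The two-frame acquisition at a specific time interval can be sketched end to end as below. The helper `feature_position` (returning the subject's position extracted from a feature image) is an assumed name; the patent does not define this interface.

```python
import time

def movement_within_threshold(camera, interval_s, tx, ty):
    """Capture two feature images a fixed interval apart and check whether
    the subject's movement stays within the per-axis amplitude thresholds
    (tx along the image width, ty along the image length)."""
    x1, y1 = camera.feature_position()  # assumed helper: subject position
    time.sleep(interval_s)              # the "specific time interval"
    x2, y2 = camera.feature_position()
    if (x1, y1) == (x2, y2):            # positions consistent: no movement
        return True
    dx, dy = x2 - x1, y2 - y1           # the movement information
    return abs(dx) < tx and abs(dy) < ty
```

A `True` result corresponds to the branch where the second camera may be turned off and the first camera shoots alone.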
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (4)
1. A method for shooting with a mobile terminal, characterized in that the mobile terminal comprises at least two cameras, the method comprising the following steps:
turning on a first camera and a second camera;
acquiring the depth of field information of a photographed subject in a view frame;
judging whether the movement information of the photographed subject is smaller than a preset amplitude threshold;
turning off the second camera when the movement information is smaller than the preset amplitude threshold;
shooting an image with the first camera;
wherein the step of judging whether the movement information of the photographed subject is smaller than the preset amplitude threshold comprises:
acquiring a first feature image and a second feature image at a specific time interval with the first camera; wherein the first feature image includes first position information of the subject, and the second feature image includes second position information of the subject;
judging whether the first position information is consistent with the second position information;
determining, when the first position information is inconsistent with the second position information, that the difference between the first position information and the second position information is the movement information;
judging whether the movement information is smaller than the preset amplitude threshold;
wherein the preset amplitude threshold is a preset value in the width direction and the length direction of a planar rectangular coordinate system established with reference to the length and width directions of the image captured by the first camera;
wherein, before the step of turning off the second camera when the movement information is smaller than the preset amplitude threshold, the method further comprises:
setting the first camera with the shooting parameters of the second camera; wherein the shooting parameters include focal length information, aperture information, shutter time information, gain information, and white balance information.
2. The method of claim 1, wherein the step of shooting an image with the first camera comprises:
receiving a user's click event on a screen button;
shooting an image with the first camera in response to the click event.
3. A mobile terminal, characterized in that the mobile terminal comprises at least two cameras, the terminal comprising:
the camera opening module is used for opening the first camera and the second camera;
the depth of field information acquisition module is used for acquiring the depth of field information of a photographed subject in the view frame;
the movement information judging module is used for judging whether the movement information of the photographed subject is smaller than a preset amplitude threshold;
the second camera closing module is used for turning off the second camera when the movement information is smaller than the preset amplitude threshold;
the first camera shooting module is used for shooting an image with the first camera;
the movement information judging module comprises:
the image shooting submodule is used for acquiring a first feature image and a second feature image at a specific time interval with the first camera; wherein the first feature image includes first position information of the subject, and the second feature image includes second position information of the subject;
the position information judgment submodule is used for judging whether the first position information is consistent with the second position information;
the movement information determining submodule is used for determining that the difference between the first position information and the second position information is the movement information when the first position information is inconsistent with the second position information;
the preset amplitude threshold judgment submodule is used for judging whether the movement information is smaller than the preset amplitude threshold;
wherein the preset amplitude threshold is a preset value in the width direction and the length direction of a planar rectangular coordinate system established with reference to the length and width directions of the image captured by the first camera;
wherein the terminal further comprises, connected to the second camera closing module:
the first camera setting module, which is used for setting the first camera with the shooting parameters of the second camera;
wherein the shooting parameters include focal length information, aperture information, shutter time information, gain information, and white balance information.
4. The terminal of claim 3, wherein the first camera shooting module comprises:
the click event receiving submodule is used for receiving a user's click event on a screen button;
the image shooting submodule is used for shooting an image with the first camera in response to the click event.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611025799.5A CN106506958B (en) | 2016-11-15 | 2016-11-15 | Method for shooting by adopting mobile terminal and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611025799.5A CN106506958B (en) | 2016-11-15 | 2016-11-15 | Method for shooting by adopting mobile terminal and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106506958A CN106506958A (en) | 2017-03-15 |
CN106506958B true CN106506958B (en) | 2020-04-10 |
Family
ID=58327251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611025799.5A Active CN106506958B (en) | 2016-11-15 | 2016-11-15 | Method for shooting by adopting mobile terminal and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106506958B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107147827B (en) * | 2017-05-19 | 2023-09-26 | 北京京东尚科信息技术有限公司 | Image acquisition method and device |
CN107395988A (en) * | 2017-08-31 | 2017-11-24 | 华勤通讯技术有限公司 | The control method and system of the camera of mobile terminal |
CN107465878A (en) * | 2017-09-19 | 2017-12-12 | 珠海市魅族科技有限公司 | Control method and device, terminal and the readable storage medium storing program for executing of camera |
CN107493438B (en) * | 2017-09-26 | 2020-05-15 | 华勤通讯技术有限公司 | Continuous shooting method and device for double cameras and electronic equipment |
CN110072057B (en) * | 2019-05-14 | 2021-03-09 | Oppo广东移动通信有限公司 | Image processing method and related product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333702A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Method, device and terminal for automatic focusing |
CN104363376A (en) * | 2014-11-28 | 2015-02-18 | 广东欧珀移动通信有限公司 | Continuous focusing method, device and terminal |
CN104363379A (en) * | 2014-11-28 | 2015-02-18 | 广东欧珀移动通信有限公司 | Shooting method by use of cameras with different focal lengths and terminal |
CN105120135A (en) * | 2015-08-25 | 2015-12-02 | 努比亚技术有限公司 | Binocular camera |
CN105578026A (en) * | 2015-07-10 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method and user terminal |
2016-11-15: CN CN201611025799.5A patent CN106506958B (en) — Active
Also Published As
Publication number | Publication date |
---|---|
CN106506958A (en) | 2017-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6267363B2 (en) | Method and apparatus for taking images | |
CN107197169B (en) | high dynamic range image shooting method and mobile terminal | |
EP3076659B1 (en) | Photographing apparatus, control method thereof, and non-transitory computer-readable recording medium | |
CN106060406B (en) | Photographing method and mobile terminal | |
US10284773B2 (en) | Method and apparatus for preventing photograph from being shielded | |
KR101946437B1 (en) | Method and terminal for acquiring panoramic image | |
KR101678173B1 (en) | Method, device, program and recording medium for photographing | |
CN106506958B (en) | Method for shooting by adopting mobile terminal and mobile terminal | |
CN105245775B (en) | camera imaging method, mobile terminal and device | |
CN106954027B (en) | Image shooting method and mobile terminal | |
CN106454086B (en) | Image processing method and mobile terminal | |
KR20200019728A (en) | Shooting mobile terminal | |
CN105282441B (en) | Photographing method and device | |
IL224050A (en) | Above-lock camera access | |
CN104216525B (en) | Method and device for mode control of camera application | |
KR102501036B1 (en) | Method and device for shooting image, and storage medium | |
CN115087955A (en) | Input-based startup sequence for camera applications | |
CN106713747A (en) | Focusing method and mobile terminal | |
CN108040204B (en) | Image shooting method and device based on multiple cameras and storage medium | |
KR20170009089A (en) | Method and photographing device for controlling a function based on a gesture of a user | |
CN107346332A (en) | A kind of image processing method and mobile terminal | |
CN106713742B (en) | Shooting method and mobile terminal | |
JP2015019376A (en) | Imaging control system and control method of the same | |
KR20150014226A (en) | Electronic Device And Method For Taking Images Of The Same | |
CN105426081B (en) | Interface switching device and method of mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||