CN118524277A - Camera driving method, device, vehicle and storage medium - Google Patents
- Publication number
- CN118524277A (application number CN202410980162.XA)
- Authority
- CN
- China
- Prior art keywords
- camera
- target
- configuration information
- driving
- power
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/4401—Bootstrapping
- G06F9/4411—Configuring for operating with peripheral devices; Loading of device drivers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Abstract
The application relates to a camera driving method, a device, a vehicle, and a storage medium. A virtual camera is obtained by merging the power-on parameters corresponding to a device tree with the element identification addresses of the camera elements. The virtual camera controls a communication bus to power on according to the power-on parameters, and each element identification address corresponding to the virtual camera is accessed over the powered bus. Target configuration information is determined according to the degree of matching between the first element identifiers so acquired and the second element identifiers in each piece of configuration information corresponding to the camera interface, and the target camera actually carried by the camera interface is driven according to the target driving information corresponding to the target configuration information. Because of the merged information in the virtual camera, a single power-on suffices to acquire, in one pass, the first element identifiers of the target camera actually carried by the camera interface, and thus to identify the target configuration information corresponding to that camera. This enables fast driver detection for the target camera and improves camera driving efficiency.
Description
Technical Field
The present application relates to the field of camera technologies, and in particular, to a method and apparatus for driving a camera, a vehicle, and a storage medium.
Background
As intelligent products continue to be upgraded, many of them can be configured with several different camera devices as required. With the growing variety of camera peripherals, the camera drivers corresponding to each camera have diversified as well. In the related art, a relatively independent driver is generally provided for each camera or camera component; the camera elements and power-on sequences included in each driver differ, and power-on detection must be performed separately for each driver at every power-on. During camera startup, all camera drivers must be probed to determine the final target driver, which requires repeated power-on and power-off cycles. As a result, camera startup is slow and usability suffers.
Disclosure of Invention
The embodiments of the present application provide a camera driving method, a camera driving device, a vehicle, and a storage medium, so as to at least partially solve the above technical problems.
In order to achieve the above object, according to a first aspect of the present application, there is provided a camera driving method comprising:
integrating the power-on parameters corresponding to a device tree with the element identification addresses of the camera elements to obtain a virtual camera, wherein the device tree is created according to the camera configuration information compatible with the camera interface;
controlling a communication bus to power on through the virtual camera according to the power-on parameters, accessing each element identification address corresponding to the virtual camera based on the powered communication bus, and acquiring a first element identifier;
Determining target configuration information according to the degree of matching between the first element identifier and the second element identifier in the configuration information corresponding to the camera interface, wherein the first element identifier is determined according to the target camera actually carried by the camera interface;
And driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information.
Optionally, the integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain the virtual camera includes:
acquiring configuration information of each camera bound with a device tree;
Extracting, for each piece of camera configuration information, a camera element identification address of each camera element in each piece of camera configuration information, and a power-on parameter in each piece of camera configuration information;
and respectively carrying out union processing on the extracted element identification address and the power-on parameter to generate a virtual camera.
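As a rough illustration of this union step, the following Python sketch merges the element identification addresses and power-on parameters of all camera configurations bound to one device tree into a single virtual-camera descriptor. All names and data shapes are illustrative assumptions, not taken from the patent.

```python
def build_virtual_camera(camera_configs):
    """Merge per-camera configurations into one virtual camera descriptor.

    camera_configs: list of dicts, each with
      - "id_addrs": element identification addresses, e.g. (bus_addr, reg_addr)
      - "power_params": power-on parameters, e.g. rail names
    Duplicate addresses and parameters are kept only once (set union),
    reducing the number of accesses and power operations needed later.
    """
    id_addrs = set()
    power_params = set()
    for cfg in camera_configs:
        id_addrs |= set(cfg["id_addrs"])
        power_params |= set(cfg["power_params"])
    return {"id_addrs": sorted(id_addrs), "power_params": sorted(power_params)}
```

With two configurations sharing an address and a rail, the shared entries appear only once in the result, which is what lets a single power-on cover every compatible camera.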
Optionally, the accessing, based on the powered communication bus, each element identification address corresponding to the virtual camera, to obtain a first element identification includes:
Accessing a storage unit connected with the communication bus based on the communication bus after power-on, and sequentially reading the element identification addresses in the virtual camera in the storage unit;
and taking the element identifier obtained from the element identifier address as the first element identifier.
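A minimal sketch of this one-pass read, assuming a `bus_read` callable standing in for the powered communication bus (all names are illustrative):

```python
def read_first_identifiers(bus_read, id_addrs):
    """Read every element identification address once over the powered bus.

    bus_read: callable (bus_addr, reg_addr) -> identifier bytes, or None when
              nothing is mounted behind that address.
    Returns a mapping of address -> identifier for the elements actually
    present: the "first element identifiers" of the mounted target camera.
    """
    found = {}
    for addr in id_addrs:
        ident = bus_read(*addr)
        if ident is not None:  # an identifier may or may not be readable
            found[addr] = ident
    return found
```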
Optionally, the method further comprises:
For each camera interface, acquiring a camera element identifier of each camera element in a target camera actually carried by the camera interface and an element identifier address corresponding to the camera element identifier;
And associating the camera element identification with an element identification address corresponding to the camera element identification, and storing the camera element identification and the element identification address in a storage unit connected with the communication bus.
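The provisioning step above can be sketched as follows; `eeprom_write` is a hypothetical stand-in for writing the storage unit (for example, an EEPROM) attached to the communication bus, and is not an API named by the patent:

```python
def provision_storage(eeprom_write, elements):
    """Store each camera element's identifier at its identification address.

    elements: iterable of (element_id_addr, element_identifier) pairs for the
    target camera actually mounted on the interface. Each identifier is
    associated with its address and persisted in the bus-attached storage.
    """
    store = {}
    for addr, ident in elements:
        store[addr] = ident
        eeprom_write(addr, ident)  # associate identifier with its address
    return store
```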
Optionally, the associating the camera element identifier with an element identifier address corresponding to the camera element identifier, and storing the element identifier address in a storage unit connected to the communication bus, includes:
If the read-write modes of the camera element identifiers are different, coding the camera element identifiers according to a preset coding rule;
storing the coded element identifier and the corresponding element identifier address in a storage unit connected with the communication bus in an associated manner;
The method further comprises the steps of:
if any element identifier is stored in any element identifier address, acquiring the element identifier, decoding the element identifier, and obtaining a decoded first element identifier.
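The patent does not specify the preset coding rule, so the sketch below assumes a trivially invertible byte-wise XOR purely for illustration; any invertible encoding that normalizes identifiers with different read/write modes would fit the same shape:

```python
CODE_KEY = 0x5A  # illustrative preset coding rule: byte-wise XOR key

def encode_identifier(raw: bytes) -> bytes:
    """Encode a camera element identifier before storing it."""
    return bytes(b ^ CODE_KEY for b in raw)

def decode_identifier(coded: bytes) -> bytes:
    """Decode a stored identifier back into the first element identifier.

    XOR is its own inverse, so decoding reuses the same transform.
    """
    return bytes(b ^ CODE_KEY for b in coded)
```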
Optionally, the determining the target configuration information according to the matching degree of the second element identifier in the configuration information corresponding to the camera interface and the first element identifier acquired by access includes:
according to the first element identification obtained by access, comparing the first element identification with the second element identification in each camera configuration information in turn;
and if all the second element identifiers in any one of the camera configuration information are the same as all the first element identifiers, taking the camera configuration information as the target configuration information.
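A hedged sketch of this matching step: it treats a configuration as matching when every second element identifier it expects appears among the first element identifiers that were read. The names and the use of set inclusion (rather than a stricter pairwise comparison) are simplifying assumptions.

```python
def find_target_config(first_ids, camera_configs):
    """Return the camera configuration whose expected identifiers all match.

    first_ids: collection of identifiers read from the mounted hardware.
    Each config carries "second_ids", the identifiers it expects; a config
    is the target only if every one of them was actually read.
    """
    for cfg in camera_configs:
        if set(cfg["second_ids"]) <= set(first_ids):
            return cfg
    return None  # no compatible camera configuration matched
```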
Optionally, the driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information includes:
Extracting drive power-on data and drive operation data in the target configuration information to obtain target drive information;
updating the target driving information into the generic driver corresponding to the camera interface to obtain the camera driver corresponding to the camera interface;
and driving the target camera actually carried by the camera interface according to the camera driver.
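The overlay of target driving information onto the generic driver can be sketched as follows; the dict keys are illustrative assumptions for the "drive power-on data" and "drive operation data" named above:

```python
def assemble_camera_driver(generic_driver, target_config):
    """Overlay the differentiated driving info onto the generic driver.

    Per the description, only the power-on data and driving operation data
    differ between cameras; everything else comes from the generic driver
    shared by the camera interface. The generic driver is not mutated.
    """
    driver = dict(generic_driver)  # shared, interface-level parts
    driver["power_on"] = target_config["drive_power_on"]
    driver["operate"] = target_config["drive_operate"]
    return driver
```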
Optionally, the method further comprises:
If a first boot instruction of the camera is received, executing the steps of controlling the communication bus to power on through the virtual camera according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered communication bus;
If a second boot instruction of the camera is received, acquiring the historical target configuration information corresponding to each camera interface, and executing the step of driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information;
The first boot instruction represents a boot in which the kernel of the camera device is powered off and then powered on (a cold boot), and the second boot instruction represents a boot in which the kernel remains continuously powered (a warm boot).
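The cold-boot/warm-boot dispatch described above can be sketched as follows; the string instruction encoding and the callables are assumptions for illustration only:

```python
def boot_camera(instr, cache, full_detect, drive):
    """Dispatch on boot type: cold boot re-detects, warm boot reuses history.

    instr: "cold" (kernel was powered off) or "warm" (kernel stayed powered).
    cache: mapping camera_interface -> historical target configuration.
    full_detect: callable () -> fresh mapping of target configurations,
                 standing in for the one-shot power-on detection flow.
    drive: callable (interface, config) driving the mounted target camera.
    """
    if instr == "cold" or not cache:
        cache = full_detect()  # one-shot power-on and identifier detection
    for interface, config in cache.items():
        drive(interface, config)
    return cache
```

On a warm boot the cached historical configuration is reused, so detection is skipped entirely; a cold boot repeats the single-power-on detection once.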
According to a second aspect of the present application, there is provided a camera driving apparatus comprising:
The integration module is used for integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, the device tree being created according to the camera configuration information compatible with the camera interface;
the access module is used for controlling the communication bus to be electrified through the virtual camera according to the electrifying parameters, and accessing each element identification address corresponding to the virtual camera based on the electrified communication bus;
The determining module is used for determining target configuration information according to the degree of matching of the second element identifier in the configuration information corresponding to the camera interface and the first element identifier acquired by access, wherein the first element identifier is determined according to a target camera actually carried by the camera interface;
And the driving module is used for driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information.
According to a third aspect of the present application, there is provided a vehicle comprising:
one or more processors;
a memory; and
One or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to implement any of the above camera driving methods.
According to a fourth aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program, the computer program being loaded by a processor to perform the steps of any of the camera driving methods.
According to the camera driving method, device, vehicle, and storage medium provided by the application, a virtual camera is obtained by integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements, the device tree being created according to the camera configuration information compatible with the camera interface. The virtual camera controls a communication bus to power on according to the power-on parameters, and each element identification address corresponding to the virtual camera is accessed based on the powered communication bus. Target configuration information is determined according to the degree of matching between the first element identifiers obtained by the access and the second element identifiers in the configuration information corresponding to the camera interface, the first element identifiers being determined according to the target camera actually carried by the camera interface. The target camera actually carried by the camera interface is then driven according to the target driving information corresponding to the target configuration information. Since the virtual camera is built from the camera configuration information corresponding to the camera interface, a single power-on suffices to acquire in one pass the first element identifiers corresponding to the target camera actually carried by the camera interface; the camera configuration information corresponding to the camera interface is then checked against the first element identifiers to identify the target configuration information corresponding to the target camera. This achieves fast driver detection for the target camera and improves camera driving efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the application and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
For a more complete understanding of the present application and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts throughout the following description.
Fig. 1 is a schematic view of a camera driving method according to an embodiment of the present application;
FIG. 2 is a flow chart of an embodiment of a camera driving method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of one of the structures of the virtual camera in one embodiment of the camera driving method;
fig. 4 is a schematic flow chart of one embodiment of the first element identifier acquisition in the camera driving method according to the embodiment of the present application;
Fig. 5 is a schematic structural diagram of one embodiment of a memory unit in a camera driving method according to an embodiment of the present application;
fig. 6 is a schematic flow chart of one embodiment of determining target configuration information in a camera driving method according to an embodiment of the present application;
Fig. 7 is a schematic flow chart of an embodiment of a camera driving method according to an embodiment of the present application;
FIG. 8 is a schematic view of an embodiment of a camera driving apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural view of an embodiment of a vehicle provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the embodiment of the present application, "and/or" describes the association relationship of the association object, which means that three relationships may exist, for example, a and/or B may be represented: a exists alone, A and B exist together, and B exists alone. The character "/", unless otherwise specified, generally indicates that the associated object is an "or" relationship.
In the present application, the term "exemplary" is used to mean "serving as an example, instance, or illustration. Any embodiment described as "exemplary" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In recent years, with the continuous development of industries such as mobile phones, security, machine vision, and vehicle intelligence, demand for cameras has grown explosively, and differentiated CMOS sensor models of all kinds keep emerging. To ensure a stable supply and convenient hardware upgrades, the same platform is typically made redundant and compatible with camera devices of different specifications; even for the same specification, several suppliers are usually kept available.
For the purpose of autonomous control, most OEM platforms choose an open-source Linux/Android system. Multiple camera drivers are added in Linux/Android, each generally independent. Each driver configures the supplies it needs, such as MCLK, VANA, VDIG, VIO, VAF, RESET, and STANDBY, and sets the device identification model of the CMOS sensor, AF, and flash/RGB IC. For signals that must travel long distances, MIPI, deserializer (DeSer), serializer (Ser), and ISP devices are also used and are likewise provided in each driver. On power-up, each driver powers on according to its own power-on configuration, then compares, one by one, the bus addresses and identification addresses of camera elements such as the CMOS sensor, AF, flash/RGB IC, DeSer, Ser, and ISP, and reads the identification content. The driver that reads and identifies correctly is registered in the system.
The above technology has the following defects: each device needs its own device tree, and each newly added device needs a new one; each device detection requires one power-on and power-off cycle; camera elements such as the CMOS sensor, AF, flash/RGB IC, DeSer, Ser, and ISP have different bus addresses and device identification addresses, so identification is performed many times; and the driver must probe every possibility one by one, so the device actually mounted on the host may be the last one detected.
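For contrast, the prior-art detection flow criticized here can be sketched as a probe loop in which every candidate driver costs its own power cycle; the callables and data shapes are illustrative assumptions:

```python
def legacy_probe(drivers, power_on, power_off, read_id):
    """Prior-art style detection: try every candidate driver in turn.

    Each candidate is powered on with its own rails, its expected element
    identifiers are compared one by one, then power is removed again -- so
    N candidate drivers cost up to N power cycles before a match is found.
    Returns the matching driver (or None) and the number of power cycles.
    """
    cycles = 0
    for drv in drivers:
        power_on(drv["power_params"])
        cycles += 1
        ok = all(read_id(addr) == ident
                 for addr, ident in drv["expected_ids"].items())
        power_off(drv["power_params"])
        if ok:
            return drv, cycles
    return None, cycles
```

If the actually mounted device corresponds to the last candidate, every earlier candidate is power-cycled first, which is exactly the startup-latency problem the virtual-camera approach removes.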
In summary, configuring newly added elements in the prior art is cumbersome, and as the variety of camera peripherals grows, the startup time of later-added devices keeps increasing. When the host must be compatible with tens or hundreds of device combinations, boot time may increase by several seconds.
In automotive and other IoT scenarios, the host and camera are often separate, and the camera must later be compatible with a large number of element models. With the prior art, in scenarios that require a quick response (such as vehicle startup or reversing), a delay of a few seconds severely degrades the user experience.
Accordingly, embodiments of the present application provide a camera driving method, apparatus, device, and computer-readable storage medium, which are described in detail below.
The camera driving method in the embodiment of the invention is applied to a camera driving device, the camera driving device is arranged on a vehicle, one or more processors, a memory and one or more application programs are arranged in the vehicle, and the one or more application programs are stored in the memory and are configured to be executed by the processors to realize the camera driving method.
As shown in fig. 1, fig. 1 is a schematic view of a camera driving method according to an embodiment of the present application, where a camera driving scene includes a vehicle 100 (a camera driving device is integrated in the vehicle 100), and a computer readable storage medium corresponding to the camera driving method is executed in the vehicle 100 to execute steps in the camera driving method.
The vehicle 100 in the embodiment of the invention is mainly used for: integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, the device tree being created according to the camera configuration information compatible with the camera interface; controlling a communication bus to power on through the virtual camera according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered communication bus; determining target configuration information according to the degree of matching between the first element identifiers obtained by the access and the second element identifiers in the configuration information corresponding to the camera interface, the first element identifiers being determined according to the target camera actually carried by the camera interface; and driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information.
It will be appreciated by those skilled in the art that the application environment shown in Fig. 1 is merely one application scenario of the present application and does not limit it; other application environments may include more or fewer vehicles than shown in Fig. 1, or a vehicle network connection relationship. For example, although only one vehicle is shown in Fig. 1, the scenario of the camera driving method may include one or more other vehicles, which is not specifically limited herein. The vehicle 100 may further include a memory for storing data, for example, image information obtained by photographing.
Further, a display device may be provided on the vehicle 100 in the scene of the camera driving method of the present application for outputting the result of the execution of the camera driving method in the vehicle. The vehicle 100 may access a background database 200 (the background database may be in a local memory of the vehicle, and the background database may also be disposed in the cloud), where the background database 200 stores information related to camera driving, for example, data corresponding to the camera driving of the vehicle is stored in the background database 200.
It should be noted that, the schematic view of the camera driving method shown in fig. 1 is only an example, and the scene of the camera driving method described in the embodiment of the present invention is for more clearly describing the technical solution of the embodiment of the present invention, and does not constitute a limitation on the technical solution provided by the embodiment of the present invention.
Based on the above-mentioned scene of the camera driving method, embodiments of the camera driving method are presented.
Referring to fig. 2, a flowchart of an embodiment of a camera driving method according to the present application includes steps S201 to S204:
S201, integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, wherein the device tree is created according to the camera configuration information compatible with the camera interface.
The virtual camera comprises element identification addresses and all power-on parameters of all camera elements corresponding to the camera interface;
The camera interface is the interface to which camera devices are connected on a host device such as a mobile phone or a vehicle.
In the embodiment of the application, for each camera interface, a device tree is built and an association relationship between the device tree and camera configuration information corresponding to the camera interface is created, the device tree comprises camera configuration information compatible with the camera interface, and a virtual camera is correspondingly created for each device tree, wherein the virtual camera comprises identification addresses of all camera elements corresponding to the camera interface and all power-on parameters.
Specifically, in the embodiment of the application, when driver detection is performed for a camera interface, the device tree corresponding to that interface is acquired, and all element identification addresses and all power-on parameters are extracted from the camera configuration information bound to the device tree. All element identification addresses are merged so that duplicates among them are kept only once, reducing the number of element identification addresses; all power-on parameters are likewise merged so that duplicates are kept only once. The element identification addresses and power-on parameters obtained after this merging form the virtual camera corresponding to the device tree, so that the virtual camera characterizes all element identification addresses and all power-on parameters corresponding to the camera interface.
S202, controlling the communication bus to power on through the virtual camera according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered communication bus.
Specifically, in the embodiment of the application, after the virtual camera is constructed, all power-on parameters in the virtual camera are extracted, and the power required by the communication bus is determined according to those parameters so as to control the power-on of the communication bus, thereby achieving a single power-on.
Further, after power-on, each element identification address corresponding to the virtual camera is accessed through the communication bus, and the first element identifier corresponding to each element identification address is read in one pass from the storage unit attached to the communication bus.
That is, for any given address, a first element identifier may or may not be read.
S203, determining target configuration information according to the degree of matching of the second element identifier in the configuration information corresponding to the camera interface and the first element identifier obtained through access, wherein the first element identifier is determined according to a target camera actually carried by the camera interface.
Specifically, in the embodiment of the present application, a storage unit is connected to the communication bus, and it stores all camera element identifiers and element identification addresses of the target camera actually carried by each camera interface. That is, if an accessed element identification address has element identifier content stored at it, the stored content is taken as a first element identifier. After all first element identifiers are obtained, they are compared with the second element identifiers in the configuration information corresponding to the camera interface, and the target configuration information matching the first element identifiers is determined.
S204, driving the target camera actually mounted on the camera interface according to the target driving information corresponding to the target configuration information.
In one embodiment of the present application, the target driving information is the complete target driver for the target camera actually mounted on the camera interface, and the target camera may be driven directly according to that target driver.
Specifically, in another embodiment of the present application, the target driving information is the differential driving information of the target camera actually mounted on the camera interface. The differential driving information may be combined with the general driving information corresponding to the camera interface into a complete target driver, and the target camera is driven according to that target driver.
It can be understood that some driver code is identical across different cameras, and the differential driving information mainly consists of power-on information and driver operation information. Therefore, in the embodiment of the application, the differential information and the power-on information in the target configuration information are taken as the target driving information and combined with the general driving information corresponding to the camera interface to obtain the complete target driver.
In this scheme, a virtual camera is constructed by merging all element identification addresses and power-on parameters corresponding to each camera interface. One-time power-up of the communication bus is performed according to the power-on parameters, and the element identifiers at all merged addresses are read in a single pass to determine the first element identifiers of the mounted target camera. The camera configuration information corresponding to the camera interface is then checked against the first element identifiers to identify the target configuration information of the target camera, completing the driver for the target camera. Rapid driver detection of the target camera is thereby achieved, and camera driving efficiency is improved.
Further, in another embodiment of the present application, there is also provided an embodiment of virtual camera determination, specifically including the steps of:
(1) Acquiring configuration information of each camera bound with a device tree;
(2) Extracting a camera element identification address of each camera element in each camera configuration information and a power-on parameter in each camera configuration information;
(3) Performing union processing on the extracted element identification addresses and power-on parameters respectively, and generating a virtual camera from the element identification address set and power-on parameter set obtained after the union processing.
The camera configuration information is a driving configuration file bound with the equipment tree, wherein each driving configuration file at least comprises an element identification address of a camera element and a power-on parameter of the camera element.
In the embodiment of the application, the configuration information of all cameras under the same device tree is merged to form a virtual camera, which comprises at least a power-on parameter set and an element identification address set. The virtual camera is only used for identifying the identifiers of the elements contained in a camera; it does not perform any actual capture work. Only the communication bus of the device is powered, and the memory with the unified storage specification in the camera is read directly. If the device conforms to the unified specification, the identifier information of all camera elements can be obtained with a single read, improving driver detection efficiency.
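The union-merge that produces the virtual camera can be sketched as follows (a minimal illustration only; the dictionary layout and the name `build_virtual_camera` are assumptions, not the patent's implementation):

```python
# Hypothetical sketch of virtual-camera construction: take the union of
# element identification addresses and power-on parameters across all
# camera configurations bound to one device tree.

def build_virtual_camera(camera_configs):
    """Merge per-camera configs into one virtual-camera descriptor."""
    eid_addrs = set()
    power_params = set()
    for cfg in camera_configs:
        eid_addrs |= set(cfg["eid_addrs"])        # element identification addresses
        power_params |= set(cfg["power_params"])  # supplies/rails to power up
    # The virtual camera never captures images; it only drives one
    # bus power-up and one batch read of identifier addresses.
    return {"eid_addrs": sorted(eid_addrs), "power_params": sorted(power_params)}

cam0 = {"eid_addrs": [0x50, 0x51], "power_params": ["VDD_IO", "VDD_BUS"]}
cam1 = {"eid_addrs": [0x50, 0x52], "power_params": ["VDD_BUS"]}
virtual = build_virtual_camera([cam0, cam1])
```

Because the merge is a set union, overlapping addresses and shared power rails collapse to a single entry, which is what makes one power-up and one read pass sufficient.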
For example, referring to fig. 3, fig. 3 is a schematic structural diagram of one embodiment of a camera driving method for constructing the virtual camera.
The camera interface is compatible with cameras cam0, cam1 and cam2. The CMOS sensor (Complementary Metal-Oxide Semiconductor, digital image sensor), flashlight (flash lamp), ISP (Image Signal Processor), Ser (Serializer) and element identification memory address (EID) in each camera constitute that camera's configuration information. In the embodiment of the application, all power-on parameters and camera elements in cam0, cam1 and cam2 are extracted and merged to obtain a virtual camera.
Further, in the embodiment of the present application, the present application further provides an embodiment of first element identifier reading, specifically referring to fig. 4, fig. 4 is a schematic flow chart of one embodiment of first element identifier acquisition in the camera driving method provided in the embodiment of the present application, and specifically includes steps S401 to S402:
S401, accessing the storage unit connected with the communication bus based on the powered-up communication bus, and sequentially reading the element identification addresses of the virtual camera in the storage unit.
Specifically, in the implementation process, the storage unit connected with the communication bus is accessed based on the powered-up communication bus. It can be understood that, because the power-on parameters of the communication bus cover all power-on parameters of the camera interface, only one power-up of the communication bus is needed, and the corresponding element identifiers in the storage unit can be read in a single pass. All subsequent driving-information detection is then completed from that read, repeated power cycling is avoided, and driver detection efficiency is improved.
S402, taking the element identifier obtained from the element identifier address as the first element identifier.
Specifically, in the embodiment of the present application, the element identification addresses of the virtual camera in the storage unit are read sequentially. If an identifier is read at an address, the read identifier is recorded; if not, reading continues with the next element identification address until all element identification addresses have been read. The recorded identifiers are taken as the first element identifier, that is, the first element identifier may comprise several element identifiers.
In other embodiments of the present application, if no element identifier is read until all the element identifier addresses are read, the existing reading mode may be adopted to re-read, so as to ensure the compatibility of the reading mode.
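The read loop of steps S401-S402 can be sketched as follows (a minimal illustration; `read_first_element_ids` and the dictionary-based storage model are assumptions, not the patent's implementation):

```python
# Hypothetical sketch of the single-pass identifier read (S401-S402):
# iterate over all element identification addresses in the virtual camera
# and record every identifier that the storage unit actually returns.

def read_first_element_ids(storage, eid_addrs):
    """storage maps address -> identifier (or None when nothing is stored)."""
    first_ids = []
    for addr in eid_addrs:
        ident = storage.get(addr)
        if ident is not None:
            first_ids.append(ident)  # record the identifier that was read
        # otherwise continue with the next address
    return first_ids  # may contain several identifiers, or be empty

storage_unit = {0x50: "CMOS_A", 0x51: None, 0x52: "SER_B"}
ids = read_first_element_ids(storage_unit, [0x50, 0x51, 0x52])
```

An empty result corresponds to the fallback case described above, where re-reading in the conventional mode preserves compatibility.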
Specifically, in one embodiment of the present application, there is also provided a method for determining storage contents in a storage unit, including the steps of:
(1) For each camera interface, acquiring a camera element identifier of each camera element in a target camera actually carried by the camera interface and an element identifier address corresponding to the camera element identifier;
(2) And associating the camera element identification with an element identification address corresponding to the camera element identification, and storing the camera element identification and the element identification address in a storage unit connected with the communication bus.
It may be understood that a camera interface is mounted with one target camera at a time, but different compatible cameras may be inserted, and each compatible camera has corresponding camera configuration information. The camera configuration information includes the physical port information of each camera (such as 0), where the physical port information is the interface identifier of the camera interface, and each piece of camera configuration information includes, but is not limited to: the power-on parameters, the device identifier read addresses, the content expected at each device identifier address (the second element identifier), the operation mode of the device driver, and the like.
For each camera interface, the target camera actually mounted on it is acquired; the camera element identifier of each camera element of the actually mounted camera is taken as a first element identifier, and it is stored in the storage unit in association with the element identification address corresponding to that camera element identifier.
In the actual application process, a virtual camera corresponding to all element identification addresses of the camera interface is created, and the storage unit is accessed according to all element identification addresses in the virtual camera, so that the first element identifiers of the camera elements compatible with the camera interface can be obtained at once. The read first element identifiers characterize the camera elements of the actually mounted target camera. After the first element identifiers of the target camera are obtained, element identifier matching can be performed against the camera configuration information corresponding to each camera, and the target configuration information corresponding to the target camera is found, the target configuration information being one of the camera configuration information.
It will be appreciated that the data in the storage unit may be updated by repeating the above procedure at a preset update frequency, updated when the camera mounted on the camera interface is detected to have been replaced, or updated in response to a user instruction.
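As an illustration of the description above, one piece of camera configuration information and the associated storage-unit contents might be modeled as follows (all field names and values are assumptions made for this sketch, not the patent's data layout):

```python
# Illustrative sketch only: a possible shape for one camera's configuration
# information and for populating the storage unit connected to the bus.

cam0_config = {
    "port": 0,                            # physical port of the camera interface
    "power_params": ["VDD_BUS", "VDD_IO"],
    "eid_addrs": [0x50, 0x51],            # device-identifier read addresses
    "expected_ids": ["CMOS_A", "SER_B"],  # second element identifiers
    "driver_mode": "mipi_4lane",          # operation mode of the device driver
}

def populate_storage(storage, mounted_elements):
    """Store each camera element identifier under its associated address."""
    for addr, ident in mounted_elements:
        storage[addr] = ident
    return storage

storage_unit = populate_storage({}, [(0x50, "CMOS_A"), (0x51, "SER_B")])
```

The same `populate_storage` call can serve the update paths above: re-running it with the current mounted elements refreshes the storage unit after a camera replacement.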
In particular, in other embodiments of the present application, camera elements may come from different manufacturers, which may result in inconsistent identifier read modes, identifier lengths, and the like. For this scenario, the present embodiment provides a storage unit for storing the camera element identifiers, specifically including the steps of:
(1) If the read-write modes of the camera element identifiers are different, coding the camera element identifiers according to a preset coding rule;
(2) And storing the coded element identification and the corresponding element identification address in a storage unit connected with the communication bus.
Specifically, in the embodiment of the present application, after the camera element identifiers of the target camera are obtained, if their lengths or read-write modes differ, each camera element identifier is encoded according to a preset encoding rule. The encoded element identifiers and the corresponding element identification addresses are then stored, in association and in the read-write mode of a unified element identifier, in the storage unit connected to the communication bus.
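One possible preset encoding rule, shown purely as an assumption (the patent does not specify one), is a length-prefixed hexadecimal form that makes identifiers of different lengths and read modes uniform in the storage unit:

```python
# Hypothetical encoding rule: normalize raw identifier bytes from different
# vendors into a uniform, self-describing string before writing them to the
# storage unit; decoding on read-back is the exact inverse.

def encode_id(raw: bytes) -> str:
    """Length-prefixed hex: '<2-digit byte count><hex payload>'.
    The 2-digit prefix limits identifiers to 99 bytes, ample for a sketch."""
    return f"{len(raw):02d}" + raw.hex()

def decode_id(encoded: str) -> bytes:
    """Inverse of encode_id: read the length prefix, then un-hex the payload."""
    n = int(encoded[:2])
    return bytes.fromhex(encoded[2:2 + 2 * n])

stored = encode_id(b"\xab\xcd")  # -> "02abcd"
```

The length prefix is what keeps the rule lossless: identifiers of different lengths round-trip exactly, matching the requirement that decoding correspond to the encoding rule.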
In the embodiment of the application, the drivers (camera configuration information) under the same device tree are merged to form a virtual camera. The virtual camera is only used for identifying the identifiers of the elements contained in a camera and does not perform any actual work. Only the communication bus of the device is powered, and the memory with the unified storage specification in the camera is read directly. The storage unit only needs to be read once to obtain the identifier information of all camera elements, improving driver detection efficiency.
For example, referring to fig. 5, fig. 5 is a schematic structural diagram of one embodiment of the storage unit in the camera driving method according to the embodiment of the present application, where the unified bus address is the bus address corresponding to the communication bus, the identification address corresponds to an element identification address, the identification content corresponds to an element identifier, and the element identification address and element identifier in each row correspond to each other.
Specifically, in this embodiment of the present application, after any element identification address is accessed and the element identifier is obtained, the element identifier needs to be decoded to obtain the decoded first element identifier; it can be understood that the decoding method corresponds to the encoding rule.
Further, the present application also provides an embodiment of determining target configuration information, referring to fig. 6, fig. 6 is a schematic flow chart of one embodiment of determining target configuration information in the camera driving method provided by the embodiment of the present application, which specifically includes steps S601 to S602:
S601, according to the first element identification obtained through access, the first element identification is compared with the second element identification in each piece of camera configuration information in sequence.
S602, if all the second element identifiers in any one of the camera configuration information are the same as all the first element identifiers, the camera configuration information is used as the target configuration information.
Specifically, after the first element identifiers are obtained in the embodiment of the present application, they are compared in turn with the second element identifiers in each piece of camera configuration information; the camera configuration information whose second element identifiers are identical to the first element identifiers obtained by reading is determined to be the target configuration information corresponding to the target camera.
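The matching of steps S601-S602 can be sketched as follows (function and field names are assumptions; configuration info is selected only when all of its second element identifiers equal the set of first element identifiers read from the bus):

```python
# Sketch of S601-S602: select as target configuration the camera
# configuration whose second element identifiers all match the first
# element identifiers obtained by access.

def find_target_config(first_ids, camera_configs):
    wanted = set(first_ids)
    for cfg in camera_configs:
        if set(cfg["expected_ids"]) == wanted:  # all second ids == all first ids
            return cfg
    return None  # no match: fall back to a conventional per-camera probe

configs = [
    {"name": "cam0", "expected_ids": ["CMOS_A", "SER_B"]},
    {"name": "cam1", "expected_ids": ["CMOS_C", "SER_B"]},
]
target = find_target_config(["SER_B", "CMOS_A"], configs)
```

Using set equality rather than ordered comparison reflects that the bus read may return identifiers in any address order.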
In one embodiment of the present application, after determining the target configuration information, driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information, the method specifically includes the steps of:
(1) Extracting drive power-on data and drive operation data (namely, operation modes corresponding to the device drive) in the target configuration information to obtain target drive information;
(2) Updating the target driving information into the general driver corresponding to the camera interface to obtain the camera driver corresponding to the camera interface;
(3) Driving the target camera actually mounted on the camera interface according to the camera driver.
Specifically, in the embodiment of the application, the general driver is built into the kernel, so new drivers do not need to be continually added to the kernel as with the traditional approach. The differences between drivers are embodied in differential parameter configurations, and the differential parameters are stored in user space. Therefore, a camera driver can be added at will without upgrading the system, reducing development difficulty and improving development efficiency.
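The split between the generic in-kernel driver and the user-space differential parameters can be sketched as follows (a simplified user-space model; all names are assumptions, not kernel APIs):

```python
# Sketch of the differential-driver idea: a shared generic driver is
# specialized with per-camera power-on and operation data kept in user
# space, so adding a new camera requires no kernel change.

def build_camera_driver(generic_driver, target_config):
    driver = dict(generic_driver)  # shared implementation, copied
    driver["power_on_seq"] = target_config["power_on_seq"]  # differential part
    driver["ops_mode"] = target_config["ops_mode"]          # differential part
    return driver

generic = {"probe": "v4l2_generic_probe", "stream": "v4l2_generic_stream"}
cfg = {"power_on_seq": ["VDD_IO", "VDD_CORE"], "ops_mode": "mipi_4lane"}
driver = build_camera_driver(generic, cfg)
```

Supporting an additional camera then amounts to shipping a new `cfg` dictionary, while `generic` stays untouched, mirroring the text's claim that drivers can be added without a system upgrade.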
Further, in one embodiment of the present application, a method for driving the camera according to different startup modes is also provided. Specifically, if a first startup instruction of the camera is received, the power-on parameters corresponding to the device tree are integrated with the element identification addresses of the camera elements to obtain a virtual camera, and the steps of controlling power-up of the communication bus through the virtual camera according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered-up communication bus, are executed.
In other embodiments of the present application, if a second startup instruction of the camera is received, the power-on parameters corresponding to the device tree are integrated with the element identification addresses of the camera elements to obtain a virtual camera, the historical target configuration information corresponding to each camera interface is obtained, and the step of driving the target camera actually mounted on the camera interface according to the target driving information corresponding to the target configuration information is executed.
Specifically, the first startup instruction represents startup with the kernel powered off and then powered on (a cold boot of the camera device), and the second startup instruction represents startup with the kernel remaining continuously powered (a warm restart of the camera device).
It can be appreciated that the historical target configuration information corresponding to each camera interface may be determined by obtaining the detection record corresponding to each camera interface.
Specifically, referring to fig. 7, fig. 7 is a schematic flow chart of one embodiment of a camera driving method according to an embodiment of the present application, which specifically includes the steps of:
1. When the system starts, the user space (the CameraHAL service of the Android system) calls the character device cdev through the device node /dev/videoX to enter kernel space.
2. Kernel space calls the platform's V4L2 driver and enters the camera device detection flow.
3. The user space classifies by device tree, traverses all driver configuration files corresponding to the first device tree (the driver configuration files are the camera configuration information described throughout), takes the union of the power-on parameters and camera element identification addresses in those configuration files, and generates a virtual camera device faker_cam that does not physically exist in the system.
4. Detection of the virtual camera device calls the general driver implemented by the present method, powers up only the supply required by the communication bus, and starts identifier reading (that is, accesses each element identification address corresponding to the virtual camera based on the powered-up communication bus).
5. During identifier reading, the memory with the unified storage specification in the camera is read first. If it is read, the read identifiers are recorded in the kernel and the camera is powered down (only the bus was powered).
6. If no identifier is read from the unified-specification storage, other customized, non-standard identifier locations are searched and, if present, read; the read identifiers are recorded in the kernel and the camera is powered down (only the bus was powered).
7. If neither the unified-specification storage nor the customized non-standard identifiers can be read, the camera is powered down directly (only the bus was powered) and the identifier record is left empty, indicating that no driver was detected under this device tree; a mark is generated in the kernel recording that the current device tree has been detected.
8. When identifiers are read, the identifier information is passed back to user space, all driver configuration files under the device tree are traversed, and matching is performed against the identifier information (that is, the target configuration information is determined according to the degree of matching between the first element identifiers obtained by access and the second element identifiers in each piece of configuration information corresponding to the camera interface).
9. When a driver identifier matches, the complete driver configuration corresponding to the identifier is passed from user space to kernel space, and the general driver loads the driver configuration to instantiate the driver for the current device tree (that is, the target camera actually mounted on the camera interface is driven according to the target driving information corresponding to the target configuration information).
10. Driver instantiation is completed, a successful detection result is returned to the platform V4L2 framework driver, and the platform V4L2 framework driver registers the device with the V4L2 core layer.
11. If no device is detected under the current device tree, the flow jumps to the next device tree's driver set and continues detection until the driver sets of all device trees corresponding to the driver configuration files have been detected.
It can be understood that the above embodiment corresponds to the flow of the first startup instruction, that is, the kernel power-off-then-power-on (cold boot) process.
It can be appreciated that in some implementation scenarios the kernel remains powered, which corresponds to the second startup instruction; the camera driving method in that scenario specifically comprises the following steps:
1. Steps 1-3 of the cold-boot driver identification flow above are executed first; for a device tree that already has an identifier record, steps 4-8 are skipped and execution starts directly from step 9 to complete device registration.
2. If the device tree has no identifier record but carries a detected mark, steps 4-10 are skipped and step 11 is executed directly: no device exists under the current device tree.
3. If the device tree has neither an identifier record nor a detected mark, re-detection is performed in sequence according to steps 4-11.
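The cold-boot and warm-restart branches above can be sketched as follows (a highly simplified model; the function names, record format, and return values are all assumptions made for illustration):

```python
# Sketch of the two startup paths: a cold boot (first startup instruction)
# runs the full bus read and matching; a warm restart (second instruction)
# reuses identifier records kept in the still-powered kernel.

def probe_bus(device_tree):
    # Placeholder for the single-pass identifier read described above;
    # returns the matched camera name, or None if nothing is mounted.
    return {"tree0": "cam0"}.get(device_tree)

def startup(device_tree, kernel_records, cold_boot):
    if not cold_boot and device_tree in kernel_records:
        recorded = kernel_records[device_tree]
        if recorded is None:
            return "no_device"         # detected before, nothing mounted
        return f"register:{recorded}"  # skip probing, register directly
    # Cold boot (or no record yet): full detection, then cache the result.
    ident = probe_bus(device_tree)     # one bus power-up + batch read
    kernel_records[device_tree] = ident
    return f"register:{ident}" if ident else "no_device"

records = {}
first = startup("tree0", records, cold_boot=True)    # full detection
second = startup("tree0", records, cold_boot=False)  # uses cached record
```

The warm path never touches `probe_bus`, which models the text's claim that a service restart recovers the camera by matching recorded identifiers without re-identification.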
Through this rapid camera driver identification and registration method, the power-up, read and identify steps are streamlined, the driver identification and registration time is reduced, and the time consumed by camera driving at startup and at service restart is saved to the greatest extent.
In this solution, the device tree is no longer hard-bound to a single driver. Each device tree is an abstract device shared by multiple drivers (that is, each device tree covers all cameras compatible with a camera interface and corresponds to all camera configuration information of that interface), and the recognized device identifier determines which driver the device tree is bound to (binding the target configuration information). Adding or replacing a driver does not require adding device trees, which is more efficient and more stable than the traditional one-by-one approach. Even if many kinds of drivers with different power-up logic exist under one device tree, only the communication bus needs to be powered during identification. In the embodiment of the application, the driver identification mode is uniform: the device identifiers can be detected with a single bus read, and camera elements such as the CMOS sensor, AF, flashlight/RGB IC, DeSer, Ser and ISP do not need to be probed one by one. Compared with the traditional one-by-one detection mode, the startup time is shorter and the camera opens faster.
Furthermore, the embodiment of the application constructs the virtual camera based on the generic Linux/Android V4L2 camera framework. Arm platforms on the market all base their camera driver customization on the V4L2 framework, so the scheme is highly general and can easily be ported across platforms. With this scheme, all device identifiers are recorded in the kernel after the device starts for the first time; as long as the kernel is not powered off, the identifier information is not cleared. When the host restarts or the camera service crashes, device registration can be completed by directly matching the recorded identifiers without re-identification, greatly shortening the time from restarting the camera service to recovering the camera compared with the traditional scheme.
Compared with the traditional approach, no matter how many candidate drivers exist for the current device tree, the same device tree requires only one device detection pass, which greatly improves driver detection efficiency.
In order to better implement the camera driving method according to the embodiment of the present application, based on the camera driving method, the embodiment of the present application further provides a camera driving apparatus, as shown in fig. 8, where the camera driving apparatus includes modules 801-804:
An integration module 801, configured to integrate the power-on parameter corresponding to the equipment tree with the element identification address of the camera element to obtain a virtual camera, where the equipment tree is created according to each camera configuration information compatible with the camera interface;
An access module 802, configured to control, by using the virtual camera, power up of a communication bus according to the power-up parameter, and access each element identification address corresponding to the virtual camera based on the powered up communication bus;
A determining module 803, configured to determine target configuration information according to the degree of matching between the first element identifier obtained by access and the second element identifier in each piece of configuration information corresponding to the camera interface, where the first element identifier is determined by the target camera actually mounted on the camera interface;
the driving module 804 is configured to drive a target camera actually mounted on the camera interface according to target driving information corresponding to the target configuration information.
In some embodiments of the present application, an integration module 801, configured to integrate a power-on parameter corresponding to a device tree and an element identification address of a camera element to obtain a virtual camera, includes:
acquiring configuration information of each camera bound with a device tree;
Extracting a camera element identification address of each camera element in each camera configuration information and a power-on parameter in each camera configuration information;
And respectively carrying out union processing on the extracted element identification address and the power-on parameter, and generating a virtual camera according to the element identification address set and the power-on parameter set after the union processing.
In some embodiments of the present application, the accessing module 802, configured to access each of the element identification addresses corresponding to the virtual camera based on the communication bus after power-up, includes:
Accessing a storage unit connected with the communication bus based on the communication bus after power-on, and sequentially reading the element identification addresses in the virtual camera in the storage unit;
and taking the element identifier obtained from the element identifier address as the first element identifier.
In some embodiments of the application, the access module 802 is further configured to:
For each camera interface, acquiring a camera element identifier of each camera element in a target camera actually carried by the camera interface and an element identifier address corresponding to the camera element identifier;
And associating the camera element identification with an element identification address corresponding to the camera element identification, and storing the camera element identification and the element identification address in a storage unit connected with the communication bus.
In some embodiments of the present application, the accessing module 802, configured to associate the camera element identifier with an element identifier address corresponding to the camera element identifier, and store the element identifier address in a storage unit connected to the communication bus, includes:
If the read-write modes of the camera element identifiers are different, coding the camera element identifiers according to a preset coding rule;
storing the coded element identifier and the corresponding element identifier address in a storage unit connected with the communication bus in an associated manner;
The method further comprises the steps of:
if any element identifier is stored in any element identifier address, acquiring the element identifier, decoding the element identifier, and obtaining a decoded first element identifier.
In some embodiments of the present application, the determining module 803, configured to determine the target configuration information according to the degree of matching between the first element identifier obtained by access and the second element identifier in each piece of configuration information corresponding to the camera interface, includes:
according to the first element identification obtained by access, comparing the first element identification with the second element identification in each camera configuration information in turn;
and if all the second element identifiers in any one of the camera configuration information are the same as all the first element identifiers, taking the camera configuration information as the target configuration information.
In some embodiments of the present application, the driving module 804, configured to drive the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information, includes:
Extracting drive power-on data and drive operation data in the target configuration information to obtain target drive information;
updating the target driving information into the general driver corresponding to the camera interface to obtain the camera driver corresponding to the camera interface;
and driving the target camera actually mounted on the camera interface according to the camera driver.
In some embodiments of the application, the apparatus further comprises an instruction execution module for:
If a first starting-up instruction of the camera is received, executing the steps of controlling a communication bus to be electrified through the virtual camera according to the electrifying parameters, and accessing each element identification address corresponding to the virtual camera based on the electrified communication bus;
If a second starting-up instruction of the camera is received, acquiring historical target configuration information corresponding to each camera interface, and executing the target driving information corresponding to the target configuration information to drive a target camera actually carried by the camera interface;
The first startup instruction represents startup with the kernel powered off and then powered on, and the second startup instruction represents startup with the kernel remaining continuously powered.
In the provided camera driving apparatus, the integration module integrates the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, the device tree being created according to the camera configuration information compatible with the camera interface. The access module controls power-up of the communication bus through the virtual camera according to the power-on parameters and accesses each element identification address corresponding to the virtual camera based on the powered-up communication bus. The determining module determines the target configuration information according to the degree of matching between the first element identifiers obtained by access and the second element identifiers in the configuration information corresponding to the camera interface, the first element identifiers being determined by the target camera actually mounted on the camera interface. The driving module drives the target camera actually mounted on the camera interface according to the target driving information corresponding to the target configuration information. In this way, a virtual camera is constructed from the camera configuration information corresponding to the camera interface; with one power-up according to the integrated information in the virtual camera, the first element identifiers of the actually mounted target camera can be identified at once; the camera configuration information corresponding to the camera interface is then checked against the first element identifiers to identify the target configuration information of the target camera, achieving rapid driver detection of the target camera and improving camera driving efficiency.
Further, it is understood that other embodiments of the present application also provide a vehicle incorporating any of the camera driving apparatuses provided in the embodiments of the present application, the vehicle including:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the steps of the camera driving method described in any of the above embodiments.
In connection with the above embodiments, in some embodiments of the application, the processor(s) and the memory of the vehicle are integrated on a circuit board provided in the vehicle.
It will be appreciated that in other embodiments of the application, the processor and memory may not be integrated on a single circuit board; that is, the processor and the memory may each be provided in the vehicle as separate components.
As shown in fig. 9, fig. 9 is a schematic structural diagram of an embodiment of the vehicle provided in an embodiment of the present application.
Specifically, the vehicle may include components such as a processor 1001 with one or more processing cores, a memory 1002 with one or more computer-readable storage media, a power supply 1003, and an input unit 1004. Those skilled in the art will appreciate that the vehicle structure shown in fig. 9 does not limit the vehicle, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
The processor 1001 is the control center for camera driving. It connects the various parts of the entire vehicle using various interfaces and lines, and performs the various functions of the vehicle and processes data by running or executing software programs and/or modules stored in the memory 1002 and calling data stored in the memory 1002, thereby monitoring the vehicle as a whole. The processor 1001 may include one or more processing cores. Preferably, the processor 1001 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1001.
The memory 1002 may be used to store software programs and modules, and the processor 1001 performs various functional applications and data processing by executing the software programs and modules stored in the memory 1002. The memory 1002 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the vehicle, etc. In addition, the memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1002 may also include a memory controller to provide the processor 1001 with access to the memory 1002.
In some embodiments of the application, the camera driving apparatus may be implemented in the form of a computer program executable on a vehicle as shown in fig. 9. The memory of the vehicle may store the various program modules constituting the camera driving apparatus, such as the integration module 801, the access module 802, the determining module 803, and the driving module 804 shown in fig. 8. The computer program constituted by these program modules causes the processor to execute the steps of the camera driving method of the embodiments of the present application described in this specification.
For example, the vehicle shown in fig. 9 may perform step S201 through the integration module 801 of the camera driving apparatus shown in fig. 8, step S202 through the access module 802, step S203 through the determining module 803, and step S204 through the driving module 804. The vehicle includes a processor, a memory, and a network interface connected by a system bus. The processor of the vehicle is configured to provide computing and control capabilities. The memory of the vehicle includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the vehicle is used to communicate with an external device through a network connection. The computer program, when executed by the processor, implements the camera driving method.
The vehicle further includes a power supply 1003 for powering the various components. Preferably, the power supply 1003 is logically connected to the processor 1001 through a power management system, so that charging, discharging, and power consumption management functions are performed through the power management system. The power supply 1003 may also include one or more of a direct-current or alternating-current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The vehicle may also include an input unit 1004, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the vehicle may further include a display unit and the like, which are not described herein. Specifically, in this embodiment, the processor 1001 in the vehicle loads the executable files corresponding to the processes of one or more application programs into the memory 1002 according to the following instructions, and the processor 1001 runs the application programs stored in the memory 1002, thereby implementing the following functions:
integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, wherein the device tree is created according to the camera configuration information compatible with the camera interface;
controlling, through the virtual camera, a communication bus to be powered on according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered-on communication bus;
determining target configuration information according to the degree of matching between a first element identifier obtained by the access and a second element identifier in the configuration information corresponding to the camera interface, wherein the first element identifier is determined according to a target camera actually carried by the camera interface;
and driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information.
Those of ordinary skill in the art will appreciate that all or some of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present invention provide a computer-readable storage medium, which may include: a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk, and the like. A computer program is stored thereon, and the computer program is loaded by a processor to perform the steps of any of the camera driving methods provided by the embodiments of the present invention. For example, the computer program loaded by the processor may perform the following steps:
integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, wherein the device tree is created according to the camera configuration information compatible with the camera interface;
controlling, through the virtual camera, a communication bus to be powered on according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered-on communication bus;
determining target configuration information according to the degree of matching between a first element identifier obtained by the access and a second element identifier in the configuration information corresponding to the camera interface, wherein the first element identifier is determined according to a target camera actually carried by the camera interface;
and driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information.
In the foregoing embodiments, each embodiment is described with its own emphasis; for portions of an embodiment that are not described in detail, reference may be made to the detailed descriptions of the other embodiments, which are not repeated here.
In implementation, each of the above units or structures may be implemented as an independent entity, or may be combined arbitrarily and implemented as the same entity or as several entities; for the implementation of each unit or structure, reference may be made to the foregoing method embodiments, which are not repeated here.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
The camera driving method, apparatus, vehicle, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples have been applied herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is intended only to aid understanding of the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application in light of the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (11)
1. A camera driving method, characterized by comprising:
integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, wherein the device tree is created according to the camera configuration information compatible with the camera interface;
controlling, through the virtual camera, a communication bus to be powered on according to the power-on parameters, accessing the element identification address corresponding to the virtual camera based on the powered-on communication bus, and acquiring a first element identifier;
determining target configuration information according to the degree of matching between the first element identifier and a second element identifier in the configuration information corresponding to the camera interface, wherein the first element identifier is determined according to a target camera actually carried by the camera interface;
and driving the target camera actually carried by the camera interface according to target driving information corresponding to the target configuration information.
2. The camera driving method according to claim 1, wherein the integrating the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain the virtual camera comprises:
acquiring each piece of camera configuration information bound to the device tree;
for each piece of camera configuration information, extracting the camera element identification address and the power-on parameter of each camera element in the camera configuration information;
and performing union processing on the extracted element identification addresses and power-on parameters, respectively, to generate the virtual camera.
3. The camera driving method according to claim 1, wherein the accessing the element identification address corresponding to the virtual camera based on the powered-on communication bus and acquiring the first element identifier comprises:
accessing a storage unit connected to the communication bus based on the powered-on communication bus, and sequentially reading the element identification addresses in the virtual camera from the storage unit;
and taking the element identifier obtained from each element identification address as the first element identifier.
4. The camera driving method according to claim 3, further comprising:
for each camera interface, acquiring the camera element identifier of each camera element in the target camera actually carried by the camera interface, and the element identification address corresponding to the camera element identifier;
and associating the camera element identifier with the element identification address corresponding to the camera element identifier, and storing them in the storage unit connected to the communication bus.
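The association step in claim 4 can be pictured as writing ID/address pairs into a bus-attached storage unit (for example an EEPROM). The sketch below is an in-memory stand-in with hypothetical names; a real system would perform the writes over the communication bus.

```python
# Hypothetical in-memory stand-in for a storage unit attached to the
# communication bus; a real system would write to an EEPROM over the bus.

class StorageUnit:
    def __init__(self):
        self._cells = {}

    def store(self, address, element_id):
        # Associate the element ID with its element identification address.
        self._cells[address] = element_id

    def read(self, address):
        # Return the element ID stored at the address, or None if empty.
        return self._cells.get(address)

def register_interface(storage, mounted_elements):
    """For each camera element actually mounted on the interface, store its
    ID at the corresponding element identification address."""
    for address, element_id in mounted_elements.items():
        storage.store(address, element_id)
```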
5. The camera driving method according to claim 4, wherein the associating the camera element identifier with the element identification address corresponding to the camera element identifier and storing them in the storage unit connected to the communication bus comprises:
if the read-write modes of the camera element identifiers differ, encoding the camera element identifiers according to a preset encoding rule;
and storing each encoded element identifier in association with its corresponding element identification address in the storage unit connected to the communication bus;
the method further comprising:
if any element identifier is stored at any element identification address, acquiring the element identifier and decoding it to obtain the decoded first element identifier.
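Claim 5's encode-on-store / decode-on-read pairing can be illustrated with a toy rule. The prefix byte and XOR mask below are entirely hypothetical; the patent leaves the preset encoding rule implementation-defined.

```python
# Hypothetical encoding rule: a 0xE0 prefix byte marks an encoded ID, and
# the payload byte is XOR-masked. The real preset rule is not specified.

MASK = 0x5A
PREFIX = 0xE0

def encode_id(element_id):
    """Encode a one-byte element ID under the (hypothetical) preset rule."""
    return (PREFIX << 8) | (element_id ^ MASK)

def decode_id(stored_value):
    """Recover the original element ID; values without the prefix are
    assumed to have been stored as-is and pass through unchanged."""
    if (stored_value >> 8) == PREFIX:
        return (stored_value & 0xFF) ^ MASK
    return stored_value
```

The decode path mirrors the claim: whatever is read from an element identification address is decoded before being used as the first element identifier.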
6. The camera driving method according to claim 1, wherein the determining target configuration information according to the degree of matching between the first element identifier and the second element identifier in the configuration information corresponding to the camera interface comprises:
comparing the first element identifiers obtained by the access with the second element identifiers in each piece of camera configuration information in turn;
and if all the second element identifiers in any piece of camera configuration information are identical to all the first element identifiers, taking that camera configuration information as the target configuration information.
7. The camera driving method according to claim 1, wherein the driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information comprises:
extracting the drive power-on data and the drive operation data from the target configuration information to obtain the target driving information;
updating the target driving information into the generic driver corresponding to the camera interface to obtain the camera driver corresponding to the camera interface;
and driving the target camera actually carried by the camera interface according to the camera driver.
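The extract-then-update steps of claim 7 amount to specializing a generic per-interface driver with the fields of the detected configuration. A minimal sketch, with all field names hypothetical:

```python
# Hypothetical sketch: the generic driver for an interface is a template;
# the detected target configuration supplies the camera-specific fields.

def extract_target_driving_info(target_config):
    """Pull the drive power-on data and drive operation data out of the
    detected target configuration."""
    return {
        "power_on_sequence": target_config["drive_power_on_data"],
        "register_settings": target_config["drive_operation_data"],
    }

def build_camera_driver(generic_driver, target_config):
    """Update the generic driver with the target driving information to
    obtain the camera driver for this interface."""
    driver = dict(generic_driver)  # copy so the generic template is untouched
    driver.update(extract_target_driving_info(target_config))
    return driver
```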
8. The camera driving method according to any one of claims 1 to 7, wherein the method further comprises:
if a first startup instruction of the camera is received, performing the steps of controlling, through the virtual camera, the communication bus to be powered on according to the power-on parameters, and accessing each element identification address corresponding to the virtual camera based on the powered-on communication bus;
if a second startup instruction of the camera is received, acquiring historical target configuration information corresponding to each camera interface, and performing the step of driving the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information;
wherein the first startup instruction represents a startup instruction issued when the kernel of the camera device is powered off, and the second startup instruction represents a startup instruction issued while the kernel of the camera device remains powered on.
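The two boot paths of claim 8 can be sketched as a dispatch: a cold boot re-runs detection, while a warm boot reuses the cached (historical) target configuration. Names and the "cold"/"warm" tokens are hypothetical.

```python
# Hypothetical sketch of the two boot paths: a cold boot (kernel was powered
# off) re-runs detection; a warm boot reuses the cached target configuration.

def on_startup(instruction, cache, detect, drive):
    if instruction == "cold":            # first startup instruction
        config = detect()                # power on the bus and re-detect
        cache["target_config"] = config  # remember as historical config
    else:                                # second startup instruction
        config = cache["target_config"]  # reuse historical target config
    drive(config)
    return config
```

The design point is that detection cost is paid only when the kernel actually lost power; otherwise the interface is driven directly from the stored result.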
9. A camera driving apparatus, comprising:
an integration module, configured to integrate the power-on parameters corresponding to the device tree with the element identification addresses of the camera elements to obtain a virtual camera, wherein the device tree is created according to the camera configuration information compatible with the camera interface;
an access module, configured to control, through the virtual camera, a communication bus to be powered on according to the power-on parameters, and to access each element identification address corresponding to the virtual camera based on the powered-on communication bus;
a determining module, configured to determine target configuration information according to the degree of matching between the second element identifier in the configuration information corresponding to the camera interface and the first element identifier obtained by the access, wherein the first element identifier is determined according to a target camera actually carried by the camera interface;
and a driving module, configured to drive the target camera actually carried by the camera interface according to the target driving information corresponding to the target configuration information.
10. A vehicle, characterized in that the vehicle comprises:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to implement the camera driving method of any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program, the computer program being loaded by a processor to perform the steps in the camera driving method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410980162.XA CN118524277A (en) | 2024-07-22 | 2024-07-22 | Camera driving method, device, vehicle and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118524277A true CN118524277A (en) | 2024-08-20 |
Family
ID=92285350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410980162.XA Pending CN118524277A (en) | 2024-07-22 | 2024-07-22 | Camera driving method, device, vehicle and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118524277A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112363767A (en) * | 2020-11-11 | 2021-02-12 | 广州小鹏汽车科技有限公司 | Vehicle-mounted camera calling method and device |
CN113934463A (en) * | 2021-11-04 | 2022-01-14 | 中科可控信息产业有限公司 | Starting method and device of server, computer equipment and storage medium |
CN116028122A (en) * | 2022-11-29 | 2023-04-28 | 龙芯中科技术股份有限公司 | Device processing method and device based on processor |
CN117707628A (en) * | 2023-06-15 | 2024-03-15 | 荣耀终端有限公司 | Device initialization method, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||