CN112269939B - Automatic driving scene searching method, device, terminal, server and medium - Google Patents
- Publication number
- CN112269939B (application CN202011285118.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- driving
- attribute information
- running
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9538—Presentation of query results
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application provides an automatic driving scene searching method, device, terminal, server and medium, and belongs to the field of data processing. A scene search interface is provided to offer a scene search function; the driving scene keywords input in the scene search interface indicate a specific driving scene in the automatic driving process, and driving data whose attribute information matches the driving scene keywords is acquired as the target driving data corresponding to that specific driving scene, so that no manual lookup is needed and data processing efficiency is improved.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method, an apparatus, a terminal, a server, and a medium for searching a scene during automatic driving.
Background
In general, an autonomous driving vehicle includes three modules: a sensing module, a decision module and an execution module. While the vehicle is driving, the sensing module collects data about the surrounding environment and the vehicle itself through sensors such as a camera, a millimeter-wave radar and a lidar; the decision module identifies which driving scene the current situation belongs to according to the collected data, and then makes a driving decision plan according to the identified driving scene; and the execution module executes the corresponding driving operation according to the driving decision plan, thereby realizing autonomous driving of the vehicle. The data collected while the autonomous vehicle is running can be used to train the autonomous vehicle, and can also be analyzed by related technicians to optimize the autonomous vehicle, and the like.
Because the volume of data collected by an autonomous vehicle while driving is huge, and most of the data corresponds to repetitive, simple driving scenes, such as constant-speed driving scenes and turning scenes, the data must be checked manually to determine the data corresponding to special or complex driving scenes, so data processing efficiency is low.
Disclosure of Invention
The embodiment of the application provides an automatic driving scene searching method, an automatic driving scene searching device, a terminal, a server and a medium, and data processing efficiency can be improved. The technical scheme is as follows:
in one aspect, a method for searching for a scene of automatic driving is provided, the method comprising:
providing a scene search interface, wherein the scene search interface comprises a search box, and the scene search interface is used for providing search functions of target driving data corresponding to different driving scenes;
in response to an externally input search operation on the scene search interface, acquiring the driving scene keywords input in the search box;
and acquiring driving data whose attribute information matches the driving scene keywords as target driving data, wherein the attribute information is used for indicating the environmental state of the external environment, and the vehicle state and the driving state of the automatic driving vehicle during driving of the automatic driving vehicle; at least one target database stores the driving data with at least one of the environment state, the vehicle state and the driving state as parameters, and different parameters of the driving data correspond to different attribute information.
In one possible implementation manner, the acquiring, as target driving data, the driving data whose attribute information matches the driving scene keywords includes:
generating a search request based on the driving scene keyword, wherein the search request carries the driving scene keyword;
sending the search request to a server;
and receiving the driving data acquired by the server from the at least one target database as the target driving data.
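The three steps above (generate a request carrying the keyword, send it, receive matching driving data) can be sketched as follows. The claim only requires that the search request "carry the driving scene keyword", so the JSON envelope and field names below are illustrative assumptions, not the patent's format:

```python
import json

def build_search_request(keyword: str) -> str:
    # Terminal side: build a search request that carries the driving
    # scene keyword. The "type" field and JSON framing are hypothetical.
    return json.dumps({"type": "scene_search", "keyword": keyword})

def parse_search_request(raw: str) -> str:
    # Server side: extract the driving scene keyword the request carries,
    # so the server can look up matching driving data in the target database.
    return json.loads(raw)["keyword"]
```

A round trip preserves the keyword, which is the only property the claim depends on.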
In one possible implementation, the method further includes:
providing a data input interface, wherein the data input interface comprises an input box and is used for inputting environmental state data in the running process of the automatic driving vehicle, and the environmental state data comprises at least one of weather data, light data, road state data, time data and position data;
acquiring environmental state data input in the input box in response to an input operation in the input box;
and sending the environment state data to a server, wherein the environment state data is used for determining environment attribute information of the driving data corresponding to the automatic driving vehicle.
In one possible implementation, the method further includes:
Acquiring video data generated based on the driving data of the automatic driving vehicle, wherein the video data is marked with attribute information of the driving data, and the attribute information comprises at least one of environment attribute information, vehicle attribute information, driving attribute information and scene attribute information;
based on the playing of the video data, marking information in the video data is obtained, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value;
the tag information is sent to the server.
In one possible implementation, the method further includes:
acquiring video data generated based on the driving data of the automatic driving vehicle, wherein the video data is marked with attribute information of the driving data, and the attribute information comprises at least one of environment attribute information, vehicle attribute information, driving attribute information and scene attribute information;
based on the playing of the video data, acquiring road state data input based on at least one frame of image in the video data, wherein the road state data is used for indicating the road condition in the running process of the automatic driving vehicle;
the road state data is transmitted to a server, and the road state data is used for determining road attribute information of the driving data.
In one aspect, a method for searching for a scene of automatic driving is provided, the method comprising:
acquiring running data of an automatic driving vehicle, wherein the running data is acquired in the running process of the automatic driving vehicle;
acquiring at least one attribute information of the running data, the attribute information being used for indicating an environmental state of an external environment during running of the autonomous vehicle, and at least one of a vehicle state and a running state of the autonomous vehicle;
data fusion is carried out on the at least one attribute information and the running data, so that the running data associated with the at least one attribute information is obtained;
and generating a search engine based on the at least one attribute information, wherein the search engine is used for determining target running data according to the driving scene keywords when the driving scene keywords related to the driving scenes are received, the target running data are running data, the attribute information of which is matched with the driving scene keywords, in a target database, the target database stores running data taking at least one of an environment state, a vehicle state and a running state as dimensions, and different dimensions of the running data correspond to different attribute information.
In one possible implementation, the driving data includes at least one of:
environmental status data transmitted by the terminal, the environmental status data including at least one of weather data, light data, road status data, time data, and location data;
vehicle state data uploaded by the autonomous vehicle, the vehicle state data including at least one of speed data, acceleration data, oil mass data, oil consumption data, travel direction data, time data, and position data;
and driving state data determined based on the vehicle state data uploaded by the autonomous vehicle, the driving state data including at least one of driving mileage data, driving trajectory data, driving duration data, and parking duration data.
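The three categories of driving data listed above can be modeled as simple record types. These dataclasses are a sketch only; the field names and types are assumptions, since the patent lists the data items but prescribes no schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class EnvironmentalState:
    # Sent by the terminal: weather, light, road, time and location data.
    weather: Optional[str] = None
    light: Optional[str] = None
    road: Optional[str] = None
    time: Optional[float] = None
    location: Optional[Tuple[float, float]] = None

@dataclass
class VehicleState:
    # Uploaded by the autonomous vehicle.
    speed: float = 0.0
    acceleration: float = 0.0
    fuel_level: float = 0.0
    fuel_consumption: float = 0.0
    heading_deg: float = 0.0
    time: float = 0.0
    location: Tuple[float, float] = (0.0, 0.0)

@dataclass
class DrivingState:
    # Derived on the server from the uploaded vehicle state data.
    mileage_km: float = 0.0
    trajectory: List[Tuple[float, float]] = field(default_factory=list)
    driving_duration_s: float = 0.0
    parking_duration_s: float = 0.0
```

Each record corresponds to one of the three parameter dimensions the target database stores.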
In one possible implementation manner, the acquiring the at least one attribute information of the driving data includes at least one of the following:
determining environmental attribute information of the driving data based on the environmental state data;
determining vehicle attribute information of the travel data based on the vehicle state data;
based on the travel state data, travel attribute information of the travel data is determined.
In one possible implementation manner, the acquiring the at least one attribute information of the driving data further includes:
identifying the running data to obtain the running data meeting a first target condition, wherein the first target condition is that the environmental attribute information is wrong or the decision accuracy of the driving behavior is smaller than a first target threshold;
based on the driving data satisfying the first target condition, scene attribute information of the driving data is determined, wherein the scene attribute information is used for indicating that a driving scene corresponding to the driving data satisfying the first target condition is a driving scene to be improved or an unusual driving scene.
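The first target condition above can be sketched as a filter. The record keys, the tag string, and the default threshold value are illustrative; the patent specifies only the condition (environmental attribute wrong, or decision accuracy below a first target threshold) and the resulting scene attribute ("to be improved" or "unusual"):

```python
def needs_scene_tag(record: dict, first_target_threshold: float = 0.9) -> bool:
    # First target condition: environmental attribute information is wrong,
    # OR the decision accuracy of the driving behavior is below threshold.
    return (record.get("env_attr_wrong", False)
            or record.get("decision_accuracy", 1.0) < first_target_threshold)

def tag_scenes(records: list) -> list:
    # Add scene attribute information to records meeting the condition,
    # marking them as to-be-improved / unusual driving scenes.
    for r in records:
        if needs_scene_tag(r):
            r["scene_attr"] = "to_be_improved_or_unusual"
    return records
```

Records that fail the condition are left untagged, so downstream indexing only surfaces the problematic scenes.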
In one possible implementation manner, the acquiring the at least one attribute information of the driving data further includes:
generating video data based on the environment state data, the vehicle state data and the driving state data, wherein the video data is marked with environment attribute information, vehicle attribute information, driving attribute information and scene attribute information of the driving data;
transmitting the video data to a terminal;
the method comprises the steps of receiving marking information returned by the terminal based on the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value;
based on the marking information, a corresponding attribute tag is added to the running data as response attribute information of the running data.
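Adding an attribute tag from the terminal's marking information can be sketched as below. The representation of marking information as a list of timestamps, and the tag name, are assumptions; the patent only requires that a corresponding attribute tag be added to the marked running data:

```python
def add_response_attribute(records: list, marking_info: list) -> list:
    # marking_info: timestamps the terminal marked as having driving-behavior
    # decision accuracy below the second target threshold (format assumed).
    marked = set(marking_info)
    for r in records:
        if r.get("time") in marked:
            # Attribute tag serving as the response attribute information.
            r["response_attr"] = "low_decision_accuracy"
    return records
```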
In one possible implementation manner, the acquiring the at least one attribute information of the driving data further includes:
receiving road state data sent by the terminal;
road attribute information of the travel data is determined based on the road state data.
In one possible implementation manner, the data fusing the at least one attribute information with the running data to obtain the running data associated with the at least one attribute information includes:
and adding the at least one attribute information into the running data corresponding to the time information according to the at least one attribute information and the time information corresponding to the running data to obtain fusion data, wherein the fusion data comprises the running data and the attribute information which are correspondingly stored according to the time sequence indicated by the time information.
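The time-based fusion described above can be sketched as follows. Field names are illustrative; the patent requires only that attribute information be attached to the running data sharing its time information, with the fused records stored in the time order that the time information indicates:

```python
def fuse_by_time(travel_data: list, attributes: list) -> list:
    """Attach each attribute record to the travel-data record with the
    same timestamp, returning fused records sorted by time.

    travel_data: list of dicts, each with a 'time' key
    attributes:  list of (time, attribute_dict) pairs
    """
    attr_by_time: dict = {}
    for t, attr in attributes:
        attr_by_time.setdefault(t, {}).update(attr)
    fused = []
    for rec in sorted(travel_data, key=lambda r: r["time"]):
        merged = dict(rec)                      # keep original record intact
        merged.update(attr_by_time.get(rec["time"], {}))
        fused.append(merged)
    return fused
```

Records without a matching attribute at their timestamp pass through unchanged, preserving the full driving-data stream.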
In one possible implementation, the generating a search engine based on the at least one attribute information includes:
determining the at least one attribute information as an index of the search engine;
the search engine is generated based on the index and the travel data.
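Using the attribute information as the search engine's index can be sketched as a simple inverted index. The patent does not prescribe an index structure, so mapping each attribute value to the positions of the running-data records carrying it is one plausible reading:

```python
def build_index(fused_records: list) -> dict:
    # Map each attribute value (lower-cased) to the indices of the fused
    # travel-data records that carry it; 'time' is treated as data, not
    # as an indexable attribute (an assumption of this sketch).
    index: dict = {}
    for i, rec in enumerate(fused_records):
        for key, value in rec.items():
            if key == "time":
                continue
            index.setdefault(str(value).lower(), []).append(i)
    return index
```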
In one possible implementation, after the generating the search engine based on the at least one attribute information, the method further includes:
Responding to a search request of a terminal, and determining the similarity between a driving scene keyword carried by the search request and at least one attribute information;
determining attribute information with the similarity larger than a third target threshold as target attribute information matched with the driving scene keywords;
and acquiring the running data corresponding to the target attribute information from the at least one target database as the target running data.
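The keyword-to-attribute matching step above can be sketched with a string-similarity stand-in. `difflib.SequenceMatcher` and the default threshold value are assumptions; the patent requires only some similarity measure compared against a third target threshold:

```python
import difflib

def search(index: dict, records: list, keyword: str,
           third_target_threshold: float = 0.6) -> list:
    # Compare the driving scene keyword against each indexed attribute
    # value; attribute values whose similarity exceeds the threshold are
    # the target attribute information, and their records are returned
    # as the target running data.
    hits = []
    for attr_value, positions in index.items():
        sim = difflib.SequenceMatcher(None, keyword.lower(), attr_value).ratio()
        if sim > third_target_threshold:
            hits.extend(positions)
    return [records[i] for i in sorted(set(hits))]
```

For example, the keyword "rain" is similar enough to the stored attribute value "rainy" to match, while "sunny" falls below the threshold.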
In one aspect, there is provided an automatic driving scene searching apparatus, the apparatus comprising:
the providing module is used for providing a scene search interface, wherein the scene search interface comprises a search box and is used for providing search functions of target driving data corresponding to different driving scenes;
the keyword acquisition module is used for responding to the search operation of the external input scene search interface and acquiring driving scene keywords input in the search box;
the data acquisition module is used for acquiring, from at least one target database, running data whose attribute information matches the driving scene keywords as target running data, wherein the attribute information is used for indicating the environment state of an external environment, and the vehicle state and the running state of the automatic driving vehicle in the running process of the automatic driving vehicle; the at least one target database stores the running data taking at least one of the environment state, the vehicle state and the running state as parameters, and different parameters of the running data correspond to different attribute information.
In one possible implementation manner, the data acquisition module is configured to generate a search request based on the driving scenario keyword, where the search request carries the driving scenario keyword; sending the search request to a server; and receiving the driving data acquired by the server from the at least one target database as the target driving data.
In one possible implementation, the providing module is further configured to provide a data input interface, where the data input interface includes an input box, and the data input interface is configured to input environmental status data during driving of the autonomous vehicle, where the environmental status data includes at least one of weather data, light data, road status data, time data, and location data;
the data acquisition module is also used for responding to the input operation in the input box and acquiring the environmental state data input in the input box;
the apparatus further comprises:
and the first sending module is used for sending the environment state data to a server, wherein the environment state data is used for determining environment attribute information of the running data corresponding to the automatic driving vehicle.
In one possible implementation manner, the data acquisition module is configured to acquire video data generated based on running data of the autonomous vehicle, where the video data is marked with attribute information of the running data, and the attribute information includes at least one of environment attribute information, vehicle attribute information, running attribute information, and scene attribute information;
The apparatus further comprises:
the information acquisition module is used for acquiring marking information in the video data based on the playing of the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value;
and the second sending module is used for sending the marking information to the server.
In one possible implementation manner, the data obtaining module is further configured to obtain video data generated based on running data of the autonomous vehicle, where attribute information of the running data is marked in the video data, and the attribute information includes at least one of environment attribute information, vehicle attribute information, running attribute information, and scene attribute information;
the data acquisition module is also used for acquiring road state data input based on at least one frame of image in the video data based on the playing of the video data, wherein the road state data is used for indicating the road condition in the running process of the automatic driving vehicle;
the apparatus further comprises:
and a third transmitting module for transmitting the road status data to a server, the road status data being used for determining road attribute information of the driving data.
In one aspect, there is provided an automatic driving scene searching apparatus, the apparatus comprising:
The data acquisition module is used for acquiring running data of the automatic driving vehicle, wherein the running data is acquired in the running process of the automatic driving vehicle;
an information acquisition module for acquiring at least one attribute information of the running data, the attribute information being used for indicating an environmental state of an external environment during running of the autonomous vehicle, and at least one of a vehicle state and a running state of the autonomous vehicle;
the data fusion module is used for carrying out data fusion on the at least one attribute information and the running data to obtain the running data associated with the at least one attribute information;
and the generation module is used for generating a search engine based on the at least one attribute information, wherein the search engine is used for determining target running data according to the driving scene keywords when the driving scene keywords related to the driving scenes are received, the target running data are running data, the attribute information of which is matched with the driving scene keywords, in a target database, the target database stores the running data taking at least one of an environment state, a vehicle state and a running state as dimensions, and different dimensions of the running data correspond to different attribute information.
In one possible implementation, the driving data includes at least one of:
environmental status data transmitted by the terminal, the environmental status data including at least one of weather data, light data, road status data, time data, and location data;
vehicle state data uploaded by the autonomous vehicle, the vehicle state data including at least one of speed data, acceleration data, oil mass data, oil consumption data, travel direction data, time data, and position data;
and driving state data determined based on the vehicle state data uploaded by the autonomous vehicle, the driving state data including at least one of driving mileage data, driving trajectory data, driving duration data, and parking duration data.
In one possible implementation manner, the information obtaining module is configured to perform at least one of the following:
determining environmental attribute information of the driving data based on the environmental state data;
determining vehicle attribute information of the travel data based on the vehicle state data;
based on the travel state data, travel attribute information of the travel data is determined.
In one possible implementation manner, the information obtaining module is further configured to identify the running data, and obtain running data that meets a first target condition in the running data, where the first target condition is that the environmental attribute information is wrong or the decision accuracy of the running behavior is less than a first target threshold; based on the driving data satisfying the first target condition, scene attribute information of the driving data is determined, wherein the scene attribute information is used for indicating that a driving scene corresponding to the driving data satisfying the first target condition is a driving scene to be improved or an unusual driving scene.
In one possible implementation manner, the information obtaining module is further configured to generate video data based on the environmental status data, the vehicle status data, and the driving status data, where the video data is marked with environmental attribute information, vehicle attribute information, driving attribute information, and scene attribute information of the driving data; transmitting the video data to a terminal; the method comprises the steps of receiving marking information returned by the terminal based on the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value; based on the marking information, a corresponding attribute tag is added to the running data as response attribute information of the running data.
In a possible implementation manner, the information acquisition module is further configured to receive road status data sent by the terminal; road attribute information of the travel data is determined based on the road state data.
In one possible implementation manner, the data fusion module is configured to add the at least one attribute information to the travel data corresponding to the time information according to the at least one attribute information and the time information corresponding to the travel data, so as to obtain fusion data, where the fusion data includes the travel data and the attribute information stored correspondingly according to the time sequence indicated by the time information.
In one possible implementation, the generating module is configured to determine the at least one attribute information as an index of the search engine; the search engine is generated based on the index and the travel data.
In one possible implementation, the apparatus further includes:
the determining module is used for responding to the search request of the terminal and determining the similarity between the driving scene keywords carried by the search request and at least one attribute information;
the determining module is further configured to determine attribute information with the similarity greater than a third target threshold as target attribute information matched with the driving scene keywords;
the data acquisition module is further configured to acquire driving data corresponding to the target attribute information from the at least one target database, and use the driving data as the target driving data.
In one aspect, a terminal is provided that includes one or more processors and one or more memories having stored therein at least one program code loaded and executed by the one or more processors to implement operations performed by the automated driving scenario search method.
In one aspect, a server is provided that includes one or more processors and one or more memories having stored therein at least one program code loaded and executed by the one or more processors to implement operations performed by the automated driving scenario search method.
In one aspect, a computer-readable storage medium is provided, in which at least one program code is stored, the program code being loaded and executed by a processor to perform the operations performed by the automated driving scene search method.
In one aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer program code, the computer program code being stored in a computer readable storage medium. The processor of the terminal/server reads the computer program code from the computer readable storage medium, and the processor executes the computer program code to implement the operations performed by the automated driving scene search method.
According to the scheme, a scene search interface is provided to offer a scene search function; the driving scene keywords input in the scene search interface indicate a specific driving scene in the automatic driving process, and driving data whose attribute information matches the driving scene keywords is acquired as the target driving data corresponding to that specific driving scene, so that no manual lookup is needed and data processing efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an implementation environment schematic diagram of an automatic driving scene searching method according to an embodiment of the present application;
fig. 2 is a flowchart of a scenario search method for automatic driving according to an embodiment of the present application;
fig. 3 is a flowchart of a scenario search method for automatic driving according to an embodiment of the present application;
fig. 4 is a flowchart of a scenario search method for automatic driving according to an embodiment of the present application;
fig. 5 is a flowchart of a scenario search method for automatic driving according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an autopilot scene searching device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an autopilot scene searching apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic view of an implementation environment of an automatic driving scene searching method according to an embodiment of the present application, referring to fig. 1, the implementation environment includes: a terminal 101, an in-vehicle terminal 102, and a server 103.
The terminal 101 may be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and a laptop computer. The terminal 101 communicates with the server 103 by wired or wireless communication, which is not limited in the embodiments of the present application. Related technicians can transmit data manually recorded during travel of the autonomous vehicle to the server 103 through the terminal 101, so that the server 103 stores the received data.
The in-vehicle terminal 102 communicates with the server 103 by wired or wireless communication, which is not limited in the embodiment of the present application. The in-vehicle terminal acquires travel data acquired by devices such as a sensor of the automatically driven vehicle, and transmits the acquired travel data to the server 103.
Each of the terminal 101 and the in-vehicle terminal 102 may generally refer to one of a plurality of terminals; this embodiment is illustrated only with the terminal 101 and the in-vehicle terminal 102. Those skilled in the art will recognize that the number of terminals may be greater or smaller. For example, there may be only a few terminals, or tens or hundreds of them, or more; the embodiments of the present disclosure do not limit the number or the device types of the terminal 101 and the in-vehicle terminal 102.
Fig. 2 is a flowchart of an automatic driving scene searching method provided in an embodiment of the present application, and referring to fig. 2, the method includes:
201. the terminal provides a scene search interface, wherein the scene search interface comprises a search box, and the scene search interface is used for providing search functions of target driving data corresponding to different driving scenes.
202. The terminal, in response to an externally input search operation on the scene search interface, acquires the driving scene keywords input in the search box.
203. The terminal acquires running data whose attribute information matches the driving scene keywords as target running data, where the attribute information is used to indicate the environmental state of the external environment, and the vehicle state and running state of the autonomous vehicle during its travel; at least one target database stores running data with at least one of the environmental state, the vehicle state and the running state as a parameter, and different parameters of the running data correspond to different attribute information.
According to the scheme provided by the embodiments of the present application, a scene search function is provided through the scene search interface. The driving scene keywords input in the scene search interface can indicate a specific driving scene in the automatic driving process, and the target driving data corresponding to the specific driving scene is obtained by acquiring the driving data whose attribute information matches the driving scene keywords, so that manual searching is not needed and the data processing efficiency is improved.
In one possible implementation manner, the acquiring the driving data whose attribute information matches the driving scene keyword includes, as target driving data:
generating a search request based on the driving scene keyword, wherein the search request carries the driving scene keyword;
sending the search request to a server;
and receiving the driving data acquired by the server from the at least one target database as the target driving data.
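The request flow above (keyword in, search request out, target driving data back) can be pictured as a minimal sketch; the payload field names below are illustrative assumptions, not the patent's actual protocol:

```python
def build_search_request(driving_scene_keywords):
    """Package the driving-scene keywords into a search-request payload
    that the terminal would send to the server (field names hypothetical)."""
    return {
        "type": "scene_search",
        "driving_scene_keywords": list(driving_scene_keywords),
    }

request = build_search_request(["Suzhou", "high-speed", "raining"])
# The terminal would then send `request` to the server (e.g. over HTTP)
# and receive the matching target driving data in the response.
```

The keywords are kept as a list rather than a joined string so the server can match each one against attribute information independently.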
In one possible implementation, the method further includes:
providing a data input interface, wherein the data input interface comprises an input box and is used for inputting environmental state data in the running process of the automatic driving vehicle, and the environmental state data comprises at least one of weather data, light data, road state data, time data and position data;
acquiring environmental state data input in the input box in response to an input operation in the input box;
and sending the environment state data to a server, wherein the environment state data is used for determining environment attribute information of the driving data corresponding to the automatic driving vehicle.
In one possible implementation, the method further includes:
Acquiring video data generated based on the driving data of the automatic driving vehicle, wherein the video data is marked with attribute information of the driving data, and the attribute information comprises at least one of environment attribute information, vehicle attribute information, driving attribute information and scene attribute information;
based on the playing of the video data, marking information in the video data is obtained, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value;
the tag information is sent to the server.
In one possible implementation, the method further includes:
acquiring video data generated based on the driving data of the automatic driving vehicle, wherein the video data is marked with attribute information of the driving data, and the attribute comprises at least one of environment attribute information, vehicle attribute information, driving attribute information and scene attribute information;
based on the playing of the video data, acquiring road state data input based on at least one frame of image in the video data, wherein the road state data is used for indicating the road condition in the running process of the automatic driving vehicle;
the road state data is transmitted to a server, and the road state data is used for determining road attribute information of the driving data.
Fig. 3 is a flowchart of an automatic driving scene searching method provided in an embodiment of the present application, and referring to fig. 3, the method includes:
301. the server acquires driving data of the automatic driving vehicle, wherein the driving data is data acquired by the automatic driving vehicle in the driving process.
302. The server acquires at least one attribute information of the running data, the attribute information being used to indicate an environmental state of an external environment during running of the autonomous vehicle, and at least one of a vehicle state and a running state of the autonomous vehicle.
303. And the server performs data fusion on the at least one attribute information and the running data to obtain the running data associated with the at least one attribute information.
304. The server generates a search engine based on the at least one attribute information, the search engine is used for determining target running data according to the driving scene keywords when the driving scene keywords related to the driving scenes are received, the target running data are running data, the attribute information of which is matched with the driving scene keywords, in a target database, the target database stores running data taking at least one of an environment state, a vehicle state and a running state as dimensions, and different dimensions of the running data correspond to different attribute information.
According to the scheme provided by the embodiments of the present application, when the driving data of the autonomous vehicle during driving is obtained, at least one piece of attribute information of the driving data is extracted and associated with the driving data, and a search engine capable of searching based on driving scene keywords related to driving scenes is generated, so that target driving data corresponding to a specific driving scene is obtained directly through searching; manual searching is not needed, and the data processing efficiency is improved.
In one possible implementation, the driving data includes at least one of:
environmental status data transmitted by the terminal, the environmental status data including at least one of weather data, light data, road status data, time data, and location data;
vehicle state data uploaded by the autonomous vehicle, the vehicle state data including at least one of speed data, acceleration data, fuel amount data, fuel consumption data, travel direction data, time data, and position data;
and driving state data determined based on the vehicle state data uploaded by the autonomous vehicle, the driving state data including at least one of driving mileage data, driving trajectory data, driving duration data, and parking duration data.
In one possible implementation manner, the acquiring the at least one attribute information of the driving data includes at least one of the following:
determining environmental attribute information of the driving data based on the environmental state data;
determining vehicle attribute information of the travel data based on the vehicle state data;
based on the travel state data, travel attribute information of the travel data is determined.
In one possible implementation manner, the acquiring the at least one attribute information of the driving data further includes:
identifying the running data to obtain the running data meeting a first target condition in the running data, wherein the first target condition is that the environmental attribute information is wrong or the decision accuracy of the running behavior is smaller than a first target threshold;
based on the driving data satisfying the first target condition, scene attribute information of the driving data is determined, wherein the scene attribute information is used for indicating that a driving scene corresponding to the driving data satisfying the first target condition is a driving scene to be improved or an unusual driving scene.
In one possible implementation manner, the acquiring the at least one attribute information of the driving data further includes:
generating video data based on the environment state data, the vehicle state data and the driving state data, wherein the video data is marked with environment attribute information, vehicle attribute information, driving attribute information and scene attribute information of the driving data;
Transmitting the video data to a terminal;
the method comprises the steps of receiving marking information returned by the terminal based on the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value;
based on the marking information, a corresponding attribute tag is added to the running data as response attribute information of the running data.
In one possible implementation manner, the acquiring the at least one attribute information of the driving data further includes:
receiving road state data sent by the terminal;
road attribute information of the travel data is determined based on the road state data.
In one possible implementation manner, the data fusing the at least one attribute information with the running data to obtain the running data associated with the at least one attribute information includes:
and adding the at least one attribute information into the running data corresponding to the time information according to the at least one attribute information and the time information corresponding to the running data to obtain fusion data, wherein the fusion data comprises the running data and the attribute information which are correspondingly stored according to the time sequence indicated by the time information.
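The time-keyed fusion described above can be sketched roughly as merging each piece of attribute information into the running-data record that shares its timestamp; the record layout here is an assumption for illustration, not the patent's storage format:

```python
def fuse_by_time(travel_data, attribute_records):
    """Merge attribute information into the travel-data record sharing the
    same timestamp, and return fused records ordered by time.
    Both inputs are dicts keyed by timestamp (layout hypothetical)."""
    fused = {t: dict(rec) for t, rec in travel_data.items()}
    for t, attrs in attribute_records.items():
        # Attribute info with no matching travel record still gets a slot,
        # so nothing recorded on the time axis is dropped.
        fused.setdefault(t, {}).update(attrs)
    # The fusion data stores travel data and attribute information
    # correspondingly, in the time order indicated by the timestamps.
    return [dict(timestamp=t, **rec) for t, rec in sorted(fused.items())]
```

This realizes the unification of data and attribute information in the time dimension: each output record carries both the raw values and the attributes written at that instant.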
In one possible implementation, the generating a search engine based on the at least one attribute information includes:
determining the at least one attribute information as an index of the search engine;
the search engine is generated based on the index and the travel data.
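One way the attribute-indexed engine could look is an inverted index mapping each attribute label to the ids of the travel-data records carrying it; this is a simplified sketch, since the patent does not prescribe the index structure:

```python
from collections import defaultdict

def build_index(records):
    """Build an inverted index from attribute labels to record ids.
    `records` maps a record id to its set of attribute labels."""
    index = defaultdict(set)
    for record_id, attributes in records.items():
        for attr in attributes:
            index[attr].add(record_id)
    return index

def search(index, keywords):
    """Return ids of records whose attributes match every keyword,
    supporting queries over any combination of attribute information."""
    result_sets = [index.get(kw, set()) for kw in keywords]
    return set.intersection(*result_sets) if result_sets else set()
```

Intersecting the per-keyword posting sets is what lets a user combine several attributes ("rain" and "tunnel") into one complex query.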
In one possible implementation, after the generating the search engine based on the at least one attribute information, the method further includes:
responding to a search request of a terminal, and determining the similarity between a driving scene keyword carried by the search request and at least one attribute information;
determining attribute information with the similarity larger than a third target threshold value as target attribute information matched with the search keyword;
and acquiring the running data corresponding to the target attribute information from the at least one target database as the target running data.
Fig. 4 is a flowchart of an automatic driving scene searching method provided in an embodiment of the present application, and referring to fig. 4, the method includes:
401. the terminal provides a scene search interface, wherein the scene search interface comprises a search box, and the scene search interface is used for providing search functions of target driving data corresponding to different driving scenes.
Optionally, the terminal provides the scene search interface in a visual manner, or the terminal provides the scene search interface in a voice manner, or the terminal provides the scene search interface in a combination of visual and voice manners, which manner is not limited in the embodiments of the present application.
Taking the terminal as an example to provide the scene search interface in a visual manner, in one possible implementation, the terminal displays the scene search interface on a visual interface thereof, so that a user inputs driving scene keywords which the user wants to search in an input box of the scene search interface.
402. And the terminal responds to the search operation of the scene search interface input from the outside and acquires the driving scene keywords input in the search box.
It should be noted that, after the user inputs the driving scene keyword in the search box of the scene search interface, the user performs a trigger operation on the search box to trigger the search operation on the scene search interface, and the terminal acquires the driving scene keyword input by the user in response to the search operation.
403. And the terminal generates a search request based on the driving scene keywords, wherein the search request carries the driving scene keywords.
404. The terminal sends the search request to the server.
405. The server responds to the search request of the terminal, and determines the similarity between the driving scene keywords carried by the search request and at least one attribute information, wherein the attribute information is used for indicating at least one of the environment state of the external environment, the vehicle state and the driving state of the automatic driving vehicle in the driving process.
The server is associated with at least one target database for storing the driving data of at least one automatic driving vehicle and the attribute information associated with the driving data.
In one possible implementation manner, the server responds to the received search request, acquires the at least one attribute information from the at least one target database, and further determines the similarity between the driving scene keyword carried by the search request and each attribute information.
406. And the server determines the attribute information with the similarity larger than a third target threshold value as target attribute information matched with the search keyword.
Note that, the third target threshold is any positive value, and the specific value of the third target threshold is not limited in this embodiment of the present application.
In other possible implementations, based on the similarity between the driving scenario keyword and the at least one piece of attribute information determined in step 405, the server ranks the at least one piece of attribute information in descending order of similarity, and determines the attribute information ranked before a target position as the target attribute information matching the search keyword.
407. The server acquires running data corresponding to the target attribute information from the at least one target database, wherein the at least one target database stores the running data taking at least one of an environment state, a vehicle state and a running state as a parameter, and different parameters of the running data correspond to different attribute information.
In the at least one target database associated with the server, the running data and the attribute information corresponding to the running data are both associated, so that the server can directly obtain the corresponding target running data from the at least one target database based on the determined target attribute information.
408. The server transmits the target travel data to the terminal.
It should be noted that, after receiving the target driving data, the terminal may display the target driving data through a visual interface, so that a related technician may perform system optimization on the automatic driving vehicle by analyzing the target driving data. Or after receiving the target driving data, the terminal trains the automatic driving vehicle based on the target driving data so as to improve the decision accuracy of the automatic driving vehicle in a specific driving scene corresponding to the driving scene keywords.
The following describes, in several examples, target travel data for determining a specific driving scenario based on driving scenario keywords:
Example 1, taking "Suzhou", "high-speed", "raining", "bifurcation road" and "vehicle cut-in" as driving scene keywords, the scheme provided by the embodiments of the present application can find the target driving data corresponding to the driving scene in which another vehicle cuts into the current lane while the vehicle travels to a bifurcation on a rainy high-speed section within the scope of Suzhou;
Example 2, taking "tunnel", "daytime", "unclear lane lines", "number of front vehicles greater than 10" and "average vehicle speed less than 5 km/h" as driving scene keywords, the scheme provided by the embodiments of the present application can find the target driving data corresponding to the driving scene in which the vehicle enters a tunnel in the daytime while the lane lines are unclear, more than 10 vehicles are ahead, and the average vehicle speed is less than 5 km/h;
Example 3, taking "night", "average speed of front vehicles 10-20 km/h", "gradient greater than 5 degrees" and "fuel consumption exceeding 30 liters/hundred km" as driving scene keywords, the scheme provided by the embodiments of the present application can find the target driving data corresponding to the driving scene of high fuel consumption while slowly climbing uphill at night;
Example 4, taking "obstacle identification error", "vehicle speed exceeding 100 km/h", "half load" and "backlight" as driving scene keywords, the scheme provided by the embodiments of the present application can find the target driving data corresponding to the driving scene in which an obstacle is misidentified while the half-loaded vehicle travels at high speed under backlight conditions.
It should be noted that the foregoing is merely illustrative, and is not to be construed as limiting the embodiments of the present application.
According to the scheme provided by the embodiments of the present application, a scene search function is provided through the scene search interface. The driving scene keywords input in the scene search interface can indicate a specific driving scene in the automatic driving process, and the target driving data corresponding to the specific driving scene is obtained by acquiring the driving data whose attribute information matches the driving scene keywords, so that manual searching is not needed and the data processing efficiency is improved. The flexible search capability provided by the present application enables different users to find the driving data corresponding to valuable driving scenes from different dimensions, which improves the efficiency of data searching and further improves the user experience.
It should be noted that, the steps 401 to 408 are described by taking the example of acquiring the target driving data from at least one target database based on the search keyword input by the user, and the process of processing and storing the data in the at least one target database is described with reference to fig. 5, and fig. 5 is a flowchart of an automatic driving scene searching method provided in the embodiment of the present application, where the method includes:
501. the terminal provides a data input interface including an input box for inputting environmental status data during travel of the autonomous vehicle, the environmental status data including at least one of weather data, light data, road status data, time data, and location data.
During the test run of the autonomous vehicle, a safety officer records in real time environmental state data such as weather data, light data, road state data, time data and position data during the run, and then, during the run or after it ends, inputs the recorded environmental state data into the terminal through the input box in the data input interface provided by the terminal.
The weather data includes, for example, sunny, cloudy, rain and snow; the light data includes, for example, daytime, early morning, dusk, backlight, and sudden brightening or darkening; the road state data includes road state data about lane lines, such as unclear lane lines, missing lane lines and unusual lane lines, as well as road state data about forward obstacles, such as misidentified forward obstacles and repeatedly identified forward obstacles. Optionally, the environmental state data and the various data it includes cover other types of data, which is not limited in the embodiments of the present application.
Optionally, the terminal provides the data input interface in a visual manner, or the terminal provides the data input interface in a voice manner, or the terminal provides the data input interface in a combination of visual and voice manners, which manner is not limited in the embodiments of the present application.
Taking the terminal as an example to provide the data input interface visually, in one possible implementation, the terminal displays the data input interface on its visual interface so that the security personnel can input the environmental status data in the input box of the data input interface.
502. The terminal acquires the environmental state data input in the input box in response to the input operation in the input box.
In one possible implementation, the safety officer inputs the recorded environmental state data into the terminal through the input box of the data input interface, and the terminal acquires the environmental state data input by the safety officer in response to the input operation in the input box.
503. The terminal transmits the environmental state data to a server, wherein the environmental state data is used for determining environmental attribute information of running data corresponding to the automatic driving vehicle.
504. The server receives the environmental status data transmitted from the terminal as the traveling data.
505. The server determines environmental attribute information of the travel data based on the environmental status data.
In one possible implementation, the server determines weather information corresponding to the weather data, light information corresponding to the light data, lane line information corresponding to road state data regarding a lane line, forward obstacle information corresponding to road state data regarding a forward obstacle, time information corresponding to time data, geographic position information corresponding to positioning data, and the like as the environmental attribute information of the environmental state data as the traveling data.
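Step 505's relabeling of raw state fields as attribute information can be pictured as a straightforward field mapping; the field names below are illustrative assumptions, not the patent's actual schema:

```python
def to_environment_attributes(env_state):
    """Relabel raw environmental-state fields as environment attribute
    information (all field names hypothetical)."""
    field_map = {
        "weather": "weather_info",
        "light": "light_info",
        "lane_line": "lane_line_info",
        "obstacle": "forward_obstacle_info",
        "time": "time_info",
        "position": "geo_position_info",
    }
    # Unknown fields are ignored rather than guessed at.
    return {field_map[k]: v for k, v in env_state.items() if k in field_map}
```

The point of the mapping is that the attribute keys, not the raw sensor field names, later serve as the search-engine index terms.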
506. The server receives vehicle state data uploaded by the autonomous vehicle as the travel data, the vehicle state data including at least one of speed data, acceleration data, fuel amount data, fuel consumption data, travel direction data, time data, and position data.
During the driving of the autonomous vehicle, its sensors acquire in real time vehicle state data such as speed data, acceleration data, fuel amount data, fuel consumption data, driving direction data, time data and position data. The vehicle-mounted terminal acquires the vehicle state data collected in real time by the sensors and sends it to the server through a real-time data upload port, so that the server receives the vehicle state data.
507. The server determines vehicle attribute information of the travel data based on the vehicle state data.
In one possible implementation, the server determines speed information corresponding to the speed data, acceleration information corresponding to the acceleration data, fuel amount information corresponding to the fuel amount data, fuel consumption information corresponding to the fuel consumption data, travel direction information corresponding to the travel direction data, time information corresponding to the time data, position information corresponding to the position data, and the like as the vehicle attribute information of the vehicle state data as the travel data.
508. The server determines running state data including, as the running data, at least one of running mileage data, running track data, running duration data, and parking duration data, based on the vehicle state data uploaded by the automatically driven vehicle.
In one possible implementation manner, the server determines, through calculation, non-transient data such as driving mileage data, driving track data, driving duration data, parking duration data and the like of the autonomous vehicle as driving state data of the autonomous vehicle based on vehicle state data such as speed data, acceleration data, oil amount data, oil consumption data, driving direction data, time data, position data and the like uploaded by the autonomous vehicle.
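As a sketch of how such non-transient driving state might be computed from timestamped speed samples, the mileage can be obtained by piecewise integration of the speed; the trapezoidal rule and field names here are assumptions, not the patent's method:

```python
def derive_travel_state(samples):
    """Derive driving mileage, driving duration and parking duration from
    a list of (timestamp_s, speed_m_per_s) samples sorted by time."""
    mileage_m = 0.0
    driving_s = 0.0
    parking_s = 0.0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt = t1 - t0
        # Trapezoidal integration of speed over time gives distance.
        mileage_m += 0.5 * (v0 + v1) * dt
        if v0 == 0 and v1 == 0:
            parking_s += dt  # vehicle stationary for the whole interval
        else:
            driving_s += dt
    return {"mileage_m": mileage_m, "driving_s": driving_s, "parking_s": parking_s}
```

A driving trajectory would be derived analogously by accumulating the position samples instead of integrating speed.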
509. The server determines travel attribute information of the travel data based on the travel state data.
In one possible implementation manner, the server determines, as the travel attribute information of the travel state data that is the travel data, the travel distance information corresponding to the travel distance data, the travel locus information corresponding to the travel locus data, the travel duration information corresponding to the travel duration data, the parking duration information corresponding to the parking duration, and the like.
510. The server determines scene attribute information of the travel data based on the travel data, the scene attribute information being used to indicate a driving scene to be improved and an unusual driving scene.
In one possible implementation manner, the server identifies environmental state data, vehicle state data and driving state data as driving data, obtains driving data meeting a first target condition in the driving data, and determines scene attribute information of the driving data based on the driving data meeting the first target condition, where the scene attribute information is used to indicate that a driving scene corresponding to the driving data meeting the first target condition is a driving scene to be improved or an unusual driving scene.
The first target condition is that the environmental attribute information is erroneous or the decision accuracy of the driving behavior is smaller than a first target threshold, where the first target threshold is any positive value.
It should be noted that identifying the driving data means mining the driving data through artificial intelligence (AI) data-flow processing, so as to determine driving scenes to be improved or unusual driving scenes from the driving data, such as lane line identification errors, obstacle identification errors, traffic jams, unusual vehicle types, and the like, and further determining the scene attribute information based on the driving data corresponding to those driving scenes.
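A toy stand-in for this AI data-flow mining is a direct rule check against the first target condition; the threshold value and field names below are hypothetical:

```python
def flag_scene_records(fused_records, accuracy_threshold=0.8):
    """Tag records satisfying the first target condition (erroneous
    environment attribute, or decision accuracy below the first target
    threshold) with scene attribute information (field names hypothetical)."""
    flagged = []
    for rec in fused_records:
        below_threshold = rec.get("decision_accuracy", 1.0) < accuracy_threshold
        if rec.get("env_attribute_error") or below_threshold:
            # Mark the record as belonging to a driving scene to improve.
            rec = dict(rec, scene_attribute="scene_to_improve")
        flagged.append(rec)
    return flagged
```

A production pipeline would replace the rule with learned detectors, but the output is the same kind of scene attribute label that the search engine later indexes.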
511. The server generates video data, in which environment attribute information, vehicle attribute information, travel attribute information, and scene attribute information of the travel data are marked, based on the environment state data, the vehicle state data, and the travel state data.
In one possible implementation manner, the server performs visualization processing on the environmental status data, the vehicle status data and the driving status data through an algorithm, and superimposes sensing, planning and control results, namely, environmental attribute information, vehicle attribute information, driving attribute information and scene attribute information, into the results after the visualization processing, so as to obtain the video data.
The environment state data, the vehicle state data and the driving state data are subjected to visual processing to generate video data, and then the scene is reproduced highly through the video data, so that the safety officer can check conveniently.
512. The server determines response attribute information of the driving data based on the video data, wherein the response attribute information is used for indicating the driving data with the decision accuracy of the driving behavior in the video data being smaller than a second target threshold value.
In one possible implementation, the server sends the video data to the terminal. The terminal receives the video data generated based on the running data of the autonomous vehicle and plays it, so that a safety officer, by playing back the video data, identifies the running data for which the decision accuracy of the driving behavior is smaller than the second target threshold, marks that running data to obtain marking information, and sends the marking information to the server. The server receives the marking information returned by the terminal based on the video data and, based on the marking information, adds an attribute tag to the running data as the response attribute information of the running data, where the marking information is used to indicate the running data for which the decision accuracy of the driving behavior in the video data is smaller than the second target threshold.
The decision accuracy of the driving behavior is smaller than the driving data of the second target threshold, namely the driving data corresponding to the defect (Bug) or the non-ideal system response in the processing process.
513. The server determines road attribute information of the travel data based on the video data.
In one possible implementation, while the terminal plays the video data, the safety officer marks the road state in at least one frame of image of the video data and inputs the resulting road state data into the terminal. The terminal acquires the road state data input based on the at least one frame of image and sends it to the server; the server receives the road state data sent by the terminal and determines the road attribute information of the driving data based on it, where the road state data is used to indicate the road condition during the driving of the autonomous vehicle.
On the basis that the environment attribute information is determined, the road attribute information is further determined, the accuracy of determining the attribute information is improved, and the accuracy of the data fusion process is further improved.
514. And the server performs data fusion on the at least one attribute information and the running data to obtain the running data associated with the at least one attribute information.
In one possible implementation manner, the server adds the at least one attribute information to the travel data corresponding to the time information according to the at least one attribute information and the time information corresponding to the travel data, so as to obtain fusion data, wherein the fusion data comprises the travel data and the attribute information which are correspondingly stored according to the time sequence indicated by the time information.
By fusing the attribute information and the running data, different attributes from different systems can be written in the same protocol on the time axis of the running data, and unification of the data and the attribute information in the time dimension is realized.
515. The server generates a search engine based on the at least one attribute information, the search engine is used for determining target running data according to the driving scene keywords when the driving scene keywords related to the driving scenes are received, the target running data are running data, the attribute information of which is matched with the driving scene keywords, in a target database, the target database stores running data taking at least one of an environment state, a vehicle state and a running state as dimensions, and different dimensions of the running data correspond to different attribute information.
In one possible implementation, the server determines the at least one attribute information as an index of the search engine, and generates the search engine based on the index and the travel data.
The customization of the search engine can be realized through the process, and the search engine can support complex inquiry, so that a user can quickly find running data corresponding to any combination of various attribute information, and the speed and efficiency of data searching are improved.
According to the scheme provided by the embodiments of the present application, data fusion of the running data and the attribute information provides an automatic-driving data-stream fusion mode, realizing tools and methods for accurately searching, from massive data, the driving data corresponding to complex or specific driving scenes; it breaks down information silos, integrates data generated at different stages and in different modes, and establishes logical connections between pieces of information. Generating video data based on the driving data fuses the perception, prediction, planning and control data with the data attributes and visually presents the original data, which is convenient for users and improves the utilization rate of the data. In addition, the customized search engine provides flexible search capability, so that different users can find the driving data corresponding to valuable driving scenes from different dimensions, which improves the efficiency of data searching and further improves the user experience.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
Fig. 6 is a schematic structural diagram of an automatic driving scene searching device according to an embodiment of the present application. Referring to fig. 6, the device includes:
the providing module 601 is configured to provide a scene search interface, where the scene search interface includes a search box, and the scene search interface is configured to provide a search function of target driving data corresponding to different driving scenes;
the keyword obtaining module 602 is configured to obtain driving scene keywords input in the search box in response to a search operation externally input on the scene search interface;
a data obtaining module 603, configured to obtain, from at least one target database, running data whose attribute information matches the driving scenario keyword as target running data, where the attribute information is used to indicate an environmental state of an external environment during running of the autonomous vehicle, and a vehicle state and a running state of the autonomous vehicle; the at least one target database stores running data with at least one of the environmental state, the vehicle state, and the running state as parameters, and different parameters of the running data correspond to different attribute information.
According to the device provided by the embodiments of the application, a scene search function is provided through the scene search interface. The driving scene keywords input in the scene search interface can indicate a specific driving scene in the automatic driving process, and the target driving data corresponding to the specific driving scene is obtained by acquiring driving data whose attribute information matches the driving scene keywords, so that manual searching is not needed and data processing efficiency is improved.
In a possible implementation manner, the data obtaining module 603 is configured to generate a search request based on the driving scenario keyword, where the search request carries the driving scenario keyword; sending the search request to a server; and receiving the driving data acquired by the server from the at least one target database as the target driving data.
In a possible implementation manner, the providing module 601 is further configured to provide a data input interface, where the data input interface includes an input box, and the data input interface is configured to input environmental status data during running of the autonomous vehicle, where the environmental status data includes at least one of weather data, light data, road status data, time data, and location data;
The data obtaining module 603 is further configured to obtain environmental status data input in the input box in response to an input operation in the input box;
the apparatus further comprises:
and the first sending module is used for sending the environment state data to a server, wherein the environment state data is used for determining environment attribute information of the running data corresponding to the automatic driving vehicle.
In a possible implementation manner, the data obtaining module 603 is configured to obtain video data generated based on running data of the autonomous vehicle, where the video data is labeled with attribute information of the running data, and the attribute information includes at least one of environment attribute information, vehicle attribute information, running attribute information, and scene attribute information;
the apparatus further comprises:
the information acquisition module is used for acquiring marking information in the video data based on the playing of the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold value;
and the second sending module is used for sending the marking information to the server.
In a possible implementation manner, the data obtaining module 603 is further configured to obtain video data generated based on the driving data of the autonomous vehicle, where attribute information of the driving data is marked in the video data, and the attribute information includes at least one of environment attribute information, vehicle attribute information, driving attribute information, and scene attribute information;
The data obtaining module 603 is further configured to obtain, based on the playing of the video data, road status data input based on at least one frame of image in the video data, where the road status data is used to indicate a road condition during the driving process of the autonomous vehicle;
the apparatus further comprises:
and a third transmitting module for transmitting the road status data to a server, the road status data being used for determining road attribute information of the driving data.
Fig. 7 is a schematic structural diagram of an automatic driving scene searching device provided in an embodiment of the present application. Referring to fig. 7, the device includes:
the data acquisition module 701 is configured to acquire driving data of an autonomous vehicle, where the driving data is data acquired by the autonomous vehicle during driving;
an information acquisition module 702 for acquiring at least one attribute information of the running data, the attribute information being used to indicate an environmental state of an external environment during running of the autonomous vehicle, and at least one of a vehicle state and a running state of the autonomous vehicle;
a data fusion module 703, configured to perform data fusion on the at least one attribute information and the running data, so as to obtain the running data associated with the at least one attribute information;
And a generating module 704, configured to generate a search engine based on the at least one attribute information, where the search engine is configured to determine target driving data according to driving scene keywords when driving scene keywords related to a driving scene are received; the target driving data is driving data in a target database whose attribute information matches the driving scene keywords, the target database stores driving data with at least one of an environmental state, a vehicle state, and a driving state as dimensions, and different dimensions of the driving data correspond to different attribute information.
According to the device provided by the embodiments of the application, a scene search function is provided through the scene search interface. The driving scene keywords input in the scene search interface can indicate a specific driving scene in the automatic driving process, and the target driving data corresponding to the specific driving scene is obtained by acquiring driving data whose attribute information matches the driving scene keywords, so that manual searching is not needed and data processing efficiency is improved.
In one possible implementation, the driving data includes at least one of:
Environmental status data transmitted by the terminal, the environmental status data including at least one of weather data, light data, road status data, time data, and location data;
vehicle state data uploaded by the autonomous vehicle, the vehicle state data including at least one of speed data, acceleration data, oil mass data, oil consumption data, travel direction data, time data, and position data;
and driving state data determined based on the vehicle state data uploaded by the autonomous vehicle, the driving state data including at least one of driving mileage data, driving trajectory data, driving duration data, and parking duration data.
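The driving state data above is described as being determined from the vehicle state data uploaded by the autonomous vehicle. A minimal sketch of that derivation might integrate speed samples into mileage and accumulate driving and parking durations; the function and field names are assumptions for illustration, not part of the patent:

```python
def derive_running_state(samples):
    """Derive running-state data from time-ordered vehicle-state samples.

    Each sample is a dict with 't' (seconds), 'speed' (m/s), and 'pos' (x, y).
    """
    mileage = 0.0
    driving_s = 0.0
    parked_s = 0.0
    for prev, cur in zip(samples, samples[1:]):
        dt = cur["t"] - prev["t"]
        # trapezoidal integration of speed -> distance travelled over this segment
        mileage += 0.5 * (prev["speed"] + cur["speed"]) * dt
        if prev["speed"] > 0 or cur["speed"] > 0:
            driving_s += dt
        else:
            parked_s += dt
    return {
        "mileage_m": mileage,                       # driving mileage data
        "driving_s": driving_s,                     # driving duration data
        "parking_s": parked_s,                      # parking duration data
        "trajectory": [s["pos"] for s in samples],  # driving trajectory data
    }

samples = [
    {"t": 0, "speed": 0.0, "pos": (0, 0)},
    {"t": 10, "speed": 10.0, "pos": (50, 0)},
    {"t": 20, "speed": 10.0, "pos": (150, 0)},
    {"t": 30, "speed": 0.0, "pos": (200, 0)},
]
state = derive_running_state(samples)
print(state["mileage_m"])  # 200.0
```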
In one possible implementation, the information obtaining module 702 is configured to at least one of:
determining environmental attribute information of the driving data based on the environmental state data;
determining vehicle attribute information of the travel data based on the vehicle state data;
based on the travel state data, travel attribute information of the travel data is determined.
In a possible implementation manner, the information obtaining module 702 is further configured to identify the running data, so as to obtain running data that meets a first target condition in the running data, where the first target condition is that the environmental attribute information is wrong or the decision accuracy of the running behavior is less than a first target threshold; based on the driving data satisfying the first target condition, scene attribute information of the driving data is determined, wherein the scene attribute information is used for indicating that a driving scene corresponding to the driving data satisfying the first target condition is a driving scene to be improved or an unusual driving scene.
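The identification step above — flagging driving data whose environmental attribute information is wrong, or whose driving-behavior decision accuracy is below the first target threshold — can be sketched as a simple filter. The field names and the threshold value are illustrative assumptions, since the patent fixes neither:

```python
FIRST_TARGET_THRESHOLD = 0.9  # assumed value for illustration

def tag_scene_attributes(records, threshold=FIRST_TARGET_THRESHOLD):
    """Attach scene attribute info to records meeting the first target condition."""
    for rec in records:
        # first target condition: wrong environmental attribute info,
        # or decision accuracy of the driving behavior below the threshold
        if rec.get("env_attr_error") or rec.get("decision_accuracy", 1.0) < threshold:
            # mark the scene as 'to be improved' / an unusual driving scene
            rec["scene_attr"] = "to_be_improved"
    return records

records = [
    {"id": 1, "decision_accuracy": 0.95},
    {"id": 2, "decision_accuracy": 0.40},
    {"id": 3, "env_attr_error": True, "decision_accuracy": 0.99},
]
tagged = tag_scene_attributes(records)
print([r["id"] for r in tagged if r.get("scene_attr")])  # [2, 3]
```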
In a possible implementation manner, the information obtaining module 702 is further configured to generate video data based on the environmental status data, the vehicle status data, and the driving status data, where the video data is marked with environmental attribute information, vehicle attribute information, driving attribute information, and scene attribute information of the driving data; transmit the video data to a terminal; receive marking information returned by the terminal based on the video data, where the marking information is used to indicate driving data whose decision accuracy of driving behaviors in the video data is less than a second target threshold; and add a corresponding attribute tag to the driving data based on the marking information as response attribute information of the driving data.
In a possible implementation manner, the information obtaining module 702 is further configured to receive road status data sent by the terminal; road attribute information of the travel data is determined based on the road state data.
In a possible implementation manner, the data fusion module 703 is configured to add the at least one attribute information to the travel data corresponding to the time information according to the at least one attribute information and the time information corresponding to the travel data, to obtain fusion data, where the fusion data includes the travel data and the attribute information stored correspondingly according to the time sequence indicated by the time information.
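The fusion described here — attaching each piece of attribute information to the travel data that shares its time information, then storing both in the time order that information indicates — might look like the following sketch. The data structures and field names are assumptions for illustration:

```python
def fuse_by_time(running_data, attributes):
    """Attach attribute info to running data with matching timestamps,
    returning fusion data ordered by the time information."""
    attrs_by_t = {}
    for attr in attributes:
        attrs_by_t.setdefault(attr["t"], []).append(attr["tag"])
    fused = []
    # store travel data and its attributes correspondingly, in time order
    for item in sorted(running_data, key=lambda d: d["t"]):
        fused.append({**item, "attrs": attrs_by_t.get(item["t"], [])})
    return fused

running = [{"t": 2, "speed": 8.0}, {"t": 1, "speed": 5.0}]
attrs = [
    {"t": 1, "tag": "rainy"},
    {"t": 2, "tag": "intersection"},
    {"t": 2, "tag": "night"},
]
print(fuse_by_time(running, attrs))
```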
In one possible implementation, the generating module 704 is configured to determine the at least one attribute information as an index of the search engine; the search engine is generated based on the index and the travel data.
In one possible implementation, the apparatus further includes:
the determining module is used for responding to the search request of the terminal and determining the similarity between the driving scene keywords carried by the search request and at least one attribute information;
the determining module is further configured to determine attribute information with the similarity greater than a third target threshold as target attribute information matched with the search keyword;
the data obtaining module 701 is further configured to obtain, from the at least one target database, driving data corresponding to the target attribute information, as the target driving data.
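The matching flow above — comparing the driving scene keywords against each piece of attribute information and keeping those whose similarity exceeds the third target threshold — can be sketched with a token-overlap (Jaccard) similarity. The metric and threshold are illustrative assumptions, since the patent does not fix a particular similarity measure:

```python
def jaccard(a, b):
    """Token-overlap similarity between two phrases, in [0, 1]."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def match_attributes(keyword, attribute_infos, third_target_threshold=0.5):
    """Return attribute info whose similarity to the keyword exceeds the threshold."""
    return [a for a in attribute_infos
            if jaccard(keyword, a) > third_target_threshold]

attrs = ["rainy night intersection", "sunny highway", "rainy intersection"]
print(match_attributes("rainy intersection", attrs))
```

The matched attribute information would then be used as the target attribute information to look up the corresponding driving data in the target database.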
It should be noted that when the automatic driving scene searching device provided in the above embodiment searches driving scenes during automatic driving, the division of the above functional modules is merely used as an example for illustration; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the terminal/server is divided into different functional modules to complete all or part of the functions described above. In addition, the automatic driving scene searching device provided in the above embodiment belongs to the same concept as the automatic driving scene searching method embodiment; its detailed implementation process is described in the method embodiment and is not repeated here.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 800 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: one or more processors 801, and one or more memories 802.
In some embodiments, the terminal 800 may further optionally include: a peripheral interface 803, and at least one peripheral. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Individual peripheral devices may be connected to the peripheral device interface 803 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, a display 805, a camera assembly 806, audio circuitry 807, a positioning assembly 808, and a power supply 809.
The radio frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 804 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to collect touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this time, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, disposed on the front panel of the terminal 800; in other embodiments, there may be at least two displays 805, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. The display 805 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display 805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and virtual reality (VR) shooting functions by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The positioning component 808 is used to locate the current geographic location of the terminal 800 to enable navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
A power supply 809 is used to power the various components in the terminal 800. The power supply 809 may be an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyroscope sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815, and proximity sensor 816.
The acceleration sensor 811 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 801 may control the display screen 805 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 811. Acceleration sensor 811 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may collect a 3D motion of the user to the terminal 800 in cooperation with the acceleration sensor 811. The processor 801 may implement the following functions based on the data collected by the gyro sensor 812: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed at a side frame of the terminal 800 and/or at a lower layer of the display 805. When the pressure sensor 813 is disposed on a side frame of the terminal 800, a grip signal of the terminal 800 by a user may be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at the lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 is used to collect a user's fingerprint, and the processor 801 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user's identity based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 814 may be disposed on the front, back, or side of the terminal 800. When a physical key or vendor logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical key or vendor logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the display screen 805 based on the intensity of ambient light collected by the optical sensor 815. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also referred to as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the display 805 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Fig. 9 is a schematic structural diagram of a server provided in an embodiment of the present application. The server 900 may vary considerably in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 901 and one or more memories 902, where at least one piece of program code is stored in the one or more memories 902, and the at least one piece of program code is loaded and executed by the one or more processors 901 to implement the methods provided in the foregoing method embodiments. Of course, the server 900 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including program code, is also provided, where the program code is executable by a processor to perform the automatic driving scene search method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product or a computer program comprising computer program code stored in a computer readable storage medium, the computer program code being read from the computer readable storage medium by a processor of a terminal/server, the computer program code being executed by the processor such that the terminal/server performs the method steps of the automatic driving scene search method provided in the above embodiments.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing description of the preferred embodiments is merely exemplary in nature and is not intended to limit the invention, but is intended to cover various modifications, substitutions, improvements, and alternatives falling within the spirit and principles of the invention.
Claims (15)
1. An automatic driving scene search method, the method comprising:
Acquiring video data generated based on driving data of an automatic driving vehicle, wherein the video data is marked with attribute information of the driving data, and the attribute information of the driving data comprises at least one of environment attribute information, vehicle attribute information, driving attribute information and scene attribute information;
based on the playing of the video data, obtaining marking information in the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold;
sending the marking information to a server;
based on the playing of the video data, acquiring road state data input based on at least one frame of image in the video data, wherein the road state data is used for indicating the road condition in the running process of the automatic driving vehicle;
transmitting the road state data to a server, wherein the road state data is used for determining road attribute information of the driving data;
providing a scene search interface, wherein the scene search interface comprises a search box, and the scene search interface is used for providing search functions of target driving data corresponding to different driving scenes;
Responding to the search operation of the external input scene search interface, and acquiring driving scene keywords input in the search box;
and acquiring running data of which attribute information is matched with the driving scene keywords as target running data, wherein the attribute information is used for indicating the environment state of the external environment, the vehicle state and the running state of the automatic driving vehicle in the running process of the automatic driving vehicle, at least one target database stores the running data taking at least one of the environment state, the vehicle state and the running state as parameters, and different parameters of the running data correspond to different attribute information.
2. The method according to claim 1, wherein the acquiring travel data whose attribute information matches the driving scenario keyword as target travel data includes:
generating a search request based on the driving scene keywords, wherein the search request carries the driving scene keywords;
sending the search request to a server;
and receiving the driving data acquired by the server from the at least one target database as the target driving data.
3. The method according to claim 1, wherein the method further comprises:
Providing a data input interface, wherein the data input interface comprises an input box and is used for inputting environmental state data in the running process of the automatic driving vehicle, and the environmental state data comprises at least one of weather data, light ray data, road state data, time data and position data;
acquiring environmental state data input in the input box in response to an input operation in the input box;
and sending the environment state data to a server, wherein the environment state data is used for determining environment attribute information of running data corresponding to the automatic driving vehicle.
4. An automatic driving scene search method, the method comprising:
acquiring driving data of an automatic driving vehicle, wherein the driving data are acquired in the driving process of the automatic driving vehicle;
acquiring at least one attribute information of the driving data, wherein the attribute information is used for indicating an environment state of an external environment in the driving process of the automatic driving vehicle and at least one of a vehicle state and a driving state of the automatic driving vehicle;
performing data fusion on the at least one attribute information and the running data to obtain the running data associated with the at least one attribute information;
Generating a search engine based on the at least one attribute information, wherein the search engine is used for determining target running data according to driving scene keywords when the driving scene keywords related to driving scenes are received, the target running data are running data, the attribute information of which is matched with the driving scene keywords, in a target database, the target database stores the running data taking at least one of an environment state, a vehicle state and a running state as dimensions, and different dimensions of the running data correspond to different attribute information;
wherein, the obtaining the at least one attribute information of the driving data further includes:
identifying the running data to obtain the running data meeting a first target condition in the running data, wherein the first target condition is that the environment attribute information is wrong or the decision accuracy of the running behavior is smaller than a first target threshold;
determining scene attribute information of the driving data based on the driving data meeting the first target condition, wherein the scene attribute information is used for indicating that a driving scene corresponding to the driving data meeting the first target condition is a driving scene to be improved or an unusual driving scene;
Generating video data based on environment state data, vehicle state data and driving state data, wherein the video data is marked with environment attribute information, vehicle attribute information, driving attribute information and scene attribute information of the driving data;
transmitting the video data to a terminal;
receiving marking information returned by the terminal based on the video data, wherein the marking information is used for indicating driving data with the decision accuracy of driving behaviors in the video data smaller than a second target threshold;
and adding a corresponding attribute tag in the running data based on the marking information to serve as response attribute information of the running data.
5. The method of claim 4, wherein the travel data comprises at least one of:
environmental status data transmitted by a terminal, the environmental status data including at least one of weather data, light data, road status data, time data, and location data;
vehicle state data uploaded by the autonomous vehicle, the vehicle state data including at least one of speed data, acceleration data, fuel amount data, fuel consumption data, travel direction data, time data, and position data;
And running state data determined based on the vehicle state data uploaded by the autonomous vehicle, wherein the running state data comprises at least one of running mileage data, running track data, running duration data and parking duration data.
6. The method of claim 5, wherein the acquiring at least one piece of attribute information of the driving data comprises at least one of:
determining environment attribute information of the driving data based on the environment state data;
determining vehicle attribute information of the driving data based on the vehicle state data; and
determining driving attribute information of the driving data based on the running state data.
7. The method of claim 4, wherein the acquiring at least one piece of attribute information of the driving data further comprises:
receiving road state data sent by a terminal; and
determining road attribute information of the driving data based on the road state data.
8. The method of claim 4, wherein fusing the at least one piece of attribute information with the driving data to obtain the driving data associated with the at least one piece of attribute information comprises:
adding the at least one piece of attribute information to the driving data corresponding to the same time information, according to the time information of the attribute information and of the driving data, to obtain fused data, the fused data comprising the driving data and the attribute information stored correspondingly in the chronological order indicated by the time information.
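The time-keyed fusion of claim 8 — attaching attribute information to the driving data that carries the same time information, then keeping chronological order — might look like this minimal sketch, assuming a dict-with-`"time"`-key schema not specified by the patent:

```python
from collections import defaultdict

def fuse_by_time(driving_data, attributes):
    """Attach each attribute record to the driving data sharing its timestamp,
    then return the fused records in chronological order.
    Both inputs are lists of dicts with a 'time' key (an assumed schema)."""
    attrs_at = defaultdict(list)
    for a in attributes:
        attrs_at[a["time"]].append(a)
    fused = [{**d, "attributes": attrs_at.get(d["time"], [])} for d in driving_data]
    return sorted(fused, key=lambda d: d["time"])
```

Records with no matching attribute at their timestamp simply carry an empty attribute list.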
9. The method of claim 4, wherein generating a search engine based on the at least one piece of attribute information comprises:
determining the at least one piece of attribute information as an index of the search engine; and
generating the search engine based on the index and the driving data.
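Using attribute information as the search-engine index, as in claim 9, amounts to building an inverted index from attribute values to driving-data records; a minimal sketch under an assumed record schema (the patent itself does not prescribe one):

```python
def build_index(records):
    """Build an inverted index mapping each attribute value to the indices
    of the records that carry it. Each record is a dict with an
    'attributes' list (assumed schema)."""
    index = {}
    for i, rec in enumerate(records):
        for attr in rec["attributes"]:
            index.setdefault(attr, []).append(i)
    return index
```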
10. The method of claim 4, wherein after generating the search engine based on the at least one piece of attribute information, the method further comprises:
in response to a search request from a terminal, determining the similarity between a driving scene keyword carried in the search request and the at least one piece of attribute information;
determining attribute information whose similarity is greater than a third target threshold as target attribute information matching the search keyword; and
acquiring, from the at least one target database, the driving data corresponding to the target attribute information as the target driving data.
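Claim 10's keyword matching can be illustrated with plain string similarity standing in for whatever similarity measure the patent leaves unspecified; the 0.6 threshold is an assumed value for the "third target threshold":

```python
from difflib import SequenceMatcher

THIRD_TARGET_THRESHOLD = 0.6  # assumed value for the "third target threshold"

def search(keyword, index, records):
    """Return the driving-data records whose attribute information is
    sufficiently similar to the driving scene keyword, using the inverted
    index built from the attribute information."""
    hits = []
    for attr, rec_ids in index.items():
        if SequenceMatcher(None, keyword, attr).ratio() > THIRD_TARGET_THRESHOLD:
            hits.extend(records[i] for i in rec_ids)
    return hits
```

In practice an embedding-based or domain-specific similarity measure could replace the string ratio without changing the surrounding flow.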
11. An automatic driving scene search apparatus, the apparatus comprising:
a data acquisition module configured to acquire video data generated based on driving data of an autonomous vehicle, the video data being marked with attribute information of the driving data, the attribute information comprising at least one of environment attribute information, vehicle attribute information, driving attribute information and scene attribute information;
an information acquisition module configured to acquire, during playback of the video data, marking information in the video data, the marking information indicating driving data in the video data whose decision accuracy of the driving behavior is less than a second target threshold;
a second sending module configured to send the marking information to a server;
the data acquisition module being further configured to acquire, during playback of the video data, road state data input based on at least one frame of image in the video data, the road state data indicating road conditions during driving of the autonomous vehicle;
a third sending module configured to send the road state data to a server, the road state data being used to determine road attribute information of the driving data;
a scene search interface comprising a search box and configured to provide a search function for target driving data corresponding to different driving scenes;
a keyword acquisition module configured to acquire, in response to an externally input search operation on the scene search interface, a driving scene keyword entered in the search box; and
the data acquisition module being further configured to acquire, from at least one target database, driving data whose attribute information matches the driving scene keyword, the attribute information indicating the environment state of the external environment and the vehicle state and running state of the autonomous vehicle during driving, the at least one target database storing driving data with at least one of the environment state, the vehicle state and the running state as parameters, different parameters of the driving data corresponding to different attribute information.
12. An automatic driving scene search apparatus, the apparatus comprising:
a data acquisition module configured to acquire driving data of an autonomous vehicle, the driving data being collected during driving of the autonomous vehicle;
an information acquisition module configured to acquire at least one piece of attribute information of the driving data, the attribute information indicating the environment state of the external environment and at least one of the vehicle state and the running state of the autonomous vehicle during driving;
a data fusion module configured to fuse the at least one piece of attribute information with the driving data to obtain the driving data associated with the at least one piece of attribute information; and
a generation module configured to generate a search engine based on the at least one piece of attribute information, the search engine being configured to determine, when a driving scene keyword related to a driving scene is received, target driving data according to the driving scene keyword, the target driving data being driving data in a target database whose attribute information matches the driving scene keyword, the target database storing driving data with at least one of the environment state, the vehicle state and the running state as dimensions, different dimensions of the driving data corresponding to different attribute information;
wherein the information acquisition module is further configured to:
identify the driving data to obtain driving data that meets a first target condition, the first target condition being that the environment attribute information is incorrect or that the decision accuracy of the driving behavior is less than a first target threshold;
determine scene attribute information based on the driving data meeting the first target condition, the scene attribute information indicating that the driving scene corresponding to that driving data is a driving scene to be improved or an unusual driving scene;
generate video data based on the environment state data, the vehicle state data and the running state data, the video data being marked with the environment attribute information, vehicle attribute information, driving attribute information and scene attribute information of the driving data;
transmit the video data to a terminal;
receive marking information returned by the terminal based on the video data, the marking information indicating driving data in the video data whose decision accuracy of the driving behavior is less than a second target threshold; and
add a corresponding attribute tag to the driving data based on the marking information, as response attribute information of the driving data.
13. A terminal comprising one or more processors and one or more memories, the one or more memories storing at least one program code that is loaded and executed by the one or more processors to implement the operations performed by the automatic driving scene search method of any one of claims 1 to 3.
14. A server comprising one or more processors and one or more memories, the one or more memories storing at least one program code that is loaded and executed by the one or more processors to implement the operations performed by the automatic driving scene search method of any one of claims 4 to 10.
15. A computer-readable storage medium storing at least one program code that is loaded and executed by a processor to implement the operations performed by the automatic driving scene search method of any one of claims 1 to 3, or the operations performed by the automatic driving scene search method of any one of claims 4 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011285118.5A CN112269939B (en) | 2020-11-17 | 2020-11-17 | Automatic driving scene searching method, device, terminal, server and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112269939A CN112269939A (en) | 2021-01-26 |
CN112269939B true CN112269939B (en) | 2023-05-30 |
Family
ID=74340696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011285118.5A Active CN112269939B (en) | 2020-11-17 | 2020-11-17 | Automatic driving scene searching method, device, terminal, server and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112269939B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113297530B (en) * | 2021-04-15 | 2024-04-09 | 南京大学 | Automatic driving black box test system based on scene search |
US11698910B1 (en) * | 2022-07-21 | 2023-07-11 | Plusai, Inc. | Methods and apparatus for natural language-based safety case discovery to train a machine learning model for a driving system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408710A (en) * | 2018-09-26 | 2019-03-01 | 斑马网络技术有限公司 | Search result optimization method, device, system and storage medium |
CN110843794A (en) * | 2020-01-15 | 2020-02-28 | 北京三快在线科技有限公司 | Driving scene understanding method and device and trajectory planning method and device |
CN111038497A (en) * | 2019-12-25 | 2020-04-21 | 苏州智加科技有限公司 | Automatic driving control method and device, vehicle-mounted terminal and readable storage medium |
CN111694973A (en) * | 2020-06-09 | 2020-09-22 | 北京百度网讯科技有限公司 | Model training method and device for automatic driving scene and electronic equipment |
Non-Patent Citations (1)
Title |
---|
Research on Scene Data Extraction Technology Based on Fusion Perception; Li Yingbo et al.; Modern Computer (Professional Edition); 2019-03-25 (No. 09); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126182B (en) | Lane line detection method, lane line detection device, electronic device, and storage medium | |
CN112307642B (en) | Data processing method, device, system, computer equipment and storage medium | |
CN111114554B (en) | Method, device, terminal and storage medium for predicting travel track | |
CN110095128B (en) | Method, device, equipment and storage medium for acquiring missing road information | |
CN111182453A (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN111125442B (en) | Data labeling method and device | |
CN111192341A (en) | Method and device for generating high-precision map, automatic driving equipment and storage medium | |
CN111854780B (en) | Vehicle navigation method, device, vehicle, electronic equipment and storage medium | |
CN113160427A (en) | Virtual scene creating method, device, equipment and storage medium | |
CN112667290B (en) | Instruction management method, device, equipment and computer readable storage medium | |
CN113205515B (en) | Target detection method, device and computer storage medium | |
CN112269939B (en) | Automatic driving scene searching method, device, terminal, server and medium | |
CN116052461A (en) | Virtual parking space determining method, display method, device, equipment, medium and program | |
CN112991439B (en) | Method, device, electronic equipment and medium for positioning target object | |
CN113361386B (en) | Virtual scene processing method, device, equipment and storage medium | |
CN111984755B (en) | Method and device for determining target parking spot, electronic equipment and storage medium | |
CN110990728B (en) | Method, device, equipment and storage medium for managing interest point information | |
CN111754564B (en) | Video display method, device, equipment and storage medium | |
CN112365088B (en) | Method, device and equipment for determining travel key points and readable storage medium | |
CN114598992A (en) | Information interaction method, device, equipment and computer readable storage medium | |
CN110399688B (en) | Method and device for determining environment working condition of automatic driving and storage medium | |
CN112818243A (en) | Navigation route recommendation method, device, equipment and storage medium | |
CN111259252A (en) | User identification recognition method and device, computer equipment and storage medium | |
CN117782115B (en) | Automatic driving route generation method | |
CN118025201B (en) | Method and device for processing data of automatic driving system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||