CN116151270A - Parking test system and method - Google Patents


Info

Publication number
CN116151270A
CN116151270A
Authority
CN
China
Prior art keywords
virtual scene
semantic understanding
feature vector
test virtual
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310165823.9A
Other languages
Chinese (zh)
Inventor
苏星溢
王佩生
李杨
胡旭
曾成
Current Assignee
Chongqing Seres New Energy Automobile Design Institute Co Ltd
Original Assignee
Chongqing Seres New Energy Automobile Design Institute Co Ltd
Application filed by Chongqing Seres New Energy Automobile Design Institute Co Ltd
Priority application: CN202310165823.9A
Publication: CN116151270A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a parking test system and a parking test method. The obtained text description of the parking test virtual scene is first segmented into words, and a word embedding layer converts the resulting word sequence into a sequence of word embedding vectors. The sequence is then passed through a first semantic encoder to obtain a first-scale test virtual scene semantic understanding feature vector and through a second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector. The two feature vectors are fused into a multi-scale test virtual scene semantic understanding feature vector, which is finally passed through a virtual scene generator to obtain a virtual test scene graph. In this way, the constraint on manpower can be relieved and cost reduced.

Description

Parking test system and method
Technical Field
The present disclosure relates to the field of intelligent testing technology, and more particularly, to a parking test system and method.
Background
A parking system uses sensors mounted on the vehicle body, such as ultrasonic radar, surround-view cameras, a driving camera, and lidar, to detect available spaces and plan a target parking position. It dynamically plans a parking path in real time and guides the driver, or directly controls the steering wheel, to drive into the space, eliminating visual blind areas around the vehicle and helping the driver park more accurately.
A vehicle equipped with a parking system requires testing of that system before the vehicle is put on the market. In general, parking tests are divided into real-vehicle tests and simulation tests. In simulation tests covering different parking environments, the construction and simulation of the test scenes often consumes a great deal of manpower and time.
Thus, an optimized parking test scheme is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiments of the application provide a parking test system and a parking test method. The obtained text description of the parking test virtual scene is first segmented into words, and a word embedding layer converts the resulting word sequence into a sequence of word embedding vectors. The sequence is then passed through a first semantic encoder to obtain a first-scale test virtual scene semantic understanding feature vector and through a second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector. The two feature vectors are fused into a multi-scale test virtual scene semantic understanding feature vector, which is finally passed through a virtual scene generator to obtain a virtual test scene graph. In this way, the constraint on manpower can be relieved and cost reduced.
According to one aspect of the present application, there is provided a parking test system, comprising:
the parking virtual scene description acquisition module is used for acquiring text description of a parking test virtual scene;
the word embedding module is used for obtaining a sequence of word embedding vectors through a word embedding layer after word segmentation processing is carried out on the text description of the parking test virtual scene;
the first semantic understanding module is used for enabling the sequence of the word embedded vectors to pass through a first semantic encoder to obtain first scale test virtual scene semantic understanding feature vectors;
the second semantic understanding module is used for passing the sequence of word embedding vectors through a transformer-based second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector;
the multi-scale fusion module is used for fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; and
and the virtual scene generation module is used for passing the multi-scale test virtual scene semantic understanding feature vector through a virtual scene generator based on a generative adversarial network (GAN) to obtain a virtual test scene graph.
In the above parking test system, the first semantic encoder is a bidirectional long short-term memory (BiLSTM) neural network model.
In the above parking test system, the transformer-based second semantic encoder is a BERT model.
In the above parking test system, the second semantic understanding module includes:
a context encoding unit for inputting the sequence of word embedding vectors into the transformer-based second semantic encoder to obtain a plurality of test virtual scene semantic feature vectors; and
and the cascading unit is used for cascading the plurality of test virtual scene semantic feature vectors to obtain the second-scale test virtual scene semantic understanding feature vector.
In the above parking test system, the context encoding unit includes:
the query vector construction secondary subunit is used for carrying out one-dimensional arrangement on the sequence of the word embedded vector to obtain a global word sequence feature vector;
a self-attention secondary subunit, configured to calculate a product between the global word sequence feature vector and a transpose vector of each word vector in the sequence of word embedding vectors to obtain a plurality of self-attention association matrices;
the standardized secondary subunit is used for respectively carrying out standardized processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices;
the attention degree calculating secondary subunit is used for obtaining a plurality of probability values through a Softmax classification function by using each normalized self-attention correlation matrix in the normalized self-attention correlation matrices; and
and the attention applying secondary subunit is used for weighting each word embedding vector in the sequence of word embedding vectors by taking each probability value in the plurality of probability values as a weight so as to obtain the plurality of test virtual scene semantic feature vectors.
In the above parking test system, the multi-scale fusion module is further configured to: fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector by the following formula to obtain the multi-scale test virtual scene semantic understanding feature vector;
wherein the formula is:

[fusion formula not reproduced in the source text; it combines the two feature vectors using the elementwise exponential exp(·)]

where V_i denotes the first-scale test virtual scene semantic understanding feature vector, V_j denotes the second-scale test virtual scene semantic understanding feature vector, V_c denotes the multi-scale test virtual scene semantic understanding feature vector, and exp(·) denotes the elementwise exponential of a vector, i.e., the natural exponential function evaluated at the feature value of each position in the vector.
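Since the formula itself is only available as an image in the source, its exact form cannot be reproduced here. The numpy sketch below shows one plausible reading consistent with the surrounding description, a positionwise softmax-style weighting of the two vectors; the weighting scheme is an assumption, as the text only states that exp(·) is applied elementwise.

```python
import numpy as np

def fuse(v_i, v_j):
    # Positionwise softmax weighting between the two scale vectors.
    # Assumed reconstruction: the patent only states that exp(.)
    # (the elementwise natural exponential) appears in the formula.
    w_i = np.exp(v_i) / (np.exp(v_i) + np.exp(v_j))
    w_j = np.exp(v_j) / (np.exp(v_i) + np.exp(v_j))
    return w_i * v_i + w_j * v_j

v_i = np.array([0.5, -1.0, 2.0])   # first-scale feature vector (toy values)
v_j = np.array([0.0,  1.0, 2.0])   # second-scale feature vector (toy values)
v_c = fuse(v_i, v_j)
print(v_c.shape)  # (3,)
```

One property of this reading is that fusing a vector with itself returns the vector unchanged, since both weights become 0.5 at every position.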
In the above parking test system, the generative adversarial network includes a discriminator and a generator.
According to another aspect of the present application, there is provided a parking test method, including:
acquiring text description of a parking test virtual scene;
word segmentation is carried out on the text description of the parking test virtual scene, and then a word embedding layer is used for obtaining a word embedding vector sequence;
passing the sequence of word embedded vectors through a first semantic encoder to obtain first scale test virtual scene semantic understanding feature vectors;
passing the sequence of word embedding vectors through a transformer-based second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector;
fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; and
and passing the multi-scale test virtual scene semantic understanding feature vector through a virtual scene generator based on a generative adversarial network to obtain a virtual test scene graph.
In the above parking test method, the first semantic encoder is a bidirectional long short-term memory (BiLSTM) neural network model.
In the above parking test method, the transformer-based second semantic encoder is a BERT model.
Compared with the prior art, in the parking test system and method provided by the application, the obtained text description of the parking test virtual scene is first segmented into words, and a word embedding layer converts the resulting word sequence into a sequence of word embedding vectors. The sequence is then passed through a first semantic encoder to obtain a first-scale test virtual scene semantic understanding feature vector and through a second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector. The two feature vectors are fused into a multi-scale test virtual scene semantic understanding feature vector, which is finally passed through a virtual scene generator to obtain a virtual test scene graph. In this way, the constraint on manpower can be relieved and cost reduced.
Drawings
The foregoing and other objects, features, and advantages of the present application will become more apparent from the following detailed description of its embodiments, as illustrated in the accompanying drawings. The accompanying drawings provide a further understanding of the embodiments of the application, are incorporated in and constitute a part of this specification, and illustrate the application without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is an application scenario diagram of a parking test system according to an embodiment of the present application.
Fig. 2 is a schematic block diagram of a parking test system according to an embodiment of the application.
Fig. 3 is a block diagram schematic of the second semantic understanding module in the parking test system according to an embodiment of the present application.
Fig. 4 is a block diagram schematic of the context encoding unit in the parking test system according to the embodiment of the present application.
Fig. 5 is a flowchart of a parking test method according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a system architecture of a parking test method according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Scene overview
As described above, in the simulation test for different parking environments, the construction and simulation of the test scenario often consumes a lot of manpower and time. Thus, an optimized parking test scheme is desired.
In the technical scheme of the application, the construction of the parking virtual test scene is converted into a virtual scene generation problem. A text description of the parking test virtual scene is obtained and semantically understood to produce a feature representation of the scene's semantic features. A generative adversarial network then performs adversarial generation on these semantic features to obtain a virtual test scene graph. By generating the virtual test scene graph intelligently through artificial intelligence and natural language processing technology, the manpower constraint is relieved and cost is reduced.
The parking virtual scene description acquisition module acquires the text description of the parking test virtual scene. Because this text description is unstructured, it is converted into structured data: the text is first segmented into a word sequence, and the word embedding layer maps each word in the sequence to a word embedding vector, yielding the sequence of word embedding vectors. In one specific example, the word embedding layer is built on a bag-of-words model or, alternatively, a Word2vec model.
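The segmentation-and-embedding step can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the whitespace tokenizer and the randomly initialized 8-dimensional embedding table stand in for a real word segmenter and a trained bag-of-words or Word2vec embedding layer.

```python
import numpy as np

def tokenize(text):
    # Toy whitespace word segmentation; a production system would use a
    # proper segmenter on the scene description text.
    return text.lower().split()

def build_vocab(tokens):
    # Map each distinct word to an integer id, preserving first-seen order.
    return {w: i for i, w in enumerate(dict.fromkeys(tokens))}

def embed(tokens, vocab, table):
    # Look each word id up in the embedding table, producing the
    # sequence of word embedding vectors described in the text.
    return np.stack([table[vocab[w]] for w in tokens])

rng = np.random.default_rng(0)
desc = "two cars parked beside an empty perpendicular space"
toks = tokenize(desc)
vocab = build_vocab(toks)
table = rng.normal(size=(len(vocab), 8))  # 8-dim embeddings for illustration
seq = embed(toks, vocab, table)
print(seq.shape)  # (8, 8): one 8-dim vector per word
```

The resulting (sequence length, embedding dimension) array is the structured input consumed by both semantic encoders.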
Then, the sequence of word embedding vectors is semantically understood. To improve the accuracy of semantic understanding, the technical scheme of the application performs multi-scale semantic understanding on the sequence of word embedding vectors. Specifically, the sequence of word embedding vectors is first passed through a bidirectional long short-term memory neural network model to obtain the first-scale test virtual scene semantic understanding feature vector. The long short-term memory network (LSTM) was proposed to solve the vanishing-gradient problem of the conventional recurrent neural network (RNN); its basic unit is a structure of multiple groups of neurons, called a cell. Three control gates f, i, and o, respectively called the forget gate, input gate, and output gate, realize the memory function of the LSTM when their parameters are set appropriately. The core calculation formulas are:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = o_t ⊙ tanh(c_t)

where f, i, and o denote the forget, input, and output gates, t the time step, x the input, h the hidden state, c the cell state (with c̃ the candidate cell state), σ the sigmoid activation function, W the weight matrices, b the biases, and ⊙ elementwise multiplication. To preserve the integrity of the extracted information, the industry mainstream connects the cell structure in both directions, forming a bidirectional long short-term memory network (BiLSTM).
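A single cell step implementing the gate equations above can be written directly in numpy. This is an illustrative sketch with random weights, not the patent's trained model; a BiLSTM would run a second cell over the sequence in reverse and concatenate the two final hidden states.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # One LSTM cell step following the gate equations above; each W[k]
    # maps the concatenated [h_{t-1}, x_t] to one gate's pre-activation.
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])          # forget gate
    i = sigmoid(W["i"] @ z + b["i"])          # input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate cell state
    c = f * c_prev + i * c_tilde              # new cell state
    o = sigmoid(W["o"] @ z + b["o"])          # output gate
    h = o * np.tanh(c)                        # new hidden state
    return h, c

hidden, inp = 4, 3
rng = np.random.default_rng(1)
W = {k: rng.normal(size=(hidden, hidden + inp)) for k in "fico"}
b = {k: np.zeros(hidden) for k in "fico"}
h, c = np.zeros(hidden), np.zeros(hidden)
for x_t in rng.normal(size=(5, inp)):  # run a 5-step input sequence
    h, c = lstm_step(x_t, h, c, W, b)
print(h.shape)  # (4,)
```

Because h_t = o_t ⊙ tanh(c_t) with o_t in (0, 1) and tanh in (-1, 1), every component of the hidden state stays strictly inside (-1, 1).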
Meanwhile, the sequence of word embedding vectors is passed through a transformer-based context encoder to obtain the second-scale test virtual scene semantic understanding feature vector. Through its self-attention mechanism, the transformer-based context encoder performs global context semantic understanding of the sequence of word embedding vectors, capturing long-range global context semantic association features of the sequence to derive the second-scale test virtual scene semantic understanding feature vector.
And then fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector. Preferably, the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector are fused in a cascading manner to obtain a multi-scale test virtual scene semantic understanding feature vector.
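Cascading fusion is simply vector concatenation; a minimal sketch with toy values:

```python
import numpy as np

v1 = np.array([0.2, 0.7, 0.1])          # first-scale semantic feature vector (toy values)
v2 = np.array([0.5, 0.5, 0.9, 0.3])     # second-scale semantic feature vector (toy values)
vc = np.concatenate([v1, v2])           # cascaded multi-scale feature vector
print(vc.shape)  # (7,)
```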
However, when the first-scale and second-scale test virtual scene semantic understanding feature vectors are fused by cascading to obtain the multi-scale test virtual scene semantic understanding feature vector, a semantic mismatch exists between their feature distributions for features of the same category. That is, there is a class-center offset between same-category features in the feature domains of the two vectors. This reduces the alignment of the fused feature representation between the two vectors, and thereby the structural accuracy and certainty of the virtual test scene graph generated by the generative adversarial network.
Therefore, in the technical scheme of the application, domain-adaptive class-graph topology fusion is performed on the first-scale and second-scale test virtual scene semantic understanding feature vectors:

[fusion formula not reproduced in the source text]

Taking the feature values at each position of the two vectors' feature distributions as nodes, and an informationized interpretation of the spatial distance between corresponding positions as edges, robust information interaction and propagation between the different feature domains is realized along a preset direction of the feature distribution. The fused multi-scale test virtual scene semantic understanding feature vector thus retains feature sparsity and non-network attributes at the pixel level while gaining better feature consistency: same-category target distributions in the two vectors are aligned and aggregated, improving the feature expression precision and certainty of the multi-scale test virtual scene semantic understanding feature vector.
Finally, the multi-scale test virtual scene semantic understanding feature vector is passed through a virtual scene generator based on the generative adversarial network to obtain the virtual test scene graph. The generative adversarial network includes a generator that produces the virtual test scene graph and a discriminator that measures the difference between a generated scene graph and a real one to obtain a discriminator loss value. Taking this as the loss function value, the generator's neural network parameters are updated along the direction of gradient descent, so that the virtual test scene graphs it generates approach real virtual test scene graphs.
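The adversarial objective can be illustrated with a toy numpy sketch. All shapes and the linear generator/discriminator here are placeholder assumptions for illustration; a real implementation would use deep networks for both players and update them by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy generator: maps a semantic feature vector to a flat "scene image".
G = rng.normal(scale=0.1, size=(16, 8))   # image_dim x feature_dim
# Toy discriminator: maps an image to a real/fake score.
D = rng.normal(scale=0.1, size=16)

def generate(feat):
    return G @ feat

def discriminate(img):
    return sigmoid(D @ img)

def discriminator_loss(real_img, fake_img):
    # Binary cross-entropy: push real toward 1 and fake toward 0.
    return -(np.log(discriminate(real_img)) + np.log(1.0 - discriminate(fake_img)))

feat = rng.normal(size=8)     # stand-in for the multi-scale semantic feature vector
real = rng.normal(size=16)    # stand-in for a real scene image
fake = generate(feat)
loss = discriminator_loss(real, fake)
print(loss > 0)  # True: BCE loss is always positive for finite scores
```

In training, the generator's parameters would be moved along the negative gradient of this loss (or of the standard generator loss) so that its outputs become harder to distinguish from real scene graphs.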
Based on this, the present application provides a parking test system, which includes: the parking virtual scene description acquisition module, used for acquiring a text description of a parking test virtual scene; the word embedding module, used for obtaining a sequence of word embedding vectors through a word embedding layer after word segmentation of the text description of the parking test virtual scene; the first semantic understanding module, used for passing the sequence of word embedding vectors through a first semantic encoder to obtain a first-scale test virtual scene semantic understanding feature vector; the second semantic understanding module, used for passing the sequence of word embedding vectors through a transformer-based second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector; the multi-scale fusion module, used for fusing the first-scale test virtual scene semantic understanding feature vector and the second-scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; and the virtual scene generation module, used for passing the multi-scale test virtual scene semantic understanding feature vector through a virtual scene generator based on a generative adversarial network to obtain a virtual test scene graph.
Fig. 1 is an application scenario diagram of a parking test system according to an embodiment of the present application. As shown in fig. 1, in this application scenario, first, a text description of a parking test virtual scenario (e.g., D illustrated in fig. 1) is acquired, and then, the text description of the parking test virtual scenario is input into a server (e.g., S illustrated in fig. 1) in which a parking test algorithm is deployed, wherein the server is capable of processing the text description of the parking test virtual scenario using the parking test algorithm to obtain a virtual test scenario diagram.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
Fig. 2 is a schematic block diagram of a parking test system according to an embodiment of the application. As shown in fig. 2, the parking test system 100 according to an embodiment of the present application includes: a parking virtual scene description acquiring module 110, configured to acquire a text description of a parking test virtual scene; a word embedding module 120, configured to obtain a sequence of word embedding vectors through a word embedding layer after performing word segmentation on the text description of the parking test virtual scene; a first semantic understanding module 130, configured to pass the sequence of word embedding vectors through a first semantic encoder to obtain a first-scale test virtual scene semantic understanding feature vector; a second semantic understanding module 140, configured to pass the sequence of word embedding vectors through a transformer-based second semantic encoder to obtain a second-scale test virtual scene semantic understanding feature vector; a multi-scale fusion module 150, configured to fuse the first-scale test virtual scene semantic understanding feature vector and the second-scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; and a virtual scene generation module 160, configured to pass the multi-scale test virtual scene semantic understanding feature vector through a virtual scene generator based on a generative adversarial network to obtain a virtual test scene graph.
More specifically, in the embodiment of the present application, the parking virtual scene description obtaining module 110 is configured to obtain the text description of the parking test virtual scene. The construction of the parking virtual test scene can be converted into a virtual scene generation problem: the text description of the parking test virtual scene is semantically understood to obtain a feature representation of the scene's semantic features, and a generative adversarial network performs adversarial generation on those semantic features to obtain a virtual test scene graph. Generating the virtual test scene graph intelligently through artificial intelligence and natural language processing technology relieves the manpower constraint and reduces cost.
More specifically, in the embodiment of the present application, the word embedding module 120 is configured to obtain the sequence of word embedding vectors through a word embedding layer after word segmentation of the text description of the parking test virtual scene. Because the text description is unstructured, it is converted into structured data: the text is first segmented into a word sequence, and the word embedding layer maps each word in the sequence to a word embedding vector, yielding the sequence of word embedding vectors. In one specific example, the word embedding layer is built on a bag-of-words model or, alternatively, a Word2vec model.
Then, the sequence of word embedding vectors is semantically understood. In order to improve the accuracy of semantic understanding, in the technical scheme of the application, the sequence of the word embedding vector is subjected to multi-scale semantic understanding.
More specifically, in the embodiment of the present application, the first semantic understanding module 130 is configured to pass the sequence of word embedding vectors through a first semantic encoder to obtain the first-scale test virtual scene semantic understanding feature vector. Accordingly, in one specific example, the first semantic encoder is a bidirectional long short-term memory (BiLSTM) neural network model. The long short-term memory network (LSTM) was proposed to solve the vanishing-gradient problem of the conventional recurrent neural network (RNN); its basic unit is a structure of multiple groups of neurons, called a cell.
More specifically, in the embodiment of the present application, the second semantic understanding module 140 is configured to pass the sequence of word embedding vectors through a transformer-based second semantic encoder to obtain the second-scale test virtual scene semantic understanding feature vector. Accordingly, in one specific example, the transformer-based second semantic encoder is a BERT model. Through its self-attention mechanism, the transformer-based context encoder performs global context semantic understanding of the sequence of word embedding vectors, capturing long-range global context semantic association features of the sequence to derive the second-scale test virtual scene semantic understanding feature vector.
Accordingly, in one specific example, as shown in fig. 3, the second semantic understanding module 140 includes: a context encoding unit 141, configured to input the sequence of word embedding vectors into the transformer-based second semantic encoder to obtain a plurality of test virtual scene semantic feature vectors; and a concatenation unit 142, configured to concatenate the plurality of test virtual scene semantic feature vectors to obtain the second-scale test virtual scene semantic understanding feature vector.
Accordingly, in one specific example, as shown in fig. 4, the context encoding unit 141 includes: a query vector construction secondary subunit 1411, configured to perform one-dimensional arrangement on the sequence of word embedded vectors to obtain a global word sequence feature vector; a self-attention secondary subunit 1412, configured to calculate products between the global word sequence feature vector and transpose vectors of respective word vectors in the sequence of word embedding vectors to obtain a plurality of self-attention correlation matrices; a normalization secondary subunit 1413, configured to perform normalization processing on each of the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices; a second-level attention calculating subunit 1414, configured to obtain a plurality of probability values from each normalized self-attention correlation matrix in the plurality of normalized self-attention correlation matrices by using a Softmax classification function; and an attention applying secondary subunit 1415, configured to weight each word embedding vector in the sequence of word embedding vectors with each probability value in the plurality of probability values as a weight to obtain the plurality of test virtual scene semantic feature vectors.
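The five sub-steps of the context encoding unit can be approximated in numpy as below. The patent text leaves the exact tensor shapes open, so this is an interpretive sketch: the "one-dimensional arrangement" is reduced here to a mean over the word vectors, each association matrix is the outer product of that global vector with one word vector, and each standardized matrix is reduced to a single score before the Softmax.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_encode(seq):
    # seq: (L, d) sequence of word embedding vectors.
    global_vec = seq.mean(axis=0)                             # stand-in for the 1-D arrangement
    mats = [np.outer(global_vec, w) for w in seq]             # self-attention association matrices
    mats = [(m - m.mean()) / (m.std() + 1e-8) for m in mats]  # standardization
    scores = np.array([m.max() for m in mats])                # reduce each matrix to one score
    probs = softmax(scores)                                   # one probability value per word
    return probs[:, None] * seq                               # weight each word embedding vector

seq = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = attention_encode(seq)
print(out.shape)  # (3, 2)
```

The output keeps one weighted vector per input word; concatenating these rows corresponds to the cascading unit 142 described above.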
More specifically, in the embodiment of the present application, the multi-scale fusion module 150 is configured to fuse the first-scale test virtual scene semantic understanding feature vector and the second-scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector. Preferably, the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector are fused in a cascading manner to obtain a multi-scale test virtual scene semantic understanding feature vector.
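As a concrete illustration of the cascading fusion mentioned above, cascading amounts to vector concatenation; a minimal NumPy sketch follows. The 128-dimension vectors are stand-ins for the two semantic understanding feature vectors, not sizes specified in the application.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the first scale and second scale test virtual scene
# semantic understanding feature vectors (128 dimensions is an assumption).
v_first = rng.normal(size=128)
v_second = rng.normal(size=128)

# Cascading (concatenation) fusion into the multi-scale feature vector.
v_multi = np.concatenate([v_first, v_second])
```

The fused vector simply stacks the two inputs, so both scales of semantic information are preserved side by side.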
However, in the technical solution of the present application, when the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector are fused by cascading to obtain the multi-scale test virtual scene semantic understanding feature vector, there is a semantic mismatch between the feature distribution of the first scale test virtual scene semantic understanding feature vector and the same-category features in the feature distribution of the second scale test virtual scene semantic understanding feature vector. That is, there is a class-center offset between same-category features in the feature domains of the two feature vectors. This degrades the alignment of the fused feature representation between the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector, and thereby degrades the structural accuracy and certainty of the virtual test scene graph generated by the countermeasure generation network. Therefore, in the technical solution of the present application, domain-adaptive class-graph topology fusion is performed on the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector.
Accordingly, in one specific example, the multi-scale fusion module 150 is further configured to: fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector by the following formula to obtain the multi-scale test virtual scene semantic understanding feature vector; wherein, the formula is:
[Fusion formula rendered as image BDA0004095894970000081 in the original publication; it expresses V_c in terms of V_i and V_j using the element-wise exponential exp(·)]
wherein V_i represents the first scale test virtual scene semantic understanding feature vector, V_j represents the second scale test virtual scene semantic understanding feature vector, V_c represents the multi-scale test virtual scene semantic understanding feature vector, and exp(·) represents the exponential operation on a vector, that is, computing, for each position in the vector, the natural exponential function value with the feature value at that position as the exponent.
In this way, the feature value of each position in the feature distributions of the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector is taken as a node, and the informationized interpretation of the spatial distance between corresponding positions in the feature distributions is taken as an edge, so that robust information interaction and propagation between the different feature-domain vectors are realized along the preset direction of the feature distribution. As a result, the fused multi-scale test virtual scene semantic understanding feature vector not only retains pixel-level feature sparsity and non-network attributes, but also has relatively better feature consistency; the same-category target distributions in the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector are aligned and aggregated, improving the feature expression precision and feature certainty of the multi-scale test virtual scene semantic understanding feature vector.
Finally, the multi-scale test virtual scene semantic understanding feature vector is passed through a virtual scene generator based on the countermeasure generation network to obtain a virtual test scene graph. The countermeasure generation network includes a generator for generating a virtual test scene graph and a discriminator for discriminating the difference between the generated virtual test scene graph and a real virtual test scene graph to obtain a discriminator loss function value. The neural network parameters of the generator are updated by gradient descent with the discriminator loss function value as the loss function value, so that the virtual test scene graph generated by the generator approximates the real virtual test scene graph.
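The generator/discriminator dynamic described above can be sketched with a deliberately tiny stand-in: a two-parameter linear generator and a logistic discriminator on one-dimensional data. All distributions, sizes, and learning rates here are illustrative assumptions; the actual virtual scene generator is a neural network producing scene graphs, not this toy.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 0.5, 0.0   # generator parameters: fake = a*z + b
w, c = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(200):
    real = rng.normal(loc=2.0, scale=0.5, size=64)  # stand-in for real scene graphs
    z = rng.normal(size=64)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. descend the discriminator loss.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) so that the generated samples
    # approximate the real distribution and fool the discriminator.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)
```

After training, the generator's offset `b` has been pushed from 0 toward the real data mean, which is the adversarial mechanism the module relies on.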
More specifically, in the embodiment of the present application, the virtual scenario generation module 160 is configured to pass the multi-scale test virtual scenario semantic understanding feature vector through a virtual scenario generator based on a countermeasure generation network to obtain a virtual test scenario graph.
Accordingly, in one particular example, the countermeasure generation network includes a discriminator and a generator.
In summary, the parking test system 100 according to the embodiment of the present application has been illustrated. First, word segmentation is performed on the acquired text description of the parking test virtual scene, and a word embedding layer is used to obtain a sequence of word embedding vectors. The sequence of word embedding vectors is then passed through a first semantic encoder to obtain a first scale test virtual scene semantic understanding feature vector, and through a second semantic encoder to obtain a second scale test virtual scene semantic understanding feature vector. The two feature vectors are fused to obtain a multi-scale test virtual scene semantic understanding feature vector, and finally the multi-scale test virtual scene semantic understanding feature vector is passed through a virtual scene generator to obtain a virtual test scene graph. In this way, the problem of manpower restriction can be solved and costs can be reduced.
As described above, the parking test system 100 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server running a parking test algorithm. In one example, the parking test system 100 may be integrated into the terminal device as a software module and/or a hardware module. For example, the parking test system 100 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the parking test system 100 may also be one of the many hardware modules of the terminal device.
Alternatively, in another example, the parking test system 100 and the terminal device may be separate devices, and the parking test system 100 may be connected to the terminal device via a wired and/or wireless network and transmit interaction information in an agreed data format.
Exemplary method
Fig. 5 is a flowchart of a parking test method according to an embodiment of the present application. As shown in fig. 5, a parking test method according to an embodiment of the present application includes: s110, acquiring text description of a parking test virtual scene; s120, word segmentation is carried out on the text description of the parking test virtual scene, and then a word embedding layer is used for obtaining a word embedding vector sequence; s130, enabling the sequence of the word embedded vectors to pass through a first semantic encoder to obtain first scale test virtual scene semantic understanding feature vectors; s140, enabling the sequence of the word embedded vectors to pass through a second semantic encoder based on a converter to obtain second scale test virtual scene semantic understanding feature vectors; s150, fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; and S160, enabling the multi-scale test virtual scene semantic understanding feature vector to pass through a virtual scene generator based on a countermeasure generation network to obtain a virtual test scene graph.
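Steps S110 to S160 can be sketched end to end with toy stand-ins for each component. The real system uses a two-way long-short-term memory network for S130, a converter-based Bert model for S140, and a countermeasure-generation-network generator for S160; the functions below are hypothetical placeholders that only preserve the data flow and tensor shapes between the steps.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension (an assumption for illustration)

def word_embed(tokens):                 # S120: word embedding layer stand-in
    return rng.normal(size=(len(tokens), d))

def first_encoder(x):                   # S130: stand-in for the two-way LSTM
    return x.cumsum(axis=0)[-1]         # sequential aggregation -> (d,)

def second_encoder(x):                  # S140: stand-in for the converter/Bert encoder
    p = np.exp(x @ x.mean(axis=0))
    p = p / p.sum()                     # per-word attention probabilities
    return (p[:, None] * x).reshape(-1) # cascade of weighted word vectors

def generator(v):                       # S160: stand-in for the GAN generator
    return v[: 4 * 4].reshape(4, 4)     # a tiny "virtual test scene graph"

tokens = "vehicle reverses into a perpendicular slot".split()  # S110: text description
emb = word_embed(tokens)                                       # S120
v1 = first_encoder(emb)                                        # S130: first scale vector
v2 = second_encoder(emb)                                       # S140: second scale vector
vc = np.concatenate([v1, v2])                                  # S150: multi-scale fusion
scene = generator(vc)                                          # S160: scene graph
```

Running the sketch on a six-word description yields a 16-dimension first-scale vector, a 96-dimension second-scale vector, a 112-dimension fused vector, and a 4x4 placeholder scene graph.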
Fig. 6 is a schematic diagram of a system architecture of a parking test method according to an embodiment of the present application. As shown in fig. 6, in the system architecture of the parking test method, first, a text description of a virtual scene of a parking test is acquired; then, word segmentation is carried out on the text description of the parking test virtual scene, and a word embedding layer is used for obtaining a word embedding vector sequence; then, the sequence of the word embedded vectors passes through a first semantic encoder to obtain first scale test virtual scene semantic understanding feature vectors; then, the word embedding vector sequence passes through a second semantic encoder based on a converter to obtain a second scale test virtual scene semantic understanding feature vector; then, fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; finally, the multi-scale test virtual scene semantic understanding feature vector is passed through a virtual scene generator based on a countermeasure generation network to obtain a virtual test scene graph.
In a specific example, in the parking test method, the first semantic encoder is a two-way long-short-term memory neural network model.
In a specific example, in the above parking test method, the second semantic encoder based on a converter is a Bert model based on a converter.
In a specific example, in the above parking test method, passing the sequence of word embedding vectors through a second semantic encoder based on a converter to obtain a second scale test virtual scene semantic understanding feature vector includes: inputting the sequence of word embedding vectors into the second semantic encoder based on the converter to obtain a plurality of test virtual scene semantic feature vectors; and cascading the plurality of test virtual scene semantic feature vectors to obtain the second scale test virtual scene semantic understanding feature vector.
In a specific example, in the above parking test method, the inputting the sequence of word embedding vectors into the second semantic encoder based on a converter to obtain a plurality of test virtual scene semantic feature vectors includes: performing one-dimensional arrangement on the sequence of word embedding vectors to obtain a global word sequence feature vector; calculating the products between the global word sequence feature vector and the transpose vectors of the respective word vectors in the sequence of word embedding vectors to obtain a plurality of self-attention correlation matrices; performing normalization processing on each of the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices; obtaining a plurality of probability values from each normalized self-attention correlation matrix in the plurality of normalized self-attention correlation matrices by using a Softmax classification function; and weighting each word embedding vector in the sequence of word embedding vectors with each probability value in the plurality of probability values as a weight to obtain the plurality of test virtual scene semantic feature vectors.
In a specific example, in the above parking test method, the fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector includes: fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector by the following formula to obtain the multi-scale test virtual scene semantic understanding feature vector; wherein, the formula is:
[Fusion formula rendered as image BDA0004095894970000111 in the original publication; it expresses V_c in terms of V_i and V_j using the element-wise exponential exp(·)]
wherein V_i represents the first scale test virtual scene semantic understanding feature vector, V_j represents the second scale test virtual scene semantic understanding feature vector, V_c represents the multi-scale test virtual scene semantic understanding feature vector, and exp(·) represents the exponential operation on a vector, that is, computing, for each position in the vector, the natural exponential function value with the feature value at that position as the exponent.
In one specific example, in the above parking test method, the countermeasure generation network includes a discriminator and a generator.
Here, it will be appreciated by those skilled in the art that the specific operations of the respective steps in the above-described parking test method have been described in detail in the above description of the parking test system with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", "having", and the like are open words meaning "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It is also noted that the components or steps in the apparatus, devices, and methods of the present application may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A parking test system, comprising:
the parking virtual scene description acquisition module is used for acquiring text description of a parking test virtual scene;
the word embedding module is used for obtaining a sequence of word embedding vectors through a word embedding layer after word segmentation processing is carried out on the text description of the parking test virtual scene;
the first semantic understanding module is used for enabling the sequence of the word embedded vectors to pass through a first semantic encoder to obtain first scale test virtual scene semantic understanding feature vectors;
the second semantic understanding module is used for enabling the sequence of the word embedded vectors to pass through a second semantic encoder based on a converter to obtain second scale test virtual scene semantic understanding feature vectors;
the multi-scale fusion module is used for fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector;
and the virtual scene generation module is used for enabling the multi-scale test virtual scene semantic understanding feature vector to pass through a virtual scene generator based on a countermeasure generation network to obtain a virtual test scene graph.
2. The parking test system of claim 1, wherein the first semantic encoder is a two-way long-short-term memory neural network model.
3. The parking test system of claim 2, wherein the second converter-based semantic encoder is a converter-based Bert model.
4. The parking test system of claim 3, wherein the second semantic understanding module comprises:
a context coding unit for inputting the sequence of word embedded vectors into the second semantic encoder based on the converter to obtain a plurality of test virtual scene semantic feature vectors; and
and the cascading unit is used for cascading the plurality of test virtual scene semantic feature vectors to obtain the second-scale test virtual scene semantic understanding feature vector.
5. The parking test system of claim 4, wherein the context encoding unit comprises:
the query vector construction secondary subunit is used for carrying out one-dimensional arrangement on the sequence of the word embedded vector to obtain a global word sequence feature vector;
a self-attention secondary subunit, configured to calculate a product between the global word sequence feature vector and a transpose vector of each word vector in the sequence of word embedding vectors to obtain a plurality of self-attention association matrices;
the standardized secondary subunit is used for respectively carrying out standardized processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices;
the attention degree calculating secondary subunit is used for obtaining a plurality of probability values through a Softmax classification function by using each normalized self-attention correlation matrix in the normalized self-attention correlation matrices; and
and the attention applying secondary subunit is used for weighting each word embedding vector in the sequence of word embedding vectors by taking each probability value in the plurality of probability values as a weight so as to obtain the plurality of test virtual scene semantic feature vectors.
6. The parking test system of claim 5, wherein the multi-scale fusion module is further configured to: fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector by the following formula to obtain the multi-scale test virtual scene semantic understanding feature vector;
wherein, the formula is:
[Fusion formula rendered as image FDA0004095894960000021 in the original publication; it expresses V_c in terms of V_i and V_j using the element-wise exponential exp(·)]
wherein V_i represents the first scale test virtual scene semantic understanding feature vector, V_j represents the second scale test virtual scene semantic understanding feature vector, V_c represents the multi-scale test virtual scene semantic understanding feature vector, and exp(·) represents the exponential operation on a vector, that is, computing, for each position in the vector, the natural exponential function value with the feature value at that position as the exponent.
7. The parking test system of claim 6, wherein the countermeasure generation network comprises a discriminator and a generator.
8. A parking test method, comprising:
acquiring text description of a parking test virtual scene;
word segmentation is carried out on the text description of the parking test virtual scene, and then a word embedding layer is used for obtaining a word embedding vector sequence;
passing the sequence of word embedded vectors through a first semantic encoder to obtain first scale test virtual scene semantic understanding feature vectors;
passing the sequence of word embedded vectors through a second semantic encoder based on a converter to obtain second scale test virtual scene semantic understanding feature vectors;
fusing the first scale test virtual scene semantic understanding feature vector and the second scale test virtual scene semantic understanding feature vector to obtain a multi-scale test virtual scene semantic understanding feature vector; and
and passing the multi-scale test virtual scene semantic understanding feature vector through a virtual scene generator based on a countermeasure generation network to obtain a virtual test scene graph.
9. The parking test method of claim 8, wherein the first semantic encoder is a two-way long-short-term memory neural network model.
10. The parking test method according to claim 9, wherein the second converter-based semantic encoder is a converter-based Bert model.
CN202310165823.9A 2023-02-23 2023-02-23 Parking test system and method Pending CN116151270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310165823.9A CN116151270A (en) 2023-02-23 2023-02-23 Parking test system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310165823.9A CN116151270A (en) 2023-02-23 2023-02-23 Parking test system and method

Publications (1)

Publication Number Publication Date
CN116151270A true CN116151270A (en) 2023-05-23

Family

ID=86356057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310165823.9A Pending CN116151270A (en) 2023-02-23 2023-02-23 Parking test system and method

Country Status (1)

Country Link
CN (1) CN116151270A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578288A (en) * 2023-05-30 2023-08-11 杭州行至云起科技有限公司 Structured self-defined lamp efficiency configuration method and system based on logic judgment
CN116578288B (en) * 2023-05-30 2023-11-28 杭州行至云起科技有限公司 Structured self-defined lamp efficiency configuration method and system based on logic judgment
CN116881017A (en) * 2023-07-27 2023-10-13 中国人民解放军陆军工程大学 Collaborative virtual maintenance training system and method
CN116881017B (en) * 2023-07-27 2024-05-28 中国人民解放军陆军工程大学 Collaborative virtual maintenance training system and method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination