CN118230294B - Urban road sweeping roadway condition sensing system and method based on Internet of things - Google Patents

Urban road sweeping roadway condition sensing system and method based on Internet of things

Info

Publication number
CN118230294B
CN118230294B (application CN202410583433.8A)
Authority
CN
China
Prior art keywords
obstacle
road
convnext
picture
neural network
Prior art date
Legal status
Active
Application number
CN202410583433.8A
Other languages
Chinese (zh)
Other versions
CN118230294A (en)
Inventor
陈锦福
刘清秀
崔飞易
Current Assignee
Shenzhen Jinfeijie Information Technology Service Co ltd
Original Assignee
Shenzhen Jinfeijie Information Technology Service Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jinfeijie Information Technology Service Co ltd
Priority to CN202410583433.8A
Publication of CN118230294A
Application granted
Publication of CN118230294B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/086Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/435Computation of moments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an urban road sweeping road condition sensing system and method based on the Internet of things, and relates to the field of vehicle traffic control. A total obstacle category set and an obstacle sub-category matrix are first set, which provides a basis for the subsequent collection of obstacle image samples and thus for obtaining an accurate ConvNeXt convolutional neural network model; this model in turn provides an accurate computational model for classifying the obstacle image data collected while the urban road sweeper travels on an urban road. Feature extraction is then performed on the obstacle according to the classification result produced by the ConvNeXt convolutional neural network model from the image data collected in real time, the mass and volume of the obstacle are calculated from the extracted feature information, and finally whether the urban road sweeper should avoid the obstacle is judged according to that mass and volume, so that the urban road sweeper can take the correct measures when encountering a road obstacle.

Description

Urban road sweeping roadway condition sensing system and method based on Internet of things
Technical Field
The invention belongs to the field of vehicle traffic control, and particularly relates to an urban road sweeping roadway condition sensing system and method based on the Internet of things.
Background
Road condition sensing for urban road sweepers means using a series of sensors and intelligent systems to monitor and evaluate the road environment and working conditions encountered by the sweeper while driving. Such sensing technology helps the sweeper clean roads more effectively, improves working efficiency and ensures operational safety. An urban road sweeper can then not only clean roads efficiently but also protect its operators while minimizing the impact on the environment. As technology continues to develop, the sensing systems of future sweepers will become more intelligent and automated, providing more efficient and safer service for urban cleaning work.
Chinese patent CN115731714B discloses a road environment sensing method and apparatus. The apparatus obtains road environment data from a V2X message gateway (or other devices) and feeds the data into a strong classifier composed of multiple weak classifiers to identify abnormal road conditions, yielding several abnormal-road-condition recognition results, i.e. V2X security events; the road environment sensing results are then issued to the current intersection and to other intersections. The road environment sensing device can automatically synchronize data with other systems, and the technical scheme can provide road environment sensing capability for roads without roadside equipment.
Existing urban road sweepers are usually driven manually when sweeping urban roads, requiring the operator to watch the road surface during driving. Because urban road surfaces cover a large area, this consumes considerable manpower. Moreover, treating different obstacles or pieces of garbage in the same way is risky: if the sweeper carries out its sweeping operation without avoiding a large obstacle or piece of garbage, a collision may occur and the sweeper may even be damaged.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides an urban road sweeping roadway condition sensing system and method based on the Internet of things, so as to overcome the technical problems in the prior art.
In order to solve the technical problems, the invention is realized by the following technical scheme:
The invention discloses an urban road sweeping roadway condition sensing method based on the Internet of things, which comprises the following steps of:
S1, setting a total obstacle category set; setting an obstacle sub-category matrix according to the total obstacle category set; collecting a road obstacle picture training sample set and a road obstacle picture test sample set according to the obstacle sub-category matrix, setting a corresponding obstacle picture training sample label set according to the road obstacle picture training sample set, and setting a corresponding obstacle picture test sample label set according to the road obstacle picture test sample set; then constructing a ConvNeXt convolutional neural network model and obtaining a final ConvNeXt convolutional neural network model according to the road obstacle picture training sample set, the obstacle picture training sample label set, the road obstacle picture test sample set and the obstacle picture test sample label set;
S2, acquiring image data of a road surface to be passed in real time and classifying the image data when the urban road sweeper travels on an urban road to obtain a classification result;
S3, setting the maximum mass threshold and the maximum volume threshold of obstacles the road sweeper can clean; performing feature extraction on the real-time image data corresponding to the classification result of S2 by adopting a feature extraction algorithm to obtain a first feature data set or a second feature data set; and calculating the mass and volume of the road obstacle according to the first feature data set;
S4, judging whether the urban road sweeper needs to avoid the obstacle according to the first characteristic data set or the second characteristic data set obtained in the S3;
Firstly, the total obstacle category set and the obstacle sub-category matrix are set, determining which obstacle types are to be considered and thereby providing a basis for the subsequent collection of obstacle image samples. The road obstacle picture training sample set, the road obstacle picture test sample set and the corresponding training and test sample label sets are then collected according to the total obstacle category set and the obstacle sub-category matrix, providing data support for training and testing the ConvNeXt convolutional neural network model and yielding an accurate ConvNeXt convolutional neural network model; this provides an accurate computational model for classifying the obstacle image data collected around the urban road sweeper as it travels on an urban road. Features of the obstacle are then extracted according to the classification result obtained by the ConvNeXt convolutional neural network model from the image data collected in real time, the mass and volume of the obstacle are calculated from the extracted feature information, and finally whether the urban road sweeper should avoid the obstacle is judged according to that mass and volume, so that the urban road sweeper can take the correct measures when encountering a road obstacle;
The acquisition and transmission of the obstacle image data are all carried out through the Internet of things.
Preferably, the step S1 includes the steps of:
S11, setting a total obstacle category set whose three elements denote road raised obstacles, road recessed obstacles and obstacles of the road itself, respectively;
S12, setting an obstacle sub-category matrix according to the total obstacle category set, as follows:
the elements of the first, second and third rows of the matrix denote the obstacle sub-categories belonging to the road raised obstacle type, the road recessed obstacle type and the road-itself obstacle type, respectively, and the lengths of the three rows give the total number of obstacle sub-categories belonging to each of the three types;
S13, collecting a road obstacle picture training sample set and a road obstacle picture test sample set according to the obstacle sub-category matrix; setting a corresponding obstacle picture training sample label set according to the road obstacle picture training sample set, and setting a corresponding obstacle picture test sample label set according to the road obstacle picture test sample set;
each element of the road obstacle picture training sample set is a road obstacle picture training sample, the set contains a given total number of training samples, and each element of the obstacle picture training sample label set is the label corresponding to one road obstacle picture training sample;
each element of the road obstacle picture test sample set is a road obstacle picture test sample, the set contains a given total number of test samples, and each element of the obstacle picture test sample label set is the label corresponding to one road obstacle picture test sample;
S14, constructing a ConvNeXt convolutional neural network model; inputting the road obstacle picture training sample set and the obstacle picture training sample label set into the ConvNeXt convolutional neural network model for training, and obtaining a trained ConvNeXt convolutional neural network model after training is completed; inputting the road obstacle picture test sample set and the obstacle picture test sample label set into the trained ConvNeXt convolutional neural network model for testing, and optimizing the trained ConvNeXt convolutional neural network model according to the test result to obtain the final ConvNeXt convolutional neural network model;
In the total obstacle category set, obstacles are divided into three major types: road raised obstacles, road recessed obstacles and obstacles of the road itself. A road raised obstacle is a raised object on the road that is movable and is not part of the road; a road recessed obstacle is a pit in the road; an obstacle of the road itself is an obstacle fixed to the road or an object deliberately placed on the road. The obstacle sub-category matrix subdivides each major obstacle category in the total category set, which facilitates the subsequent collection of obstacle samples.
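For illustration, the sketch below models this taxonomy as plain Python structures; the three major categories follow the text above, while the concrete sub-category names are hypothetical placeholders rather than the patent's own sub-categories.

```python
# Minimal sketch of the obstacle taxonomy described above.
# The three major categories follow the text; the sub-category
# names are hypothetical placeholders for illustration only.
OBSTACLE_CATEGORIES = ("road_raised", "road_recessed", "road_itself")

# Sub-category "matrix": one row of sub-categories per major category.
OBSTACLE_SUBCATEGORIES = {
    "road_raised":   ["stone", "brick", "fallen_branch"],
    "road_recessed": ["pothole", "missing_manhole_cover"],
    "road_itself":   ["speed_bump", "traffic_cone", "bollard"],
}

# Totals per major category (the row lengths of the matrix).
SUBCATEGORY_COUNTS = {k: len(v) for k, v in OBSTACLE_SUBCATEGORIES.items()}

if __name__ == "__main__":
    print(SUBCATEGORY_COUNTS)  # e.g. {'road_raised': 3, 'road_recessed': 2, 'road_itself': 3}
```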
Preferably, the step S14 includes the steps of:
S141, constructing the ConvNeXt convolutional neural network model, and setting the number of iterations for training the ConvNeXt convolutional neural network model, the batch size, the Adam optimizer and the initial learning rate;
S142, setting a training error threshold and tracking the current training iteration number; inputting the road obstacle picture training sample set and the obstacle picture training sample label set into the ConvNeXt convolutional neural network model according to the batch size, the Adam optimizer and the initial learning rate, and stopping training when the training error of the ConvNeXt convolutional neural network model falls below the training error threshold or the current iteration number reaches the set number of training iterations, thereby obtaining the trained ConvNeXt convolutional neural network model;
S143, setting an accuracy threshold; inputting the road obstacle picture test sample set and the obstacle picture test sample label set into the trained ConvNeXt convolutional neural network model to obtain the test accuracy; when the test accuracy reaches the accuracy threshold, taking the trained ConvNeXt convolutional neural network model as the final ConvNeXt convolutional neural network model; when the test accuracy falls below the accuracy threshold, optimizing the initial learning rate of the trained ConvNeXt convolutional neural network model to obtain an optimized ConvNeXt convolutional neural network model, and taking the optimized ConvNeXt convolutional neural network model as the final ConvNeXt convolutional neural network model;
The relevant parameters of the ConvNeXt convolutional neural network model are set, the road obstacle picture training sample set and the obstacle picture training sample label set are input into the ConvNeXt convolutional neural network model for training to obtain a trained ConvNeXt convolutional neural network model, the trained model is then tested with the road obstacle picture test sample set and the obstacle picture test sample label set, and the trained ConvNeXt convolutional neural network model is optimized according to the test result so that the model becomes more accurate.
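As a concrete illustration of S141 to S143, the sketch below trains and tests a ConvNeXt classifier with PyTorch and torchvision. The backbone (convnext_tiny), the ImageFolder-style dataset layout and every hyperparameter value are assumptions chosen for illustration; the patent text does not fix these choices.

```python
# Sketch of the S141-S143 training/testing loop, assuming a torchvision
# ConvNeXt backbone and an ImageFolder-style dataset; all hyperparameter
# values and paths are illustrative placeholders, not the patent's.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

def train_convnext(train_dir, test_dir, num_classes,
                   max_iters=50, batch_size=32, lr0=1e-3,
                   err_threshold=0.05, acc_threshold=0.90,
                   device="cuda" if torch.cuda.is_available() else "cpu"):
    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_ds = datasets.ImageFolder(train_dir, transform=tf)
    test_ds = datasets.ImageFolder(test_dir, transform=tf)
    train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    test_dl = DataLoader(test_ds, batch_size=batch_size)

    model = models.convnext_tiny(weights=None, num_classes=num_classes).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr0)  # Adam optimizer, initial learning rate lr0
    loss_fn = nn.CrossEntropyLoss()

    # S142: stop when the mean training loss drops below the threshold
    # or the iteration (epoch) budget is exhausted.
    for epoch in range(max_iters):
        model.train()
        epoch_loss = 0.0
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            epoch_loss += loss.item() * x.size(0)
        if epoch_loss / len(train_ds) < err_threshold:
            break

    # S143: measure test accuracy; if it is below the threshold the
    # learning rate would be re-optimized (see the bat-algorithm sketch below).
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test_dl:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
    acc = correct / len(test_ds)
    return model, acc, acc >= acc_threshold
```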
Preferably, in S143 the initial learning rate of the trained ConvNeXt convolutional neural network model is optimized to obtain an optimized ConvNeXt convolutional neural network model, which specifically comprises the following steps:
S1431, constructing a bat population and setting its size, each member of the population representing one bat individual; setting the first maximum iteration number and the pulse frequency interval of each bat in the bat population; setting the initial pulse loudness, the initial pulse emission rate, the loudness attenuation coefficient and the pulse frequency enhancement coefficient; setting the maximum and minimum inertia weights of the bat population;
S1432, generating a set of random learning rates according to the initial learning rate, and taking each random learning rate in the set as the initial position of one bat individual in the bat population;
S1433, setting a fitness function, which is computed from the test accuracy of the ConvNeXt convolutional neural network model under the candidate learning rate and includes an offset greater than 0;
S1434, starting the iterative operation on the bat population according to the pulse frequency interval, the initial pulse loudness, the initial pulse emission rate, the loudness attenuation coefficient and the pulse frequency enhancement coefficient, and tracking the first current population iteration number; searching for the optimal position of each bat individual in the bat population according to the fitness function, updating the position and velocity of each bat individual in every iteration, and simultaneously updating the pulse loudness and the pulse emission rate;
S1435, setting a search precision threshold and repeating S1434; when the search precision of each bat individual in the bat population reaches the search precision threshold, or the first current population iteration number reaches the first maximum iteration number, stopping the iterative process to obtain the optimal learning rate;
S1436, taking the optimal learning rate as the learning rate of the trained ConvNeXt convolutional neural network model to obtain an optimized ConvNeXt convolutional neural network model;
The bat algorithm is adopted to optimize the learning rate parameter of the trained ConvNeXt convolutional neural network model, with the test accuracy of the ConvNeXt convolutional neural network model used as the index during optimization, so that the test accuracy keeps improving as the optimization proceeds and the model classifies obstacles increasingly well.
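A simplified bat-algorithm sketch for this learning-rate search is given below. It assumes a user-supplied callback evaluate_accuracy(lr) that retrains or fine-tunes the ConvNeXt model at learning rate lr and returns its test accuracy; the parameter values and the accuracy-plus-offset fitness are illustrative and do not reproduce the patent's exact formula.

```python
# Simplified bat-algorithm sketch for searching a better learning rate
# (S1431-S1436). evaluate_accuracy(lr) is an assumed callback returning the
# test accuracy of the ConvNeXt model trained with learning rate lr.
import numpy as np

def bat_search_learning_rate(evaluate_accuracy, lr0,
                             pop_size=10, max_iters=20,
                             f_range=(0.0, 2.0), loudness=0.9, pulse_rate=0.5,
                             alpha=0.9, gamma=0.9, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # S1432: random learning rates around lr0 as the initial bat positions.
    pos = lr0 * rng.uniform(0.1, 10.0, size=pop_size)
    vel = np.zeros(pop_size)
    fitness = np.array([evaluate_accuracy(lr) + eps for lr in pos])  # accuracy-based fitness
    best = pos[np.argmax(fitness)]

    A, r = loudness, pulse_rate
    for t in range(max_iters):
        f = rng.uniform(*f_range, size=pop_size)            # pulse frequencies
        vel += (pos - best) * f
        cand = np.clip(pos + vel, 1e-6, 1.0)                # keep rates in a sane range
        # Local random walk around the current best with probability (1 - r).
        walk = rng.random(pop_size) > r
        cand[walk] = np.clip(best * (1 + 0.1 * rng.standard_normal(walk.sum())), 1e-6, 1.0)

        cand_fit = np.array([evaluate_accuracy(lr) + eps for lr in cand])
        accept = (cand_fit > fitness) & (rng.random(pop_size) < A)
        pos[accept], fitness[accept] = cand[accept], cand_fit[accept]
        best = pos[np.argmax(fitness)]

        A *= alpha                                          # loudness decay
        r = pulse_rate * (1 - np.exp(-gamma * (t + 1)))     # pulse-rate increase
    return best
```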
Preferably, the step S2 includes the steps of:
S21, acquiring image data of the road surface to be passed in real time while the urban road sweeper travels on an urban road, and recording the image data as real-time image data;
S22, classifying the real-time image data with the final ConvNeXt convolutional neural network model to obtain a classification result;
Using the final ConvNeXt convolutional neural network model to classify the real-time image data allows the urban road sweeper to classify, in real time, the obstacles encountered in its surroundings while driving, which facilitates the subsequent analysis of those obstacles.
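A minimal sketch of S21-S22 is shown below, assuming an on-board camera read through OpenCV and the final ConvNeXt model from the training sketch above; the camera index, the preprocessing and the mapping from class index to major obstacle category are all assumptions.

```python
# Sketch of S21-S22: grab a frame from an on-board camera and classify it
# with the final ConvNeXt model. The camera index, preprocessing and the
# index-to-category mapping below are assumptions for illustration.
import cv2
import torch
from torchvision import transforms

CATEGORY_NAMES = ("road_raised", "road_recessed", "road_itself")  # assumed ordering

def classify_frame(model, device="cpu"):
    # The model is assumed to already live on `device`.
    cap = cv2.VideoCapture(0)               # on-board camera (assumed index 0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tf = transforms.Compose([transforms.ToPILImage(),
                             transforms.Resize((224, 224)),
                             transforms.ToTensor()])
    x = tf(rgb).unsqueeze(0).to(device)
    model.eval()
    with torch.no_grad():
        idx = model(x).argmax(dim=1).item()
    return CATEGORY_NAMES[idx], rgb          # classification result + raw image for S3
```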
Preferably, the step S3 includes the steps of:
S31, setting the wheel diameter of the urban road sweeper, the maximum mass threshold of obstacles the urban road sweeper can clean, and the maximum volume threshold;
S32, performing feature extraction on the real-time image data corresponding to the classification result with the feature extraction algorithm to obtain a first feature data set or a second feature data set; the first feature data set comprises the length, width and height parameters of the obstacle in the real-time image data, and the second feature data set comprises the maximum distance between any two points on the periphery of the recessed obstacle in the real-time image data and the maximum recess depth of the recessed obstacle;
S33, calculating the mass and volume of the road obstacle according to the first feature data set;
By extracting the characteristics of the obstacle, the relevant attribute of the obstacle is calculated according to the extracted characteristics and the classification result, so that a basis is provided for the follow-up judgment of whether the urban road sweeper should avoid the obstacle.
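The patent text here does not spell out the mass and volume formulas, so the sketch below is only one plausible reading of S33: a cuboid approximation of the volume from the length, width and height parameters, combined with an assumed per-sub-category density table.

```python
# Hedged sketch of S33: estimate obstacle volume and mass from the first
# feature data set (length, width, height, in metres). The cuboid
# approximation and the density values are assumptions for illustration.
ASSUMED_DENSITY_KG_M3 = {"stone": 2500.0, "fallen_branch": 600.0, "default": 1000.0}

def estimate_mass_and_volume(length_m, width_m, height_m, subcategory="default"):
    volume = length_m * width_m * height_m                        # cuboid approximation
    density = ASSUMED_DENSITY_KG_M3.get(subcategory, ASSUMED_DENSITY_KG_M3["default"])
    mass = density * volume
    return mass, volume

# Example: a 0.3 m x 0.2 m x 0.15 m stone -> volume 0.009 m^3, mass ~22.5 kg.
```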
Preferably, the specific process of S32 is as follows:
When the classification result indicates a road raised obstacle, the feature extraction algorithm is applied to the corresponding real-time image data to obtain the first feature data set;
when the classification result indicates a road recessed obstacle, the feature extraction algorithm is applied to the corresponding real-time image data to obtain the second feature data set;
performing feature extraction on the real-time image data corresponding to the classification result with the feature extraction algorithm specifically comprises the following steps:
S321, recording the real-time image data corresponding to the classification result as first image data, and performing graying and normalization processing on the first image data to obtain processed first image data;
s322, performing decomposition operation on the processed first image data by adopting a two-dimensional discrete wavelet to obtain a first image data LL component;
S323, performing dimension reduction operation on the first image data LL component by adopting a linear mapping method to obtain an initial wavelet decomposition level;
S324, optimizing the initial wavelet decomposition level by adopting a genetic algorithm to obtain an optimal wavelet decomposition level;
s325, extracting the characteristics of the first image data according to the optimal wavelet decomposition level;
The wavelet decomposition hierarchy is obtained by carrying out wavelet decomposition and linear mapping operation on the first image data, and then the wavelet decomposition hierarchy is optimized by adopting a genetic algorithm, so that the extracted characteristics are more accurate.
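A minimal sketch of S321-S322 using OpenCV and PyWavelets is given below; the choice of wavelet family ('haar') and the [0, 1] normalization are assumptions, since the patent does not name them in this text.

```python
# Sketch of S321-S322: graying, normalization and a 2-D discrete wavelet
# decomposition of the first image data. pywt returns the LL (approximation)
# component together with the detail sub-bands.
import cv2
import numpy as np
import pywt

def first_image_ll(image_bgr: np.ndarray, level: int = 1, wavelet: str = "haar"):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    norm = gray / 255.0                                   # normalization to [0, 1] (assumed)
    coeffs = pywt.wavedec2(norm, wavelet, level=level)    # multi-level 2-D DWT
    ll = coeffs[0]                                        # LL component at the chosen level
    return ll
```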
Preferably, the step S324 specifically includes the following steps:
S3241, constructing a chromosome population and setting its size, each member of the population representing one chromosome; taking each chromosome in the chromosome population as a different initial wavelet decomposition level; setting the second maximum iteration number;
S3242, taking the recognition accuracy of the feature extraction algorithm on the first image data and the time consumed by the feature extraction algorithm to recognize the first image data, and defining the fitness of each chromosome in the chromosome population as a function of these two quantities;
S3243, encoding each chromosome in the chromosome population to obtain an encoded chromosome population;
S3244, starting the iteration and tracking the second current population iteration number; performing selection, crossover and mutation operations on the encoded chromosome population in each iteration to obtain the operated chromosome population;
S3245, decoding the chromosome population after the operation to obtain a decoded chromosome population;
S3246, repeating S3243, S3244 and S3245, and stopping the iteration when the second current population iteration number reaches the second maximum iteration number to obtain the optimal wavelet decomposition level;
The initial wavelet decomposition level is optimized through the selection, crossover and mutation operations of a genetic algorithm, with the time consumed by the feature extraction algorithm to recognize the first image data and its recognition accuracy taken as the optimization targets, so that the optimized wavelet decomposition level improves both the efficiency and the accuracy of image feature extraction.
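The sketch below is a simplified genetic-algorithm search over integer wavelet decomposition levels in the spirit of S3241-S3246. The callbacks accuracy_of and time_of are assumed helpers that run the feature-extraction pipeline at a candidate level, and the fitness (accuracy divided by time) is an illustrative stand-in for the patent's own formula.

```python
# Simplified genetic-algorithm sketch for choosing the wavelet decomposition
# level. accuracy_of(level) and time_of(level) are assumed callbacks.
import random

def ga_wavelet_level(accuracy_of, time_of, level_min=1, level_max=6,
                     pop_size=8, max_iters=15, cx_prob=0.7, mut_prob=0.2, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(level_min, level_max) for _ in range(pop_size)]

    def fitness(level):
        # Illustrative fitness: favor high accuracy and low extraction time.
        return accuracy_of(level) / max(time_of(level), 1e-9)

    for _ in range(max_iters):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                 # selection (truncation)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            if rng.random() < cx_prob:
                child = (a + b) // 2                      # crossover: blend the two levels
            else:
                child = a
            if rng.random() < mut_prob:                   # mutation: nudge the level
                child = min(level_max, max(level_min, child + rng.choice((-1, 1))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```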
Preferably, the specific process of S4 is as follows:
When the classification result indicates a road raised obstacle and the obstacle mass exceeds the maximum mass threshold or the obstacle volume exceeds the maximum volume threshold, the urban road sweeper needs to avoid the obstacle; when the classification result indicates a road raised obstacle and the obstacle mass does not exceed the maximum mass threshold and the obstacle volume does not exceed the maximum volume threshold, the urban road sweeper can clean the obstacle;
When the classification result indicates a road recessed obstacle and the second feature data set satisfies the first set condition relating it to the wheel diameter, the urban road sweeper does not need to avoid the obstacle; when the classification result indicates a road recessed obstacle and the second feature data set satisfies both of the other two set conditions, the urban road sweeper does not need to avoid the obstacle either; otherwise, the urban road sweeper needs to avoid the obstacle;
When the classification result indicates an obstacle of the road itself, the urban road sweeper needs to avoid the obstacle;
By comparing the calculated characteristic attribute of the obstacle with the attribute of the road sweeper, whether the road sweeper needs to avoid the obstacle or not can be judged.
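As a compact illustration of this decision logic, the sketch below implements one plausible reading of S4: the raised-obstacle branch uses the mass and volume thresholds described above, the recessed-obstacle branch compares the recess span and depth against the wheel diameter (the exact inequalities are not reproduced in this text), and obstacles of the road itself are always avoided. All numeric defaults are placeholders.

```python
# Hedged sketch of the S4 decision rule. Threshold values and the
# recessed-obstacle test are illustrative assumptions, not the patent's
# exact conditions.
def should_avoid(category, *, mass=None, volume=None,
                 recess_span=None, recess_depth=None,
                 max_mass=50.0, max_volume=0.05, wheel_diameter=0.6):
    if category == "road_raised":
        return mass > max_mass or volume > max_volume     # too heavy/large to sweep up
    if category == "road_recessed":
        # Assumed condition: a recess the wheel can safely roll over.
        return not (recess_span < wheel_diameter and recess_depth < 0.25 * wheel_diameter)
    return True                                           # obstacles of the road itself are avoided
```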
The urban road sweeping roadway condition sensing system based on the Internet of things comprises a road obstacle picture training sample acquisition module, a road obstacle picture test sample acquisition module, a road obstacle picture classification module, a road obstacle picture feature extraction module and a road sweeping vehicle avoidance judgment module;
the road obstacle picture training sample collection module is used for collecting a road obstacle picture training sample set according to the obstacle sub-category matrix;
the road obstacle picture test sample collection module is used for collecting a road obstacle picture test sample set according to the obstacle sub-category matrix;
The road obstacle picture classification module is used for acquiring image data of the road surface to be passed in real time and classifying the image data while the urban road sweeper travels on an urban road;
The road obstacle picture feature extraction module is used for carrying out feature extraction on the real-time image data corresponding to the classification result by adopting a feature extraction algorithm;
The sweeping vehicle avoidance judging module judges whether the urban sweeping vehicle needs to avoid the obstacle according to the first characteristic data set or the second characteristic data set.
The invention has the following beneficial effects:
1. The road obstacle picture training sample collection module, the road obstacle picture test sample collection module, the road obstacle picture classification module, the road obstacle picture feature extraction module and the sweeper avoidance judgment module are provided. Firstly, the total obstacle category set and the obstacle sub-category matrix are set, determining which obstacle types are to be considered and providing a basis for the subsequent collection of obstacle image samples. The road obstacle picture training sample set, the road obstacle picture test sample set and the corresponding training and test sample label sets are then collected according to the total obstacle category set and the obstacle sub-category matrix, providing data support for training and testing the ConvNeXt convolutional neural network model and yielding an accurate ConvNeXt convolutional neural network model, which in turn provides an accurate computational model for classifying the obstacle image data collected around the urban road sweeper as it travels on an urban road. Features of the obstacle are extracted according to the classification result obtained by the ConvNeXt convolutional neural network model from the image data collected in real time, the mass and volume of the obstacle are calculated from the extracted feature information, and finally whether the urban road sweeper should avoid the obstacle is judged according to that mass and volume, so that the urban road sweeper can take the correct measures when encountering a road obstacle.
2. In the invention, the total obstacle category set divides obstacles into three major types: road raised obstacles, road recessed obstacles and obstacles of the road itself. A road raised obstacle is a raised object on the road that is movable and is not part of the road; a road recessed obstacle is a pit in the road; an obstacle of the road itself is an obstacle fixed to the road or an object deliberately placed on the road. The obstacle sub-category matrix subdivides each major obstacle category in the total category set, which facilitates the subsequent collection of obstacle samples. The relevant parameters of the ConvNeXt convolutional neural network model are set, the road obstacle picture training sample set and the obstacle picture training sample label set are input into the ConvNeXt convolutional neural network model for training to obtain a trained ConvNeXt convolutional neural network model, the trained model is tested with the road obstacle picture test sample set and the obstacle picture test sample label set, and the trained ConvNeXt convolutional neural network model is optimized according to the test result so that the model becomes more accurate.
3. According to the invention, the bat algorithm is adopted to optimize the learning rate parameter of the trained ConvNeXt convolutional neural network model, with the test accuracy of the ConvNeXt convolutional neural network model used as the index during optimization, so that the test accuracy keeps improving as the optimization proceeds and the model classifies obstacles increasingly well; the wavelet decomposition level is obtained by applying wavelet decomposition and linear mapping to the first image data and is then optimized with a genetic algorithm, making the extracted features more accurate.
Of course, it is not necessary for any one product to practice the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the invention, the drawings that are needed for the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the invention, and that it is also possible for a person skilled in the art to obtain the drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of roadway condition sensing performed by the urban road sweeping roadway condition sensing system based on the Internet of things.
Detailed Description
The following description of the technical solutions in the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, based on the embodiments in the invention, which a person of ordinary skill in the art would obtain without inventive faculty, are within the scope of the invention.
In the description of the present invention, it should be understood that the terms "open," "upper," "lower," "top," "middle," "inner," and the like indicate an orientation or positional relationship, merely for convenience of description and to simplify the description, and do not indicate or imply that the components or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
The invention discloses an urban road sweeping roadway condition sensing method based on the Internet of things, which comprises the following steps of:
S1, setting a total obstacle category set; setting an obstacle sub-category matrix according to the total obstacle category set; collecting a road obstacle picture training sample set and a road obstacle picture test sample set according to the obstacle sub-category matrix, setting a corresponding obstacle picture training sample label set according to the road obstacle picture training sample set, and setting a corresponding obstacle picture test sample label set according to the road obstacle picture test sample set; then constructing a ConvNeXt convolutional neural network model and obtaining a final ConvNeXt convolutional neural network model according to the road obstacle picture training sample set, the obstacle picture training sample label set, the road obstacle picture test sample set and the obstacle picture test sample label set;
the step S1 comprises the following steps:
S11, setting a total obstacle category set whose three elements denote road raised obstacles, road recessed obstacles and obstacles of the road itself, respectively;
S12, setting an obstacle sub-category matrix according to the total obstacle category set, as follows:
the elements of the first, second and third rows of the matrix denote the obstacle sub-categories belonging to the road raised obstacle type, the road recessed obstacle type and the road-itself obstacle type, respectively, and the lengths of the three rows give the total number of obstacle sub-categories belonging to each of the three types;
S13, collecting a road obstacle picture training sample set and a road obstacle picture test sample set according to the obstacle sub-category matrix; setting a corresponding obstacle picture training sample label set according to the road obstacle picture training sample set, and setting a corresponding obstacle picture test sample label set according to the road obstacle picture test sample set;
each element of the road obstacle picture training sample set is a road obstacle picture training sample, the set contains a given total number of training samples, and each element of the obstacle picture training sample label set is the label corresponding to one road obstacle picture training sample;
each element of the road obstacle picture test sample set is a road obstacle picture test sample, the set contains a given total number of test samples, and each element of the obstacle picture test sample label set is the label corresponding to one road obstacle picture test sample;
S14, constructing a ConvNeXt convolutional neural network model; inputting the road obstacle picture training sample set and the obstacle picture training sample label set into the ConvNeXt convolutional neural network model for training, and obtaining a trained ConvNeXt convolutional neural network model after training is completed; inputting the road obstacle picture test sample set and the obstacle picture test sample label set into the trained ConvNeXt convolutional neural network model for testing, and optimizing the trained ConvNeXt convolutional neural network model according to the test result to obtain the final ConvNeXt convolutional neural network model;
the step S14 includes the steps of:
S141, constructing the ConvNeXt convolutional neural network model, and setting the number of iterations for training the ConvNeXt convolutional neural network model, the batch size, the Adam optimizer and the initial learning rate;
S142, setting a training error threshold and tracking the current training iteration number; inputting the road obstacle picture training sample set and the obstacle picture training sample label set into the ConvNeXt convolutional neural network model according to the batch size, the Adam optimizer and the initial learning rate, and stopping training when the training error of the ConvNeXt convolutional neural network model falls below the training error threshold or the current iteration number reaches the set number of training iterations, thereby obtaining the trained ConvNeXt convolutional neural network model;
S143, setting an accuracy threshold; inputting the road obstacle picture test sample set and the obstacle picture test sample label set into the trained ConvNeXt convolutional neural network model to obtain the test accuracy; when the test accuracy reaches the accuracy threshold, taking the trained ConvNeXt convolutional neural network model as the final ConvNeXt convolutional neural network model; when the test accuracy falls below the accuracy threshold, optimizing the initial learning rate of the trained ConvNeXt convolutional neural network model to obtain an optimized ConvNeXt convolutional neural network model, and taking the optimized ConvNeXt convolutional neural network model as the final ConvNeXt convolutional neural network model;
In S143, optimizing the initial learning rate of the trained ConvNeXt convolutional neural network model to obtain an optimized ConvNeXt convolutional neural network model specifically comprises the following steps:
S1431, constructing a bat population and setting its size, each member of the population representing one bat individual; setting the first maximum iteration number and the pulse frequency interval of each bat in the bat population; setting the initial pulse loudness, the initial pulse emission rate, the loudness attenuation coefficient and the pulse frequency enhancement coefficient; setting the maximum and minimum inertia weights of the bat population;
S1432, generating a set of random learning rates according to the initial learning rate, and taking each random learning rate in the set as the initial position of one bat individual in the bat population;
S1433, setting a fitness function, which is computed from the test accuracy of the ConvNeXt convolutional neural network model under the candidate learning rate and includes an offset greater than 0;
S1434, starting the iterative operation on the bat population according to the pulse frequency interval, the initial pulse loudness, the initial pulse emission rate, the loudness attenuation coefficient and the pulse frequency enhancement coefficient, and tracking the first current population iteration number; searching for the optimal position of each bat individual in the bat population according to the fitness function, updating the position and velocity of each bat individual in every iteration, and simultaneously updating the pulse loudness and the pulse emission rate;
S1435, setting a search precision threshold and repeating S1434; when the search precision of each bat individual in the bat population reaches the search precision threshold, or the first current population iteration number reaches the first maximum iteration number, stopping the iterative process to obtain the optimal learning rate;
S1436, taking the optimal learning rate as the learning rate of the trained ConvNeXt convolutional neural network model to obtain an optimized ConvNeXt convolutional neural network model;
S2, acquiring image data of a road surface to be passed in real time and classifying the image data when the urban road sweeper travels on an urban road to obtain a classification result;
The step S2 comprises the following steps:
S21, acquiring image data of the road surface to be passed in real time while the urban road sweeper travels on an urban road, and recording the image data as real-time image data;
S22, classifying the real-time image data with the final ConvNeXt convolutional neural network model to obtain a classification result;
S3, setting the maximum mass threshold and the maximum volume threshold of obstacles the road sweeper can clean; performing feature extraction on the real-time image data corresponding to the classification result of S2 by adopting a feature extraction algorithm to obtain a first feature data set or a second feature data set; and calculating the mass and volume of the road obstacle according to the first feature data set;
The step S3 comprises the following steps:
S31, setting the wheel diameter of the urban road sweeper, the maximum mass threshold of obstacles the urban road sweeper can clean, and the maximum volume threshold;
S32, performing feature extraction on the real-time image data corresponding to the classification result with the feature extraction algorithm to obtain a first feature data set or a second feature data set; the first feature data set comprises the length, width and height parameters of the obstacle in the real-time image data, and the second feature data set comprises the maximum distance between any two points on the periphery of the recessed obstacle in the real-time image data and the maximum recess depth of the recessed obstacle;
The specific process of S32 is as follows:
when the classification result indicates a road raised obstacle, the feature extraction algorithm is applied to the corresponding real-time image data to obtain the first feature data set;
when the classification result indicates a road recessed obstacle, the feature extraction algorithm is applied to the corresponding real-time image data to obtain the second feature data set;
performing feature extraction on the real-time image data corresponding to the classification result with the feature extraction algorithm specifically comprises the following steps:
S321, recording the real-time image data corresponding to the classification result as first image data, and performing graying and normalization processing on the first image data to obtain processed first image data;
s322, performing decomposition operation on the processed first image data by adopting a two-dimensional discrete wavelet to obtain a first image data LL component;
S323, performing dimension reduction operation on the first image data LL component by adopting a linear mapping method to obtain an initial wavelet decomposition level;
S324, optimizing the initial wavelet decomposition level by adopting a genetic algorithm to obtain an optimal wavelet decomposition level;
the step S324 specifically includes the following steps:
S3241, constructing a chromosome population and setting its size, each member of the population representing one chromosome; taking each chromosome in the chromosome population as a different initial wavelet decomposition level; setting the second maximum iteration number;
S3242, taking the recognition accuracy of the feature extraction algorithm on the first image data and the time consumed by the feature extraction algorithm to recognize the first image data, and defining the fitness of each chromosome in the chromosome population as a function of these two quantities;
S3243, encoding each chromosome in the chromosome population to obtain an encoded chromosome population;
S3244, starting the iteration and tracking the second current population iteration number; performing selection, crossover and mutation operations on the encoded chromosome population in each iteration to obtain the operated chromosome population;
S3245, decoding the chromosome population after the operation to obtain a decoded chromosome population;
S3246, repeating S3243, S3244 and S3245, and stopping the iteration when the second current population iteration number reaches the second maximum iteration number to obtain the optimal wavelet decomposition level;
s325, extracting the characteristics of the first image data according to the optimal wavelet decomposition level;
S33, calculating the mass and volume of the road obstacle according to the first feature data set;
S4, judging whether the urban road sweeper needs to avoid the obstacle according to the first characteristic data set or the second characteristic data set obtained in the S3;
The specific process of the S4 is as follows:
When the classification result indicates a road raised obstacle and the obstacle mass exceeds the maximum mass threshold or the obstacle volume exceeds the maximum volume threshold, the urban road sweeper needs to avoid the obstacle; when the classification result indicates a road raised obstacle and the obstacle mass does not exceed the maximum mass threshold and the obstacle volume does not exceed the maximum volume threshold, the urban road sweeper can clean the obstacle;
When the classification result indicates a road recessed obstacle and the second feature data set satisfies the first set condition relating it to the wheel diameter, the urban road sweeper does not need to avoid the obstacle; when the classification result indicates a road recessed obstacle and the second feature data set satisfies both of the other two set conditions, the urban road sweeper does not need to avoid the obstacle either; otherwise, the urban road sweeper needs to avoid the obstacle;
When the classification result indicates an obstacle of the road itself, the urban road sweeper needs to avoid the obstacle.
The urban road sweeping roadway condition sensing system based on the Internet of things comprises a road obstacle picture training sample acquisition module, a road obstacle picture test sample acquisition module, a road obstacle picture classification module, a road obstacle picture feature extraction module and a road sweeping vehicle avoidance judgment module;
the road obstacle picture training sample collection module is used for collecting a road obstacle picture training sample set according to the obstacle sub-category matrix;
the road obstacle picture test sample collection module is used for collecting a road obstacle picture test sample set according to the obstacle sub-category matrix;
The road obstacle picture classification module is used for acquiring image data of the road surface to be passed in real time and classifying the image data while the urban road sweeper travels on an urban road;
The road obstacle picture feature extraction module is used for carrying out feature extraction on the real-time image data corresponding to the classification result by adopting a feature extraction algorithm;
The sweeping vehicle avoidance judging module judges whether the urban sweeping vehicle needs to avoid the obstacle according to the first characteristic data set or the second characteristic data set.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above disclosed preferred embodiments of the invention are merely intended to help illustrate the invention. The preferred embodiments are not exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention.

Claims (5)

1. The urban road sweeping roadway condition sensing method based on the Internet of things is characterized by comprising the following steps of:
S1, setting a total obstacle category set; setting an obstacle sub-category matrix according to the total obstacle category set; collecting a road obstacle picture training sample set and a road obstacle picture test sample set according to the obstacle sub-category matrix, setting a corresponding obstacle picture training sample label set according to the road obstacle picture training sample set, and setting a corresponding obstacle picture test sample label set according to the road obstacle picture test sample set; then constructing a ConvNeXt convolutional neural network model and obtaining a final ConvNeXt convolutional neural network model according to the road obstacle picture training sample set, the obstacle picture training sample label set, the road obstacle picture test sample set and the obstacle picture test sample label set;
S2, acquiring image data of a road surface to be passed in real time and classifying the image data when the urban road sweeper travels on an urban road to obtain a classification result;
S3, setting the maximum mass threshold and the maximum volume threshold of obstacles the road sweeper can clean; performing feature extraction on the real-time image data corresponding to the classification result of S2 by adopting a feature extraction algorithm to obtain a first feature data set or a second feature data set; and calculating the mass and volume of the road obstacle according to the first feature data set;
S4, judging whether the urban road sweeper needs to avoid the obstacle according to the first characteristic data set or the second characteristic data set obtained in the S3;
the step S1 comprises the following steps:
S11, setting a total obstacle category set whose three elements denote road raised obstacles, road recessed obstacles and obstacles of the road itself, respectively;
S12, setting an obstacle sub-category matrix according to the total obstacle category set, as follows:
the elements of the first, second and third rows of the matrix denote the obstacle sub-categories belonging to the road raised obstacle type, the road recessed obstacle type and the road-itself obstacle type, respectively, and the lengths of the three rows give the total number of obstacle sub-categories belonging to each of the three types;
S13, collecting a road obstacle picture training sample set and a road obstacle picture test sample set according to the obstacle sub-category matrix; setting a corresponding obstacle picture training sample label set according to the road obstacle picture training sample set, and setting a corresponding obstacle picture test sample label set according to the road obstacle picture test sample set;
each element of the road obstacle picture training sample set is a road obstacle picture training sample, the set contains a given total number of training samples, and each element of the obstacle picture training sample label set is the label corresponding to one road obstacle picture training sample;
each element of the road obstacle picture test sample set is a road obstacle picture test sample, the set contains a given total number of test samples, and each element of the obstacle picture test sample label set is the label corresponding to one road obstacle picture test sample;
S14, constructing a ConvNeXt convolutional neural network model; inputting the road obstacle picture training sample set and the obstacle picture training sample label set into the ConvNeXt convolutional neural network model for training, and obtaining a trained ConvNeXt convolutional neural network model after training is completed; inputting the road obstacle picture test sample set and the obstacle picture test sample label set into the trained ConvNeXt convolutional neural network model for testing, and optimizing the trained ConvNeXt convolutional neural network model according to the test result to obtain the final ConvNeXt convolutional neural network model;
the step S14 includes the steps of:
S141, constructing ConvNeXt convolutional neural network models, and setting the iteration number of training the ConvNeXt convolutional neural network models as The batch size isThe optimizer is adam optimizer and the initial learning rate is
S142, setting the current training iteration times asThe training error threshold is; According to the size of the batchAdam optimizer and initial learning rateTraining the road obstacle picture to obtain a sample setObstacle picture training sample tag setInputting the training data into ConvNeXt convolutional neural network model, and when the training error of ConvNeXt convolutional neural network model is smaller thanWhen or whenWhen the training is stopped, the trained ConvNeXt convolutional neural network model is obtained;
s143, setting an accuracy threshold Testing the road obstacle picture sample setBarrier picture test sample tag setInputting the test result into a trained ConvNeXt convolutional neural network model to obtain test accuracy; When (when)When the training ConvNeXt convolutional neural network model is used as a final ConvNeXt convolutional neural network model; when (when)When in use, the initial learning rate of the trained ConvNeXt convolutional neural network model is calculatedOptimizing to obtain an optimized ConvNeXt convolutional neural network model, and taking the optimized ConvNeXt convolutional neural network model as a final ConvNeXt convolutional neural network model;
s143, initial learning rate of the trained ConvNeXt convolutional neural network model Optimizing to obtain an optimized ConvNeXt convolutional neural network model; the method specifically comprises the following steps:
S1431, constructing a bat population and setting the population size; setting a first maximum iteration number and the pulse frequency interval of each bat individual in the bat population; setting the initial pulse loudness, the initial pulse emission rate, the loudness attenuation coefficient and the pulse frequency enhancement coefficient; and setting the maximum and minimum inertial weights of the bat population;
S1432, generating a set of random learning rates according to the initial learning rate, and taking each random learning rate in the set as the initial position of a corresponding bat individual in the bat population;
S1433, setting a fitness function for evaluating each bat individual, wherein the fitness function contains an offset greater than 0;
S1434, starting an iterative operation on the bat population according to the pulse frequency interval, the initial pulse loudness, the initial pulse emission rate, the loudness attenuation coefficient and the pulse frequency enhancement coefficient, and recording the first current population iteration count; searching the optimal position of each bat individual in the bat population according to the fitness function, updating the position and velocity of each bat individual in each iteration, and simultaneously updating the pulse loudness and the pulse emission rate;
S1435, setting a search precision threshold; repeating S1434, and stopping the iterative process when the search precision of each bat individual in the bat population reaches or exceeds the search precision threshold or when the first current population iteration count reaches the first maximum iteration number, thereby obtaining the optimal learning rate;
S1436, taking the optimal learning rate as the learning rate of the trained ConvNeXt convolutional neural network model to obtain an optimized ConvNeXt convolutional neural network model;
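The learning-rate optimization of S1431-S1436 can be sketched with a standard bat algorithm over a one-dimensional learning-rate space, as below. Since the claims' fitness formula is not reproduced here, fitness() is an assumed stand-in (in practice it would score a short validation run of the ConvNeXt model at the candidate rate), and every numeric parameter is illustrative.

```python
# Hedged sketch of S1431-S1436: bat algorithm over a 1-D learning-rate space.
import math
import random

POP = 20                    # bat population size
MAX_ITER = 30               # first maximum iteration number
F_MIN, F_MAX = 0.0, 1.0     # pulse frequency interval
A0, R0 = 0.9, 0.1           # initial pulse loudness / initial pulse emission rate
ALPHA, GAMMA = 0.9, 0.9     # loudness attenuation / pulse frequency enhancement coefficients
W_MAX, W_MIN = 0.9, 0.4     # maximum / minimum inertial weights
LR_LOW, LR_HIGH = 1e-5, 1e-2

def fitness(lr):
    # Assumed surrogate fitness; replace with the fitness function of S1433.
    return -abs(math.log10(lr) - math.log10(3e-4))

pos = [random.uniform(LR_LOW, LR_HIGH) for _ in range(POP)]   # S1432: random learning rates
vel = [0.0] * POP
loud = [A0] * POP
rate = [R0] * POP
best = max(pos, key=fitness)

for t in range(MAX_ITER):
    w = W_MAX - (W_MAX - W_MIN) * t / MAX_ITER                # linearly decreasing inertia weight
    for i in range(POP):
        f = F_MIN + (F_MAX - F_MIN) * random.random()         # pulse frequency
        vel[i] = w * vel[i] + (pos[i] - best) * f             # velocity update
        cand = min(max(pos[i] + vel[i], LR_LOW), LR_HIGH)     # position update
        if random.random() > rate[i]:                         # local walk around the best bat
            cand = min(max(best + 0.001 * random.gauss(0, 1) * loud[i], LR_LOW), LR_HIGH)
        if random.random() < loud[i] and fitness(cand) > fitness(pos[i]):
            pos[i] = cand
            loud[i] *= ALPHA                                  # loudness decays on acceptance
            rate[i] = R0 * (1 - math.exp(-GAMMA * (t + 1)))   # pulse emission rate grows
        if fitness(pos[i]) > fitness(best):
            best = pos[i]

optimal_lr = best   # S1436: reused as the learning rate of the trained ConvNeXt model
```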
The step S3 comprises the following steps:
S31, setting the wheel diameter of the urban road sweeper, the maximum mass threshold of obstacles that the urban road sweeper can clean, and the maximum volume threshold of obstacles that the urban road sweeper can clean;
S32, performing feature extraction on the real-time image data corresponding to the classification result by using a feature extraction algorithm to obtain a first feature data set or a second feature data set; the first feature data set comprises the length, width and height parameters of the obstacle in the real-time image data, and the second feature data set comprises the maximum distance parameter between any two points on the periphery of the concave obstacle in the real-time image data and the maximum concave depth parameter of the concave obstacle;
S33, calculating the mass and the volume of the road obstacle according to the first feature data set;
The specific process of S32 is as follows:
when the classification result indicates a non-concave obstacle, performing feature extraction on the corresponding real-time image data by using the feature extraction algorithm to obtain the first feature data set;
when the classification result indicates a concave obstacle, performing feature extraction on the corresponding real-time image data by using the feature extraction algorithm to obtain the second feature data set;
Performing feature extraction on the real-time image data corresponding to the classification result by using the feature extraction algorithm specifically comprises the following steps:
S321, recording the real-time image data corresponding to the classification result as first image data, and performing graying and normalization processing on the first image data to obtain processed first image data;
S322, performing a decomposition operation on the processed first image data by using a two-dimensional discrete wavelet transform to obtain the LL component of the first image data;
S323, performing a dimension reduction operation on the LL component of the first image data by using a linear mapping method to obtain an initial wavelet decomposition level;
S324, optimizing the initial wavelet decomposition level by adopting a genetic algorithm to obtain an optimal wavelet decomposition level;
S325, extracting the features of the first image data according to the optimal wavelet decomposition level.
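A hedged sketch of the wavelet feature extraction in S321-S325 follows: the image is grayed and normalized, decomposed with a two-dimensional discrete wavelet transform, and the LL (approximation) band is flattened into a feature vector. The wavelet family ('haar') and the use of OpenCV and PyWavelets are assumptions; the level argument stands in for the optimal wavelet decomposition level produced by S323-S324.

```python
# Hedged sketch of S321-S325 (assumptions: 'haar' wavelet, OpenCV for graying).
import cv2
import numpy as np
import pywt

def wavelet_ll_features(image_bgr: np.ndarray, level: int = 2) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)          # S321: graying
    norm = gray.astype(np.float32) / 255.0                      # S321: normalization
    coeffs = pywt.wavedec2(norm, wavelet="haar", level=level)   # S322: 2-D discrete wavelet transform
    ll = coeffs[0]                                              # approximation (LL) band
    return ll.ravel()                                           # S325: flattened feature vector

# Usage (level would be the GA-optimized decomposition level of S324):
# features = wavelet_ll_features(cv2.imread("obstacle.jpg"), level=3)
```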
2. The urban road sweeping roadway condition sensing method based on the internet of things according to claim 1, wherein the step S2 comprises the following steps:
S21, acquiring image data of the road surface to be passed in real time while the urban road sweeper travels on an urban road, and recording the image data as real-time image data;
S22, classifying the real-time image data by using the final ConvNeXt convolutional neural network model to obtain a classification result;
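A minimal sketch of the classification step S22 is shown below, assuming the usual ImageNet normalization statistics and a 224x224 input size; model is the final ConvNeXt convolutional neural network model obtained in S14, and frame_rgb is one camera frame as an HxWx3 uint8 array.

```python
# Minimal sketch of S22 (assumptions: ImageNet normalization, 224x224 input).
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # HxWx3 uint8 array -> float tensor in [0, 1]
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify_frame(model, frame_rgb):
    """Return the predicted obstacle class index for one real-time camera frame."""
    model.eval()
    with torch.no_grad():
        x = preprocess(frame_rgb).unsqueeze(0)  # add the batch dimension
        return model(x).argmax(dim=1).item()
```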
3. The urban road sweeping roadway condition sensing method based on the internet of things of claim 2, wherein the step S324 specifically comprises the following steps:
S3241, constructing a chromosome population and setting the population size; taking each chromosome in the chromosome population as a different initial wavelet decomposition level; and setting a second maximum iteration number;
S3242, recording the recognition accuracy of the feature extraction algorithm on the first image data and the time consumed by the feature extraction algorithm to recognize the first image data, and setting the fitness of each chromosome in the chromosome population according to the recognition accuracy and the consumed time;
S3243, performing a coding operation on each chromosome in the chromosome population to obtain a coded chromosome population;
S3244, starting the iteration and recording the second current population iteration count; performing selection, crossover and mutation operations on the coded chromosome population in each iteration to obtain an operated chromosome population;
S3245, decoding the operated chromosome population to obtain a decoded chromosome population;
S3246, repeating S3243, S3244 and S3245, and stopping the iteration when the second current population iteration count reaches the second maximum iteration number to obtain the optimal wavelet decomposition level.
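A hedged sketch of the genetic-algorithm search in S3241-S3246 over candidate wavelet decomposition levels is given below. The claims' fitness formula is not reproduced here, so fitness() uses an assumed accuracy-versus-time trade-off, and evaluate_level() is a hypothetical hook into the feature-extraction pipeline.

```python
# Hedged sketch of S3241-S3246: GA over candidate wavelet decomposition levels.
import random

LEVELS = list(range(1, 7))         # candidate decomposition levels
POP, MAX_ITER, P_MUT = 12, 40, 0.1

def evaluate_level(level):
    """Hypothetical hook: returns (accuracy, seconds) of the extractor at this level."""
    acc = 1.0 - abs(level - 3) * 0.1        # placeholder response surface
    return acc, 0.02 * level

def fitness(level):
    acc, sec = evaluate_level(level)
    return acc / (1.0 + sec)                # assumed accuracy-versus-time trade-off

def encode(level):                          # S3243: coding (3-bit binary)
    return format(level, "03b")

def decode(bits):                           # S3245: decoding, clamped to the valid range
    return min(max(int(bits, 2), LEVELS[0]), LEVELS[-1])

pop = [encode(random.choice(LEVELS)) for _ in range(POP)]
for _ in range(MAX_ITER):                   # S3244: iterate until the second maximum iteration number
    scored = sorted(pop, key=lambda b: fitness(decode(b)), reverse=True)
    parents = scored[:POP // 2]             # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randint(1, 2)          # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < P_MUT:         # bit-flip mutation
            i = random.randrange(3)
            child = child[:i] + ("1" if child[i] == "0" else "0") + child[i + 1:]
        children.append(child)
    pop = parents + children

best_level = decode(max(pop, key=lambda b: fitness(decode(b))))   # S3246: optimal level
```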
4. The urban road sweeping roadway condition sensing method based on the internet of things of claim 3, wherein the specific process of S4 is as follows:
when the classification result indicates a non-concave obstacle: if the obstacle mass exceeds the maximum mass threshold or the obstacle volume exceeds the maximum volume threshold, the urban road sweeper needs to avoid the obstacle; if the obstacle mass does not exceed the maximum mass threshold and the obstacle volume does not exceed the maximum volume threshold, the urban road sweeper can clean the obstacle;
when the classification result indicates a concave obstacle: if the maximum distance between any two points on the periphery of the concave obstacle satisfies the span condition relative to the wheel diameter, the urban road sweeper does not need to avoid the obstacle; if the maximum distance and the maximum concave depth jointly satisfy the corresponding conditions relative to the wheel diameter, the urban road sweeper likewise does not need to avoid the obstacle; otherwise, the urban road sweeper needs to avoid the obstacle;
when the classification result takes any other value, the urban road sweeper needs to avoid the obstacle.
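The decision logic of S4 can be sketched as below. The class labels, the direction of the span comparison against the wheel diameter, and the depth rule (taken here as half the wheel diameter) are assumptions reconstructed from the claim text, not values stated in it.

```python
# Hedged sketch of S4 (assumptions: class labels, span and depth comparison rules).
from dataclasses import dataclass

@dataclass
class Thresholds:
    wheel_diameter: float   # wheel diameter of the sweeper, metres
    max_mass: float         # maximum cleanable mass, kilograms
    max_volume: float       # maximum cleanable volume, cubic metres

def must_avoid(cls: str, feat: dict, th: Thresholds) -> bool:
    if cls == "solid_obstacle":          # first feature data set applies
        return feat["mass"] > th.max_mass or feat["volume"] > th.max_volume
    if cls == "depression":              # second feature data set applies
        small_span = feat["max_span"] < th.wheel_diameter            # assumed span rule
        shallow = feat["max_depth"] <= 0.5 * th.wheel_diameter       # assumed depth rule
        return not (small_span or shallow)
    return True                          # any other class: avoid

# Usage:
# must_avoid("solid_obstacle", {"mass": 2.0, "volume": 0.01}, Thresholds(0.6, 5.0, 0.05))
```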
5. A system for implementing the urban road sweeping roadway condition sensing method based on the internet of things according to any one of claims 1-4, characterized in that: the system comprises a road obstacle picture training sample collection module, a road obstacle picture test sample collection module, a road obstacle picture classification module, a road obstacle picture feature extraction module and a sweeping vehicle avoidance judgment module;
the road obstacle picture training sample collection module is used for collecting a road obstacle picture training sample set according to the obstacle sub-category matrix;
the road obstacle picture test sample collection module is used for collecting a road obstacle picture test sample set according to the obstacle sub-category matrix;
The road obstacle picture classification module is used for collecting image data of the road surface to be passed in real time while the urban road sweeper travels on an urban road and classifying the image data;
The road obstacle picture feature extraction module is used for carrying out feature extraction on the real-time image data corresponding to the classification result by adopting a feature extraction algorithm;
The sweeping vehicle avoidance judgment module is used for judging whether the urban road sweeper needs to avoid the obstacle according to the first feature data set or the second feature data set.
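A hedged skeleton of the module decomposition in claim 5 is sketched below; the class names are assumptions and the method bodies are placeholders that would delegate to the routines illustrated above.

```python
# Hedged skeleton of the claim 5 module decomposition; bodies are placeholders.
class RoadObstacleTrainingSampleCollector:
    def collect(self, subcategory_matrix):
        ...  # gather the road obstacle picture training sample set (S13)

class RoadObstacleTestSampleCollector:
    def collect(self, subcategory_matrix):
        ...  # gather the road obstacle picture test sample set (S13)

class RoadObstacleClassifier:
    def classify(self, frame):
        ...  # final ConvNeXt model inference on a real-time frame (S2)

class RoadObstacleFeatureExtractor:
    def extract(self, frame, classification_result):
        ...  # wavelet-based feature extraction (S3)

class SweeperAvoidanceJudge:
    def decide(self, feature_set, classification_result):
        ...  # avoidance decision based on the first or second feature data set (S4)
```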
CN202410583433.8A 2024-05-11 2024-05-11 Urban road sweeping roadway condition sensing system and method based on Internet of things Active CN118230294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410583433.8A CN118230294B (en) 2024-05-11 2024-05-11 Urban road sweeping roadway condition sensing system and method based on Internet of things

Publications (2)

Publication Number Publication Date
CN118230294A 2024-06-21
CN118230294B 2024-08-16

Family

ID=91513629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410583433.8A Active CN118230294B (en) 2024-05-11 2024-05-11 Urban road sweeping roadway condition sensing system and method based on Internet of things

Country Status (1)

Country Link
CN (1) CN118230294B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108909624A (en) * 2018-05-13 2018-11-30 西北工业大学 A kind of real-time detection of obstacles and localization method based on monocular vision

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112211145A (en) * 2020-09-07 2021-01-12 徐州威卡电子控制技术有限公司 Semi-automatic road sweeping method and device for road sweeper
CN112417967B (en) * 2020-10-22 2021-12-14 腾讯科技(深圳)有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN113033436B (en) * 2021-03-29 2024-04-16 京东鲲鹏(江苏)科技有限公司 Obstacle recognition model training method and device, electronic equipment and storage medium
CN113486726B (en) * 2021-06-10 2023-08-01 广西大学 Rail transit obstacle detection method based on improved convolutional neural network
US20230417559A1 (en) * 2022-06-24 2023-12-28 Here Global B.V. Method, apparatus, and system for detecting road obstruction intensity for routing or mapping

Also Published As

Publication number Publication date
CN118230294A 2024-06-21

Similar Documents

Publication Publication Date Title
CN109544932B (en) Urban road network flow estimation method based on fusion of taxi GPS data and gate data
CN112462346A (en) Ground penetrating radar roadbed defect target detection method based on convolutional neural network
CN103235933B (en) A kind of vehicle abnormality behavioral value method based on HMM
CN111784017B (en) Road traffic accident number prediction method based on road condition factor regression analysis
CN108550259A (en) Congestion in road judgment method, terminal device and computer readable storage medium
CN106846816B (en) A kind of discretization traffic state judging method based on deep learning
CN110555476B (en) Intelligent vehicle lane change track prediction method suitable for man-machine hybrid driving environment
CN116153078B (en) Road safety assessment method and device based on millimeter wave radar and storage medium
CN111598142B (en) Outdoor terrain classification method for wheeled mobile robot
CN109598930B (en) Automatic detect overhead closed system
CN115662113B (en) Signal intersection man-vehicle game conflict risk assessment and early warning method
CN105117737A (en) Method and apparatus for determining real direction of vehicle on basis of locus vector of vehicle
CN116631186B (en) Expressway traffic accident risk assessment method and system based on dangerous driving event data
CN116168356A (en) Vehicle damage judging method based on computer vision
CN117238126A (en) Traffic accident risk assessment method under continuous flow road scene
CN118230294B (en) Urban road sweeping roadway condition sensing system and method based on Internet of things
CN112560915A (en) Urban expressway traffic state identification method based on machine learning
CN114291081B (en) Vehicle collision detection method based on artificial intelligence algorithm
CN118015839B (en) Expressway road domain risk prediction method and device
CN112580754B (en) Vehicle cleanliness judgment method and device suitable for construction site and storage medium
CN117169716B (en) Motor health diagnosis system based on Markov random field algorithm
CN118031913A (en) Unmanned aerial vehicle survey and drawing data processing device
CN116653980A (en) Driver driving habit analysis system and driving habit analysis method
CN116386020A (en) Method and system for predicting exit flow of highway toll station by multi-source data fusion
CN114140734A (en) Video data-based off-store business analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant