CN109871804A - A method and system of shop people-flow identification and analysis - Google Patents
A method and system of shop people-flow identification and analysis
- Publication number
- CN109871804A (publication) · CN201910123142.XA (application)
- Authority
- CN
- China
- Prior art keywords
- shop
- human body
- identification
- frame
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The present invention relates to a method and system for identifying and analyzing the flow of people in a shop. The method comprises the steps: a server reads a video stream and extracts video frames; a video image processing and identification module, according to a deep neural network target recognition model, marks out rectangular borders tightly enclosing the contours of recognizable objects together with the probability that each recognizable object is a given target; a deep neural network key point identification model identifies the key points corresponding to the human skeleton points; based on the relative positions of the key points and the time intervals between frames, the posture, trajectory and motion state of the target person in each border at each moment, and the attributes along a time axis, are calculated; finally the customer flow identifiable over a given period is counted and presented to the user. The system comprises components for realizing the method. The present invention builds a comprehensive, integrated system combining in-shop video monitoring data with shop operation management, and can provide owners with objective, accurate and valuable monitoring and analysis of in-shop staff and customer flow.
Description
Technical field
The present invention relates to the technical field of computer vision and image recognition, and in particular to a method and system of shop people-flow identification and analysis.
Background technique
Image recognition technology is a concrete application of pattern recognition in the image domain: a computer processes, analyzes, classifies and understands images in order to identify targets and objects of different patterns. Image recognition analyzes an observed image, distinguishes objects and judges their categories, realizing the re-recognition of the image — that is, it uses present-day information processing models and computer programs to simulate and complete the human process of understanding. In the narrow sense, image recognition is the discipline of identifying images with image processing techniques, taking image features as its basis [1].
Image recognition technology has been widely applied to target identification in many fields, and the narrow-sense image and posture recognition technology involved in this patent has leading-edge applications in fields such as robot vision, biomedicine, security protection and autonomous driving [2].
The computer vision and image recognition branch of technology has, in general, kept growing. After a brief technical bottleneck in 2005-2009, it entered a stage of fast growth between 2009 and 2016, during which the number of applicants increased nearly 1.5 times and the number of applications increased 2.3 times; the decline in applicants and applications in 2017 may be due to some 2017 applications not yet having been published. Worldwide there are 43,397 patent applications in related fields, of which 19,856 are domestic Chinese applications. Internationally, however, compared with the PCT application volume of other branches of artificial intelligence, applications in the computer vision and image recognition direction are fewer, and domestically this branch is still little known to the public. In this direction at home, enterprise applicants and research-institution applicants each account for about half, and the top five are the Chinese Academy of Sciences, Oppo, Xiaomi, Baidu and Tencent [3].
Among enterprise applicants, most computer vision and image recognition patents are applied on intelligent terminals. With the upgrading of intelligent terminal hardware, image processing capability keeps improving while image processing demands grow with each passing day, and the innovation investment of Internet companies and intelligent terminal manufacturers in this field also keeps increasing. These computer vision and image recognition patents cover face recognition technology, license plate recognition technology [4], iris recognition technology [5], virtuality-and-reality combination technology [6], and so on. In physical shops, however, applications combining computer vision and image recognition remain very scarce. At present most supermarkets, retail shops and restaurants already have monitoring systems that can observe in-shop conditions from multiple angles. Although these existing monitoring systems hold a large amount of image data, they lack identification and analysis of the people in the shop; a system that can identify and analyze in-shop conditions and put the data to use is needed.
Patent document CN108921072A, published 2018.11.30, discloses a people-flow statistics method, apparatus and system based on a visual sensor. The method comprises: using a visual sensor to collect image data of a specified region at a given frequency; detecting each frame of the image data and, when a portrait is recognized in any frame, assigning an identification number to the recognized portrait; analyzing the validity of the identification number based on the frames consecutive with that frame; and counting the identification numbers judged valid, thereby realizing people-flow statistics for the specified region. The advantage of that invention is that an analysis step is added after portrait recognition, so that each identification number is further validated before being counted, which greatly improves the accuracy of the people-flow statistics. People flow, as a statistical datum, has important commercial value: for example, the number of people entering the shop in different periods and their distribution, combined with the shop's sales data, can yield much valuable information and provide guidance for the effective operation and business growth of the shop.
However, that patent document still belongs to the primary application of using image data in a monitoring system to count people flow. It does not, on the basis of computer vision and image recognition technology, perform deep processing of shop monitoring data to identify and analyze the identities and postures of the people in the flow, so as to better guide shop operation, supervise shop staff, and promptly discover abnormal behaviour of people and objects in the shop.
Bibliography:
[1] Xu Caiyun. A review of image recognition technology [J]. Computer Knowledge and Technology, 2013, 9(10): 2446-2447.
[2] Li Huaxing, Peng Bo. An analysis of the status of Chinese patent applications in image recognition technology [A].
[3] China Patent Protection Association. "Report on In-depth Analysis of Artificial Intelligence Technology Patents".
[4] SMART CITY MANAGEMENT AND SCHEDULING PLATFORM SYSTEM. Patent WO2018161295A1.
[5] A user identity authentication method and mobile terminal. Patent CN108650247A.
[6] VIRTUALITY-AND-REALITY-COMBINED INTERACTIVE METHOD AND SYSTEM FOR MERGING REAL ENVIRONMENT. Patent WO2016150292A1.
Summary of the invention
The first purpose of the present invention is, in view of the shortcomings of the prior art, to provide a method of shop people-flow identification and analysis.
The second purpose of the present invention is to provide a system of shop people-flow identification and analysis.
To realize the first purpose above, the technical solution adopted by the present invention is:
A method of shop people-flow identification and analysis, comprising the following steps:
S1, mounting cameras in the shop and connecting them with a server;
S2, after the camera power supply is switched on, reading the video stream on the server and extracting video frames;
S3, taking each extracted frame as the input of a video image processing and identification module;
S4, the video image processing and identification module, according to a pre-established deep neural network target recognition model, marking out rectangular borders tightly enclosing the contours of recognizable objects, together with the probability that each recognizable object is a given target, the target settings including identity, age and gender;
S6, for the processed borders, identifying one by one, according to a pre-established deep neural network key point identification model, the 17 key points corresponding to the human skeleton points;
S7, based on the relative positions of the key points within a border and the time intervals between frames, calculating the posture, trajectory and motion state of the target person in the border at each moment, and the attributes along a time axis;
S9, counting the identifiable customer flow over a given period, the statistics including customer age-group statistics, in-shop trajectory statistics, in-shop dwell-region statistics and in-shop dwell-time statistics; counting the identified shop assistants, the statistics including clock-in statistics, on-duty duration statistics, work-etiquette statistics and customer-following statistics;
S10, presenting the statistical results both as visualizations and as data sheets, interpreting the shop people-flow situation from multiple dimensions.
As a preference, the method further includes step S5 between steps S4 and S6: deleting borders whose overlap is higher than a set value, deleting borders whose probability is lower than an expected value, and deleting borders whose area is smaller than a recognizable value.
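The three deletions of step S5 can be sketched as a simple box filter. The box format `(x1, y1, x2, y2, prob)` and the concrete thresholds below are illustrative assumptions — the patent only speaks of a "set value" for overlap, an expected probability and a recognizable area:

```python
def filter_boxes(boxes, iou_max=0.7, prob_min=0.5, area_min=400.0):
    """Drop borders per step S5: too much overlap, too low probability, too small."""

    def area(b):
        return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    kept = []
    # Visit high-probability boxes first so overlaps suppress the weaker box.
    for b in sorted(boxes, key=lambda b: b[4], reverse=True):
        if b[4] < prob_min or area(b) < area_min:
            continue
        if all(iou(b, k) <= iou_max for k in kept):
            kept.append(b)
    return kept
```

Ordering by probability before suppressing overlaps is the usual non-maximum-suppression convention; the patent does not prescribe which of two overlapping borders is deleted.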
As another preference, in step S6, in addition to identifying the 17 key points corresponding to the human skeleton points in each border, the probability that each key point is correct is also given.
As another preference, the method further includes step S8 between steps S7 and S9: raising an alarm on calculated abnormal conditions of people and objects.
As another preference, the method for the shop stream of people discriminance analysis includes establishing deep neural network target identification
It is the step of model, specific as follows:
S401, Primary Construction deep neural network Model of Target Recognition
Image is saved as into data file, imports image processing software, manual identification closely surrounds entire human body, entire out
Head, it is entire above the waist, the rectangular shaped rim of the profile of the entire lower part of the body, by the size of above each rectangular shaped rim, each rectangular shaped rim
In coloured typies, shape and the area of each color area and identity, age and the gender of target person in each rectangular shaped rim establish
Linear regression model (LRM);
S402, building training set
Every record need to include: to surround entire human body, entire head, the entire upper part of the body, the entire lower part of the body in training set
The size of the rectangular shaped rim of profile, coloured typies, shape and the area of each color area in each rectangular shaped rim, in each rectangular shaped rim
Identity, age and the gender of target person;
The deep neural network Model of Target Recognition of S403, training Primary Construction
The deep neural network Model of Target Recognition of Primary Construction is trained study with training set, adjusts and joins, is instructed
Deep neural network Model of Target Recognition after white silk;
Deep neural network Model of Target Recognition after S404, verifying training
Deep neural network Model of Target Recognition after training is passed through to the test of associated verification collection, until reaching accuracy
Requirement, obtain final deep neural network key point identification model;Wherein verifying collection every record need to include information with
Training set is identical.
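S402 and S404 require every training and verification record to carry the same fields. A minimal sketch of that record check follows; the field names are hypothetical stand-ins for the quantities listed above, not identifiers from the patent:

```python
# Hypothetical field names for the quantities each S402 record must carry.
REQUIRED_FIELDS = {
    "body_box", "head_box", "upper_box", "lower_box",  # sizes of the four borders
    "color_regions",                                    # colour type, shape, area per region
    "identity", "age", "gender",                        # labels of the target person
}

def validate_records(records, required=frozenset(REQUIRED_FIELDS)):
    """Return indices of records missing any required field (S402/S404 check)."""
    return [i for i, r in enumerate(records) if not required <= set(r)]
```

Running the same check over both the training set and the verification set enforces the "same information as the training set" condition of S404.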
As another preference, the method for the shop stream of people discriminance analysis includes establishing the knowledge of deep neural network key point
It is the step of other model, specific as follows:
S601, Primary Construction deep neural network key point identification model
Image is saved as into data file, imports image processing software, manual identification closely surrounds entire human body, entire out
Head, it is entire above the waist, 17 key points of the rectangular shaped rim of the profile of the entire lower part of the body and skeleton point, will above each square
The size of shape frame, target person in coloured typies, shape and the area of each color area and each rectangular shaped rim in each rectangular shaped rim
Linear regression model (LRM) is established in the position of 17 key points of the skeleton point of object;
S602, building training set
Every record need to include: to surround entire human body, entire head, the entire upper part of the body, the entire lower part of the body in training set
The size of the rectangular shaped rim of profile, coloured typies, shape and the area of each color area in each rectangular shaped rim, in each rectangular shaped rim
The position of 17 key points of the skeleton point of target person;
The deep neural network key point identification model of S603, training Primary Construction
The deep neural network key point identification model of Primary Construction is trained study with training set, adjusts and joins, is obtained
Deep neural network key point identification model after training;
Deep neural network key point identification model after S604, verifying training
Deep neural network key point identification model after training is passed through to the test of associated verification collection, until reaching accurate
The requirement of degree obtains final deep neural network key point identification model;Wherein verifying collection every records the information that need to include
It is identical as training set.
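The patent does not enumerate the 17 skeleton key points; the 17-point convention of the COCO dataset is a common assumption for such models. The helper below names a flat prediction vector, carrying the per-key-point correctness probability of the preferred step S6:

```python
# The COCO 17-keypoint convention — an assumed, common choice for the
# "17 key points of the human skeleton"; the patent itself does not name them.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def name_keypoints(flat):
    """Turn a flat [x0, y0, p0, x1, y1, p1, ...] prediction into a named dict.

    p is the per-key-point correctness probability of the preferred step S6.
    """
    assert len(flat) == 17 * 3, "expected 17 (x, y, prob) triples"
    return {
        name: {"x": flat[i * 3], "y": flat[i * 3 + 1], "prob": flat[i * 3 + 2]}
        for i, name in enumerate(COCO_KEYPOINTS)
    }
```

Naming the points makes the downstream posture rules (relative positions of hips, shoulders, knees and so on) readable.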
To realize the second purpose above, the technical solution adopted by the present invention is:
A system of shop people-flow identification and analysis, comprising:
An in-shop information collecting device: comprising a server, cameras and a display screen; the cameras are connected through a communication carrier to a local or remote server, and the collected data is transmitted back in real time for storage on the server; the server reads the video stream and extracts video frames; the display screen is used to intuitively retrieve and check the video images collected by the cameras;
A video image processing and identification module: used to process the video frames, mark out the rectangular borders tightly enclosing the contours of the whole human body, whole head, whole upper body and whole lower body, and perform image recognition on the size of each border and on the colour types, shapes and areas of the colour regions in each border; the video image processing and identification module further comprises:
a human body target recognition unit, comprising an identity recognition subunit, a gender recognition subunit and an age recognition subunit;
a human body posture recognition unit, comprising a human body key point recognition subunit and a human body posture judgment subunit; the human body key point recognition subunit is used to identify the positions of the 17 key points of the human skeleton; the human body posture judgment subunit judges the human body posture according to the relative positions of the key points identified by the key point recognition subunit and the time intervals between frames;
a comprehensive identification and analysis unit, comprising a behaviour recognition subunit, a motion path recognition subunit and a comprehensive judgment subunit; the behaviour recognition subunit obtains human behaviour attributes by discriminant analysis of the human posture, the relative positions of the key points, and the changes of the key point positions over a given period; the motion path recognition subunit calculates the motion path of the human body from the changes of the key point positions over a period; the comprehensive judgment subunit comprehensively judges the situation of the target person in the shop from the results of the identity recognition, gender recognition, age recognition, behaviour recognition and motion path recognition subunits; the situation of the target person in the shop includes customer age-group statistics, in-shop trajectory statistics, in-shop dwell-region statistics and in-shop dwell-time statistics, and further includes a work summary of the identified shop assistants, the work summary including clock-in statistics, on-duty duration statistics, work-etiquette statistics and customer-following statistics;
An integrated information analysis and display module: used to respond to user instructions, allowing the user, through a visualization interface, to intuitively retrieve and/or query the monitoring, identification and analysis of the shop people flow.
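A toy sketch of how the motion path recognition subunit (432) might derive a trajectory and a moving/still state from per-frame key points. Using the mid-point of the two hip key points as the person's position, and the 0.2 units-per-second speed threshold, are illustrative assumptions, not choices made by the patent:

```python
import math

def motion_path(frames, frame_interval_s=1.0, moving_speed=0.2):
    """Trajectory and per-step motion state from per-frame key-point dicts.

    `frames` holds one dict per extracted frame with at least "left_hip" and
    "right_hip" (x, y) entries; the hip mid-point stands in for the person.
    """
    path = []
    for kp in frames:
        lx, ly = kp["left_hip"]
        rx, ry = kp["right_hip"]
        path.append(((lx + rx) / 2.0, (ly + ry) / 2.0))
    states = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        # Speed between consecutive extracted frames (S7's inter-frame interval).
        speed = math.hypot(x1 - x0, y1 - y0) / frame_interval_s
        states.append("moving" if speed > moving_speed else "still")
    return path, states
```

The same per-frame positions feed the in-shop trajectory and dwell-region statistics of the comprehensive judgment subunit.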
As a preference, the video image processing and identification module further includes a border de-duplication processing unit, used to delete borders whose overlap is higher than a set value, delete borders whose probability is lower than expected, and delete borders whose area is smaller than a recognizable value.
As another preference, the human body key point recognition subunit is also used to give the probability that each key point is correct.
As another preference, the system of shop people-flow identification and analysis further includes an alarm module, used to raise an alarm on calculated abnormal conditions of people and objects.
The invention has the following advantages:
1. The invention increases the depth of current in-shop video monitoring in terms of data analysis, builds a comprehensive and integrated system combining video monitoring data with shop operation management, and provides owners with valuable monitoring and analysis of in-shop staff and customer flow. It is not only at the technical frontier of computer vision and image recognition, but also a completely new attempt in application. Once put into use, the invention can achieve the following technical effects: (1) obtaining and summarizing in real time shop flow information such as in-shop identities, customer numbers and customer genders, analyzing shop traffic from multiple angles, and building multi-dimensional in-shop customer portraits; (2) obtaining in real time the state of in-shop staff, including assistants' arrival times, working hours, work performance and interaction with customers, as well as customers' in-shop experience, dwell times and trajectories, so that assistants' work can be better supervised and in-shop service improved; (3) alarming on abnormal behaviour of people and objects in the shop.
2. The invention marks rectangular borders tightly enclosing the contours of recognizable objects — in particular borders tightly enclosing the whole human body, whole head, whole upper body and whole lower body — and analyzes the size of each border and the colour types, shapes and areas of the colour regions in each border to obtain the identity, age and gender attributes of the target person; the method is simple, feasible and highly accurate.
3. The invention also marks the 17 key points of the human skeleton, and thereby obtains human posture and motion trajectory simply and accurately.
4. In addition to directly obtaining the identity, age and gender attributes of the recognizable object in each border, the invention also marks out their probability values, and likewise gives the probability that each of the 17 skeleton key points is correct, which facilitates correcting low-probability cases in practical operation.
5. The invention also includes the steps of deleting borders whose overlap is higher than a set value, deleting borders whose probability is lower than expected, and deleting borders whose area is smaller than a recognizable value, which can significantly improve the accuracy of the results and avoid mistakes caused by recognition error.
Description of the drawings
Figure 1 is a flow chart of a method of shop people-flow identification and analysis of the present invention.
Figure 2 is a structural block diagram of a system of shop people-flow identification and analysis of the present invention.
Figure 3 is a structural block diagram of another system of shop people-flow identification and analysis of the present invention.
Specific embodiments
The specific embodiments provided by the present invention are elaborated below with reference to the drawings.
The reference numerals and components involved in the drawings are as follows:
1. server; 2. camera;
3. display screen; 4. video image processing identification module;
41. human body target recognition unit; 411. identity recognition subunit;
412. gender recognition subunit; 413. age recognition subunit;
42. human body posture recognition unit; 421. human body key point recognition subunit;
422. human body posture judgment subunit; 43. comprehensive identification and analysis unit;
431. behaviour recognition subunit; 432. motion path recognition subunit;
433. comprehensive judgment subunit; 5. integrated information analysis display module;
44. border de-duplication processing unit; 6. alarm module.
Embodiment 1: a method of shop people-flow identification and analysis of the present invention
This embodiment provides a method of shop people-flow identification and analysis; Figure 1 shows its flow chart. The method comprises the following steps:
S1, mounting cameras at fixed, unobstructed positions in the shop, then connecting them with the server by cable.
S2, after the camera power supply is switched on, reading on the server the video frames extracted from the video stream, reading part or all of the frames in a preset manner. The minimum extraction frame rate may be one frame per second.
S3, taking each extracted frame as the input of the video image processing and identification module.
S4, the video image processing and identification module, according to the pre-established deep neural network target recognition model, marks out rectangular borders tightly enclosing the contours of recognizable objects, together with the probability that each recognizable object is a given target. The target settings include identity, age and gender.
S5, deleting borders whose overlap is higher than a set value, deleting borders whose probability is lower than expected, and deleting borders whose area is smaller than a recognizable value.
S6, for the processed borders, identifying one by one, according to the pre-established deep neural network key point identification model, the 17 key points corresponding to the human skeleton points and the probability that each key point is correct.
S7, based on the relative positions of the key points within a border and the time intervals between frames, calculating the posture, trajectory and motion state of the target person in the border at each moment, and the attributes along a time axis.
S8, alarming on calculated abnormal conditions of people and objects.
S9, counting the identifiable customer flow over a given period; the statistics include but are not limited to customer age-group statistics, in-shop trajectory statistics, in-shop dwell-region statistics and in-shop dwell-time statistics. A work summary is made of the identified shop assistants; the statistics include but are not limited to clock-in statistics, on-duty duration statistics, work-etiquette statistics and customer-following statistics.
S10, presenting all the statistical results both as visualizations and as data sheets, interpreting the shop people-flow situation from multiple dimensions.
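The dwell-time part of the S9 statistics can be sketched as follows, assuming one (person, region) observation per extracted frame; the region names and the per-frame interval are illustrative assumptions:

```python
from collections import defaultdict

def dwell_statistics(track, frame_interval_s=1.0):
    """In-shop dwell-region and dwell-time statistics of step S9.

    `track` is a list of (person_id, region) observations, one per extracted
    frame. Returns seconds spent per (person, region) pair, assuming each
    observation stands for one frame interval.
    """
    dwell = defaultdict(float)
    for person_id, region in track:
        dwell[(person_id, region)] += frame_interval_s
    return dict(dwell)
```

Summing a person's entries over all regions gives their total in-shop dwell time; grouping by region gives the dwell-region statistics.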
Further, the method for the shop stream of people discriminance analysis further includes establishing deep neural network Model of Target Recognition
The step of, it is specific as follows:
S401, Primary Construction deep neural network Model of Target Recognition
Image is saved as into data file, imports image processing software, manual identification closely surrounds entire human body, entire out
Head, it is entire above the waist, the rectangular shaped rim of the profile of the entire lower part of the body, by the size of above each rectangular shaped rim, each rectangular shaped rim
In coloured typies, shape and the area of each color area and identity, age and the gender of target person in each rectangular shaped rim establish
Linear regression model (LRM).
S402, building training set
Every record need to include: to surround entire human body, entire head, the entire upper part of the body, the entire lower part of the body in training set
The size of the rectangular shaped rim of profile, coloured typies, shape and the area of each color area in each rectangular shaped rim, in each rectangular shaped rim
Identity, age and the gender of target person.Image may be from the volunteer recruited in training set, convenient for obtaining its true body
Part, age and gender information.
The deep neural network Model of Target Recognition of S403, training Primary Construction
The deep neural network Model of Target Recognition of Primary Construction is trained study with training set, adjusts and joins, is instructed
Deep neural network Model of Target Recognition after white silk.
Deep neural network Model of Target Recognition after S404, verifying training
The test that deep neural network Model of Target Recognition after training is finally passed through to multiple associated verification collection, until reaching
To the requirement of accuracy, final deep neural network key point identification model is obtained.Wherein every record of verifying collection need to include
Information it is identical as training set.
Further, the method for the shop stream of people discriminance analysis further includes establishing deep neural network key point identification mould
It is the step of type, specific as follows:
S601. Preliminarily constructing the deep neural network key point identification model
The images are saved as data files and imported into image processing software. The rectangular frames closely enclosing the entire human body, the entire head, the entire upper body, and the entire lower body, together with the 17 key points of the human skeleton, are identified manually. A linear regression model is then established over the dimensions of each rectangular frame, the color type, shape, and area of each color region within each frame, and the positions of the 17 skeleton key points of the person in each frame.
S602. Building the training set
Each record in the training set must include: the dimensions of the rectangular frames enclosing the entire human body, the entire head, the entire upper body, and the entire lower body; the color type, shape, and area of each color region within each frame; and the positions of the 17 skeleton key points of the target person in each frame. The 17 skeleton key points of every training-set record are annotated by authoritative physicians.
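The patent does not name the 17 key points. One widely used convention with exactly 17 skeleton points is the COCO keypoint set; using it here is an assumption, sketched as:

```python
# The 17 COCO human keypoints: a common convention for skeletons with
# exactly 17 points (assumed here; the patent does not specify which 17).
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def make_keypoint_record(positions):
    """Pair each keypoint name with its (x, y) pixel position."""
    assert len(positions) == len(COCO_KEYPOINTS)
    return dict(zip(COCO_KEYPOINTS, positions))
```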
S603. Training the preliminarily constructed deep neural network key point identification model
The preliminarily constructed deep neural network key point identification model is trained on the training set and its parameters are tuned, yielding the trained deep neural network key point identification model.
S604. Verifying the trained deep neural network key point identification model
The trained deep neural network key point identification model is tested against multiple associated verification sets until the accuracy requirement is met, yielding the final deep neural network key point identification model. Each record in a verification set must include the same information as a training-set record.
It should be noted that the present invention fills the gap left by current in-store video monitoring in the area of data analysis, deepens the scope of monitoring, and provides store owners with valuable analysis of in-store personnel and customer flow. It is not only at the technical frontier of computer vision and image recognition, but also a completely new attempt in application. Once put into use, the present invention can achieve the following technical effects. First, it obtains and aggregates in real time store traffic information such as visitor identity, visitor count, and visitor gender, analyzes the store's traffic from multiple angles, and can build multi-dimensional in-store customer profiles. Second, it obtains in real time the status of in-store personnel, including clerks' arrival times, working hours, work performance, and interactions with customers, as well as customers' in-store experience, dwell time, and movement tracks, so that clerks' work can be better supervised and improvements to in-store service can be better informed. Third, it raises alarms on abnormal behavior of people and objects in the store.
The present invention identifies rectangular frames that closely enclose the outline of each recognizable object, in particular frames that closely enclose the entire human body, the entire head, the entire upper body, and the entire lower body. It analyzes the dimensions of each frame and the color type, shape, and area of each color region within each frame, and thereby obtains the identity, age, and gender attributes of the target person; the method is simple, feasible, and highly accurate. By marking the 17 key points of the human skeleton, the present invention can further obtain human posture and movement tracks simply and accurately. In addition, besides directly deriving the identity, age, and gender attributes of the recognizable object in each rectangular frame, the present invention also reports the probability of each identification, and likewise provides a probability of correctness when identifying the 17 skeleton key points, which helps correct low-probability cases in actual operation. The present invention further includes the steps of deleting frames whose overlap exceeds a set value, deleting frames whose probability is lower than expected, and deleting frames smaller than a recognizable size; these steps significantly improve the accuracy of the results and avoid mistakes caused by identification errors.
Embodiment 2: Another method for store foot traffic recognition and analysis of the present invention
This embodiment provides a method for store foot traffic recognition and analysis, comprising the following steps:
S1. A camera is mounted at a fixed, unobstructed position in the store and connected to a server by cable.
Further, the camera is a single-channel monitoring camera with 1080P two-megapixel resolution, a frame rate of 25 FPS, an 8 mm lens, a 38° monitoring angle, and a monitoring distance of 20-30 meters.
S2. After the camera is powered on, the video stream is read on the server and video frames are extracted; some or all frames are read in a preset manner. The minimum extraction rate is one frame per second.
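The sampling rule in S2 (at least one frame per second from a 25 FPS stream) can be sketched as an index-selection function; the integer-stride logic below is an assumption about how "a preset manner" might be implemented:

```python
def sample_frame_indices(total_frames, stream_fps=25, target_fps=1):
    """Return the indices of frames to extract so that at least
    `target_fps` frames are processed per second of video."""
    stride = max(1, int(stream_fps // target_fps))  # 25 when sampling 1 fps
    return list(range(0, total_frames, stride))

# Example: one minute of 25 FPS video sampled at one frame per second.
indices = sample_frame_indices(total_frames=1500)
```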
S3. Each extracted frame is used as input to the video image processing and identification module.
S4. The video image processing and identification module identifies, according to the pre-established deep neural network target recognition model, the rectangular frames closely enclosing the outlines of recognizable objects and the probability that each recognizable object is a given set target. The set targets include identity, age, and gender.
S5. Frames whose overlap exceeds a set value are deleted, frames whose probability is lower than expected are deleted, and frames whose area is smaller than a recognizable value are deleted.
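The three deletion rules of S5 resemble a non-maximum-suppression pass combined with confidence and size thresholds. A plain-Python sketch, with threshold values that are assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def filter_boxes(detections, min_prob=0.5, min_area=400, max_overlap=0.7):
    """detections: list of (box, prob). Drop low-probability and
    too-small boxes; among heavily overlapping boxes keep only the
    most probable one (an NMS-style rule with assumed thresholds)."""
    kept = []
    for box, prob in sorted(detections, key=lambda d: -d[1]):
        if prob < min_prob:
            continue  # probability lower than expected
        if (box[2] - box[0]) * (box[3] - box[1]) < min_area:
            continue  # area below the recognizable value
        if any(iou(box, k) > max_overlap for k, _ in kept):
            continue  # overlap above the set value
        kept.append((box, prob))
    return kept
```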
S6. The processed frames are identified one by one according to the pre-established deep neural network key point identification model, yielding the 17 corresponding skeleton key points and the probability that each key point is correct.
S7. Based on the relative positions of the key points within a frame and the time interval sequence between frames, the posture, track, and motion state of the target person in the frame at each moment, together with attributes over a time axis, are calculated.
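S7's computation can be illustrated by deriving a track and a simple motion state from per-frame key points. This is a sketch only: using the hip midpoint as the person's position, and the speed threshold, are both assumptions:

```python
def hip_midpoint(keypoints):
    """Position of a person in a frame, taken as the midpoint of the
    left and right hip key points (an assumed convention)."""
    (lx, ly), (rx, ry) = keypoints["left_hip"], keypoints["right_hip"]
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)

def track_and_state(frames, dt=1.0, still_thresh=5.0):
    """frames: list of key-point dicts, one per sampled frame, dt seconds
    apart. Returns the track (list of positions) and a per-interval
    motion state ('still' or 'moving') from speed in pixels/second."""
    track = [hip_midpoint(kp) for kp in frames]
    states = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        states.append("moving" if speed > still_thresh else "still")
    return track, states
```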
S8. Alarms are raised for calculated abnormal conditions of people and objects.
S9. Visitors recognizable within a given time period are aggregated and counted; the statistics include but are not limited to customer age-bracket statistics, in-store track statistics, in-store dwell-region statistics, and in-store dwell-time statistics. Work summaries are produced for identified clerks; the statistics include but are not limited to clock-in statistics, on-duty duration statistics, work-etiquette statistics, and customer-following statistics.
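The S9 aggregation can be sketched as plain dictionary counting over per-visitor records; the field names and age brackets below are illustrative assumptions:

```python
from collections import Counter

# Assumed age brackets; the patent only gives "0-5, 6-10, ..." as examples.
AGE_BRACKETS = [(0, 5), (6, 10), (11, 17), (18, 30),
                (31, 45), (46, 60), (61, 120)]

def bracket_of(age):
    for lo, hi in AGE_BRACKETS:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "unknown"

def summarize(visitors):
    """visitors: list of dicts with 'age' and 'dwell_seconds' keys.
    Returns age-bracket counts and the average dwell time."""
    brackets = Counter(bracket_of(v["age"]) for v in visitors)
    avg_dwell = sum(v["dwell_seconds"] for v in visitors) / len(visitors)
    return brackets, avg_dwell
```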
S10. All statistical results are presented both as visualizations and as data reports, interpreting the store's foot traffic from multiple dimensions.
Further, the display screen used to show the video images captured by the camera and to present the visualization interface is a 23-inch IPS display with a resolution of 1920*1080, which can clearly show the camera's images and the visualization interface.
Further, the method for store foot traffic recognition and analysis also includes the step of establishing the deep neural network target recognition model, as follows:
S401. Preliminarily constructing the deep neural network target recognition model
The images are saved as data files and imported into image processing software. The rectangular frames closely enclosing the entire human body, the entire head, the entire upper body, and the entire lower body are identified manually. A linear regression model is then established over the dimensions of each rectangular frame, the color type, shape, and area of each color region within each frame, and the identity, age, and gender of the target person in each frame.
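S401's linear regression over frame dimensions and color features can be illustrated with a tiny ordinary-least-squares fit. This is a one-feature pure-Python sketch; the patent's model would span many features, and the example mapping (head-frame height to age) is an assumption:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature, e.g.
    x = head-frame height in pixels, y = age (illustrative only)."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Fit on annotated records, then predict for a new frame.
a, b = fit_line([10, 20, 30], [15, 25, 35])
predicted = a * 40 + b
```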
S402. Building the training set
Each record in the training set must include: the dimensions of the rectangular frames enclosing the entire human body, the entire head, the entire upper body, and the entire lower body; the color type, shape, and area of each color region within each frame; and the identity, age, and gender of the target person in each frame. The training-set images may come from recruited volunteers, which makes it easy to obtain their true identity, age, and gender.
S403. Training the preliminarily constructed deep neural network target recognition model
The preliminarily constructed deep neural network target recognition model is trained on the training set and its parameters are tuned, yielding the trained deep neural network target recognition model.
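The "train and tune parameters" step of S403 can be sketched as a generic gradient-descent loop over a loss function. This is a schematic stand-in for real deep-network training (which would use a framework), with an assumed toy quadratic loss:

```python
def train(params, grad_fn, lr=0.1, epochs=100):
    """Minimal parameter-tuning loop: repeatedly step each parameter
    against its gradient. Illustrates only the loop shape, not the
    patent's actual network or optimizer."""
    for _ in range(epochs):
        grads = grad_fn(params)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# Toy loss L(w) = (w0 - 3)^2 + (w1 + 1)^2 with its analytic gradient.
grad = lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)]
w = train([0.0, 0.0], grad)  # converges toward [3, -1]
```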
S404. Verifying the trained deep neural network target recognition model
The trained deep neural network target recognition model is tested against multiple associated verification sets until the accuracy requirement is met, yielding the final deep neural network target recognition model. Each record in a verification set must include the same information as a training-set record.
Further, the method for store foot traffic recognition and analysis also includes the step of establishing the deep neural network key point identification model, as follows:
S601. Preliminarily constructing the deep neural network key point identification model
The images are saved as data files and imported into image processing software. The rectangular frames closely enclosing the entire human body, the entire head, the entire upper body, and the entire lower body, together with the 17 key points of the human skeleton, are identified manually. A linear regression model is then established over the dimensions of each rectangular frame, the color type, shape, and area of each color region within each frame, and the positions of the 17 skeleton key points of the person in each frame.
S602. Building the training set
Each record in the training set must include: the dimensions of the rectangular frames enclosing the entire human body, the entire head, the entire upper body, and the entire lower body; the color type, shape, and area of each color region within each frame; and the positions of the 17 skeleton key points of the target person in each frame. The 17 skeleton key points of every training-set record are annotated by authoritative physicians.
S603. Training the preliminarily constructed deep neural network key point identification model
The preliminarily constructed deep neural network key point identification model is trained on the training set and its parameters are tuned, yielding the trained deep neural network key point identification model.
S604. Verifying the trained deep neural network key point identification model
The trained deep neural network key point identification model is tested against multiple associated verification sets until the accuracy requirement is met, yielding the final deep neural network key point identification model. Each record in a verification set must include the same information as a training-set record.
Embodiment 3: A system for store foot traffic recognition and analysis of the present invention
This embodiment provides a system for store foot traffic recognition and analysis. Referring to Figure 2, a structural block diagram of a system for store foot traffic recognition and analysis of the present invention, the system comprises:
1. An in-store information collection device, comprising a server 1, a camera 2, and a display screen 3. The camera 2 is connected to the local or remote server 1 via a communications carrier such as Wi-Fi, a USB data cable, or a wired cable, and transmits the collected data in real time back to the server 1 for storage. The server 1 reads the video stream, extracts video frames, and reads some or all frames in a preset manner. The display screen 3 is used to conveniently retrieve and view the video images captured by the camera 2.
2. A video image processing and identification module 4: after the server 1 receives the data transmitted by the camera 2 and extracts video frames, the frames are processed by the video image processing and identification module 4. By reading the camera 2's image data, the module marks the rectangular frames closely enclosing the entire human body, the entire head, the entire upper body, and the entire lower body, and performs image recognition on the dimensions of each frame and the color type, shape, and area of each color region within each frame. The video image processing and identification module 4 further comprises:
A human target recognition unit 41, comprising an identity recognition subunit 411, a gender recognition subunit 412, and an age recognition subunit 413. The identity recognition subunit 411 identifies, from the image information, whether a person is a customer or a clerk; the gender recognition subunit 412 identifies, from the image information, whether a person is male or female; the age recognition subunit 413 identifies a person's age from the image information. The identified age may be a specific value, such as 30 or 45, or an age bracket, such as 0-5, 6-10, and so on; identifying ages as brackets helps improve the accuracy of customer age-bracket statistics.
Further, the video image processing and identification module 4 also includes a human posture recognition unit 42, comprising a human key point identification subunit 421 and a human posture judgment subunit 422. The human key point identification subunit 421 identifies the positions of the 17 key points of the human skeleton; the human posture judgment subunit 422 judges the human posture from the relative positions of the key points identified by subunit 421 and the time interval sequence between frames.
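As an illustration of how relative key-point positions might yield a posture judgment, here is a crude standing-versus-sitting heuristic; the rule and threshold are assumptions for illustration, not the patent's method:

```python
def posture(keypoints):
    """Crude standing-vs-sitting heuristic from hip and knee heights.
    Image y grows downward; when sitting, the hips drop toward knee
    level. The 0.8 threshold is an assumed value."""
    hip_y = (keypoints["left_hip"][1] + keypoints["right_hip"][1]) / 2.0
    knee_y = (keypoints["left_knee"][1] + keypoints["right_knee"][1]) / 2.0
    ankle_y = (keypoints["left_ankle"][1] + keypoints["right_ankle"][1]) / 2.0
    # Standing: hips sit well above the knees relative to the shin span.
    if (knee_y - hip_y) > 0.8 * (ankle_y - knee_y):
        return "standing"
    return "sitting"
```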
Further, the video image processing and identification module 4 also includes a comprehensive recognition and analysis unit 43, which builds on the human target recognition unit 41 and the human posture recognition unit 42 and specifically comprises a behavior recognition subunit 431, a motion path identification subunit 432, and a comprehensive judgment subunit 433. The behavior recognition subunit 431 analyzes the human posture, the relative positions of the key points, and the changes in key point positions over a period of time to determine the behavior attributes of the target person; the motion path identification subunit 432 calculates the motion path of the target person from the changes in key point positions over a period of time; the comprehensive judgment subunit 433 combines the results of the identity recognition subunit 411, the gender recognition subunit 412, the age recognition subunit 413, the behavior recognition subunit 431, and the motion path identification subunit 432 to judge the target person's situation in the store, including but not limited to customer age-bracket statistics, in-store track statistics, in-store dwell-region statistics, and in-store dwell-time statistics, as well as statistics on identified clerks, including but not limited to clock-in statistics, on-duty duration statistics, work-etiquette statistics, and customer-following statistics.
3. An integrated information analysis and display module 5: responds to user instructions and allows the user, through a visualization interface, to conveniently retrieve and/or query the monitoring, identification, and analysis of the store's foot traffic.
Embodiment 4: Another system for store foot traffic recognition and analysis of the present invention
This embodiment provides a system for store foot traffic recognition and analysis, comprising:
1. An in-store information collection device, comprising a server, a camera, and a display screen. The camera is connected to the local or remote server via a communications carrier such as Wi-Fi, a USB data cable, or a wired cable, and transmits the collected data in real time back to the server for storage. The server reads the video stream, extracts video frames, and reads some or all frames in a preset manner. The display screen is used to conveniently retrieve and view the video images captured by the camera.
2. A video image processing and identification module: after the server receives the data transmitted by the camera and extracts video frames, the frames are processed by the video image processing and identification module. By reading the camera's image data, the module identifies, according to the pre-established deep neural network target recognition model, the rectangular frames closely enclosing the entire human body, the entire head, the entire upper body, and the entire lower body, and performs image recognition on the dimensions of each frame and the color type, shape, and area of each color region within each frame. The video image processing and identification module further comprises:
A human target recognition unit, comprising an identity recognition subunit, a gender recognition subunit, and an age recognition subunit. The identity recognition subunit identifies, from the image information, whether a person is a customer or a clerk; the gender recognition subunit identifies, from the image information, whether a person is male or female; the age recognition subunit identifies a person's age from the image information. The identified age may be a specific value, such as 30 or 45, or an age bracket, such as 0-5, 6-10, and so on; identifying ages as brackets helps improve the accuracy of customer age-bracket statistics.
Further, the video image processing and identification module also includes a human posture recognition unit, comprising a human key point identification subunit and a human posture judgment subunit. The human key point identification subunit identifies the positions of the 17 key points of the human skeleton according to the pre-established deep neural network key point identification model; the human posture judgment subunit judges the human posture from the relative positions of the identified key points and the time interval sequence between frames.
Further, the video image processing and identification module also includes a comprehensive recognition and analysis unit, which builds on the human target recognition unit and the human posture recognition unit and specifically comprises a behavior recognition subunit, a motion path identification subunit, and a comprehensive judgment subunit. The behavior recognition subunit analyzes the human posture, the relative positions of the key points, and the changes in key point positions over a period of time to determine the behavior attributes of the target person; the motion path identification subunit calculates the motion path of the target person from the changes in key point positions over a period of time; the comprehensive judgment subunit combines the results of the identity recognition subunit, the gender recognition subunit, the age recognition subunit, the behavior recognition subunit, and the motion path identification subunit to judge the target person's situation in the store, including but not limited to customer age-bracket statistics, in-store track statistics, in-store dwell-region statistics, and in-store dwell-time statistics, as well as statistics on identified clerks, including but not limited to clock-in statistics, on-duty duration statistics, work-etiquette statistics, and customer-following statistics.
3. An integrated information analysis and display module: responds to user instructions and allows the user, through a visualization interface, to conveniently retrieve and/or query the monitoring, identification, and analysis of the store's foot traffic.
Embodiment 5: Another system for store foot traffic recognition and analysis of the present invention
This embodiment provides a system for store foot traffic recognition and analysis. Referring to Figure 3, a structural block diagram of another system for store foot traffic recognition and analysis of the present invention, the system comprises:
1. An in-store information collection device, comprising a server 1, a camera 2, and a display screen 3. The camera 2 is connected to the local or remote server 1 via a communications carrier such as Wi-Fi, a USB data cable, or a wired cable, and transmits the collected data in real time back to the server 1 for storage. The server 1 reads the video stream, extracts video frames, and reads some or all frames in a preset manner. The display screen 3 is used to conveniently retrieve and view the video images captured by the camera 2.
2. A video image processing and identification module 4: after the server 1 receives the data transmitted by the camera 2 and extracts video frames, the frames are processed by the video image processing and identification module 4. By reading the camera 2's image data, the module marks the rectangular frames closely enclosing the entire human body, the entire head, the entire upper body, and the entire lower body, and performs image recognition on the dimensions of each frame and the color type, shape, and area of each color region within each frame. The video image processing and identification module 4 further comprises:
A human target recognition unit 41, comprising an identity recognition subunit 411, a gender recognition subunit 412, and an age recognition subunit 413. The identity recognition subunit 411 identifies, from the image information, whether a person is a customer or a clerk; the gender recognition subunit 412 identifies, from the image information, whether a person is male or female; the age recognition subunit 413 identifies a person's age from the image information. The identified age may be a specific value, such as 30 or 45, or an age bracket, such as 0-5, 6-10, and so on; identifying ages as brackets helps improve the accuracy of customer age-bracket statistics.
Further, the video image processing and identification module 4 also includes a human posture recognition unit 42, comprising a human key point identification subunit 421 and a human posture judgment subunit 422. The human key point identification subunit 421 identifies the positions of the 17 key points of the human skeleton; the human posture judgment subunit 422 judges the human posture from the relative positions of the key points identified by subunit 421 and the time interval sequence between frames.
Further, the video image processing and identification module 4 also includes a frame overlap processing unit 44, which deletes frames whose overlap exceeds a set value, deletes frames whose probability is lower than expected, and deletes frames whose area is smaller than a recognizable value.
Further, the video image processing and identification module 4 also includes a comprehensive recognition and analysis unit 43, which builds on the human target recognition unit 41, the human posture recognition unit 42, and the frame overlap processing unit 44, and specifically comprises a behavior recognition subunit 431, a motion path identification subunit 432, and a comprehensive judgment subunit 433. The behavior recognition subunit 431 analyzes the human posture, the relative positions of the key points, and the changes in key point positions over a period of time to determine the behavior attributes of the target person; the motion path identification subunit 432 calculates the motion path of the target person from the changes in key point positions over a period of time; the comprehensive judgment subunit 433 combines the results of the identity recognition subunit 411, the gender recognition subunit 412, the age recognition subunit 413, the behavior recognition subunit 431, and the motion path identification subunit 432 to judge the target person's situation in the store, including but not limited to customer age-bracket statistics, in-store track statistics, in-store dwell-region statistics, and in-store dwell-time statistics, as well as statistics on identified clerks, including but not limited to clock-in statistics, on-duty duration statistics, work-etiquette statistics, and customer-following statistics.
3. An integrated information analysis and display module 5: responds to user instructions and allows the user, through a visualization interface, to conveniently retrieve and/or query the monitoring, identification, and analysis of the store's foot traffic.
4. An alarm module 6, for alarming on calculated abnormal conditions of people and objects.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make further improvements and additions without departing from the method of the present invention, and such improvements and additions shall also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A method for store foot traffic recognition and analysis, characterized by comprising the following steps:
S1. mounting a camera in a store and connecting it to a server;
S2. after the camera is powered on, reading the video stream on the server and extracting video frames;
S3. using each extracted frame as input to a video image processing and identification module;
S4. identifying, by the video image processing and identification module according to a pre-established deep neural network target recognition model, the rectangular frames closely enclosing the outlines of recognizable objects and the probability that each recognizable object is a given set target, the set targets including identity, age, and gender;
S6. identifying the processed frames one by one according to a pre-established deep neural network key point identification model, yielding the 17 corresponding skeleton key points;
S7. calculating, based on the relative positions of the key points within a frame and the time interval sequence between frames, the posture, track, and motion state of the target person in the frame at each moment, together with attributes over a time axis;
S9. aggregating and counting visitors recognizable within a given time period, the statistics including customer age-bracket statistics, in-store track statistics, in-store dwell-region statistics, and in-store dwell-time statistics; and counting identified clerks, the statistics including clock-in statistics, on-duty duration statistics, work-etiquette statistics, and customer-following statistics;
S10. presenting the statistical results both as visualizations and as data reports, interpreting the store's foot traffic from multiple dimensions.
2. the method for shop stream of people discriminance analysis according to claim 1, which is characterized in that also wrapped between step S4 and S6
Include step S5, the frame that is higher than setting value to intersection is deleted, and is deleted lower than the frame of desired value probability, right
The frame that area is less than distinguished value is deleted.
3. the method for shop stream of people discriminance analysis according to claim 1, which is characterized in that in step S6, in addition to identification
17 key points for corresponding to skeleton point in frame out, giving each key point may correct probability.
4. the method for shop stream of people discriminance analysis according to claim 1, which is characterized in that also wrapped between step S7 and S9
It includes step S8, alarm calculated people and object abnormal conditions.
5. the method for shop stream of people discriminance analysis according to claim 1, which is characterized in that the shop stream of people identification point
The method of analysis includes the steps that establishing deep neural network Model of Target Recognition, specific as follows:
S401, Primary Construction deep neural network Model of Target Recognition
Image is saved as into data file, imports image processing software, manual identification closely surrounds entire human body, entire head out
Portion, it is entire above the waist, the rectangular shaped rim of the profile of the entire lower part of the body, by the size of above each rectangular shaped rim, in each rectangular shaped rim
Identity, age and the gender of target person establish line in coloured typies, shape and the area of each color area and each rectangular shaped rim
Property regression model;
S402, building training set
Every record need to include: the profile for surrounding entire human body, entire head, the entire upper part of the body, the entire lower part of the body in training set
Rectangular shaped rim size, coloured typies, shape and the area of each color area in each rectangular shaped rim, target in each rectangular shaped rim
Identity of personage, age and gender;
The deep neural network Model of Target Recognition of S403, training Primary Construction
The deep neural network Model of Target Recognition of Primary Construction is trained study with training set, adjusts and joins, after being trained
Deep neural network Model of Target Recognition;
Deep neural network Model of Target Recognition after S404, verifying training
Deep neural network Model of Target Recognition after training is passed through to the test of associated verification collection, until reaching wanting for accuracy
It asks, obtains final deep neural network key point identification model;Wherein verifying collection every records the information and training that need to include
Collect identical.
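The linear regression of S401 can be sketched with ordinary least squares. The feature layout and every numeric value below are illustrative assumptions for the age target only; the patent does not specify them.

```python
import numpy as np

# Hypothetical S402-style records: one feature vector per annotated image --
# [body_w, body_h, head_w, head_h, upper_w, upper_h, lower_w, lower_h, color_area]
X = np.array([
    [0.30, 0.90, 0.10, 0.12, 0.28, 0.40, 0.26, 0.45, 0.05],
    [0.25, 0.80, 0.09, 0.10, 0.24, 0.35, 0.22, 0.40, 0.08],
    [0.35, 1.00, 0.11, 0.13, 0.32, 0.45, 0.30, 0.50, 0.03],
    [0.28, 0.85, 0.10, 0.11, 0.26, 0.38, 0.24, 0.42, 0.06],
])
y_age = np.array([34.0, 28.0, 45.0, 31.0])  # annotated ages (illustrative)

# Fit the regression by ordinary least squares, with an intercept column.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y_age, rcond=None)

def predict_age(features):
    """Predict an age from one feature vector using the fitted coefficients."""
    return float(np.append(features, 1.0) @ coef)
```

Identity and gender would need separate (classification-style) targets; only the regression structure is shown here.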
6. the method for shop stream of people discriminance analysis according to claim 1, which is characterized in that the shop stream of people identification point
The method of analysis includes the steps that establishing deep neural network key point identification model, specific as follows:
S601, Primary Construction deep neural network key point identification model
Image is saved as into data file, imports image processing software, manual identification closely surrounds entire human body, entire head out
Portion, it is entire above the waist, 17 key points of the rectangular shaped rim of the profile of the entire lower part of the body and skeleton point, will above each rectangle
The size of frame, target person in coloured typies, shape and the area of each color area and each rectangular shaped rim in each rectangular shaped rim
The position of 17 key points of skeleton point establish linear regression model (LRM);
S602, building training set
Every record need to include: the profile for surrounding entire human body, entire head, the entire upper part of the body, the entire lower part of the body in training set
Rectangular shaped rim size, coloured typies, shape and the area of each color area in each rectangular shaped rim, target in each rectangular shaped rim
The position of 17 key points of the skeleton point of personage;
The deep neural network key point identification model of S603, training Primary Construction
The deep neural network key point identification model of Primary Construction is trained study with training set, adjusts and joins, is trained
Deep neural network key point identification model afterwards;
Deep neural network key point identification model after S604, verifying training
Deep neural network key point identification model after training is passed through to the test of associated verification collection, until reaching accuracy
It is required that obtaining final deep neural network key point identification model;Wherein verifying collection every records the information and instruction that need to include
It is identical to practice collection.
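The 17 skeleton key points of S601/S602 match in number the widely used COCO convention; that naming is an assumption here, since the patent only states the count. An S602 training record might then be structured as:

```python
# Assumed names for the 17 skeleton key points (COCO order).
KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def make_record(frame_sizes, color_regions, keypoints_xy):
    """Build one S602 training record: the four enclosing frame sizes,
    the color regions of each frame, and the 17 (x, y) key-point positions."""
    assert set(frame_sizes) == {"body", "head", "upper_body", "lower_body"}
    assert len(keypoints_xy) == len(KEYPOINT_NAMES)
    return {
        "frames": frame_sizes,                    # e.g. {"body": (w, h), ...}
        "colors": color_regions,                  # per-frame color type/shape/area
        "keypoints": dict(zip(KEYPOINT_NAMES, keypoints_xy)),
    }
```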
7. A system of shop people-flow recognition and analysis, characterized by comprising:
an in-shop information collection device, including a server, cameras, and a display screen; the cameras are connected through a communication carrier to the local or remote server and transmit the collected data back to the server for storage in real time; the server reads the video stream and extracts video frames; the display screen is used to conveniently retrieve and view the video images collected by the cameras;
a video image processing and recognition module, for processing the video frames, identifying the rectangular frames that closely enclose the contours of the entire human body, the entire head, the entire upper body, and the entire lower body, and performing image recognition on the size of each of the above rectangular frames and on the color type, shape, and area of each color region within each frame; the video image processing and recognition module further includes:
a human-body target recognition unit, which includes an identity recognition subunit, a gender recognition subunit, and an age recognition subunit;
a human-body posture recognition unit, which includes a human-body key-point recognition subunit and a human-body posture judgment subunit; the key-point recognition subunit identifies the positions of the 17 skeleton key points; the posture judgment subunit judges the human-body posture from the relative positions of the key points identified by the key-point recognition subunit and the time-interval relation between frames;
a comprehensive recognition and analysis unit, which includes a behavior recognition subunit, a motion-path recognition subunit, and a comprehensive judgment subunit; the behavior recognition subunit analyzes and obtains the human behavior attribute from the human-body posture, the relative positions of the human-body key points, and the position changes of the key points within a period of time; the motion-path recognition subunit computes the motion path of the human body from the position changes of the key points within a period of time; the comprehensive judgment subunit combines the results of the identity recognition, gender recognition, age recognition, behavior recognition, and motion-path recognition subunits to obtain the situation of each target person in the shop; for customers this includes age statistics, in-shop track statistics, in-shop dwell-region statistics, and in-shop residence-time statistics; for identified shop assistants it further includes a work summary covering clock-in statistics, on-duty duration statistics, work-etiquette statistics, and customer follow-up statistics;
a comprehensive information analysis and display module, for responding to user instructions and allowing the user, through a visualization interface, to conveniently retrieve and/or query the monitoring, recognition, and analysis of the shop people flow.
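The motion-path recognition subunit's computation can be sketched as follows. Using the per-frame centroid of the 17 key points as the person's position is an assumption, since the claim does not fix the exact construction.

```python
import math

def motion_path(keypoints_per_frame):
    """One (x, y) position per frame: the centroid of that frame's key points."""
    path = []
    for kps in keypoints_per_frame:
        xs = [x for x, _ in kps]
        ys = [y for _, y in kps]
        path.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return path

def path_length(path):
    """Total distance travelled along the path, frame to frame."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))
```

Dwell regions and residence times follow from the same path by counting how long consecutive positions stay inside a region of interest.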
8. The system of shop people-flow recognition and analysis according to claim 7, characterized in that the video image processing and recognition module further includes a frame de-duplication processing unit, which deletes frames whose intersection with another frame exceeds a set value, deletes frames whose probability is below the expected value, and deletes frames whose area is below the discrimination threshold.
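The three deletions of claim 8 behave like a standard non-maximum-suppression pipeline; the sketch below makes that reading explicit, with all thresholds as illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_frames(detections, iou_thresh=0.5, prob_thresh=0.3, min_area=100.0):
    """detections: list of (box, probability). Applies the three rules of
    claim 8: low probability, small area, then overlap suppression."""
    kept = [(b, p) for b, p in detections
            if p >= prob_thresh and (b[2] - b[0]) * (b[3] - b[1]) >= min_area]
    # Greedy suppression: keep the higher-scoring of any overlapping pair.
    kept.sort(key=lambda d: d[1], reverse=True)
    result = []
    for box, prob in kept:
        if all(iou(box, r[0]) < iou_thresh for r in result):
            result.append((box, prob))
    return result
```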
9. The system of shop people-flow recognition and analysis according to claim 7, characterized in that the human-body key-point recognition subunit is further configured to output, for each key point, the probability that it is correct.
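Claim 9 does not say how the per-key-point probability is obtained. A common convention in heat-map-based key-point detectors, assumed here, is to report the peak value of each key point's predicted heat map:

```python
import numpy as np

def keypoint_confidences(heatmaps):
    """Return one confidence per key point: the peak of its predicted heat map,
    clipped to [0, 1] so it can be read as a probability."""
    return [float(np.clip(hm.max(), 0.0, 1.0)) for hm in heatmaps]
```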
10. The system of shop people-flow recognition and analysis according to claim 7, characterized in that the system further includes an alarm module for raising alarms on the abnormal conditions of people and objects obtained by the calculation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910123142.XA CN109871804A (en) | 2019-02-19 | 2019-02-19 | A kind of method and system of shop stream of people discriminance analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109871804A true CN109871804A (en) | 2019-06-11 |
Family
ID=66918866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910123142.XA Pending CN109871804A (en) | 2019-02-19 | 2019-02-19 | A kind of method and system of shop stream of people discriminance analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109871804A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104933710B (en) * | 2015-06-10 | 2018-06-19 | 华南理工大学 | Intelligent analysis method for shop people-flow tracks under surveillance video |
CN108805111A (en) * | 2018-09-07 | 2018-11-13 | 杭州善贾科技有限公司 | Passenger-flow detection system based on face recognition and its detection method |
CN109165552A (en) * | 2018-07-14 | 2019-01-08 | 深圳神目信息技术有限公司 | Posture recognition method, system and memory based on human-body key points |
2019-02-19: application CN201910123142.XA filed in China; publication CN109871804A, status Pending
Non-Patent Citations (1)
Title |
---|
我是婉君的: "Interpretation: ST-GCN, an action recognition method based on dynamic skeletons (spatial-temporal graph convolutional network model)", https://blog.csdn.net/qq_36893052/article/details/79860328 *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415424A (en) * | 2019-06-17 | 2019-11-05 | 众安信息技术服务有限公司 | Identity verification method and apparatus, computer device, and storage medium |
CN110287878A (en) * | 2019-06-25 | 2019-09-27 | 秒针信息技术有限公司 | Method and device for determining a marketing strategy, storage medium, and electronic device |
CN110309801A (en) * | 2019-07-05 | 2019-10-08 | 名创优品(横琴)企业管理有限公司 | Video analysis method, apparatus and system, storage medium, and computer device |
CN110399835B (en) * | 2019-07-26 | 2024-04-02 | 北京文安智能技术股份有限公司 | Analysis method, device and system for personnel residence time |
CN110399835A (en) * | 2019-07-26 | 2019-11-01 | 北京文安智能技术股份有限公司 | Analysis method, device and system for personnel residence time |
CN112307855A (en) * | 2019-08-07 | 2021-02-02 | 北京字节跳动网络技术有限公司 | User state detection method and device, electronic device, and storage medium |
CN110490171A (en) * | 2019-08-26 | 2019-11-22 | 睿云联(厦门)网络通讯技术有限公司 | Dangerous posture recognition method and device, computer device, and storage medium |
CN110991528A (en) * | 2019-12-02 | 2020-04-10 | 上海尊溢商务信息咨询有限公司 | Multi-attribute single-model recognition method for offline new-retail store passenger flow |
CN110852814A (en) * | 2020-01-14 | 2020-02-28 | 深圳惠通天下信息技术有限公司 | Advertisement delivery self-service system and method |
CN111339873A (en) * | 2020-02-18 | 2020-06-26 | 南京甄视智能科技有限公司 | Passenger flow statistics method and device, storage medium, and computing device |
CN111401305A (en) * | 2020-04-08 | 2020-07-10 | 北京精准沟通传媒科技股份有限公司 | 4S-store customer statistics method and device, and electronic device |
CN111881754A (en) * | 2020-06-28 | 2020-11-03 | 浙江大华技术股份有限公司 | Behavior detection method, system, device, and computer device |
CN112258026A (en) * | 2020-10-21 | 2021-01-22 | 国网江苏省电力有限公司信息通信分公司 | Dynamic positioning and scheduling method and system based on video identity recognition |
CN112258026B (en) * | 2020-10-21 | 2023-12-15 | 国网江苏省电力有限公司信息通信分公司 | Dynamic positioning and scheduling method and system based on video identity recognition |
CN114529317A (en) * | 2020-10-30 | 2022-05-24 | 广东飞企互联科技股份有限公司 | Method and system for monitoring the browsing volume and transaction volume of store commodities |
CN112784784A (en) * | 2021-01-29 | 2021-05-11 | 新疆爱华盈通信息技术有限公司 | Personnel information statistics method and system based on face recognition |
CN114220141A (en) * | 2021-11-23 | 2022-03-22 | 慧之安信息技术股份有限公司 | Shop frequent-visitor identification method based on face recognition |
KR20230101161A (en) * | 2021-12-29 | 2023-07-06 | 경일대학교산학협력단 | Electronic device and method for identifying object from image |
KR102702961B1 (en) * | 2021-12-29 | 2024-09-04 | 경일대학교산학협력단 | Electronic device and method for identifying object from image |
EP4231251A1 (en) * | 2022-02-18 | 2023-08-23 | Fujitsu Limited | Setting program, detection program, setting method, detection method, setting apparatus, and detection apparatus |
US12051246B2 (en) | 2022-02-18 | 2024-07-30 | Fujitsu Limited | Non-transitory computer readable recording medium, setting method, detection method, setting apparatus, and detection apparatus |
CN118505261A (en) * | 2024-04-09 | 2024-08-16 | 徐州信明智保科技有限公司 | Intelligent retail method based on the Internet of Things |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109871804A (en) | Method and system for shop people-flow recognition and analysis | |
CN104182338B (en) | Fatigue-driving early-warning product detection accuracy test method | |
CN109035629A (en) | Shopping settlement method and device based on an open vending machine | |
CN110119656A (en) | Intelligent monitoring system and scene monitoring method for rule-violating personnel in an operation field | |
CN110197169A (en) | Contactless learning-state monitoring system and learning-state detection method | |
CN107978051A (en) | Access control system and method based on face recognition | |
CN109192302A (en) | Facial multi-modality image acquisition and processing device and method | |
CN106940789A (en) | Method, system and device for quantity statistics based on video recognition | |
CN108010008A (en) | Target tracking method and device, and electronic device | |
CN103679591A (en) | Remote learning-state monitoring system and method | |
CN105208325B (en) | Land-resources monitoring and early-warning method based on fixed-point image capture and comparative analysis | |
CN110276265A (en) | Pedestrian monitoring method and device based on an intelligent three-dimensional monitoring device | |
CN105701467A (en) | Multi-person abnormal-behavior recognition method based on human-body shape features | |
CN104966327A (en) | System and method for health monitoring and attendance registration based on the Internet of Things | |
CN107566790A (en) | Real-time monitoring device and method combining RFID and video surveillance | |
CN109934733A (en) | Intelligent canteen queue management system based on face recognition | |
CN110236479A (en) | Eyesight detection and management system | |
KR20210077504A (en) | Smart farm data system | |
CN108734910A (en) | Abnormal-behavior monitoring and warning system and method | |
CN104007733B (en) | System and method for monitoring intensive agricultural production | |
CN106444587B (en) | System and method for monitoring human vital signs based on an unmanned aerial vehicle | |
CN110472596A (en) | Precision-agriculture planting and disaster prevention and control system | |
CN207233038U (en) | Face roll-call and counting system | |
CN111222420A (en) | Low-bandwidth helmet recognition method based on the FTP protocol | |
CN114359817A (en) | People-flow measurement method based on entrance/exit pedestrian recognition | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-06-11 |