CN113505717B - Online passing system based on face and facial feature recognition technology - Google Patents

Online passing system based on face and facial feature recognition technology

Info

Publication number
CN113505717B
CN113505717B (application CN202110809781.9A)
Authority
CN
China
Prior art keywords
module
face
points
feature
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110809781.9A
Other languages
Chinese (zh)
Other versions
CN113505717A (en)
Inventor
谢晓兰
余友华
常盼
刘亚荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology
Priority to CN202110809781.9A
Publication of CN113505717A
Application granted
Publication of CN113505717B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/53: Querying
    • G06F 16/535: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an online passing system based on face and facial feature recognition technology. The system comprises a capture module, an acquisition module, an extraction module, an initialization module, a calculation module, a matching module, a detection module and an optimization module. An eigenface matching algorithm achieves higher-accuracy face recognition; an online/offline liveness detection service identifies whether the user is a real person, effectively resisting spoofing tools such as photos, videos and 3D masks; and a system optimization algorithm reduces the system's space and time complexity, shortens its response time and improves its fluency. The system can prevent lawbreakers from spoofing with other people's photos, videos, 3D masks and similar tools, can effectively improve the recognition accuracy and passing efficiency of existing face recognition systems, can be widely applied in fields such as finance, retail and security passage, and meets business requirements such as identity verification, face-based attendance and gate passage.

Description

Online passing system based on face and facial feature recognition technology
Technical Field
The invention belongs to the technical field of computer vision and face recognition, and particularly relates to an online passing system based on face and facial feature recognition technology.
Background
Face recognition covers face search, face comparison, liveness detection and other functions. It can be flexibly applied in industry scenarios such as finance, retail, security, face-scan community access, smart campus access control and smart enterprise access control, meeting business requirements such as identity verification, face-based attendance and gate passage. Face search compares a face captured on site against the N faces in a designated face library to find the most similar face or faces, for user identity recognition and verification. Face comparison measures the similarity between the user photo captured on site and the user's enrolled photo to judge whether the user is operating in person, ensuring the authenticity of the user's identity. Liveness detection, performed online or offline, identifies whether the user in a business scenario is a real person and effectively resists spoofing tools such as photos, videos and 3D masks.
Although current mainstream online passing systems based on face recognition are relatively mature, they still fall short in the following respects:
(1) face recognition accuracy is not high, and misjudgments occur easily;
(2) lawbreakers can pass the scan using spoofing tools such as other people's photos, videos and 3D masks;
(3) system response time is long, and the system is insufficiently optimized and lacks fluency.
The current face recognition passing systems therefore still have many problems in recognition accuracy, in-person operation, response time and fluency, so developing a stable, efficient and accurate online passing system has become a pressing topic in the face recognition field. The online passing system based on face and facial feature recognition technology provided by the invention extracts face and facial feature points and adopts an eigenface matching algorithm, a system optimization algorithm and an online/offline liveness detection service. It shortens the system's response time and improves its fluency, effectively prevents lawbreakers from spoofing with other people's photos, videos, 3D masks and similar tools, and improves the recognition accuracy and passing efficiency of existing face recognition systems. It can be widely applied in fields such as finance, retail and security passage, and meets business requirements such as identity verification, face-based attendance and gate passage.
Disclosure of Invention
The invention aims to overcome the insufficient face recognition capability of existing face recognition passing systems, and provides an online passing system based on face and facial feature recognition technology. The system adopts an eigenface matching algorithm to achieve higher-accuracy face recognition; an online/offline liveness detection service to identify whether the scan is performed in person by a live user; and a system optimization algorithm to shorten the system's response time and improve its fluency.
The invention is realized as follows: an online passing system based on face and facial feature recognition technology comprises a capture module, an acquisition module, an extraction module, an initialization module, a calculation module, a matching module, a detection module and an optimization module. The capture module is connected with the acquisition module, the acquisition module with the extraction module, the extraction module with the initialization module, the initialization module with the calculation module, the calculation module with the matching module, the matching module with the detection module, and the detection module with the optimization module.
The capture module is used for capturing the face image to be recognized.
The acquisition module is used for acquiring feature points in the captured face image to be recognized. The selected feature points comprise key points and dense points: the key points are 5 points, namely the left eye center, the right eye center, the nose tip, the left mouth corner and the right mouth corner; the dense points are N further points, acquired after the coordinates (x, y) of each dense feature point are calculated.
The extraction module is used for sequentially extracting the feature vectors of the M = N + 5 feature points according to the pixel value arrangement of each local region of the face image.
The initialization module is used for initializing and constructing the weights and projection matrices of the M feature points.
The calculation module is configured to calculate the weighted collaborative representation of the feature vectors of the M feature points to obtain the representation coefficients of those feature vectors.
The matching module is used for matching the captured face image against the existing face database and judging whether the matching succeeds. The implementation mainly comprises the following steps:
S1: read the weights of the M feature points in the face image captured on site, their projection matrices, and the representation coefficients of their feature vectors.
S2: traverse the existing face database, sequentially calculating, for each face image in it, the weights of its feature points, their projection matrices, and the representation coefficients of its feature vectors.
S3: using the eigenface matching algorithm, compare the weights of the M feature points in the captured face image, their projection matrices, and the representation coefficients of their feature vectors with the corresponding parameter values of each face image in the face database.
S4: calculate the average similarity of the parameter values of the two face images.
S5: judge whether the average similarity of the parameter values is greater than or equal to 90%.
S6: if the average similarity is greater than or equal to 90%, the matching succeeds; otherwise it fails.
S7: the matching module ends.
The detection module adopts an online/offline liveness detection service to identify whether the scan is performed in person by a live user.
The optimization module adopts a system optimization algorithm to optimize the system, shortening its response time and improving its fluency.
Compared with existing face recognition passing systems, the online passing system based on face and facial feature recognition technology has the following advantages:
(1) an eigenface matching algorithm achieves higher-accuracy face recognition;
(2) an online/offline liveness detection service identifies whether the user is a real person, effectively resisting spoofing with photos, videos and 3D masks;
(3) a system optimization algorithm reduces the system's space and time complexity, shortens its response time, and improves its fluency.
Drawings
Fig. 1 is a schematic structural diagram of a system according to an embodiment of the present invention.
FIG. 2 is a block flow diagram of a matching module in an embodiment of the invention.
The labels in the figure are: 1. capture module; 2. acquisition module; 3. extraction module; 4. initialization module; 5. calculation module; 6. matching module; 7. detection module; 8. optimization module.
Detailed Description
Embodiment: as shown in Fig. 1, an embodiment of the present invention provides an online passing system based on face and facial feature recognition technology, which comprises 8 modules: a capture module 1, an acquisition module 2, an extraction module 3, an initialization module 4, a calculation module 5, a matching module 6, a detection module 7 and an optimization module 8. The capture module 1 is connected with the acquisition module 2, the acquisition module 2 with the extraction module 3, the extraction module 3 with the initialization module 4, the initialization module 4 with the calculation module 5, the calculation module 5 with the matching module 6, the matching module 6 with the detection module 7, and the detection module 7 with the optimization module 8.
The capture module 1 is configured to capture the face image to be recognized; in this embodiment the captured face image is a standard or non-standard real-time face image.
The acquisition module 2 is used for acquiring feature points in the captured face image to be recognized. The selected feature points comprise key points and dense points: the key points are 5 points, namely the left eye center, the right eye center, the nose tip, the left mouth corner and the right mouth corner; the dense points are N further points, acquired after the coordinates (x, y) of each dense feature point are calculated. The dense feature point coordinates are calculated as follows:
x = (2j - 1) · d / 2 + 1    equation (1)
y = (2i - 1) · d / 2 + 1    equation (2)
where x is the abscissa and y the ordinate of the dense feature point, i is its row index, j is its column index, and d is the distance between two adjacent dense feature points.
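For illustration, a minimal Python sketch of equations (1) and (2) follows; the grid extent (rows, cols) and the spacing d are illustrative assumptions, since the patent fixes only the coordinate formulas, not the grid size.

```python
# Minimal sketch of the dense-point grid from equations (1) and (2).
# The grid extent (rows, cols) and spacing d are illustrative assumptions.

def dense_point_coords(rows, cols, d):
    """Return (x, y) for each dense feature point, 1-based i (row), j (column)."""
    points = []
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            x = (2 * j - 1) * d / 2 + 1   # equation (1)
            y = (2 * i - 1) * d / 2 + 1   # equation (2)
            points.append((x, y))
    return points

# Example: a 4 x 5 grid with spacing d = 8 gives N = 20 dense points.
print(dense_point_coords(4, 5, 8)[:3])   # [(5.0, 5.0), (13.0, 5.0), (21.0, 5.0)]
```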
The extraction module 3 sequentially extracts the feature vectors of the M = N + 5 feature points according to the pixel value arrangement of each local region of the face image.
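A hedged sketch of this step: each feature point contributes one feature vector formed by flattening the pixel values of a square local patch around it. The 16 x 16 patch size is an assumption; the patent states only that the vectors follow the pixel value arrangement of each local region.

```python
import numpy as np

def extract_feature_vectors(gray, points, patch=16):
    """Flatten a patch x patch pixel neighborhood around each feature point.

    gray: 2-D grayscale image array; points: iterable of (x, y) coordinates.
    Returns an (M, patch * patch) array of M = N + 5 feature vectors.
    """
    half = patch // 2
    padded = np.pad(gray, half, mode="edge")   # avoid clipping at borders
    vectors = []
    for x, y in points:
        cx, cy = int(round(x)) + half, int(round(y)) + half
        region = padded[cy - half:cy + half, cx - half:cx + half]
        vectors.append(region.astype(np.float32).ravel())
    return np.stack(vectors)
```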
The initialization module 4 is configured to initialize weights and projection matrices of the M feature points.
The calculation formula of the weight is as follows:
[Equation (3) is rendered only as an image in the source (GDA0003548552890000041); it defines the weight of each feature point in terms of the quantities below.]
where W_i is the weight of the i-th feature point; W_0 is the initial weight of the i-th feature point, 1 by default; S_i is the shape vector of the i-th feature point; and p_i is the shape parameter corresponding to the shape vector of the i-th feature point.
The calculation formula of the projection matrix is as follows:
[Equation (4) is rendered only as an image in the source (GDA0003548552890000042); it defines the projection matrix as the minimizing argument of a weighted objective over the quantities below.]
where P_i is the projection matrix of the feature points, argmin denotes the minimizing argument, W_i is the weight of the i-th feature point, y_i is the ordinate of the i-th feature point, and x_i is its abscissa.
The calculation module 5 is configured to calculate the weighted collaborative representation of the feature vectors of the M feature points to obtain the representation coefficients of those feature vectors, with the specific calculation formula:
[Equation (5) is rendered only as an image in the source (GDA0003548552890000043); it gives the representation coefficient S in terms of the quantities below.]
where S is the representation coefficient, W_i is the weight of the i-th feature point, and P_i is the projection matrix of the feature points.
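Because equations (3) to (5) survive only as images, their exact closed forms cannot be recovered from the text. The sketch below therefore uses the standard weighted collaborative representation solution s = (X^T X + lambda * diag(w))^(-1) X^T y from the weighted-CRC literature as a stand-in; it is an assumption, not the patent's own formula.

```python
import numpy as np

def wcr_coefficients(X, y, w, lam=0.01):
    """Weighted collaborative representation coefficients (assumed form).

    X: (d, M) matrix whose columns are the M feature vectors,
    y: (d,) probe vector, w: (M,) per-feature-point weights,
    lam: regularization strength (illustrative default).
    """
    A = X.T @ X + lam * np.diag(w)      # weighted l2-regularized normal matrix
    return np.linalg.solve(A, X.T @ y)  # representation coefficients s
```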
The matching module 6 is configured to match the captured face image against the existing face database and judge whether the matching succeeds. Its main flow, shown in Fig. 2, comprises the following steps (a minimal sketch follows the list):
and the step S1 is used for reading the weights of M feature points in the face image collected on site, the projection matrix and the representation coefficients of the feature vectors thereof.
Step S2 is to traverse the existing face database, and sequentially calculate the weight of each face image feature point in the face database, the projection matrix, and the representation coefficient of its feature vector.
In step S3, a eigenface matching algorithm is used to compare the weights of M eigen points in the acquired face image, the projection matrix and the representation coefficients of their eigen vectors with the corresponding parameter values of each face image in the face database.
The step S4 is used to calculate the average similarity of the parameter values of the above two facial images.
Step S5 is used to determine whether the average similarity between the above parameter values is greater than or equal to 90%.
The step S6 is configured to determine whether the average similarity is greater than or equal to 90%, where if the average similarity is greater than or equal to 90%, the matching is successful, and otherwise the matching is failed.
Step S7 ends the matching module 6 and passes control to the detection module 7.
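A minimal sketch of steps S1 to S7 under stated assumptions: the cosine similarity per parameter group and the dictionary packing of the three groups are illustrative; the traversal order and the 90% average-similarity gate come from the patent.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match(probe, database, threshold=0.90):
    """probe and each database entry hold 'weights', 'projection',
    and 'coefficients' as NumPy arrays (S1)."""
    for entry in database:                                   # S2: traverse
        sims = [cosine_sim(probe[k].ravel(), entry[k].ravel())
                for k in ("weights", "projection", "coefficients")]  # S3
        if float(np.mean(sims)) >= threshold:                # S4-S6: avg >= 90%
            return entry                                     # matching succeeded
    return None                                              # matching failed (S7)
```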
The detection module 7 adopts an online/offline liveness detection service to identify whether the scan is performed in person by a live user.
The online/offline liveness detection service prompts the person scanning their face on site to perform the corresponding actions, and uses the background SDK to collect dynamic information in real time to judge whether the user is a live, real person.
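A hedged sketch of this prompt-action flow: prompt_user() and action_detected() are hypothetical stand-ins for the unnamed background SDK, whose API the patent does not disclose; randomizing the requested actions is what defeats replayed photos, videos and 3D masks.

```python
import random

ACTIONS = ["blink", "open your mouth", "turn your head left", "nod"]

def prompt_user(action):
    print(f"Please {action} now")   # on-screen instruction to the user

def action_detected(action):
    # Hypothetical stand-in for the SDK's real-time dynamic-information
    # analysis; always True here so the sketch runs end to end.
    return True

def liveness_check(rounds=3):
    """Pass only if every randomly chosen challenge action is performed."""
    for _ in range(rounds):
        action = random.choice(ACTIONS)
        prompt_user(action)
        if not action_detected(action):
            return False            # photo/video/3D-mask spoofing suspected
    return True
```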
The optimization module 8 adopts a system optimization algorithm to optimize the system, shortening the system's response time and improving its fluency.
The system optimization algorithm is a hierarchical retrieval identification method: a numerical parameter of a selected facial characteristic serves as the key for a preliminary retrieval, the images in the corresponding subset are then numbered and sorted, and a deep comparison is finally performed to authenticate the identity information.
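A minimal sketch of the hierarchical retrieval idea under stated assumptions: the scalar key (interocular distance, say), the bucket width and the deep-comparison callback are illustrative choices; the patent fixes only the order of coarse key, then number-and-sort within the subset, then deep comparison.

```python
from collections import defaultdict

def build_index(gallery, key_fn, bucket_width=5.0):
    """Bucket gallery faces by a coarse scalar facial parameter."""
    index = defaultdict(list)
    for face in gallery:
        index[int(key_fn(face) // bucket_width)].append(face)
    return index

def hierarchical_search(probe, index, key_fn, deep_sim,
                        bucket_width=5.0, threshold=0.90):
    bucket = index.get(int(key_fn(probe) // bucket_width), [])
    # number and sort the subset by closeness of the coarse key
    bucket.sort(key=lambda face: abs(key_fn(face) - key_fn(probe)))
    for candidate in bucket:        # deep comparison runs on the subset only
        if deep_sim(probe, candidate) >= threshold:
            return candidate        # identity information authenticated
    return None
```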
In summary, the invention is an online passing system based on face and facial feature recognition technology. It adopts an eigenface matching algorithm to achieve higher-accuracy face recognition; an online/offline liveness detection service to identify whether the user is a real person, effectively resisting spoofing tools such as photos, videos and 3D masks; and a system optimization algorithm to reduce the system's space and time complexity, shorten its response time and improve its fluency. The system can effectively improve the recognition accuracy and passing efficiency of existing face recognition systems, can be widely applied in fields such as finance, retail and security passage, and meets business requirements such as identity verification, face-based attendance and gate passage.
The foregoing is only a preferred embodiment of the present invention. The above embodiment is intended to illustrate, not to limit, the technical solution of the invention; any change or substitution that can be readily conceived by those skilled in the art within the technical scope disclosed by the invention shall fall within the protection scope of the invention.

Claims (1)

1. An online passing system based on face and facial feature recognition technology, characterized by comprising: a capture module, an acquisition module, an extraction module, an initialization module, a calculation module, a matching module, a detection module and an optimization module; the capture module is connected with the acquisition module, the acquisition module with the extraction module, the extraction module with the initialization module, the initialization module with the calculation module, the calculation module with the matching module, the matching module with the detection module, and the detection module with the optimization module;
the capture module is used for capturing the face image to be recognized, the captured face image being a standard or non-standard real-time face image;
the acquisition module is used for acquiring feature points in the captured face image to be recognized, the selected feature points comprising key points and dense points: the key points are 5 points, namely the left eye center, the right eye center, the nose tip, the left mouth corner and the right mouth corner; the dense points are N further points, acquired after the coordinates (x, y) of each dense feature point are calculated, with the dense feature point coordinates given by:
x = (2j - 1) · d / 2 + 1    equation (1)
y = (2i - 1) · d / 2 + 1    equation (2)
where x is the abscissa and y the ordinate of the dense feature point, i is its row index, j is its column index, and d is the distance between two adjacent dense feature points;
the extraction module is used for sequentially extracting the feature vectors of the M = N + 5 feature points according to the pixel value arrangement of each local region of the face image;
the initialization module is used for initializing and constructing the weights and projection matrices of the M feature points;
the calculation formula of the weight is as follows:
[Equation (3) is rendered only as an image in the source (FDA0003548552880000011); it defines the weight of each feature point in terms of the quantities below.]
where W_i is the weight of the i-th feature point; W_0 is the initial weight of the i-th feature point, 1 by default; S_i is the shape vector of the i-th feature point; and p_i is the shape parameter corresponding to the shape vector of the i-th feature point;
the calculation formula of the projection matrix is as follows:
[Equation (4) is rendered only as an image in the source (FDA0003548552880000012); it defines the projection matrix as the minimizing argument of a weighted objective over the quantities below.]
where P_i is the projection matrix of the feature points, argmin denotes the minimizing argument, W_i is the weight of the i-th feature point, y_i is the ordinate of the i-th feature point, and x_i is the abscissa of the i-th feature point;
the calculation module is configured to calculate the weighted collaborative representation of the feature vectors of the M feature points to obtain the representation coefficients of those feature vectors, with the specific calculation formula:
[Equation (5) is rendered only as an image in the source (FDA0003548552880000021); it gives the representation coefficient S in terms of the quantities below.]
where S is the representation coefficient, W_i is the weight of the i-th feature point, and P_i is the projection matrix of the feature points;
the matching module is used for matching the captured face image against the existing face database and judging whether the matching succeeds, the implementation mainly comprising the following steps:
S1: reading the weights of the M feature points in the face image captured on site, their projection matrices, and the representation coefficients of their feature vectors;
S2: traversing the existing face database, sequentially calculating, for each face image in the database, the weights of its feature points, their projection matrices, and the representation coefficients of its feature vectors;
S3: using the eigenface matching algorithm, comparing the weights of the M feature points in the captured face image, their projection matrices, and the representation coefficients of their feature vectors with the corresponding parameter values of each face image in the face database;
S4: calculating the average similarity of the parameter values of the two face images;
S5: judging whether the average similarity of the parameter values is greater than or equal to 90%;
S6: if the average similarity is greater than or equal to 90%, the matching succeeds; otherwise it fails;
S7: the matching module ends;
the detection module adopts an online/offline liveness detection service to identify whether the scan is performed in person by a live user;
the online/offline liveness detection service prompts the person scanning their face on site to perform the corresponding actions, and uses the background SDK to collect dynamic information in real time to judge whether the user is a live, real person;
the optimization module adopts a system optimization algorithm to optimize the system, shortening the system's response time and improving its fluency;
the system optimization algorithm is a hierarchical retrieval identification method: a numerical parameter of a selected facial characteristic serves as the key for a preliminary retrieval, the images in the corresponding subset are then numbered and sorted, and a deep comparison is finally performed to authenticate the identity information.
CN202110809781.9A 2021-07-17 2021-07-17 Online passing system based on face and facial feature recognition technology Active CN113505717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809781.9A CN113505717B (en) 2021-07-17 2021-07-17 Online passing system based on face and facial feature recognition technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110809781.9A CN113505717B (en) 2021-07-17 2021-07-17 Online passing system based on face and facial feature recognition technology

Publications (2)

Publication Number Publication Date
CN113505717A CN113505717A (en) 2021-10-15
CN113505717B (en) 2022-05-31

Family

ID=78013718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809781.9A Active CN113505717B (en) 2021-07-17 2021-07-17 Online passing system based on face and facial feature recognition technology

Country Status (1)

Country Link
CN (1) CN113505717B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114005160B (en) * 2021-10-28 2022-05-17 建湖县公安局 Access control system and method based on identity two-dimensional code and artificial intelligence
CN114638684A (en) * 2022-02-16 2022-06-17 中和农信项目管理有限公司 Financial survey anti-cheating method and device, terminal equipment and storage medium
CN115631525B (en) * 2022-10-26 2023-06-23 万才科技(杭州)有限公司 Face edge point identification-based insurance instant matching method
CN118196876B (en) * 2024-05-20 2024-08-16 东南大学 Virtual identity authentication device and authentication method thereof

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072496A (en) * 1998-06-08 2000-06-06 Microsoft Corporation Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
CN101414348A (en) * 2007-10-19 2009-04-22 三星电子株式会社 Method and system for identifying human face in multiple angles
JP2010182150A (en) * 2009-02-06 2010-08-19 Seiko Epson Corp Image processing apparatus for detecting coordinate position of characteristic part of face
CN101968846B (en) * 2010-07-27 2013-05-15 上海摩比源软件技术有限公司 Face tracking method
CN102708359B (en) * 2012-05-08 2014-06-04 北京工业大学 Face recognition method based on color images
CN103839041B (en) * 2012-11-27 2017-07-18 腾讯科技(深圳)有限公司 The recognition methods of client features and device
CN103093490B (en) * 2013-02-02 2015-08-26 浙江大学 Based on the real-time face animation method of single video camera
CN103473801B (en) * 2013-09-27 2016-09-14 中国科学院自动化研究所 A kind of human face expression edit methods based on single camera Yu movement capturing data
CN103745209B (en) * 2014-01-27 2018-04-13 中国科学院深圳先进技术研究院 A kind of face identification method and system
CN104616005A (en) * 2015-03-10 2015-05-13 南京宜开数据分析技术有限公司 Domain-self-adaptive facial expression analysis method
CN105787430A (en) * 2016-01-12 2016-07-20 南通航运职业技术学院 Method for identifying second level human face with weighted collaborative representation and linear representation classification combined
KR102221118B1 (en) * 2016-02-16 2021-02-26 삼성전자주식회사 Method for extracting feature of image to recognize object
CN105809107B (en) * 2016-02-23 2019-12-03 深圳大学 Single sample face recognition method and system based on face feature point
CN106599833B (en) * 2016-12-12 2019-06-25 武汉科技大学 A kind of face identification method adapted to based on field and manifold distance is measured
CN107239758B (en) * 2017-05-24 2022-03-08 北京小米移动软件有限公司 Method and device for positioning key points of human face
CN107944367B (en) * 2017-11-16 2021-06-01 北京小米移动软件有限公司 Face key point detection method and device
CN109117755B (en) * 2018-07-25 2021-04-30 北京飞搜科技有限公司 Face living body detection method, system and equipment
CN110032927B (en) * 2019-02-27 2024-08-02 视缘(上海)智能科技有限公司 Face recognition method
CN113034345B (en) * 2019-12-25 2023-02-28 广东奥博信息产业股份有限公司 Face recognition method and system based on SFM reconstruction
CN111652086B (en) * 2020-05-15 2022-12-30 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111950429B (en) * 2020-08-07 2023-11-14 南京审计大学 Face recognition method based on weighted collaborative representation

Also Published As

Publication number Publication date
CN113505717A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN113505717B (en) Online passing system based on face and facial feature recognition technology
CN101587485B (en) Face information automatic login method based on face recognition technology
CN107967458A (en) A kind of face identification method
CN105825176A (en) Identification method based on multi-mode non-contact identity characteristics
JP2003317101A (en) Method for verifying face using method for automatically updating database and system therefor
Paul et al. Extraction of facial feature points using cumulative histogram
CN111507320A (en) Detection method, device, equipment and storage medium for kitchen violation behaviors
Yen et al. Facial feature extraction using genetic algorithm
Xia et al. Face occlusion detection using deep convolutional neural networks
Amaro et al. Evaluation of machine learning techniques for face detection and recognition
Mohamed et al. Automated face recogntion system: Multi-input databases
Yogalakshmi et al. Review on digital image processing techniques for face recognition
CN112258707A (en) Intelligent access control system based on face recognition
Chawda et al. Unique Face Identification System using Machine Learning
JP2008065651A (en) Face image authentication method, face image authentication apparatus and program
Mohammed et al. Face Recognition Based on Viola-Jones Face Detection Method and Principle Component Analysis (PCA)
Sankaran et al. A multi-view approach on modular PCA for illumination and pose invariant face recognition
Dagher et al. Improving the Component‐Based Face Recognition Using Enhanced Viola–Jones and Weighted Voting Technique
Praseeda Lekshmi et al. Analysis of facial expressions from video images using PCA
Ismaila et al. A study of features extraction algorithms for human face recognition
Wei Review of face recognition algorithms
Paul et al. Extraction of facial feature points using cumulative distribution function by varying single threshold group
Wang et al. Iris-face fusion and security analysis based on fisher discriminant
JP3841482B2 (en) Face image recognition device
Ahmed et al. Report Based Face Detection and Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211015

Assignee: Pinchuang Technology Co.,Ltd.

Assignor: GUILIN University OF TECHNOLOGY

Contract record no.: X2022450000173

Denomination of invention: An online traffic system based on facial feature recognition technology

Granted publication date: 20220531

License type: Common License

Record date: 20221124

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211015

Assignee: Guangxi Julian Information Technology Co.,Ltd.

Assignor: GUILIN University OF TECHNOLOGY

Contract record no.: X2022450000633

Denomination of invention: An online access system based on facial feature recognition technology

Granted publication date: 20220531

License type: Common License

Record date: 20221230