CN103472434B - Robot sound positioning method - Google Patents

Robot sound positioning method

Info

Publication number
CN103472434B
CN103472434B (application CN201310455238.9A)
Authority
CN
China
Prior art keywords
robot
sound
sound source
kinect
gravity
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310455238.9A
Other languages
Chinese (zh)
Other versions
CN103472434A (en)
Inventor
莫宏伟
孟龙龙
徐立芳
梁作玉
蒋兴洲
雍升
Current Assignee
Nanhai Innovation And Development Base Of Sanya Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201310455238.9A priority Critical patent/CN103472434B/en
Publication of CN103472434A publication Critical patent/CN103472434A/en
Application granted granted Critical
Publication of CN103472434B publication Critical patent/CN103472434B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a robot sound positioning method relating to sound localization and robot navigation. At least two Kinect sensors are arranged, and the sound source direction detected by each Kinect is obtained. The deviation sector region containing the sound source is determined for every pair of Kinect sensors, giving three regions in total. The center of gravity of each region is found by the center-of-gravity method, and the mean of the three centers of gravity is the optimal position of the sound source. The method improves positioning accuracy, is highly practical and flexible, and can be applied to fields such as sound localization and robot navigation motion control.

Description

Robot sound localization method
Technical field
The invention belongs to the field of robotics and relates to a robot sound localization method that can be applied to fields such as robot motion control and indoor robot localization and navigation.
Background art
Kinect is a three-dimensional (3D) motion-sensing camera that integrates real-time motion capture, image recognition, microphone input, speech recognition, and community interaction. When Kinect was first released as a peripheral for the Xbox 360, skeleton tracking and speech recognition were the Kinect SDK features most welcomed by developers, but compared with skeleton tracking, the power of the microphone array for speech recognition was largely overlooked. This is partly because of the exciting skeleton tracking system, and partly because the Xbox game controller and Kinect somatosensory games did not fully exploit Kinect's audio processing.
The Kinect microphone array runs along the bottom of the device and consists of four microphones distributed horizontally. Although each microphone captures the same sound signal, the array as a whole can detect the direction of the sound source, making it possible to identify sound arriving from a specific direction. The audio captured by the array passes through sophisticated enhancement algorithms that remove irrelevant background noise. All of this processing takes place between the Kinect hardware and the Kinect SDK, which allows voice commands to be recognized and the source direction of sound to be judged over a fairly large space, even when the speaker is some distance from the Kinect.
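The directional capability described above rests on time differences of arrival across the microphones. The Kinect driver's actual beamforming algorithm is proprietary, so the following is only an illustrative sketch of the principle under assumed parameters (a two-microphone far-field model, 0.2 m spacing, a broadband source, and a deliberately high simulation sample rate so the delay is well resolved): the bearing is recovered from the cross-correlation peak between the two channels.

```python
import numpy as np

C = 343.0      # speed of sound in air, m/s
FS = 192_000   # simulation sample rate, Hz (high, so the delay is well resolved)
D = 0.2        # assumed microphone spacing, m

def simulate_pair(angle_deg, n=4096, seed=0):
    """Simulate two microphone channels for a broadband far-field source.

    A source at `angle_deg` from broadside reaches microphone 2
    D*sin(angle)/C seconds after microphone 1.
    """
    rng = np.random.default_rng(seed)
    src = rng.standard_normal(2 * n)
    delay = int(round(D * np.sin(np.radians(angle_deg)) / C * FS))
    m1 = src[n // 2 : n // 2 + n]
    m2 = src[n // 2 - delay : n // 2 - delay + n]   # delayed copy of m1
    return m1, m2

def estimate_bearing(m1, m2):
    """Estimate the source angle from the cross-correlation peak."""
    max_lag = int(D / C * FS) + 2                   # largest physically possible delay
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(m1, np.roll(m2, k)) for k in lags]
    delay = -lags[int(np.argmax(corr))]             # samples by which m2 lags m1
    s = np.clip(delay * C / (FS * D), -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With four microphones the same idea applies pairwise, which is what lets the array resolve the source direction more robustly than a single pair.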
Indoor robot localization is both a hot topic and a difficult problem in robotics research, and researchers have proposed a variety of methods. A typical one is RFID: first an intelligent space, also called a sensor-network space, is built indoors by laying RFID tags on the floor at fixed intervals in advance, with the absolute coordinates of each tag's position written into the tag; the mobile robot then carries an RFID reader, and when it moves over a tag it reads the coordinates and thereby knows its current position. This method, however, places certain requirements on the environment, and the positioning accuracy varies with the tag spacing. The accuracy of other localization technologies is also affected by many factors: dead reckoning depends heavily on sensor precision and on the robot's own kinematic system; WiFi and Bluetooth likewise place requirements on the environment; sound localization has been applied in robotics, but its complex processing and susceptibility to ambient noise keep its accuracy low and make it hard to popularize; indoor map building achieves high positioning accuracy, but the map-construction process is complex and computationally heavy, so real-time requirements are difficult to meet.
Summary of the invention
The invention provides a robot sound localization method to solve the low positioning accuracy of the prior art.
In one aspect, a robot sound localization method is provided, comprising: using at least two Kinects as sound sensors to obtain the direction angles of the sound emitted by the robot; determining, from the direction angles and the positions of the at least two Kinects, the deviation sector regions in which the sound source lies for every pair of Kinects; determining the geometric center of gravity of the intersection region of every two deviation sectors; and calculating the optimal sound source position from the determined centers of gravity by the geometric center-of-gravity method, the optimal sound source position being the located position of the robot.
Preferably, after the optimal sound source position is calculated by the geometric center-of-gravity method, the route of the robot's movement is determined from the target position and the optimal sound source position, and the robot is controlled to move along that route to the target position.
With this scheme the position of the robot can be located accurately and conveniently.
Brief description of the drawings
Fig. 1 shows the basic principle of Kinect sound localization;
Fig. 2 shows the sound source region determined jointly by Kinect sensors No. 1 and No. 3;
Fig. 3 shows the construction of the center of gravity of an irregular quadrilateral;
Fig. 4 shows the sound source region determined jointly by Kinect sensors No. 1 and No. 2;
Fig. 5 shows the sound source region determined jointly by Kinect sensors No. 2 and No. 3;
Fig. 6 shows robot navigation realized by Kinect sound localization;
Fig. 7 is a block diagram of the robot sound localization and navigation control principle.
Detailed description of the embodiments
The specific implementation of the invention is described in detail below with reference to the accompanying drawings.
An embodiment of the invention provides a robot localization method, comprising:
(1) Three Kinect sensors are placed to form a rectangular coordinate system; a sound source at some position in this coordinate system emits sound for a certain duration, and the sound source direction angles obtained by the three Kinect sensors are saved.
(2) Because each Kinect sensor receives the sound signal with some deviation, the deviation angle is denoted α and the range of the detected sound source direction deviation is [-α, α], called a deviation sector; the sound source region determined by two Kinect sensors is represented by the intersection of their sectors, and this is called the deviation sector method. The center of gravity of the intersection region formed by the deviation sectors of every two Kinect sensors is then found geometrically; this is called the geometric center-of-gravity method.
(3) Using the deviation sector method and the geometric center-of-gravity method, the centers of gravity of the sound source regions determined by every two Kinect sensors, three in total, are found; the mean of the three centroid coordinates is the optimal position of the sound source.
(4) The optimal sound source position is sent to the mobile robot. Having obtained its own position, the robot adjusts its direction of motion and moves toward the target by sound localization, thereby realizing sound-localization navigation of the mobile robot.
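Step (2)'s deviation sector can be made concrete with a small helper. The function below is not from the patent; the world-frame reference angle `ref_deg` of each sensor's center line and the sign convention are assumptions introduced for illustration. It tests whether a candidate point falls inside the [-α, α] sector around the detected source line:

```python
import math

def in_deviation_sector(sensor, ref_deg, measured_deg, point, alpha_deg=5.0):
    """True if `point` lies inside one sensor's deviation sector.

    `sensor` is the sensor position, `ref_deg` the (assumed) world-frame
    angle of its center line, `measured_deg` the detected source direction
    relative to that line (negative to one side, positive to the other, as
    in the text), and `alpha_deg` the deviation half-angle.
    """
    bearing = math.degrees(math.atan2(point[1] - sensor[1], point[0] - sensor[0]))
    # wrap the angular difference into (-180, 180] before comparing
    diff = (bearing - (ref_deg + measured_deg) + 180.0) % 360.0 - 180.0
    return abs(diff) <= alpha_deg
```

For example, with a sensor at the origin whose center line points along 45° and a measured angle of -10°, the detected source line points along 35°, and any point within ±5° of that bearing lies inside the sector.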
The embodiment uses the audio processing capability of the Kinect sensors to detect the sound source direction and thereby realize indoor robot localization.
Compared with the prior art, the embodiments have the following advantages:
The sound localization method is simple. A sound localization system has strict real-time requirements, which a more complex algorithm cannot meet; the invention adopts the common center-of-gravity method and can therefore determine the sound source position quickly.
The sound localization accuracy is high. The method makes full use of the Kinect microphone array and the background noise suppression and echo cancellation of the Kinect driver, eliminating the influence of virtual sources and background noise, detecting the true source direction, and obtaining the optimal position of the source.
The localization method is general. Since it relies on Kinect's detection of the source direction, the direction can still be obtained even when there is an obstacle between the Kinect and the source; moreover, the method is applicable not only to indoor sound localization but equally outdoors.
It can be applied to accurate indoor robot localization and navigation motion control.
The embodiment also provides a robot localization and navigation method, comprising:
1. Kinect sound field center-of-gravity localization method
Referring to Fig. 1, so that the sound source always lies within the optimal detection angle range of the Kinect sensors, the three sensors are arranged as shown. Each Kinect measures the sound source direction angle relative to its center line, marked in the figure as a dashed reference line; facing the Kinect, angles to the left of the line are negative and angles to the right are positive. The sound source emits a 50 ms burst at a frequency of 16 kHz at some position. The process of obtaining the centroids of the sound source intersection regions determined by the three Kinect sensors, combining the deviation sector method with the regional centroid method, is as follows:
Let β denote the sound source direction angle detected by Kinect No. 1 and l_k1 the corresponding sound source line; the true source therefore lies within the sector obtained by rotating l_k1 by ±α about Kinect No. 1. Kinects No. 2 and No. 3 are treated similarly: θ denotes the direction angle detected by Kinect No. 2 and l_k2 its sound source line, and γ denotes the direction angle detected by Kinect No. 3 and l_k3 its sound source line. As drawn, β and θ are negative while γ is positive.
(x_k1, y_k1), (x_k2, y_k2), and (x_k3, y_k3) denote the position coordinates of Kinects No. 1, No. 2, and No. 3; these are easy to measure in practice and are therefore known parameters.
Fig. 1 shows the source directions detected by the three Kinect sensors and the source regions determined by the pairwise intersection of their deviation sectors. Taking the region formed by the intersection of the deviation sectors of Kinects No. 1 and No. 3 as an example, the centroid of the intersection region is found as follows:
As in Fig. 2, rotating the line l_k1 counterclockwise and clockwise by α gives the two boundaries of the No. 1 sector, and rotating l_k3 likewise gives the boundaries of the No. 3 sector. A(x_A, y_A) is the intersection of the counterclockwise boundary of l_k1 with the clockwise boundary of l_k3; B(x_B, y_B) is the intersection of the two clockwise boundaries; C(x_C, y_C) is the intersection of the clockwise boundary of l_k1 with the counterclockwise boundary of l_k3; and D(x_D, y_D) is the intersection of the two counterclockwise boundaries. The quadrilateral ABCD is thus the sound source region determined jointly by Kinects No. 1 and No. 3; for ease of description it is drawn enlarged.
Referring to Fig. 3, N(x_N, y_N) denotes the centroid of triangle DAB, O(x_O, y_O) the centroid of triangle ABC, P(x_P, y_P) the centroid of triangle BCD, Q(x_Q, y_Q) the centroid of triangle CDA, and R(x_R, y_R) the centroid of quadrilateral ABCD.
Drawing diagonal AC divides quadrilateral ABCD into triangles ABC and CDA, so the centroid of ABCD lies on the segment OQ joining their centroids O and Q. Likewise, drawing the other diagonal BD divides ABCD into triangles DAB and BCD, so the centroid also lies on segment NP. The centroid of quadrilateral ABCD is therefore the intersection point of segments OQ and NP.
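The diagonal construction just described can be sketched directly. The helper names (`tri_centroid`, `quad_centroid`) are illustrative, not from the patent; the code forms the four triangle centroids N, O, P, Q and intersects the lines through OQ and NP, exactly as in Fig. 3:

```python
def tri_centroid(p, q, r):
    """Centroid of a triangle: the mean of its three vertices."""
    return ((p[0] + q[0] + r[0]) / 3.0, (p[1] + q[1] + r[1]) / 3.0)

def line_through(p, q):
    """Line through p and q as (a, b, c) with a*x + b*y = c."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c)."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def quad_centroid(A, B, C, D):
    """Centroid of convex quadrilateral ABCD by the Fig. 3 construction:
    the intersection of segment OQ (triangle centroids across diagonal AC)
    with segment NP (triangle centroids across diagonal BD)."""
    N = tri_centroid(D, A, B)
    O = tri_centroid(A, B, C)
    P = tri_centroid(B, C, D)
    Q = tri_centroid(C, D, A)
    return intersect(line_through(O, Q), line_through(N, P))
```

For a unit square the construction returns the center (0.5, 0.5), and for a general convex quadrilateral it agrees with the area-weighted centroid of the two triangles on either side of a diagonal.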
Referring to Figs. 4 and 5, quadrilateral EFGH is the sound source region determined jointly by Kinects No. 1 and No. 2, and quadrilateral IJKL is the region determined jointly by Kinects No. 2 and No. 3. By the same method as above, their centroids R2(x_R2, y_R2) and R3(x_R3, y_R3) are found, the centroid of ABCD being R1(x_R1, y_R1).
Finally, the average of the three centroid coordinates of quadrilaterals ABCD, EFGH, and IJKL is the optimal position S(x_S, y_S) of the sound source.
2. Optimal sound source position algorithm
The method of determining the optimal sound source position comprises:
Step 1: initialize the parameters
The position coordinates of the three Kinect sensors can all be measured in practice, i.e. (x_k1, y_k1), (x_k2, y_k2), and (x_k3, y_k3) are known parameters. The error angle α is set to 5° according to the technical specifications of the Kinect and actual experiments. The sound source and the three Kinect sensors then start working.
Step 2: find the line intersection coordinates
The sound source stops after emitting a 50 ms burst, and the direction angles obtained by the three Kinect sensors are saved: β from Kinect No. 1, θ from Kinect No. 2, and γ from Kinect No. 3.
The following line equations can be written in point-slope form (Kinect No. 1 being at the origin with its reference line along 45°, and Kinect No. 3 at (x_k3, y_k3) with its reference line along 135°, as in Fig. 1):
Line (1): y = tan(45° + β + α)·x
Line (2): y = tan(45° + β - α)·x
Line (3): y = y_k3 + tan(135° + γ + α)·(x - x_k3)
Line (4): y = y_k3 + tan(135° + γ - α)·(x - x_k3)
Intersection A(x_A, y_A) is obtained by solving the system formed by equations (1) and (4); intersection B(x_B, y_B) by solving equations (2) and (4); intersection C(x_C, y_C) by solving equations (2) and (3); and intersection D(x_D, y_D) by solving equations (1) and (3).
Step 3: find the centroid of the irregular quadrilateral
By the triangle centroid formula:
Centroid N(x_N, y_N) of triangle DAB: x_N = (x_D + x_A + x_B)/3, y_N = (y_D + y_A + y_B)/3;
Centroid O(x_O, y_O) of triangle ABC: x_O = (x_A + x_B + x_C)/3, y_O = (y_A + y_B + y_C)/3;
Centroid P(x_P, y_P) of triangle BCD: x_P = (x_B + x_C + x_D)/3, y_P = (y_B + y_C + y_D)/3;
Centroid Q(x_Q, y_Q) of triangle CDA: x_Q = (x_C + x_D + x_A)/3, y_Q = (y_C + y_D + y_A)/3.
The following line equations can be written in two-point form:
Line through segment OQ: y = y_O + ((y_Q - y_O)/(x_Q - x_O))·(x - x_O)   (5)
Line through segment NP: y = y_N + ((y_P - y_N)/(x_P - x_N))·(x - x_N)   (6)
The intersection point of line segment OQ and NP is the system of equations that solution straight-line equation (5) and (6) forms obtains quadrilateral ABCD center of gravity and is in like manner, the center of gravity of quadrilateral EFGH, IJKL is tried to achieve according to the method for step 2 and step 3
Step 4: find the optimal sound source position
The optimal sound source position S(x_S, y_S) has coordinates x_S = (x_R1 + x_R2 + x_R3)/3, y_S = (y_R1 + y_R2 + y_R3)/3, which represents the position of the real sound source.
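Steps 2 to 4 can be sketched end to end. This is a simplified model, not the patent's implementation: direction angles are assumed to be expressed directly in the world frame (the patent measures them against each sensor's reference line), and the function and variable names are invented for illustration.

```python
import math

ALPHA = 5.0  # deviation half-angle in degrees (Step 1)

def line(point, angle_deg):
    """Line through `point` with direction `angle_deg`, as (a, b, c): a*x + b*y = c."""
    a = math.sin(math.radians(angle_deg))
    b = -math.cos(math.radians(angle_deg))
    return a, b, a * point[0] + b * point[1]

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def tri_centroid(p, q, r):
    return ((p[0] + q[0] + r[0]) / 3.0, (p[1] + q[1] + r[1]) / 3.0)

def quad_centroid(A, B, C, D):
    # Fig. 3 construction: intersect the two triangle-centroid segments.
    N, O = tri_centroid(D, A, B), tri_centroid(A, B, C)
    P, Q = tri_centroid(B, C, D), tri_centroid(C, D, A)
    def through(p, q):
        a, b = q[1] - p[1], p[0] - q[0]
        return a, b, a * p[0] + b * p[1]
    return intersect(through(O, Q), through(N, P))

def pair_region_centroid(s1, dir1, s2, dir2):
    """Centroid of the quadrilateral cut out by the +/-ALPHA boundary
    lines of two sensors (the Step 2 intersections A, B, C, D, then Step 3)."""
    p1, m1 = line(s1, dir1 + ALPHA), line(s1, dir1 - ALPHA)
    p2, m2 = line(s2, dir2 + ALPHA), line(s2, dir2 - ALPHA)
    A, B = intersect(p1, m2), intersect(m1, m2)
    C, D = intersect(m1, p2), intersect(p1, p2)
    return quad_centroid(A, B, C, D)

def locate(sensors, dirs):
    """Step 4: average the three pairwise region centroids R1, R2, R3."""
    cs = [pair_region_centroid(sensors[i], dirs[i], sensors[j], dirs[j])
          for i, j in [(0, 1), (1, 2), (0, 2)]]
    return (sum(c[0] for c in cs) / 3.0, sum(c[1] for c in cs) / 3.0)
```

With exact bearings the estimate lands close to the true source; the small residual comes from the deviation sectors widening with distance, which skews each quadrilateral's centroid slightly.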
3. Robot Kinect sound localization navigation algorithm
Referring to Fig. 6, the robot carries a sound source and emits sound into the space. At some moment the robot is at W(x_W, y_W) and its target position is V(x_V, y_V). The robot is equipped with a magnetic compass and can measure the angle between its own y1 axis and north; without loss of generality, let δ be the angle between the coordinate axis y and north.
φ denotes the angle by which the robot deviates from its theoretical walking path. If the robot's entire control system and wheel structure were error-free, a single "go straight" command would make the robot walk a fixed distance and arrive at the target V(x_V, y_V); the theoretical route is the solid line WV in the figure. In practice, given the same command, wheel structure errors always make the robot drift from its original heading, so the actual path follows the dashed line from W.
ε denotes the radius of the error circle, centered on the target point V(x_V, y_V), within which the robot is considered to have arrived. Because of various practical influences, such as the size of the robot itself, the position finally reached does not necessarily coincide exactly with V(x_V, y_V), so an uncertainty circle is defined according to the required accuracy: as long as the robot enters the circle of radius ε centered on V(x_V, y_V), it is considered to have reached the target point. The steps of the Kinect sound localization navigation algorithm are as follows:
Step 1:
From the robot's current position W(x_W, y_W) and the target position V(x_V, y_V), the angle θ between line WV and the positive x axis and the length d of segment WV can be calculated. To move toward V(x_V, y_V), the robot adjusts the angle between its y1 axis and north to 90° - θ - δ, then starts walking from W(x_W, y_W).
Step 2:
The robot emits a 50 ms sound burst every 1 s. If at some moment the robot is at U(x_U, y_U), the computer processes the source direction information obtained by the three Kinect sensors to obtain the robot's current position U(x_U, y_U) and sends this position to the robot wirelessly.
Step 3:
The robot calculates the deviation angle φ between line WU, from its starting point W(x_W, y_W) to its current position U(x_U, y_U), and line WV, and automatically adjusts its y1 axis to move in the direction that reduces φ.
Step 4:
Steps 2 and 3 are repeated until the current position obtained by the computer from the source direction information of the three Kinect sensors lies within the uncertainty circle of radius ε centered on the target point V(x_V, y_V); the robot is then considered to have reached the target position, completing the navigation task from starting point W(x_W, y_W) to target point V(x_V, y_V).
Step 5:
For the navigation task from point V(x_V, y_V) to the next target point Z(x_Z, y_Z), Steps 1 to 4 are repeated.
By repeating the above steps continuously, Kinect-based sound localization realizes continuous navigation of the robot, demonstrating that the above Kinect sound localization algorithm is correct and effective.
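The geometric quantities used in Steps 1 to 4 above can be sketched as follows. The helper names are illustrative, and the angle conventions follow the text: θ is measured against the +x axis, δ is the y-axis-to-north angle, and φ is the signed deviation of WU from WV.

```python
import math

def initial_heading(W, V, delta_deg):
    """Step 1: theta = angle of line WV with the +x axis, d = |WV|, and the
    commanded angle between the robot's y1 axis and north, 90 - theta - delta."""
    theta = math.degrees(math.atan2(V[1] - W[1], V[0] - W[0]))
    d = math.hypot(V[0] - W[0], V[1] - W[1])
    return theta, d, 90.0 - theta - delta_deg

def deviation_angle(W, U, V):
    """Step 3: signed angle phi between WU (actual path) and WV (intended path)."""
    a = math.atan2(U[1] - W[1], U[0] - W[0])
    b = math.atan2(V[1] - W[1], V[0] - W[0])
    # wrap into (-180, 180] so the robot turns the short way
    return (math.degrees(a - b) + 180.0) % 360.0 - 180.0

def arrived(U, V, eps):
    """Step 4: the robot counts as arrived once it enters the uncertainty
    circle of radius eps centered on the target point V."""
    return math.hypot(U[0] - V[0], U[1] - V[1]) <= eps
```

In a control loop, `deviation_angle` would be re-evaluated on every localization update and the heading nudged toward zero deviation, with `arrived` as the loop's exit condition.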
Fig. 7 is a block diagram of the robot's overall sound localization navigation control. A PC is connected to the three Kinect sensors and to a wireless data transmission module used for communication between the PC and the robot. The robot comprises a core processor, a motor drive, a magnetic compass, a wireless data transmission module, and a sound source module. The core processor handles the robot's motion control, communication with the PC, and sensor data acquisition; the motor drive amplifies power for the robot's motors; the magnetic compass measures the robot's angle relative to geographic north; and the sound source module emits the sound signal so that the three Kinect sensors can localize the robot by sound.
The foregoing are only preferred embodiments of the invention. On this basis those skilled in the art can make variations which, insofar as they do not depart from the idea of the invention, also fall within its scope of protection.

Claims (2)

1. A robot sound localization method, characterized by comprising:
using at least two Kinects as sound sensors to obtain the direction angles of the sound emitted by the robot;
determining, from the direction angles and the positions of the at least two Kinects, the deviation sector regions in which the sound source lies for every pair of Kinects, the range of the sound source direction deviation detected by a Kinect sensor being [-α, α], called a deviation sector, the sound source region determined by two Kinect sensors being a deviation sector region;
determining the geometric center of gravity of the intersection region of every two deviation sector regions;
calculating the optimal sound source position from the determined centers of gravity by the geometric center-of-gravity method, the optimal sound source position being the located position of the robot.
2. The robot sound localization method according to claim 1, characterized in that, after the optimal sound source position is calculated by the geometric center-of-gravity method, the method further comprises:
determining the route of the robot's movement from the target position and the optimal sound source position;
controlling the robot to move along the route to the target position.
CN201310455238.9A 2013-09-29 2013-09-29 Robot sound positioning method Expired - Fee Related CN103472434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310455238.9A CN103472434B (en) 2013-09-29 2013-09-29 Robot sound positioning method


Publications (2)

Publication Number Publication Date
CN103472434A CN103472434A (en) 2013-12-25
CN103472434B true CN103472434B (en) 2015-05-20

Family

ID=49797354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310455238.9A Expired - Fee Related CN103472434B (en) 2013-09-29 2013-09-29 Robot sound positioning method

Country Status (1)

Country Link
CN (1) CN103472434B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163209A (en) 2015-08-31 2015-12-16 深圳前海达闼科技有限公司 Voice receiving processing method and voice receiving processing device
US10028051B2 (en) * 2015-08-31 2018-07-17 Panasonic Intellectual Property Management Co., Ltd. Sound source localization apparatus
CN106291469B (en) * 2016-10-18 2018-11-23 武汉轻工大学 A kind of three-dimensional space source of sound localization method and system
CN108579057A (en) * 2018-04-27 2018-09-28 长沙修恒信息科技有限公司 A kind of football positioning ball test method
CN108525259B (en) * 2018-04-27 2020-11-27 湖南环境生物职业技术学院 System for be used for football location ball to test
CN109270491A (en) * 2018-08-17 2019-01-25 安徽信息工程学院 Indoor acoustic location device based on Kinect
CN110288984A (en) * 2019-05-17 2019-09-27 南昌大学 A kind of audio recognition method based on Kinect

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE376892T1 (en) * 1999-09-29 2007-11-15 1 Ltd METHOD AND APPARATUS FOR ALIGNING SOUND WITH A GROUP OF EMISSION TRANSDUCERS
CN201210187Y (en) * 2008-06-13 2009-03-18 河北工业大学 Robot automatically searching sound source
CN102707262A (en) * 2012-06-20 2012-10-03 太仓博天网络科技有限公司 Sound localization system based on microphone array

Also Published As

Publication number Publication date
CN103472434A (en) 2013-12-25

Similar Documents

Publication Publication Date Title
CN103472434B (en) Robot sound positioning method
US11865708B2 (en) Domestic robotic system
CN103353758B (en) A kind of Indoor Robot navigation method
CN108290294B (en) Mobile robot and control method thereof
CN106774301B (en) Obstacle avoidance following method and electronic equipment
Park et al. Autonomous mobile robot navigation using passive RFID in indoor environment
Lingemann et al. High-speed laser localization for mobile robots
CN107992052A (en) Method for tracking target and device, mobile equipment and storage medium
JP5310285B2 (en) Self-position estimation apparatus and self-position estimation method
RU2740229C1 (en) Method of localizing and constructing navigation maps of mobile service robot
CN110986920B (en) Positioning navigation method, device, equipment and storage medium
JP2012137909A (en) Movable body remote control system and control program for the same
WO2013049597A1 (en) Method and system for three dimensional mapping of an environment
JP5902275B1 (en) Autonomous mobile device
Ross et al. Toward refocused optical mouse sensors for outdoor optical flow odometry
CN103389486A (en) Control method and electronic device
JP2009237851A (en) Mobile object control system
TW201831920A (en) Auto moving device
TWI439671B (en) Map building system, building method and computer readable media thereof
Glas et al. SNAPCAT-3D: Calibrating networks of 3D range sensors for pedestrian tracking
CN117685967A (en) Multi-mode fusion navigation method
KR100784125B1 (en) Method for extracting coordinates of landmark of mobile robot with a single camera
WO2023274270A1 (en) Robot preoperative navigation method and system, storage medium, and computer device
US11865724B2 (en) Movement control method, mobile machine and non-transitory computer readable storage medium
CN114995459A (en) Robot control method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201126

Address after: Area A129, 4th floor, building 4, Baitai Industrial Park, Yazhou Bay science and Technology City, Yazhou District, Sanya City, Hainan Province, 572024

Patentee after: Nanhai innovation and development base of Sanya Harbin Engineering University

Address before: 150001 Heilongjiang, Nangang District, Nantong street,, Harbin Engineering University, Department of Intellectual Property Office

Patentee before: HARBIN ENGINEERING University

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150520

Termination date: 20210929

CF01 Termination of patent right due to non-payment of annual fee