CN105182994B - Method for fixed-point landing of an unmanned aerial vehicle - Google Patents

Method for fixed-point landing of an unmanned aerial vehicle

Info

Publication number
CN105182994B
CN105182994B (application CN201510485633.0A)
Authority
CN
China
Prior art keywords
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510485633.0A
Other languages
Chinese (zh)
Other versions
CN105182994A (en)
Inventor
黄立
王宇炫
王效杰
李蔚
顾兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Puzhou Technology (Shenzhen) Co.,Ltd.
Original Assignee
Universal Aircraft Technology (shenzhen) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universal Aircraft Technology (shenzhen) Co Ltd filed Critical Universal Aircraft Technology (shenzhen) Co Ltd
Priority to CN201510485633.0A priority Critical patent/CN105182994B/en
Publication of CN105182994A publication Critical patent/CN105182994A/en
Application granted granted Critical
Publication of CN105182994B publication Critical patent/CN105182994B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for fixed-point landing of an unmanned aerial vehicle (UAV), a high-precision landing method combining GPS with computer vision: first, the UAV's return hover altitude is calculated; second, the GPS coordinates of the return hover point and the return command are uploaded, so that the UAV begins its return flight; third, an intelligent algorithm precisely adjusts the UAV's horizontal position until it is directly above the target landing point; finally, a vertical landing command is issued and the UAV descends to the ground.

Description

Method for fixed-point landing of an unmanned aerial vehicle
Technical field
The present invention relates to a method for fixed-point landing of an unmanned aerial vehicle (UAV).
Background technology
In recent years, unmanned aircraft have been very widely used in aerial photography. Obtaining the aircraft's geographical coordinates relies primarily on the Global Positioning System (GPS).
GPS was developed from the U.S. military's "Meridian" satellite navigation system of the early 1970s and provides global, all-purpose, all-weather navigation, positioning, timing, and measurement. GPS typically uses measurement data from 4 satellites to calculate the position of a mobile receiver; under good weather conditions, the accuracy of single-point positioning ranges between 5-40 m.
Considering that a mobile phone APP is used to control the UAV's landing, in rough environments the landing-point accuracy must be controlled to the decimeter level; relying on GPS positioning alone cannot meet the needs of practical application.
Summary of the invention
The object of the present invention is to provide a method for fixed-point landing of a UAV. The method of the invention combines GPS with computer vision technology and can control the UAV's landing position accuracy to within 10 cm.
The solution adopted by the present invention is:
A method for fixed-point landing of a UAV, characterized by comprising the following steps:
Step 1, calculate the UAV's return hover altitude: obtain the phone camera's field-of-view angle parameter; determine the worst error of the GPS positioning accuracy; calculate the hover altitude from these two parameters (field-of-view angle and GPS worst error).
Step 2, upload the return hover point coordinates and the return command: the APP obtains the phone's GPS coordinates; the coordinates are modified so that longitude and latitude remain unchanged while the altitude is set to the hover altitude calculated in step 1; the GPS coordinates are uploaded to the UAV as the return hover point; the return-and-land command is uploaded and the UAV begins its return flight.
Step 3, precisely adjust the UAV's horizontal position: model the background with a Gaussian mixture modeling algorithm to obtain background frames; judge each image frame, treating a pixel as background if it matches the background model and as a target pixel otherwise; obtain the target's position in the image, calculate the target's horizontal deviation relative to the phone, and steer the UAV toward the center point accordingly; repeat the correction until the UAV is directly above the phone and enters a hovering state.
Step 4, vertical landing: confirm that the UAV has reached the target state; the APP uploads the landing command and the UAV lands vertically.
The beneficial effects of the present invention are: unlike methods that rely on GPS positioning alone, the method provided by the invention is free of the constraints of objective conditions such as weather, and its fixed-point landing has higher accuracy, reliability, and safety.
Brief description of the drawings
Fig. 1 is a flow chart of the high-precision fixed-point landing method provided by an embodiment of the present invention.
Embodiment
With reference to the drawings and the specific embodiments, the present invention is described in further detail.
As shown in Fig. 1, the embodiment is as follows:
1. Calculate the hover altitude
Obtain the phone camera's field-of-view angle parameter θ and determine the worst error r of the GPS positioning accuracy; calculate the hover altitude H from the two parameters θ and r, with the formula:
where θ represents the field-of-view angle and r represents the worst error of the GPS positioning accuracy.
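The formula for H appears only as an image in the original publication and is not reproduced in this text. The relation below is therefore an assumption inferred from the stated geometry (the camera's ground footprint at height H must cover a circle of radius equal to the GPS worst error r), not the patent's own equation; a minimal sketch:

```python
import math

def hover_altitude(theta_deg: float, r_m: float) -> float:
    """Hover altitude H for a downward-facing camera with full field-of-view
    angle theta (degrees), so that its ground footprint covers radius r (m).

    Assumed relation (the patent's formula is not reproduced in the text):
    H = r / tan(theta / 2).
    """
    half_angle = math.radians(theta_deg) / 2.0
    return r_m / math.tan(half_angle)

# Example: a 60-degree field of view and a 10 m GPS worst error
H = hover_altitude(60.0, 10.0)  # ≈ 17.32 m
```

With a 90-degree field of view, H equals r exactly, since tan(45°) = 1.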
2. Issue the return command
Obtain the phone's current GPS longitude and latitude coordinates, set the altitude to the value calculated in step 1, send them to the UAV, and send the landing command. Lay the phone flat on the ground with the camera facing up.
3. Precisely adjust the UAV's horizontal position
After receiving the GPS coordinates and the landing command, the UAV flies to altitude H near the landing point, and the phone enters the camera's field of view.
(1) background modeling
The background is modeled with Gaussians using the three RGB color components; each image frame I(X, t) is then expressed as:
I(X, t) = {I_R(X, t), I_G(X, t), I_B(X, t)}
where X = (x, y) denotes a pixel and t the time instant;
Each of the K states is represented by a Gaussian function. At time t, with X_t denoting a pixel, the probability density function P(X_t = x) of that pixel is represented by a linear combination of the K Gaussian states:
P(X_t = x) = Σ_{i=1}^{K} ω_{i,t} η(x; μ_{i,t}, Σ_{i,t}),   with Σ_{i=1}^{K} ω_{i,t} = 1
where X_t denotes the pixel, ω_i the weight of the i-th Gaussian, μ_i and Σ_i the mean and covariance of the i-th Gaussian, and η(x; μ_i, Σ_i) the i-th Gaussian at time t.
Wherein, η (x;μii) formula is:
Wherein, n is the dimension for the image that needs are filtered, and T is threshold value, typically takes 0.75.
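Evaluating the density η directly is straightforward; the numpy sketch below (the function name and the 1-D sanity check are illustrative, not from the patent) computes it for an n-dimensional pixel value:

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Multivariate Gaussian eta(x; mu, Sigma) for an n-dimensional pixel
    value (n = 3 for the RGB components used here)."""
    x = np.asarray(x, dtype=float)
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n = x.size
    diff = x - mu
    # Normalizing constant (2*pi)^(n/2) * |Sigma|^(1/2)
    norm = (2.0 * np.pi) ** (n / 2.0) * np.sqrt(np.linalg.det(sigma))
    # Quadratic form -(1/2) (x - mu)^T Sigma^{-1} (x - mu)
    exponent = -0.5 * diff @ np.linalg.inv(sigma) @ diff
    return float(np.exp(exponent) / norm)

# 1-D sanity check: the standard normal density at 0 is 1/sqrt(2*pi)
p0 = gaussian_density([0.0], [0.0], [[1.0]])
```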
The mixture Gaussian model is initialized by computing, over a period of the video sequence, each pixel's average gray value μ_0 and variance σ_0²:
μ_0 = (1/N) Σ_{t=0}^{N−1} X_t
σ_0² = (1/N) Σ_{t=0}^{N−1} (X_t − μ_0)²
where μ_0 is the average gray value, σ_0² the variance, and N the pixel quantity.
For video frames with a bit depth of 8, each pixel value ranges over 0-255, so the parameters of the mixture Gaussian model are initialized with the simplified formulas:
ω = 1/K
μ = 255 × rand,  rand ∈ [0, 1)
The variance is set to a larger value, with initial variance 36.
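A minimal numpy sketch of this simplified initialization (the per-pixel array layout and function name are assumptions; the constants ω = 1/K, μ = 255 · rand, and initial variance 36 come from the text above):

```python
import numpy as np

def init_mixture(height, width, K=3, init_var=36.0, seed=0):
    """Per-pixel K-state Gaussian mixture for 8-bit video: equal weights 1/K,
    means drawn as 255 * rand with rand in [0, 1), and initial variance 36."""
    rng = np.random.default_rng(seed)
    weights = np.full((height, width, K), 1.0 / K)
    means = 255.0 * rng.random((height, width, K))  # mu = 255 * rand
    variances = np.full((height, width, K), init_var)
    return weights, means, variances

w, m, v = init_mixture(4, 4, K=3)
```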
(2) Match pixel points
Next, each pixel in the image is judged to see whether it matches the established background model: if the pixel matches, it is treated as a background pixel; otherwise it is a target pixel. A pixel matches the Gaussian model of the k-th state when:
|x_t − μ_{i,t−1}| ≤ D × σ_{i,t−1}
where x_t denotes the pixel value, μ_{i,t−1} the mean of the i-th Gaussian at time t−1, σ_{i,t−1} the standard deviation of the i-th Gaussian at time t−1, and D a custom parameter with value 2.5.
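The matching test reduces to a per-pixel threshold; a sketch in scalar (per-channel) form, with illustrative names:

```python
import numpy as np

def matches_background(x, mu, var, D=2.5):
    """True where pixel value x lies within D standard deviations of the
    background Gaussian mean (D = 2.5 as in the text)."""
    return np.abs(x - mu) <= D * np.sqrt(var)

# Against N(96, 36): |100 - 96| = 4 <= 2.5 * 6 = 15, so 100 matches; 120 does not
near = matches_background(100.0, 96.0, 36.0)
far = matches_background(120.0, 96.0, 36.0)
```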
If the pixel successfully matches a background Gaussian, the update formulas are:
ω_{k,t} = (1 − α) ω_{k,t−1} + α
p = α / ω_{k,t}
μ_{k,t} = (1 − p) μ_{k,t−1} + p x_{t−1}
Σ_{k,t} = (1 − p) Σ_{k,t−1} + p (x_{t−1} − μ_{k,t})^T (x_{t−1} − μ_{k,t})
where α denotes the background update rate (0 < α < 1), p the pixel's probability density, ω_{k,t} the weight and μ_{k,t} the mean of the k-th Gaussian at time t, Σ_{k,t} the variance of the k-th Gaussian at time t, and T a threshold, typically 0.75. The larger α is, the faster the background updates, and vice versa. If the pixel matches none of the background Gaussians, a new Gaussian replaces the one with the smallest weight: a larger variance is reinitialized, the mean is kept unchanged, and the weight is recalculated by the formula:
ω_{k,t} = (1 − α) ω_{k,t−1}
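The matched-state update can be sketched in scalar (per-channel) form. The function below follows the patent's four update formulas, computing the new weight first so that p = α/ω_{k,t} uses the updated weight; the function name and example values are illustrative:

```python
def update_matched(w, mu, var, x, alpha=0.05):
    """Update the matched Gaussian state:
    w   <- (1 - alpha) * w + alpha
    p    = alpha / w                 (uses the updated weight)
    mu  <- (1 - p) * mu + p * x
    var <- (1 - p) * var + p * (x - mu_new)**2
    """
    w = (1.0 - alpha) * w + alpha
    p = alpha / w
    mu_new = (1.0 - p) * mu + p * x
    var = (1.0 - p) * var + p * (x - mu_new) ** 2
    return w, mu_new, var

w1, mu1, var1 = update_matched(0.5, 100.0, 36.0, 110.0, alpha=0.5)
```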
(3) Update the background model in real time
In practical applications, the background environment in the video is not fixed: background pixel values change with flickering light or movement of the camera position. Updating the background information in real time adapts the model to the surroundings and detects the target robustly and in time. The update formula for the background information is:
B_{t+1} = (1 − α) B_t + α I_t
where α is a constant, generally set to 0.1, B_t denotes the background image gray value at time t, and I_t the image at time t.
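The running-average update B_{t+1} = (1 − α) B_t + α I_t is a one-liner when applied over whole frames; a sketch:

```python
import numpy as np

def update_background(B, I, alpha=0.1):
    """Blend the current frame I into the background B:
    B_{t+1} = (1 - alpha) * B_t + alpha * I_t."""
    B = np.asarray(B, dtype=float)
    I = np.asarray(I, dtype=float)
    return (1.0 - alpha) * B + alpha * I

B0 = np.full((2, 2), 100.0)
I0 = np.full((2, 2), 200.0)
B1 = update_background(B0, I0)  # every pixel becomes 110.0
```

With α = 0.1 each new frame contributes 10% per step, so the background converges toward a static scene geometrically.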
4. Vertical landing
The APP sends the landing command to the UAV, and the UAV lands vertically.

Claims (2)

  1. A method for fixed-point landing of a UAV, characterized by comprising the following steps:
    Step 1, calculate the UAV's return hover altitude: obtain the phone camera's field-of-view angle parameter and determine the worst error of the GPS positioning accuracy; calculate the hover altitude H from these two parameters (the field-of-view angle and the GPS worst error) by the formula: where θ represents the field-of-view angle and r represents the worst error of the GPS positioning accuracy. Step 2, upload the return hover point coordinates and the return command: the APP obtains the phone's GPS coordinates and modifies them so that longitude and latitude remain unchanged while the altitude is set to the hover altitude calculated in step 1; the GPS coordinates are uploaded to the UAV as the return hover point; the return-and-land command is uploaded and the UAV begins its return flight. Step 3, precisely adjust the UAV's horizontal position: model the background with a Gaussian mixture modeling algorithm to obtain background frames; judge each image frame, treating a pixel as background if it matches the background model and as a target pixel otherwise; obtain the target's position in the image, calculate the target's horizontal deviation relative to the phone, and steer the UAV toward the center point accordingly; repeat the correction until the UAV is directly above the phone and it enters a hovering state. Step 4, vertical landing: confirm that the UAV has reached the target state; the APP uploads the landing command and the UAV lands vertically.
  2. The method for fixed-point landing of a UAV according to claim 1, characterized in that the specific method of step 3, precisely adjusting the UAV's horizontal position, is:
    After receiving the GPS coordinates and the landing command, the UAV flies to altitude H near the landing point, and the phone enters the camera's field of view;
    (1) background modeling
    The background is modeled with Gaussians using the three RGB color components; each image frame I(X, t) is then expressed as:
    I(X, t) = {I_R(X, t), I_G(X, t), I_B(X, t)}
    where X = (x, y) denotes a pixel and t the time instant;
    Each of the K states is represented by a Gaussian function; at time t, with X_t denoting a pixel, the probability density function P(X_t = x) of that pixel is represented by a linear combination of the K Gaussian states:
    P(X_t = x) = Σ_{i=1}^{K} ω_{i,t} η(x; μ_{i,t}, Σ_{i,t}),   with Σ_{i=1}^{K} ω_{i,t} = 1
    where X_t denotes the pixel, ω_i the weight of the i-th Gaussian, μ_i and Σ_i the mean and covariance of the i-th Gaussian, and η(x; μ_i, Σ_i) the i-th Gaussian at time t;
    η(x; μ_{i,t}, Σ_{i,t}) is given by:
    η(x; μ_{i,t}, Σ_{i,t}) = 1 / ((2π)^{n/2} |Σ_{i,t}|^{1/2}) · exp( −(1/2) (x − μ_{i,t})^T Σ_{i,t}^{−1} (x − μ_{i,t}) )
    where n is the dimension of the image to be filtered and T is a threshold, typically 0.75;
    The mixture Gaussian model is initialized by computing, over a period of the video sequence, each pixel's average gray value μ_0 and variance σ_0²:
    σ_0² = (1/N) Σ_{t=0}^{N−1} (X_t − μ_0)²
    μ_0 = (1/N) Σ_{t=0}^{N−1} X_t
    where μ_0 is the average gray value, σ_0² the variance, and N the pixel quantity;
    For video frames with a bit depth of 8, each pixel value ranges over 0-255, so the parameters of the mixture Gaussian model are initialized with the simplified formulas:
    ω = 1/K
    μ = 255 × rand
    The variance is set to a larger value, with initial variance 36, and rand ∈ [0, 1);
    (2) Match pixel points
    Next, each pixel in the image is judged to see whether it matches the established background model: if the pixel matches, it is treated as a background pixel; otherwise it is a target pixel. A pixel matches the Gaussian model of the k-th state when:
    |x_t − μ_{i,t−1}| ≤ D × σ_{i,t−1}
    where x_t denotes the pixel value, μ_{i,t−1} the mean of the i-th Gaussian at time t−1, σ_{i,t−1} the standard deviation of the i-th Gaussian at time t−1, and D a custom parameter with value 2.5;
    If the pixel successfully matches a background Gaussian, the update formulas are:
    ω_{k,t} = (1 − α) ω_{k,t−1} + α
    μ_{k,t} = (1 − p) μ_{k,t−1} + p x_{t−1}
    Σ_{k,t} = (1 − p) Σ_{k,t−1} + p (x_{t−1} − μ_{k,t})^T (x_{t−1} − μ_{k,t})
    p = α / ω_{k,t}
    where α denotes the background update rate (0 < α < 1), p the pixel's probability density, ω_{k,t} the weight and μ_{k,t} the mean of the k-th Gaussian at time t, Σ_{k,t} the variance of the k-th Gaussian at time t, and T a threshold, typically 0.75; the larger α is, the faster the background updates, and vice versa. If the pixel matches none of the background Gaussians, a new Gaussian replaces the one with the smallest weight: a larger variance is reinitialized, the mean is kept unchanged, and the weight is recalculated by the formula:
    ω_{k,t} = (1 − α) ω_{k,t−1}
    (3) Update the background model in real time
    In practical applications, the background environment in the video is not fixed: background pixel values change with flickering light or movement of the camera position. Updating the background information in real time adapts the model to the surroundings and detects the target robustly and in time. The update formula for the background information is:
    B_{t+1} = (1 − α) B_t + α I_t
    where α is a constant, generally set to 0.1, B_t denotes the background image gray value at time t, and I_t the image at time t.
CN201510485633.0A 2015-08-10 2015-08-10 Method for fixed-point landing of an unmanned aerial vehicle Active CN105182994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510485633.0A CN105182994B (en) 2015-08-10 2015-08-10 Method for fixed-point landing of an unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510485633.0A CN105182994B (en) 2015-08-10 2015-08-10 Method for fixed-point landing of an unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN105182994A CN105182994A (en) 2015-12-23
CN105182994B true CN105182994B (en) 2018-02-06

Family

ID=54905133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510485633.0A Active CN105182994B (en) 2015-08-10 2015-08-10 Method for fixed-point landing of an unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN105182994B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105629996A (en) * 2016-03-22 2016-06-01 昆明天龙经纬电子科技有限公司 Unmanned aerial vehicle fixed-point landing guiding method and system
CN105867423A (en) * 2016-06-08 2016-08-17 杨珊珊 Course reversal method and course reversal system of unmanned aerial vehicle and unmanned aerial vehicle
CN106406351B (en) * 2016-10-28 2020-01-14 易瓦特科技股份公司 Method and apparatus for controlling a flight path of an unmanned aerial vehicle
CN107018522B (en) * 2017-02-27 2020-05-26 东华大学 Positioning method of unmanned aerial vehicle ground base station based on multi-information fusion
CN106774423B (en) * 2017-02-28 2020-08-11 亿航智能设备(广州)有限公司 Landing method and system of unmanned aerial vehicle
CN106909162A (en) * 2017-04-21 2017-06-30 普宙飞行器科技(深圳)有限公司 A kind of vehicle-mounted Autonomous landing device of universal unmanned plane
CN108475070B (en) 2017-04-28 2021-11-30 深圳市大疆创新科技有限公司 Control method and control equipment for palm landing of unmanned aerial vehicle and unmanned aerial vehicle
CN113741543A (en) * 2017-06-12 2021-12-03 深圳市大疆创新科技有限公司 Unmanned aerial vehicle, return control method, terminal, system and machine readable storage medium
CN107300486A (en) * 2017-08-11 2017-10-27 上海拓攻机器人有限公司 A kind of water quality sampling method and system based on unmanned plane
CN107977985B (en) * 2017-11-29 2021-02-09 上海拓攻机器人有限公司 Unmanned aerial vehicle hovering method and device, unmanned aerial vehicle and storage medium
CN108181922A (en) * 2017-12-01 2018-06-19 北京臻迪科技股份有限公司 Unmanned plane landing control method, apparatus and system
WO2020006658A1 (en) 2018-07-02 2020-01-09 深圳市大疆创新科技有限公司 Unmanned aerial vehicle return control method and device, and unmanned aerial vehicle
CN111742276A (en) * 2019-05-29 2020-10-02 深圳市大疆创新科技有限公司 Unmanned aerial vehicle return method and equipment, unmanned aerial vehicle and storage medium
US11670181B2 (en) 2020-01-22 2023-06-06 Honeywell International Inc. Systems and methods for aiding landing of vertical takeoff and landing vehicle
CN114818546B (en) * 2022-05-24 2024-08-16 重庆大学 Unmanned aerial vehicle hovering wind resistance performance double-dimension evaluation method based on error sequencing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156481A (en) * 2011-01-24 2011-08-17 广州嘉崎智能科技有限公司 Intelligent tracking control method and system for unmanned aircraft
CN102538782A (en) * 2012-01-04 2012-07-04 浙江大学 Helicopter landing guide device and method based on computer vision
CN104361770A (en) * 2014-11-18 2015-02-18 武汉理工大学 Precise landing automatic control method for traffic information collecting unmanned aerial vehicle
CN104808674A (en) * 2015-03-03 2015-07-29 广州亿航智能技术有限公司 Multi-rotor aircraft control system, terminal and airborne flight control system
CN104808675A (en) * 2015-03-03 2015-07-29 广州亿航智能技术有限公司 Intelligent terminal-based somatosensory flight operation and control system and terminal equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9070083B2 (en) * 2011-12-13 2015-06-30 Iucf-Hyu Industry-University Cooperation Foundation Hanyang University Method for learning task skill and robot using thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156481A (en) * 2011-01-24 2011-08-17 广州嘉崎智能科技有限公司 Intelligent tracking control method and system for unmanned aircraft
CN102538782A (en) * 2012-01-04 2012-07-04 浙江大学 Helicopter landing guide device and method based on computer vision
CN104361770A (en) * 2014-11-18 2015-02-18 武汉理工大学 Precise landing automatic control method for traffic information collecting unmanned aerial vehicle
CN104808674A (en) * 2015-03-03 2015-07-29 广州亿航智能技术有限公司 Multi-rotor aircraft control system, terminal and airborne flight control system
CN104808675A (en) * 2015-03-03 2015-07-29 广州亿航智能技术有限公司 Intelligent terminal-based somatosensory flight operation and control system and terminal equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Background Modeling Technology; Liu Yali; China Master's Theses Full-text Database; 2010-08-15 (No. 8); main text, chapters 2-4 *

Also Published As

Publication number Publication date
CN105182994A (en) 2015-12-23

Similar Documents

Publication Publication Date Title
CN105182994B (en) Method for fixed-point landing of an unmanned aerial vehicle
CN111024072B (en) Satellite map aided navigation positioning method based on deep learning
CN106373159A (en) Simplified unmanned aerial vehicle multi-target location method
CN108225307B (en) Inertia measurement information assisted star map matching method
CN103383773A (en) Automatic ortho-rectification frame and method for dynamically extracting remote sensing satellite image of image control points
CN110070025A (en) Objective detection system and method based on monocular image
CN115451948B (en) Agricultural unmanned vehicle positioning odometer method and system based on multi-sensor fusion
CN109540113B (en) Total station and star map identification method thereof
CN105046251A (en) Automatic ortho-rectification method based on remote-sensing image of environmental No.1 satellite
CN108955682A (en) Mobile phone indoor positioning air navigation aid
EP2472471A2 (en) System and method for automatically aligning a telescope without requiring user intervention
CN109863547A (en) The equipment for constructing map for using machine learning and image procossing
CN106454108B (en) Track up method, apparatus and electronic equipment based on artificial intelligence
CN109613926A (en) Multi-rotor unmanned aerial vehicle land automatically it is High Precision Automatic identification drop zone method
US10724871B2 (en) Camera-based heading-hold navigation
CN114326771A (en) Unmanned aerial vehicle shooting route generation method and system based on image recognition
CN111595332B (en) Full-environment positioning method integrating inertial technology and visual modeling
CN115618749B (en) Error compensation method for real-time positioning of large unmanned aerial vehicle
CN103530628B (en) High-resolution remote sensing image ortho-rectification method based on floating control point
CN118018112B (en) Star-earth laser communication task capability analysis method, device, system and medium
CN109579829A (en) A kind of small field of view star sensor shortwave nautical star recognition methods
CN107458619A (en) A kind of rotor Autonomous landing of full-automatic microminiature four and the method and system of charging
CN114820768B (en) Method for aligning geodetic coordinate system and slam coordinate system
Kaiser et al. Position and orientation of an aerial vehicle through chained, vision-based pose reconstruction
KR102542596B1 (en) Method and device for making a topographic map

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518066 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee after: Puzhou Technology (Shenzhen) Co.,Ltd.

Address before: 518000 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Patentee before: PRODRONE TECHNOLOGY (SHENZHEN) Co.,Ltd.