TWI812102B - Method for two unmanned vehicles cooperatively navigating and system thereof - Google Patents


Publication number
TWI812102B
TWI812102B
Authority
TW
Taiwan
Prior art keywords
unmanned vehicle
image
resolution
path
computer
Prior art date
Application number
TW111110880A
Other languages
Chinese (zh)
Other versions
TW202338301A (en)
Inventor
張保榮
呂炯霖
Original Assignee
國立高雄大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立高雄大學
Priority to TW111110880A
Application granted
Publication of TWI812102B
Publication of TW202338301A

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for cooperative navigation by two unmanned vehicles is provided to overcome the poor route-planning efficiency of a single unmanned ground vehicle in conventional approaches. The method of the present invention includes the following steps. A computer receives a map image having a first resolution. Following a preset resolution-adjustment rule, the computer decreases the first resolution to a second resolution; under this rule, the reduced second resolution is positively correlated with the capture height of the map image. The computer then receives start-point data and end-point data, and analyzes them together with the map image to generate a route image containing a route trajectory that includes the start and end points.

Description

Method for two unmanned vehicles cooperatively navigating and system thereof

The present invention relates to a navigation method and system, and in particular to a method and system for cooperative navigation of two unmanned vehicles.

In the prior art, when an unmanned ground vehicle (UGV) is used to cross obstacles or survey the environment in a field, particularly one whose terrain and obstacles are unknown, the vehicle must carry out a detailed path exploration of the field to obtain the corresponding path plan and environmental information. Because the vehicle must physically traverse or photograph a substantial portion of the field before map data or a model of the field can be built and a path planned, this exploration process consumes a considerable amount of time, and neither path planning nor environment detection at a specific destination can be achieved in real time.

In view of this, conventional unmanned-vehicle navigation technology still leaves room for improvement.

To solve the above problems, an object of the present invention is to provide a method and system for cooperative navigation of two unmanned vehicles, in which a UGV and a UAV cooperate so as to largely eliminate the drawback of a UGV slowly searching for a path across the map on its own.

A second object of the present invention is to provide such a method and system in which the computer used for obstacle recognition and path-planning computation is implemented as a cloud server, improving the mobility and endurance of the unmanned vehicles.

A further object of the present invention is to provide such a method and system with a preset resolution rule that adjusts the map-image resolution according to the capture height, accelerating overall image processing so that path-planning results are produced quickly.

Yet another object of the present invention is to provide such a method and system that merges the path-planning result with the map image at its original resolution, yielding a high-quality merged image with the planned path so that a user can inspect the path or use it to drive the UGV.

Another object of the present invention is to provide such a method and system that, based on the path-planning result, the UGV's current position, and/or the maximum length and maximum width in the UGV's specifications (optionally including the minimum turning radius), generates corresponding trajectory control commands so that the UGV automatically follows the planned trajectory.

Throughout this disclosure, the quantifiers "a" or "an" applied to elements and components are used merely for convenience and to convey the general scope of the invention; they should be read as meaning one or at least one, and the singular also includes the plural unless another meaning is clearly intended.

The term "coupled" as used throughout this disclosure covers direct or indirect electrical and/or signal connections, which a person of ordinary skill in the art may select according to the requirements of use.

"Computer" as used throughout this disclosure refers to any data processing device with a specific function, implemented in hardware or in hardware and software, in particular one having a processor for analyzing information and/or generating corresponding control information, for example a server, a virtual machine, a desktop computer, a notebook computer, a tablet computer, or a smartphone, as will be understood by a person of ordinary skill in the art to which the present invention pertains.

"Cloud server" as used throughout this disclosure refers to a server created with virtualization software, which divides one physical (bare-metal) server into several virtual servers for running applications and for information processing and storage; users can remotely access the functions of these virtual servers through an online interface.

"Processor" as used throughout this disclosure refers to any electronic chip with data storage, computation, and signal generation capabilities, or to an electronic device containing such a chip. For example, the chip may be a central processing unit (CPU), a microcontroller (MCU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or a system on a chip (SoC); the device may be a programmable logic controller (PLC) or an Arduino UNO. A person of ordinary skill in the art may make a selection based on computing performance, price, size constraints, or functional requirements.

"Database" as used throughout this disclosure refers to a collection of related electronic data stored on a hard disk, in memory, or a combination thereof, which can be manipulated through the operations provided by a database management system (DBMS), such as create, read, search, update, and delete. The DBMS may organize electronic data under different data models, for example relational, hierarchical, network, or object-oriented; the following description uses a relational DBMS such as MySQL as an example, without limiting the invention.

In the cooperative path navigation method for two unmanned vehicles of the present invention, the two vehicles are a UGV and a UAV. With the UGV at a predetermined location and the UAV capturing a map image of that location, the method includes: a computer receives the map image, which has a first resolution; the computer reduces the first resolution to a second resolution according to a preset resolution-adjustment rule, under which the reduced second resolution is positively correlated with the capture height of the map image; and the computer receives start-point information and end-point information and analyzes them together with the map image to generate a path image containing a path trajectory that includes the start point and the end point. The start-point and end-point information may be defined by the computer from the UGV's current position and the map image, or by a user through an operation module that receives and displays the map image on a display device so the user can set them. The computer then generates a corresponding trajectory control command from the path trajectory and the UGV's current position, to drive the UGV along the trajectory.

The cooperative path navigation system for two unmanned vehicles of the present invention includes: a UGV having a positioning unit for obtaining the UGV's current position; a UAV having another positioning unit, for obtaining the UAV's current position, and a camera module for capturing a map image; and a computer, coupled to the UGV and the UAV, that executes the cooperative path navigation method of the present invention.

Accordingly, in the method and system of the present invention, the UAV captures the map image, and the path trajectory the computer derives from that image is associated with the UGV, whose movement can then be controlled by the computer or by a user. This largely eliminates the drawback of the UGV slowly searching for a path across the map on its own, and improves its efficiency in crossing obstacles and/or performing tasks. Through the preset resolution rule, the map-image resolution is adjusted according to the capture height, speeding up overall image processing and producing path-planning results quickly. Because the start-point and end-point information can be defined, in particular according to a predefined scheme, the computer can perform path planning automatically. Finally, from the path trajectory and the current position, the computer can automatically generate control commands, achieving automatic control of the UGV's operation (crossing obstacles or reaching a destination).

The first resolution of the map image may be 1280 x 960 pixels, and the preset resolution rule may be defined as follows: when the capture height exceeds 50 meters but is below 60 meters, the second resolution is 1280 x 960 pixels; above 40 meters but below 50 meters, 640 x 480 pixels; above 30 meters but below 40 meters, 320 x 240 pixels; above 20 meters but below 30 meters, 256 x 192 pixels; above 10 meters but below 20 meters, 128 x 96 pixels; and at 10 meters or below, 128 x 96 pixels. Through this preset adjustment rule, the computation, processing, and response speed of the computer and of the overall system can be increased without compromising the correctness of the path planning.
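The height-to-resolution table above can be sketched as a simple lookup function. The thresholds and target resolutions follow the patent text; the function name, the strict `>` boundary semantics, and the tuple return format are illustrative assumptions (the patent phrases each band as "exceeds X meters but below Y meters").

```python
# Sketch of the preset resolution-adjustment rule: map the UAV's capture
# height (meters) to the reduced second resolution. Boundary handling at
# exact multiples of 10 m is an assumption, as the patent uses open bands.
def second_resolution(capture_height_m: float) -> tuple:
    if capture_height_m > 50:       # over 50 m, up to 60 m
        return (1280, 960)          # keep the original first resolution
    if capture_height_m > 40:
        return (640, 480)
    if capture_height_m > 30:
        return (320, 240)
    if capture_height_m > 20:
        return (256, 192)
    return (128, 96)                # the two lowest bands share one value
```

Note that the two lowest bands (10-20 m and at or below 10 m) both map to 128 x 96 pixels, so they collapse into a single branch.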

The computer may be a cloud server. Compared with installing the computer directly on an unmanned vehicle, this reduces the vehicle's payload and thereby improves its mobility and endurance.

The computer may remove the background of the path image, i.e. everything outside the path trajectory, so that the path image contains only the trajectory, and then align and merge the path image with the map image to obtain a merged image. By aligning and merging the trajectory with the map image, the path-planning result is obtained efficiently as a high-quality merged image.
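The background-removal and merge step above can be sketched in a few lines. Images are modeled here as 2D grids of pixel values already brought to the same resolution; the pixel encoding (0 for background, a marker value for trajectory pixels) is an assumption, as the patent does not fix a pixel format.

```python
PATH_PIXEL = 255  # assumed marker value for trajectory pixels

def merge_path_onto_map(map_img, path_img):
    """Overlay only the trajectory pixels of path_img onto map_img.

    Both images must share the same dimensions (already aligned).
    Background pixels of the path image are discarded, so the full-quality
    map shows through everywhere except along the planned trajectory.
    """
    assert len(map_img) == len(path_img) and len(map_img[0]) == len(path_img[0])
    merged = [row[:] for row in map_img]      # copy the original map
    for y, row in enumerate(path_img):
        for x, px in enumerate(row):
            if px == PATH_PIXEL:              # keep trajectory, drop background
                merged[y][x] = PATH_PIXEL
    return merged
```

In practice this would operate on real image buffers (e.g. loaded PNGs) rather than nested lists, but the per-pixel logic is the same.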

A display device of an operation module may receive and display the merged image together with the UGV's real-time position within it, making it easy for the user to inspect the merged image or to drive the UGV accordingly.

The computer may generate a corrected path trajectory according to the maximum length and maximum width of the corresponding UGV. The corrected trajectory optimizes the path and prevents unexpected collisions or blockage while the UGV follows it, and an appropriate trajectory can be produced for UGVs of different specifications.
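One common way to realize a size-corrected trajectory is to inflate every obstacle cell of the occupancy grid by the vehicle's half-footprint before planning, so that any path found through the inflated grid keeps the required clearance. This is a sketch under assumptions: the patent does not name a specific correction algorithm, and treating the footprint as a square of the larger of the two dimensions is a simplification.

```python
def inflate_obstacles(grid, max_length_cells, max_width_cells):
    """Return a copy of grid with obstacle cells (1) grown by the
    vehicle's half-footprint, measured in grid cells (Chebyshev radius)."""
    radius = max(max_length_cells, max_width_cells) // 2
    rows, cols = len(grid), len(grid[0])
    inflated = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:               # occupied in the original grid
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            inflated[rr][cc] = 1
    return inflated
```

Planning on the inflated grid then lets the planner treat the vehicle as a point while still respecting its physical extent.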

The computer generates a corresponding trajectory control command from the path trajectory and the corresponding UGV's current position, maximum length, and maximum width, to drive that UGV along the trajectory. In this way, control commands suited to a UGV of a particular specification can be generated automatically, achieving automatic control of its operation.

1: Unmanned ground vehicle (UGV)
2: Unmanned aerial vehicle (UAV)
10, 20: Processors
11, 21: Transmission modules
12, 22: Positioning units
13, 23: Camera modules
14, 24: Thermal imaging modules
15, 25: Gas detection modules
16, 26: Particulate matter detection modules
3: Computer
31: Image processing module
32: Image recognition module
33: Path planning model
4: Operation module
S1: Image receiving step
S2: Image downscaling step
S21: Resolution reduction step
S21A: Chroma reduction step
S3: Path planning step
S31: Obstacle recognition step
S32: Binarization step
S32A: Image aspect-ratio adjustment step
S33: Path trajectory generation step
S33A: Path-image resolution upscaling step
S33B: Scale restoration step
S4: Image merging step
S41: Background removal step
S41A: Resolution unification step
S42: Merging step
S5: UGV movement control step

[Fig. 1] System architecture diagram of a preferred embodiment of the present invention.

[Fig. 2] Flow chart of the method of a preferred embodiment of the present invention.

[Fig. 3] Detailed flow of the path planning step of Fig. 2.

To make the above and other objects, features, and advantages of the present invention clearer, preferred embodiments are described in detail below with reference to the accompanying drawings; elements labeled with the same reference numeral in different drawings are regarded as identical, and their description is not repeated.

Referring to Fig. 1, a preferred embodiment of the cooperative navigation system for two unmanned vehicles of the present invention includes an unmanned ground vehicle (UGV) 1, an unmanned aerial vehicle (UAV) 2, and a computer 3; the UGV 1 and the UAV 2 are each coupled to the computer 3. Preferably, the system further includes an operation module 4 coupled to the UGV 1, the UAV 2, and the computer 3.

The UGV 1 and the UAV 2 respectively have a processor 10 and 20, a transmission module 11 and 21, and a positioning unit 12 and 22, and each preferably further has at least one of a camera module 13 and 23, a thermal imaging module 14 and 24, a gas detection module 15 and 25, and a particulate matter detection module 16 and 26. The processors 10 and 20 control the operation of the UGV 1 and the UAV 2 respectively, preferably performing the corresponding control based on feedback from the various signals each vehicle receives. In addition, the UGV 1 and the UAV 2 preferably have automatic obstacle avoidance. The transmission modules 11 and 21 may be coupled to each other, to the computer 3, or to other devices, to receive, transmit, or exchange data, and serve as the basis on which the processors 10 and 20 control the UGV 1 and the UAV 2 respectively. The positioning units 12 and 22 may be global positioning system (GPS) receivers for obtaining the current positions of the UGV 1 and the UAV 2 respectively. The camera modules 13 and 23 may be, for example, cameras for capturing images of the surroundings or map images. The thermal imaging modules 14 and 24 detect/obtain an object temperature and/or an ambient temperature, for example using a MELEXIS MLX90640 infrared thermal camera. The gas detection modules 15 and 25 can detect at least one of carbon monoxide (CO), carbon dioxide (CO2), liquefied petroleum gas (LPG), ammonia (NH3), nitrogen dioxide (NO2), methane (CH4), propane (C3H8), butane (C4H10), hydrogen (H2), and alcohol vapor. The particulate matter detection modules 16 and 26 can detect PM2.5. Preferably, each vehicle also carries a pre-trained face recognition model for recognizing faces in the output of the camera modules 13 and 23 or the thermal imaging modules 14 and 24, and the temperature distribution of a recognized face can preferably be obtained. It should be noted that the above techniques and functions relating to the vehicles' processors/controllers, signal exchange, automatic obstacle avoidance, photography, thermal imaging, gas detection, particulate detection, face recognition, and so on are common knowledge in the art to which the present invention pertains and are understood by those skilled in it, so they are not elaborated here.

The computer 3 has an image processing module 31 that includes an image recognition module 32 and a path planning model 33. The image processing module 31 receives an image with a first resolution captured by the UAV 2; the image recognition module 32 identifies at least one obstacle in the map image, and the path planning model 33 produces a path-planning result for the UGV 1 to operate in the corresponding area. Note that the computer 3 may be installed locally on the UGV 1 or the UAV 2, or deployed remotely as a cloud/remote server, to receive, process, and/or return data for either vehicle. Preferably, to increase the mobility and endurance of the UGV 1 and the UAV 2, the computer 3 is a cloud server coupled to each of them, reducing the weight carried by the UGV 1 and the UAV 2.

In one example, the image recognition module 32 is an object detection model that performs the corresponding image recognition functions. The object detection model is built on a Path Planning Node workstation, which periodically checks a database for requests to run object detection, image recognition, and path planning. Preferably, the Path Planning Node runs Windows 10 and uses Anaconda to set up the Python environment required for the work, the PyTorch training framework required by the object detection model, and the image annotation tool for building custom datasets, and uses Visual C++ to build Darknet for computing the anchors parameters required by the custom model.

The training set for the custom model can come from free online material and self-captured images. Varying the shooting angle, distance, size, and orientation of the same object helps improve the accuracy of the trained model. In a training example with a single obstacle/object class, the training set contains more than 500 images, all labeled with the LabelImg tool to mark the objects to be recognized; each labeled image produces a file recording the coordinates of the labeled objects. Training on this single class can then be extended to recognizing obstacles/objects of multiple classes.

The custom dataset is split 80%/20% into a training set and a validation set, producing two corresponding path files. After the path files are generated, the yaml and names files for training are created: the yaml file records the locations of the training-set and validation-set path files and the number of object classes, and the names file records the class names. Using the cluster analysis function provided by Darknet, three groups of anchors for different scales are computed from the training set. Training a custom model requires modifying the parameter configuration in the cfg file; the parameters to modify are width, height, filters, and anchors. In one example, the object detection model uses a 608 x 608-pixel resolution and one recognition class, so width and height are set to 608, filters is set to 18, and the anchors computed by Darknet are filled in; training completes after 300 epochs and outputs a record of the training process. However, various image recognition/object detection techniques are widely used in the field, and the techniques applied in the present invention are not limited to the above.
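The cfg values quoted above fit the usual YOLO-family relation between the class count and the per-scale output filters: filters = anchors_per_scale x (classes + 4 box coordinates + 1 objectness score). This relation is standard Darknet/YOLO practice rather than something the patent states; the patent only gives the resulting number (18 filters for one class).

```python
def yolo_filters(num_classes: int, anchors_per_scale: int = 3) -> int:
    """Filters needed in each YOLO detection layer of the cfg file:
    each anchor predicts 4 box coordinates, 1 objectness score,
    and one confidence score per class."""
    return anchors_per_scale * (num_classes + 5)
```

With one class this gives 3 x (1 + 5) = 18, matching the configuration described in the example.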

In another example, the path planning model 33 uses the A* search algorithm to perform the corresponding path-planning function. The A* search algorithm operates mainly on planar grid maps. In one application of it, the input binarized PNG image is first converted into an occupancy grid map (OGM) to satisfy the conditions for running A* path planning; in the grid map, a value of 1 means a node is occupied, and 0 means the node is traversable. Before planning, a start coordinate and a goal coordinate are obtained, and the coordinates are adjusted to match the resolution of the input image; that is, if the input image has been rescaled, the coordinates are rescaled accordingly to avoid errors in the start and goal positions. Once the grid map has been converted and the start and goal coordinates obtained, the A* search can be executed. A* may move in eight directions (up, down, left, right, and the four diagonals) or four directions (up, down, left, right); the smoother and more complete eight-way movement is preferred for path planning. After planning, the matplotlib tool draws the path data as a path trajectory and outputs a path image. However, various path-planning techniques are widely used in the field, and the techniques applied in the present invention are not limited to the above.
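The planning step above can be sketched compactly: the binarized map is treated as an occupancy grid (1 = occupied, 0 = free), and an A* search with eight-directional movement connects the start and goal cells, with coordinates rescaled when the image resolution changes. The diagonal step cost and the octile-distance heuristic are conventional choices; the patent names A*, the OGM encoding, eight-way movement, and coordinate scaling, but not these exact constants.

```python
import heapq
import math

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1),
         (-1, -1), (-1, 1), (1, -1), (1, 1)]   # eight-way movement

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(a, b):
        dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dr, dc) + (math.sqrt(2) - 1) * min(dr, dc)  # octile distance

    open_heap = [(heuristic(start, goal), 0.0, start)]
    came_from, best_g = {}, {start: 0.0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                         # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in MOVES:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:       # occupied cell, not traversable
                continue
            step = math.sqrt(2) if dr and dc else 1.0
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(open_heap, (ng + heuristic(nxt, goal), ng, nxt))
    return None

def scale_point(point, from_res, to_res):
    """Rescale a (row, col) coordinate when the image resolution changes,
    mirroring the start/goal coordinate adjustment described above."""
    return (point[0] * to_res[0] // from_res[0],
            point[1] * to_res[1] // from_res[1])
```

Converting the resulting cell list into a drawn trajectory (as the patent does with matplotlib) is then a straightforward plotting step.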

該操作模組4係分別與該無人車1、無人機2、電腦3耦接,用以控制該無人車1、該無人機2,並接收該無人車1、該無人機2及/或該電腦3的資訊。換言之,該操作模組4可為控制平台,具體可為具有顯示畫面、產生訊號及進行無線傳輸等功能的裝置,例如智慧型手機、平板、筆記型電腦(Laptop Computer)或桌上型電腦(Desktop Computer)。各單元(該無人車1、該無人機2、該電腦3及該操作模組4)間的資料傳輸型態可以是直接傳輸或間接傳輸,或可依各單元的連線狀態或資料處理能力而對應變化。以無人車1與該操作模組4之間的資料傳輸或接收為例,若為直接傳輸型態, 該操作模組4可發出一指令至該無人車1,以使該無人車1產生一對應反饋(例如是控制該無人車移動或停止);若為間接傳輸型態,該操作模組4所發出的指令先傳輸至該無人機2或該電腦3,再由該無人機2或該電腦3發出相應的前述指令至該無人車1,以使該無人車1產生一對應反饋。又,在其他可行範例中,該操作模組4亦可僅係耦接該電腦3的一操作面板,用以達成本發明中所述的內容。 The operation module 4 is coupled to the unmanned vehicle 1, the unmanned aerial vehicle 2 and the computer 3 respectively to control the unmanned vehicle 1 and the unmanned aerial vehicle 2, and to receive the unmanned vehicle 1, the unmanned aerial vehicle 2 and/or the Computer 3 information. In other words, the operating module 4 can be a control platform, and specifically can be a device with functions such as displaying images, generating signals, and performing wireless transmission, such as a smart phone, a tablet, a laptop computer (Laptop Computer) or a desktop computer ( Desktop Computer). The data transmission type between each unit (the unmanned vehicle 1, the drone 2, the computer 3 and the operation module 4) can be direct transmission or indirect transmission, or can be based on the connection status or data processing capabilities of each unit. And corresponding changes. Take the data transmission or reception between the unmanned vehicle 1 and the operation module 4 as an example. 
In the direct transmission type, the operation module 4 can issue an instruction to the unmanned vehicle 1 so that the unmanned vehicle 1 produces a corresponding response (for example, moving or stopping); in the indirect transmission type, the instruction issued by the operation module 4 is first transmitted to the drone 2 or the computer 3, which then forwards the corresponding instruction to the unmanned vehicle 1 so that the unmanned vehicle 1 produces a corresponding response. In other possible examples, the operation module 4 can also simply be an operation panel coupled to the computer 3, used to achieve the functions described in the present invention.

該操作模組4可具有一顯示裝置41用以顯示一操作介面或對應影像，該操作介面具有對應的數個操作按鈕(可為實體按鈕、虛擬按鈕或其組合)，以分別對應產生控制指令，特別是用於操作對應載具(無人車1或無人機2)上述模組或單元中的各種功能或作動。在一實施範例中，該操作模組4為一行動裝置，操作介面較佳為該行動裝置的一應用程式(APP)視窗，該應用程式視窗可包含數個虛擬按鈕，該數個虛擬按鈕對應產生的控制指令可包含一方向控制指令、一拍攝控制指令、一定位指令。詳言之，該方向控制指令用以控制使用者所選擇載具(無人車1或無人機2)移動；舉例而言，該方向控制指令係可包含一停止指令、一前進指令、一後退指令、一左轉指令及一右轉指令，以分別控制對應無人載具產生對應的動作。 The operation module 4 can have a display device 41 for displaying an operation interface or a corresponding image. The operation interface has several corresponding operation buttons (physical buttons, virtual buttons or a combination thereof) that generate the corresponding control instructions, in particular for operating the various functions or actions of the above-mentioned modules or units of the corresponding vehicle (unmanned vehicle 1 or drone 2). In one implementation example, the operation module 4 is a mobile device, and the operation interface is preferably an application (APP) window of the mobile device. The application window may contain several virtual buttons, and the control instructions they generate may include a direction control instruction, a shooting control instruction and a positioning instruction. Specifically, the direction control instruction is used to control the movement of the vehicle selected by the user (unmanned vehicle 1 or drone 2); for example, the direction control instruction may include a stop instruction, a forward instruction, a backward instruction, a left-turn instruction and a right-turn instruction, each controlling the corresponding unmanned vehicle to perform the corresponding action.
The shooting control instruction is used to control the operation of the camera module 13 or 23 of the vehicle selected by the user; for example, the shooting control instruction may include a photographing instruction, a video-recording instruction, a stop-photographing instruction and/or a stop-recording instruction, each controlling the corresponding camera module 13 or 23 to perform the corresponding function. The positioning instruction is used to control the operation of the positioning unit 12 or 22 of the vehicle selected by the user, to obtain the current or real-time position information of the corresponding vehicle, such as quantifiable coordinates.

根據本發明上述系統，在一具體實施例中，在該無人車1位在一預定位置時，特別是該電腦3缺乏對應該預定位置的預建立或即時的一地圖影像時，而無法有效率的跨越障礙物或抵達一目標位置時，該電腦3根據該無人車1的一當前位置(基於其定位單元12)，發出一支援指令(包含該無人車1的當前位置資訊)以控制該無人機2至該預定位置，特別是使該無人機2的定位單元22的位置資訊與該無人車1的該當前位置吻合，以拍下該預定位置的一地圖影像；該地圖影像較佳是一俯視圖。在另一實施範例中，對應前述預定位置，使用者透過該操作模組4操控該無人機2至該預定位置拍攝該地圖影像。 According to the above system of the present invention, in a specific embodiment, when the unmanned vehicle 1 is at a predetermined position and cannot efficiently cross an obstacle or reach a target position, particularly because the computer 3 lacks a pre-established or real-time map image corresponding to that predetermined position, the computer 3 issues a support instruction (containing the current position information of the unmanned vehicle 1, based on its positioning unit 12) to direct the drone 2 to the predetermined position, in particular so that the position reported by the positioning unit 22 of the drone 2 matches the current position of the unmanned vehicle 1, and to capture a map image of the predetermined position; the map image is preferably a top view. In another implementation example, the user controls the drone 2 through the operation module 4 to fly to the predetermined position and capture the map image.

詳言之,請參照第2圖,係顯示基於本發明上述系統的該影像處理流程,並藉由該電腦3執行以下各步驟: In detail, please refer to Figure 2, which shows the image processing flow based on the above system of the present invention, and the following steps are executed by the computer 3:

影像接收步驟S1:接收無人機2所拍攝的地圖影像,該地圖影像具有一第一解析度。較佳地,該第一解析度可為1280 x 960像素,該地圖影像具有一預設影像長寬比例為4:3。 Image receiving step S1: Receive the map image captured by the UAV 2. The map image has a first resolution. Preferably, the first resolution may be 1280 x 960 pixels, and the map image has a default image aspect ratio of 4:3.

影像降階步驟S2：用以將所接收影像的容量降低，如此以便於整體電腦運行效率，該影像降階的具體步驟包含： Image downscaling step S2: used to reduce the data size of the received image and thereby improve overall computing efficiency. The image downscaling comprises the following steps:

降低解析度步驟S21:降低所接收的該地圖影像的一解析度;特別是將所接收該地圖影像的該第一解析度降低為一第二解析度。 Resolution reduction step S21: reduce a resolution of the received map image; in particular, reduce the first resolution of the received map image to a second resolution.

在該預設解析度規則中,經降低後的該解析度係與該地圖影像的拍攝高度呈正相關;較佳地,該預設調整規則係如表一所示:

Figure 111110880-A0305-02-0013-1
Figure 111110880-A0305-02-0014-3
其中，根據本發明之實驗，所述拍攝高度與降階後解析度的關係，係可在不影響路徑規劃正確性的情況下，使解析度降到最低的規則。該影像處理工具OpenCV、pillow係基於Python程式語言架構所提供的開源工具(Open Source Tool)。該平均效率係表示對應該地圖影像解析度的降低程度，而能提升後續路徑規劃步驟(包含障礙物辨識及產生路徑軌跡)的處理速度(與影像處理工具OpenCV所處理的參考模式相比)，如此，本發明所提出的預設調整規則，可在不影響路徑規劃正確性的情況下，提昇該電腦及整體系統運算、處理及反應的速度，以改善習知技術中在影像處理及生成具有路徑規劃之地圖影像時運算速度過慢的問題；特別是，該平均效率係顯示根據本案上述步驟S21及下述步驟S31、S32、S33、S41、S42的優化效果。 In the preset resolution rule, the reduced resolution is positively correlated with the shooting height of the map image; preferably, the preset adjustment rule is as shown in Table 1:
Figure 111110880-A0305-02-0013-1
Figure 111110880-A0305-02-0014-3
Among them, according to the experiments of the present invention, the relationship between shooting height and reduced resolution is a rule that minimizes the resolution without affecting the correctness of path planning. The image processing tools OpenCV and pillow are open-source tools built on the Python programming language. The average efficiency indicates how much reducing the map image resolution speeds up the subsequent path planning steps (including obstacle recognition and path trajectory generation), compared with a reference mode processed with the image processing tool OpenCV. Thus, the preset adjustment rule proposed by the present invention can increase the computing, processing and response speed of the computer and the overall system without affecting the correctness of path planning, improving on the overly slow computation of the prior art when processing images and generating map images with planned paths; in particular, the average efficiency reflects the optimization effect of step S21 above together with steps S31, S32, S33, S41 and S42 below.
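Table 1 itself appears above only as embedded images; the height bands below follow the values recited in claim 2 of this patent. The following sketch maps a shooting height to the second resolution and rescales the start/end coordinates accordingly (the handling of heights exactly on a band boundary is our assumption, since the claim phrases each band as "exceeds X but under Y"):

```python
def target_resolution(height_m):
    """Map the drone's shooting height (metres) to the downscaled second
    resolution, following the bands recited in claim 2."""
    if height_m > 50:
        return (1280, 960)  # 50-60 m: kept at the first resolution
    if height_m > 40:
        return (640, 480)
    if height_m > 30:
        return (320, 240)
    if height_m > 20:
        return (256, 192)
    return (128, 96)        # 20 m and below

def scale_point(pt, src_res, dst_res):
    """Rescale an (x, y) pixel coordinate when the map image is resized,
    so the start and end points stay aligned with the downscaled image."""
    return (round(pt[0] * dst_res[0] / src_res[0]),
            round(pt[1] * dst_res[1] / src_res[1]))
```

`scale_point` implements the coordinate adjustment mentioned in the A* discussion above: if the image is shrunk from the first to the second resolution, the start/end coordinates shrink by the same factors.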

可選的降低彩度步驟S21A:降低該地圖影像的一彩度(Saturation);特別是降低具有該第二解析度的該影像的彩度。 Optional reducing saturation step S21A: reducing a saturation (Saturation) of the map image; in particular, reducing the saturation of the image with the second resolution.
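Step S21A does not prescribe a particular tool. One straightforward way to lower saturation (shown here with NumPy rather than OpenCV or pillow, purely for illustration) is to blend each pixel toward its luminance:

```python
import numpy as np

def reduce_saturation(rgb, factor=0.5):
    """Desaturate an RGB image (H x W x 3): factor = 1 keeps the image
    unchanged, factor = 0 yields pure grayscale."""
    img = rgb.astype(np.float32)
    # ITU-R BT.601 luma weights for the grayscale reference.
    gray = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    out = gray[..., None] + factor * (img - gray[..., None])
    return np.clip(out, 0, 255).astype(rgb.dtype)
```

With pillow, the equivalent operation is available through its `ImageEnhance.Color` enhancer; the NumPy version is used here only to keep the sketch self-contained.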

路徑規劃步驟S3:用以分析處理該地圖影像以產生對應的路徑軌跡,該路徑規劃的具體步驟(請參照第3圖)包含: Path planning step S3: used to analyze and process the map image to generate corresponding path trajectories. The specific steps of path planning (please refer to Figure 3) include:

障礙物辨識步驟S31：針對前一步驟(可為步驟S21或S21A)經處理的該地圖影像進行影像辨識(特別是透過該影像辨識模組32)，若經辨識判斷有一障礙物，附加一障礙物邊界以標示該地圖影像中的該障礙物。 Obstacle recognition step S31: perform image recognition (in particular through the image recognition module 32) on the map image processed in the previous step (step S21 or S21A); if an obstacle is identified, add an obstacle boundary to mark that obstacle in the map image.

二值化步驟S32:將前一步驟(步驟S31)經處理的該影像進行二值化處理。 Binarization step S32: Binarize the image processed in the previous step (step S31).
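A minimal sketch of step S32; the threshold value and the convention that darker pixels are the occupied ones are our assumptions, not values fixed by the patent:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Binarize a grayscale map: 1 marks occupied cells (assumed here to be
    the darker obstacle pixels), 0 marks free space, matching the OGM
    convention used by the path planner above."""
    return (gray < threshold).astype(np.uint8)
```

The resulting 0/1 array can be fed directly to an occupancy-grid planner such as the A* sketch given earlier.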

可選的影像長寬比例調整步驟S32A：藉由一比例正規化程序，將前一步驟(步驟S32)經處理的該地圖影像的影像長寬比例自一預設影像長寬比例調整為1:1。 Optional image aspect ratio adjustment step S32A: through a ratio normalization process, adjust the image aspect ratio of the map image processed in the previous step (step S32) from a preset image aspect ratio to 1:1.

路徑軌跡生成步驟S33：載入該影像的一預定的一起點資訊/座標與一終點資訊/座標，並針對前一步驟(步驟S32或S32A)經處理的該地圖影像進行路徑規劃(特別是透過該路徑規劃模型33)，以產生具有一路徑軌跡的一路徑影像。較佳地，該路徑軌跡包含該起點資訊與該終點資訊。 Path trajectory generation step S33: load predetermined start-point information/coordinates and end-point information/coordinates for the image, and perform path planning (in particular through the path planning model 33) on the map image processed in the previous step (step S32 or S32A) to generate a path image having a path trajectory. Preferably, the path trajectory includes the start-point information and the end-point information.

其中，前述的起點資訊/座標與終點資訊/座標，係可由該電腦3依一預定義方式而設定的資訊。該起點資訊可定義為該無人車1的該當前位置；該終點資訊可以該無人車1移動至該當前位置前的一位置與該當前位置的一向量方向，並以該當前位置朝該向量方向延伸至對應的該地圖影像的一邊界或距離該邊界一距離的一位置；該終點資訊亦可以是該當前位置對應的該地圖影像的幾何中心點對稱映射的一位置。詳言之，該起點資訊與該終點資訊係該電腦依據該無人車的該當前位置及該地圖影像所定義。惟，各種預定義方式可根據使用者所需而調整，並可包含對應判斷條件以避免該起點或該終點設置於不可行的位置(例如設置於障礙物、湖泊等該無人車1無法移動的位置)，本發明所應用之技術並不以上述內容為限。在另一實施範例中，該起點資訊與該終點資訊的資訊係可由使用者透過該操作模組4設置， Among them, the aforementioned start-point and end-point information/coordinates can be set by the computer 3 in a predefined manner. The start-point information can be defined as the current position of the unmanned vehicle 1. The end-point information can be derived from the vector direction from a position the unmanned vehicle 1 occupied before reaching the current position to the current position, extending from the current position along that vector direction to a boundary of the corresponding map image, or to a position at a certain distance from that boundary; the end-point information can also be the position obtained by mapping the current position symmetrically through the geometric centre of the map image. In other words, the start-point information and the end-point information are defined by the computer according to the current position of the unmanned vehicle and the map image. However, the various predefined methods can be adjusted according to the user's needs, and may include corresponding judgment conditions to avoid placing the start or end point at an infeasible location (for example on an obstacle, in a lake, or anywhere the unmanned vehicle 1 cannot move); the technology applied in the present invention is not limited to the above. In another implementation example, the start-point information and the end-point information can be set by the user through the operation module 4.
For example, the operation module 4 receives the map image and displays it on the display device 41, so that the user can set the corresponding start-point information and end-point information.
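The two predefined end-point rules described above can be sketched as follows. The boundary margin and the rounding are illustrative choices, not values from the patent:

```python
def mirror_endpoint(current, image_size):
    """End-point rule 2: reflect the vehicle's current position through the
    geometric centre of the map image."""
    w, h = image_size
    return (w - current[0], h - current[1])

def extend_endpoint(prev, current, image_size, margin=5):
    """End-point rule 1: extend the motion vector (prev -> current) from the
    current position until just inside the image boundary."""
    w, h = image_size
    dx, dy = current[0] - prev[0], current[1] - prev[1]
    if dx == 0 and dy == 0:
        return current  # no motion direction available
    x, y = current
    while margin <= x + dx < w - margin and margin <= y + dy < h - margin:
        x, y = x + dx, y + dy
    return (x, y)
```

Either rule yields a candidate end point that would still have to pass the feasibility checks mentioned above (not on an obstacle, not in a lake, and so on).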

可選的提升路徑影像解析度步驟S33A:提升前一步驟(步驟S33)的該路徑影像的一路徑影像解析度。 Optional step S33A of improving path image resolution: improving a path image resolution of the path image in the previous step (step S33).

可選的比例還原步驟S33B:若該影像經該比例正規化程序(步驟S32A)處理,將透過一比例還原步驟,將該路徑影像的長寬比例調整為該地圖影像的該預設長寬比例。 Optional scale restoration step S33B: If the image is processed by the scale normalization process (step S32A), the aspect ratio of the route image will be adjusted to the default aspect ratio of the map image through a scale restoration step. .

可選的影像合併步驟S4:用以將所獲得的該路徑軌跡與該地圖影像合併,該影像合併的具體步驟包含: Optional image merging step S4: used to merge the obtained path trajectory with the map image. The specific steps of the image merging include:

消除背景步驟S41：將前一步驟(步驟S33、S33A或S33B)經處理的該路徑影像經一去背(消除背景)程序，以去除該路徑影像中對應該路徑軌跡以外的背景影像，使該路徑影像僅包含該路徑軌跡的影像；詳言之，對應該路徑軌跡以外的背景影像係為透明的。 Background elimination step S41: pass the path image processed in the previous step (step S33, S33A or S33B) through a background-removal process to remove the background outside the path trajectory, so that the path image contains only the image of the path trajectory; specifically, the background outside the path trajectory is made transparent.
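One simple way to realize step S41 is to add an alpha channel and make background-coloured pixels transparent; the assumption that the path image background is white is ours (matplotlib's default), not a requirement stated by the patent:

```python
import numpy as np

def remove_background(path_rgb, bg_color=(255, 255, 255)):
    """Convert the path image to RGBA and make every background-coloured
    pixel fully transparent, leaving only the drawn trajectory visible."""
    h, w, _ = path_rgb.shape
    alpha = np.full((h, w), 255, dtype=np.uint8)
    bg = np.asarray(bg_color, dtype=np.uint8)
    alpha[np.all(path_rgb == bg, axis=-1)] = 0  # background -> transparent
    return np.dstack([path_rgb, alpha])
```

When drawing with matplotlib, an alternative is to save the figure with a transparent background in the first place, which skips this masking step.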

可選的解析度一致化步驟S41A:提升前一步驟(步驟S41)經處理的該路徑影像的該路徑影像解析度,使該路徑影像解析度與該地圖影像的該第一解析度一致。 Optional resolution unification step S41A: Improve the route image resolution of the route image processed in the previous step (step S41), so that the route image resolution is consistent with the first resolution of the map image.

合併步驟S42：將前一步驟(步驟S41或S41A)經處理的該路徑影像與具有該第一解析度的該地圖影像進行對位合併，特別是基於相同的尺寸比例及相同參考點的方式進行合併，以獲得一合併影像，使該地圖影像具有該路徑影像中的該路徑軌跡。 Merging step S42: align and merge the path image processed in the previous step (step S41 or S41A) with the map image having the first resolution, in particular based on the same size ratio and the same reference point, to obtain a merged image in which the map image carries the path trajectory from the path image.
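Once the path image is transparent outside the trajectory (step S41) and matches the map's first resolution (step S41A), step S42 reduces to alpha compositing. A minimal sketch, assuming both images share the same size and reference origin as the step requires:

```python
import numpy as np

def merge_images(map_rgb, path_rgba):
    """Alpha-composite the transparent-background path image (RGBA) onto the
    full-resolution map (RGB), pixel for pixel."""
    a = path_rgba[..., 3:4].astype(np.float32) / 255.0
    out = (path_rgba[..., :3].astype(np.float32) * a
           + map_rgb.astype(np.float32) * (1.0 - a))
    return np.clip(out, 0, 255).astype(np.uint8)
```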

控制無人車移動步驟S5：根據前述含有該路徑軌跡的任一步驟(如前述步驟S33、S33A、S33B、S41、S41A或S42)中的一路徑軌跡，控制該無人車1移動，以避開該地圖中的障礙物。在一較佳實施例中，該無人車1的控制，係透過該電腦3依據該路徑軌跡與該無人車1的一當前位置，產生對應的控制指令以控制該無人車1沿該路徑軌跡移動；詳言之，此時該電腦3會將該無人車1的一當前位置與該路徑軌跡轉換為可量化的座標資訊，以產生對應的路徑軌跡控制指令(特別是透過內建預定義的程式碼生成/轉換資料庫，可對應不同無人載具規格產生適當的程式指令)，並由該無人車1的處理器10接收該路徑軌跡指令，以控制該無人車1進行對應的移動。其中，該當前位置可基於其定位單元12所定義；特別地，該當前位置即是該預定位置。在另一可行範例中，特別是在獲取該合併影像的一狀態(對應前述步驟S42)，使用者可透過該操作模組4依據該合併影像控制該無人車1移動；例如是，可透過該操作模組4的一顯示裝置41顯示該合併影像與該無人車1對應該合併影像中的即時位置(基於其定位單元12可定義精確的位置座標/資訊)，以便該使用者進行操作。如此，透過前述無人機2拍攝的該地圖影像，及前述電腦3對該地圖影像進行處理的結果，將關聯至該無人車1，並藉以控制該無人車1移動。 Step S5 of controlling the movement of the unmanned vehicle: control the unmanned vehicle 1 to move, avoiding the obstacles in the map, according to the path trajectory from any of the aforementioned steps that contain it (such as step S33, S33A, S33B, S41, S41A or S42). In a preferred embodiment, the unmanned vehicle 1 is controlled by the computer 3, which generates corresponding control instructions based on the path trajectory and a current position of the unmanned vehicle 1 to make it move along the path trajectory. In detail, the computer 3 converts the current position of the unmanned vehicle 1 and the path trajectory into quantifiable coordinate information to generate the corresponding path trajectory control instructions (in particular through a built-in predefined code generation/conversion database that can produce suitable program instructions for different unmanned vehicle specifications), and the processor 10 of the unmanned vehicle 1 receives the path trajectory instructions to perform the corresponding movement. The current position can be defined by its positioning unit 12; in particular, the current position is the predetermined position.
In another possible example, particularly in the state where the merged image has been obtained (corresponding to step S42 above), the user can control the movement of the unmanned vehicle 1 through the operation module 4 according to the merged image; for example, a display device 41 of the operation module 4 can display the merged image together with the real-time position of the unmanned vehicle 1 within it (precise position coordinates/information can be defined by its positioning unit 12), so that the user can operate the vehicle. In this way, the map image captured by the drone 2 and the result of its processing by the computer 3 are associated with the unmanned vehicle 1 and used to control its movement.

較佳地，該路徑軌跡的產生係與該無人車1的規格資訊中的最大長度與最大寬度(特別是從俯視面所定義的)等資訊關聯；更佳地，該路徑軌跡的產生係與該無人車1的規格資訊中的最大長度、最大寬度、最小迴轉半徑等資訊關聯。如此，該路徑軌跡的規劃係符合該無人車1的規格，以避免該無人車1依該路徑軌跡移動時產生非預期的碰撞或阻礙。在一較佳實施範例中，在前述步驟S33進行路徑規劃程序時，該電腦3會載入該無人車1的前述規格資訊以產生該路徑軌跡。在另一實施範例中，亦可在透過前述步驟S33獲得該路徑軌跡後，於後續的步驟或額外程序中，考量該無人車1的前述規格資訊以修正該路徑軌跡。 Preferably, the generation of the path trajectory is associated with information in the specification of the unmanned vehicle 1 such as its maximum length and maximum width (in particular as defined from the top view); more preferably, it is associated with the maximum length, maximum width, minimum turning radius and similar information in the specification of the unmanned vehicle 1. In this way, the planned path trajectory conforms to the specification of the unmanned vehicle 1, avoiding unexpected collisions or obstructions when the unmanned vehicle 1 moves along it. In a preferred implementation example, when the path planning process of step S33 is performed, the computer 3 loads the aforementioned specification information of the unmanned vehicle 1 to generate the path trajectory. In another implementation example, after the path trajectory has been obtained through step S33, the aforementioned specification information of the unmanned vehicle 1 can be considered in a subsequent step or an additional procedure to correct the path trajectory.
Alternatively, in another implementation example, the displayed or presented path trajectory may optionally be corrected or uncorrected, and the computer 3 considers the maximum length, the maximum width and, optionally, the minimum turning radius of a corresponding unmanned vehicle 1 to be controlled, in order to generate a corresponding path trajectory control instruction. In other words, the computer 3 can generate a corresponding corrected path trajectory or a corresponding path trajectory control instruction according to a maximum length and a maximum width of a corresponding unmanned vehicle 1, where the corresponding unmanned vehicle 1 can be the original unmanned vehicle 1 or another unmanned vehicle 1 whose specification may be the same as or different from the original one. Taking each unmanned vehicle's specification into account in the path trajectory or path trajectory control instruction is particularly suitable for switching between unmanned vehicles 1 of different specifications: when the original unmanned vehicle 1 cannot carry out the task, the computer 3 can control an unmanned vehicle 1 of a different specification to continue the original task without capturing the map image again, which simplifies route planning and improves the immediacy and applicability of the overall system.
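The patent names the vehicle-size constraint but not an algorithm for it. One common way to fold a vehicle's maximum width into grid-based planning, shown here purely as an illustrative choice, is to inflate occupied cells by the vehicle's half-width (in grid cells) before running A*, so any path found afterwards keeps that clearance:

```python
def inflate_obstacles(grid, radius):
    """Mark every cell within `radius` cells of an occupied cell as occupied,
    so a path planned on the result keeps that clearance from obstacles."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # copy; keep the original grid intact
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for rr in range(max(0, r - radius), min(rows, r + radius + 1)):
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1)):
                        out[rr][cc] = 1
    return out
```

Re-running the planner on the inflated grid (with a radius derived from the current vehicle's maximum width and the map's metres-per-pixel scale) is one way to reuse the same map image for vehicles of different specifications.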

藉由前述系統與影像處理流程，本發明可實施一種二無人載具協同導航方法，特別是在該無人車1位在一預定位置且該無人機2拍取該預定位置的一地圖影像的一狀態下，包含如下步驟： Based on the aforementioned system and image processing flow, the present invention can implement a cooperative navigation method for two unmanned vehicles, particularly in a state where the unmanned vehicle 1 is at a predetermined position and the drone 2 captures a map image of that predetermined position, comprising the following steps:

對應前述步驟S1,該電腦3接收該地圖影像。 Corresponding to the aforementioned step S1, the computer 3 receives the map image.

對應前述步驟S2，該電腦3依據一預設調整解析度規則降低該地圖影像的一解析度，在該預設解析度規則中，經降低後的該解析度係與該地圖影像的拍攝高度呈正相關。其中，該方法可包含前述步驟S21及可選的步驟S21A。 Corresponding to the aforementioned step S2, the computer 3 reduces a resolution of the map image according to a preset resolution adjustment rule; in this preset resolution rule, the reduced resolution is positively correlated with the shooting height of the map image. The method may include the aforementioned step S21 and the optional step S21A.

對應前述步驟S3,該電腦3接收一起點資訊與一終點資訊,並根據該起點資訊、該終點資訊及該地圖影像進行分析以產生具有一路徑軌跡的一路徑影像。較佳地,該路徑軌跡包含該起點資訊與該終點資訊。其中,該方法可包含前述步驟S31、S32、S33及可選的步驟S32A、S33A、S33B,並可另包含前述步驟S4中的步驟S41、S42及其中可選的步驟S41A。 Corresponding to the aforementioned step S3, the computer 3 receives a starting point information and an end point information, and analyzes the starting point information, the end point information and the map image to generate a path image having a path trajectory. Preferably, the path trajectory includes the starting point information and the end point information. The method may include the aforementioned steps S31, S32, S33 and optional steps S32A, S33A, S33B, and may further include steps S41, S42 in the aforementioned step S4 and the optional step S41A.

對應前述步驟S5，該電腦3依據該路徑軌跡與該無人車的一當前位置，產生對應的控制指令以控制該無人車1沿該路徑軌跡移動。可選地，在獲取該合併影像的一狀態(對應前述步驟S42)，一使用者可透過該操作模組4依據該合併影像控制該無人車1移動。 Corresponding to the aforementioned step S5, the computer 3 generates corresponding control instructions based on the path trajectory and a current position of the unmanned vehicle to control the unmanned vehicle 1 to move along the path trajectory. Optionally, in the state where the merged image has been obtained (corresponding to the aforementioned step S42), a user can control the movement of the unmanned vehicle 1 through the operation module 4 according to the merged image.

綜上所述，本發明的二無人載具協同導航方法與系統，透過無人機拍攝地圖影像，及電腦對該地圖影像進行處理所產生的路徑軌跡，可關聯至無人車並藉以控制無人車移動，大幅縮短單純由無人車於地圖中緩慢搜尋路徑的缺點，提升無人車跨越障礙物及/或執行任務的效率。另，透過將電腦設置為一雲端伺服器，可提升無人載具的機動性與續航力。另，透過預設解析度規則，根據拍攝高度調整地圖影像解析度，可提升整體影像處理速度，以快速產生路徑規劃結果。另，透過電腦可根據路徑軌跡與無人車當前位置資訊產生對應路徑軌跡控制指令，可達成控制無人車沿路徑軌跡自動化的移動。另，最終還原影像解析度形成合併影像的技術手段，可有效率的獲得路徑規劃結果，並獲得高品質的合併影像，以便於使用者觀察合併影像或據以操作無人車移動。另，根據無人車規格資訊中的最大長度、最大寬度等資訊關聯（可選地包含最小轉彎半徑），可使路徑軌跡優化，避免無人車依路徑軌跡移動時產生非預期的碰撞或阻礙。 In summary, in the cooperative navigation method and system for two unmanned vehicles of the present invention, the map image captured by the drone and the path trajectory produced by the computer's processing of that image can be associated with the unmanned vehicle and used to control its movement, greatly reducing the drawback of the unmanned vehicle slowly searching for a path on the map by itself, and improving its efficiency in crossing obstacles and/or carrying out tasks. In addition, configuring the computer as a cloud server can improve the mobility and endurance of the unmanned vehicles. Adjusting the map image resolution according to the shooting height through the preset resolution rule increases overall image processing speed, so that path planning results are produced quickly. The computer can also generate corresponding path trajectory control instructions from the path trajectory and the current position of the unmanned vehicle, achieving automated movement of the unmanned vehicle along the path trajectory. Moreover, the technique of finally restoring the image resolution to form a merged image efficiently yields the path planning result together with a high-quality merged image, making it easy for the user to observe the merged image or operate the unmanned vehicle accordingly.
In addition, by associating the path trajectory with the maximum length, maximum width and related information in the unmanned vehicle's specification (optionally including the minimum turning radius), the path trajectory can be optimized to avoid unexpected collisions or obstructions when the unmanned vehicle moves along it.

雖然本發明已利用上述較佳實施例揭示，然其並非用以限定本發明，任何熟習此技藝者在不脫離本發明之精神和範圍之內，相對上述實施例進行各種更動與修改仍屬本發明所保護之技術範疇，因此本發明之保護範圍當包含後附之申請專利範圍所記載的文義及均等範圍內之所有變更。 Although the present invention has been disclosed using the above preferred embodiments, they are not intended to limit it. Various changes and modifications made to the above embodiments by anyone skilled in the art, without departing from the spirit and scope of the invention, still fall within the technical scope protected by the invention; therefore, the protection scope of the invention includes all changes within the literal meaning and the equivalent scope of the appended claims.

S1:影像接收步驟 S1: Image receiving steps

S2:影像降階步驟 S2: Image reduction step

S21:降低解析度步驟 S21: Steps to reduce resolution

S21A:降低彩度步驟 S21A: Steps to reduce chroma

S3:路徑規劃步驟 S3: Path planning steps

S4:影像合併步驟 S4: Image merging step

S41:消除背景步驟 S41: Eliminate background step

S41A:解析度一致化步驟 S41A: Resolution consistency step

S42:合併步驟 S42: Merge step

S5:控制無人車移動步驟 S5: Steps to control the movement of unmanned vehicles

Claims (8)

一種二無人載具協同導航方法，該二無人載具分別為一無人車與一無人機，在該無人車位在一預定位置且該無人機拍取該預定位置的一地圖影像的一狀態下，包含：一電腦接收該地圖影像，該地圖影像具有一第一解析度；該電腦依據一預設調整解析度規則降低該第一解析度為一第二解析度，在該預設解析度規則中，經降低後的該第二解析度係與該地圖影像的拍攝高度呈正相關；及該電腦接收一起點資訊與一終點資訊，並根據該起點資訊、該終點資訊及該地圖影像進行分析，以產生具有一路徑軌跡的一路徑影像，且該路徑軌跡包含該起點資訊與該終點資訊；該起點資訊及該終點資訊是該電腦依據該無人車的一當前位置及該地圖影像所定義，或是由一使用者透過一操作模組所定義，該操作模組接收並顯示該地圖影像於一顯示裝置，以供該使用者設定對應的該起點資訊與該終點資訊；該電腦根據該路徑軌跡與該無人車的一當前位置，產生對應的一路徑軌跡控制指令，以控制該無人車沿該路徑軌跡移動。 A cooperative navigation method for two unmanned vehicles, the two unmanned vehicles being an unmanned ground vehicle and an unmanned aerial vehicle respectively, in a state where the unmanned vehicle is located at a predetermined position and the drone captures a map image of the predetermined position, comprising: a computer receiving the map image, the map image having a first resolution; the computer reducing the first resolution to a second resolution according to a preset resolution adjustment rule, wherein in the preset resolution rule the reduced second resolution is positively correlated with the shooting height of the map image; and the computer receiving start-point information and end-point information and analyzing them together with the map image to generate a path image having a path trajectory, the path trajectory including the start-point information and the end-point information; the start-point information and the end-point information being defined by the computer according to a current position of the unmanned vehicle and the map image, or being defined by a user through an operation module,
the operation module receiving the map image and displaying it on a display device for the user to set the corresponding start-point information and end-point information; and the computer generating a corresponding path trajectory control instruction according to the path trajectory and a current position of the unmanned vehicle, to control the unmanned vehicle to move along the path trajectory. 如請求項1之二無人載具協同導航方法，其中，該地圖影像的該第一解析度為1280 x 960像素，該預設解析度規則係定義為：在該拍攝高度超過50公尺但未滿60公尺的一狀態，該第二解析度為1280 x 960像素；在該拍攝高度超過40公尺但未滿50公尺的一狀態，該第二解析度為640 x 480像素；在該拍攝高度超過30公尺但未滿40公尺的一狀態，該第二解析度為320 x 240像素；在該拍攝高度超過20公尺但未滿30公尺的一狀態，該第二解析度為256 x 192像素；在該拍攝高度超過10公尺但未滿20公尺的一狀態，該第二解析度為128 x 96像素；在該拍攝高度為10公尺以下的一狀態，該第二解析度為128 x 96像素。 The cooperative navigation method for two unmanned vehicles of claim 1, wherein the first resolution of the map image is 1280 x 960 pixels, and the preset resolution rule is defined as: in a state where the shooting height exceeds 50 meters but is under 60 meters, the second resolution is 1280 x 960 pixels; in a state where the shooting height exceeds 40 meters but is under 50 meters, the second resolution is 640 x 480 pixels; in a state where the shooting height exceeds 30 meters but is under 40 meters, the second resolution is 320 x 240 pixels; in a state where the shooting height exceeds 20 meters but is under 30 meters, the second resolution is 256 x 192 pixels; in a state where the shooting height exceeds 10 meters but is under 20 meters, the second resolution is 128 x 96 pixels; in a state where the shooting height is 10 meters or below, the second resolution is 128 x 96 pixels. 如請求項1之二無人載具協同導航方法，其中，該電腦是一雲端伺服器。 The cooperative navigation method for two unmanned vehicles of claim 1, wherein the computer is a cloud server. 如請求項1之二無人載具協同導航方法，其中，該電腦去除該路徑影像中對應該路徑軌跡以外的背景影像，使該路徑影像僅包含該路徑軌跡的影像，並將該路徑影像與具有該地圖影像進行對位合併，以獲得一合併影像。 The cooperative navigation method for two unmanned vehicles of claim 1, wherein the computer removes the background outside the path trajectory in the path image so that the path image contains only the image of the path trajectory, and aligns and merges the path image with the map image to obtain a merged image. 如請求項4之二無人載具協同導航方法，其中，一操作模組的一顯示裝置接受並顯示該合併影像與該無人車對應該合併影像中的一即時位置。 The cooperative navigation method for two unmanned vehicles of claim 4, wherein a display device of an operation module receives and displays the merged image and the real-time position of the unmanned vehicle within the merged image. 如請求項1~3之二無人載具協同導航方法，其中，該電腦可根據一對應無人車的一最大長度與一最大寬度產生一對應修正路徑軌跡。 The cooperative navigation method for two unmanned vehicles of any one of claims 1 to 3, wherein the computer can generate a corresponding corrected path trajectory according to a maximum length and a maximum width of a corresponding unmanned vehicle. 如請求項1~3中任一項之二無人載具協同導航方法，其中，該電腦根據該路徑軌跡與一對應無人車的一當前位置、一最大長度及一最大寬度，產生一對應路徑軌跡控制指令，以控制該對應無人車沿該路徑軌跡移動。 The cooperative navigation method for two unmanned vehicles of any one of claims 1 to 3, wherein the computer generates a corresponding path trajectory control instruction according to the path trajectory and a current position, a maximum length and a maximum width of a corresponding unmanned vehicle, to control the corresponding unmanned vehicle to move along the path trajectory.
一種二無人載具協同導航系統，包含：一無人車，具有一定位單元，用於獲取該無人車的一當前位置；一無人機，具有另一定位單元與一攝像模組，該另一定位單元用於獲取該無人機的一當前位置，該攝像模組用於拍攝一地圖影像；及一電腦，與該無人車及該無人機耦接，並執行如請求項1~7中任一項之二無人載具協同導航方法。 A cooperative navigation system of two unmanned vehicles, comprising: an unmanned ground vehicle having a positioning unit for obtaining a current position of the unmanned vehicle; an unmanned aerial vehicle having another positioning unit and a camera module, the other positioning unit being used to obtain a current position of the drone and the camera module being used to capture a map image; and a computer coupled to the unmanned vehicle and the drone, executing the cooperative navigation method for two unmanned vehicles of any one of claims 1 to 7.
TW111110880A 2022-03-23 2022-03-23 Method for two unmanned vehicles cooperatively navigating and system thereof TWI812102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111110880A TWI812102B (en) 2022-03-23 2022-03-23 Method for two unmanned vehicles cooperatively navigating and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111110880A TWI812102B (en) 2022-03-23 2022-03-23 Method for two unmanned vehicles cooperatively navigating and system thereof

Publications (2)

Publication Number Publication Date
TWI812102B true TWI812102B (en) 2023-08-11
TW202338301A TW202338301A (en) 2023-10-01

Family

ID=88585571

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111110880A TWI812102B (en) 2022-03-23 2022-03-23 Method for two unmanned vehicles cooperatively navigating and system thereof

Country Status (1)

Country Link
TW (1) TWI812102B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008186146A (en) * 2007-01-29 2008-08-14 Konica Minolta Business Technologies Inc Image forming apparatus
US20150022656A1 (en) * 2013-07-17 2015-01-22 James L. Carr System for collecting & processing aerial imagery with enhanced 3d & nir imaging capability
US20190206044A1 (en) * 2016-01-20 2019-07-04 Ez3D, Llc System and method for structural inspection and construction estimation using an unmanned aerial vehicle
US10082803B2 (en) * 2016-02-29 2018-09-25 Thinkware Corporation Method and system for providing route of unmanned air vehicle
KR20170126637A (en) * 2016-05-10 2017-11-20 팅크웨어(주) Method and system for providing route of unmanned air vehicle
US20190043370A1 (en) * 2017-08-02 2019-02-07 Microsoft Technology Licensing, Llc En route product delivery by unmanned aerial vehicles
WO2020205597A1 (en) * 2019-03-29 2020-10-08 Intel Corporation Autonomous vehicle system
CN113508066A (en) * 2019-03-29 2021-10-15 英特尔公司 Autonomous vehicle system
CN114115287A (en) * 2021-12-06 2022-03-01 西安航空学院 Unmanned vehicle-unmanned aerial vehicle air-ground cooperative patrol and guidance system

Also Published As

Publication number Publication date
TW202338301A (en) 2023-10-01

Similar Documents

Publication Publication Date Title
WO2022170742A1 (en) Target detection method and apparatus, electronic device and storage medium
US20170206227A1 (en) Method and apparatus for processing image
CN109447326B (en) Unmanned aerial vehicle migration track generation method and device, electronic equipment and storage medium
CN102708355A (en) Information processing device, authoring method, and program
US8149281B2 (en) Electronic device and method for operating a presentation application file
CN111652072A (en) Track acquisition method, track acquisition device, storage medium and electronic equipment
WO2021027692A1 (en) Visual feature library construction method and apparatus, visual positioning method and apparatus, and storage medium
US11069086B2 (en) Non-transitory computer-readable storage medium for storing position detection program, position detection method, and position detection apparatus
Smith et al. Advanced Computing Strategies for Engineering: 25th EG-ICE International Workshop 2018, Lausanne, Switzerland, June 10-13, 2018, Proceedings, Part I
US11631195B2 (en) Indoor positioning system and indoor positioning method
Yin et al. Overview of robotic grasp detection from 2D to 3D
Zhang et al. A posture detection method for augmented reality–aided assembly based on YOLO-6D
Chen et al. Design and Implementation of AMR Robot Based on RGBD, VSLAM and SLAM
TWI812102B (en) Method for two unmanned vehicles cooperatively navigating and system thereof
JP2022081613A (en) Method, apparatus, equipment, medium and computer program for identifying characteristic of automatic operation
CN110853098B (en) Robot positioning method, device, equipment and storage medium
CN108416044B (en) Scene thumbnail generation method and device, electronic equipment and storage medium
CN115565072A (en) Road garbage recognition and positioning method and device, electronic equipment and medium
CN112529984B (en) Method, device, electronic equipment and storage medium for drawing polygon
Li et al. A vision-based end pose estimation method for excavator manipulator
Horng et al. Building an Adaptive Machine Learning Object-Positioning System in a Monocular Vision Environment
Zhang et al. Recent Advances in Robot Visual SLAM
CN116295507B (en) Laser inertial odometer optimization method and system based on deep learning
US20230351755A1 (en) Processing images for extracting information about known objects
CN112348874A (en) Method and device for determining structural parameter representation of lane line