
Computer Vision-Based Navigation System for Industrial AGVs


This project builds a centralized, computer vision-based navigation system for industrial AGVs (automated guided vehicles) using mobile robots.

 

Things used in this project

Hardware components

Raspberry Pi 3 Model B
The centralized computer vision system will be powered by the Raspberry Pi.
× 1
Raspberry Pi Camera Module
Initially, the overhead camera will be the Raspberry Pi camera module.
× 1
Raspberry Pi 3 Model B
The DonkeyCar will be powered by a second Raspberry Pi 3, which maintains the wireless link to the server.
× 1
Intel Movidius Neural Compute Stick
The Movidius stick will be used to accelerate image processing and neural-network inference on the Raspberry Pi.
× 1

Software apps and online services

OpenCV

Hand tools and fabrication machines

3D Printer (generic)

Story

In this method, video pre-processing and post-processing, path planning, robot control, and wireless communication with the mobile robot will all be done by a single-board computer, which in this case is the Raspberry Pi. The Raspberry Pi 3 will perform the post-processing techniques, including the path-planning algorithm, the robot-control algorithm, and the wireless communication. The Raspberry Pi is a low-cost SBC with a credit-card-size form factor that can handle many of the tasks your computer does, running software that is either free or open source. It exposes processor pins directly as GPIOs, so prototyping vision projects or learning computer science from scratch is convenient on such a device; one can learn on a PC as well, but implementation at the hardware level is not feasible there, since a PC does not expose much of its hardware. For its class the processor is very good: a Broadcom BCM2837 with a 1.2 GHz quad-core Arm Cortex-A53 CPU. The Raspberry Pi is convenient for beginners because it has very few software glitches and good overall performance. The processing handled by the Raspberry Pi could also be implemented on an FPGA, but that would be a much more complex and time-consuming approach.

The main advantage of using a single board is performing multiple tasks within the same architecture (in this scenario the main coding language is Python). It is easier to interface all the processes and multiple threads, which leads to better system performance. The path-planning and robot-control algorithms interact through specified commands to perform the final robot navigation. Only a few sets of integer variables will be passed over the MQTT protocol to control the orientation and speed of the mobile robot. The Intel Movidius Neural Compute Stick will be used in the image pre-processing and path-planning algorithms to provide neural-network power to the Raspberry Pi. The Movidius Neural Compute Stick (NCS) is produced by Intel and can run without any need for an Internet connection; its software development kit enables rapid prototyping, validation, and deployment of deep neural networks.
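Because only a few integers travel over MQTT, the command payload can be kept tiny. Below is a minimal sketch of such an encoding; the field layout, value ranges, and the example topic name are illustrative assumptions, not the project's actual protocol.

```python
import struct

# Hypothetical fixed layout: two signed 16-bit integers,
# orientation in degrees (-180..180) and speed in mm/s.
COMMAND_FMT = ">hh"

def encode_command(orientation_deg, speed_mm_s):
    """Pack the two control integers into a 4-byte MQTT payload."""
    return struct.pack(COMMAND_FMT, orientation_deg, speed_mm_s)

def decode_command(payload):
    """Unpack a payload back into (orientation, speed)."""
    return struct.unpack(COMMAND_FMT, payload)

# On the server side the payload would be published with paho-mqtt, e.g.:
#   client.publish("agv/donkeycar1/cmd", encode_command(45, 300))
```

A compact binary payload like this keeps per-message overhead low, which suits MQTT's lightweight design.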

The main architecture of the DonkeyCar is based on IoT technology, using another Raspberry Pi 3 Model B. The main goal is to implement a wireless network over the area using an efficient communication protocol such as the lightweight Message Queuing Telemetry Transport (MQTT). An algorithm will be developed to define how it operates and completes tasks in real time. The mobile robot at ground level (a DonkeyCar in this scenario) completes given tasks without any aid of on-board sensors, by following navigation instructions given by a server over a stable wireless communication link.
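On the robot side, the received integers have to be translated into actuator values. A sketch of that mapping, assuming a DonkeyCar-style steering/throttle interface normalised to [-1, 1] (the ranges, limits, and function name here are illustrative, not taken from the project):

```python
def command_to_actuators(orientation_deg, speed_mm_s,
                         max_steer_deg=45, max_speed_mm_s=1000):
    """Map the (orientation, speed) integers received over MQTT to
    normalised steering and throttle values in [-1, 1]."""
    def clamp(v):
        return max(-1.0, min(1.0, v))
    steering = clamp(orientation_deg / max_steer_deg)
    throttle = clamp(speed_mm_s / max_speed_mm_s)
    return steering, throttle
```

Clamping on the robot side guards against out-of-range commands arriving over the wireless link.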

Obstacles and the ground robot will be monitored by an overhead camera as shown in figure 2. A Raspberry Pi camera is used to acquire the video feed of the ground plane. Two main algorithms can be used for path planning: the wavefront and the A* search algorithm. System implementations of these two techniques will be the two main approaches to the final goal.

Wavefront Technique for Path Planning

It is the easiest and most efficient way to find the shortest possible path. In wavefront-based methods, values are assigned to each node starting from the target node, followed by a traversal from the start node to the target node using the assigned values. The goal is to ensure an optimal path length along with a fast execution time, which is addressed by preventing the full expansion of waves and using a new cost function so that optimality is not compromised. The algorithm starts at the goal cell and marks each adjacent cell with its distance to the current goal. Using 8-point connectivity, each cell has up to eight adjacent cells, with the diagonal cells at a distance of √2 ≈ 1.4 from the current cell; the remaining cells are each at a distance of 1. This process is then repeated for each cell, continuously marking neighboring cells, until the robot position has been reached. Cells marked as obstacles after dilation are ignored. The Focused Wavefront algorithm is a further modification of the Modified Wavefront (MWF) algorithm; it is considerably faster than the previous algorithms because it explores only a limited number of nodes. Each node is allocated two values, weight and cost. Weight is the value assigned to the node depending on its position, in exactly the same fashion as values are allocated in the modified wavefront algorithm, so the MWF algorithm can serve as the basis for this modification.
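The value-propagation step described above can be sketched in Python. Because a diagonal step costs √2 while a straight step costs 1, the sketch below expands cells with a priority queue (a Dijkstra-style wavefront) rather than plain breadth-first flooding; the grid layout and function names are illustrative, not the project's actual code.

```python
import heapq
import math

def wavefront(grid, goal):
    """Propagate distance values outward from the goal over a 2-D
    occupancy grid (1 = obstacle, 0 = free), using 8-connectivity.
    Returns a dict mapping each reachable free cell to its distance."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), math.inf):
            continue                      # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    step = math.sqrt(2) if dr and dc else 1.0
                    nd = d + step
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def extract_path(dist, start):
    """Follow the steepest descent of wavefront values from the start
    cell down to the goal (where the distance is zero)."""
    path = [start]
    cur = start
    while dist[cur] > 0:
        r, c = cur
        nxt = min(((r + dr, c + dc)
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr or dc) and (r + dr, c + dc) in dist),
                  key=lambda p: dist[p])
        if dist[nxt] >= dist[cur]:
            break                         # trapped: no descending neighbour
        path.append(nxt)
        cur = nxt
    return path
```

Once the values are propagated, path extraction is just a greedy descent from the robot's cell, which is what makes the wavefront method so simple to implement.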

Image pre-processing and post-processing are among the most critical parts of the whole process. A grayscale image of the ground will be produced as the initial step of the image processing. Obstacle areas will be identified using binary conversion and a proper pixel coordinate system. To find the correspondence between real-world coordinates and image coordinates, a calibration procedure has to be developed [3]. The pinhole model will be the main method; it gives a way to compute world coordinates from image coordinates and the focal distance f.
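The binary-conversion step can be sketched with NumPy as follows; in the real pipeline the same operations would come from OpenCV (`cv2.cvtColor` and `cv2.threshold`), and the threshold value and cell size here are illustrative assumptions.

```python
import numpy as np

# Real pipeline equivalents (not executed here):
#   gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#   _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
def image_to_occupancy(gray, threshold=100, cell=4):
    """Threshold a grayscale ground image (darker = obstacle) and
    down-sample it into a coarse occupancy grid for the planner."""
    binary = (gray < threshold).astype(np.uint8)    # 1 where dark (obstacle)
    h, w = binary.shape
    h, w = h - h % cell, w - w % cell               # crop to a multiple of cell
    blocks = binary[:h, :w].reshape(h // cell, cell, w // cell, cell)
    # a grid cell counts as an obstacle if any pixel inside it is dark
    return blocks.max(axis=(1, 3))
```

Down-sampling to a coarse grid keeps the wavefront expansion cheap on the Raspberry Pi while preserving obstacle locations.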

To calibrate the camera we work in two phases: first from image to floor, then from floor to robot. The estimate of the twelve elements of the matrix M is reduced to eleven by fixing the scale factor. In the first calibration phase the camera takes a picture of a calibration object whose dimensions are known. We do not want to use the classical least-squares method, which requires precise measurements in world coordinates, so we choose all the points of the calibration object to lie on the floor (z is null, and three elements of M vanish). The calibration object is a white square, 21 cm wide. The vertex coordinates are computed. The estimate of M is obtained from the first picture using least squares, trying to match the reference square. The initial estimate is then improved with Newton's method: this is a minimization problem, where the function to minimize is the difference between the estimated segment length and the real length.
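With all calibration points on the floor (z = 0), the projection reduces to a planar mapping, so the least-squares step can be sketched as fitting a 3×3 homography with the direct linear transform. This is a simplified stand-in for the procedure described above (the Newton refinement is omitted, and the function names are illustrative):

```python
import numpy as np

def estimate_floor_homography(img_pts, world_pts):
    """Least-squares estimate of the 3x3 homography H mapping floor
    (z = 0) world coordinates to image coordinates, via the DLT.
    img_pts, world_pts: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A: the right singular vector of the
    # smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                    # fix the free scale factor

def project(H, x, y):
    """Map a floor point (x, y) through H to pixel coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

The four vertices of the 21 cm reference square give exactly the minimum number of correspondences needed; extra points would simply make the least-squares fit overdetermined.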

After establishing the reference system on the floor, as shown in figure 5, we construct the matrix that transforms it into the reference frame of the robot. During this second calibration phase, pictures of the object are taken from different positions and orientations of the robot, and this minimization problem is again solved as before. We take as the robot reference system the one used by the robot's dead-reckoning. Let x be the coordinate vector in the robot frame.
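The floor-to-robot step amounts to estimating a rigid 2-D transform from corresponding points in the two frames. A least-squares sketch using the orthogonal Procrustes (Kabsch) solution is shown below; the project's own minimisation over robot poses may differ, so treat this as one standard way to solve that subproblem.

```python
import numpy as np

def fit_rigid_2d(floor_pts, robot_pts):
    """Least-squares 2-D rotation R and translation t such that
    robot ≈ R @ floor + t, via the orthogonal Procrustes solution."""
    P = np.asarray(floor_pts, dtype=float)
    Q = np.asarray(robot_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)             # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Using the centred cross-covariance makes the rotation estimate independent of the translation, which is then recovered from the centroids.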

To get the shortest path, the wavefront technique will be used. The related algorithms will be developed in Python in a Linux environment.

The wavefront technique was used to create a proper path plan. As shown in figure 11, the red dot represents the start and the blue dot the destination. An image of the ground plane was given as input to the algorithm. The grayscale image is processed to separate darker and lighter areas, identifying the free path by eliminating the darker (obstacle) regions.

Discussion

According to figure 1, the whole system is divided into three main stages: image acquisition, image processing and control, and the mobile robot. When observing the ground plane via the overhead camera, clear image acquisition from the Pi camera is a must. One of the main problems we identified is the calibration process of the overhead camera. The pinhole model will be the main method; it gives a way to compute world coordinates from image coordinates and the focal distance f. The next task was to develop an algorithm to determine the orientation of the robot in order to drive through waypoints smoothly.

Developing an image-processing-based orientation feedback system continued with the OpenCV ArUco marker method. An ArUco marker is a synthetic square marker (figure …) composed of a wide black border and an inner binary matrix that determines its identifier (id). The black border facilitates fast detection in the image, and the binary codification allows identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix.
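Once OpenCV has detected a marker, its four corner pixels are enough to recover the robot's position and heading in the image. A sketch of that geometry step is below; the detection itself would come from OpenCV's ArUco module (noted in the comments), and the function name here is illustrative.

```python
import math

# In the real pipeline the corners come from OpenCV's ArUco module, e.g.:
#   dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
#   corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
def marker_pose(corners):
    """Centre and heading of one ArUco marker from its four corner
    pixels, ordered top-left, top-right, bottom-right, bottom-left
    (the order OpenCV reports). The heading is the angle of the
    marker's top edge in radians, measured from the image x-axis."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    heading = math.atan2(y1 - y0, x1 - x0)
    return (cx, cy), heading
```

Feeding this heading back to the controller lets the server issue orientation corrections as the robot drives through its waypoints.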

The post Computer Vision-Based Navigation System for Industrial AGVs appeared first on SummerSolderS.


