At Shanghai Yangshan Deep Water Port, scenes like this play out around the clock. Bright red quay cranes stretch out their huge arms, tower cranes line the dock, and rail cranes lock onto containers with precision. Huge containers are grabbed one by one while automated guided vehicles already wait beneath them; the containers are quickly loaded and towed to their designated positions. Traffic in the port area never stops, yet you can hardly see a single worker.
An Automated Guided Vehicle, abbreviated AGV, is an autonomous unmanned vehicle equipped with electromagnetic or optical guidance that can drive itself along a specified path.
Can Maqueen Plus act as an AGV? Could it even be smarter than one, for example by recognizing the type of goods it carries and delivering them to a position chosen according to the recognition result? Let's make Maqueen Plus an AGV with "eyes"!
This project uses the object recognition function of HuskyLens to identify different items and transport them to different positions according to the recognition results, transforming Maqueen Plus into an AI sorting master!
To make the transportation accurate, the built-in line-tracking sensors of Maqueen Plus are used, and a simple line-tracking algorithm realizes fixed-point delivery.
When we talk about computer vision, the first thing that comes to mind is image classification (object recognition). Indeed, image classification is one of the most basic tasks in computer vision.
In this project, the object recognition function is applied to distinguish different kinds of objects.
Object recognition refers to the perception and understanding of entities and environments in the three-dimensional world, and it belongs to the field of advanced computer vision.
Object recognition covers many complicated and interesting tasks, such as target detection, object localization and image segmentation. Target detection can be seen as a combination of image classification and image localization: given a picture, a target detection system should be able to identify the targets in it and give their locations.
At present there are two popular families of target detection algorithms: R-CNN and Yolo. R-CNN achieves higher accuracy but runs more slowly; Yolo is faster, but less accurate. The following is a brief introduction to the Yolo algorithm.
Yolo is an abbreviation of "you only look once", meaning that a prediction can be made after looking just once. The inspiration comes from ourselves: when humans look at a picture, we know the locations of the various targets in it at a glance.
Yolo excels in its simplicity and speed. If target detection is pictured as fishing, other algorithms spear the fish one at a time; Yolo casts a net and catches them all at once!
Yolo makes its prediction based on the whole picture and outputs all detected target information at once, including class and location. The algorithm splits the input picture into an SxS grid; if the center of an object falls in a grid cell, that cell is responsible for predicting the size and class of the target. As shown in the picture below, the center of the dog falls into the blue cell, so the blue cell is responsible for predicting this object.
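The grid-assignment rule can be sketched in a few lines of Python. This is an illustration only, not HuskyLens or Yolo source code; the function name and the default grid size S = 7 (the value used by Yolo v1) are our own choices. It simply computes which cell of the SxS grid contains an object's center point.

```python
def responsible_cell(center_x, center_y, img_w, img_h, s=7):
    """Return the (row, col) of the SxS grid cell that contains the
    object's center; in Yolo, that cell predicts the object's class
    and bounding box. Coordinates are in pixels."""
    col = min(int(center_x * s / img_w), s - 1)  # clamp to the last cell
    row = min(int(center_y * s / img_h), s - 1)
    return row, col

# In a 448 x 448 image split into a 7 x 7 grid, an object centered at
# (224, 320) falls into the cell in row 5, column 3:
print(responsible_cell(224, 320, 448, 448))  # (5, 3)
```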
The rule that "the grid cell containing the object's center is responsible for predicting that object" spans two stages: training and testing. In the training stage, each cell is taught to predict the objects whose centers fall inside it. In the testing stage, the cell naturally continues to do so, because that is exactly what it learned during training.
The object recognition function of HuskyLens can identify what an object is and track it.
At present, 20 kinds of objects are supported: planes, bicycles, birds, boats, bottles, buses, cars, cats, chairs, cattle, dining tables, dogs, horses, motorcycles, people, potted plants, sheep, sofas, trains and televisions.
The default setting recognizes a single object, but "Learn Multiple" can also be enabled.
Dial the function button left or right until "Object Recognition" is displayed at the top of the screen, then long-press the function button to enter the parameter settings for the object recognition function.
Dial the function button until "Learn Multiple" is displayed, then short-press it and dial to the right to turn on the "Learn Multiple" switch, i.e. the progress bar turns blue and the square icon on the bar moves to the right. Short-press the function button again to confirm this parameter.
Dial the function button to the left until "Save & Return" is shown; the screen prompts "Do you want to save the parameters?". "Yes" is selected by default, so short-press the function button to save the parameters and return automatically.
When detecting objects, HuskyLens recognizes them automatically: white bounding boxes labeled with names pick out the objects on the screen. At present only the 20 built-in object classes can be recognized; other objects are ignored.
Point HuskyLens at the target object. When the object is detected and its name displayed on the screen, align the "+" symbol with the object and short-press the learning button. The bounding box changes from white to blue, the object's name and ID number appear on the screen, and a message prompts: "Click again to continue! Click other button to finish". To learn another object, short-press the learning button before the countdown ends; otherwise short-press the function button before the countdown ends, or simply let the countdown run out.
The object IDs displayed on HuskyLens follow the order in which the objects were learned, i.e. they are marked "ID1", "ID2", "ID3" and so on, and each object ID has its own bounding box color.
When HuskyLens encounters objects it has already learned, colored bounding boxes pick them out on the screen together with their names and IDs. The boxes resize to match the objects and trace them automatically. Objects of the same class share the same box color, name and ID. Several object types can be recognized simultaneously, for example a bottle and a bird at the same time.
This function can serve as a simple filter, picking the target object out of a pile of objects and tracking it.
* This function cannot tell apart individuals within the same class. For example, it can identify that an object is a cat, but not which cat it is. In this it differs from the face recognition function, which can distinguish different faces.
How is the object recognition of HuskyLens used? How do we steer Maqueen Plus along a designated route? Let's break the whole project into several small tasks and build the AI sorting master step by step.
The project is broken down into three steps. First, we learn to use the object recognition function of HuskyLens and output the object name through the serial port. Then we learn a greyscale line-tracking algorithm to realize fixed-point movement. Finally, we put the whole project together and simulate a sorting-and-transportation scenario.
Learning and Recognition: select 3 items here for HuskyLens to learn. (Note: switch to the "Learn Multiple" mode first.)
When HuskyLens recognizes the bottle, the screen of the mainboard displays 1 and the serial port outputs "bottle"; when it recognizes the bicycle, the screen displays 2 and the serial port outputs "bicycle"; when it recognizes the chair, the screen displays 3 and the serial port outputs "chair".
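The sample program itself is written in graphical blocks, but the decision logic behind Task 1 can be sketched in Python. This sketch assumes the three objects were learned in the order bottle, bicycle, chair, so HuskyLens assigned them ID1 to ID3; the dictionary and function names are ours.

```python
# IDs follow the learning order on HuskyLens (an assumption for this sketch).
LEARNED = {1: "bottle", 2: "bicycle", 3: "chair"}

def handle_recognition(object_id):
    """Return (digit shown on the mainboard screen, text sent over the
    serial port) for a recognized ID, or None for an unlearned ID."""
    name = LEARNED.get(object_id)
    if name is None:
        return None
    return object_id, name

print(handle_recognition(1))  # (1, 'bottle')
```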
STEP 1 Function Analysis
As an AI sorting master, Maqueen Plus uses HuskyLens' object recognition to identify the object type. But the sorting task is complete only when the object is delivered to the designated location, just as the AGVs in the port finish the job with automated transportation.
For this kind of point-to-point transportation, the built-in greyscale sensors of Maqueen Plus can be used, implementing the delivery through line tracking.
Maqueen Plus has 6 greyscale sensors built into its underside, which are used to detect the black line.
When a greyscale sensor faces the white background, its indicator LED is off and its detection value is 0; when it faces the black line, the LED is on and the detection value is 1.
STEP 2 Instruction Learning
Let's take a look at some of the main instructions.
① Read the value of the line-tracking sensor; the feedback value is 0 or 1, where 1 indicates the sensor is on the black line. Select the sensor in the drop-down box; L1, L2, L3, R1, R2 and R3 match the labels on the bottom of Maqueen Plus.
STEP 3 Greyscale Test
Output the greyscale values through the serial port. Test program:
1. When L1 and R1 are both on the black line, the serial port outputs 1, 1;
2. When only L1 is on the black line, the serial port outputs 1, 0;
3. When only R1 is on the black line, the serial port outputs 0, 1.
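The expected serial output can be expressed as a tiny helper for checking the three test cases above on a PC. This mirrors the block program's print format; the function name is ours.

```python
def greyscale_report(l1, r1):
    """Format the L1/R1 readings the way the test program prints them:
    1 means the sensor is over the black line, 0 means white background."""
    return "{}, {}".format(l1, r1)

print(greyscale_report(1, 1))  # both sensors on the line -> 1, 1
print(greyscale_report(1, 0))  # only L1 on the line      -> 1, 0
print(greyscale_report(0, 1))  # only R1 on the line      -> 0, 1
```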
If the program does not work properly, troubleshoot the following problems:
(1) Black printed by an ordinary printer may not be recognized correctly; black adhesive tape and the official printed map work normally.
(2) Ambient light can affect the greyscale sensors; when the lighting changes greatly, the sensors need to be recalibrated. Calibration method: Maqueen Plus has a one-button greyscale calibration function, and the calibration button is shown in the picture below. To use it, make sure all line-tracking sensors are over the black calibration area, then press and hold the calibration button for 1 second. The two RGB lights at the front of Maqueen Plus flash green to indicate that calibration is complete; release the button to finish.
* Principle of the greyscale sensor: each sensor consists of an infrared emitter and an infrared receiver; because it is most often used to make a robot follow a line, it is also called a line-tracking sensor. The emitter continuously shines infrared light at the ground. If the light is reflected (for example by a white or other light-colored surface), the receiver picks up the signal, the sensor outputs 0, and its indicator LED stays off. If the light is absorbed or not reflected, the receiver gets no signal, the sensor outputs 1, and the LED turns on.
STEP 4 Flow Chart Analysis
Here the two inner sensors, L1 and R1, are used to patrol the line, and the default width of the black line is 2 cm. When L1 and R1 are both on the black line, Maqueen Plus drives straight ahead.
If you do not have a suitable map at hand, you can lay out a patrol map with 2 cm-wide black tape, as in the picture below.
When the program runs, the car automatically patrols along the black line.
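The two-sensor patrol logic can be summarized as a small decision function. This is a Python sketch of the flow chart, not the actual block program; the action names and the both-sensors-off fallback (keep driving forward) are our simplifications.

```python
def track_action(l1, r1):
    """Decide the next move from the two inner sensors (1 = over black).
    Both on the line: the car is centered, go straight.
    Only L1 on the line: the car has drifted right, so turn left.
    Only R1 on the line: the car has drifted left, so turn right.
    Neither: assume the line is still ahead and keep going (simplified)."""
    if l1 == 1 and r1 == 1:
        return "forward"
    if l1 == 1:
        return "turn_left"
    if r1 == 1:
        return "turn_right"
    return "forward"
```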
STEP 1 Function Analysis
Assume three kinds of objects need to be sorted: after recognizing the object class, Maqueen Plus completes the delivery along the corresponding track.
To control Maqueen Plus accurately, a T junction, a left junction and a right junction (shown in the picture below) are added to the line-tracking map, and junction judgment is added to the program accordingly.
The L3 and R3 greyscale sensors on Maqueen Plus are used to help judge junctions.
STEP 2 Junction Judgment Program
Modify the "read greyscale" function from Task 2 and create the functions "junction judgment", "turn left" and "turn right".
When debugging the program, it is suggested to run "junction judgment", "turn left" and "turn right" separately to verify that each one performs its intended function.
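Assuming the outer sensors L3 and R3 are used as described, the junction judgment can be sketched as follows. The classification rule here is our reading of the map layout, so verify it against your own map before relying on it.

```python
def junction_type(l3, r3):
    """Classify the junction from the outer sensors (1 = over black).
    Both outer sensors on black: a line crosses the path, i.e. a T junction.
    Only L3 on black: a branch opens to the left (left junction).
    Only R3 on black: a branch opens to the right (right junction).
    Neither: no junction, just the plain line."""
    if l3 and r3:
        return "T"
    if l3:
        return "left"
    if r3:
        return "right"
    return "none"
```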
STEP 3 Map Example
STEP 4 Flow Chart Analysis
Different maps need different route plans. Taking the map above as an example, the route from the starting point to terminal point 1 is: Maqueen Plus patrols along the line until it meets the left junction and turns left; it then continues along the line until it meets the T junction, where it has reached terminal point 1. If HuskyLens recognizes the object with ID1, the robot travels to terminal point 1 according to this plan; terminal points 2, 3 and so on work the same way.
The object recognition sequence from Task 1 is reused here.
The flow chart shows as follows:
Modify the main program and add the functions "route 1", "route 2" and "route 3". When programming, make a route plan based on your own map; the following program uses the example map above.
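One way to keep the route functions short is to describe each route as data: a list of (junction to wait for, action to take there) pairs. The sketch below is hypothetical Python, not the block program, with only route 1 from the example map filled in (turn left at the left junction, stop at the T junction); routes 2 and 3 would follow the same pattern.

```python
# Route plan for the example map: (junction to react to, action there).
ROUTES = {
    1: [("left", "turn_left"), ("T", "stop")],
}

def run_route(route, junction_stream):
    """Replay a stream of detected junctions against a route plan and
    return the actions taken. Junctions the current step is not waiting
    for are driven straight through (the car keeps line-tracking)."""
    actions = []
    step = 0
    for junction in junction_stream:
        expected, action = route[step]
        if junction == expected:
            actions.append(action)
            step += 1
            if step == len(route):
                break
        else:
            actions.append("follow_line")
    return actions

print(run_route(ROUTES[1], ["left", "T"]))  # ['turn_left', 'stop']
```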
In this lesson we learned the working principle of object recognition and how to operate the object recognition function of the HuskyLens AI sensor.
Combined with the built-in line-tracking sensors of Maqueen Plus, HuskyLens transforms Maqueen Plus into an AI sorting master capable of automated sorting and transportation.
Imagine if Maqueen Plus could also load and unload cargo with a mechanical arm; the whole application would then be even more intelligent.
1. Understand the working principle of object recognition;
2. Learn the operating method of the object recognition function of HuskyLens;
3. Learn the line-tracking control of Maqueen Plus.
In this project we implemented sorting and transportation, but how can Maqueen Plus get back to the starting point after delivering an object to its designated location? Can it keep tracking the line and come back? Try to implement this with a program.