Project 5: Pokémon

projectImage

Almost everyone is familiar with Pikachu. It is the first Pokémon of Ash Ketchum, the hero of the animated series "Pokémon", and ever since Ash befriended it, it has stayed by his side and grown up with him.

Such companionship is enviable. Although there are no real Pokémon in our lives, everyone would like a partner like that to keep them company.

So let's think about it: can we create a Pokémon of our own?

Function Description:

This project uses the HuskyLens object tracking function to turn the Maqueen Plus into your own Pokémon, one that moves nimbly and always follows behind you.

Materials Checklist:

projectImage

Knowledge Extension:

When we need to track a moving object, manual operation alone is not enough; we need visual object tracking technology. It is widely used in everyday life, for example in video surveillance and drone follow shooting. This project uses the HuskyLens object tracking function.

I. What is Object Tracking?

Object tracking is an important task in computer vision. It refers to continuously inferring the state of an object across a video sequence; simply put, it means identifying a specified object and following it from frame to frame.

projectImage

II. The Working Principle of Object Tracking

A single camera collects images and transmits the image information to a computer. After analysis and processing, the relative position of the moving object is calculated, and the camera is driven to rotate so that it tracks the object in real time.

An object tracking system carries out its tracking task in four main steps: object recognition, object tracking, object movement prediction, and camera control.

projectImage

Object Recognition: Against a static background, image processing algorithms can extract accurate information about the moving object.

projectImage

Object Tracking: Starting from the object position obtained in the previous step, a tracking algorithm follows the object through the subsequent image sequence, for example based on its color probability distribution. It can keep learning during later tracking so that the tracking becomes more and more accurate.

projectImage

Object Movement Prediction: To improve efficiency, the algorithm predicts where the moving object will appear in the next frame, narrowing the search and speeding up tracking.

projectImage

Camera Control: While collecting image information, the camera is steered so that its direction follows the object's movement. This generally requires a pan-tilt (PTZ) unit or another motion module.

projectImage

III. Object Tracking Application Field

Intelligent video surveillance: motion-based recognition (human identification, automated object detection, etc.), automated monitoring (watching a scene to detect suspicious behavior), and traffic monitoring (collecting traffic data in real time to direct traffic flow).

projectImage

Human-Computer Interaction: traditional human-computer interaction relies on the keyboard and mouse. Tracking technology is the key to giving computers the ability to recognize and understand people's poses, actions, and gestures.

projectImage

Virtual Reality: 3D interaction and virtual-character motion simulation in virtual environments benefit directly from research on video-based human motion analysis, which gives participants richer forms of interaction; human body tracking is its key technology.

projectImage

IV. Demonstration of HuskyLens Object Tracking Function

HuskyLens has a built-in object tracking function: it learns an object's features, tracks the object, and feeds its position back to the mainboard.

Unlike functions such as color recognition or face recognition, object tracking learns and recognizes an object (or person) as a whole. Color recognition deals only with colors, and a face is only part of the body, whereas object tracking learns and follows the object's overall characteristics.

The object tracking function can track only one object at a time; tracking multiple objects is not supported for now. An object with distinct contours is easier to learn and identify.

1. Select the "Object Tracking" Function

Dial the function button to the left or right until "Object Tracking" is displayed at the top of the screen.

projectImage
2. Object Learning

Point the HuskyLens at the target object and adjust the distance until the object is contained in the orange bounding box at the center of the screen. It is also acceptable for only part of the object to be inside the box, as long as that part has distinct features.

Then long press the learning button to learn the object from various angles and distances. During the learning process, an orange box with the words "Learning: ID1" is displayed on the screen.

projectImage

Once HuskyLens can track the object at different angles and distances, release the learning button to complete the learning.

* If there is no orange box at the center of the screen, HuskyLens has already learned an object. Select “Forget Learned Object” and learn again.

3. Keep Learning

In the object tracking function, HuskyLens can keep learning: as long as the camera sees the learned object, it keeps updating what it has learned with the object's current state, which helps it capture a moving object.

Operation method: Long press the function button to enter the parameter settings of the object tracking function. Dial the function button to the right to select "Learn Enable", short press it, then dial to the right to switch "Learn Enable" ON (the square icon on the progress bar moves to the right). Finally, short press the function button to confirm this parameter.

projectImage
4. Save the Model

By default, the last learned object is not kept when HuskyLens restarts, but you can turn on a switch to save models automatically.

Operation method: as above, enter the parameter settings and switch "Auto Save" ON. You then only need to learn the object once; after the camera restarts, the object you learned last time is still remembered.

Project Practice:

How is the HuskyLens object tracking function used? How can Maqueen Plus follow its owner closely? Let's break the whole project down into several small tasks and create our own Pokémon step by step.

We will complete the task in three steps. First, we learn to use the HuskyLens object tracking function and output its parameters in the serial port area. Next, we make the car follow a designated target, adjusting left and right and back and forth as the target moves. Finally, we refine the whole project so that Maqueen Plus follows the designated object as required.

Task 1: Get to Know Object Tracking

1. Program Design

STEP 1 Learning and Recognition

Here you can select an object with obvious contour features to learn, such as a hand gesture.

projectImage

STEP 2 Instruction Learning

Let's take a look at some of the main instructions.

①The IDx parameters are obtained from the requested "result"; -1 is returned if this ID is not on the screen or has not been learned.

projectImage
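
The -1 case is worth handling explicitly. Below is a minimal Python sketch of the idea (not the actual Mind+ blocks): handle_reading() is a hypothetical helper that takes a raw X-center value, such as the one returned for ID1, and turns it into a readable status.

def handle_reading(x_center):
    # -1 means ID1 has not been learned or is not currently on the screen
    if x_center == -1:
        return "no target"
    return "target at x = {}".format(x_center)

print(handle_reading(-1))    # -> no target
print(handle_reading(160))   # -> target at x = 160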

STEP 3 Coordinates Analysis

The HuskyLens screen resolution is 320×240, as shown in the picture below. The coordinates of the object's center point obtained through the program also fall within this range. For example, if the coordinates obtained are (160, 120), the tracked object is at the center of the screen.

projectImage

"X coordinates" and "Y coordinates" refer to the position of the box center point in the screen coordinate.

"Object width" and "Object height" refer to the size of the frame. Under the object tracking function, the frame is square, so the width and height are equal.

projectImage
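
As a quick worked example of these coordinates, the small Python sketch below (illustration only, no HuskyLens calls) computes how far a box center is from the middle of the 320×240 screen.

CENTER_X, CENTER_Y = 320 // 2, 240 // 2   # (160, 120), the middle of the screen

def offset_from_center(x, y):
    # positive dx: box is right of center; positive dy: box is below center
    return x - CENTER_X, y - CENTER_Y

print(offset_from_center(160, 120))   # (0, 0): the box is dead center
print(offset_from_center(60, 200))    # (-100, 80): left of center and low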

STEP 4 Function Analysis

Check each parameter of the box through the serial port.

2. Program Example
projectImage
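
Since the block program above is shown as a picture, here is a rough Python sketch of the same loop. get_box() is a hypothetical placeholder for "request data once from HuskyLens" plus the four result blocks; it returns canned values so the sketch runs without hardware.

import time

def get_box():
    # placeholder: on the robot this would query the HuskyLens for ID1
    return {"x": 160, "y": 120, "w": 80, "h": 80}

for _ in range(5):                      # the real program loops forever
    box = get_box()
    print("X:", box["x"], "Y:", box["y"],
          "Width:", box["w"], "Height:", box["h"])
    time.sleep(0.5)                     # poll the sensor a few times per second
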
3. Execution Result

In the serial port area of Mind+, turn on the serial port switch to see the parameters. Move the object left and right and observe how the X center changes, move it up and down to observe the Y center, and move it back and forth to observe the width and height.

projectImage

Task 2: Adjust Left and Right

1. Program Design

STEP 1 Function Analysis

As shown in the picture below, the screen is divided into three sections along the X axis of the camera's screen coordinate system, and the middle section is our target section. While the camera continuously detects the target object in the picture: if its X center is 120-200, the target is in the center of the field of vision and Maqueen Plus does not need to adjust its position; if its X center is 0-120, Maqueen Plus adjusts by turning right; if its X center is 200-320, Maqueen Plus adjusts by turning left.

projectImage
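
The three-zone decision described above can be sketched in a few lines of Python. This is an illustration of the logic only; the actual motor commands are issued by the Maqueen Plus driving blocks in the Mind+ program, so here they are simply returned as strings.

def adjust_left_right(x_center):
    # x_center is 0-320, or -1 when no target is tracked
    if x_center == -1:
        return "stop"            # nothing tracked: stay put
    if x_center < 120:
        return "turn right"      # target in the 0-120 zone
    if x_center > 200:
        return "turn left"       # target in the 200-320 zone
    return "stop"                # 120-200: target already centered

for x in (-1, 60, 160, 260):
    print(x, "->", adjust_left_right(x))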

STEP 2 Flow Chart Analysis

projectImage

2. Program Example

projectImage
3. Execution Result

When the box of the identified object is in the center of the screen, the car stops; when the box is on the left or right side of the screen, the car automatically adjusts its position left or right until the box is back in the target section of the screen.

projectImage

Task 3: Adjust Back and Forth

1. Program Design

STEP 1 Function Analysis

As its owner's Pokémon, the car should also be able to move along with its owner: when it falls relatively far behind, it automatically catches up, and when it gets too close, it retreats to a safe distance.

STEP 2 Flow Chart Analysis

Building on Task 2, the distance between Maqueen Plus and the target object is judged from the height of the box, and the back-and-forth adjustment is made according to the result.

projectImage
2. Program Example

Based on the Task 2 program, the "Adjust left and right" function remains unchanged and the rest of the program is modified. The complete program is as follows.

projectImage
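
As with Task 2, here is a hedged Python sketch of the combined logic rather than the block program itself. adjust_left_right() is carried over unchanged; the height thresholds 60 and 120 are placeholder values chosen for illustration, so tune them against the actual program on your robot.

def adjust_left_right(x_center):
    if x_center == -1:
        return "stop"
    if x_center < 120:
        return "turn right"
    if x_center > 200:
        return "turn left"
    return "stop"

def adjust_back_forth(box_height):
    # the box height stands in for distance: a small box means the target is far away
    if box_height == -1:
        return "stop"            # no target tracked
    if box_height < 60:          # placeholder "too far" threshold
        return "forward"
    if box_height > 120:         # placeholder "too close" threshold
        return "backward"
    return "stop"                # comfortable distance: hold position

# one pass over sample (x_center, box_height) readings
for x, h in ((160, 40), (260, 90), (160, 150)):
    print(adjust_left_right(x), "+", adjust_back_forth(h))
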
3. Execution Result

After Maqueen Plus finishes learning an object, it automatically follows the object forward, backward, left, and right, keeping the object's box in the center of the screen and at a suitable distance.

Project Summary:

Project Review

Understand the working principle of object tracking and how to operate the HuskyLens object tracking function.

The HuskyLens sensor can output the X center, Y center, height, and width of the target box. These parameters can be used to judge the relative position and distance of the target object.

When Maqueen Plus is used as a tracking car, these parameters help locate the target object, and the same approach works with other HuskyLens functions. For example, you could turn this project into a face follower and let Maqueen Plus follow a recognized human face.

Knowledge Nodes Recap

1. Understand the working principle of object tracking;

2. Learn the operating method of the HuskyLens object tracking function;

3. Learn to use HuskyLens to make Maqueen Plus follow a target.

Project Extension:

In the animation, the hero trains his Pokémon to make it stronger. Can you likewise give your Maqueen Plus more abilities? For example, add gesture control: when your hand draws a circle to the left, the car circles to the left; when your hand pushes forward, the car moves forward. Can you implement it with programming?

License: All Rights Reserved