Smart Spotlights | Huskylens Playground with micro:bit EP03

Difficulty: Easy
projectImage

When watching a play, concert, musical, or large-scale presentation, we often see a bright beam of light tracking an actor around the stage and moving with their actions. It is very eye-catching.

How does that work? In practice, the spotlight is usually controlled either manually or by a pre-set program. The former requires the lighting console operator to rehearse many times to achieve a perfect stage performance. The latter requires programming the spotlight to move along a specified route, which imposes many limitations on the actors, since even a small deviation can affect the whole performance.

Wouldn’t it be great if there were a smart spotlight that could automatically recognize and track the actors? How can we achieve that? An accurate spotlight “operator” is essential for this project.

Function Introduction:

This project uses the object tracking function of HuskyLens to track an actor on the stage, and drives a pan-tilt to follow the person automatically.

Material:

projectImage

Knowledge Field:

To track a moving object without relying on manual operation, we need visual object tracking. This technology is already widely used in daily life, for example in video surveillance and UAV follow shots. In this project, we will make use of the object tracking function of HuskyLens to build a smart follow spotlight.

1. What is object tracking?

As a type of behavior recognition, object tracking is one of the vital functions of AI visual recognition. It is a key topic in computer vision, referring to the process of making continuous inferences about a target’s state across a video sequence. Put simply, it means recognizing and tracking objects as they move within the camera’s field of view. The technology has a wide range of applications, both military and civilian.

projectImage

Operating Principle:

The camera collects image information and sends it to the computer. After analysis and processing, the computer works out the relative position of the moving object and rotates the camera accordingly to track it in real time. An object tracking system generally involves four steps: object recognition, object tracking, movement prediction, and camera control.

projectImage
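The four steps can be thought of as one continuous loop. Below is a minimal Python sketch of that loop; every function name in it is a hypothetical placeholder standing in for the corresponding stage, not a real API.

```python
# Conceptual sketch of the object tracking pipeline described above.
# All functions are hypothetical placeholders for the four stages.

def tracking_loop(camera):
    model = recognize_object(camera.capture())   # 1. object recognition
    while True:
        frame = camera.capture()                 # grab the next image
        x, y = track_object(frame, model)        # 2. object tracking
        nx, ny = predict_next_position(x, y)     # 3. movement prediction
        camera.rotate_towards(nx, ny)            # 4. camera control
```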

Object recognition obtains accurate appearance information about the object through image-processing algorithms against a static background, then recognizes and marks the object’s shape, as shown in the figure.

projectImage

Object tracking then follows the object through the subsequent image sequence using algorithms, based on the appearance characteristics obtained in the previous step, and continues learning during tracking so that the tracking becomes more accurate.

projectImage

Movement prediction means using algorithms to predict where the moving object will appear in the next frame, which optimizes the algorithm and improves efficiency. As the picture shows, the bird’s subsequent path and actions can be predicted from its movement trend over the first few seconds.

projectImage

Camera control moves the camera to follow the object’s direction of motion while continuing to collect image information. It usually requires coordination with a pan-tilt or another movement mechanism.

projectImage

Object tracking is mainly used in:

Smart video surveillance: motion-based recognition (human identification based on gait, automatic object detection), automated monitoring (watching for suspicious behavior), and traffic monitoring (collecting real-time traffic data to direct traffic).

projectImage

Human-computer interaction: traditional human-computer interaction relies on the computer’s keyboard and mouse. For a computer to recognize and understand postures, movements, and gestures, tracking technology is the key.

projectImage

Robot visual navigation: for a smart robot, tracking technology can be used to compute the motion trail of a filmed object.

projectImage

VR: 3D interaction and virtual-character action simulation in virtual environments benefit directly from research on video-based human motion analysis, providing richer forms of interaction for participants. Human tracking and analysis are the key technologies involved.

projectImage
Technology photo created by rawpixel.com - www.freepik.com

2. HUSKYLENS Sensor - Object Tracking Function

If you want HuskyLens to follow your steps, it needs a pair of “eyes” to keep watching you. How can we realize this? We will use the object tracking function of the HuskyLens sensor.

This function is a built-in algorithm in the sensor that learns the features of an object, tracks the object’s position on the screen, and feeds the position coordinates back to the main board. With the acquired position information, we can drive the Mind+ robot to track the object in real time.

Object Learning:

Unlike color recognition and face recognition, object tracking can learn and recognize an object (such as a person) as a whole. Color recognition looks only at color, and face recognition only at one part of the body, whereas object tracking follows an object by learning its overall characteristics.

Point HuskyLens at the target object, adjusting the distance until the object is enclosed in the yellow frame at the center of the screen. If the object is hard to fit completely inside the yellow frame, covering its distinctive features is enough. Then long-press the “learning button” to learn the object from various angles and distances. During the learning process, a yellow frame with the words “Learning: ID1” is displayed on the screen.

projectImage

When HuskyLens has tracked the object at different angles and distances, release the “learning button” to end the learning.

*Note: if there is no yellow frame in the center of the screen, HuskyLens has already learned an object. Make it forget the learned object first, then learn again.

Turn on “Spotlight”:

There are two LEDs on HuskyLens for detection in dim lighting.

projectImage

Turn on the LEDs: switch to the “General Settings” interface and find the LED switch.

projectImage

Press the button and dial left or right to toggle the LEDs.

projectImage

3. 2-DOF Pan-Tilt

projectImage

What is a pan-tilt?

A pan-tilt is a device that carries a camera and controls its movement. Pan-tilts are usually divided into fixed and electric types. A fixed pan-tilt suits a small monitoring range: its horizontal and tilt angles can be adjusted after the camera is installed. An electric pan-tilt covers a larger area than a fixed one; it usually consists of two motors that receive signals from a controller to move and position the camera accurately.

Pan-tilt in this project

The 2-DOF pan-tilt controls the movement of the camera and keeps it pointing at the recognition area, so that the follow spotlight always shines on the target. It allows the camera to move with two degrees of freedom: horizontal (X-axis) and vertical (Y-axis).

projectImage

Tips: what is a servo?

A servo is a kind of motor whose position (angle) can be controlled precisely. In Mind+, the rotation angle of a servo can be set by the program. The most common servos rotate through 0°~180°. There are also 360° servos, but they cannot be commanded to a specified angle. Here we use a 180° servo. A text-based sketch of servo control is shown below.
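If you are curious what servo control looks like outside of Mind+ blocks, here is a short MicroPython sketch for the micro:bit. It assumes the common 50 Hz hobby-servo signal with a 0.5–2.5 ms pulse range; check your servo’s datasheet, as some use 1–2 ms.

```python
# MicroPython (micro:bit): drive a 180-degree hobby servo on pin P8.
# Assumes a 0.5-2.5 ms pulse range; adjust for your servo if needed.
from microbit import pin8

pin8.set_analog_period(20)              # 20 ms PWM period (50 Hz)

def set_angle(angle):
    pulse_ms = 0.5 + (angle / 180) * 2.0            # 0° -> 0.5 ms, 180° -> 2.5 ms
    pin8.write_analog(round(pulse_ms / 20 * 1023))  # duty value out of 1023

set_angle(90)                           # move to the centre position
```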

Project Practicing:

The project is divided into two parts. First, we learn to use the object tracking function of HuskyLens and read the object’s coordinate data. Then, building on that, we add a pan-tilt to realize the automatic follow-spotlight function.

Task 1: Object Tracking and Coordinates

When HuskyLens detects an object, the target is automatically framed in color on the screen. The x and y coordinates of the color frame are assigned according to the coordinate system described below. After reading the coordinates from the UART/I2C port, you know the position of the object.

Task 2: Pan-Tilt-Controlled Smart Spotlight

Using the coordinate data obtained in the previous step, we drive the servos carrying the HuskyLens sensor so that the object’s coordinates move toward the center of the screen, realizing a real-time follow-light effect.

Task 1: Object Tracking and Coordinates

1. Hardware Connection

projectImage

The HuskyLens sensor uses the I2C interface; please mind the wiring order.

2. Program Design

STEP 1: Learning & Recognition

Before designing the program, we need to let HuskyLens learn the “actor” to be tracked.

projectImage

STEP 2: Mind+ Software Settings

Open Mind+ (version 1.6.2 or above), switch to “Offline” mode, and click “Extension”. Load the “micro:bit” block under “Board”, then click “HuskyLens AI Camera” under “Sensor”.

projectImage

STEP 3: Command Learning

Here are the main instructions used.

projectImage

① Initialize only once, between the start of the main program and the loop. You can select I2C or soft-serial; there is no need to change the I2C address. Note that the “output protocol” of your HuskyLens sensor must be set to match the program, otherwise data cannot be read.

projectImage

② You can switch to other algorithms freely, but note that only one algorithm can run at a time, and switching between algorithms takes some time.

projectImage

③ The main controller requests HuskyLens to store data in the “Result” once (saved in a memory variable on the main board; each request refreshes the data once). The data can then be read from the “Result”. The latest data is obtained only when this block is called.

projectImage

④ Checks, from the requested “Result”, whether there is a frame or arrow on the screen, including both learned (ID > 0) and unlearned ones; returns 1 if there is one or more.

projectImage

⑤ Checks, from the requested “Result”, whether IDx has been learned.

projectImage

⑥ Checks whether the IDx requested from the “Result” is on the screen. “Frame” refers to algorithms that draw frames on the screen; “arrow” refers to algorithms that draw arrows. Select “arrow” only when the current algorithm is line tracking; for all others, choose “frame”.

projectImage

⑦ Gets a parameter of IDx from the “Result”. If this ID is not on the screen or has not been learned, it returns -1.
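In text form, the request-then-read pattern behind blocks ③–⑦ looks roughly like the sketch below. The `huskylens` object and its method names are hypothetical placeholders for the Mind+ blocks, not a real library API.

```python
# Hypothetical sketch of the request/"Result" pattern (not a real API).
while True:
    huskylens.request_once()            # (3) refresh the cached "Result"
    if huskylens.frame_on_screen():     # (4) any frame visible at all?
        if huskylens.is_learned(1):     # (5) has ID1 been learned?
            if huskylens.is_on_screen(1, "frame"):         # (6) ID1 visible?
                x = huskylens.get_parameter(1, "x_center") # (7) read data;
                y = huskylens.get_parameter(1, "y_center") # -1 if absent
```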

STEP 4: Coordinate Analysis

The screen resolution of HuskyLens is 320×240, as shown below. The center point of the object coordinates obtained by the program always falls within this range. For instance, an obtained coordinate of (160, 120) means the tracked object is at the center of the screen.

projectImage
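As a quick illustration of how to interpret these values, the plain-Python helper below classifies a coordinate pair against the 320×240 screen and its (160, 120) center point.

```python
# Illustrative only: describe where a point sits on the 320x240 screen.
WIDTH, HEIGHT = 320, 240
CX, CY = WIDTH // 2, HEIGHT // 2        # screen centre: (160, 120)

def describe(x, y):
    horiz = "left" if x < CX else "right" if x > CX else "centre"
    vert = "top" if y < CY else "bottom" if y > CY else "centre"
    return horiz, vert

print(describe(160, 120))               # ('centre', 'centre')
print(describe(40, 200))                # ('left', 'bottom')
```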

STEP 5: Flowchart Analysis

projectImage

3. Sample Program

projectImage

4. Result

projectImage

Open the Mind+ serial port to read the data. The tracked target is lost when (-1, -1) appears in the data.
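In code, the lost-target case can be handled with a simple guard; (-1, -1) is the value the read blocks return when the ID is absent, as noted in step ⑦ above.

```python
# Skip the tracking step whenever the target has left the screen.
if (x, y) == (-1, -1):
    print("target lost")                # tracked object left the screen
else:
    print("target at", x, y)
```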

Task 2: Pan-tilt-controlled Smart Spotlight

1. Structure building and hardware connection

Now that we know the HuskyLens object tracking and pan-tilt functions, we can assemble the 2-DOF pan-tilt.

(1) Cut out the single horn and the X-horn, as shown below.

projectImage

(2) Fix the X-horn to the base with screws.

projectImage

(3) Attach one servo to the base; pay attention to its direction.

projectImage

(4) Fix the single horn at the position shown in the figure below.

projectImage

(5) Combine the base with the part above.

projectImage

(6) Attach the second servo to the last structural component.

projectImage

(7) Assemble all the parts together. You need to prise the buckle open a little when installing.

projectImage

(8) Fix the HuskyLens sensor onto the pan-tilt. Since this pan-tilt is not specifically designed for the HuskyLens sensor, we use a piece of corrugated paper to reinforce the mounting.

projectImage

(9) The center of gravity of the pan-tilt is shifted to one side, so we add a larger base to keep it stable.

projectImage

(10) Connect hardware as shown below.

projectImage

*Note: pay attention to the pins when connecting.

HuskyLens - I2C

X-axis Servo - P8

Y-axis Servo - P9

2. Program Design

Step 1: Command Learning

To keep the spotlight tracking the object, we will mainly use the function below:

projectImage

Step 2: Servo Coordination

From the previous task we know that the resolution of HuskyLens is 320×240, so the coordinate of the screen’s center point is (160, 120). We can make the servos perform the following actions to track the object according to the coordinates of its center point.

projectImage
projectImage

When the center point of the object frame reaches the center of the screen, the system determines that the object is at the center of the spotlight. The two LEDs on the HuskyLens serve as the spotlight.
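One common way to express this in code is a small per-axis “nudge” rule: if the object is on one side of center, step that servo one degree toward it, with a small dead band so the servo rests when the object is near the center. The sketch below is illustrative only; the step direction depends on how your servos are mounted.

```python
# Illustrative centring rule for one axis (direction depends on mounting).
DEADBAND = 20                           # pixels of tolerance around centre

def step_angle(coord, centre, angle):
    if coord < centre - DEADBAND:
        angle += 1                      # object on the low side: step up
    elif coord > centre + DEADBAND:
        angle -= 1                      # object on the high side: step down
    return max(0, min(180, angle))      # clamp to the servo's travel

x_angle, y_angle = 90, 90               # servo start positions
x_angle = step_angle(40, 160, x_angle)  # object far left -> pan steps to 91
y_angle = step_angle(200, 120, y_angle) # object low      -> tilt steps to 89
```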

Step 3: Flowchart Analysis

The program flowchart can be designed as below:

projectImage

Step 4: Variables and Blocks

To simplify the program, we will use the “variable” and “Make a Block” functions in Mind+. A variable stores information that can be recalled later when needed. A block stores a section of commands for later reuse.

projectImage

3. Sample Program

Since we added two servos to this program, we have to load the related servo block in the extension module.

projectImage

(1) To begin, set up the variables and blocks. Create two numeric variables named “X-axis” and “Y-axis”, and make three blocks: “Initialize Servo”, “Move on X-axis”, and “Move on Y-axis”.

projectImage

(2) Define the three blocks. The initialize block sets the servos to their initial positions and associates the servo rotation angles with the variables “X-axis” and “Y-axis”.

projectImage

(3) Define “Move on X-axis”. This block judges the X coordinate of the object’s center point and makes the servo execute the corresponding action.

projectImage

(4) Similarly, “Move on Y-axis” judges the position on the Y-axis and performs the corresponding action.

projectImage

(5) Finally, complete the main program by calling the previously defined blocks. A text-based sketch of the whole program follows the screenshot below.

projectImage
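For reference, here is a hedged MicroPython sketch mirroring the block structure of the program. The `huskylens` helper and its methods are hypothetical placeholders for the Mind+ HuskyLens blocks, while the servo wiring follows the project (pan on P8, tilt on P9).

```python
# Hypothetical MicroPython mirror of the Mind+ program (not a real API).
from microbit import pin8, pin9

DEADBAND = 20
x_angle, y_angle = 90, 90               # the "X-axis"/"Y-axis" variables

def write_servo(pin, angle):
    # 0° -> 0.5 ms pulse, 180° -> 2.5 ms, expressed as duty out of 1023
    pin.write_analog(round((0.5 + angle / 180 * 2.0) / 20 * 1023))

def init_servos():                      # the "Initialize Servo" block
    for pin in (pin8, pin9):
        pin.set_analog_period(20)       # 50 Hz servo signal
    write_servo(pin8, x_angle)
    write_servo(pin9, y_angle)

def step_angle(coord, centre, angle):   # shared "Move on X/Y-axis" logic
    if coord < centre - DEADBAND:
        angle += 1
    elif coord > centre + DEADBAND:
        angle -= 1
    return max(0, min(180, angle))

init_servos()
while True:
    huskylens.request_once()            # refresh the "Result" buffer
    if huskylens.is_on_screen(1, "frame"):
        x = huskylens.get_parameter(1, "x_center")
        y = huskylens.get_parameter(1, "y_center")
        x_angle = step_angle(x, 160, x_angle)   # "Move on X-axis"
        y_angle = step_angle(y, 120, y_angle)   # "Move on Y-axis"
        write_servo(pin8, x_angle)
        write_servo(pin9, y_angle)
```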

4. Result

The HuskyLens LEDs track the person on the stage like a spotlight.

projectImage

Project Summary:

In this lesson, we learned how to use the object tracking function of HuskyLens and, combined with a pan-tilt, realized an automatic smart spotlight.

Review:

1. Learned the main technologies of object tracking;

2. Got to know the principle and use of a pan-tilt;

3. Learned the concept and use of variables and blocks.

Project Development:

Can we use the HuskyLens object tracking function to make an automatic shooting device based on this smart spotlight project? Let HuskyLens automatically aim at a moving or fixed target, use a laser as the “bullet” to shoot, and use a light sensor to detect whether it hits the center of the target.

Extended Knowledge:

Object tracking deep learning:

Move the HuskyLens sensor or the target, and the yellow frame on the screen will automatically track the object. While tracking, the screen shows “Learning: ID1”, indicating that HuskyLens is learning the object as it tracks it, which improves tracking ability. Alternatively, long-press the “function button” to enter the parameter settings of the object tracking function, select “Learn Enable”, then short-press the function button to confirm.

projectImage

When the recognition result meets your requirements, you can turn off “Learn Enable”.

*Tips: Only one object can be tracked at a time. It can be any object with a clear outline, even various gestures.

Next Class: Self-service Checkout | Huskylens Playground with micro:bit EP04 >

License: All Rights Reserved