Nearly everyone has played a game at some point in their lives. No matter what age you are, there are games that interest you. Of course, we should not spend too much time on games or let them interfere with our lives. Played in moderation, games can aid our intellectual development. It would be great if we could develop a game related to learning, so that children can pick up something useful while playing.
There used to be a game console called "Xiao Ba Wang" that included a game for learning English vocabulary; I can still remember its background music. Today we are going to develop our own game for learning English vocabulary.
In this project, we are going to learn about the object tracking function of HUSKYLENS, using its built-in machine learning capability to recognize and track learned objects. Mind+ 1.6.3 and above supports HUSKYLENS programming in Online mode, so there is much more we can do by combining Mind+ and HUSKYLENS. Click the little green flag to start the game. First, make HUSKYLENS learn a face so that it can track it. When a face appears within HUSKYLENS's recognition range, the program checks whether it is a learned face; if it is, the Mind+ robot follows the face. By moving the face within the recognition range, we can control the Mind+ robot's movement (the starting UI). At the same time, items fall from the top of the stage. When an item touches the Mind+ robot (the game UI), you score 1 point; when an item misses the robot and touches the edge of the stage, you lose 1 point. When your score exceeds 20 (a threshold you can adjust to set the difficulty), the script stops (switching to the victory UI) and the game ends.
To track a moving object, visual object tracking is needed in addition to manual operation. This technology is already widely used in daily life, for example in video surveillance and UAV follow shots. In this project, we use the object tracking function of HUSKYLENS to build a vocabulary game machine.
As one of the vital functions of AI visual recognition, object tracking is a type of behavior recognition. Object tracking is a key problem in computer vision, referring to the process of making continuous inferences about a target's state in a video sequence; it can be simply regarded as recognizing and tracking objects moving within the visual range of the camera. This technology has a wide range of applications in both military and civilian fields.
The camera collects image information and sends it to the computer. After analysis and processing, the computer works out the relative position of the moving object and rotates the camera to track it in real time. An object tracking system is mainly divided into four steps: object recognition, object tracking, movement prediction, and camera control.
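As a minimal sketch of how these four steps fit together, the following Python loop (hypothetical; HUSKYLENS performs the real pipeline on-board with its own algorithms) treats each frame's detection as step 1, estimates motion across frames as step 2, extrapolates one frame ahead as step 3, and pans a simulated camera toward the prediction as step 4:

```python
def run_tracker(detections, cam_x=0.0, gain=0.5):
    """Toy one-axis tracking loop: recognition -> tracking -> prediction -> camera control.

    detections: object x-positions reported by a detector, one per frame.
    cam_x:      current camera pan position.
    gain:       fraction of the remaining error corrected per frame.
    """
    prev = None
    for x in detections:                     # 1. object recognition: detector reports x
        velocity = 0 if prev is None else x - prev  # 2. tracking: motion across frames
        predicted = x + velocity             # 3. movement prediction: one frame ahead
        cam_x += gain * (predicted - cam_x)  # 4. camera control: pan toward prediction
        prev = x
    return cam_x
```

The gain keeps the camera motion smooth: rather than jumping straight to the predicted position, it corrects only part of the error on each frame.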
Object recognition means obtaining accurate appearance information about the object through image processing algorithms against a static background, then recognizing and marking the shape of the object, as shown in the figure.
Object tracking refers to following the object through the subsequent image sequence using algorithms, based on the appearance characteristics obtained in the previous step, and carrying out further learning during tracking so as to make it more accurate.
Movement prediction means using algorithms to predict where a moving object will appear in the next frame, so as to optimize the algorithm and improve efficiency. As the picture shows, the bird's subsequent path and actions can be predicted from its movement trend in the first few seconds.
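A minimal form of movement prediction is linear extrapolation: assume the object keeps its last observed velocity. This sketch (an illustration only, not the algorithm HUSKYLENS actually uses) predicts the next position from the last two observed ones:

```python
def predict_next(p_prev, p_curr):
    """Predict the next 2-D position by linear extrapolation:
    next = current + (current - previous)."""
    vx = p_curr[0] - p_prev[0]  # velocity per frame along x
    vy = p_curr[1] - p_prev[1]  # velocity per frame along y
    return (p_curr[0] + vx, p_curr[1] + vy)

# A bird seen at (0, 0) then (2, 1) is expected near (4, 2) next frame.
print(predict_next((0, 0), (2, 1)))
```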
Camera control means moving the camera in the direction of the object's motion while collecting image information. It usually requires coordination with a pan-tilt mount (gimbal) or other movement mechanism.
Object tracking is mainly used in:
1. Smart Video Surveillance: based on motion recognition (human recognition based on gait, automatic object detection), automated monitoring (watching for suspicious behavior), and traffic monitoring (collecting real-time traffic data to direct traffic).
2. Human-Computer Interaction: traditional human-computer interaction relies on the keyboard and mouse. When a computer needs to recognize and understand posture, movement, and gestures, tracking technology is the key.
3. Robot Visual Navigation: for a smart robot, tracking technology can be used to compute the motion trail of a filmed object.
4. Virtual Reality (VR): 3D interaction and virtual character action simulation in virtual environments benefit directly from the research results of video-based human motion analysis, providing richer forms of interaction for participants. Human tracking and analysis are the key technologies here.
If you want HUSKYLENS to follow your steps, it needs a pair of "eyes" to gaze at you. How can we realize this function? We are going to use the object tracking function of the HUSKYLENS sensor.
This function is a built-in algorithm of the sensor. It learns the features of an object, tracks the object's position on the screen, and feeds the position coordinates back to the main control board. Using the acquired face position, we can drive the Mind+ robot to follow in real time.
Different from color recognition and face recognition, object tracking can learn and recognize an object (or a person) as a whole. Color recognition deals only with color, and face recognition deals only with one part of the body, while object tracking learns the overall characteristics of the object in order to track it.
Point HUSKYLENS at the target object, adjusting the distance until the object is contained in the yellow frame at the center of the screen. If the object is hard to fit completely inside the yellow frame, framing its distinctive features is also fine. Then long-press the "learning button" to learn the object from various angles and distances. During the learning process, a yellow frame with the words "Learning: ID1" will be displayed on the screen.
When HUSKYLENS has finished learning the object at different angles and distances, release the "learning button" to end the learning.
Note: If there is no yellow frame in the center of the screen, it means that HUSKYLENS has learned an object before. Please let it forget the learned one and learn again.
After learning the object tracking function of HUSKYLENS, let's build the "vocabulary game". The first function to implement is controlling the Mind+ robot's movement with the movement of a face. Then, create a list to store the English vocabulary (so the player can review the words learned that day) and a score variable. Check whether the score is above 20; if so, stop the script and broadcast a message (to switch to the victory background). The last part is the program for each item character: each item is hidden at first and shown when it starts as a clone. If an item touches the Mind+ robot (indicating that the word has been learned), the score increases by 1; if it misses the robot and hits the edge of the stage, the score decreases by 1. When an item receives the broadcast message, it stops its script and hides.
Here we are going to make HUSKYLENS learn a human face. If it detects a face learned before, we use the movement of the face in front of the camera to control the Mind+ robot, and we can add some music for a better effect (the music function in the module).
After completing the functions above, we need to program a single item. In fact, we only need to write the program once and copy it. The item is hidden at first and only shown when it starts as a clone. If the item touches the Mind+ robot (indicating that you have learned the word), the corresponding word is read out and the score increases by 1; if it misses the robot and hits the edge of the stage, the score decreases by 1. When the item receives the broadcast message, it stops its script and hides.
Create an English vocabulary list to store all the learned words. Create a score variable to determine whether the score has passed 20 (you can set the difficulty yourself). If so, the program broadcasts a message, reads out "victory", and stops the script.
HUSKYLENS: I2C pin (T—SDA, R—SCL, +—5V, - —GND)
We control the movement of the Mind+ robot on the stage through the object tracking function of HUSKYLENS. First, we need to know the robot's range of motion. We can find the X-axis range by moving the robot to each end of the bottom of the stage; in this program, the range is (-195, 220). The motion range of HUSKYLENS on the X-axis is (40, 270). Finally, we achieve relative motion through the map block in the Operators category. For a better effect, we can also change the robot's costume, change the background, add sound effects, or even make the background color keep changing while the program runs.
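The map block performs a linear rescaling between two ranges. As a sketch, the same mapping from the HUSKYLENS X range (40, 270) to the stage X range (-195, 220) can be written as:

```python
def map_range(x, in_min, in_max, out_min, out_max):
    """Linearly map x from [in_min, in_max] to [out_min, out_max],
    like the map block in Mind+'s Operators category."""
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

# Map a HUSKYLENS frame x-coordinate (40..270) to a stage x-coordinate (-195..220).
robot_x = map_range(155, 40, 270, -195, 220)  # face near the middle of the camera view
```

The two endpoints map exactly (40 becomes -195, 270 becomes 220), so moving the face across the whole camera view sweeps the robot across the whole stage.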
Step1. Mind+ Software Settings
Open Mind+ (V1.6.3 and above), switch to "Online" mode, click "Extension", choose "Arduino Uno" under "Board", then load "HUSKYLENS EDU" and "Music", and under "Function" select "Text to Speech" from the Internet category.
Step2. Instruction Learning
Here are the instructions mainly used.
Step3: Flowchart Analysis
(1) Program Part
Mind+ Robot Program:
For the picture part, click "choose a backdrop" and find "concert" and "hearts2". Rename "concert" to "game start" and add the words "click the green flag to start" to the background. Rename "hearts2" to "In game" and write code in its script area to continuously change the background color.
Background of start:
Tick the box beside the variable when creating it, and it will be displayed on the stage in real time.
Click the green flag. When HUSKYLENS detects the learned face, the Mind+ robot moves relative to the HUSKYLENS "learning frame", with corresponding sound effects, and the game background continuously changes color.
According to the code above, while the program runs, our items need to fall from the top of the stage. Here we use the clone blocks: when an item touches the Mind+ robot, its content is read out and the item disappears; when it touches the edge of the stage, it simply disappears. After the code is improved later, the corresponding English words can be read aloud.
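The scoring rule for a falling clone can be sketched in plain Python (the actual game uses Mind+ blocks; the catch width here is an assumed value for illustration): an item ends its fall at some x-position, and it counts as caught only if the robot is close enough:

```python
def drop_result(item_x, robot_x, catch_width=40):
    """Return the score change for one falling item:
    +1 if the item lands within catch_width/2 of the robot (it touched the robot),
    -1 if it misses the robot and reaches the edge of the stage."""
    return 1 if abs(item_x - robot_x) <= catch_width / 2 else -1
```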
Program of Object Character:
We add different items from the character library and rename them to the corresponding English words (here we add 5 items: balloon, banana, apple, basketball, butterfly). After finishing one character's program, we can simply copy the code to the other characters. We can set them to appear at different times, which makes the effect more lively. Here are two examples for illustration.
Program of the Balloon:
Program of the Banana:
When HUSKYLENS detects a learned face, the Mind+ robot moves on the stage with the movement of the face, accompanied by music. Meanwhile, items fall from the top of the stage. When an item touches the Mind+ robot, the corresponding word is spoken and the item disappears. An item also disappears when it touches the edge of the stage.
Based on the code above, we need to add two variables. One is the English vocabulary list that stores the learned words, so the player can review the English words learned that day when the game is over; the other is the score, used to decide whether the player wins. First, we create and initialize these two variables, and we can add a piece of MP3 music. Then the program checks whether the score is over 20 (you can set the exact value yourself); if so, it switches to the victory background, broadcasts the message, and stops the script. For the item programs, each time an item touches the Mind+ robot, the score increases by 1, the word it represents is inserted into the list, and the word is spoken. Finally, we add a victory background and switch to it when the script stops.
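The score and vocabulary-list bookkeeping described above can be sketched as a small Python class (hypothetical names; the real game implements this with Mind+ variables, a list, and a broadcast message):

```python
class VocabGame:
    """Tracks the score and the list of words collected in one session."""

    def __init__(self, win_score=20):
        self.score = 0
        self.learned = []        # English vocabulary list shown to the player
        self.win_score = win_score

    def on_catch(self, word):
        """An item touched the robot: +1 point, record the word.
        Returns True once the score exceeds the winning threshold
        (in the game: broadcast the message, read 'victory', stop the script)."""
        self.score += 1
        if word not in self.learned:
            self.learned.append(word)
        return self.score > self.win_score

    def on_miss(self):
        """An item missed the robot and hit the edge of the stage: -1 point."""
        self.score -= 1
```

Keeping the list free of duplicates means the player's review list at the end of the game shows each word once, even if the same item was caught several times.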
1) Mind+ Script
2) Object Program (only 1 for example)
When we click the green flag, if HUSKYLENS detects a learned face, the Mind+ robot moves with the movement of the face, with music playing. We can then see the corresponding items falling from the top of the stage. When an item touches the robot, the score increases by 1, the corresponding English word is added to the list, and the word is spoken. When the score exceeds 20 points, the program reads out "victory", stops the script, and switches to the victory background. At this point, we have completed all the functions of the "vocabulary learning" game.
In this project, we learned a new function of HuskyLens, object tracking, and realized interaction between Mind+ and our face. All the functions work as expected. Now let's review the takeaway points of this chapter.
1. The principle of object tracking;
2. The learning process of HuskyLens object tracking;
3. The instruction commands of HuskyLens object tracking;
4. The combination of hardware and function modules in Mind+ Online mode.
We have only designed 5 English words in this project; how about adding more? Also, unlike a real game, players may get a negative score here. Could you revise the code so that the minimum score is 0? You can also adjust the difficulty of the game to suit your needs.