By the end of the lesson, students should be able to:
1. Understand and use the Smart Grayscale Sensor and the Logic AND Module
2. Complete the project under the guidance of the teacher
3. Extend the project by applying the knowledge they have learned
Computer with Mind+ software installed, BOSON Artificial Intelligence Starter Kit (Micro:bit*1, Expansion Board*1, Neurone Module*2, Ultrasonic Sensor*1, Smart Grayscale Sensor*1, Logic AND Module*1, Red LED Module*1)
Textbook, pen, scissors, utility knife, double-sided adhesive tape, compass or round object, paper box
Preface: This is the first lesson of School Bus Transformation. In the previous lesson, we learned about the neurone module and how to connect two neurone modules. In this chapter, we will build a neurone module network, progressing from simple to complex. Along the way, students will watch the school bus become "smarter" and "smarter" in practice, gaining a basic understanding of how neural networks work in AI and a better grasp of what they encountered in their first experience with the AI module. Starting from automatic braking in driverless vehicles, students will make "eyes" for the magical school bus with the neurone module.
For reference: In this part, you can lead students to explain the phenomenon they observe using the knowledge and skills from the previous chapter, then identify the driving problem of this project and break down the functional requirements and general steps, so that they can begin to clarify their own design ideas.
Intro Question: Driverless vehicles can "see" obstacles ahead on the road and decide whether they need to slow down and stop. Drawing on what you learned about the single neurone module, can you briefly explain how that works?
Driving Question: How can we use the neurone module to recognize obstacles ahead and determine whether the vehicle needs to slow down and stop?
Function 1: Detect obstacles in front of the vehicle
Function 2: Determine whether the vehicle needs to stop based on the recognition result
For reference: This part covers the knowledge and skills related to the project; let students learn and use the Smart Grayscale Sensor and the Logic AND Module.
The grayscale sensor integrates both analog and digital output. A short press of the button switches between analog and digital output modes.
In analog output mode, the sensor outputs a signal corresponding to the grayscale of the color on the ground or desktop, and the LED stays on while a color is detected. Based on this principle, the sensor can also be used to gauge the distance between an object and the sensor: the closer the distance, the brighter the LED.
In digital output mode, the sensor distinguishes only black and white: white means no output signal (LED off); black means an output signal (LED on).
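The relationship between the two output modes can be sketched in plain Python. This is an illustrative model only: it assumes digital mode behaves like a fixed threshold applied to the analog grayscale reading; the threshold value and the 0-1023 range are hypothetical, not taken from the sensor's documentation.

```python
# Illustrative model of the grayscale sensor's two output modes.
# Assumption: digital mode acts like a fixed threshold on the analog
# reading. THRESHOLD and the 0-1023 range are hypothetical values.
THRESHOLD = 512

def digital_output(analog_value: int) -> bool:
    """True = signal present (dark surface, LED on);
    False = no signal (white surface, LED off)."""
    return analog_value > THRESHOLD

print(digital_output(900))  # dark surface reading -> True
print(digital_output(100))  # white surface reading -> False
```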
The relationship between the two inputs of the Logic AND Module is the same as that between two conditions joined by AND in a program. We use two switches to represent the two inputs: "OFF" means there is no input signal, and "ON" means there is an input signal. The light being on or off indicates the presence or absence of the output signal.
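The two-switch demonstration above can be mirrored in a short plain-Python sketch (a simulation, not the module's firmware): the output signal is present only when both inputs are present.

```python
# Minimal simulation of the Logic AND Module's behavior:
# the output is ON only when BOTH inputs are ON.

def logic_and(input_a: bool, input_b: bool) -> bool:
    """True means a signal is present; False means no signal."""
    return input_a and input_b

# Truth table, matching the two-switch demonstration:
for a in (False, True):
    for b in (False, True):
        state = "light on" if logic_and(a, b) else "light off"
        print(f"A={'ON' if a else 'OFF'}, B={'ON' if b else 'OFF'} -> {state}")
```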
For reference: This part aims to help students formulate design ideas based on the knowledge and skills learned in the previous part.
Hardware Design Idea:
Ultrasonic sensors can detect objects in front of them and measure the distance. However, in a real driving environment, not every object requires the car to stop (for example, a car ahead traveling at the same speed); the brake needs to be applied only for objects that gradually approach the driverless vehicle. So we use the neurone module to learn this gradual approaching process.
The smart grayscale sensor can also gauge distance, based on its own working principle. To make the recognition more accurate, the ultrasonic and smart grayscale sensors are used at the same time, and their detection data is processed by the Logic AND Module: the braking action is performed only when both the ultrasonic and smart grayscale modules output a signal at the same time.
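The braking decision described above can be summarized in a small Python sketch. The function and variable names are illustrative, not from the kit's actual API; the point is that braking requires both sensor signals at once, exactly what the Logic AND Module provides in hardware.

```python
# Sketch of the braking decision: brake only when BOTH the ultrasonic
# sensor and the smart grayscale sensor report a signal at the same time.
# Names are illustrative, not part of the kit's real API.

def should_brake(ultrasonic_detects: bool, grayscale_detects: bool) -> bool:
    # The Logic AND Module passes a signal only when both inputs are present.
    return ultrasonic_detects and grayscale_detects

# Example: only the ultrasonic sensor sees something (e.g. a car far
# ahead moving at the same speed) -> no braking.
print(should_brake(True, False))  # False
print(should_brake(True, True))   # True: both sensors agree, so brake
```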
Software Design Idea:
Use the Mind+ stage to animate the school bus driving. The school bus sits at the bottom of the stage, while the lane lines and the other car move down the stage to simulate the school bus driving upward. If the lane lines stop moving, the school bus has stopped.
For reference: Given the difficulty of the project procedures in this section, teachers should lead students through them step by step. The recommended programming sequence is "Background-School Bus-Vehicle-Accident".
Learning stage: Run the program, press the learning buttons of the two neurone modules at the same time, move a book toward the sensor at a certain speed so the modules can learn, then release the learning buttons. If the animation on the stage stops, the learning has succeeded.
Adjustment stage: Referring to the approaching speed used in the learning stage, adjust the neurone module as needed.
For reference: This part asks students to reflect on and share their work. You can remind them to cover these aspects: How do you feel after finishing this project? What difficulties did you encounter while building it, and how did you overcome them? What do you think about artificial intelligence? After a set amount of time, have two students share their work and ideas.
For reference: In this part, you can summarize the project by raising questions for students to think about and discuss, so as to recall the content of this lesson and deepen their understanding of the project.
Question: Can you explain the working principle of automatic braking in a driverless car?
Answer: Refer to the design analysis section.
For reference: At the end of this lesson, you can assign homework to students as an extension of the course.
Question: Work with a classmate (combining your two kits) to use the vehicle model from this lesson to build a side collision avoidance system for the vehicle.
The driverless vehicle is a type of smart car, also called a wheeled mobile robot. It relies mainly on an intelligent driving system based on the in-vehicle computer to achieve unmanned driving.
Like many other technologies, unmanned driving has developed gradually. There are three stages:
Stage 1: Assisted driving. Driver-assistance functions such as lane tracking and adaptive cruise control belong to this stage, but the human driver is still the main operator.
Stage 2: Semi-automatic driving. At this stage, automatic driving under computer control can already complete the journey to a destination and can serve as a backup system for the driver. However, due to factors such as laws and regulations, it still cannot act as the primary operator for the entire drive.
Stage 3: Fully automatic driving. Technology, cost, laws, and regulations no longer hinder its popularization. The computer-controlled system acts as the primary operator, and the driver can take over at any time.
Due to technical and regulatory restrictions, most current driverless vehicles are at stage 2. There are two mainstream driverless vehicle technologies: LiDAR, and camera plus ranging radar.