MAE 4190 Fast Robots
Hello! I am Heisen, currently a senior at Cornell University, majoring in Mechanical Engineering.
I am also minoring in Business and Information Science.
I am interested in product design, robotics, and bringing automation into the consumer market. I am passionate about building solutions into everyday experiences through customer-discovery research and user-experience & feature design, and I develop automated systems by combining my mechanical engineering background with experience in simple electrical systems and basic machine learning algorithms.
Lab 1A sets up the Arduino IDE and familiarizes us with the Artemis board and its libraries.
For the first part of the lab, we blinked the onboard LED of the Artemis board using example code "Blink".
Next, we tested out the Serial function on the Artemis board using the example file from Apollo3->Example4_Serial.
We employed the onboard temperature sensor of the Artemis board using example code Apollo3 -> Example2_analogRead.
We also used the onboard microphone with example file PDM -> Example1_MicrophoneOutput.
Lab 1B establishes the Bluetooth communication between my computer and the Artemis board. Basic functions are implemented on the Artemis through Arduino and in a Jupyter Notebook to demonstrate the communication protocols and familiarize us with them.
Setting up the virtual environment involves:
1. Confirming Python requirements on my computer (Python >=3.10, pip >=21.0)
2. Installing the virtual environment package and creating a virtual environment
3. Activating the virtual environment
4. Installing the required packages into the virtual environment folder (described by codebase below)
Configuring the Bluetooth connection involves:
1. Replacing the MAC address in the code with the MAC address of the Artemis board
2. Generating a new unique UUID, then updating it in ble_arduino.ino and connection.yaml
Task ECHO is a simple round trip: the computer sends a message to the Artemis board, the Artemis echoes it back to the computer, and it also prints the message it received on the Serial Monitor.
The next command sends a string containing three float values to the Artemis.
The Artemis parses the string and prints the extracted floats onto the Serial Monitor.
Implemented the GET_TIME_MILLIS command such that the Artemis replies with the time on its internal clock (in milliseconds).
The NOTIFICATION HANDLER receives a string value and parses out the necessary information: it extracts timestamps, or timestamps AND temperature, depending on the string received.
Coupled with the "ble.start_notify" function, it is able to "wait" for information from the Artemis without explicitly knowing when it will arrive.
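A minimal sketch of such a handler, assuming an illustrative "T:<ms>|C:<temp>" message format (the exact format and names here are assumptions, not my actual code):

```python
times, temps = [], []

def notification_handler(uuid, byte_array):
    # Decode the raw bytes, e.g. "T:12345" or "T:12345|C:24.6"
    s = byte_array.decode()
    for field in s.split("|"):
        label, value = field.split(":")
        if label == "T":
            times.append(int(value))    # timestamp in ms
        elif label == "C":
            temps.append(float(value))  # temperature reading

# Register it so it fires whenever the Artemis sends a string:
# ble.start_notify(ble.uuid['RX_STRING'], notification_handler)
```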
We determine the effective data transfer rate by requesting the time from the Artemis 100 times. We take the difference between the times of the 1st and last transmissions and divide by 100 to get the average period per message, the inverse of which is the transfer rate. While each message is received in string format, the information is effectively int-typed, so we calculate the byte transfer rate by multiplying the transfer rate (Hz) by 2 bytes per message.
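A quick sketch of that arithmetic (variable names are illustrative):

```python
# timestamps_ms: the 100 timestamps received over BLE, in milliseconds
elapsed_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
rate_hz = len(timestamps_ms) / elapsed_s  # messages per second
byte_rate = rate_hz * 2                   # ~2 bytes of int payload per message
print(f"{rate_hz:.1f} Hz, ~{byte_rate:.0f} bytes/s")
```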
We then measure the data collection rate within the Artemis by adding timestamps to an array, then sending each element of the array afterwards.
The Arduino code below demonstrates data collection first, then sending the timestamps one by one.
Doing similar arithmetic as before, we determine the average "data collection rate". Onboard data collection is SIGNIFICANTLY faster than sending a message at every instance.
This is a similar Arduino implementation to SEND_TIME_DATA, but temperature readings are now also included within the string message, as follows:
With the notification handler implemented earlier, we can parse out the temperature and time readings, then append them to their respective arrays.
Instantaneous looped data transfer runs at effectively 10-15 Hz, which is much slower than the onboard data collection rate of 44,500 Hz. Hence, it is most likely more efficient to do calculations onboard and log status into an array if the movement (or stunt) requires very quick calculations. However, if the action is less time-dependent and relies on instantaneous back-and-forth prompting between the Artemis and the computer, instantaneous data transfer will suffice.
As stated before, the Artemis can record ~89,000 bytes of data per second, while it can transfer the int equivalent of only ~25 bytes per second.
The Artemis board has 384 kB = 384,000 bytes of RAM. That is approximately equivalent to 192,000 two-byte int values, or about 4.3 s of continuous data collection at the ~89,000 bytes/s rate above.
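A quick check of that arithmetic (assuming 2-byte values, per the rates measured above):

```python
ram_bytes = 384_000     # 384 kB of RAM on the Artemis
value_size = 2          # bytes per stored int value
collect_rate = 89_000   # ~bytes/s of onboard collection (measured above)

print(ram_bytes // value_size)    # ~192,000 storable values
print(ram_bytes / collect_rate)   # ~4.3 s of continuous collection
```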
This lab connects the IMU to the Artemis, as we get acquainted with the strengths and weaknesses of the Accelerometer and Gyroscope.
For the first part of the lab, we connect the IMU to the Artemis through the QWIIC connector.
The IMU in this breakout board has the AD0 pin connected to VCC, and hence AD0_VAL is set to 1. If the AD0 pin were connected to ground, it would be set to 0.
In order to find the pitch and roll angles, we utilize the constant force of gravity, as reflected in the code below.
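For reference, the underlying tilt-from-gravity equations (standard form; the axis convention in my code may differ) are:

$$\theta_{pitch} = \operatorname{atan2}(a_x, a_z)\cdot\frac{180}{\pi}, \qquad \theta_{roll} = \operatorname{atan2}(a_y, a_z)\cdot\frac{180}{\pi}$$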
Roll readings at {-90, 0, 90} degrees and pitch readings at {-90, 0, 90} degrees are shown below.
When conducting a two-point calibration, a rotation of 180 degrees yields a measured range from -88.7 to 89.01 degrees.
((89.01 - (-88.7)) - 180)/180 = -0.012, i.e. roughly a 1.2% scale error. Quite accurate if noise is disregarded.
Data recorded from the Accelerometer tends to be noisy (shown below). Hence, we perform a Fourier Spectrum Analysis to choose the alpha for a Low Pass Filter.
Following the guide here, the Fourier Spectrum Analysis is shown below.
The results from the Fourier Analysis suggest that we would like the cutoff frequency to be around 5 Hz. Hence we calculate the alpha for the Low Pass Filter using a 50 Hz sampling rate and a 5 Hz cutoff frequency.
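A quick sketch of that calculation, assuming the standard RC low-pass relation alpha = dt / (dt + RC):

```python
import math

f_s, f_c = 50.0, 5.0              # sampling and cutoff frequencies (Hz)
dt = 1.0 / f_s                    # sampling period
rc = 1.0 / (2 * math.pi * f_c)    # equivalent RC time constant
alpha = dt / (dt + rc)
print(round(alpha, 3))            # ~0.386, close to the 0.385 used below
```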
The alpha is calculated to be 0.385, and the Low Pass Filter is implemented as shown below.
The results of the Low Pass Filter are shown below:
The Low Pass Filter is able to reduce the noise while following the overall readings from the Accelerometer.
The Gyroscope is able to measure the angular velocity of the Artemis board. The code below integrates the angular velocity over time to find angular position.
Comparing the angular positions calculated from the Gyroscope vs. the Accelerometer, the gyro has less noise but experiences drift. The gyro is also capable of calculating the yaw angle, which the Accelerometer is not.
The image below demonstrates the drift of the Gyroscope within 4 seconds of data collection.
The Complementary Filter is implemented to fuse the Accelerometer and Gyroscope readings. The alpha value is adjusted to reduce drift from the gyro while also suppressing outlying noise readings from the Accelerometer.
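In equation form (with alpha weighting the accelerometer in my convention):

$$\theta_t = (1 - \alpha)\,(\theta_{t-1} + \omega\,dt) + \alpha\,\theta_{acc}$$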
Results are shown below, comparing RAW data from ACC and GYRO, the LPF-filtered data from the Accelerometer, and the Complementary Filter data:
After removing any and all delays and Serial.print statements from the data-collection loop within the Artemis, we measure the rate of data collection.
The Artemis is able to collect data at a rate of 370 Hz. Compared to the 44,500 Hz loop rate found in Lab 1, measuring data from the IMU is significantly slower. However, it is still faster to collect data onboard the Artemis and then send it over Bluetooth than to send a message at every single data collection (~10 Hz).
The visualized data was stored in arrays onboard the Artemis, then sent to the computer for further processing. With 1903 datapoints, the Artemis is capable of storing 6.1 secs of continuous undelayed data.
Here are some stunts with the RC car!
This lab connects the Time of Flight (ToF) sensors to the Artemis, getting acquainted with the speed and accuracy of ToF sensors, and running 2 ToF sensors concurrently.
We connect ToF sensors to the Artemis through QWIIC connectors, communicating through the I2C channel. According to the datasheet, the default I2C address is 0x52.
My approach to using 2 ToF sensors begins with connecting the xshut pin of sensor 2 to a GPIO pin (pin 8) on the Artemis.
During setup, the Artemis shuts down sensor 2 through the xshut pin and assigns the still-active sensor an I2C address different from the default. Now the 2 ToF sensors have distinct addresses through which the Artemis can identify and communicate with them.
I will be placing sensors on the front side, and right side of the car. This allows me to gain information of distance to obstacles on 2 distinct sides.
I may miss obstacles if they are (1) too far, beyond the range of the ToF sensors, (2) too small, possibly missed between samples given the sampling frequency and a fast-moving car, or (3) too low, below the mounted height of the ToF sensor.
The I2C scan shows that a single ToF sensor has address "0x29". This does not match the "0x52" address from the datasheet because the Artemis scans the "true" 7-bit address, while the datasheet shows the 8-bit address whose last bit carries the read/write flag; 0x29 shifted left by one bit is 0x52.
The ToF sensor has 3 modes (Short, Medium, Long). After some testing, I noticed that the different modes yield rather similar results in terms of distance sensed. However, there is a difference in sensing precision (variance). In Short mode, the sensor shows much smaller variance at short distances (<200mm), while in Long mode, the sensor shows much smaller variance at long distances (>200mm). The sensor also reads 0mm for any distance less than 30mm. Hence, the mode I choose depends on how large and far apart I expect the obstacles to be, and on in which scenario I would prefer the sensor to be more accurate.
The testing setup is shown below:
I also made comparisons between the ToF operating in Bright, Room, and Dark environments. The sensor is able to pick up distances in all 3 environments, but the distances picked up in the Dark environment are closer to the real distance than in the other lighting environments.
Ranging times are tested with Arduino code below:
In terms of ranging times:
The ToF takes on average 3.91ms to range with the "stopRanging()" and "startRanging()" commands.
The ToF takes on average 2.37ms to range when ranging continuously (i.e. without calling "stopRanging()").
The ToF takes on average 2.24ms to range without checking for data availability.
The full data visualization is shown below:
In order to communicate with both ToF sensors concurrently, I have to assign a different address to one of them. One of the ToF sensors has its xshut pin connected to Artemis pin 8. Say the one with xshut connected is sensor 2, and the one without is sensor 1: I turn off sensor 2 through the xshut pin by setting pin 8 low, then change the I2C address of sensor 1 to an arbitrary "20" with the "setI2CAddress()" function.
The code below demonstrates the setup of 2 ToF sensors:
We tested the speed of the ToF sensor in a short Serial loop, sending distance data only when data is ready, using the "checkForDataReady()" function.
Implemented below:
Results shown:
The delay was ~100ms on average, which comes out to around a 10Hz sampling rate. The limiting factor might be the Serial printing between loop iterations, which slows down the processing of the Artemis.
Combining all the labs up to now, we are able to get and store sensor readings from 2 ToF sensors and the IMU into arrays within the Artemis, then send them through Bluetooth to a Jupyter Notebook for further analysis.
The code below demonstrates the setup:
The data is then visualized in Python through Matplotlib!
This lab connects onboard motors to the motor drivers, to be controlled by the Artemis. The end of the lab demonstrates Open Loop control of the RC car through the Artemis.
In addition to the wiring for the ToF sensors, the following diagram wires the 2 onboard motors, each through a motor driver, to pins on the Artemis board. The left motor is pinned to pins 2 and 3, and the right motor to pins 6 and 7. These pins are chosen because they are capable of PWM output signals.
The Artemis and motors are powered by separate batteries (650mAh and 850mAh respectively).
This is for 3 reasons:
(1). A separate 850mAh battery is preferable to supply a steady and consistent current to both motors.
(2). Separate batteries decrease chance of current/voltage spikes for both steady signal input and output for the Artemis, and steady power for the motors.
(3). Battery operation capacity. The 850mAh battery is able to supply power to the motors for a longer period of time than the 650mAh battery, making it more convenient for testing and use.
After soldering the motor driver to the Artemis pins, leaving loose one end of each wire destined for VIN, one of the GNDs, OUT1, and OUT2 on the motor driver, I connect the positive end of the power supply to VIN, the negative end of the power supply to GND, the positive probe of the oscilloscope to OUT1, and the ground probe of the oscilloscope to OUT2.
The power supply is set to 5V as the motor driver is rated for a voltage range from 2.7V to 10.8V.
The oscilloscope is set to "auto set" for convenience.
The code below demonstrates the PWM output from the motor driver. The PWM signal is set to increase from 0 to 100% duty cycle within a for loop to showcase control and variation of the motor speed.
The following video shows the PWM readings on the oscilloscope:
After soldering the remaining open-ended wires to their respective parts (motor and battery), and using the same speed-incrementing code as above, the following video demonstrates the car operating as expected, powered only by batteries:
To find the lower limit of usable PWM values, we slowly increase the PWM value from 0 in increments of 5 until the car (barely) moves from a standstill on the ground. The lower limit of PWM is not simply anything >0, as static friction exists and the motor driver itself is rated for a minimum of 2.7V. The lower PWM limits differ depending on the type of motion:
Straight-line movements require a lower PWM value of ~35, as both motors move in the same direction. Shown below:
Turning-in-place movements require a higher PWM value of ~80, as the motors move in opposite directions. Shown below:
To calibrate the motors, we allow both motors to go forward and observe any drift or deviations the car makes when trying to go in a straight line. If the car drifts left, we add a calibration factor to increase the PWM value of the left motor, and vice versa.
The following code snippet initializes a calibration factor and applies it to the left motor in a straight-line function. With great fortune, the motors onboard the car are fairly well matched, and there is currently no need for a calibration value.
The following video demonstrates the car moving in a straight line for ~6 feet:
For simplicity and convenience in future coding, I have divided the simple motions of the car (straight, left-turn, right-turn) into separate functions. They are then called within the main loop when necessary. A "ran" boolean value is used so that the open loop program runs once when I hit the "RESET" button on the Artemis.
Code below:
Video demonstration below:
This lab experiments with PID controls on the car to drive towards the wall as fast as possible, and stop 1 ft away from the wall.
In order to send data from the Artemis to my computer for further analysis, I define a set of arrays to store data within the Artemis while PID is running. Storing data in an internal array is much faster than sending it over Bluetooth and ensures the loop runs as quickly as possible.
The data is then sent after the PID control loop is completed. As shown below:
As for receiving data, a basic data handler function is defined within the Python script, as demonstrated below:
A PID controller is separated into 3 parts:
(1). Proportional: The error between the desired distance and the current distance is multiplied by a constant Kp. This is the main driving force of the car towards the wall.
(2). Integral: The sum of all errors is multiplied by a constant Ki. The integral portion usually is used to eliminate steady state error.
(3). Derivative: The rate of change of the error is multiplied by a constant Kd. This usually helps with overshoot and oscillations around the target distance.
The PID controller is then calculated as the sum of the 3 parts.
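Written out, with e(t) the error between the target and measured distance:

$$u(t) = K_p\,e(t) + K_i\int_0^t e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}$$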
From lab 3, the ranging time of the ToF sensor is around 100ms, or ~10Hz. The loop rate of the Artemis is much quicker than 10Hz, hence the ToF sensor is the "bottleneck" keeping the PID control loop from being more agile and reactive. This issue may be remedied with extrapolation (later this lab), or Kalman Filters (in future labs).
The implementation of the PID controller is entirely on the Artemis. In my implementation, once the function is activated through BLE, it is mainly separated into 3 parts —
1. Receive P/I/D constants,
2. Take sensor readings,
3. Calculate output speed through PID controller
A few "hacks" are employed to boost the performance of the controller. i.e. I only limited the maximum input speed of the car when going forward, and not when going backward. The limit PWM for reversing is 255 (maximum) such that the car is able to stop much quicker.
For code cleanliness, the PID controller is a separate callable function. Implementation is demonstrated below:
In order to tune the PID controller, I have created auxiliary functions to better analyze data sent from the Artemis after every PID control test. The python function shown below calculates the max velocity of the car running into the wall. I ultimately also calculate ToF sampling rate and PID loop rate for every run.
I first started with just the P-controller. It yielded decent results with the maximum PWM @ 90 (1.79 m/s) without running into the wall.
However, it had very noticeable oscillations, and the car could clearly perform the task faster than 1.79m/s.
The following video demonstrates the car running with just the P-controller:
The data visualization is shown below:
The data shows that the car reaches a maximum speed of 1.79m/s while approaching the wall. The PID loop rate is around 104Hz, and the ToF sensor sampling rate is around 18Hz.
Since the steady state error was not greatly noticeable, but the oscillations were, I moved on to a PD controller to mitigate the oscillations and increase performance (safe maximum speed toward the wall). It yielded much stronger results with the maximum PWM @ 150 (2.27 m/s) without running into the wall.
The oscillations were drastically reduced and it looks pretty good!
The following video demonstrates the car running with the PD-controller P = 0.6, D = 70:
The data visualization is shown below:
The data shows that the car reaches a maximum speed of 2.275m/s while approaching the wall, almost double the speed of the P-controller. The PID loop rate is around 113Hz, and the ToF sensor sampling rate is around 19Hz.
To increase performance, we need more frequent estimates of the car's distance to the wall. However, the sampling rate of the ToF sensor is capped. As a rudimentary solution, we may extrapolate the position of the car at a given time using the discrete velocity calculated from the previous ToF readings.
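In symbols, with d_k the last ToF reading taken at time t_k and v_k the velocity estimated from the last two readings:

$$\hat{d}(t) = d_k + v_k\,(t - t_k)$$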
Implementation shown below:
This lab controls the yaw (z-axis) angle of the robot using the IMU as the sensor input to a PID controller.
The robot is required to take target angle inputs sent through BLE, and the PID controller drives the robot to the target angle while sustaining outside disturbances and interference.
In order to send data from the Artemis to my computer for further analysis, I define a set of arrays to store data within the Artemis while PID is running. Storing data in an internal array is much faster than sending it over Bluetooth and ensures the loop runs as quickly as possible.
The send_data command case within the Artemis code is defined to send the data to the Python side for analysis.
The PID controller is first activated (through the "start_pid_orient" command), and data collection only begins when the "start_record" command is sent through BLE. The collected data is then sent via the "send_data" command after the PID control loop is completed.
Commands to define a new target angle ("new_target_angle"), stop recording ("stop_record"), and stop PID control ("stop_pid_orient") are also defined.
A notification handler function is defined to handle incoming data from the Artemis.
As shown below:
A PID controller is separated into 3 parts:
(1). Proportional: The error between the target angle and the current angle is multiplied by a constant Kp. This is the main driving force turning the car toward the target angle.
(2). Integral: The sum of all errors is multiplied by a constant Ki. The integral portion usually is used to eliminate steady state error.
(3). Derivative: The rate of change of the error is multiplied by a constant Kd. This usually helps with overshoot and oscillations around the target angle.
The PID controller is then calculated as the sum of the 3 parts.
From lab 2, the sampling rate of the IMU is ~370Hz, relatively fast and comparable to the Artemis loop. I also used the DMP (Digital Motion Processing) feature of the IMU to find the yaw angle, which may actually sample faster than the Artemis main loop rate. However, the DMP allows setting its output frequency as a remedy (to be discussed later).
The implementation of the PID controller is entirely on the Artemis. In my implementation, once the function is activated through BLE, it is mainly separated into 3 parts —
1. Receive P/I/D constants,
2. Take sensor readings,
3. Calculate output speed through PID controller
For code cleanliness, the PID controller is a separate callable function. Implementation is demonstrated below:
As you may notice within the PID_controller function, and as previously mentioned, I used the DMP (Digital Motion Processing) feature of the IMU to find the current yaw angle of the car.
We first have to modify the SparkFun ICM-20948 library in Arduino, specifically, uncommenting line 29 in the ICM_20948_C.h file to define the constant ICM_20948_USE_DMP.
In addition to the existing IMU sensor setup (from Lab 2), the following code demonstrates the additional setup necessary to initialize the DMP function.
When calling the DMP to find the yaw angle within the PID controller function, we receive the q1, q2, q3 quaternion components, and call the defined quaternion_to_yaw(q1,q2,q3) function to convert them to a yaw angle for use in PID control.
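For reference, q0 can be recovered from the unit-norm constraint, and yaw follows the standard extraction (sign conventions vary between libraries):

$$q_0 = \sqrt{1 - q_1^2 - q_2^2 - q_3^2}, \qquad \psi = \operatorname{atan2}\big(2(q_0 q_3 + q_1 q_2),\; 1 - 2(q_2^2 + q_3^2)\big)$$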
The code is shown below:
NOTE: line 11 within the code gist, "myICM.setDMPODRrate(DMP_ODR_Reg_Quat6, 4)", sets the DMP output data rate.
The second argument is calculated by the equation: value = (DMP running rate / ODR) - 1
e.g. with the DMP running at 55Hz, value = 4 yields an output data rate of 11Hz.
This is helpful for setting the DMP output rate slower than the Artemis loop rate, so that the DMP's FIFO queue does not overflow. However, an overly slow DMP output rate yields a less responsive PID controller.
My solution for enabling the car to continuously receive BLE inputs and outputs is to use flags indicating the status of the car (i.e. pid_orient_on, pid_orient_recording).
The BLE commands then only alter the flags and other value initializations.
The main loop is then able to continuously call the write_data and read_data functions to handle BLE commands on every Artemis loop. If the flags are true, we enter the if-statements that call the PID controller, data recording functions, etc.
The code is shown below:
In order to tune the PID controller, I have created auxiliary functions to better analyze data sent from the Artemis after every PID control test. In addition to graphing target angle/current angle vs. time, and the P, I, D, and total PID speed values vs. time, I also measure the average DMP output rate and Artemis loop rate to optimize performance.
I first started with just the P-controller. It yielded decent results with P = 5; maximum PWM @ 200. However, there exists a substantial amount of steady state error due to the high deadband value (~90 PWM) of in-place turning for our setup of motors and car.
Results in the video below:
Graph & rate below:
To remedy the steady state error, I added the Integral term to the controller. This yielded much better results, ultimately returning the car to its target angle, with P = 5, I = 0.007; maximum PWM @ 200.
Results in the video below:
Graph & rate below:
The controller that works best for this setup and the objective of holding a constant target orientation angle is the PI controller. The Integral term accumulates the error, ultimately overcoming the deadband of this setup and correcting the car to precisely the target angle.
Performance would likely improve if we aligned the DMP output rate closer to the PID loop rate, making the controller more responsive with more up-to-date yaw angle data. However, a DMP output rate of ~11Hz works fine for this setup and objective. Hence, I opted to be conservative with the DMP output rate to ensure that the FIFO queue does not overflow and render the controller useless.
This lab is a continuation of linear PID control from Lab 5. The goal is to execute the behaviour from lab 5 faster.
Kalman Filters are capable of estimating the state of the robot (distance from the wall) in between readings from the Time-of-Flight sensor.
We first estimate the drag and momentum of the car by running the car at a selected PWM (120) and collecting the time, distance from wall, and input PWM data from the Artemis.
The following functions are used to calculate the instantaneous velocities of the car to find its top speed, 90% speed, and 90% rise time.
After truncating and plotting the data, we find the 90% rise time to be 728ms with a steady-state speed of 1112 mm/s, as shown below:
The drag and momentum of the car are then calculated as d = 0.0008993 and m = 0.0002843 respectively, using the arithmetic below:
The A and B matrices are computed with drag and momentum constants.
Delta_T is chosen to be 0.01s, matching the ~100Hz rate of the PID loop in which the filter runs (the ToF sensor itself only updates at ~10Hz, which is exactly the gap the Kalman Filter fills).
The C matrix is chosen to be [-1,0] as we are only measuring the distance from the wall.
sigma_1, sigma_2, and sigma_3 are the process and measurement noise parameters, tuned to optimize the filter. (Spoiler alert: sigma_1 = sigma_2 = 50 and sigma_3 = 5 yield the closest fit for my robot system.)
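Putting the pieces together, the model (standard state-space form from lecture, with state [distance, velocity]) is:

$$A = \begin{bmatrix} 0 & 1 \\ 0 & -d/m \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1/m \end{bmatrix}, \qquad A_d = I + A\,\Delta t, \qquad B_d = B\,\Delta t, \qquad C = \begin{bmatrix} -1 & 0 \end{bmatrix}$$

with process noise covariance $\Sigma_u = \operatorname{diag}(\sigma_1^2, \sigma_2^2)$ and measurement noise $\Sigma_z = [\,\sigma_3^2\,]$.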
The following snippet demonstrates implementing the Kalman Filter function in Python, which we use to call the filter on the previously collected data to plot the results.
We do this to compare the Kalman Filter's estimate of the car's distance from the wall against the actual distance, in order to tune the process and measurement noise parameters.
My initial process and measurement noise values were sig_1, sig_2 = 5 and sig_3 = 20. Setting the measurement noise parameter to 20 was recommended in lecture.
The plotted results are shown below:
Tuning the process noise up to sig_1, sig_2 = 40 yielded much better results while maintaining measurement noise sig_3 = 20:
Lastly, tuning the process noise up a bit more to sig_1, sig_2 = 50 and tweaking the measurement noise down to sig_3 = 5 yielded the best fit:
The first step in transferring what we did in Python to Arduino code is to initialize the matrices.
We use the BasicLinearAlgebra.h library to work with matrices.
The function is modeled after the KF function implemented in Python, with a call to the function demonstrated at the bottom:
Lastly, the KF function is implemented within the PID loop.
The graph below shows the difference between purely ToF distance readings and ToF supplemented with Kalman Filters:
This lab's goal is to do a stunt by any means necessary!
We first write hard-coded logic to test the car's ability to flip. It is achieved by running forward at full speed, then reversing at full speed. The code is shown below:
The flip is enabled by increasing the weight at the front end of the car, adding an extra battery pack for weight.
The car is better able to gain traction and flip over after cleaning the wheels free of dust.
The next stunt is to reach the wall and return back to the starting point. This is achieved by running the car forward for a set amount of time, then initiating the flip procedure, and reversing back to the starting point.
Code shown below:
The stunt is achieved by running the car forward at full speed, then reversing at full speed. The car is able to flip over and return back to the starting point.
Because the start and end points are the same, a hard-coded open-loop solution is sufficient to achieve the stunt while remaining fairly robust.
This lab maps out a static room, to be used in later localization and navigation tasks.
We build the map by placing the robot at set marked locations within the room and spinning it on its axis while collecting ToF readings.
We choose to control the robot rotation through Orientation Control.
It is implemented by re-purposing the Orientation PID controller from Lab 6.
We take ToF readings at 14 different angles, orienting the robot to each angle by passing the target-angle argument to the PID controller function through a for-loop.
The code is shown below:
The orientation control is showcased in the video below:
The graph shows the target angle and current angle of the robot while turning.
The PID controller is able to turn the robot to the target angle with a small amount of overshoot (maximum ~20 degrees, average <10 degrees).
We execute 2 full rotations at each of the marked positions in the lab space: (-3,-2), (5,3), (0,3), (5,-3), and (0,0).
On the Arduino side, we add a record_map function to find the actual angle and read distances at each of the 14 sampling angles. The stored data is then sent through bluetooth to the computer for further analysis.
The data is then plotted in Python to visualize the distances at each angle.
Example plot shown below:
The rotational data seems reliable enough to be used for mapping. Hence we use the actual angles for plotting instead of target angles.
After the data is sent to the computer, we store the data of each rotation into a separate csv file in case we need to access it at a later time.
We then transform each data point from the robot's rotating coordinate frame into global coordinates.
Each reading is converted as [x, y] = [x_robot + r*cos(theta), y_robot + r*sin(theta)], where r is the measured distance, theta is the robot's yaw angle, and (x_robot, y_robot) is the marked position of the robot.
The following implementation showcases the transformation for the 1st run of the robot at (-3,-2):
The merged graph of all 10 datasets (2 runs on each coordinate:(-3,-2), (5,3), (0,3), (5,-3), and (0,0)) is shown below:
We convert the walls into 2 lists of the start and end points of the line segments.
Start_list = [[-1.5, -1.5, -1.0, -0.6, -0.6, 1.9, 1.9, -0.3, -0.3, -0.6, -0.6],
[-1.3, 0.1, 0.1, 0.1, 1.3, 1.3, -1.3, -1.3, -0.6, -0.6, -1.3]]
End_list = [[-1.5, -1.0, -0.6, -0.6, 1.9, 1.9, -0.3, -0.3, -0.6, -0.6, -1.5],
[0.1, 0.1, 0.1, 1.3, 1.3, -1.3, -1.3, -0.6, -0.6, -1.3, -1.3]]
This lab implements a grid localization algorithm using the Bayes Filter, deployed virtually in a Python simulation environment.
The implementation is broken down into 5 functions: compute_control, odom_motion_model, prediction_step, sensor_model, and update_step.
This function computes the control input of the robot from two odometry poses (the previous and current positions of the robot).
The outputs delta_rot_1, delta_trans, and delta_rot_2 are calculated from the geometry shown below:
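In equation form (the standard odometry motion decomposition; primes denote the current pose):

$$\delta_{rot1} = \operatorname{atan2}(y' - y,\; x' - x) - \theta, \qquad \delta_{trans} = \sqrt{(x' - x)^2 + (y' - y)^2}, \qquad \delta_{rot2} = \theta' - \theta - \delta_{rot1}$$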
The Odometry Motion Model function is used to calculate the probability of the robot's current pose given the control input and previous pose.
The probability is calculated using the Gaussian distribution, which is a common assumption for the motion model.
The total probability is calculated by multiplying the probabilities of each individual motion model, as shown below:
At each of the possible states, we calculate the probability of the robot's current pose given the control input and previous pose by calling the odom_motion_model previously defined.
While looping through the grid, we ignore the cells that are "impossible" to be the robot's current pose (p<0.0001).
Since we ignored impossible cells, we normalize the probability of each cell to ensure that the sum of all probabilities is equal to 1.
Code that implements the prediction-step is shown below:
The sensor model is used to calculate the probability of the sensor measurements under a gaussian distribution.
We use the sensor reading as the mean of the Gaussian distribution, and the standard deviation is defined in loc.sensor_sigma.
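Each individual measurement likelihood is then a Gaussian evaluated between the actual reading z and the expected (ray-cast) reading, here written as z-hat, with sigma = loc.sensor_sigma (the Gaussian is symmetric, so which value plays the mean is interchangeable):

$$p(z \mid x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(z - \hat{z})^2}{2\sigma^2}\right)$$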
Implementation of the sensor model is shown below:
The update step is used to update the probability of each cell in the grid given the sensor measurements.
The probability is calculated by multiplying the probability of the sensor model and the probability of the prediction step.
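In symbols, with eta a normalizing constant:

$$bel(x_t) = \eta\; p(z_t \mid x_t)\; \overline{bel}(x_t)$$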
The code that implements the update step is shown below:
The following image shows the result of simply executing the trajectory and estimating the robot pose using only odometry (red), compared to ground truth (green).
It is clear that the odometry-only estimate is not very accurate.
The following image shows the robot's estimated pose using the Bayes Filter (blue) compared to the ground truth (green), with the simple odometry estimation (red) for comparison.
The following video demonstrates the simulation in action with prediction and updates of the Bayes Filter.
Results of prediction and update statistics shown below:
This lab implements localization with the Bayes Filter deployed on the actual robot. We only run the update step, using full 360-degree scans with the ToF sensor, and do not use a prediction step with odometry due to the high noise of the IMU readings.
The implementation begins in the Arduino code as a localization case, called to perform a full 360-degree rotation, taking ToF sensor readings and sending them back to the Python code for further analysis.
We then implement the perform_observation_loop function in Python to perform the localization task.
This function commands a full 360-degree rotation, taking ToF sensor readings in 20-degree increments for 18 total readings.
Implementation of the function is shown below:
The perform_observation_loop function calls the localization function on the Arduino, waits for the data to be sent back and rescales the data to be used for localization.
Implementation of function is shown below:
While I believe the implementation is correct, I was unable to get the robot to perform the localization task.
This was due to a discrepancy between the set of data that is taken on the Arduino and the data that was sent and received on the Python code.
As shown below, the data collected in Arduino exists at relatively robust 20-degree angle increments (in the CA, current angle, column) with its corresponding distance readings.
However, the data received in Python skips data at certain angles at first, then proceeds to repeat the same exact readings for the following data points.
This makes it impossible to perform the localization task as the data is not reliable.
Readings taken from Arduino Serial screen, to be sent to Python code:
Readings received from Arduino, shown on Python:
I tried multiple methods to fix this issue, including sending the data in varying sizes, implementing varying delay times, etc. However, none of them seemed to fix the problem.
It is very unfortunate that I was unable to fix this issue and get the localization task to work. Perhaps I could figure out a workaround given more time.
I will be sure to update this section if I am able to fix the issue in the future.
This lab programs the robot through a set of waypoints in a set environment, with the goal of going as quickly and as accurately as possible.
The set of waypoints is defined, and hence the path is also pre-defined. Over the course of the semester, we have implemented multiple functions to control the robot's movement, ranging from distance and orientation PID controls to mapping and localization.
Given the time constraints and materials available to us, it is best to use the PID Orientation Control function to control the robot's movement rather than localization, as it is more reliable and robust.
The graph and drawing below show the set of waypoints and the orientations that we plan to execute at each waypoint. The distance to travel between waypoints is tuned through trial and error, as that is believed to be the most straightforward solution.
The series of actions, both orientation PID and timed forward motions, are implemented as a callable function in Arduino.
Implementation shown below:
It can be seen that the angle arguments passed into the orientation PID control sometimes do not match the angles planned earlier. This is because the robot does not always turn to the exact angle planned, and this is compensated for by passing angle values over/under the desired value.
In order to ensure the robot stops immediately at each waypoint instead of coasting, I changed the stop() function to pass slow-decay/brake arguments to the H-bridge, which means writing 255 to both pins.
The robot was able to successfully navigate through the set of waypoints and go to the endpoint!
The video below shows the robot executing the path planning task: