Chrysafly

This art project was designed and created by the talented Nico Woodward and Nathyn Sanche at eatART Vancouver. Our team of engineering students was brought along to help control the thing. See more of Chrysafly here.

I was going to do a more technical write up for Chrysafly, similar to that of the robot competition, but I really didn’t see the value in it, so I will just add some pictures.

I was recruited to help Emma Gray and Michelle Khoo work on the wing calibration (position sensing using the encoders and manipulation of the wings). This included the controller circuit (below) and attempts at waterproof electronic enclosures.

Testing the multiplexer circuit for controls. The Arduino's inputs became limited once the motor drivers were added.

This beast is driven by two Arduinos, each with a Pololu Dual G2 High-Power Motor Driver. The four attached motors pull the wings with string, as seen in Nico's prototype:

Nico showing off his prototype
Lower Wings Completed

Github

CAD Mini Projects

Why not put these somewhere?

Ring:

This ring was designed in Fusion 360 (which I would say is much better thought out and more intuitive than SolidWorks) and was a Christmas present. The design was created using component subtraction from a “comfort fit” style ring. I don’t think there’s much more to say about it, so I will let the photos speak for themselves!

Fusion 360 Render of the Ring

Claw:

This is one of the first claw prototypes from Robot Competition. It was created with Onshape (I still think Fusion 360 is king).

Enclosure:

Whoa! Look at that sexy beast! This enclosure was originally for the UBC Solar car, to house our Nomura MPPTs and other circuits, but the project was abandoned. I included it here only because I’ve been enjoying rendering things, and oh my, that blue acrylic is absolutely sexy in the pale moonlight.

Bonus:

I did not create this, but I think it’s fun to see the perks of being friends with other engineering students. This is Gabriel waterjet cutting sheriff badges for our cowboy party!

Life is great.

2019 Engineering Physics Autonomous Robotics Competition

(the infamous all-nighter before competition)

Hi! Welcome. This blog post is made to show off the robot that my team and I developed over the summer. If you have any questions, please leave a comment or send me an email. I want to help any way I can!

Table of Contents

  1. Robot Objective
  2. Mechanical Design
  3. Controls and IO (line following, arm and claw control, navigation)
  4. The “Disaster Bug”
  5. Takeaways

1. Robot Objective

The Engineering Physics Autonomous Robot Competition (known colloquially as robot comp) is a five-week, hands-on project course in which teams of four work together to build an autonomous robot from scratch. And when I say scratch, I mean scratch. There is a lot of work involved in creating the chassis, wheel mechanisms, and circuits (for control and sensors), and then writing the code to navigate the course.

IT’S ALIVE! Hektar is tuned and drives up the ramp at record speed

This is what the course participants have access to:

  • Andre Marziali provided H-bridge circuit schematics using the LTC1161 gate driver, as well as a crash course on development boards (STM32 “Blue Pill”)
  • 3D printers
  • Laser Cutters / Waterjet cutter with particle board and various plastics
  • STM32 “Blue Pill”, DC motors, MOSFETs, optoisolators, gate drivers, Li-Po batteries, NAND gates, and other electronic components [with restrictions: no integrated H-bridges, for instance, and some motors are banned].
  • Raspberry Pi allowed

The rest is up to us (with our fantastic instructors for guidance)!

The Competition

Robots compete head-to-head in two-minute heats. The goal: collect as many stones as possible and place them in your gauntlet. The team with the most stones wins!

Consistent with previous years, 50% of teams captured a grand total of zero stones on competition day.

The competition surface. Notice that the robot must be able to perform on the left or right side of the course. There was also a separate objective for the robots: collecting plushies and disposing of them in the labelled bins. One team decided to pursue this task and ignore the stones; the other 15 pursued the stones and ignored the plushies.

2. Mechanical Design

Most of this was done by the fantastic Gabriel and Fiona. They put a lot of work into the design, but this section will be short.

Our robot got its name from an Ikea lamp, due to its visual similarities:

Playing with the arm mechanism after Gabriel and Fiona finished fabrication. Fantastic work!

We built an arm prototype out of Meccano, but it’s not very 𝕒𝕖𝕤𝕥𝕙𝕖𝕥𝕚𝕔 so I will not include it here.

The arm design is unique: it uses a series of double parallelograms, which keeps the claw parallel to the ground at all times. It was driven by two motors attached through worm gears, and its angle relative to the robot was controlled by a centrally mounted servo:

Yes, the circuits were messy. At least we had nice connectors between them. I would do this differently if I were to rebuild Hektar.

The chassis was made from particle board. The wheels and geartrain were designed to ensure a proper balance between speed and torque to push the robot up the ramp.

3. Controls and IO

Our robot was somewhat unique in that it was controlled by a Raspberry Pi in addition to the STM32 “Blue Pill”. Why did we do this? Truthfully, it was mostly for the learning opportunity. The team was really excited to learn Robot Operating System (ROS) because of its widespread adoption elsewhere.

Circuits

Our robot had ’em.

Dual H-Bridge Design:

As mentioned above, this design was taken from Andre Marziali but it’s worth showing here:

Thank you to Fiona Lucy for laying out this circuit so beautifully!

The LTC1161 gate driver eliminated the need for P-channel mosfets, thus (according to the program director) making this a cheaper and more reliable design. Sounds good to me!

And, the soldered version (I think we all ended up making one or two of these):

Very sexy. What I learned in this process: don’t directly solder male Dupont connectors. Like other males (including myself), they get damaged and weak when you poke them with a hot soldering iron. The white wire-to-board connectors are for the two DC motors (only two pins each are used).

We ran into a few more hiccups with this circuit. Our robot needed two of them (because our arm was driven by DC motors rather than stepper motors or servos), which led to a shortage of PWM-capable outputs on the Blue Pill. To resolve this, instead of using two PWM signals per motor (forwards and backwards), I built a 2-way demultiplexer so that the H-bridge could be controlled with a single PWM signal and a forwards/backwards pin. This is the only change we made to the above circuit (not pictured).
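For concreteness, here is a rough sketch (in Python, purely illustrative; this is not our Blue Pill firmware) of the mapping that scheme implies: one signed speed command becomes one PWM duty cycle plus one direction level. The duty-cycle range is an arbitrary assumption.

def speed_to_pwm_and_direction(speed, max_duty=65535):
    """Map a signed speed in [-1.0, 1.0] to (duty, forward).

    With the 2-way demultiplexer, each H-bridge needs only one PWM line
    (magnitude) and one digital line (direction), instead of separate
    forwards and backwards PWM signals.
    """
    speed = max(-1.0, min(1.0, speed))   # clamp the command
    duty = int(abs(speed) * max_duty)    # PWM magnitude
    forward = speed >= 0                 # level for the direction pin
    return duty, forward

print(speed_to_pwm_and_direction(-0.5))  # -> (32767, False)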

Line Following / Feature Detection

Our robot had an array of five infrared reflectance sensors (QRD1114) to detect tape. The circuit output was regulated to 3.3 V so the raw signal could be fed straight into the Blue Pill.
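To give a feel for how such an array turns into a steering error, here is a minimal sketch. The positions, threshold, and units are hypothetical; our tuned values live in the repos linked below.

SENSOR_POSITIONS = [-2, -1, 0, 1, 2]   # sensor offsets from the centerline

def line_error(readings, threshold=2000):
    """Signed line-position error from five reflectance readings."""
    on_tape = [pos for pos, r in zip(SENSOR_POSITIONS, readings) if r > threshold]
    if not on_tape:
        return None                               # tape lost; the caller decides what to do
    return sum(on_tape) / float(len(on_tape))     # centroid of the activated sensors

print(line_error([500, 600, 3000, 3100, 700]))    # -> 0.5 (line slightly off-center)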

Right: the soldered protoboard in action, transmitting data from the Blue Pill's analogue reads to the Raspberry Pi through ROSserial.

Software

ROSpy github
Blue Pill github

The Raspberry Pi/ROS did offer us one advantage that very few other teams had: WiFi. While of course we couldn’t use this during the competition, it made software tuning and debugging a breeze compared to other teams. Our PID tuning was performed with a GUI over RealVNC!

Hektar’s Brain. Look how 𝕒𝕖𝕤𝕥𝕙𝕖𝕥𝕚𝕔 it is in there.

For the curious, here is the component network that controlled all of Hektar’s motion:

“/serial_node” also contained Arduino code (for the Blue Pill) to translate the messages into PID commands using the ROSserial_arduino library.

ir_error_node: reports how far Hektar is from the centerline. It also sends a flag when it believes a feature is present (a T or Y intersection, for example).

control_master: keeps track of how many features the robot has hit and tells it what to do accordingly (line follow, stop, dead reckon). Also controls the arm.

The rest are somewhat self explanatory and are also visible on the github repo.
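To make the graph a bit more concrete, here is a heavily simplified sketch in the style of control_master. The topic names, message types, and feature-count plan are assumptions for illustration; the real code is in the repos above.

#!/usr/bin/env python
# Simplified sketch in the style of control_master: count feature flags and
# switch behaviours accordingly. Topic names, message types, and the
# feature-count thresholds here are assumptions, not our actual values.
import rospy
from std_msgs.msg import Bool, String

class ControlMaster(object):
    def __init__(self):
        self.features_seen = 0
        self.mode_pub = rospy.Publisher("drive_mode", String, queue_size=10)
        rospy.Subscriber("feature_detected", Bool, self.on_feature)

    def on_feature(self, msg):
        if not msg.data:
            return
        self.features_seen += 1
        # Hypothetical plan: follow the line until the second intersection,
        # then dead-reckon toward the stones, then stop.
        if self.features_seen < 2:
            self.mode_pub.publish(String("line_follow"))
        elif self.features_seen == 2:
            self.mode_pub.publish(String("dead_reckon"))
        else:
            self.mode_pub.publish(String("stop"))

if __name__ == "__main__":
    rospy.init_node("control_master")
    ControlMaster()
    rospy.spin()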

The use of a Raspberry Pi also meant that we could manually control our robot remotely:

4. The “Disaster Bug”

They all said that something would break the night before the comp for no apparent reason. I didn’t believe them, but they were right: it’s a measly 12 hours before our robot is put on display in front of hundreds of people, and Hektar has forgotten how to follow lines!

What is going on!? We should be calibrating setpoints for the arm locations*. After an arduous debugging process, we found the invisible culprit: EMI from the motors. Whenever the left motor was running, the unshielded serial connection between the Blue Pill and the Raspberry Pi would be interrupted (scoping the connection showed only the HIGH logic level). Uh oh. That connection was our robot’s spinal cord, and without it Hektar could not walk.

How was this happening when our circuits were electrically isolated? Also, why was our robot working flawlessly for weeks beforehand, only to have it fail the night before? I actually still don’t know the answer to the latter question…

*we had shown that Hektar could pick up stones as well as deposit them. He could also stop at intersections and know which way to go. Collecting one stone was close and realistic, it was just a matter of putting 2 and 2 and 2 together.

The Fix

The fix was two pronged. Firstly, we reduced the noise that the motor gave off by following this guide. But it wasn’t enough.

Then came the crash course in electromagnetic shielding and antennas:

I don’t own a single straight edge

The USB serial connection typically has three pins: Tx, Rx, and ground. However, the Blue Pill and Raspberry Pi were already connected to the same ground because they shared the same power supply. The result: I had created a beautiful, large loop that made a perfect antenna for motor noise.

Once we removed this unnecessary ground, the robot’s future was getting brighter. Coincidentally, so was the room we were in, because we had spent all night figuring out this problem and the sun was starting to rise.

You might be wondering why we opted for USB serial instead of using the Raspberry Pi’s built-in GPIO pins. Fair point; however, the GPIO pins don’t have overvoltage protection, and frankly I don’t trust my electrical-taping skills all that much.

5. Takeaways and Mistakes

I think the biggest takeaways from this project were not technical. They had to do with the organization of people on a short-timescale project.

5.1. When it comes to Gantt charts and timing, you really don’t know what you don’t know. We found that it was rarely the technical challenges that impeded our progress, but instead all the “trivial” problems we didn’t foresee, such as dealing with library errors, setting up environments, soldering things incorrectly, and integration. In this project, I would say that finding and fixing small bugs took up more time than our actual engineering design work. This is not something I expected going into the project.

5.2. Stick to the agreed-upon standards, even if it isn’t the “most logical” thing to do. I made this mistake and here I have to own up to it. The team decided on a standard for the power rails on our breadboards (I believe it was ground, 3v3, 5v from outside to inside). While planning the new H-bridge design, I realized that if I swapped one of the rails, the circuit could look a lot cleaner and would feel more “correct” to me. This decision was rash and led to confusion amongst the team (and maybe a blown capacitor, if I remember correctly).

5.3. Things that seem like great ideas due to the cool or clean/purity factor may not work that way in reality:

  • We built the jointed arm because we thought it would be really cool to have it, ignoring the additional challenges we took on and did not have time to properly implement.
  • Our robot was meant to look clean and simple, and I think we succeeded on Hektar’s exterior. But by making the exterior cute and compact, we were left trying to shove all our circuits into the tiny box we had made (hence the resort to tape). The better solution would have been to make a larger chassis and create more room for circuits, sacrificing the cute small size for internal modularity and order.
  • 5.2 was an example of me trying to make a cleaner design at the expense of design standards and modularity, and it ended up backfiring.

5.4. You can’t do it all. What is your goal here: to learn the most? To have the most innovative design? Or to win [with a minimum viable product]? Five weeks is not a long time. Our team chose to innovate and learn in the process, and the proof of that can be seen in the design of our robot. However, in a real engineering job (as my experience at Broadcom has shown), what matters is delivering something that meets or exceeds the design specifications with the least amount of human/economic capital. Sometimes the best solution is not elegant or sophisticated. This was the truth we decided to ignore for our project.

5.5. People aren’t robots: trust is everything and everybody has a distinct communication style. Often during times of tension there is no wrong person or perspective: the two are just speaking different languages. I encountered this with one of my group members. Some people like their ideas to be challenged head on. Others have a greater need to feel trusted (perhaps in the form of validation from the group) before an analysis of their work can be done. I see myself in the wrong for not recognizing this right away. At the end of the day, everybody has emotional needs and I believe your relationships are what matter far more than whatever work you can push out by yourself.

5.6. It can be a trap to spend too much time hypothesizing where problems may occur and not enough time running experiments to actually figure it out (though the converse is also true). This is a trap I fell into, and it especially becomes a time sink when multiple opinions are involved. A failure during testing is not a waste of time or a failure at all; it’s an invaluable resource.

5.7. Take more pictures. Document More.

5.8. Being calm is a skill and a strength.

My Lovely Team. I can’t think of a better way to spend a summer.

Computer Vision Project

(UBC ENPH 353 Course Report)

Keywords: Robot Operating System (ROS), Keras, OpenCV

Hello all. Welcome. This is half a log book for the ENPH 353 course. It will focus on my contribution to the project (the computer vision portion). Shoutout to Gosha Maruzhenko for providing navigation and making sure we don’t hit any pedestrians.

Github repositories for some context: [1] [2]
Plate reader python notebook (uploaded to Colab for readability) [3]

Overview

This is a brand new course at UBC! So firstly, I would like to give special thanks to Miti and Griffin at UBC for setting everything up. I have learned a lot in this course.

The goal of ENPH353 is to design a robot to navigate virtual environments and read license plates using machine learning. Fancy. Stay on the road, don’t hit the pedestrians, you know the drill.

Apparently UBC Engineering Physics and MIT share a few course designs (ENPH 253, for example). For those familiar with MIT’s “Duckietown”, our task is quite similar. We even use the same framework to control our robots (ROS). The difference is that our track is not physical; it is instead modelled in a simulator called Gazebo, which integrates nicely with ROS.

Figure: our gazebo simulation environment. Thanks, Miti and Griffin!

This is the course we must navigate. We are restricted to only two methods of interfacing with the robot:

  1. The camera feed
  2. Twist commands (move forward, move backwards, turn left, turn right)

With these two I/Os, we must design a robot that accurately reports license plates and their locations through a ROS message.
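As a concrete minimal example, the whole interface boils down to one subscriber and one publisher. The topic names below are placeholders for whatever the course’s Gazebo setup defines, and the driving logic is a stub.

#!/usr/bin/env python
# Minimal sketch of the two allowed interfaces: subscribe to the camera feed
# and publish Twist commands. Topic names are placeholders.
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # ... run the plate detector / driving logic on `frame` here ...
    cmd = Twist()
    cmd.linear.x = 0.2      # creep forward
    cmd.angular.z = 0.0     # no turn
    cmd_pub.publish(cmd)

rospy.init_node("controller")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/camera/image_raw", Image, on_image)
rospy.spin()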

I know what you may be thinking: since this is in virtual space, can’t you just hardcode motion into the robot, select only a certain area of the screen after a timer goes off, etc., etc.?

Figure: Colormask

The answer: yes, I guess you could do that. But why would we? We are here to learn, dammit. So if you ask yourself “why didn’t they just use this simpler exploit given to them by the nature of the simulation?”, the answer is most likely “for the sake of knowledge and art”.

With that said, like literally every other team I’ve talked to, we did end up resorting to a few cheap tricks. The plates are found with a somewhat selective colormask that is restricted to a certain field of view. I’m not proud of it, but sometimes you just need to plop in the ugly solution and get er done.

Methods

Yes, one could set up a full CNN plate reader which scans the whole image for characters, such as this beauty. It would indeed be dope; however, in terms of training time that model is far from ideal, and considering our short time frame it was a high-risk strategy to rely on one single complex method to do everything for us. Furthermore, since the plates are all the same size and color, it is extremely attractive to break this into two separate systems: one that finds the license plate and another that tells us what is on it.

Plate Detector

We had originally planned for the plate detector to be another object detection neural net. Due to time constraints we moved to an OpenCV color mask instead. It’s really nothing impressive so I’d say you can just skip this section.

Figure: False Positives for the Plate Detector

Once the color mask was chosen, we passed it through an opening in OpenCV (that is, erosion and then dilation) to remove the stray white pixels here and there. We then passed that through findContours and filtered the results so that only rectangular shapes above a minimum bounding height and width were kept.
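A sketch of that pipeline is below. The HSV bounds, kernel size, and minimum box dimensions are illustrative stand-ins for our tuned values.

# Rough sketch of the detection pipeline described above.
import cv2
import numpy as np

def find_plate_candidates(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 90), (180, 30, 210))       # "plate white/grey"
    kernel = np.ones((5, 5), np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    contours = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # [-2] works on OpenCV 3 and 4
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 30 and h > 15:                                 # minimum width and height
            boxes.append((x, y, w, h))
    return boxes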



Figure: Plate Detector

The problem here was that, for some reason, OpenCV also counted the road features as white things with four edges (see the false-positives figure above). This was a simple fix: filter for the purple in the image, once again find contours, and then scale the filled bounding box of the purple so that the white license plate is always covered. Using that bounding box as another bitwise-AND mask, we have the final product, which is effective and fast:

Observe that it is not perfect. Because the license plate itself is not included in our color mask (only the “true” white), the bounding box has simply been stretched a little bit. This works for head-on angles but leads to imperfection when the plate is read at an angle.
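Here is that refinement as a sketch, with the purple HSV range and the stretch factor as illustrative guesses rather than our actual numbers.

# Sketch of the road-feature fix: find the purple backing, stretch its bounding
# box so the white plate is covered, and AND that region back into the mask.
import cv2
import numpy as np

def restrict_to_plate_region(white_mask, frame_bgr, scale=1.4):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    purple = cv2.inRange(hsv, (120, 60, 40), (150, 255, 255))
    contours = cv2.findContours(purple, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    region = np.zeros_like(white_mask)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        w2, h2 = int(w * scale), int(h * scale)     # grow the box to cover the plate
        cv2.rectangle(region, (cx - w2 // 2, cy - h2 // 2),
                      (cx + w2 // 2, cy + h2 // 2), 255, thickness=-1)
    return cv2.bitwise_and(white_mask, region)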

Data Generation

Lab 3 of this course was building a CNN to read the characters of a virtual license plate. However, since the images were perfect (no skew or color deformation) and we could generate a lot of them, it achieved 100% accuracy within the first epoch….

In the real competition, running through Gazebo, where there is skew and color deformation due to lighting and camera effects, the task is not as simple.

To generate data, we employed two methods:

  1. Create a generator python script, which would generate plates and artificially skew them over collected backgrounds (see the sketch after this list). This had the advantage of knowing the corner locations and text for any number of images, but does it map to the real virtual world?
  2. Collect real-camera data through the use of a bash script. Said script spawned the robot in a specific location, turned it ever so slightly, killed the Gazebo model, and then did the whole thing over again. This had the advantage that it is real-world data, but is it comprehensive? Unfortunately it also neglected to kill the xterm keyboard controller, so after running it overnight my computer looked like this:
  3. Of course, there was also the option of manually driving around, gathering and labelling data. Sounds like a bad idea, but once you realize that you could have over 1000 images of juicy REAL data in under 3 hours, it becomes pretty appealing.
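Here is the gist of method 1 as a sketch. Rendering the plate text itself (fonts, blank plate template) is omitted; plate and background are assumed to be pre-loaded BGR images, with the background larger than the plate.

# Sketch of method 1: paste a rendered plate onto a background with a random
# perspective skew, keeping the warped corner locations as labels.
import cv2
import numpy as np

def composite_plate(plate, background, max_shift=0.25):
    h, w = plate.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = (np.random.rand(4, 2) - 0.5) * 2 * max_shift * np.float32([w, h])
    dst = src + jitter.astype(np.float32)
    # Drop the warped plate somewhere inside the (larger) background.
    offset = np.float32([np.random.randint(0, background.shape[1] - w),
                         np.random.randint(0, background.shape[0] - h)])
    dst += offset
    M = cv2.getPerspectiveTransform(src, dst)
    bg_h, bg_w = background.shape[:2]
    warped = cv2.warpPerspective(plate, M, (bg_w, bg_h))
    mask = cv2.warpPerspective(np.ones((h, w), np.uint8) * 255, M, (bg_w, bg_h))
    out = background.copy()
    out[mask > 0] = warped[mask > 0]
    return out, dst          # image plus the labelled corner locations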

We elected to use methods 1 and 3. At the end of this process, we had over 700 simulated license plates and over 1000 real license plates. Example Data is shown below:

Both types of data were unskewed and separated into individual characters (see the python notebook below) so they could be fed into the neural network. Each 40×80 character image looked like this:

Figure: Real data after characters cropped in python notebook. Notice that some characters are sometimes quite close to the chosen crop boundaries.
Figure: Synthetic data after characters cropped in python notebook. Notice how well behaved the synthetic data is compared to the real data.
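For reference, the cropping step is essentially this. The assumption of evenly spaced characters is a simplification; the notebook uses hand-chosen crop boundaries.

# Slice an unskewed plate image into equal character cells and resize each
# to the 40x80 input size. Evenly spaced cells are an assumption here.
import cv2

def crop_characters(plate_gray, n_chars=4, out_size=(40, 80)):
    h, w = plate_gray.shape[:2]
    cell_w = w // n_chars
    chars = []
    for i in range(n_chars):
        cell = plate_gray[:, i * cell_w:(i + 1) * cell_w]
        chars.append(cv2.resize(cell, out_size))   # cv2.resize takes (width, height)
    return chars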

Keras Neural Network

This is the python notebook housing the neural network. Most of it is just piping the data into the correct form (unskewing the simulated data and cropping; the result is what you saw above). But in the end we were left with this model and an accuracy of over 95% on real data:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            (None, 76, 36, 32)        832       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 38, 18, 32)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 36, 16, 32)        9248      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 18, 8, 32)         0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 16, 6, 32)         9248      
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 8, 3, 32)          0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 6, 1, 32)          9248      
_________________________________________________________________
flatten_1 (Flatten)          (None, 192)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 60)                11580     
_________________________________________________________________
dense_3 (Dense)              (None, 36)                2196      
=================================================================
Total params: 42,352
Trainable params: 42,352
Non-trainable params: 0
_________________________________________________________________
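For reference, the summary above corresponds to a model along these lines. The layer sizes follow from the printed shapes and parameter counts; the activation functions and compile settings are my assumptions, and the exact code is in the linked notebook.

# Reconstruction of the model from the summary above (standalone Keras imports
# assumed). Activations and compile settings are assumptions.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (5, 5), activation="relu", input_shape=(80, 40, 1)),  # -> (76, 36, 32)
    MaxPooling2D((2, 2)),                                            # -> (38, 18, 32)
    Conv2D(32, (3, 3), activation="relu"),                           # -> (36, 16, 32)
    MaxPooling2D((2, 2)),                                            # -> (18, 8, 32)
    Conv2D(32, (3, 3), activation="relu"),                           # -> (16, 6, 32)
    MaxPooling2D((2, 2)),                                            # -> (8, 3, 32)
    Conv2D(32, (3, 3), activation="relu"),                           # -> (6, 1, 32)
    Flatten(),                                                       # -> 192
    Dense(60, activation="relu"),
    Dense(36, activation="softmax"),   # 26 letters + 10 digits
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])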

That’s all, Folks

There you are! It works. It is accurate enough.

I realize this report is quite surface-level, so if you have any questions, please reach out! Though I truthfully don’t think there is anything too new to share here. But, here is the takeaway: this is a course for engineers. The main point here wasn’t creating a fantastic neural network and showing off technical prowess, it was creating something that works well in a short amount of time. This is NOT a research course. It is a project course, and therein lies the difference (in contrast to my opening statements about the art and beauty of our creation).

Also, remember it really doesn’t matter how fantastic your model is if: 1. You have shitty data and 2. your model is not a realistic depiction of the real world. I suppose this is the truth for any model you create, not just ML models (a la Nassim Taleb). I’ve talked to students who have run over 200 epochs on their model and generated 5-10x the images I have, for perhaps only a marginal increase in accuracy. That, I would say, is the power of having real data. You just can’t beat it.

These are some of the cool things I have found while doing research, hopefully they will aid in your next computer vision project:

Really cool license plate recognition (though out of the time scope of our project)

YoloV3 (I really love Redmon’s approach to academic writing here. What a fantastic costly signal to his work)

Creating your own object detector – Towards Data Science

Simple tDCS Device: a Start

Disclaimer: don’t build one of these. But if you do, this design is probably better than most of what’s on the internet so far (but maybe not). Ultimately I am not responsible for anything you do or build.

Transcranial direct current stimulation (tDCS) is a technology used to treat a variety of ailments, especially anxiety and depression. Read more about it on wikipedia or in one of the 1000+ studies investigating the technology. This article showcases a DIY tDCS device that I have built.

The implementation is absolutely simple. The only real choice was picking the LM334 current regulator; after that, TI nicely provides you with a schematic to follow. This is just about the simplest device one can create. I added an LED just so I could stand out a little bit.

Why this circuit?

Despite tDCS devices being so simple, I have found a few designs online that I am not fond of. I would not recommend any circuit that uses the LM317 for current regulation, since the minimum recommended output current of that IC is 10 mA, about 5-10x higher than our design specification. This is why the LM334 is the better option here. Many also point out the temperature dependence of the LM334, which other online designs do not take measures to counteract. But this can easily be rectified with the design in Figure 15 of TI’s data sheet, pictured below. Though even without this precaution, the change in current is only a predicted 7 µA/K, or less than 1%/K, so it isn’t really a huge deal anyways. Just keep your tDCS device away from the fireplace.

Image from Texas Instruments “LM134/LM234/LM334 3-Terminal Adjustable Current Sources”
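To put that temperature dependence in numbers, here is a quick back-of-the-envelope check using the 7 µA/K figure and a 2 mA set current:

# Back-of-the-envelope check of the temperature sensitivity quoted above.
drift_per_kelvin = 7e-6   # A/K, quoted above
set_current = 2e-3        # A, the usual tDCS level

print("%.2f %%/K" % (100 * drift_per_kelvin / set_current))         # ~0.35 %/K
print("%.0f uA over a 10 K swing" % (1e6 * drift_per_kelvin * 10))  # 70 uA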

I also decided to use a DC-DC voltage converter for my circuit, but this is really not necessary if you have a fresh 9 V battery. For the F3 -> FP2 montage and my DIY sponge electrodes (below), I have observed that only 5-7 volts are required to maintain a nice 2 mA. The DC-DC regulator might be required for higher-impedance montages or combination tACS+tDCS, in lieu of an extra 9 V battery.
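For a rough sense of the headroom involved (the 5 kΩ figure for a higher-impedance montage is just an assumption):

# Rough headroom check implied by the observation above: 5-7 V across the
# electrodes at 2 mA suggests a load of roughly 2.5-3.5 kOhm for this montage.
set_current = 2e-3                       # A
for v_load in (5.0, 7.0):
    print("%.0f V across the electrodes -> %.2f kOhm load"
          % (v_load, v_load / set_current / 1e3))
# A hypothetical 5 kOhm montage would need 2 mA * 5 kOhm = 10 V at the
# electrodes, plus the regulator's own drop -- hence the DC-DC boost.
print("Higher-impedance example: %.0f V" % (set_current * 5e3))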

Figure 15 in breadboard form. R2 and R1 values were generated through resistors in series.

Analysis

There are a few downsides to this circuit. Firstly, because the current level depends on two resistor values, it is slightly more cumbersome to design a circuit with a 1 mA/2 mA switch. I wasn’t interested in doing this; however, it could be done with some resistor/transistor/switch fun. If I wanted a whole spectrum of currents, I might as well design something with tACS capabilities (spoiler alert).

Secondly (and this is actually a downside), because the LM334 really wants to push that 2 mA out there, when the power is initially connected or the electrodes are reapplied, one gets a fun shock. Simple solutions include a series potentiometer to slowly ramp up the current, or a very large inductor which would provide a nice L/R time constant. Neither of these is ideal, since one is a bit impossible (see the quick calculation below) and the other requires me to manually move a knob every time I want to adjust my headgear. A well-designed circuit would be able to sense this huge voltage spike and limit it accordingly.
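To show why the inductor option is “a bit impossible” (assuming a roughly 3 kΩ electrode load, consistent with the voltages above, and a soft-start time constant around a second, which is my own target):

# Why the "very large inductor" is impractical for an L/R soft-start.
r_load = 3e3   # Ohm, rough electrode impedance (assumed)
tau = 1.0      # s, desired L/R soft-start time constant (assumed)
print("Required inductance: %.0f H" % (tau * r_load))   # 3000 H -- not a real part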

Lastly, some sort of voltmeter would be nice. I found it quite useful to observe the electrode voltage so that one can infer the quality of their electrode placements.

Cheers,

Tyler

References:
http://www.ti.com/lit/ds/symlink/lm134.pdf
https://www.diytdcs.com/2013/01/the-open-tdcs-project/