Book Review: The Dip

The Dip, Book by Seth Godin (2007): Short and sweet: Essential advice for some facets of life.

The Dip reminds me that achieving anything great is an exercise of faith. To push through the dip, one must have a strong vision and the conviction that it can become reality. Without this, you absolutely will fail. No exceptions.

Read this book if you aren’t sure when you should quit. Ask yourself: in what aspects of my life can I make it to the “other side”? How far will your faith and abilities take you? Drop the rest.

Great for Finite Games.

This concern keeps “The Dip” from being an easy “yes”: It explores only a narrow definition of success, specifically, aim for status. Be “the best” and make sure you’re known for being it. It’s no surprise that Seth Godin’s other books are about marketing.

Striving to be “the best” is a great strategy for specific finite games, and probably useful when forging a career, but in the infinite game there is no “the best”. You can’t play for a perfect life, and maybe there is more to life than being the best at something. Seth advises: specialize, stick to what you’re good at so you can be better at it, don’t get distracted by the desire for change or something new or exciting.

This comes down to personal philosophy, but I’d say a successful life is made up of many commonplace, even “boring”, experiences that don’t jibe with this worldview. Supportive friends and family, good food, a fun pastime.

Where do passion and interest lie in Seth Godin’s narrative? I love playing chess. I will never be the best at it. I’m not even good. I intend to continue playing.

What I can take for granted

All I can be sure of, really, is that capital == good, where capital can be loosely defined as everything that is anti-entropy. Because it is anti-entropy, by definition capital takes time and effort to accrue.

Valid forms of capital include:

  • Economic: financial assets, investments, means to security and increased leverage
  • Social: strength of relationships (specifically with high quality people), eligibility into social networks, trust
  • Status: creation of [valid and self-important] self-narrative, material wealth symbols
  • Skills: “external”, especially marketable (which includes all hard/technical training, but also negotiation, leadership, conflict management), and “internal” (resilience, creativity, ability to fail, ego depreciation, considering the arguments of others, being “interesting”)
  • Health: fitness, mood, strength, anything which contributes to a healthy lifestyle and increased time alive, physical safety
  • Autonomy: can be tied to economic capital, but is not always. Is NOT equivalent to lack of responsibility

And in the end, we can use and trade our capital for (1) other forms of capital (hopefully you’re good at negotiation), or (2) pleasure: that which is great and enriching in moderation but is not enough to sustain you alone.

Nearly all (all?) of our day-to-day conversations revolve around the central question: how do I trade my capital?

  • Do I take the job which pays more but may jeopardize my health (physical safety)?
  • Should I eat out with my friends? Is it worth buying that car?
  • Is it worth paying for this course or degree?
  • Should I go to the gym? Should I eat the ice cream?
  • How do I get more capital?

The list could literally go on forever.

The Government and Unions

It has always been the job of the government and unions to restrict the domain of such questions, especially for those less privileged, to minimize the abuse of power in zero-sum games.

For instance, properly imposed safety regulations should make it impossible to ask such questions as “Is it better for me to risk my life following the demands of my employer, or get fired and risk starving to death?”

Ethics and Morality

Practically, ethics and morality further restrict the rules of capital exchange. Even abstract problems, such as the trolley problem, can be viewed as a problem in allocating capital.

While these take the form of socially constructed laws, they also bleed into natural laws. To violate ethical/moral principles is to slash your own social capital in favour of [typically] economic capital or pleasure.

Because all forms of capital are highly interdependent, this strategy is high-risk and not advised for long term games.

Comparisons: there is no absolute

We all value certain capital more than other kinds of capital. This is what makes trade mutually beneficial.

There will always, without exception, be someone with more capital than you and with less capital than you (see the previous point).

The power of social pressure is predicated on the need for social capital.

The ideal

There are certain really good moves in the game. What makes start-ups so alluring?

Ideally, they provide you with a high-status and high-paying job which interests you (pleasure) and leave you surrounded with a strong team you enjoy being a part of, all while having autonomy around your success. Though high risk, it is a means for growing your capital multi-dimensionally.

Chrysafly

This art project was designed and created by the talented Nico Woodward and Nathyn Sanche at eatART Vancouver. Our team of engineering students was brought in to help control the thing. See more of Chrysafly here.

I was going to do a more technical write-up for Chrysafly, similar to that of the robot competition, but I really didn’t see the value in it, so I will just add some pictures.

I was recruited to help Emma Gray and Michelle Khoo work on the wing calibration (position sensing using the encoders and manipulation of the wings). This included the controller circuit (below) and attempts at waterproof electronic enclosures.

Testing of the Multiplexer Circuit for controls. The Arduino inputs were limited with the addition of the motor drivers.

This beast is driven by two Arduinos, each with a Pololu Dual G2 High-Power Motor Driver. The 4 attached motors pull the wings with string, as seen from Nico’s prototype:

Nico showing off his prototype
Lower Wings Completed

GitHub

Book Review: Feeling Good

Feeling Good, David D. Burns, M.D. (1980): Not a self-help book.

The worst thing about this book is the way it looks, and I’m not referring to the cover (though it could use an update from its 1980s styling). Although this book is clearly marketed towards treating those with depression, the CBT techniques described in Feeling Good are universally applicable and will improve the worldview, productivity, and mood of anybody who takes them seriously. So, while I applaud Burns for writing the #1 go-to book on depression-related CBT inquiries, he might just be selling his content short by limiting its scope. While I would love to gift this book to everyone around me, I worry the message would get shot down immediately with the qualm “but I’m not depressed”. This book is about more than treating depression: it’s a new paradigm for anybody who has emotions that need interpreting (you probably fall into this category).

With that out of the way, here is my review: It’s Lindy!

The meat of this book is really in the first 150 pages, with the remaining 3/4 of the book exploring examples that may or may not resonate with you. If you are interested in learning real CBT and don’t know where to start, I would recommend this book without a doubt. If you aren’t interested in learning CBT, here’s why you maybe should be:


Cognitive Behavioural Therapy (CBT) is a means of rewiring your brain for betterment and productivity [1][2][3]. It’s a method of emotional de-escalation, which can be focused internally or externally. This de-escalation will free your psyche and your mental bandwidth, making you calmer, less prone to procrastination, more creative, and even more willing to take risks.

CBT is not cathartic. It’s not Freudian, either. Hell, it’s not even about expressing your feelings! Maybe that kind of stuff is overrated….

If you’re willing to change, which we all should be, you will be able to benefit from CBT. Read the first 150 pages and try the exercises. Avoid personalizing them too much. Perhaps you will be surprised by how easy and effective of a tool this is.

For better or worse, the practice of CBT has not changed much over the last four decades. Go pick up this book!

Why you can’t study

Here you are again. You know you should do it. In fact, you even want to do it. You’ve become so bored just floating along and not challenging yourself that it makes you sick and even spiteful towards yourself. Yes, Scrubs is a fantastic sitcom, and yes, beer really does taste good and give you that momentary feeling of contentment, but goddamn, that’s a sad life and you know it. Every move you make is a defensive one, because to act in any other way hurts.

And, although dying slowly in that fashion doesn’t hurt at all, it is extremely painful. You know it. No amount of alcohol can make you forget that you could be doing more and you could be doing better.

That knowledge makes you guilty. Look at yourself. You’re a complete failure. Look how pathetic you’ve become, just searching for a way to retreat and avoid the suffering of life. You could be doing better, you could be working on projects and bettering yourself and proving to yourself and everyone around you that you are worth the air you breathe. But for the past few weeks you haven’t been doing that. You’ve been hiding.

Since you’ve been hiding, you feel like you’re living a bit of a lie. It’s something that embarrasses you and you’d really rather others not know about it. The guilt builds up.

You haven’t been competitive with your peers for the past few weeks, and for the first time you see most of the people around you being more successful than you are. And really, it is your fault. There’s nobody else to blame. The guilt builds up.

You are so guilty and ashamed of yourself that you avoid even being in the presence of others; what if they find out your dirty secret? What if they realize how utterly pathetic and unskilled you are? The guilt builds up.

You end up trapped in your own prison. For fear of judgement, everything you do becomes painful and more difficult. You are so embarrassed about the whole ordeal that you are paralyzed, unable to ask for help while the parasite that is this thought pattern continues to eat away at your motivation and intelligence. You can’t escape it.

You stare at your math homework once more. This used to be so easy. The content doesn’t look that much harder than what you’ve already done, but getting going again feels like pushing against a moving train in the hope of not getting plowed. The wall you’ve built between yourself and the work is just too thick.

Does this sound like you?

More abstractly:

Here is the model: once upon a time, your frontal cortex had a pretty good connection with cognitively heavy content. As challenging as the concept was, you had your full arsenal of attention to throw at it. But, as you start to run away, your fear system wraps around your cognition. It learns to become the mediator, or the guard, between your actions and your executive functioning.

So, the difficulty studying isn’t really about the math at all. The math is simple. It always was, it always will be. Math never gets harder or easier, it just exists. It has a constant runtime O(1).

But, instead of making calls to your frontal cortex object directly, your intelligence has become a nested class inside of your fear object. This design is far from ideal: the fear calls are expensive and exhausting, and their cost scales with the strength of the prison you’ve constructed. I think fear calls are at least O(n), or maybe even O(n^2) depending on the person.

Thanks to the fear wrapper, the only way to get to your intelligence is to go through the middle man.
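The wrapper metaphor above can be sketched in purely illustrative Python. None of this models real cognition; it just makes the call-overhead point concrete:

```python
# Purely illustrative sketch of the "fear wrapper" metaphor.
# The class names and the cost model are rhetorical, not scientific.

class FrontalCortex:
    def solve(self, problem):
        # Direct access: constant effort, O(1). The math never changes.
        return f"solution to {problem}"

class Fear:
    def __init__(self, wall_strength):
        self.wall_strength = wall_strength  # n: how thick the prison walls are
        self._cortex = FrontalCortex()      # intelligence nested inside fear

    def solve(self, problem):
        # Every call pays a toll proportional to the wall strength: O(n).
        for _ in range(self.wall_strength):
            pass  # rumination, avoidance, guilt...
        return self._cortex.solve(problem)

mind = Fear(wall_strength=1000)
print(mind.solve("math homework"))  # still works, just expensive
```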

How to Forgive

It’s time to do some refactoring. The above pattern occurs because we have instantiated a recursive negative affect style towards studying. The loop will continue until you reach full self-forgiveness, accepting your faults, and thus being liberated by them [See related article and study].

Here are some tools to free yourself:

  1. Write about yourself
  2. Practice “active CBT”
  3. Remember that this experience is commonplace
  4. Perfectionism is death
  5. You can’t brute force this

1. Write about yourself

The first step is to admit that this is a problem for you. Write or talk to a friend about it, I don’t care, but you have to get it out there. It’s impossible to work on a solution if you don’t define the problem.

Outline all the times you have fallen into this trap. Mention all of them. This might feel embarrassing, but in fact that is a good thing. To absolve your sins, you must lay them out and admit your errors. This takes vulnerability and strength.

Check: have you really forgiven yourself? Think of the time you have failed once more. Focus on it. What sensations surface? If you still feel guilt, embarrassment, or shame, maybe it’s time to restructure those narratives using a CBT strategy. There is no need to feel guilty about having these feelings resurface: retraining your brain takes time for everybody, and you will likely need to provide multiple iterations of restructuring before you can expect any changes to stick.

You have to put in the work with this. It takes time. But, because this activity is a source of forgiveness and not one of guilt, it should be the opposite of stressful.

2. Practice “active CBT”

While and after you write about yourself, you will still likely run into emotional barriers while working. It might give you a hit of anxiety. Once again, this is expected. Relax.

In addition to writing and practicing forgiveness outside of your study time, you will also have to restructure your emotional blockages while studying. Yes, this will cut into the amount of time you spend actually working, but the compound payoff of this investment is worth it. Forgive yourself while studying. Give out forgiveness like the stuff grows on trees.

The difficult thing here is differentiating cognitive restructuring from avoidant affect. This takes practice, but it comes down to this: is your thinking focused on the material, or is it not? Are you turning inwards for a distraction, or are you maintaining an outward focus, looking for forgiveness and acceptance?

You should have an end goal here, and it’s to actively look at your homework page without feeling dread, guilt, or fear. During this exercise, don’t look for those emotions (as you tend to find what you’re looking for). Instead, actively look for peace. The important thing is to do so without escaping inwards or turning off. In time, you will find it.

3. Remember that this experience is commonplace

Loneliness is one of the most common emotions that we share. In the same paradoxical way, without your knowledge, many people around you are going through the exact same emotional trough.

4. Perfectionism is death

A lot of this guilt and dread comes from an overactive ego and the denial or subsequent hate of one’s imperfections. This is not to say that this trend is specifically narcissistic [in thinking that one is perfect], but rather that it is extremely dangerous to hold one’s self-standards of progress too high as one seeks perfection.

Once again: forgive yourself. You are far from perfect, and that’s perfectly okay. Your friends, colleagues, and parents are also far from perfect, but somehow the world hasn’t fallen apart yet.

Freeing yourself from perfectionism is NOT the same thing as rejecting the aim of self-improvement.
Do not fall into either end of the trap.

In the same vein:

  1. Accept that you’re not a machine. You’re a human with emotional and physical needs. Make sure that those needs are a priority for you, as they provide the foundation for your life and abilities.
  2. Asking for help is not a sign of weakness. Don’t be afraid to do so.

5. You can’t brute force this

Aggression is a powerful tool, but it must be used sparingly. It sometimes seems that a simple fix to the problem I’ve outlined is to push harder and be tougher on yourself.

This is an effective method for squeezing the last bit of juice out of one’s faculties, but unfortunately it is self destructive and not sustainable. While it may give you the motivation to hand in that assignment today, you are doing it at the expense of your effectiveness tomorrow.

You are throwing yourself at each wall and successfully breaking through, but one cannot do this unscathed. As you get weaker, the walls remain the same strength.

CAD Mini Projects

Why not put these somewhere?

Ring:

This ring was designed in Fusion 360 (which I would say is much better thought out and more intuitive than SolidWorks) and was a Christmas present. The design was created using component subtraction from a “comfort fit” style ring. I don’t think there’s much more to say about it, so I will let the photos speak for themselves!

Fusion 360 Render of the Ring

Claw:

This is one of the first claw prototypes from Robot Competition. It was created with Onshape (I still think Fusion 360 is king).

Enclosure:

Whoa! Look at that sexy beast! This enclosure was originally for the UBC solar car, to house our Nomura MPPTs and other circuits, but the project was abandoned. I included it here only because I’ve been enjoying rendering things and oh my, that blue acrylic is absolutely sexy in the pale moonlight.

Bonus:

I did not create this, but I think it’s fun to see the perks of being friends with other engineering students. This is Gabriel waterjet cutting sheriff badges for our cowboy party!

Life is great.

2019 Engineering Physics Autonomous Robotics Competition

(the infamous all-nighter before competition)

Hi! Welcome. This blog post is made to show off the robot that my team and I developed over the summer. If you have any questions, please leave a comment or send me an email. I want to help any way I can!

Table of Contents

  1. Robot Objective
  2. Mechanical Design
  3. Controls and IO (line following, arm and claw control, navigation)
  4. The “Disaster Bug”
  5. Takeaways

1. Robot objective

The Engineering Physics Autonomous Robot Competition (known colloquially as robot comp) is a five-week, hands-on project course in which teams of four work together to build an autonomous robot from scratch. And when I say scratch, I mean scratch. There is a lot of work involved in creating the chassis, wheel mechanisms, and circuits (for control and sensors), and then writing code to navigate the course.

IT’S ALIVE! Hektar is tuned and drives up the ramp at record speed

This is what the course participants have access to:

  • Andre Marziali provided H-bridge circuit schematics using the LTC1161 gate driver, as well as a crash course on development boards (STM32 “Blue Pill”)
  • 3D printers
  • Laser Cutters / Waterjet cutter with particle board and various plastics
  • STM32 “Blue Pill”, DC motors, MOSFETs, optoisolators, gate drivers, Li-Po batteries, NAND gates, and other electronic components [with restrictions: no integrated H-bridges, for instance, and some other motors are banned].
  • Raspberry Pi allowed

The rest is up to us (using our fantastic instructors for guidance)!

The Competition

Robots compete head to head in 2 minute heats. The goal: Collect as many stones as possible and place them in your gauntlet. The team with the most stones wins!

As in previous years, 50% of teams captured a grand total of 0 stones on competition day.

The competition surface. Notice that the robot must be able to perform on either the left or right side of the course. There was also a separate objective for the robots: collecting plushies and disposing of them in the labelled bins. One team decided to pursue this task and ignore the stones; the other 15 pursued the stones and ignored the plushies.

2. Mechanical Design

Most of this work was done by the fantastic Gabriel and Fiona. They put a lot of work into the design but this section will be short.

Our robot got its name from an Ikea lamp, due to its visual similarities:

Playing with the arm mechanism after Gabriel and Fiona finish fabrication. Fantastic work!

We built an arm prototype out of Meccano, but it’s not very 𝕒𝕖𝕤𝕥𝕙𝕖𝕥𝕚𝕔, so I will not include it here.

The arm design is especially unique. It uses a series of double parallelograms, which keep the claw parallel to the ground at all times. It was controlled by two motors attached with worm gears, and its angle relative to the robot was controlled by a centrally mounted servo:

Yes, the circuits were messy. At least we had nice connectors between them. I would do this differently if I were to rebuild Hektar.

The chassis was made from particle board. The wheels and geartrain were designed to ensure a proper balance between speed and torque to push the robot up the ramp.

3. Controls and IO

Our robot was somewhat unique in that it was being controlled by a Raspberry Pi in addition to the STM32 “Blue Pill”. Why did we do this? Truthfully, it was mostly for learning opportunities. The team was really excited to learn Robot Operating System (ROS) because of its widespread adoption elsewhere.

Circuits

Our robot had ’em.

Dual H-Bridge Design:

As mentioned above, this design was taken from Andre Marziali but it’s worth showing here:

Thank you to Fiona Lucy for laying out this circuit so beautifully!

The LTC1161 gate driver eliminated the need for P-channel mosfets, thus (according to the program director) making this a cheaper and more reliable design. Sounds good to me!

And, the soldered version (I think we all ended up making one or two of these):

Very Sexy. What I learned in this process: don’t directly solder male Dupont connectors. Like other males (including myself), they get damaged and weak when you poke them with a hot soldering iron. The white wire to board connectors are placed for two DC motors (only 2 pins each used).

There were a few more hiccups we encountered with this circuit. Our robot needed two of these circuits (because our arm was driven by DC motors rather than stepper motors or servos), which led to a shortage of PWM-capable outputs on the Blue Pill. To resolve this, instead of using two PWM signals per motor (forwards and backwards), I built a 2-way demultiplexer so that each H-bridge could be controlled with a single PWM signal and a forwards/backwards pin. This is the only change we made to the above circuit (not pictured).
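As a sketch of the idea (not our actual firmware; the function name and duty-cycle convention here are made up), the demultiplexer simply routes one PWM value to the forward or reverse H-bridge input based on a direction flag:

```python
# Hypothetical sketch of the 2-way demultiplexer logic: one PWM duty
# value plus a direction flag is routed to the two H-bridge inputs
# (forward / reverse), so each motor needs only one PWM-capable pin.

def demux(pwm_duty, forwards):
    """Route a single PWM signal to (forward_input, reverse_input)."""
    if forwards:
        return (pwm_duty, 0)
    return (0, pwm_duty)

print(demux(0.75, forwards=True))   # (0.75, 0)
print(demux(0.75, forwards=False))  # (0, 0.75)
```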

Line Following / Feature Detection

Our robot had an array of 5 infrared reflective sensors (QRD1114) to detect tape. The circuit output was regulated to 3.3 V so the raw input could be fed into the Blue Pill.

Right: testing the soldered protoboard in action, transmitting data from the Blue Pill’s analogue reads to the Raspberry Pi through rosserial.
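For illustration, a line-position error can be computed from such a sensor array as a weighted centroid. This is a sketch of the general technique, not our exact node code; the sensor positions and threshold are assumptions:

```python
# Illustrative weighted-centroid error from a 5-sensor IR array.
# Positions and threshold are placeholder assumptions.

def line_error(readings, threshold=0.5):
    """Return offset from the centerline in [-2, 2]; None if no line seen."""
    positions = [-2, -1, 0, 1, 2]          # sensor positions, left to right
    active = [(p, r) for p, r in zip(positions, readings) if r > threshold]
    if not active:
        return None                         # line lost
    total = sum(r for _, r in active)
    return sum(p * r for p, r in active) / total

print(line_error([0.1, 0.2, 0.9, 0.8, 0.1]))  # slight right offset
```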

Software

ROSpy GitHub
Blue Pill GitHub

The Raspberry Pi/ROS did offer us one advantage that very few other teams had: WiFi. While of course we couldn’t use this during the competition, it made software tuning and debugging a breeze compared to other teams. Our PID tuning was performed with a GUI over RealVNC!

Hektar’s Brain. Look how 𝕒𝕖𝕤𝕥𝕙𝕖𝕥𝕚𝕔 it is in there.

For the curious, here is the component network that controlled all of Hektar’s motion:

“/serial_node” also contained Arduino code (for the Blue Pill) to translate the messages into PID commands using the rosserial_arduino library.

ir_error_node: reports how far Hektar is from the centerline. It also sends a flag when it believes a feature is present (a T or Y intersection, for example).

control_master: keeps track of how many features the robot has hit and tells it what to do accordingly (line follow, stop, dead reckon). Also controls the arm.
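For context, a PID loop of the general kind used for line following looks like the sketch below. The gains and structure here are illustrative assumptions, not our tuned values:

```python
# Minimal PID sketch of the kind used for line following.
# Gains are placeholder assumptions, not Hektar's tuned values.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.0, ki=0.1, kd=0.05)
correction = pid.update(error=0.5, dt=0.02)   # steer back toward the line
base_speed = 1.0
left, right = base_speed - correction, base_speed + correction
```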

The rest are somewhat self-explanatory and are also visible in the GitHub repo.

The use of a Raspberry Pi also meant that we could manually control our robot remotely:

4. The “Disaster Bug”

They all said that something would break the night before the comp for no apparent reason. I didn’t believe them, but they were right: a measly 12 hours before our robot was to be put on display in front of hundreds of people, Hektar forgot how to follow lines!

What is going on!? We should be calibrating setpoints for the arm locations*. After an arduous debugging process, we found the invisible culprit: EMI. Whenever the left motor was running, the unshielded serial connection between the Blue Pill and the Raspberry Pi would be interrupted (scoping the connection showed only the high logic level). Uh oh. That connection was our robot’s spinal cord, and without it Hektar could not walk.

How was this happening when our circuits were electrically isolated? Also, why was our robot working flawlessly for weeks beforehand, only to have it fail the night before? I actually still don’t know the answer to the latter question…

*we had shown that Hektar could pick up stones as well as deposit them. He could also stop at intersections and know which way to go. Collecting one stone was close and realistic, it was just a matter of putting 2 and 2 and 2 together.

The Fix

The fix was two pronged. Firstly, we reduced the noise that the motor gave off by following this guide. But it wasn’t enough.

Then came the crash course in electromagnetic shielding and antennas:

I don’t own a single straight edge

The USB serial connection typically has 3 pins: Tx, Rx, and ground. However, the Blue Pill and Raspberry Pi were already connected to the same ground because they shared the same power supply. The result: I had created a beautiful, large loop that made the perfect antenna for motor noise.

Once we removed this unnecessary ground, the robot’s future was getting brighter. Coincidentally, so was the room we were in, because we had spent all night figuring out this problem and the sun was starting to rise.

You might be wondering why we opted for USB serial instead of using the Raspberry Pi’s built-in GPIO pins. Fair point; however, the GPIO pins don’t have overvoltage protection, and frankly I don’t trust my electrical-taping skills very much.

5. Takeaways and Mistakes

I think the biggest takeaways from this project were not technical. They had to do with organizing people on a short-timescale project.

5.1. When it comes to Gantt charts and timing, you really don’t know what you don’t know. We found that it was rarely the technical challenges that impeded our progress, but instead all the “trivial” problems we didn’t foresee, such as dealing with library errors, setting up our environments, soldering things incorrectly, and integration. In this project, I would say that finding and fixing small bugs took up more time than our actual engineering design work. This is not something I expected going into the project.

5.2. Stick to the agreed-upon standards, even if it isn’t the “most logical” thing to do. I made this mistake and here I have to own up to it. The team decided upon a standard for the power rails on our breadboards (I believe it was ground, 3v3, 5v from outside to inside). I realized while planning the new H-bridge design that if I swapped one of the rails, the circuit could look a lot cleaner and it would feel more “correct” to me. This decision was rash and led to confusion amongst the team (and maybe a blown capacitor, if I remember correctly).

5.3. Things that seem like great ideas due to the cool or clean/purity factor may not work that way in reality:

  • We built the jointed arm because we thought it would be really cool to have it, thereby ignoring the additional challenges we took on and did not have time to properly implement.
  • Our robot was meant to look clean and simple, and I think we succeeded on Hektar’s exterior. But, by making the exterior cute and compact, we were left trying to shove all our circuits into the tiny box we had made (hence our resort to tape). The better solution would have been to make a larger chassis and create more room for circuits, sacrificing the cute small size for internal modularity and order.
  • 5.2 was an example of me trying to make a cleaner design at the expense of design standards and modularity, and it ended up backfiring.

5.4. You can’t do it all. What is your goal here: To learn the most? To have the most innovative design? or to win [with a minimum viable product]? 5 weeks is not a long time. Our team chose to innovate and learn in the process, and the proof of that can be seen in the design of our robot. However, in a real engineering job (as my experience at Broadcom has shown) what matters is to deliver something that meets or exceeds the design specifications with the least amount of human/economic capital. Sometimes the best solution is not elegant or sophisticated. This was the truth that we decided to ignore for our project.

5.5. People aren’t robots: trust is everything and everybody has a distinct communication style. Often during times of tension there is no wrong person or perspective: the two are just speaking different languages. I encountered this with one of my group members. Some people like their ideas to be challenged head on. Others have a greater need to feel trusted (perhaps in the form of validation from the group) before an analysis of their work can be done. I see myself in the wrong for not recognizing this right away. At the end of the day, everybody has emotional needs and I believe your relationships are what matter far more than whatever work you can push out by yourself.

5.6. It can be a trap to spend too much time hypothesizing where problems may occur and not enough time running experiments to actually figure it out (though the contrary is also true). This is a trap I fell into, and this especially becomes a time sink when multiple opinions are involved. A failure during testing is not a waste of time or failure at all; it’s an invaluable resource.

5.7. Take more pictures. Document More.

5.8. Being calm is a skill and a strength.

My Lovely Team. I can’t think of a better way to spend a summer.

Why I wasn’t at the Vancouver Climate Strike

It was fun,
It was slacktivism.

A strike should have concrete goals and actionable items. There should be a way to quantify its success. It should take a narrative of this form: “We are marching because we want these things to happen. The purpose of this large march is to show how strong public sentiment is on this cause. We all agree on having these things happening, so government/company/whoever, please do them.”

“Showing you care” is not an actionable item. It is virtue signalling.

How to make your strike useful: Pick something concrete. I don’t care what. Americans want back in to the Paris agreement? Sure. Increased carbon tax? Why not. Do them both for all I care. But have something. Maybe even draft your own bill and send it to Ottawa. THEN you can strike. And then I will be there.

There are a lot of real and important strikes and protests going on around the world.
The Climate Strike was not one of them.

Sincerely,

Someone who cares about the environment (but also doesn’t want class to be cancelled for no good reason).

Computer Vision Project

(UBC ENPH 353 Course Report)

Keywords: Robot Operating System (ROS), Keras, OpenCV

Hello all. Welcome. This is half a log book for the ENPH 353 course. It will focus on my contribution to the project (the computer vision portion). Shoutout to Gosha Maruzhenko for providing navigation and making sure we don’t hit any pedestrians.

Github repositories for some context: [1] [2]
Plate reader python notebook (uploaded to Colab for readability) [3]

Overview

This is a brand new course at UBC! So firstly, I would like to give special thanks to Miti and Griffin at UBC for setting everything up. I have learned a lot in this course.

The goal of ENPH353 is to design a robot to navigate virtual environments and read license plates using machine learning. Fancy. Stay on the road, don’t hit the pedestrians, you know the drill.

Apparently UBC Engineering Physics and MIT share a few course designs (ENPH 253, for example). For those who are familiar with MIT’s “Duckietown”, our task is quite similar to that. We even use the same framework to control our robots (ROS). The difference is that our track is not physical, and is instead modelled in a software called Gazebo, which integrates with ROS nicely.

Figure: our Gazebo simulation environment. Thanks, Miti and Griffin!

This is the course we must navigate. We are restricted to only two methods of interfacing with the robot:

  1. The camera feed
  2. Twist commands (move forward, move backwards, turn left, turn right)

With these two I/Os, we must design a robot which will accurately report license plates and their location through a ROS message.

I know what you may be thinking: since this is in virtual space, can’t you just hardcode motion into the robot, select only a certain area of the screen after a timer goes off, etc.?

ROS Gazebo Colormask License Plate
Figure: Colormask

The answer: yes, I guess you could do that. But why would we? We are here to learn, dammit. So if you ask yourself “why didn’t they just use this simpler exploit given to them by the nature of the simulation?”, the answer is most likely “for the sake of knowledge and art”.

With that said, like literally every other team I’ve talked to, we did end up resorting to a few cheap tricks. The plates are found with a somewhat selective colormask that is restricted to a certain field of view. I’m not proud of it, but sometimes you just need to plop in the ugly solution and get er done.

Methods

Yes, one could set up a full CNN plate reader which scans the whole image for characters, such as this beauty. It would indeed be dope, but in terms of training time this model is far from ideal, and given our short time frame it is a high-risk strategy to rely on one single complex method to do everything for us. Furthermore, since the plates are all the same size and color, it is extremely attractive to break this into two separate systems: one that finds the license plate and another that tells us what is on it.

Plate Detector

We had originally planned for the plate detector to be another object detection neural net. Due to time constraints we moved to an OpenCV color mask instead. It’s really nothing impressive so I’d say you can just skip this section.

Plate Detection Failure ROS Gazebo
Figure: False Positives for the Plate Detector

Once the color mask was chosen, we passed it through an opening in OpenCV (that is, erosion and then dilation) to remove the random white pixels here and there. We then passed that through findContours, filtering the results so that only rectangular shapes with a minimum bounding height and width are kept.



ROS Gazebo License Plate Capture
Figure: Plate Detector

The problem here was that, for some reason, OpenCV also counted the road features as white things with four edges (see figure 2). This was a simple fix: filter for the purple in the image, once again find contours, and then scale the filled bounding box of the purple so that the white license plate is always covered. Using that bounding box as another bitwise-and mask, we have the final product, which is effective and fast:

Observe that it is not perfect. Because the license plate itself is not included in our color mask (only the “true” white), the bounding box has simply been stretched a little. This works for head-on angles, but it leads to imperfection when the plate is read at an angle.

Data Generation

Lab 3 of this course was building a CNN to read the characters of a virtual license plate. However, since the images were perfect (no skew or color deformation), and we could generate a lot of them, it achieved 100% accuracy within the first epoch…

In the real competition, running through Gazebo, where there is skew and color deformation due to lighting and camera effects, the task is not as simple.

To generate data, we employed two methods:

  1. Create a generator python script, which would generate plates and artificially skew them in front of collected backgrounds. This had the advantage that we know the corner locations and text for any number of images, but does it map to the real virtual world?
  2. Collect real-camera data through the use of a bash script. Said script spawned the robot in a specific location, turned it ever so slightly, killed the Gazebo model, and then did the whole thing over again. This has the advantage that it is real-world data, but is it comprehensive? Unfortunately it also neglected to kill the xterm keyboard controller, so after running it overnight my computer looked like this:
  3. Of course, there was also the option of manually driving around, gathering and labelling data. Sounds like a bad idea, but once you realize that you could have over 1000 images of juicy REAL data in under 3 hours, it becomes pretty appealing.

We elected to use methods 1 and 3. At the end of this process, we had over 700 simulated license plates and over 1000 real license plates. Example data is shown below:

Both types of data were unskewed and separated into individual characters (see the python notebook below) so they could be fed into the neural network. Each 40×80 character image looked like this:

Figure: Real data after characters cropped in python notebook. Notice that some characters are sometimes quite close to the chosen crop boundaries.
Figure: Synthetic data after characters cropped in python notebook. Notice how well behaved the synthetic data is compared to the real data.

Keras Neural Network

This is the python notebook housing the neural network. Most of it is just piping the data to the correct form (unskewing simulated data and cropping, the result being what you saw above). But in the end we were left with this model and an accuracy of over 95% on real data:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            (None, 76, 36, 32)        832       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 38, 18, 32)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 36, 16, 32)        9248      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 18, 8, 32)         0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 16, 6, 32)         9248      
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 8, 3, 32)          0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 6, 1, 32)          9248      
_________________________________________________________________
flatten_1 (Flatten)          (None, 192)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 60)                11580     
_________________________________________________________________
dense_3 (Dense)              (None, 36)                2196      
=================================================================
Total params: 42,352
Trainable params: 42,352
Non-trainable params: 0
_________________________________________________________________
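For reference, the model can be reconstructed from the summary above. The kernel sizes are inferred from the parameter counts (832 = 5×5×1×32 + 32, so the first layer is a 5×5 convolution on a single-channel input; the rest work out to 3×3), and the 80×40×1 grayscale input matches the 40×80 character crops:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Reconstruction of the printed model summary; 36 outputs = 26 letters + 10 digits
model = keras.Sequential([
    keras.Input(shape=(80, 40, 1)),
    layers.Conv2D(32, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(60, activation="relu"),
    layers.Dense(36, activation="softmax"),
])
```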

That’s all, Folks

There you are! It works. It is accurate enough.

I realize this report is quite surface level, so if you have any questions, please reach out! Though I truthfully don’t think there is anything too new to share here. But here is the takeaway: this is a course for engineers. The main point wasn’t creating a fantastic neural network and showing off technical prowess; it was creating something that works well in a short amount of time. This is NOT a research course. It is a project course, and therein lies the difference (in contrast to my opening statements about the art and beauty of our creation).

Also, remember it really doesn’t matter how fantastic your model is if: 1. You have shitty data and 2. your model is not a realistic depiction of the real world. I suppose this is the truth for any model you create, not just ML models (a la Nassim Taleb). I’ve talked to students who have run over 200 epochs on their model and generated 5-10x the images I have, for perhaps only a marginal increase in accuracy. That, I would say, is the power of having real data. You just can’t beat it.

These are some of the cool things I have found while doing research, hopefully they will aid in your next computer vision project:

Really cool license plate recognition (though out of the time scope of our project)

YoloV3 (I really love Redmon’s approach to academic writing here. What a fantastic costly signal to his work)

Creating your own object detector – Towards Data Science

Simple tDCS Device: a Start

Disclaimer: don’t build one of these. But if you do, this design is probably better than everything else on the internet thus far, but maybe not. Ultimately I am not responsible for anything you do or build.

Transcranial direct current stimulation (tDCS) is a technology used to treat a variety of ailments, especially anxiety and depression. Read more about it on wikipedia or in one of the 1000+ studies investigating the technology. This article showcases a DIY tDCS device that I have built.

The implementation is absolutely simple. The only real choice was picking the LM334 current regulator; after that, TI nicely provides you with a schematic to follow. This is literally the simplest device one can create. I added an LED just so I could stand out a little bit.

Why this circuit?

Despite tDCS devices being so simple, I have found a few devices online that I am not fond of. I would not recommend any circuit that uses the LM317 for current regulation, since the minimum recommended output current of that IC is 10 mA, about 5-10x higher than our design specification. This is why the LM334 is the better option here. Many also point out the temperature dependence of the LM334, which other online designs take no measures to correct. But this is easily rectified with the design in figure 15 of TI’s datasheet, pictured below. Even without this precaution, the predicted change in current is only 7 µA/K, or less than 1%/K, so it isn’t really a huge deal anyways. Just keep your tDCS device away from the fireplace.
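To size the set resistor, the LM334’s sense voltage is roughly 227 µV/K across R_SET (about 67.7 mV at 25 °C, per TI’s datasheet). A quick back-of-the-envelope check, as a sanity calculation rather than a design procedure:

```python
def lm334_rset(i_target_a, temp_k=298.0, uv_per_k=227e-6):
    """Set resistor for a target LM334 current: R = (227 uV/K * T) / I."""
    return (uv_per_k * temp_k) / i_target_a

# For the 2 mA tDCS target:
print(round(lm334_rset(0.002), 1))  # roughly 33.8 ohms
```

The zero-tempco circuit in figure 15 splits this into two resistors (R1, R2) plus a diode, which is why the analysis below talks about the current depending on two resistor values.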

Image from Texas Instruments “LM134/LM234/LM334 3-Terminal Adjustable Current Sources”

I also decided to use a DC-DC voltage converter for my circuit, but this is really not necessary if you have a fresh 9 V battery. For the F3 → Fp2 montage and my DIY sponge electrodes (below), I have observed that only 5-7 volts are required to maintain a nice 2 mA. The DC-DC regulator might be required for higher-impedance montages or combination tACS+tDCS in lieu of an extra 9 V battery.

Figure 15 in breadboard form. R2 and R1 values were generated through resistors in series.

Analysis

There are two downsides to this circuit. Firstly, because the current level depends on two resistor values, it is slightly more cumbersome to design a circuit with a 1 mA/2 mA switch. I wasn’t interested in doing this, though it could be done with some resistor/transistor/switch fun. If I wanted a whole spectrum of currents, I might as well design something with tACS capabilities (spoiler alert).

Secondly (and this is actually a downside), because the LM334 really wants to push that 2 mA out there, one gets a fun shock when the power is first connected or the electrodes are reapplied. Simple solutions include a series potentiometer to slowly ramp up the current, or a very large inductor to provide a nice L/R time constant. Neither is ideal, since one is a bit impossible and the other requires me to manually move a knob every time I want to adjust my headgear. A well-designed circuit would sense this voltage spike and limit it accordingly.

Lastly, some sort of voltmeter would be nice. I found it quite useful to observe the electrode voltage, since it lets one infer the quality of the electrode placement.

Cheers,

Tyler

References:
http://www.ti.com/lit/ds/symlink/lm134.pdf
https://www.diytdcs.com/2013/01/the-open-tdcs-project/