All I can be sure of, really, is that capital == good, where capital can be loosely defined as everything that is anti-entropy. Because it is anti-entropy, by definition capital takes time and effort to accrue.
Valid forms of capital include:
Economic: financial assets, investments, means to security and increased leverage
Social: strength of relationships (specifically with high quality people), eligibility into social networks, trust
Status: creation of [valid and self-important] self-narrative, material wealth symbols
Skills: “external”, especially marketable (which includes all hard/technical training, but also negotiation, leadership, conflict management), and “internal” (resilience, creativity, ability to fail, ego depreciation, considering the arguments of others, being “interesting”)
Health: fitness, mood, strength, anything which contributes to a healthy lifestyle and increased time alive, physical safety
Autonomy: can be tied to economic capital, but is not always. Is NOT equivalent to lack of responsibility
And in the end, we can use and trade our capital for (1) other forms of capital (hopefully you’re good at negotiation), or (2) pleasure: that which is great and enriching in moderation but is not enough to sustain you alone.
Nearly all (all?) of our day-to-day conversations revolve around the central question: how do I trade my capital?
Do I take the job which pays more but may jeopardize my health (physical safety)?
Should I eat out with my friends? Is it worth buying that car?
Is it worth paying for this course or degree?
Should I go to the gym? Should I eat the ice cream?
How do I get more capital?
The list could literally go on forever.
The Government and Unions
It has always been the job of the government and unions to restrict the domain of such questions, especially for those less privileged, to minimize the abuse of power in zero-sum games.
For instance, properly imposed safety regulations should make it impossible to ask such questions as “Is it better for me to risk my life following the demands of my employer, or get fired and risk starving to death?”
Ethics and Morality
Practically, ethics and morality further restrict the rules of capital exchange. Even abstract problems, such as the trolley problem, can be viewed as a problem in allocating capital.
While these take the form of socially constructed laws, they also bleed into natural laws. To violate ethical/moral principles is to slash your own social capital in favour of [typically] economic capital or pleasure.
Because all forms of capital are highly interdependent, this strategy is high-risk and not advised for long term games.
Comparisons: there is no absolute
We all value certain capital more than other kinds of capital. This is what makes trade mutually beneficial.
There will always, without exception, be someone with more capital than you and with less capital than you (see the previous point).
The power of social pressure is predicated on the need for social capital.
There are certain really good moves in the game. What makes start-ups so alluring?
Ideally, they provide you with a high-status and high-paying job which interests you (pleasure) and leave you surrounded with a strong team you enjoy being a part of, all while having autonomy around your success. Though high risk, it is a means for growing your capital multi-dimensionally.
This art project was designed and created by the talented Nico Woodward and Nathyn Sanche at eatART Vancouver. Our team of engineering students was brought on to help control the thing. See more of Chrysafly here.
I was going to do a more technical write up for Chrysafly, similar to that of the robot competition, but I really didn’t see the value in it, so I will just add some pictures.
I was recruited to help Emma Gray and Michelle Khoo work on the wing calibration (position sensing using the encoders and manipulation of the wings). This included the controller circuit (below) and attempts at waterproof electronic enclosures.
Feeling Good, David D. Burns, M.D. (1980): Not a self-help book.
The worst thing about this book is the way it looks, and I’m not referring to the cover (though it could use an update from the 1980s styling). Although this book is clearly marketed and focused towards treating those with depression, the CBT techniques described in Feeling Good are universally applicable and will improve the worldview, productivity, and mood of anybody who takes them seriously. So, while I applaud Burns for writing the #1 go-to book on depression-related CBT inquiries, he might just be selling his content short by limiting its scope. While I would love to gift this book to everyone around me, I worry the message will get shot down immediately with the qualm “but I’m not depressed”. This book is about more than treating depression: it’s a new paradigm for anybody who has emotions which need interpreting (you probably fall into this category).
With that out of the way, here is my review: It’s Lindy!
The meat of this book is really in the first 150 pages, with the remaining 3/4 of the book exploring examples that may or may not resonate with you. If you are interested in learning real CBT and don’t know where to start, I would recommend this book without a doubt. If you aren’t interested in learning CBT, here’s why you maybe should be:
Cognitive Behavioural Therapy (CBT) is a means of rewiring your brain for betterment and productivity. It’s a method of emotional de-escalation, which can be focused internally or externally. This de-escalation will free your psyche and your mental bandwidth, making you calmer, less prone to procrastination, more creative, and even more willing to take risks.
CBT is not cathartic. It’s not Freudian, either. Hell, it’s not even about expressing your feelings! Maybe that kind of stuff is overrated….
If you’re willing to change, which we all should be, you will be able to benefit from CBT. Read the first 150 pages and try the exercises. Avoid personalizing them too much. Perhaps you will be surprised by how easy and effective of a tool this is.
For better or worse, the practice of CBT has not changed much over the last four decades. Go pick up this book!
Here you are again. You know you should do it. In fact, you even want to do it. You’ve become so bored just floating along and not challenging yourself that it makes you sick and even spiteful towards yourself. Yes, Scrubs is a fantastic sitcom, and yes, beer really does taste good and gives you that momentary feeling of contentment, but goddam that’s a sad life and you know it. Every move you make is a defensive one, because to act in any other way hurts.
And, although dying slowly in that fashion doesn’t hurt at all, it is extremely painful. You know it. No amount of alcohol can make you forget that you could be doing more and you could be doing better.
That knowledge makes you guilty. Look at yourself. You’re a complete failure. Look how pathetic you’ve become, just searching for a way to retreat and avoid the suffering of life. You could be doing better, you could be working on projects and bettering yourself and proving to yourself and everyone around you that you are worth the air you breathe. But for the past few weeks you haven’t been doing that. You’ve been hiding.
Since you’ve been hiding, you feel like you’re living a bit of a lie. It’s something that embarrasses you and you’d really rather others not know about it. The guilt builds up.
You haven’t been competitive with your peers for the past few weeks, and for the first time you see most of the people around you being more successful than you are. And really, it is your fault. There’s nobody else to blame. The guilt builds up.
You are so guilty and ashamed of yourself that you avoid even being in the presence of others; what if they find out your dirty secret? What if they realize how utterly pathetic and unskilled you are? The guilt builds up.
You end up trapped in your own prison. For fear of judgement, everything you do becomes painful and more difficult. You are so embarrassed about the whole ordeal that you are paralyzed, unable to ask for help while the parasite that is this thought pattern continues to eat away at your motivation and intelligence. You can’t escape it.
You stare at your math homework once more. This used to be so easy. The content doesn’t look much harder than what you’ve already done, but getting going again feels like pushing against a moving train in hopes of not getting plowed. The wall you’ve built between you and the work is just too thick.
Does this sound like you?
Here is the model: once upon a time, your frontal cortex had a pretty good connection with cognitively heavy content. As challenging as the concept was, you had your full arsenal of attention to throw at it. But, as you start to run away, your fear system wraps around your cognition. It learns to become the mediator, or the guard, between your actions and your executive functioning.
So, the difficulty studying isn’t really about the math at all. The math is simple. It always was, it always will be. Math never gets harder or easier, it just exists. It has a constant runtime O(1).
But, instead of making calls to your frontal cortex object directly, your intelligence has become a nested class inside of your fear object. This design is far from ideal: the fear calls are quite expensive and exhausting, and their complexity scales with the strength of the prison you’ve constructed. I think fear calls are at least O(n), or maybe even O(n^2) depending on the person.
Thanks to the fear wrapper, the only way to get to your intelligence is to go through the middle man.
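The metaphor can even be pushed into actual code; here is a purely illustrative Python sketch (every class name here is invented for the joke):

```python
# Purely illustrative: Cortex and Fear are invented names for the metaphor.

class Cortex:
    """Direct access: solving math is O(1) -- the math never changes."""
    def solve(self, problem):
        return f"solved {problem}"


class Fear:
    """After enough avoidance, every call to the cortex routes through here."""
    def __init__(self, cortex, wall_strength):
        self.cortex = cortex
        self.wall_strength = wall_strength  # grows each time you run away

    def solve(self, problem):
        # The fear tax: cost scales with the prison you've built, O(n).
        for _ in range(self.wall_strength):
            pass  # rumination, dread, avoidance...
        return self.cortex.solve(problem)


# You no longer call the cortex directly; the middle man answers first.
mind = Fear(Cortex(), wall_strength=1000)
print(mind.solve("calculus homework"))  # → solved calculus homework
```

Refactoring, in this picture, means shrinking wall_strength until you can call the cortex directly again.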
How to Forgive
It’s time to do some refactoring. The above pattern occurs because we have instantiated a recursive negative affect style towards studying. The loop will continue until you reach full self-forgiveness, accepting your faults, and thus being liberated by them [See related article and study].
Here are some tools to free yourself:
Write about yourself
Practice “active CBT”
Remember that this experience is commonplace
Perfectionism is death
You can’t brute force this
1. Write about yourself
The first step is to admit that this is a problem for you. Write or talk to a friend about it, I don’t care, but you have to get it out there. It’s impossible to work on a solution if you don’t define the problem.
Outline all the times you have fallen into this trap. Mention all of them. This might feel embarrassing, but in fact that is a good thing. To absolve your sins, you must lay them out and admit your errors. This takes vulnerability and strength.
Check: have you really forgiven yourself? Think once more of the times you have failed. Focus on them. What sensations surface? If you still feel guilt, embarrassment, or shame, maybe it’s time to restructure those narratives using a CBT strategy. There is no need to feel guilty about having these feelings resurface: retraining your brain takes time for everybody, and you will likely need multiple iterations of restructuring before you can expect any changes to stick.
You have to put in the work with this. It takes time. But, because this activity is a source of forgiveness and not one of guilt, it should be the opposite of stressful.
2. Practice “active CBT”
While and after you write about yourself, you will still likely run into emotional barriers while working. It might give you a hit of anxiety. Once again, this is expected. Relax.
In addition to writing and practicing forgiveness outside of your study time, you will also have to restructure your emotional blockages while studying. Yes, this will cut into the amount of time you spend actually working, but the compound payoff of this investment is worth it. Forgive yourself while studying. Give out forgiveness like the stuff grows on trees.
The difficult thing here is differentiating between cognitive restructuring and avoidant affect. It takes practice, but it comes down to this: is your thinking focused on the material, or is it not? Are you turning inwards for a distraction, or are you maintaining an outward focus, looking for forgiveness and acceptance?
You should have an end goal here, and it’s to actively look at your homework page without feeling dread, guilt, or fear. During this exercise, don’t look for those emotions (as you tend to find what you’re looking for). Instead, actively look for peace. The important thing is to do so without escaping inwards or turning off. In time, you will find it.
3. Remember that this experience is commonplace
Loneliness is one of the most common emotions that we share. In the same paradoxical way, without your knowledge, many people around you are going through the exact same emotional trough.
4. Perfectionism is death
A lot of this guilt and dread comes from an overactive ego and the denial or subsequent hate of one’s imperfections. This is not to say that this trend is specifically narcissistic [in thinking that one is perfect], but rather that it is extremely dangerous to hold one’s self-standards of progress too high as one seeks perfection.
Once again: forgive yourself. You are far from perfect, and that’s perfectly okay. Your friends, colleagues, and parents are also far from perfect, but somehow the world hasn’t fallen apart yet.
Freeing yourself from perfectionism is NOT the same thing as rejecting the aim of self-improvement. Do not fall into either end of the trap.
In the same vein:
Accept that you’re not a machine. You’re a human with emotional and physical needs. Make sure that those needs are a priority for you, as they provide the foundation for your life and abilities.
Asking for help is not a sign of weakness. Don’t be afraid to do so.
5. You can’t brute force this
Aggression is a powerful tool, but it must be used sparingly. It sometimes seems that a simple fix to the problem I’ve outlined is to push harder and be tougher on yourself.
This is an effective method for squeezing the last bit of juice out of one’s faculties, but unfortunately it is self destructive and not sustainable. While it may give you the motivation to hand in that assignment today, you are doing it at the expense of your effectiveness tomorrow.
You are throwing yourself at each wall, successfully breaking through, but one cannot do this unscathed. As you get weaker, the walls remain the same strength.
This ring was designed in Fusion 360 (which I would say is much better thought out and more intuitive than SolidWorks) and was a Christmas present. The design was created using component subtraction from a “comfort fit” style ring. I don’t think there’s much more to say about it, so I will let the photos speak for themselves!
This is one of the first claw prototypes from Robot Competition. It was created with Onshape (I still think Fusion360 is king).
Whoa! Look at that sexy beast! This enclosure was originally for the UBC Solar car, to house our Nomura MPPTs and other circuits, but it was abandoned. I included it here only because I’ve been enjoying rendering things and, oh my, that blue acrylic is absolutely sexy in the pale moonlight.
I did not create this, but I think it’s fun to see the perks of being friends with other engineering students. This is Gabriel waterjet cutting sheriff badges for our cowboy party!
Hi! Welcome. This blog post is made to show off the robot that my team and I developed over the summer. If you have any questions, please leave a comment or send me an email. I want to help any way I can!
Table of Contents
Controls and IO (line following, arm and claw control, navigation)
The “Disaster Bug”
1. Robot objective
The Engineering Physics Autonomous Robot Competition (known colloquially as robot comp) is a five-week, hands-on project course in which teams of four work together to build an autonomous robot from scratch. And when I say scratch, I mean scratch. There is a lot of work involved in creating the chassis, wheel mechanisms, and circuits (for control and sensors), and then writing code to navigate the course.
This is what the course participants have access to:
Andre Marziali provided H-bridge circuit schematics using the LTC1161 gate driver, as well as a crash course on development boards (STM32 “Blue Pill”)
Laser Cutters / Waterjet cutter with particle board and various plastics
STM32 “Blue Pill”, DC motors, mosfets, optoisolators, gate driver, Li-Po batteries, NAND gates and other electronic components [with restrictions, no integrated H bridges for instance, some other motors are banned].
Raspberry Pi allowed
The rest is up to us (using our fantastic instructors for guidance)!
Robots compete head-to-head in two-minute heats. The goal: collect as many stones as possible and place them in your gauntlet. The team with the most stones wins!
Consistent with previous years, 50% of teams captured a grand total of 0 stones on competition day.
2. Mechanical Design
Most of this work was done by the fantastic Gabriel and Fiona. They put a lot of work into the design but this section will be short.
Our robot got its name from an Ikea lamp, due to its visual similarities:
We built an arm prototype out of Meccano, but it’s not very 𝕒𝕖𝕤𝕥𝕙𝕖𝕥𝕚𝕔 so I will not include it here.
The arm design is especially unique. It uses a series of double parallelograms, which keep the claw parallel to the ground at all times. It is driven by two motors attached via worm gears, and its angle relative to the robot is controlled by a centrally mounted servo:
Yes, the circuits were messy. At least we had nice connectors between them. I would do this differently if I were to rebuild Hektar.
The chassis was made from particle board. The wheels and geartrain were designed to ensure a proper balance between speed and torque to push the robot up the ramp.
3. Controls and IO
Our robot was somewhat unique in that it was being controlled by a Raspberry Pi in addition to the STM32 “Blue Pill”. Why did we do this? Truthfully, it was mostly for learning opportunities. The team was really excited to learn Robot Operating System (ROS) because of its widespread adoption elsewhere.
Our robot had ’em.
Dual H-Bridge Design:
As mentioned above, this design was taken from Andre Marziali but it’s worth showing here:
The LTC1161 gate driver eliminated the need for P-channel mosfets, thus (according to the program director) making this a cheaper and more reliable design. Sounds good to me!
And, the soldered version (I think we all ended up making one or two of these):
We encountered a few more hiccups with this circuit. Our robot needed two of these circuits (because our arm was driven by DC motors and not stepper motors/servos), which led to a shortage of PWM-capable outputs on the Blue Pill. To resolve this, instead of using two PWM signals per motor (forwards and backwards), I built a 2-way demultiplexer so that each H-bridge could be controlled with a single PWM signal and a forwards/backwards pin. This is the only change we made to the above circuit (not pictured).
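What the demultiplexer buys can be sketched in software terms; this is a hypothetical Python sketch of the mapping it implements (the real logic lived in hardware and Blue Pill firmware):

```python
def motor_command(speed):
    """Map a signed speed in [-1, 1] to (direction_pin, pwm_duty).

    With a 2-way demultiplexer, one PWM line plus one direction GPIO
    replaces the two PWM lines (forwards/backwards) per motor.
    This is an illustrative mapping, not our actual firmware.
    """
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be in [-1, 1]")
    direction = 1 if speed >= 0 else 0  # forwards / backwards select pin
    duty = abs(speed)                   # single PWM channel carries magnitude
    return direction, duty


# Two motors now need 2 PWM pins + 2 plain GPIOs instead of 4 PWM pins.
print(motor_command(0.5))   # → (1, 0.5)
print(motor_command(-0.25))  # → (0, 0.25)
```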
Line Following / Feature Detection
Our robot had an array of five infrared reflective sensors (QRD1114) to detect tape. The circuit output was regulated to 3.3 V so the raw input could be fed into the Blue Pill.
Right: the soldered protoboard in action, transmitting data from the Blue Pill’s analogue reads to the Raspberry Pi through rosserial.
The Raspberry Pi/ROS did offer us one advantage that very few other teams had: WiFi. While of course we couldn’t use this during the competition, it made software tuning and debugging a breeze compared to other teams. Our PID tuning was performed with a GUI over RealVNC!
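The error that feeds a PID loop like ours can be computed as a weighted centroid of the five sensor readings; here is a sketch in Python, with invented weights and threshold (not our actual firmware):

```python
def line_error(readings, weights=(-2, -1, 0, 1, 2), threshold=0.5):
    """Signed offset of the tape from the sensor array's centerline.

    readings: five normalized QRD1114 values in [0, 1], high over tape.
    weights/threshold are illustrative choices. Returns 0.0 when centered,
    negative/positive when the line drifts left/right, and None when no
    sensor sees the tape (a lost-line condition worth flagging).
    """
    active = [(w, r) for w, r in zip(weights, readings) if r > threshold]
    if not active:
        return None
    total = sum(r for _, r in active)
    return sum(w * r for w, r in active) / total


# Centered on the line:
print(line_error([0.1, 0.6, 0.9, 0.6, 0.1]))  # → 0.0
```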
For the curious, here is the component network that controlled all of Hektar’s motion:
ir_error_node: reported the state of how far Hektar is from the centerline. Also sends a flag when it believes a feature to be present (a T or Y intersection, for example).
control_master: keeps track of how many features the robot has hit and tells it what to do accordingly (line follow, stop, dead reckon). Also controls the arm.
The rest are somewhat self-explanatory and are also visible in the GitHub repo.
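control_master’s bookkeeping amounts to a small state machine; here is a hypothetical pure-Python sketch (the plan list and action names are invented, and the real node communicates over ROS topics rather than method calls):

```python
# Hypothetical sketch of control_master's logic; the real node lives in
# the GitHub repo and reacts to ir_error_node's feature flags over ROS.

class ControlMaster:
    """Advance through a fixed plan each time a course feature is flagged."""

    def __init__(self, plan):
        self.plan = plan          # e.g. ["follow", "dead_reckon", "stop"]
        self.features_seen = 0

    def on_feature(self):
        """Called when ir_error_node reports a T or Y intersection."""
        self.features_seen += 1

    def current_action(self):
        # Clamp to the last step so the robot doesn't run off the plan.
        idx = min(self.features_seen, len(self.plan) - 1)
        return self.plan[idx]


master = ControlMaster(["follow", "follow", "dead_reckon", "stop"])
master.on_feature()
master.on_feature()
print(master.current_action())  # → dead_reckon
```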
The use of a Raspberry Pi also meant that we could manually control our robot remotely:
4. The “Disaster Bug”
They all said that something would break the night before the comp for no apparent reason. I didn’t believe them, but they were right: it’s a measly 12 hours before our robot is put on display in front of hundreds of people, and Hektar forgot how to follow lines!
What is going on!? We should be calibrating setpoints for the arm locations*. After an arduous debugging process, we found the invisible culprit: EMF. Whenever the left motor was running, the unshielded serial connection between the Blue Pill and the Raspberry Pi would be interrupted (scoping the connection showed only the Hi logic level). Uh oh. That connection was our robot’s spinal cord, and without it Hektar could not walk.
How was this happening when our circuits were electrically isolated? Also, why was our robot working flawlessly for weeks beforehand, only to have it fail the night before? I actually still don’t know the answer to the latter question…
*we had shown that Hektar could pick up stones as well as deposit them. He could also stop at intersections and know which way to go. Collecting one stone was close and realistic, it was just a matter of putting 2 and 2 and 2 together.
The fix was two pronged. Firstly, we reduced the noise that the motor gave off by following this guide. But it wasn’t enough.
Then came the crash course in electromagnetic shielding and antennas:
The USB serial connection typically has 3 pins: Tx, Rx, and Ground. However, the Blue Pill and Raspberry Pi were already connected to the same ground because they were sharing the same power supply. The result: I had created a beautiful and large loop that is the perfect antenna for motor noise.
Once we removed this unnecessary ground, the robot’s future was getting brighter. Coincidentally, so was the room we were in, because we had spent all night figuring out this problem and the sun was starting to rise.
You might be wondering why we opted for USB serial instead of using the Raspberry Pi’s built-in GPIO pins. Fair point; however, the GPIO pins don’t have overvoltage protection, and frankly I don’t trust my electrical-taping skills that much.
5. Takeaways and Mistakes
I think the biggest takeaways from this project were not technical. They had to do with organizing people on a short-timescale project.
5.1. When it comes to Gantt charts and timing, you really don’t know what you don’t know. We found that it was rarely the technical challenges that impeded our progress, but instead all the “trivial” problems we didn’t foresee, such as dealing with library errors, setting up environments, soldering things incorrectly, and integration. In this project, I would say that finding and fixing small bugs took up more time than our actual engineering design work. This is not something I expected going into the project.
5.2. Stick to the agreed-upon standards, even if it isn’t the “most logical” thing to do. I made this mistake and here I have to own up to it. The team decided upon a standard for the power rails on our breadboards (I believe it was ground, 3v3, 5v from outside to inside). While planning the new H-bridge design, I realized that if I swapped one of the rails, the circuit could look a lot cleaner and would feel more “correct” to me. This decision was rash and led to confusion amongst the team (and maybe a blown capacitor, if I remember correctly).
5.3. Things that seem like great ideas due to the cool or clean/purity factor may not work that way in reality:
We built the jointed arm because we thought it would be really cool to have, thereby ignoring the additional challenges we took on and did not have time to properly implement.
Our robot was meant to look clean and simple, and I think we succeeded on Hektar’s exterior. But by making the exterior cute and compact, we were left trying to shove all our circuits into the tiny box we had made (hence our resort to tape). The better solution would have been to make a larger chassis and create more room for circuits, sacrificing the cute small size for internal modularity and order.
5.2 was an example of me trying to make a cleaner design at the expense of design standards and modularity, and it ended up backfiring.
5.4. You can’t do it all. What is your goal here: to learn the most? To have the most innovative design? Or to win [with a minimum viable product]? Five weeks is not a long time. Our team chose to innovate and learn in the process, and the proof of that can be seen in the design of our robot. However, in a real engineering job (as my experience at Broadcom has shown), what matters is delivering something that meets or exceeds the design specifications with the least human/economic capital. Sometimes the best solution is not elegant or sophisticated. This was the truth we decided to ignore for our project.
5.5. People aren’t robots: trust is everything and everybody has a distinct communication style. Often during times of tension there is no wrong person or perspective: the two are just speaking different languages. I encountered this with one of my group members. Some people like their ideas to be challenged head on. Others have a greater need to feel trusted (perhaps in the form of validation from the group) before an analysis of their work can be done. I see myself in the wrong for not recognizing this right away. At the end of the day, everybody has emotional needs and I believe your relationships are what matter far more than whatever work you can push out by yourself.
5.6. It can be a trap to spend too much time hypothesizing where problems may occur and not enough time running experiments to actually figure it out (though the contrary is also true). This is a trap I fell into, and this especially becomes a time sink when multiple opinions are involved. A failure during testing is not a waste of time or failure at all; it’s an invaluable resource.
Keywords: Robot Operating System (ROS), Keras, OpenCV
Hello all. Welcome. This is half a log book for the ENPH 353 course. It will focus on my contribution to the project (the computer vision portion). Shoutout to Gosha Maruzhenko for providing navigation and making sure we don’t hit any pedestrians.
Github repositories for some context:  Plate reader python notebook (uploaded to Colab for readability) 
This is a brand new course at UBC! So firstly, I would like to give special thanks to Miti and Griffin at UBC for setting everything up. I have learned a lot in this course.
The goal of ENPH353 is to design a robot to navigate virtual environments and read license plates using machine learning. Fancy. Stay on the road, don’t hit the pedestrians, you know the drill.
Apparently UBC Engineering Physics and MIT share a few course designs (ENPH 253, for example). For those who are familiar with MIT’s “Duckietown”, our task is quite similar to that. We even use the same framework to control our robots (ROS). The difference is that our track is not physical, and is instead modelled in a software called Gazebo, which integrates with ROS nicely.
This is the course we must navigate. We are restricted to only two methods of interfacing with the robot:
With these two I/Os, we must design a robot which will accurately report license plates and their location through a ROS message.
I know what you may be thinking: since this is in virtual space, can’t you just hardcode motion into the robot, select only a certain area of the screen after a timer goes off, etc., etc.?
The answer: yes, I guess you could do that. But why would we? We are here to learn, dammit. So if you ask yourself “why didn’t they just use this simpler exploit given to them by the nature of the simulation?”, the answer is most likely “for the sake of knowledge and art”.
With that said, like literally every other team I’ve talked to, we did end up resorting to a few cheap tricks. The plates are found with a somewhat selective colormask that is restricted to a certain field of view. I’m not proud of it, but sometimes you just need to plop in the ugly solution and get er done.
Yes, one could set up a full CNN plate reader which scans the whole image for characters, such as this beauty. It would indeed be dope; however, in terms of training time this model is far from ideal, and considering our short time frame it is a high-risk strategy to rely on one single complex method to do everything for us. Furthermore, since the plates are all the same size and color, it is extremely attractive to break this into two separate systems: one that finds the license plate and another that tells us what is on it.
We had originally planned for the plate detector to be another object detection neural net. Due to time constraints we moved to an OpenCV color mask instead. It’s really nothing impressive so I’d say you can just skip this section.
Once the color mask was chosen, we passed it through an opening in OpenCV (that is, erosion and then dilation) to remove the random white pixels here and there. We then passed that through findContours and filtered the results so that only rectangular shapes with a minimum bounding height and width were included.
The problem here was that, for some reason, OpenCV also counted the road features as white things with four edges (see figure 2). This was a simple fix: filter for the purple in the image, once again find contours, and then scale the filled bounding box of the purple so that the white licence plate is always covered. Using that bounding box as another bitwise AND mask, we have the final product, which is effective and fast:
Observe that it is not perfect. Because the license plate is not included in our color mask (only the “true” white), the bounding box has been simply stretched a little bit. This works for angles that are head on but it leads to imperfection when the plate is read at an angle.
Lab 3 of this course was building a CNN to read the characters of a virtual license plate. However, since the images were perfect (no skew or color deformation) and we could generate a lot of them, it achieved 100% accuracy within the first epoch….
In the real competition, running through Gazebo, where there is skew and color deformation due to lighting and camera effects, the task is not as simple.
To generate data, we employed two methods:
Create a generator python script, which would generate plates and artificially skew them in front of collected backgrounds. This had the advantage of knowing the corner locations and text of an arbitrarily large number of images, but does it map to the real virtual world?
Collect real-camera data through the use of a bash script. Said script spawned the robot in a specific location, turned it ever so slightly, killed the Gazebo model, and then did the whole thing over again. This has the advantage of being real-world data, but is it comprehensive? Unfortunately, it also neglected to kill the xterm keyboard controller, so after running it overnight my computer looked like this:
Of course, there was also the option of manually driving around, gathering and labelling data. Sounds like a bad idea, but once you realize that you could have over 1000 images of juicy REAL data in under 3 hours, it becomes pretty appealing.
We elected to use methods 1 and 3. At the end of this process, we had over 700 simulated license plates and over 1000 real license plates. Example Data is shown below:
Both types of data were unskewed and separated into individual characters (see the python notebook below) before being fed into the neural network. Each 40×80 character image looked like this:
Keras Neural Network
This is the python notebook housing the neural network. Most of it is just piping the data to the correct form (unskewing simulated data and cropping, the result being what you saw above). But in the end we were left with this model and an accuracy of over 95% on real data:
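For a sense of scale, a character classifier in this vein might look like the following Keras sketch; the layer sizes here are representative guesses, not our exact architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers


def build_char_model(num_classes=36):
    """Small CNN over one 40x80 grayscale character crop.

    36 classes: digits 0-9 plus letters A-Z. Layer widths are
    illustrative; the real model lives in the linked notebook.
    """
    model = keras.Sequential([
        keras.Input(shape=(80, 40, 1)),       # height x width x channels
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                  # guard against memorizing plates
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```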
I realize this report is quite surface-level, so if you have any questions, please reach out! Though I truthfully don’t think there is anything too new to share here. But here is the takeaway: this is a course for engineers. The main point wasn’t creating a fantastic neural network and showing off technical prowess; it was creating something that works well in a short amount of time. This is NOT a research course. It is a project course, and therein lies the difference (in contrast to my opening statements about the art and beauty of our creation).
Also, remember that it really doesn’t matter how fantastic your model is if: 1. you have shitty data, or 2. your model is not a realistic depiction of the real world. I suppose this is the truth for any model you create, not just ML models (à la Nassim Taleb). I’ve talked to students who have run over 200 epochs on their model and generated 5-10x the images I have, for perhaps only a marginal increase in accuracy. That, I would say, is the power of having real data. You just can’t beat it.
These are some of the cool things I have found while doing research, hopefully they will aid in your next computer vision project:
There is one qualifying feature which makes humans capable of music, conversation, and sales: play.
For a machine to be equally skilled at play (based on a human’s perception), the machine would have to be able to pass the Turing test. It must be able to introduce expectations and break them. To do this, the computer must have an intuition for how a human will react to its output, and it must be able to behave deceptively (though not maliciously so).
The best examples of this need come from comedy, narrative prose, and music. For how can one tell a good joke without a hidden motive? Explaining a joke kills it. For an AI joke to be funny, the AI must be good at deception.
I’m not saying that this is impossible. I am, however, suggesting that there are four meta-skills that must be learned for good comedy to occur, and that the acquisition of these skills would mean that a machine is Turing capable:
Language skills equal to or greater than that of a human’s.
Deception skills equal to or greater than that of a human’s.
Emotional intelligence and assessment equal to or greater than that of a human’s.
A desire to do and create things, just because [the reward function tells me to].
The above four skills are clearly required for good comedy and narrative prose, but they are also required for musical competency if one chooses to map language skills from English to our 12-tone musical scale. And, if a machine can do all of this, how would it be distinguished by humans?
Sure, AI can already create music that sounds nice. But nice is a huge distance from awe-inspiring and mind blowing. Our current methods can produce pleasant elevator music, but they will never produce the next Jimi Hendrix. Consequently, I think it will be a while until an auto-generated song makes me say “damn”.