My DeepLens Challenge project is called ‘Nautilus Face Tracker’. The goal of my project is to integrate the DeepLens Camera with Alexa, using the AWS Cloud as the backbone and brains of the system, so that Alexa can recognize faces as well as learn to recognize new faces she has never seen before. My project turned out to be more of an AWS service-integration problem than a Machine Learning exercise, but I did learn a lot about the DeepLens platform and definitely have my interest piqued in learning more about Machine Learning and Artificial Intelligence. In fact, my girlfriend also has a new interest in the subject, presumably just from listening to me work on this project. She even started taking a chatbot course on Coursera last week as a result. These are fascinating times to work in the Information Technology field (and I shan’t be outdone)!
Here’s the architectural diagram I made to describe and explain, at a high level, how I put my project together. The source code is still in a private repository while the competition takes place. The Hackathon ends on Valentine’s Day (2/14/2018).
I put this whole thing together using some fairly simple Python and Node.js scripts and a few Lambda functions. In retrospect, the AWS services and programming APIs are very easy and powerful to use.
Here’s the video I made demonstrating my project. A video demonstration of working code was a requirement for this project.
So What’s Next?
I definitely want to learn more about Machine Learning. I found some good learning resources as starting points, and Coursera has some good Deep Learning classes as well:
I’ve got one week left to complete my AWS DeepLens Hackathon Project. I wrote about my decision to participate in the AWS DeepLens Hackathon here. It’s been an enjoyable learning experience so far. I’ve seen a few projects doing some amazing Machine Learning things with real-time image recognition using things like Tensorflow. I can barely spell Tensorflow. Tonight, I was pretty stoked just to get my Echo Spot to recognize my face by using the vision and face recognition capabilities of the DeepLens.
I just won this Echo Spot at my company Christmas Party a few weeks ago…
Now to apply some finishing touches and to submit my project. Tensorflow or not…yolo.
It was the best of times, it was the worst of times. Day 10 (for me) of the DeepLens Challenge (I first blogged about this here). I have made some progress and am now able to match face images, retrieved from the DeepLens Camera, against a face image gallery I built using AWS S3, Lambda, DynamoDB and the Rekognition Service (I used this blog post to get things set up). Using the Rekognition Service was actually pretty straightforward and easy, especially since there is a clear blog post outlining how to get started. Unfortunately, working with the DeepLens Camera is not so easy at times.
Downloading Projects from the AWS Console to the DeepLens sometimes gets hung up. I found that running
sudo systemctl restart greengrassd.service
on the Camera usually kicks it into gear and allows the Project to download. But the build-and-deploy process is time-consuming and fraught with missteps.
Your Project version can only go up to 9 for some reason, so I was deleting my Project whenever the version hit 9. However, I ran into a bug last night where the DeepLens Camera would get Deregistered whenever an associated Project was deleted. That meant resetting the device to put the on-board wifi in the right state so the device could be Registered with Amazon again. Arrrggh! No more deleting Projects until this is over!
My DeepLens was automatically updating itself, putting my Camera in a bad state, as the AWS Camera software was apparently incompatible with the Linux updates I was receiving. I finally figured out how to turn off the automatic updates (done when Registering the DeepLens with AWS) and followed steps to lock in Linux kernel 4.10.17+.
This is a cool little song from the immensely talented Martin Garrix. I first heard this song at AWS re:Invent in 2016. The depth of the bass and sharpness of the sounds blew me away, not to mention the psychedelic jellyfish visuals.
This is Day 5 (for me) of the DeepLens Challenge, which I talked about starting in my post here. I have to submit my project by February 12th or 13th. I’m making progress toward my project goal, which right now is simply to recognize a face from a live video feed against an image cache, using the stock face detection model on the DeepLens device. Face and image recognition is pretty commonplace today, I guess, but I’m stoked to get something similar working myself. I’d also love to integrate Alexa into the mix somehow as well, but I need to start making bigger strides with less messing about with the fiddly things!
Coding Challenges And Solutions
Some of the challenges I’ve faced, and (mostly) overcome, so far include:
Cropping a detected face out of the DeepLens video feed in the Lambda Python script. Turns out this is very simple, but it took me a while to figure out.
How to convert the cropped face image to a jpg and write it to disk. Also very simple in retrospect, but I’m a moron.
I thought it would be easy to write the resulting face jpg to AWS S3 from the DeepLens edge device, but this one I just could not figure out due to permission issues. I can write to S3 using the aws cli as the aws_cam user, but so far I’ve not been able to extend those same permissions to the ggc_user account, which seems to be what runs the awscam software. I even hard-coded credentials when creating my S3 client in the lambda code, but still had permission problems. I had to back off from hacking on the device, however, out of fear of really screwing something up. In retrospect, it’s best to stay off the DeepLens as much as possible.
The only way I’ve been able to get a face image off the DeepLens and into the cloud so far is by converting it to a base64 String, putting it into a JSON object, and publishing it on the IoT Topic. I worry that all this data transfer is going to cost me an arm and a leg by the end of this thing…
When creating a lambda function to read from the IoT Topic, I kept getting a random error when trying to save it, which made no sense as I was following an AWS Blog Post for how to do the same. Then I found this: https://forums.aws.amazon.com/thread.jspa?messageID=825417&tstart=0. And this is what makes hackathons using new technology so fun! Writing software is really just lots of Google Searches.
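For anyone following along, the face-cropping and jpg-writing items above amount to just a few lines of NumPy slicing plus one OpenCV call. This is a sketch rather than my exact Lambda code, and it assumes the detection model reports normalized (xmin, ymin, xmax, ymax) boxes:

```python
import cv2  # OpenCV, which ships on the DeepLens

def crop_face(frame, box):
    """Crop a detected face out of a video frame.

    `frame` is a NumPy BGR array (the frame from awscam.getLastFrame());
    `box` holds normalized coordinates in [0, 1], an assumption here.
    """
    h, w = frame.shape[:2]
    x1, y1 = int(box["xmin"] * w), int(box["ymin"] * h)
    x2, y2 = int(box["xmax"] * w), int(box["ymax"] * h)
    return frame[y1:y2, x1:x2]

def save_face_jpg(face, path="/tmp/face.jpg"):
    """Encode the crop as a JPEG and write it to disk."""
    cv2.imwrite(path, face)
```

The trick that took me a while is that the frame is just an array, so the crop is pure slicing; the jpg conversion is one `imwrite` call.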
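And the base64-over-IoT workaround looks roughly like this. The topic name and JSON field names below are my own placeholders, and the publish itself goes through the Greengrass SDK on the device:

```python
import base64
import json

def build_face_payload(jpeg_bytes, device_id="deeplens-nautilus"):
    """Wrap a JPEG face crop as base64 inside a JSON message for the IoT Topic."""
    return json.dumps({
        "device": device_id,
        "image_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
    })

# On the DeepLens itself, the message is published roughly like so:
#   import greengrasssdk
#   iot = greengrasssdk.client("iot-data")
#   iot.publish(topic="nautilus/faces", payload=build_face_payload(jpeg_bytes))
```

Base64 inflates the payload by about a third, which is part of why I worry about the data-transfer bill.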
And speaking of the Internet of Things (IoT), to date I’d thought this was just another marketing buzzword that wasn’t going to pan out, so to speak. I used to think the same about ‘cloud’ (and still think this about Bitcoin and its ilk). But this DeepLens development challenge is giving me a greater appreciation for IoT and edge computing. In fact, we’ve been talking about the proliferation of internet-connected things and the resulting possibilities since Java Jini, and probably before that, but I suspect Python will be its great enabler instead of Java at this point. But I digress…
Baby Steps, But Machine Learning Learning Nowhere In Sight
So as of today, I am able to leverage the stock face detection model to detect and crop a face out of a live video feed from DeepLens, send it up to the AWS Cloud via the Lambda IoT Topic Listener, and put it into an S3 Bucket. The next step is to figure out how to use the AWS Rekognition service to recognize face images in an image cache.
The Flow Zone
I’ve found listening to music particularly distracting these last few days. However, I find this Horn Solo in Tchaikovsky’s 5th Symphony really soothing and not distracting (but too short). I played this solo in Solo and Ensemble in High School. I’ve been told that French horn players are better kissers…
So far, I’ve learned some painful, hard-fought lessons in the last two days. I was initially able to register my DeepLens device with the AWS Cloud, no problem. The first hiccup I encountered was when I tried to push one of the pre-made models down to the device. They simply would not go, and there are no logs to look at, as that might be too helpful. So, thinking like a DeepLens device myself, I reasoned I had probably screwed up the IAM roles when I tried to register the device (later I learned my assumption was spot-on). To correct the model push problem, I Deregistered the device, hoping I could simply go through the Registration process again, this time making sure my IAM Roles were properly configured. And wouldn’t you know, the dang wifi on the device stopped working, preventing me from logging in to the device to re-register it with the cloud.
The way the DeepLens currently works is that you can only configure it (and upload the certificates it needs to identify itself with your AWS Account) by using its on-board wifi and pointing your web browser (on another computer) to http://192.168.0.1. I still can’t get over how odd this is – not sure what Amazon was thinking with this 🙂 . I think it’s odd because my first inclination is to treat the DeepLens like a first-class computer, meaning I have my keyboard, mouse and monitor connected to it. Why would I need to configure it from another computer over wifi? OMG so funny!!
Whither Went My DeepLens Wifi
So the wifi simply would not come on again, as life’s ironies often dictate. So my girlfriend and I went out to Best Buy in 20 degree weather (I bet your girlfriend wouldn’t do that) to buy a USB Hub and a USB-to-Ethernet connector, the idea being that if I could get the device online over ethernet, maybe I could configure this thing that way. Using a hard-wired ethernet connection, my DeepLens was back online, but now with an IP Address of 192.168.1.13. The instructions say to connect to your device console using http://192.168.0.1. Being the contrarian that I am, I tried to connect using http://192.168.1.13 – yeah, no dice. In fact, I could not even find anything running on port 80 of the device at this point. What had I done?!?
AWS guys, I’d totally put an ethernet port in the back of this device.
After poking around a bit, I found the awscam software in /opt/awscam. It looks to me like the DeepLens console is just a nodejs app that is served by some python scripts in the daemon-scripts directory. And wouldn’t you know, those scripts are hard-coded to bind the nodejs app to the wifi interface at 192.168.0.1. I’m dying here. OK, so I either have to figure out how to modify the daemon python scripts to use the eth0 device and bind to 192.168.1.13, or I have to get the on-board wifi working again.
Luckily, I saw a mention on the AWS forum about a possible Linux Kernel incompatibility with the DeepLens wifi hardware, so I decided to try the path of getting the wifi hardware working again by reverting to an older Linux Kernel, if one even existed – I didn’t know at this point. The following video got me over the hardest piece of solving how to boot an older Ubuntu Kernel:
The GRUB Loader menu does not display upon reboot on the DeepLens by default, so my first step was to get the GRUB Menu to show:
Edit /etc/default/grub and comment out the GRUB_HIDDEN_TIMEOUT line
Run sudo update-grub (or run update-grub as root)
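Concretely, the edit looks like this. I’m demonstrating against a scratch copy so you can see the effect before touching the real file (the DeepLens ships Ubuntu, where /etc/default/grub holds these settings; the sample contents below are illustrative):

```shell
# Sample /etc/default/grub fragment; GRUB_HIDDEN_TIMEOUT is what suppresses
# the boot menu. Work on a scratch copy first.
printf 'GRUB_DEFAULT=0\nGRUB_HIDDEN_TIMEOUT=0\nGRUB_TIMEOUT=10\n' > /tmp/grub.example

# Comment out GRUB_HIDDEN_TIMEOUT so the menu appears at boot:
sed -i 's/^GRUB_HIDDEN_TIMEOUT/#GRUB_HIDDEN_TIMEOUT/' /tmp/grub.example
cat /tmp/grub.example

# On the DeepLens, make the same edit to /etc/default/grub itself, then:
#   sudo update-grub && sudo reboot
```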
Once the device reboots you will finally see the GRUB menu – fantastic! Select Advanced options, then select the 4.10.17+ kernel. Once rebooted, the on-board wifi should be working again and the little blinky middle light should be happy again. Now you should be back on track to register your device per the AWS instructions. And if you ever need the happy blinking middle wifi light again, the setup pin hole in the back of the camera should work as long as you are running the correct Linux kernel.
I’m not positive the kernel is the problem, but I am positive these steps worked for me. And how did I get kernel 4.13.0-26-generic installed in the first place? I’m not even sure; I did try to update my device, and maybe that was the start of the problem.
Anyway, I am now able to download the pre-built Face-Detection Project to my device, as seen here:
At this rate, it’s doubtful I’ll get anything built by the hackathon deadline, but it’s kind of fun messing with the hardware.
This Armin Van Buuren Ibiza set is so tight. Love it, especially around minute 40!
I know next to nothing about Machine Learning. Shoot, I don’t even have a C.S. Degree. But damn the torpedoes, full speed ahead: I’ve committed to completing a project for the AWS DeepLens Hackathon, currently slated to conclude on February 14, 2018. Technology is advancing at a breakneck pace and this is one way to try to keep up. Plus, I’ve heard that the first Trillionaires will be minted from the A.I. Industry, so show me the money! I thought it would be cool to blog some of my experience using the DeepLens technology during this hackathon (and I’m actually writing this blog on my DeepLens device).
Actually, on a side note, I’m a little worried about where technology is going these days, especially with such a strong emphasis on A.I. and autonomous machines, all driven by profit and power motives rather than REAL problems. If we’re not innovating, we’re dying, right? But as the old saying goes (often attributed to Sun Tzu), keep your friends close but your (potential) enemies closer.
Confronting the Beast
I went to AWS re:Invent last November in Vegas and somehow managed to get into one of the last DeepLens Workshops of the Conference. Competition for these workshops was, shall we say, fierce! Attending meant I had to miss the re:Play party, but at that point I didn’t really care since the D.J. was not Van Buuren, Garrix or AfroJack. By attending the DeepLens Workshop, I was able to take a free DeepLens computer home, and even received a voucher for $25 worth of AWS Credits to get started.
I was really psyched to get started on the hackathon upon returning home, but I already had a project in progress I had to complete first. Fortunately, I finally completed my Android App (my second Android App ever) and got it released in the Google Play Store (CANDLES Tracker) on 1/11, so I finally have my evenings ‘free’ to devote to this hackathon.
So yesterday, 1/12, I cracked open my DeepLens box and unpacked the device. I realized I needed a new keyboard, mouse and HDMI cable with a micro-HDMI male end. So last night I ordered these things, off of Amazon…of course, and received them in a Prime Shipment by the time I got home from work this evening.
Tonight, 1/13, my DeepLens is all connected and registered with my AWS Account. I imagine it’s not going to take me long to burn through the $25 AWS Credits when I start uploading data to train my models, which I will hopefully get a better feel for this weekend.
Once the camera boots-up, you can log into the OS using the password ‘aws_cam’, which is the same as the username. You can connect to wifi and use Firefox to get on the internet and access your AWS Account from there. Strangely though, the instructions say to connect to the DeepLens Wifi endpoint from another computer and configure it using a browser pointing to http://192.168.0.1. I found this strange as I was already logged into the device, but could not get Firefox to connect to http://localhost to connect to the configuration portal from within the device. But it’s all working now by simply not thinking and following the instructions.
My next step: start deploying some of the Amazon pre-built models to get a better feel for the deploy process and the integration possibilities outside of the device.
I’ve recently gotten hooked on this Carl Cox Ibiza set, which is a nice groove for hacking to, I’ve found:
Early to bed, early to rise, work like hell and advertise.
I just finished reading Ted Turner’s autobiography, ‘Call Me Ted’. This truly was a page turner…get it? Seriously, I very much enjoyed his book and feel like I learned a great deal about not only Ted’s life and how he thinks, but what it must be like to be a businessman operating at his level.
Ted has led a fascinating life – from winning America’s Cup, to winning the 1979 Fastnet Race, to owning the Atlanta Braves, to taking the Atlanta Braves to the World Series, to starting CNN and the Goodwill Games, to living the life of a Billionaire. He got his professional start working in his Dad’s Billboard Advertising business, which he eventually took over and proceeded to grow. He eventually moved into television and started Turner Broadcasting. It was his business pivot into television that allowed him to create such innovations as colorized versions of many famous black-and-white films, like ‘Gone With the Wind’, the 24/7 news channel CNN, and my personal favorite, the Cartoon Network (I used to love ‘Johnny Bravo’ and ‘Dexter’s Laboratory’).
Reading about Ted Turner’s life is an excellent study in how to increase the scope of your thinking, your problem-solving skills, and your level of persistence.
Here are some themes that impacted me from Ted’s Book:
Discipline. Knowledge of Military History, Discipline and Bearing can help in outmaneuvering and executing business competitors. Ted was a graduate of McCallie in Tennessee, which used to be a military prep school at the time he attended. All of Ted’s sons attended The Citadel, The Military College of South Carolina (my alma mater) because of the importance Ted put on military schooling.
Leadership. Sailing provided Ted a laboratory to practice Leadership, Team Building, and competitive tactics. The leadership lessons Ted learned from skippering an ocean-going vessel were applied to his business. I’m sure the networks he built around sailing helped in business as well.
Problem Solving Acumen. One example of the importance of big problem solving came with the disappearance of the RCA SATCOM III satellite after launch, which was to carry the transponder for the new CNN news channel. Problem-solving skills, at scale, help make the impossible possible, which is a necessity for innovators seeking to disrupt the status quo.
The Power of Good Debt. Reading the book, it seemed like Ted’s ventures were in debt most of the time, waiting for profitability. To float a venture until profitability or until new investment money was secured, Ted would often sell previously acquired assets from the billboard business. By selling assets in the billboard business to fund new ventures in the television and cable space, Ted parlayed his fortune into an even greater one.
Have Powerful Friends. Friends like John Malone helped Ted deal with difficulties encountered after buying MGM from Kirk Kerkorian and during the Turner Broadcasting merger with Time Warner.
Love of the Environment. “Why, land’s the only thing in the world worth working for, worth fighting for, worth dying for, because it’s the only thing that lasts.” That is a quote from ‘Gone With the Wind’ that Ted references to underscore his feelings about how important the environment is to him, and should be to everyone.
In 2017, the world’s billionaire club grew by a healthy 13% to roughly 2,043 people. In my home state of Virginia, according to Forbes, there are just 5 such billionaires. Since there are roughly 8.3 million people living in Virginia, this gives me about a 5 in 8.3 million chance of being counted as a billionaire. As remote as that possibility is, it’s still better than the 1 in 258,890,850 chance I have of buying a winning Mega Millions lottery ticket, and the pot for that currently stands at a paltry $145 million.
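A quick sanity check of that comparison, using nothing but the figures above:

```python
# Odds of being one of Virginia's 5 billionaires vs. winning Mega Millions.
billionaire_odds = 5 / 8_300_000       # roughly 1 in 1.66 million
lottery_odds = 1 / 258_890_850
print(round(billionaire_odds / lottery_odds))  # about 156x better than the lottery
```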
Yes, the odds of becoming a millionaire or a billionaire are annoyingly remote, even for the average ‘free’ American. Nevertheless, my goal is to be financially free and in command of my own financial future at some point before I die. The benefits of financial freedom are obvious. We all dream about it. But the execution of the dream seems to vary greatly, while many have given up on it altogether.
Here’s a rough plan to Financial Freedom I have begun to formulate for myself. This particular plan, as opposed to the ‘win the lottery’ plan, seems easiest and most practical for me to execute, even though it is far from ideal:
According to my current calculations, my closest shot at financial freedom will be in 19 years when I turn 67. That kind of sucks, but let me sketch it out anyway. I have three kids to get through college over the next 8 years and I have a house mortgage to pay off. I have accelerated my mortgage payoff through additional principal payments of $1000, which should allow me to pay it off in the next 10-13 years (if I can be consistent with the plan). Helping my kids get through college will induce some financial headwinds over the next 8 years. As a divorcee, my alimony payments will end in roughly six months, which will free up additional capital to help with college expenses. Once Child Support, Alimony, College and Mortgage expenses have all been paid, I estimate my monthly expenses to be somewhat akin to the following:
Water Utilities: $50
Electricity Utilities: $67
Gas Utilities: $42
Health Insurance: $800
Car Insurance: $60
Life Insurance: $45
House Taxes: $410
Car Taxes: $25
Annual Car Maintenance: $167
Total estimated monthly expenses should be around $2,536, or $30,432 per year. My current estimated monthly Social Security benefit at age 67 is $2,893, which should just cover these estimated living expenses. Additionally, if I can manage to save $1,500 per month for the next 228 months (taking me to age 67), the nut accrued could provide an additional $950 per month for the next, and probably last, 30 years of my life. So from age 67 to 97, I should have about $3,843 per month to cover living expenses, at least until I succumb to assisted living (at which point I become my kids’ problem lol)!
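The back-of-the-envelope math above, spelled out (no investment growth assumed, which admittedly understates the nut):

```python
monthly_saving = 1_500
months_until_67 = 228
nut = monthly_saving * months_until_67        # 342,000 saved by age 67
drawdown_months = 30 * 12                     # age 67 through 97
monthly_drawdown = nut / drawdown_months      # 950.0 per month
social_security = 2_893
print(nut, monthly_drawdown, social_security + monthly_drawdown)
# 342000 950.0 3843.0
```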
In one of my blog posts last month, I wrote a little bit about a garden we were planting in our back yard. A month later, this garden has already started producing fruit…or vegetables, as the case might be. It’s a great feeling to grow many of the vegetables that we eat on a daily basis, right in our own back yard.
“Ideas are welcomed, but execution is worshipped.”
My girlfriend and I attended Freedom Fast Lane Live in Austin, Texas in 2015 (I wrote about this experience here) and we were fortunate to hear Jeff Hoffman speak. We learned that Mr. Hoffman went to Yale University and graduated with a degree in Computer Science. He almost got kicked out of school on his first day because he could not pay his tuition bill, so he started a software company in his dorm to pay his way through college.
He learned early on that entrepreneurship is a tool that could allow him to solve problems in not only his own life, but the lives of others as well.
“Entrepreneurship is the shovel you use to dig a path to a brighter future.”
Mr. Hoffman dreamed of doing bigger things, and it did not take him long to start launching his own companies. As a result of his entrepreneurship, he became a multi-millionaire in his early twenties. By the time we heard him speak in Austin, he had taken two companies public, two companies he had started had failed, and two were still going strong. Perhaps the company he is best known for is priceline.com.
“The whole point of being a business owner is to design the life you always dreamed you were going to have.”
Here is some great advice I’ve noted from some of his talks:
Solve Real Problems: Entrepreneurship is about solving real problems. Mr. Hoffman created software that allowed people to book travel online as opposed to over the phone with a travel agent (this later became Expedia). He was once annoyed that only one person who worked for the airlines could print boarding passes. He created and patented kiosk technology that allowed anyone to print their own boarding passes.
FOCUS: Follow One Course Until Success. Win a Gold Medal at ONE thing. Then move on to your next best idea or thing. And winning a Gold Medal is *REALLY* hard.
Harness The Power of Wonder/Curiosity: Stay curious about things around you. Never stop asking ‘why?’. Answers to your curiosities can lead to innovations, newer and better ideas, and solutions to problems. Never be satisfied with the status quo.
Info-Sponging: Spend time each day reading about things and look for things that interest you outside of your industry. Write these things down. Try to connect the dots between interests over time.
Filter Data Through the Eyes of Your Customers: If your executive team is not a representative cross-section of your customers, take their data analysis and decision making with a grain of salt. Get real feedback from real customers. Sam Walton used to put on a John Deere hat and go to a diner to buy apple pie for people who were representative Walmart customers, so he could learn more about their buying patterns.
Dream Big and Make it Happen: Home in on your dream. Print out a picture of it. Make it the reason behind the things you do.