Ohio was hammered with snow last night, leaving me with a snow day today. After spending an hour shoveling myself out of the driveway (which I expect I’ll have to do again to get back in), I headed to Starbucks, grabbed a coffee, and decided to look into the current state of iOS app development.
I’ve dabbled in Mac/iOS development in the past, but I’ve never made the time to build anything significant or useful. That’s probably related to my opinion that writing Objective-C code wasn’t really fun. I get a thrill out of writing clean Python code that works well, whereas Objective-C always felt a little too raw and nit-picky. I frankly don’t want to have to worry about memory management, and the “NS” prefix everywhere just seemed like unnecessary typing and clutter.
I had heard of Swift, but didn’t really know what it involved or how it differed from Objective-C development. All I knew was that it was an iOS 8/Yosemite-focused language, and that it was Apple’s new thing.
I was speaking to someone the other day about test-driven development, and he used a word that was unfamiliar to me: “kata.” I had done TDD as part of my software engineering class (writing unit tests first, and then writing code to pass those tests), but had never heard of kata before.
After clarifying how to spell it, I googled the term and found the page http://codekata.com/. I was distracted by the kittens at first, but then found some interesting discussion about deliberately practicing coding in an attempt to hone one’s craftsmanship.
I do view programming as a sort of craft, somewhere between an art and a science, so I found this intriguing. I decided to try my hand at one.
E has been friends with M since they were little kids. One of their fond memories involves selecting and watching the After Dark screensavers on their teacher’s computer in elementary school.
For Christmas this year, E decided she wanted to figure out a way to give M the gift of After Dark for her phone, so she could pick and watch a screensaver any time she wanted to. Since I’m a computer scientist, I assured her we could pull this off.
For fall 2014, I took Knowledge-Based AI (GT CS-7637). Frankly, before I enrolled in the class I didn’t have much of a clue what separated “knowledge-based AI” from other types of artificial intelligence, but I was fascinated by the topics mentioned in the course description (e.g., Watson).
There were two different types of assignments for the semester – one involved writing papers on different KBAI topics, and the other involved writing code to solve various forms of Raven’s Progressive Matrices.
The latter were really interesting. I’ve posted the code I developed to solve these problems on GitHub, which you can view here:
We started with very simple versions of the problems, solving them based on textual descriptions, and in the final project we had to actually analyze image data and make our best guess. This is where my experiences with OpenCV proved really beneficial.
I structured my code around identifying shapes and sizes using OpenCV, recreating the textual descriptions that we used in the earlier projects. This allowed me to reuse most of the code from project to project, with only slight tweaks to optimize for data that was difficult to get using OpenCV.
Other students approached the problem by looking at pure pixel data, but I greatly preferred my approach. KBAI is largely about modeling knowledge structures and information in a way that is similar to how humans store and process them. Pulling out information about each shape, the way a human might describe it to someone who can’t see it, seemed more along the lines of KBAI. It also had the benefit that it just made more sense to me!
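To give a flavor of what “recreating the textual descriptions” means, here is a minimal, dependency-free sketch. The real pipeline used OpenCV (contour detection and the like); this toy version instead classifies a single shape in a tiny binary image by how much of its bounding box it fills, then emits human-style attributes. The thresholds and attribute names are illustrative, not the ones from my project.

```python
def describe(image):
    """Turn a binary image (list of rows of 0/1) into a textual
    description of the single shape it contains, like the descriptions
    used in the earlier text-only projects. A toy stand-in for the
    real OpenCV-based pipeline."""
    # Bounding box of the foreground pixels.
    ys = [y for y, row in enumerate(image) if any(row)]
    xs = [x for x in range(len(image[0])) if any(row[x] for row in image)]
    if not ys:
        return {"shape": "none"}
    height = ys[-1] - ys[0] + 1
    width = xs[-1] - xs[0] + 1
    area = sum(sum(row) for row in image)
    fill = area / (width * height)

    # Crude classification by how much of the bounding box is filled:
    # a rectangle fills ~100%, a circle ~78%, a triangle ~50%.
    if fill > 0.95:
        shape = "square" if width == height else "rectangle"
    elif fill > 0.7:
        shape = "circle"
    else:
        shape = "triangle"

    size = "large" if max(width, height) > 10 else "small"
    return {"shape": shape, "size": size}

print(describe([[0, 1, 1],
                [0, 1, 1],
                [0, 0, 0]]))  # a small square
```

The point of structuring the output this way is that downstream reasoning code never sees pixels at all, only attributes a person might name, which is what let the earlier projects’ code carry over.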
I’ve been feeling an itch to do some programming lately, and stumbled across these Coffee Time Challenges while browsing reddit. It’s not really “coffee time” for me right now since I’m on summer break, but I think the idea still applies.
I decided to post my solutions to GitHub. You can of course browse them (but you should really try them on your own first!).
I had the pleasure of taking GT CS-8802: Artificial Intelligence for Robotics this past term as part of my first semester working on my Online Master of Science in Computer Science degree (that’s a mouthful). The course was taught by Sebastian Thrun, who, besides founding Udacity, also works for Google on their autonomous car team.
I worked with a team of two other engineers to investigate localization techniques using landmarks based solely on visual information. In other words, we wrote code to help a robot figure out where it was based on pictures. Rather than read about it in boring text, though, you should watch our presentation video!
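For a taste of what landmark-based localization means, here is a toy sketch: given a map of known landmark positions and measured distances to them, score candidate positions on a grid and keep the best match. This is illustrative only; our project worked from camera imagery and the course’s probabilistic techniques, not this brute-force search, and the landmarks and measurements below are made up.

```python
import math

def localize(landmarks, observed_dists, grid_size=50, cell=0.2):
    """Estimate a robot's (x, y) position from distance measurements
    to known landmarks by scoring candidate grid positions. A toy
    illustration of landmark-based localization."""
    best, best_err = None, float("inf")
    for i in range(grid_size):
        for j in range(grid_size):
            x, y = i * cell, j * cell
            # Sum of squared errors between predicted and observed distances.
            err = sum(
                (math.hypot(x - lx, y - ly) - d) ** 2
                for (lx, ly), d in zip(landmarks, observed_dists)
            )
            if err < best_err:
                best, best_err = (x, y), err
    return best

landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known map positions
true_pos = (3.0, 4.0)
observed = [math.hypot(true_pos[0] - lx, true_pos[1] - ly) for lx, ly in landmarks]
print(localize(landmarks, observed))  # an estimate near (3.0, 4.0)
```

The interesting part of the actual project was the front end, extracting landmark observations from pictures in the first place, which is exactly what the presentation video walks through.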