Thursday, November 26, 2020

Charity Event

Greetings,

I'm going to be playing Super Metroid for a charity event coming up on December 5th, 2020. My run starts around 8:15 AM EST, and I expect the gameplay to take about an hour and forty-five minutes at most.

If you'd like to watch, you can find more details at www.dogpoundexpo.com, including a schedule of events and a list of runners.

You can find out more details about the charity here: https://www.dogpoundexpo.com/charity



You can view the official twitch channel for the event here: https://www.twitch.tv/meddadog

I will be matching donations made during my run, up to $250. So if you want to see me parted from $250, join in and donate!

If you want to follow me, I'm on https://www.youtube.com/channel/UCfkc2ygXwub-1pUj_Nq6hPg/ and twitch.tv/TysonRuns.

Thanks, 

Tyson

Friday, December 21, 2018

Deep Learning For Security Cameras Part 1

This is part 1 of a series of posts on my experience building an object detection setup for my home security cameras. Here I'm going over some preliminary results and the history to this point. In future posts I plan to graph my results and take a closer look at what I have.

Deep Learning is all the rage these days. I get it; the idea of letting a computer extract features and find things is amazing. I love it. In fact, while working on my Masters at Georgia Tech, I took every Machine Learning class I could get my hands on, plus the Reinforcement Learning course that was also offered. They were AWESOME, and they really opened my eyes to how a lot of this stuff works.

A few months back I started with darknet and the tiny YOLO model. I set up an RTSP server in Python on my Raspberry Pi to pull images and wanted to see how it did. I experimented on a few images, but to my surprise (although, in retrospect, unsurprisingly) it failed... pretty badly. On the first attempt it actually saw the car on the side of my yard, which was utterly amazing. I ran it again mere minutes later and, after multiple attempts, it never saw the car again. Of course this was all running quite slowly, on the order of 60 seconds per image if I recall. I decided that as long as zoneminder detected some kind of "motion", I could have it process on that. I was less concerned about realtime than about simply being notified at some point if something odd was going on.

I decided that over the holiday break I would work on this detector more. I started downloading all my camera data (which is really not all that old), and it turned out I had about 65GB of image data from motion captures using zoneminder. Sharing that much data with anyone who wants to work on this is difficult, so I am paring it down; it is at least broken down by camera, so I should be able to divvy it up that way.

As a first pass I decided to see whether plain image recognition could detect anything at all in these images. I got everything set up using https://www.tensorflow.org/tutorials/images/image_recognition and modified it to run through an entire folder. It spits out three files. The first is a mapping of image -> human-readable string, score, and index; the index is used for looking up the human string in the mapping. The second is a listing of each index with all the images that had that index in their top 5 (regardless of score). Finally, I output the index-to-human-string mapping itself, so I can easily look things up. The files are named image_filtering_analysis, image_filter_mapping, and image_filtering_by_class. I opened up image_filtering_by_class and looked up the first thing I saw, which was 160. This translated to "wire-haired fox terrier". The camera was a view of my driveway, and I thought... well, I mean, I suppose that might be possible.
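The bookkeeping part of that script looks roughly like this. This is a minimal sketch: classify_image is a stand-in for the TensorFlow tutorial classifier, and the .json extension is my choice here; the three file names are the ones from above.

```python
import json
import os

# Stand-in for the TensorFlow tutorial classifier; the real script runs the
# Inception graph and returns the top-5 (index, human_string, score) tuples.
def classify_image(path):
    return [(160, "wire-haired fox terrier", 0.016631886)]

def run_folder(folder, out_dir="."):
    per_image = {}       # image -> list of (index, label, score)
    by_class = {}        # index -> every image with that index in its top 5
    index_to_label = {}  # index -> human-readable string
    for name in sorted(os.listdir(folder)):
        results = classify_image(os.path.join(folder, name))
        per_image[name] = results
        for index, label, score in results:
            by_class.setdefault(index, []).append(name)  # score ignored here
            index_to_label[index] = label
    # Write the three output files described above.
    for fname, data in [("image_filtering_analysis", per_image),
                        ("image_filtering_by_class", by_class),
                        ("image_filter_mapping", index_to_label)]:
        with open(os.path.join(out_dir, fname + ".json"), "w") as f:
            json.dump(data, f, indent=2)
    return per_image, by_class, index_to_label
```

With the real classifier dropped in, pointing run_folder at a camera's folder produces the same three lookup files.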

wire-haired fox terrier

Ok, so the first one is not very awesome. Though to be fair, I looked up the confidence score and it only got 0.016631886. Interestingly, these were the top 5 scores:

  • submarine, pigboat, sub, U-boat, 0.27390754
  • patio, terrace, 0.064846955
  • steam locomotive, 0.040173832
  • fountain, 0.023076175
  • wire-haired fox terrier, 0.016631886

I looked up what some of the labels were in the dataset they used. I didn't see any plain "car" label, which is just odd to me. I will likely have to re-train on my own data, but for now I need to figure out a way to find common features.

So of course one improvement would be to require the score to reach some threshold; suppose we set it to 50%, which would prevent that particular failure. Searching through my data (which, by the way, hasn't finished running), it found another version of the same image above but called it a patio, terrace with over 90% confidence. In fact, the image with the highest confidence overall was also labeled patio, terrace. Here it is.
patio,terrace
I can only speculate that the poorly pixelated blob on the left is, well, me. Looking at it closely, it sort of looks like a body; it's in the images for a few frames, and there appear to be arms. I am pretty sure I was bringing a package in that day.
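The thresholding idea above is simple enough to sketch. The tuples mirror the top-5 output shown earlier; 0.5 is the 50% cutoff discussed.

```python
# Keep only predictions whose confidence clears a cutoff (50% here).
def filter_by_confidence(predictions, threshold=0.5):
    return [(label, score) for label, score in predictions
            if score >= threshold]

top5 = [
    ("submarine, pigboat, sub, U-boat", 0.27390754),
    ("patio, terrace", 0.064846955),
    ("steam locomotive", 0.040173832),
    ("fountain", 0.023076175),
    ("wire-haired fox terrier", 0.016631886),
]
print(filter_by_confidence(top5))  # -> [] : nothing here clears 50%
```

Of course that also means the 90%-confidence "patio, terrace" misfire would still sail through, so a threshold alone doesn't save me.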

I wonder if someone (maybe me ;)) could build a form of feature detector much like those used in classical computer vision, but generic instead of trained to a specific task. This might enable a way to group common features across images and make creating a supervised dataset easier.

I'll post more interesting ones as I come across them. At this point most of my recognitions are of nighttime captures; the run will eventually hit daytime, which I think will produce even more humorous results.

For now, suffice it to say: this detection is not doing well. I wanted to run this in the hopes that it would find cars or other things for building up a supervised set, but I don't think that's going to happen.

Here are some plots of which labels were applied to the images (of course none of these make any reference to confidence, but it's interesting).

Tuesday, November 15, 2016

Busy, Busy, Busy

I've been a busy boy while going back for my masters. I'm currently taking 2 classes and have been posting quite a few videos in a series called 60 Seconds to Success in OMSCS. Anyhow, if you're in the program and you want some helpful tips, go check it out.

https://www.youtube.com/channel/UCR-mQpFEIiBWd164H4-yhgw

I'll try to come back once in a while and post some fun stuff.

Lately I've been doing Hough Transforms, disparity maps, solving linear algebra problems for stereo imaging, and calculating optical flow on sets of images. It's been pretty wild, and I'm getting ready for the end (2 more assignments left), but I can't wait to get set up using lightshowpi for my Christmas lights.

Saturday, March 12, 2016

Pickling for easier testing

Today I want to talk about pickling. This approach lets you save off a variable or data structure for re-use later.

Why would you want to do this? Well, suppose you're like me: you're working on an AI agent, you have a LOT of problems to run through, each one takes a little bit of time, and 50 of them are already solved by your agent, but that 51st isn't. Instead of running all 50 over and over (unless your agent is learning things, in which case, you're good), you can pickle the passed-in problem and then make a simple "test" script that calls only the one you want, so it's way easier to test out individual parts.

For more general information about pickling take a look here: https://wiki.python.org/moin/UsingPickle

Let's get started.

Import pickle by adding the following line to the top of your Agent.py file:
import pickle

(I'm going to be using some verbiage from my AI course, but the basic idea is this: the agent we have starts with a normal class name and __init__, and must implement a Solve function.)

Make the start of your Solve function look like this:

   def Solve(self, problem):
      # Save the incoming problem so it can be replayed later from a test script
      with open('./pickles/' + problem.name + ".p", "wb") as f:
         pickle.dump(problem, f)

Next, make a new directory called pickles so pickle has somewhere to save to.
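If you'd rather create the directory from code instead of by hand, one line does it (exist_ok needs Python 3.2+):

```python
import os

# Create ./pickles if it doesn't already exist; safe to run more than once.
os.makedirs("./pickles", exist_ok=True)
```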

Finally, run your agent like you normally would.

You should now see a bunch of .p files in your pickles directory.

Next, let's set up a "test" script that lets you run a SINGLE test (I know folks on our forums have asked about this, and well... here it is!).

Create a new file called test.py

Inside it you would add the following

import pickle
from Agent import Agent

A = Agent()
# Load a previously pickled problem and hand it straight to the agent.
with open("./pickles/Basic Problem E-09.p", "rb") as f:
    problem = pickle.load(f)
A.Solve(problem)

Replacing the .p filename with whatever you choose.

You can make this more "generic" if you'd like by accepting command line input, so that you run it like test.py "Basic Problem E-09" and it runs whichever problem you name. Pretty slick, right?

That would look like this:

import pickle
from Agent import Agent
from sys import argv

A = Agent()
script, name = argv  # argv[0] is this script, argv[1] is the problem name
with open("./pickles/" + name + ".p", "rb") as f:
    problem = pickle.load(f)
A.Solve(problem)

and is run with this command:

python test.py "Basic Problem E-09"


The final test.py can be found here:
https://gist.github.com/onaclov2000/d1d7fc01b22b98e0098e



Wednesday, March 2, 2016

Thinking outside the box

I'm currently enrolled in the degree program Georgia Tech offers through Udacity, called OMSCS. I'm taking Knowledge-Based Artificial Intelligence: Cognitive Systems.

The primary project we are working on is Raven's Progressive Matrices.

I have tons of ideas, and approaches. I'll talk about one that I'm experimenting with (and have no idea if it'll work).

One thought that crossed my mind was: what if I could think of these problems as a time relationship? Could I apply a Fast Fourier Transform (FFT)? Well, I'm giving it a shot.

First I took a line-by-line reading of the image, then laid the rows end to end. So, in a way, you have one long, crazy waveform.

Next I used numpy to convert to the frequency domain using numpy.fft.fft(array).

I'm not much further than this, but I did try dividing A by B and A by C, graphing those ratios, and inverse-FFT'ing them.
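In code the experiment looks roughly like this. Small random arrays stand in for the actual figure images here, and the tiny epsilon in the division is my own guard against near-zero frequency bins; otherwise it's just the flatten-FFT-divide idea described above.

```python
import numpy as np

# Tiny synthetic stand-ins for figures A and B; in the real experiment
# these are the Raven's matrix images loaded from disk.
img_a = np.random.RandomState(0).rand(8, 8)
img_b = np.random.RandomState(1).rand(8, 8)

# Read each image line by line and lay the rows end to end,
# giving one long 1-D "waveform" per image.
wave_a = img_a.ravel()
wave_b = img_b.ravel()

# Transform each waveform to the frequency domain.
fa = np.fft.fft(wave_a)
fb = np.fft.fft(wave_b)

# Ratio of the spectra (A/B), then inverse-FFT back to get something
# graphable; the epsilon avoids dividing by near-zero bins.
ratio = fa / (fb + 1e-12)
back = np.fft.ifft(ratio)
```

Plotting np.abs(ratio) or back.real at this point gives the kinds of graphs shown below.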

I thought the graphs looked really neat. So I am going to leave a few here for your enjoyment.

Which, by the way, I should note: I have no idea what I'm doing here. It's experimentation, and who knows if these graphs are even logical, but they're cool looking.

A/C
A/B


Original Images
C
A
B
Have a good one, I'll probably post more about this in the future.