Monday, January 13, 2025

Home Served Apps

I have discovered there are lots of sites/tools I like to use but don't want to have to download, install, etc.

I call these "simple apps", and I think you see them everywhere. Want to shrink a PDF? Sure: either install some tools locally and figure out the syntax, or find a sketchy page and try that (in the meantime uploading your PDF to some website, with no idea what they do with it).

In comes... the AI assistant, to the rescue.

I spent the better part of my Christmas break building the simple apps I need, so I don't have to go to a public site for them, using ChatGPT as my assistant.

For example: I needed to convert a PNG to an SVG. Sure, there are online options, but I don't want to have to use them. I also don't want to have to recall the command options for each of these tools. So I made a converter.local site (on my home network), and now I can just go there, upload an image, and it'll return an SVG.
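Just to give a flavor, here's roughly what one of these pages boils down to. This is a sketch, not my actual code; it assumes Flask and Pillow are installed and the potrace CLI is available (potrace wants a bitmap format like BMP as input, hence the conversion step):

import os
import subprocess
import tempfile

from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

@app.route("/convert", methods=["POST"])
def convert():
    work = tempfile.mkdtemp()
    bmp_path = os.path.join(work, "in.bmp")
    svg_path = os.path.join(work, "out.svg")
    # potrace traces bitmaps, so turn the uploaded PNG into a 1-bit BMP first.
    Image.open(request.files["image"]).convert("1").save(bmp_path)
    # -b svg selects the SVG backend; -o names the output file.
    subprocess.run(["potrace", bmp_path, "-b", "svg", "-o", svg_path], check=True)
    return send_file(svg_path, mimetype="image/svg+xml")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

POST a PNG as the "image" field and you get the SVG back.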

Here's the list of tools/sites I've made that make life just so much easier, along with a link to something "similar" (mine usually don't look as pretty, but they just work):

  • youtube downloader site (https://yt1d.com/en16/)
  • png to svg (https://convertio.co/png-svg/)
  • shrink a pdf (https://smallpdf.com/compress-pdf, https://github.com/aklomp/shrinkpdf)
  • extract audio from video (https://biteable.com/tools/extract-audio-from-video/)
  • podcast host/feed (I don't even know what's similar here)
  • Perform K-Means on pixels (see the sketch after this list)
  • Extract all colors to their own image
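For the K-means-on-pixels tool, the core of the page is only a few lines. Here's a minimal sketch, assuming Pillow and scikit-learn are installed (the filenames are made up):

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

arr = np.asarray(Image.open("photo.png").convert("RGB"))
pixels = arr.reshape(-1, 3)            # one row per pixel
km = KMeans(n_clusters=8).fit(pixels)  # cluster the pixel colors
# Repaint every pixel with its cluster's center color.
quantized = km.cluster_centers_[km.labels_].astype(np.uint8)
Image.fromarray(quantized.reshape(arr.shape)).save("quantized.png")

The cluster centers double as an extracted color palette, which is the same machinery the "extract all colors" tool leans on.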


Tuesday, April 16, 2024

Portable Car Media Server

***

This is a work-in-progress post. It covers most of what I did, but it was written after the fact and I haven't replicated the steps to verify they work exactly as described. You should still have enough info to accomplish what I did!

***

 

(Note: I have Amazon affiliate links below, FYI.)

I decided to build a portable Jellyfin server. I wanted my kids to be able to log in and watch shows as needed while we were driving, and thus I started going down this path.

I bought a travel router from Amazon. I came across this idea on Hacker News a while back and liked it, which is where this all started.

We used it on our last road trip and it worked quite well.

Parts required (I used stuff I had around, so the links below are largely placeholders for you to try, with the exception of the travel router and power bank; those I did buy).

1 USB-A to USB-micro cable

1 USB-A to USB-C cable

1 Power bank

1 Raspberry Pi (I selected a 3B+ as it's what I had lying around)

1 SD card (if you already have a Pi up and running, you may not need this)

1 GL-MT3000 (Beryl AX) pocket-sized Wi-Fi 6 wireless travel gigabit router

1 USB storage drive

1 Shutdown key (or just find a spare USB device you can bring along)

1 Storage case (totally optional)

Key Stages:

  1. Get the Pi set up and running (assumed you have already done this)
  2. Get Jellyfin installed on the Pi
  3. Set up the USB drive for media
  4. Get nginx installed on the Pi
  5. Static IP on the travel router
  6. Get the ad blocker running on the travel router with a redirect
  7. Get the USB shutdown key created


Installing Jellyfin

https://pimylifeup.com/raspberry-pi-jellyfin/

Ultimately I just searched "install jellyfin on raspberry pi" and it just worked.

Set up USB Drive for media

TODO: More details on how I setup with the USB drive.

Installing nginx

https://engineerworkshop.com/blog/setup-an-nginx-reverse-proxy-on-a-raspberry-pi-or-any-other-debian-os/

Ultimately we will follow the above, though some of the details are a bit fuzzy for me. It has you edit a config file:

sudo nano example.com.conf

My system wasn't set up like that, so instead I used the default site (likely with this command):

ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default

From there we need to update:

sudo nano /etc/nginx/sites-available/default

to contain this:

server {
	listen 80;
	server_name jellyfin.car.com;	# the hostname we'll point at the Pi
	location / {
		proxy_pass http://localhost:8096;	# forward everything to Jellyfin on its default port
	}
}

This means a request coming in for jellyfin.car.com gets proxied to the Jellyfin server. You can replace jellyfin.car.com with whatever you want; that's just what I used. After editing, restart nginx (sudo systemctl restart nginx) so the change takes effect.

I specifically used localhost because, when I was doing setup, I was on my main Wi-Fi. That way, when I swapped between Wi-Fi networks, as long as the request came in via jellyfin.car.com (on my main or the travel Wi-Fi), it was handled correctly.

Static IP

https://docs.gl-inet.com/router/en/3/setup/gl-e750/more_settings/#static-ip-address-binding

This should generally be what you need to do: figure out your Pi's MAC address, then bind a static IP to it there. I think the default IP range is 192.168.8.x, so I set my Pi to something like 192.168.8.31.

Get Adblocker running on Travel Router with Redirect

https://medium.com/@life-is-short-so-enjoy-it/homelab-adding-local-dns-entry-into-adguard-home-arpa-and-pushing-to-clients-from-udm-se-8493253830e5

Next we are going to turn on AdGuard Home (a sweet feature of the travel router) and point a DNS entry at our Pi!

Specifically, we are going to do the section on adding local DNS entries into AdGuard.

I put in 

192.168.8.31 jellyfin.car.com

(instead of their arpa example)

Now when I visit jellyfin.car.com on that router's Wi-Fi, AdGuard looks the name up and returns the Pi's IP address. The request then hits nginx on port 80, which proxies it to Jellyfin on port 8096.
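As a quick sanity check from any device on the travel Wi-Fi, here's a minimal Python sketch (the hostname and IP are just the ones from above):

import socket
import urllib.request

# AdGuard should resolve the name to the Pi's static IP.
print(socket.gethostbyname("jellyfin.car.com"))  # expect 192.168.8.31

# And nginx should proxy the request through to Jellyfin.
print(urllib.request.urlopen("http://jellyfin.car.com/").status)  # expect 200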

 

Get USB Shutdown Key Created

http://blog.onaclovtech.com/2024/03/usb-shutdown-key.html

Finally, we will set up a USB shutdown key. This makes it so that when we are ready to shut down for the day, we can kill our Pi cleanly and unplug power after maybe 30 seconds.

You can use any old USB device. I'm using an old 32 MB USB key (my first, perhaps?), which I labeled "Shutdown Key".


Results

 

Once you have this set up, you can kick it off. The system should last on the battery bank more or less 5-7 hours (maybe more, maybe less, depending on use).

Alternatively, you can buy two power banks and run the router on one and the Pi on the other.

Finally you "could" possibly just run your router off the USB power in the car, so you don't need the power bank at all, however I didn't test that out, I think the router can handle power interruptions, the pi generally can get finnicky on that, so I wanted a way to be in control of that which is why it's on a power bank, so if I shut the car off I don't accidentally forget to gracefully power the pi off, and kill it in the middle of my trip.


***

This is a work-in-progress post. It covers most of what I did, but it was written after the fact and I haven't replicated the steps to verify they work exactly as described. You should still have enough info to accomplish what I did!

***


Friday, March 22, 2024

USB Shutdown Key

While building up a Raspberry Pi system recently, I needed an easy way to shut down the device without having to SSH into it.

An idea popped into my head: what if I plug in a USB drive and it automatically runs a script on the key, or something, and powers the device down?

Well, after a bit of googling, it turns out you can use udev rules to do exactly what I want, with no need to put anything on the drive.

This is the process.

First, figure out the vendor ID and product ID of the USB key.

The way I did this was by running lsusb and noting the devices; since this is a Raspberry Pi, the list was short.

I then plugged the USB drive in and ran lsusb again, noting which device was new.

Then I ran lsusb -v to get the values for idVendor and idProduct.

I then copied them into this line (remember not to include the 0x prefix), in the applicable spots.

# When this USB device is plugged in ("add"), run the shutdown script.
ACTION=="add", ATTRS{idProduct}=="2168", ATTRS{idVendor}=="0ea0", RUN+="/usr/local/bin/my_shutdown_script.sh"

I then placed that line into this file:

sudo nano /etc/udev/rules.d/99-usb-shutdown.rules

Next, I edited /usr/local/bin/my_shutdown_script.sh to contain the following:

#!/bin/sh
sudo shutdown -h now

Finally I ran 

sudo chmod +x /usr/local/bin/my_shutdown_script.sh

on the file and rebooted, just to make sure everything was stable.

Now when I plug in the usb drive, bam, it just shuts down.

Thanks to this post for the details; it was exactly what I did, with the minor note that I used lsusb -v (they just mentioned lsusb ;))!

https://www.reddit.com/r/linuxquestions/comments/yiw13c/trying_to_create_a_udev_event_to_safely_shutdown/

Saturday, December 30, 2023

2023 Donations

In case anyone is interested or needs ideas this is the list of Non-Profits I donated to this year.

I'm making it a goal to increase my donations by 3% each year. (This is the first year I was intentional about that decision.)

Signal -- https://signal.org/donate/

Pi-hole -- https://pi-hole.net/donate/#donate

Ronald McDonald House -- https://rmhc.org/

Farama Foundation -- https://farama.org/donations

MIT -- https://giving.mit.edu/form/#/ (I searched for OpenCourseWare)

Archive.org -- https://archive.org/donate

EFF -- https://supporters.eff.org/donate/year-end-challenge--DB

Explora -- https://www.explora.us/become-a-member/donate/

To Write Love on Her Arms -- https://twloha.com/donate/

This is my current list. I need to donate more, but I'll have to do some looking to see what else I'd like to help out with!

Wednesday, December 16, 2020

Raspberry Pi and Circuit Playground Express Temperature Logging using Python

So I tried searching for the above title and, well, it gave me this as the top result:

https://learn.adafruit.com/circuit-playground-express-serial-communications/overview

This one wants us to use a USB-to-serial device hooked up to the physical serial pins, but we are connected via USB, so I was a bit frustrated.

The rest of the search results weren't much better. 

So let's start with connecting your Circuit Playground Express to your Raspberry Pi. Make sure you can see the folder to plop files into; you might have to hit reset to put it back into that mode.

I did finally come across this:

https://learn.adafruit.com/circuitpython-made-easy-on-circuit-playground-express/circuit-playground-express-library

So we will start there. You can follow along from the start, but the instructions keep pushing the Mu editor, which I didn't attempt since I wasn't sure how it would actually log data for me. Following the instructions while ignoring that part amounts to flashing CircuitPython onto the board. Once that was on the board, we had the following page:

https://learn.adafruit.com/circuitpython-made-easy-on-circuit-playground-express/temperature

I then pushed code.py, which contained the following:

"""This example uses the temperature sensor on the Circuit Playground, located next to the image of
a thermometer on the board. It prints the temperature in both C and F to the serial console. Try
putting your finger over the sensor to see the numbers change!"""
import time
from adafruit_circuitplayground import cp

while True:
    print("Temperature C:", cp.temperature)
    print("Temperature F:", cp.temperature * 1.8 + 32)
    time.sleep(1) 
source: https://github.com/adafruit/Adafruit_CircuitPython_CircuitPlayground/blob/main/examples/circuitplayground_temperature.py

OK now our board should be outputting temperature data, great. How do we view it?

There is a process for viewing the board's output that isn't in that series of steps; I came across it here:

 https://learn.adafruit.com/welcome-to-circuitpython?view=all#whats-the-port-2977243-1

Which we use to find the port our device is communicating over:

 ls /dev/ttyACM*

Then from there we can test that we can see the data. This tells us to use screen (so you might need to apt-get install screen).

 https://learn.adafruit.com/welcome-to-circuitpython?view=all#connect-with-screen-2977916-10

screen /dev/ttyACM0 115200

When I ran the ls command above, mine returned /dev/ttyACM0, so yours will *likely* do the same unless you have other stuff connected. The 115200 is the baud rate to communicate at.

This at least got me to the point of making sure I can see the temp. If you get connected via screen, you may have to hit Ctrl+D and it'll start showing temps. If instead it tells you to hit reset so it starts running, do that. If that didn't work, you might need to check the instructions up to this point and make sure you followed them. When you are done with screen, Ctrl+A then k will kill the session (Ctrl+A then d just detaches).

OK, so we have data through screen, but that doesn't help us if we want to save it. This is where Python comes in.

I tried connecting using Python earlier but had no luck; it would just print a newline over and over, which wasn't much help. But here's the thing: after seeing that we needed Ctrl+D to get the temp data, something clicked. We need to send a Ctrl+D.

Here is how I got it working.

#!/usr/bin/env python
import time
import serial
import sys
import datetime

ser = serial.Serial(
    port='/dev/ttyACM0', # remember earlier we discovered mine is ttyACM0
    baudrate=115200
)
ser.write(b'\x04') # This mimics the Ctrl+D
time.sleep(5) # Wait a few moments for the Ctrl+D to take effect; it'll print a few messages and start telling us temps.
ser.readline() # One of the messages that gets printed we don't care about, so we throw it away.

print(sys.argv[1]) # Check we have a filename
while True:
    tmp = ser.readline().decode().strip() # Read the serial data (bytes in Python 3, hence the decode) and toss the newline.
    f = open(sys.argv[1], 'a') # Open our output file
    f.write(str(datetime.datetime.now()) + "," + tmp.split()[2] + "\n") # Write the current date/time and the temp (in F) to the file.
    f.close() # Close the file each time so it flushes to disk.
    time.sleep(10) # I collect the data every 10 seconds; you could wait a minute or 10 minutes if you like.


One constraint I'm uncertain of: if we wait 10 minutes between reads, does readline return the line from that moment, or the oldest "unread" line? (My understanding is that pyserial buffers incoming data, so readline returns the oldest unread line, which would make the timestamps drift behind the actual readings.) I'm not worrying too much right now but will look into it eventually.
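If that buffering does turn out to be a problem, one hedge (untested, just a sketch using pyserial's in_waiting) would be to drain the queue and keep only the newest complete line each cycle:

def latest_line(ser):
    # readline() returns the oldest buffered line; keep reading while
    # more bytes are waiting, so we end up with the most recent one.
    line = ser.readline()
    while ser.in_waiting:
        line = ser.readline()
    return line.decode().strip()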

One final note: in the above, I only print out the F temp (my code.py prints just that line), so my Python only accounts for that. Adjust your temp printing so it matches what you want.
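If you later want to eyeball what you logged, here's a minimal plotting sketch; pandas and matplotlib are assumptions on top of the setup above, not part of it:

import sys

import matplotlib.pyplot as plt
import pandas as pd

# The log has no header row: column 0 is the timestamp, column 1 is the temp in F.
df = pd.read_csv(sys.argv[1], header=None, names=["time", "temp_f"], parse_dates=["time"])
df.plot(x="time", y="temp_f", legend=False)
plt.ylabel("Temperature (F)")
plt.show()

Run it as python plot_temps.py yourlog.csv (the filename is made up).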


Thanks for following along, I hope this helped!



Thursday, November 26, 2020

Charity Event

Greetings,

I'm going to be playing Super Metroid for a charity event coming up on December 5th, 2020. I will be playing starting around 8:15am EST. I expect the gameplay to take about an hour and forty-five minutes at most.

If you'd like to watch, you can see more details at www.dogpoundexpo.com, which will include a schedule of events and a list of runners.

You can find out more details about the charity here: https://www.dogpoundexpo.com/charity



You can view the official twitch channel for the event here: https://www.twitch.tv/meddadog

I will be matching donations during my run, up to $250. So if you want to see me parted from $250, join in and donate!

If you want to follow me, I'm on https://www.youtube.com/channel/UCfkc2ygXwub-1pUj_Nq6hPg/ and twitch.tv/TysonRuns.

Thanks, 

Tyson

Friday, December 21, 2018

Deep Learning For Security Cameras Part 1

This is part 1 of a series of posts on my experience trying to build an object detection setup for my home security cameras. Here I'm going over some preliminary results and the history to this point. I plan to graph my results and dig into what I have in future posts.

Deep learning is all the rage these days. I get it: the idea of letting a computer extract features and find things is amazing. I love it. In fact, while working on my Master's at Georgia Tech, I took every machine learning class I could get my hands on, plus the reinforcement learning course that was also offered. They were AWESOME, and they really opened my eyes to how a lot of this stuff works.

A few months back I started with darknet and the tiny YOLO approach. I set up an RTSP server on my Raspberry Pi in Python to pull images and wanted to see how it did. I experimented on a few images, but to my surprise (although, in retrospect, unsurprisingly) it failed... pretty badly. On the first attempt it actually saw the car on the side of my yard, which was utterly amazing. I ran it again mere minutes later and it never saw the car again, even after multiple attempts. Of course, this was all running quite slowly; if I recall, on the order of 60 seconds per image. I decided that as long as zoneminder detected some kind of "motion", I could have it process on that. I was less concerned about realtime than about simply being notified at some point if something odd was going on.

I decided that over the holiday break I would try to work on this detector more. I started downloading all my camera data (which is really not all that old), and it turned out I had about 65GB of image data from motion captures using zoneminder. First off, sharing that with anyone who wants to work on this is difficult, so I am working on paring it down. It is at least broken down by camera, so I should be able to divvy it up that way.

I decided to try using image recognition to detect anything at all in these images as a first pass. I got everything set up and used https://www.tensorflow.org/tutorials/images/image_recognition

I modified it to run through an entire folder; it then spits out 3 different files. The first is a mapping of image -> human string, score, and index (the index is used for looking up the human string in the mapping). The second is a listing of each index with all the images that had that index in their top 5 (regardless of score). Finally, I output the index -> human string mapping so I could easily look things up. The files are named image_filtering_analysis, image_filter_mapping, and image_filtering_by_class.

I opened up image_filtering_by_class and looked up the first thing I saw, which was 160. This translated to "wire-haired fox terrier". The image was a view of my driveway, and I thought... well, I mean, I suppose that might be possible.

(Image: the "wire-haired fox terrier" frame.)

OK, so the first one is not very awesome. Though to be fair, I looked up the confidence score and that one only got 0.016631886. Interestingly, these were the top 5 scores:

  • submarine, pigboat, sub, U-boat, 0.27390754
  • patio, terrace,  0.064846955
  • steam locomotive, 0.040173832
  • fountain, 0.023076175
  • wire-haired fox terrier, 0.016631886

I looked up what some of the labels were in the dataset they used. I didn't see any plain "car" label, which is just odd to me. I will likely have to re-train on my data, but for now I need to figure out a way to find common features.

So of course one improvement would be to require the score to reach some threshold; suppose we set it to 50%, which would prevent that particular failure. Searching through my data (which, by the way, hasn't finished running), it found another version of the same image above but called it a patio, terrace with over 90% confidence. In fact, the image with the highest confidence overall was also labeled patio, terrace. Here it is.

(Image: the "patio, terrace" frame.)

I can only speculate that the poorly pixelated shape on the left is, well, me. Looking at it closely, it sort of looks like a body; it's in the images for a few frames, and there appear to be arms. I am pretty sure I was bringing a package in that day.

I wonder if someone (maybe me ;)) could build a form of feature detector, much like those used in classical CV, but generic instead of trained to a specific task. This might enable a way to group common features across images and make creating a supervised dataset easier.

I'll post more interesting ones as I come across them. At this point most of my image recognitions are of nighttime captures; it will eventually hit daytime, which I think will produce some even more humorous results.

For now, suffice it to say: this detection is not doing well. I wanted to at least run it with the hope that it would find cars or other things for building up a supervised set, but I don't think that's going to happen.

Here are some plots of the labels that were applied to the images (of course none of these reference confidence, but it's interesting).








Tuesday, November 15, 2016

Busy, Busy, Busy

I've been a busy boy while going back for my Master's. I'm currently taking 2 classes and have been posting quite a few videos called 60 Seconds to Success in OMSCS. Anyhow, if you're in the program and you want some helpful tips, go check it out.

https://www.youtube.com/channel/UCR-mQpFEIiBWd164H4-yhgw

I'll try to come back once in a while and post some fun stuff.

Lately I've been doing Hough transforms, disparity maps, solving linear algebra problems for stereo imaging, and calculating optical flow on sets of images. It's been pretty wild, and I'm getting ready for the end (2 more assignments left), but I can't wait to get set up using lightshowpi for my Christmas lights.

Saturday, March 12, 2016

Pickling for easier testing

Today I want to talk about Pickling. This approach allows you to save off a variable/data for re-use later.

Why would you want to do this? Well, suppose you're like me: you're working on an AI agent, you have a LOT of problems to run through, they all take a little bit of time, and say 50 of them are already solved by your agent, but that 51st isn't. Instead of running all 50 over and over (unless your agent is learning things, in which case you're good), you can pickle the passed-in variable and then make a simple "test" function that calls only the one you want, so it's way easier to test out individual parts.

For more general information about pickling take a look here: https://wiki.python.org/moin/UsingPickle

Let's get started.

Import pickle by adding the following line to the top of your Agent.py file:
import pickle

(I"m going to be using some verbiage from my AI course, however the basic idea is this, the agent we have starts with normal class name, __init__ and must implement a Solve function).

Make the start of your Solve function look like this:

   def Solve(self,problem):
      pickle.dump( problem, open( './pickles/' + problem.name + ".p", "wb" ) )

Next, make a new directory called pickles so pickle has somewhere to save to.

Finally, run your agent like you normally would.

If you look, there should now be a bunch of .p files in your pickles directory.

Next let's set up a "test" agent that allows you to run a SINGLE test (I know folks on our forums have asked about this, and well... here it is!).

Create a new file called test.py

Inside it, add the following:

import pickle
from Agent import Agent

A = Agent()
problem  = pickle.load( open( "./pickles/Basic Problem E-09.p", "rb" ) )
A.Solve(problem)

Replacing the .p filename with whatever you choose.

You can make this more "generic" if you'd like by accepting command-line input, so you can tell it which problem to run from the command line. Pretty slick, right?

That would look like this:

import pickle
from Agent import Agent
from sys import argv

A = Agent()
script, name = argv
problem  = pickle.load( open( "./pickles/" +  name + ".p", "rb" ) )
A.Solve(problem)

and be run with this command:

python test.py "Basic Problem E-09"


The final test.py can be found here:
https://gist.github.com/onaclov2000/d1d7fc01b22b98e0098e



Wednesday, March 2, 2016

Thinking outside the box

I'm currently enrolled in the degree program Georgia Tech offers through Udacity, called OMSCS. I'm taking Knowledge-Based Artificial Intelligence: Cognitive Systems.

The primary project we are working on is Raven's Progressive Matrices.

I have tons of ideas, and approaches. I'll talk about one that I'm experimenting with (and have no idea if it'll work).

One thought that crossed my mind was: what if I could think of these problems as a time relationship? Could I apply a Fast Fourier Transform (FFT)? Well, I am giving it a shot.

First I took a line-by-line reading of the image, then laid the rows end to end. So in a way you have one long, crazy waveform.

Next I used numpy to convert to the frequency domain using numpy.fft.fft(array).

I'm not much further than this, but I did try dividing A by B and A by C, graphing those ratios, and inverse-FFT'ing them.
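Concretely, the experiment looks roughly like this; a minimal sketch, assuming grayscale images of the same size (the filenames are made up):

import numpy as np
from PIL import Image

def to_signal(path):
    # Read the image line by line and lay the rows end to end: one long 1-D waveform.
    return np.asarray(Image.open(path).convert("L"), dtype=float).ravel()

a, b = to_signal("A.png"), to_signal("B.png")
fa, fb = np.fft.fft(a), np.fft.fft(b)
ratio = fa / (fb + 1e-9)        # the A/B "relationship" in the frequency domain
back = np.fft.ifft(ratio).real  # inverse FFT of the ratio, for graphing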

I thought the graphs looked really neat. So I am going to leave a few here for your enjoyment.

Which, by the way, I should note: I have no idea what I'm doing here. It's experimentation, and who knows if these graphs are even logical, but they're cool looking.

(Graphs: A/C and A/B.)


(Original images: C, A, and B.)
Have a good one. I'll probably post more about this in the future.