Infinite Craft

There’s this guy named Neal who makes the coolest little web games / interactive pages. (Seriously, everything on his site is so cool you should definitely check it out: https://neal.fun ).

His new “game” might be the weirdest, but also most addictive one yet. It’s called Infinite Craft, and starting with just “earth”, “wind”, “fire”, and “water” you drag and drop words on top of each other to craft new ones. That’s it. You just keep making new words and see what weird paths you can go down. If you happen to be the first person to discover a new word you get a little notification too. It’s so simple but really engrossing. I can’t stop trying to find new words! PLEASE SEND HELP.

eXoDOS – Someone Collected Over 7000 FULL DOS Games In One Download

This is legitimately nuts! Someone made a collection of over 7000 full-release DOS games you can download. It's called eXoDOS. It comes with the DOSBox emulator bundled, plus a front end with box art, game descriptions, and the whole 9 yards. There's even a script to set it all up for you. Very slick!

When I was a kid we had an old Compaq Presario with Windows 3.1/DOS on it, and I played so many of those shareware DOS games. I never did buy a full game – I would just cycle through my different samples again and again and I was perfectly happy with that (sorry Apogee).

My only complaint here is that there’s too many games! I had to go hunt down my old shareware disks so I could find the handful I played as a kid and would love to see again. I have a lot of great memories with Commander Keen, Jazz Jackrabbit, The Hugo Trilogy, Duke Nukem 1+2, Rise of the Triad (with the coolest box art of ALL TIME), Monster Bash, Raptor, Pickle Wars, Wacky Wheels, and so many more I can’t remember the titles of.

And if that's not enough for you, the same person has put together a similar package of Windows 3.x games.

Linkwarden – A Self-Hosted Bookmark Manager and Cache

Google recently announced that they’re discontinuing their web page caching service. Instead, they are directing folks to the Internet Archive’s Wayback Machine, which is a great project for sure, but it sucks to lose options. I know I’ve used both extensively. It’s handy when you need some information from a site that’s suddenly down, and I can’t count the number of times I’ve bookmarked a useful tutorial from a random blog that has gone completely missing the next time I try to access it.

But what if you could cache websites that are important to you… yourself?

Linkwarden (if you somehow skipped the title of this post) is a self-hosted, web-based bookmark manager that caches all the pages saved within it. It can be configured to save pages as a screenshot, PDF, and/or searchable text. Links can be arranged in collections and given multiple tags, making them very easy to search for.
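
If you want to try it yourself, it's a pretty standard self-hosted Docker setup. Here's a rough sketch of spinning up an instance by hand; the image name, environment variables, and container data path below are my best reading of the Linkwarden docs rather than gospel (double-check them there), and you'll need a Postgres database for DATABASE_URL to point at:

docker run -d --name linkwarden \
    -p 3000:3000 \
    -e DATABASE_URL="postgresql://linkwarden:somepassword@your-db-host:5432/linkwarden" \
    -e NEXTAUTH_SECRET="some-long-random-string" \
    -e NEXTAUTH_URL="http://localhost:3000/api/v1/auth" \
    -v /path/on/host/linkwarden:/data/data \
    ghcr.io/linkwarden/linkwarden:latest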

I spun up an instance and imported all my browser bookmarks and man, it is SO much easier to find the links you're looking for when they're tagged and you can search the actual text content of the cached pages. I often find myself totally lost looking for exactly what I named some random tutorial I bookmarked 5 years ago in my browser (I refer back to a lot of tutorials, ok?). Interestingly, you can have multiple accounts on an instance and share collections between users to add to collaboratively. Not likely a feature I'll use much, but could be handy for group research projects I guess?

It also has a plugin for most browsers allowing you to quickly add new links to it as you go. Super basic but it gets the job done. Honestly the only features I'd like to see added are the ability to search bookmarks from the browser plugin, and (this is very much wishful thinking) some way to add notes to the cached pages. Sometimes I want to add clarification to the tutorial steps! That's perhaps a bit outside the scope of a tool like this, but I would love such a feature.

Wish-list features aside, this is a fantastic program and a great way to store web links in a way that actually makes them easy to find.

How I Cleaned Up My Bookmarks Before Importing Them Into Linkwarden

It turns out that my bookmarks were hella messy. Between switching back and forth between browsers multiple times and never cleaning them up before… it was bad. Thankfully I found a couple of handy Firefox extensions that made the cleanup much easier. I'm sure similar options exist for other browsers as well.

Duplicate Bookmark Finder – This extension lets you know if you have duplicate copies of the same bookmark and helps you to easily remove them. I had like 700 bookmarks and a couple hundred of them were duplicates… oops.

404 Bookmarks – This extension will ping all of your bookmarks and let you know which ones are already dead links. I had a lot of these as well. I ended up deleting a bunch that weren’t important to me anymore, and replacing the rest with a cached version from The Wayback Machine.

So, What Else Can You Do With a Steam Deck?

2024-03-08 Update: added Chiaki, how cool is that?

I finally caved and got myself a Steam Deck (one of the shiny new OLED ones even!) and being pathologically unable to just leave things as-is, it took about a day before I started to wonder what else you can do with one beyond just playing Steam games. Quite a bit actually!

  • Get a Dock/USB-C Hub and Use It As a Portable Desktop Computer
    • Obviously not the most convenient operation here, but Desktop mode works as advertised if that’s your jam.
  • Install Discord and Chat While You Play
    • Grab it from the Discover store, right click and add it to Steam, and you're good to go!
  • Install Emulators and Play ROMs With EmuDeck
    • It's amazing how slick this script is. It will even add your ROMs to your Steam library so you can search for and launch them just like any other game.
  • Install All Sorts of Crazy Plugins with Decky
    • Too many plugins to list here, but I particularly like SteamGridDB which lets you update the images for your games – particularly useful for adding images to non-Steam games.
  • Install Other, Non-Steam Game Launchers & Games
  • Install Proton-GE for Extra Compatibility
    • Proton-GE is a community-developed fork of Proton that contains extra fixes which may make some games run better. You can get it from the Non-SteamLaunchers script as well.
  • Even Install Freaking Android Apps On It?!? Yeah sure, why not, go crazy!
    • If you really want to, Waydroid has you covered.
  • Install Minecraft Java Edition with Steam Deck Controller Support
    • There's a lovely Linux Minecraft launcher called Prism, which can install the aptly titled "Fabulously Optimized" mod pack with great Steam Deck controller support.
  • Install Linux Native Fan Ports of Classic Games
    • I am constantly amazed by all of the reverse-engineered (or reconstructed from scratch!) fan ports of games. If they have a Linux version there's a good chance they'll work on your Steam Deck. Install them in desktop mode, right click on the executable, select "Add to Steam", and you should be good to go!
    • Some of my favorites include Super Mario 64, The Legend of Zelda: Ocarina of Time, Jedi Academy, Jazz Jackrabbit 2, and DOOM.
  • Install Mods!
    • Obviously the steps to do so will vary per game, but here are some instructions for Stardew Valley.
  • Install Chiaki4Deck and Stream your PS4/PS5 to Your Deck
    • Chiaki is an open source re-implementation of Sony’s remote play software that works on Linux. Someone made a special version just for the Steam Deck and it works great!

That’s everything I’ve found. Is there anything cool I’ve missed? Let me know!

Fixing a Fat PS2 With Disk Reading Issues

I've got an old fat model PS2 sitting in a box that couldn't read disks for some reason. I figured it might be fun to try and fix that up. After doing some reading online, it turns out there are a bunch of issues that could cause a PS2 to not read disks, but most people suggest the laser is the likely culprit and will need to be replaced. You can get a new laser shipped from AliExpress for about $20, so why not? I ordered one and gave it a go when it arrived.

I found a very nice tutorial on YouTube showing you how to swap the laser. I had to get a buddy to remove the little solder blob for me since I don’t own a soldering iron, but the rest was easy: Pop it open, open the drive, take the laser off the rail, disconnect the ribbon cable, and then reverse it with the new laser.

Well I put it all back together and… it didn’t work. Crap.

So after some research… Other Possible Issues Preventing a PS2 From Reading Disks:

  • The laser skew needs to be adjusted
  • The laser is dirty
  • The motor rail is bent / worn down
  • The worm gear / rails need to be re-greased
  • The magnet can’t get a good hold of the disk and so it can’t spin up to full speed

I started with the skew adjustment. As you can see from my header image, there's a large white gear just above the laser. Turning this gear adjusts the skew of the laser. You'll notice as you turn it clockwise the laser housing raises slightly, and if you keep going it will eventually fall back down to the starting position and then start moving up again.

Most people recommend using a marker to put a little mark at the gear's initial position, then dropping it down to the starting position and working your way up slowly to see what position works best. You'll want to turn the gear a couple of notches, then try to load a disk. If it doesn't work, turn it a couple more. It's a pretty slow process, so to speed it up make sure you've disassembled your disk drive housing and popped the magnet out of the top of the drive (it looks like it's stuck in there good, but it is flexible enough to pop out). This way you can just set the magnet on top of the disk and pop the tray in and out to make quick adjustments.

In my case, I eventually found a sweet spot where the PS2 would try to load the disk, but couldn't quite make it. I'd get the white-on-black "PlayStation 2" loading screen… then nothing for a bit, and then it would give up and toss me back to the main menu.

Unfortunately, none of the other possible issues seemed to be affecting me either. The laser was brand new, the rails and worm gear were well greased, the motor rail seemed to be connecting well, and I'd popped the magnet right out of the drive housing so it was sitting directly on the disk. However, I did notice two other strange things: after adjusting the skew of the laser the PS2 could successfully load PS1 disks (but not any type of PS2 disk), and it made a strange clicking sound when it tried to load PS2 disks and failed. The fact that it could read PS1 disks told me the laser wasn't a dud and was connected correctly at least.

I took it apart again and was manually moving the laser assembly up and down the worm gear when I noticed something odd… the left side of the laser was clipping the edge of a PCB directly underneath it! Well, that would explain the weird clicking! But why was this happening? From what I could tell it was just barely clipping the board, and if the laser was raised ever so slightly higher it would easily clear it. I took a look at the old laser and noticed something different: the old laser had a little screw that went through it and pushed against the left rail, raising it ever so slightly, and my new laser was missing this screw! I moved the screw over and voilà! The new laser could clear the board, move freely, and read disks!

And look at that, it successfully read Virtua Fighter 4: Evolution, the least scratched disk I own!

Now my best guess as to why it would successfully load PS1 disks when the laser module couldn't move past the halfway point is that maybe everything needed to boot a PS1 disk (or at least the ones I tried) sits right near the center of the disk. I'm sure if I kept playing those games I'd eventually hit a point where the laser hit the PCB and couldn't read any further. Still, very odd.

I also double checked that YouTube laser swap tutorial, and he doesn’t mention anything about having to move that weird little screw over, but I do see the screw on his new laser. I suspect most new lasers must come with this screw attached and mine was just missing it for some reason.

Either way, I’ve now got a PS2 that can successfully read disks and load up games! MISSION ACCOMPLISHED.

How to Turn a Microsoft OData ASP.NET Web API Into a Docker Image

A couple of years ago I had to learn how to make OData APIs to allow access to some of our applications at work. I was amazed at how easy Microsoft makes this process for you in Visual Studio; there's so much scaffolding that doesn't really change between projects and they just… do it all for you automatically? How nice!

If you've read my past few blog entries you'll know I'm a big fan of self-hosting and Docker containers, and now that Microsoft's newest versions of .NET can run on Linux, this got me thinking… how hard would it be to turn an OData API project into a Docker container? As it turns out, very easy!

I took an old joke script I'd made and turned it into an OData Web API to help me learn the ropes. It's basically a mad-lib generator that spits out random bits of text you can configure. Very dumb, but a perfect candidate for testing containerization.

Let’s take a look at the Dockerfile I made for containerizing that project:
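
Here it is in full (pieced back together from the snippets we'll walk through below):

FROM mcr.microsoft.com/dotnet/aspnet:6.0

WORKDIR /app
COPY . /app

ENV ASPNETCORE_URLS http://*:80
EXPOSE 80

ENTRYPOINT ["dotnet", "TitleGeneratorAPI.dll"]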

Really? That’s It???

As you can see there’s almost nothing to it. It’s way less complicated than my last Dockerfile example. Let’s go over each part:

FROM mcr.microsoft.com/dotnet/aspnet:6.0

Microsoft provides their own base images with the ASP.NET runtime for Linux installed and ready to go, so we just have to specify one of those images to build off of. (When I made this, ASP.NET 6.0 was the latest version available; just make sure you match the version your project is using.)

WORKDIR /app
COPY . /app

First you have to compile the application in Visual Studio, but assuming the compiled output is in the same directory as this Dockerfile, this copy-everything command pulls the whole project into the image's /app directory (which WORKDIR also sets as the working directory).
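
If you'd rather skip Visual Studio, the dotnet CLI can produce the same output. Something along these lines should do it (the output folder name is just an example; the idea is to publish the app and put this Dockerfile alongside the published files):

dotnet publish -c Release -o ./publish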

ENV ASPNETCORE_URLS http://*:80

The ENV command lets you set environment variables, and we're setting ASPNETCORE_URLS to "http://*:80", which tells the .NET runtime to listen on port 80 on any IP address it has.

EXPOSE 80

This marks port 80 as a port the container listens on, so it can be mapped to a port on the host when you run it.

ENTRYPOINT ["dotnet", "TitleGeneratorAPI.dll"]

This line says that when the container runs, it should launch the compiled program (TitleGeneratorAPI.dll) with the dotnet runtime.

The ENTRYPOINT command is similar to the CMD command we used last time, but any additional arguments you pass when starting the container get appended to it rather than replacing it (we aren't passing any here, but you know, let's mix it up a little).

And that’s really it. Build the image, spin up a container, and you’re good to go!
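
Concretely, that looks something like this. The image name is just an example, and 8080 is an arbitrary host port to map onto the container's port 80:

docker build -t yourname/titlegeneratorapi .
docker run -d -p 8080:80 yourname/titlegeneratorapi

Then you can point your browser (or curl) at http://localhost:8080/ plus whatever route your controller exposes.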

If you really want to, you can grab the compiled container image for this example project from its DockerHub repo and see it in action.

Creating Your Own Docker Container Image

Docker containers are amazing AND they’re surprisingly simple to make! As long as you know the basics of Linux you have what it takes!

All you need is:

  • A program that runs in Linux to containerize
  • A base image
  • A startup script to launch the program (you could call it directly, but this gives us some flexibility)
  • A Dockerfile to put it all together

I'm a big fan of the new HITMAN video games, but annoyingly you need to always be online to use most of the features. Thankfully some really smart folks made a little Node.js replacement for the HITMAN server called "The Peacock Project" that you can point your game at if you'd rather play locally. This program is great, and I noticed no one had made a containerized version yet, so I went ahead and did that myself!

Come along with me as I update it to the latest version and you can see how simple the process is.

First, let’s take a look at the Dockerfile on my github page:
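
Here's the whole thing in one piece (the same lines we'll break down step by step below):

# syntax=docker/dockerfile:1

FROM debian:stable-slim

EXPOSE 80/udp
EXPOSE 80/tcp

RUN apt-get update && apt-get install -y wget unzip bash curl

RUN curl -s https://api.github.com/repos/thepeacockproject/Peacock/releases/latest \
    | grep "browser_download_url.*zip" | grep -v "lite"\
    | cut -d : -f 2,3 \
    | tr -d \" \
    | wget -q -O Peacock.zip -i -

RUN unzip -q Peacock.zip \
    && rm Peacock.zip \
    && mv Peacock-* Peacock/ \
    && rm -r Peacock/nodedist \
    && wget -q -O node.tar.gz https://nodejs.org/dist/v18.18.2/node-v18.18.2-linux-x64.tar.gz \
    && tar -xzf node.tar.gz --directory Peacock \
    && mv ./Peacock/node-v18.18.2-linux-x64 ./Peacock/node \
    && rm node.tar.gz

WORKDIR /Peacock

RUN mkdir userdata contractSessions

COPY ["start_server.sh", "start_server.sh"]

RUN chmod a+x start_server.sh

CMD ["./start_server.sh"]

VOLUME /Peacock/userdata
VOLUME /Peacock/contractSessions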

Some of this is a bit convoluted, but I'll break it down (I promise it's not that bad!). Building from this file "compiles" your Docker image, so it needs to include all the steps it takes to put your container together.

# syntax=docker/dockerfile:1

The first line is a syntax directive that tells Docker which version of the Dockerfile syntax to use.

FROM debian:stable-slim

This line indicates what base image to use. Debian slim is just what it sounds like: a version of Debian that doesn't have much bundled in with it, which will help keep our image super small.

EXPOSE 80/udp
EXPOSE 80/tcp

These two lines indicate which ports internal to the container should be exposed externally. These will be whatever ports your application itself uses (assuming it needs to talk over the network). In this case, Peacock uses port 80, the standard unencrypted web traffic port. When you go to set up an instance of the image you can map these to different external ports on your host.

RUN apt-get update && apt-get install -y wget unzip bash curl

The "RUN" command specifies shell commands to run inside the image while it's being built. In this case, we make sure our package lists are up to date and install the wget, unzip, bash, & curl tools, as we'll need those later (yes, this base image is THAT slim!).

RUN curl -s https://api.github.com/repos/thepeacockproject/Peacock/releases/latest \
    | grep "browser_download_url.*zip" | grep -v "lite"\
    | cut -d : -f 2,3 \
    | tr -d \" \
    | wget -q -O Peacock.zip -i -

This is the most complicated bit, but essentially it asks the GitHub API for the latest Peacock release, greps out the download URL for the full .zip (skipping the "lite" build), and hands that URL to wget, which downloads it as "Peacock.zip" to keep things simple.

This is pretty cool and makes updating my Docker image semi-automatic – when there is a new version I just have to build the image from the Dockerfile again and it will automatically grab the newest release for me. A lazier version of this could have a hardcoded download link and would need to be updated between versions.

RUN unzip -q Peacock.zip \
    && rm Peacock.zip \
    && mv Peacock-* Peacock/ \
    && rm -r Peacock/nodedist \
    && wget -q -O node.tar.gz https://nodejs.org/dist/v18.18.2/node-v18.18.2-linux-x64.tar.gz \
    && tar -xzf node.tar.gz --directory Peacock \
    && mv ./Peacock/node-v18.18.2-linux-x64 ./Peacock/node \
    && rm node.tar.gz

This bit unzips the Peacock.zip file we downloaded, deletes the .zip, renames the extracted folder to Peacock/, removes the bundled Windows version of Node.js, then downloads the appropriate Linux version and un-tars it into the directory as Peacock/node.

WORKDIR /Peacock

The WORKDIR command sets the current working directory, so all commands run relative to that location. In this case we’re just moving into the “/Peacock” directory we made when we unzipped the project.

RUN mkdir userdata contractSessions

Again we're using the RUN command, but this time to create a couple more directories we'll later expose for volume mounting. The server uses these two directories for persistent user data.

COPY ["start_server.sh", "start_server.sh"]

This command copies the startup script I made (which should be in the same directory as the Dockerfile on the host) into the current working directory of the image (/Peacock), which will allow us to call it to run the software.

RUN chmod a+x start_server.sh

This RUN command marks the startup script as executable, so we can run it.

CMD ["./start_server.sh"]

The CMD command sets the "starting point" of the container. This is the script/program that it will attempt to run when you start up a container of this image. In this case we're setting it to the startup script we copied over. The script itself is very simple, and just calls Peacock's start script with Node.js, listening on port 80.
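
For the curious, the script boils down to something along these lines; the exact entry-point filename depends on the Peacock release you grab, so double-check it, but the only real job is pointing the bundled Linux Node.js at Peacock's start script:

#!/bin/bash
# Launch Peacock with the bundled Linux Node.js runtime (entry script name may differ per release)
cd /Peacock
exec ./node/bin/node chunk0.js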

VOLUME /Peacock/userdata
VOLUME /Peacock/contractSessions

The VOLUME command specifies that a particular directory should be externally mounted into the container from the host system. This allows you to keep persistent data: any changes made to the files of a container are not maintained if the container is destroyed and a new one is created from the same image. By default, the VOLUME command will automatically create directories on the host when a container is spun up, but we can specify particular host directories we want mounted too (bind mounting).

And that’s it for the Dockerfile! If you need a rundown of the other available commands, check out the Dockerfile Reference page.

Ok, now that we know how a Dockerfile works we need to run it! I'm jumping onto a terminal on my UNRAID server here, changing into the Dockerfile's directory, and then running:

docker build -t gunslap/peacockserver:6.5.7 .

This command says to build an image from the Dockerfile in the current directory, and name it "gunslap/peacockserver" with the version tag "6.5.7". The name matches my DockerHub account name and project name, which will allow me to upload it there later, and the tag is how we track different versions, or things like "latest" or "nightly", etc. We can add additional tags to this version when it's complete.

That will take a few minutes to run through all the commands and build the image. Once it’s complete you should see:

Successfully built <random characters>
Successfully tagged gunslap/peacockserver:6.5.7

in the terminal output. Then we can run:

docker tag gunslap/peacockserver:6.5.7 gunslap/peacockserver:latest

This command takes the image titled “gunslap/peacockserver” with the tag “6.5.7” (the one we just built) and adds the additional tag of “latest”. This way when we upload it to DockerHub it will replace the previous version tagged “latest” and anyone who asks for the “latest” version will now get this one.

Then we can run the following two commands:

docker push gunslap/peacockserver:latest
docker push gunslap/peacockserver:6.5.7

Which will push both of our tags, "latest" and "6.5.7", to DockerHub. You should get a bunch of messages with the second one saying that each layer already exists; this is expected, as these are the exact same image. We just want to make sure both tags are available for download from the hub, should someone want the specific version or just the latest.

This only works if you’re currently signed into DockerHub (or some other Docker repository) and are attempting to push to a repo that you have access to. If you’re not signed in you will have to use the docker login command first.
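
That part is as simple as running:

docker login

which will prompt you for your DockerHub username and password (or an access token) before you push.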

And that’s it! Now anyone using your image can query DockerHub / visit the website for the latest version.

You can still use the image you created even if you don't upload it to DockerHub. When you go to create a container, referencing the name + tag you set will look for the image locally first.
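
For example, to spin one up by hand (the host paths and the external port here are just placeholders, swap in your own):

docker run -d --name peacock \
    -p 8080:80/tcp \
    -p 8080:80/udp \
    -v /path/on/host/userdata:/Peacock/userdata \
    -v /path/on/host/contractSessions:/Peacock/contractSessions \
    gunslap/peacockserver:latest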

In my case I have an instance of the image pointed at DockerHub, so I simply have to tell UNRAID to look for an update and apply it and I'm good to go. Thankfully Peacock is nice enough to output the current version to the terminal when it loads up, so I can peek there for an additional check that everything is updated and working correctly.

In the future I’d love to make it so the startup script itself checks for and downloads a new version of the application. I’ve seen other Docker images do this and it’s great. All you have to do is restart it and it will update itself.

Vertical Videos

I admit it: I’m getting old. I really don’t understand what the deal is with TikTok and short vertical videos in general. But I’m ok with that, I’m totally cool with not understanding what the kids are into these days. I accept my fate.

And now that I’ve said that… I was real bored the other day so I chopped up a bunch of our old MissilePuppy videos into short vertical clips and posted them to YouTube and Facebook… and I’ll be damned. Most of these videos got way more views in vertical format than they ever did as “regular” YouTube videos. We’re talking thousands instead of dozens in some cases. Maybe it’s the way the format and the algorithm work? People can spam-watch a bunch of these in a short period of time so it’s easier to get it in front of people?

Either way it inspired me to get over my old-man fears of TikTok and make an account to post a few clips there. My account was almost immediately banned with no explanation. I guess TikTok is super anti guns/weapons, so maybe having "Missile" in my name got me banned? Whatever, I made a new account without any references to weapons in the name and it seems to have stuck around, BUT NOW I can't post most of the videos I made because the gun ban is super strict and like half of these videos end with someone getting shot (I am aware that's my only punchline).

I give up. I’m too old for this shit.

The Joys of Running a Home Server

Over the last handful of years I've gotten deeper and deeper into running a home server / self-hosting apps. If you love playing around with tech, there are so many cool and fun things you can do with a home server setup.

It started simply enough: I wanted to have a live backup for my family's files and photos at home, so I bought a little 2-bay Synology NAS (it was a DS218, but the current model is the DS223). That was a fun little machine! Synology has put a lot of work into their DSM OS; it's very easy to use and a great entry point if you'd like to dip your toes into home server stuff. Obviously its primary purpose is storage, but there's a great bunch of apps you can run on your Synology (it has its own little app store with all the basics, and you can extend it with the community store for even more options), AND it supports Docker containers, which lets you run practically anything you can imagine.

If you buy a Synology NAS keep in mind it's not meant to be a powerhouse server, so you can't go too crazy with the number of apps running off it. However, I was surprised just how many things I could get away with. At one time I had this blog, Plex, a file sync service, and a bunch of the default Synology apps running off it with no issues. It did start to chug when I tried to add a Minecraft server though… So like, not bad! If you want a small, quiet, power-efficient, easy-to-use NAS that can handle a few other tasks, you'd be hard-pressed to find something better than a Synology DiskStation.

(When I wrote up my last post on searching for a usable Google Photos replacement I left out Synology Moments, which is pretty great in its own right! You just need to have a Synology NAS to use it.)

This setup worked great for a while, but then one day my brother-in-law asked me if I wanted an old Dell PowerEdge rack-mounted server he wasn't using anymore… and how could I say no to a free server?!? This thing has two 10-core CPUs and 96GB of RAM. I'm not sure I could use all that power even if I tried! Suddenly I'd gone from struggling to run one Minecraft server to easily running a dozen if I wanted.

At the suggestion of a friend with a similar setup, I decided to run UNRAID on it for the OS. UNRAID is a Linux-based OS that, as the name implies, doesn't use traditional RAID for data redundancy. Instead, it uses what's called a "parity" drive: a drive that, for every bit position, stores the parity (an XOR) of that bit across all of the data drives. That way if one drive goes down, you can reconstruct it using the other drives and the parity drive. I didn't even realize that was a thing. How cool is that?
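
If you want to see the idea in miniature, here's a toy version with three one-byte "drives" (plain shell arithmetic, nothing UNRAID-specific):

# Three "data drives", one byte each, plus their parity (bitwise XOR of all of them)
d1=0xA7; d2=0x3C; d3=0x5F
parity=$(( d1 ^ d2 ^ d3 ))
# Pretend drive 2 died: rebuild its byte from the survivors plus the parity drive
rebuilt=$(( d1 ^ d3 ^ parity ))
printf 'parity=%#x  rebuilt drive 2=%#x (was %#x)\n' "$parity" "$rebuilt" "$d2"

Scale that up to every byte on every drive and you've got the gist of how a single parity drive can cover a whole pile of disks.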

What UNRAID really offers is flexibility. You can have any combination of hard drive sizes, speeds, or brands, no problem. It can make it all work together. The catch (and it's a minor one) is that your parity drive needs to be your largest drive. This makes sense: the parity drive has to cover every byte position on every data drive, so it can't be smaller than any of them.

The OS is accessed almost entirely from the web and is very intuitive and easy to use. Obviously it works great as a NAS and makes sharing file access over the network a breeze, but it also has Wireguard VPN built in, can run Virtual Machines, and (most importantly in my mind) has a really lovely interface for running Docker containers.

For those unfamiliar with Docker containers, they're similar to virtual machines. However, where a virtual machine emulates the hardware and runs its own full operating system, a container virtualizes at the operating-system level: it shares the host's kernel and bundles up just the application and everything it needs to run. The advantage here is that each Docker container comes packaged with its own prerequisites, so you never have to worry about finding the right version of a dependency to work with all your programs. This makes deploying, maintaining, and updating Docker containers a breeze. For a great intro to Docker, I recommend Network Chuck's Docker Containers 101 video.

Like I mentioned earlier, UNRAID has its own Docker container manager that works great. It's simple and has everything you'll need to manage containers. They also have something called "Community Apps", which are containers that come with a configuration file that lets you easily set up a container with pre-filled defaults. Most of the popular containers already have a Community App configuration, and if they don't? No worries! You can search the DockerHub collection of containers directly by clicking the "Click Here To Get More Results From DockerHub" button:

look, there it is

When you grab a container directly from DockerHub you also have the option to attempt an automatic Community App conversion, where it will try to set up all the ports and mapped drives the container requires. In my experience this works quite well and I rarely have to fight with it further.

Alright, I’ve told you my entire life’s story and how I got to this point, but what am I actually running on my UNRAID server you may ask? What makes all this effort worth it?

Good question! Here’s a non-exhaustive list of some of my favorite self-hosted apps I’m running:

And here’s some of my favorite resources for learning more:

What's next? Well, I haven't gotten into the world of home automation yet, but I swear everyone I know is into it, so I suspect I'm going to get sucked into that one sooner or later.

Immich – Finally, A Good Self-Hosted Google Photos Replacement

As I mentioned in my last post, I was on the lookout for a self-hosted Google Photos replacement. The first thing I did was get PhotoSync for my phone so I could get all my photos going forward synced to my NAS and sorted nicely (it's a great little program, highly recommended).

Then I started testing out different photo management software and THERE ARE SO MANY OPTIONS. LibrePhotos and PhotoPrism were among the better ones I tried, but all of them left me feeling disappointed. It turns out Google Photos has a lot of features and I don't like missing any of them! (Call me greedy if you must.) Some could only have one user, didn't have automatic face detection (or even worse, had horrible face detection), wouldn't accept a pre-existing library… the list goes on and on.

I was close to giving up when I heard about Immich. It looks almost identical to Google Photos, it supports multiple users and external photo libraries, it has Android and iPhone apps, and its face detection algorithm is half decent! I am VERY impressed!

There are a few missing features/things that annoy me about it, but thankfully Immich is in very active development so there's a good chance these things will get cleared up in future updates. In fact, in the time I've been using the app they've already fixed two things that were bugging me:

  • Added in the ability to modify the date/time of photos from within the app – something I had to do all the time in Google Photos. (This might still be broken for external libraries, but it’s a start).
  • Added in the ability to correct false face detection matches – My wife and 4-year-old looked very similar as toddlers, and the auto-detected faces for them are all mixed together haphazardly. I'm not sure if there's a way to outright remove a false detection (it seems you have to pick a new person to assign it to?) but this is a very welcome fix.

None of these other issues are showstoppers for me… but I’m going to list them anyways:

  • You have to manually refresh "External" libraries. By default, Immich expects you to upload photos from the phone app to the default library. In my case I already have all my photos organized on my NAS (and I'm adding new ones using the aforementioned PhotoSync) and I just want them to show up in the app. Thankfully, they have a feature called "External Libraries" where you can point it at a folder and have it include those photos. This works very well, except that you need to go in and manually trigger a refresh of the external libraries to have any new photos show up. A bit of a pain in the butt.
  • You can't delete photos from External Libraries from within the app. For some reason they haven't given the app the ability to delete externally linked photos; if you want them gone, you need to go directly to the file system to do it. Another small pain in the butt.

Realistically those are minor gripes. Overall this is a fantastic program and, frankly, the first truly great Google Photos replacement I've seen. Kudos to the developers! Thanks to your efforts I might be able to decouple myself from Google Photos yet!