<!-- image = "images/plex-server-ai.jpeg" -->

Why Host?

I used Netflix for a long time, and I don’t think there’s anything wrong with that: it’s a household staple at this point. Netflix even has a dedicated button on my NVIDIA Shield remote.

But the fractured state of digital media has left people spending a lot of money on streaming services. We’re not even paying for convenience anymore: half the time I don’t know where to watch a given movie or show unless it’s an obviously mediocre Netflix original. And to add to the list of annoyances, Netflix has begun cracking down on password sharing (I’m screwed).

So instead of being at their mercy, I chose a platform that lets me take control of my own media library and consume it on my own terms, without annoying ads and without spending hundreds of dollars on entertainment each year. With Plex, I store and stream my own movies, TV shows, music, audiobooks, and other media from a central server.

It does cost $120 for a lifetime subscription to Plex, and you also need your own server hardware to run the service, but I have rationalized all that: I’d rather spend $$$ on hardware that I own than $$ on subscriptions that I don’t.

My Plex Media Server (PMS) Setup

Parts List

It’s a humble server. It doesn’t have UHD 4K movies and doesn’t have to concern itself with dozens of concurrent streams. It actually just sits there bored most of the time. I wish I had as much idle time as my server.

While most of the components are modest, the cost of disk space isn’t – even with getting the 10TB HDDs at historically low prices.

It runs Unraid OS (Plus). Funnily enough, I didn’t even have my own license for several years; I just bummed one off a colleague who switched from Unraid to Synology and no longer needed theirs.

Plex Deployment

I deploy my PMS using Docker containers. This lets me run upgrades pretty swiftly (thank you, watchtower) and easily back up configurations and container data.
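For context, the watchtower side of those upgrades is just another container watching the Docker socket. A minimal sketch of such a companion service, assuming the stock containrrr/watchtower image and label-based opt-in (the exact flags and schedule here are my assumptions, not the author's config):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    restart: always
    volumes:
      # Watchtower talks to the Docker daemon to pull and restart containers.
      - /var/run/docker.sock:/var/run/docker.sock
    # --label-enable: only touch containers that set
    # com.centurylinklabs.watchtower.enable=true (as the plex service does).
    # --cleanup: remove old images after a successful update.
    command: --label-enable --cleanup
```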

Here’s the docker-compose configuration for my Plex service:

docker-compose.yml
version: '3.3'
services:
  plex:
    image: plexinc/pms-docker:latest
    container_name: plex
    restart: always
    labels:
      - com.centurylinklabs.watchtower.enable=true
    ports:
      - '32400:32400/tcp'
    environment:
      - TZ=America/Los_Angeles
      - PLEX_CLAIM=claim-REDACTED
    volumes:
      - /mnt/cache/appdata/PlexMediaServer:/config
      - /mnt/cache/system-docker/plex-transcodes:/transcode
      - /mnt/user/media:/data

I tried to maximize performance by configuring Unraid shares properly. The /mnt/cache/appdata share uses the cache pool with the Prefer setting (more on this here). The benefit of this configuration is that my Plex container gets PCIe SSD read/write speeds, while the data is synced to my array of HDDs hourly.

The other share, /mnt/cache/system-docker, is where I store data that does not need to be moved to the array for long-term, parity-protected storage. Its cache setting is Only, meaning data stays on the cache and is never moved anywhere else. Plex transcode data is ephemeral and does not need to be stored elsewhere.

You might notice that mounting /tmp/:/transcode would technically give the best read/write performance, since all transcodes would be stored in RAM instead of on cache/disk. However, it is not the most stable option, especially when other containers and processes use a volatile amount of RAM. I will rarely be low on cache space, but I am frequently low on RAM.
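A middle ground, if you do want RAM-backed transcodes without letting them eat all your memory, is a capped tmpfs mount instead of a /tmp bind. A sketch of how the plex service's volume could look with compose's long mount syntax (the 4 GiB cap is an assumption; size it to your own headroom):

```yaml
services:
  plex:
    volumes:
      # tmpfs-backed transcode dir: lives in RAM, discarded on restart.
      # The size cap guards against the RAM-pressure problem described above.
      - type: tmpfs
        target: /transcode
        tmpfs:
          size: 4294967296   # 4 GiB, in bytes
```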

The Media Pipeline

PMS doesn’t do it all (especially since you are serving your own content). You’ll have to get that legitimate content somehow, and there are limited options these days in the world of DRM and copyright law. That’s not something I’ll be going over or advocating for or against; I’ll just list the tools that I’ve heard work well with Plex:

These tools will automate a lot of tasks for you, like downloading media to your server and moving the completed downloads into their corresponding movies/shows directories.

When you have a lot of content, maintaining those files can become unwieldy, so I defer some of that to additional services (also running in containers).

Filebot

github.com/jlesage/docker-filebot has been a game changer for me, (almost) flawlessly organizing my entire media library. Every directory, filename, and even the subtitles get properly organized. Previously it was common for Plex to mismatch my files, resolving the wrong movie or nothing at all. Using FileBot really does beat the manual process of either renaming files directly or fixing matches through the Plex UI.
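The reason the `{plex}` format in the config below works so well is that it targets Plex's documented naming convention: `Movies/Title (Year)/Title (Year).ext` and `TV Shows/Show/Season 01/Show - s01e01 - Episode.ext`. A toy Python sketch of that layout, just to illustrate what FileBot is producing (these helper functions are mine, not part of FileBot):

```python
from pathlib import PurePosixPath

def plex_movie_path(title: str, year: int, ext: str) -> PurePosixPath:
    """Build the Plex-preferred 'Movies/Title (Year)/Title (Year).ext' path."""
    folder = f"{title} ({year})"
    return PurePosixPath("Movies") / folder / f"{folder}.{ext}"

def plex_episode_path(show: str, season: int, episode: int,
                      ep_title: str, ext: str) -> PurePosixPath:
    """Build the Plex-preferred season/episode path with zero-padded numbers."""
    return (PurePosixPath("TV Shows") / show / f"Season {season:02d}"
            / f"{show} - s{season:02d}e{episode:02d} - {ep_title}.{ext}")

print(plex_movie_path("Inception", 2010, "mkv"))
# Movies/Inception (2010)/Inception (2010).mkv
```

With files laid out like this, Plex's scanner matches titles on the first pass instead of guessing from messy release names.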

docker-compose.yml
version: '3.3'
services:
  filebot:
    image: jlesage/filebot:latest
    networks:
      frontend:
        ipv4_address: 172.168.1.30
    environment:
      - TZ=America/Los_Angeles
      - KEEP_APP_RUNNING=1
      - "OPENSUBTITLES_USERNAME=<your_username>"
      - "OPENSUBTITLES_PASSWORD=<your_password>"
      - AMC_CUSTOM_OPTIONS=--def minLengthMS=1 --def minFileSize=1
      - AMC_SUBTITLE_LANG=en
      - AMC_INTERVAL=900
      - AMC_CONFLICT=auto
      - AMC_MATCH_MODE=opportunistic
      - UMASK=000
      - PUID=99
      - PGID=100
      - AMC_ACTION=move
      - HOST_OS=Unraid
      - AMC_SERIES_FORMAT=/output/{plex}
      - AMC_MOVIE_FORMAT=/output/{plex}
    ports:
      - "5800:5800"
    volumes:
      - "/mnt/cache/appdata/filebot:/config:rw"
      - "/mnt/user/media/shows:/watch/TV Shows:rw"
      - "/mnt/user/media/shows:/output/TV Shows:rw"
      - "/mnt/user/media/movies:/watch/Movies:rw"
      - "/mnt/user/media/movies:/output/Movies:rw"
    restart: always

# The network referenced above must be defined for the file to be valid;
# assuming it was created out-of-band (docker network create frontend):
networks:
  frontend:
    external: true

Cleanarr

github.com/se1exin/Cleanarr is useful as well. I had a lot of duplicates at one point, and combing through the Plex web UI to remove them is labor intensive. Cleanarr is still a manual process (you wouldn’t want some program deleting the wrong files without you knowing it), but I find that manual review gives me more peace of mind.

docker-compose.yml
version: '3.3'
services:
  cleanarr:
    image: selexin/cleanarr:latest
    ports:
      - "5050:80"
    environment:
      - BYPASS_SSL_VERIFY=0
      - PLEX_TOKEN=<plex_token>
      - PLEX_BASE_URL=http://<local_plex_url>:32400
      - LIBRARY_NAMES=Movies;TV Shows;
    restart: always
    volumes:
      - /mnt/cache/appdata/cleanarr:/config

Mitigating Data Loss

Unraid makes use of parity, and I use only one parity drive, meaning that if a single non-parity drive fails, I can recover its data by reconstructing the lost drive onto a replacement through the parity rebuild process.
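The principle behind a single parity drive is plain XOR: the parity drive holds the XOR of every data drive, so any one lost drive equals parity XOR the survivors. A toy Python illustration of that arithmetic (byte buffers standing in for whole drives):

```python
def compute_parity(drives):
    """XOR corresponding bytes of every data drive into a parity block."""
    parity = bytearray(len(drives[0]))
    for drive in drives:
        for i, b in enumerate(drive):
            parity[i] ^= b
    return bytes(parity)

def rebuild_drive(surviving, parity):
    """Recover one lost drive by XOR-ing parity with the surviving drives."""
    lost = bytearray(parity)
    for drive in surviving:
        for i, b in enumerate(drive):
            lost[i] ^= b
    return bytes(lost)

drives = [b"\x01\x02\x03", b"\x0f\x0e\x0d", b"\xaa\xbb\xcc"]
parity = compute_parity(drives)

# Pretend drive 1 died; rebuild it from parity plus the other two.
recovered = rebuild_drive([drives[0], drives[2]], parity)
assert recovered == drives[1]
```

This is also why only a single simultaneous failure is recoverable: with two drives gone, the XOR equation has two unknowns.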

I’ve done this before, and it isn’t a pretty picture with Unraid, especially with these large, slow disks. Rebuilding a 10TB HDD meant 12+ hours of downtime, and both the parity drive and the rebuilt drive ran hot the whole time, making me reconsider my life choice of not getting a case that supports more fans.