Raspberry Pi — Operations Reference

Detailed reference for running GoGetEm on a Raspberry Pi. For initial setup, start with beta-setup.md (Option B).

Why a Pi?

A Raspberry Pi makes a great home for GoGetEm — it's cheap to run 24/7, draws a few watts, and sits quietly on your network scraping jobs on a schedule. You check the web UI from any device on your LAN (or remotely via Tailscale), and your job feed is always fresh without you lifting a finger.

With the included GitHub Actions workflows, deployment is automatic: the nightly workflow runs at 4 AM UTC, cross-compiles the Go binaries for ARM64 on GitHub's runners (faster than compiling on the Pi), SSHs in via Tailscale, drops the binaries in place, and restarts the service. It skips entirely if the dev branch hasn't changed since the last run. For emergencies, deploy.yml can be triggered manually from the Actions tab.

Architecture on the Pi

Three processes share one SQLite database (db/jobs.sqlite3):

| Process | Binary | Managed by |
|---|---|---|
| Web server | bin/gogetem-web (or bin/gogetem-web-linux-arm64 from CI) | gogetem-web.service |
| Scraper | .venv-jobspy/bin/jobscrape | Built-in scheduler (invoked by web server) |
| MCP server | /api/mcp endpoint | Embedded in web server (HTTP, bearer token auth) |
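For ad-hoc inspection, the sqlite3 CLI can open that database directly. A minimal sketch, assuming the default DB_PATH and that the sqlite3 package is installed (no table names are assumed, since the schema isn't documented here):

```shell
# List tables in the shared database without taking a write lock.
DB="db/jobs.sqlite3"
if command -v sqlite3 >/dev/null && [ -f "$DB" ]; then
  sqlite3 -readonly "$DB" '.tables'
  STATUS=inspected
else
  echo "sqlite3 missing or no database at $DB yet"
  STATUS=skipped
fi
```

Run it from the install directory (/opt/gogetem) so the relative path resolves.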

The web server's built-in scheduler handles scraping on a configurable interval. The legacy gogetem-scrape.service and gogetem-scrape.timer are still in deploy/systemd/ for manual/fallback use but aren't required.

Systemd units

Unit files are in deploy/systemd/. They assume user gogetem and path /opt/gogetem — customize before installing:

cd /opt/gogetem

# Replace username with yours
sed -i 's/User=gogetem/User='"$USER"'/' deploy/systemd/*.service

# Replace install path if not /opt/gogetem
# sed -i 's|/opt/gogetem|/home/pi/gogetem|g' deploy/systemd/*.service

# If building locally (not from CI), fix the binary name
sed -i 's/gogetem-web-linux-arm64/gogetem-web/' deploy/systemd/gogetem-web.service

# Install
sudo cp deploy/systemd/gogetem-web.service /etc/systemd/system/
sudo cp deploy/systemd/gogetem-scrape.service /etc/systemd/system/
sudo cp deploy/systemd/gogetem-scrape.timer /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now gogetem-web

See deploy/systemd/README.md for full details.
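For orientation, the web unit is roughly shaped like the sketch below. This is illustrative, assembled from the facts above (user, install path, EnvironmentFile, MemoryMax); the actual file in deploy/systemd/ is authoritative:

```ini
[Unit]
Description=GoGetEm web server
After=network-online.target

[Service]
User=gogetem
WorkingDirectory=/opt/gogetem
EnvironmentFile=/opt/gogetem/.env
ExecStart=/opt/gogetem/bin/gogetem-web-linux-arm64
MemoryMax=128M
Restart=on-failure

[Install]
WantedBy=multi-user.target
```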

Environment variables

All configured in /opt/gogetem/.env (loaded by the systemd unit via EnvironmentFile):

| Variable | Default | Description |
|---|---|---|
| DB_PATH | db/jobs.sqlite3 | Path to SQLite database |
| LISTEN_ADDR | 127.0.0.1:8080 | Bind address. Use 0.0.0.0:8080 to expose on the network |
| APP_ENV | development | Set to production in systemd (disables debug output) |
| PAGE_SIZE | 25 | Jobs per page (10-50) |
| MCP_TOKEN | (empty = disabled) | Bearer token for /api/mcp. Generate with openssl rand -hex 32 |
| PROFILE_PATH | personal/profile.yaml | YAML file for first-boot profile migration (DB takes over after import) |
| SCRAPER_BIN | .venv-jobspy/bin/jobscrape | Path to Python scraper binary |
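Putting the table together, a typical production .env might look like this (values illustrative; note that EnvironmentFile= does not strip inline comments, so comments go on their own lines):

```shell
# /opt/gogetem/.env
APP_ENV=production
# 0.0.0.0 exposes the UI on the LAN; keep 127.0.0.1 if only local access is needed
LISTEN_ADDR=0.0.0.0:8080
DB_PATH=db/jobs.sqlite3
PAGE_SIZE=25
# generate with: openssl rand -hex 32
MCP_TOKEN=<paste-token-here>
```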

The systemd unit also sets Environment= lines that override some of these — check the service file if values seem wrong.

Memory limits

Tuned for Raspberry Pi 5 (4GB+):

| Service | MemoryMax | Notes |
|---|---|---|
| Web server | 128 MB | Sufficient for normal use |
| Scraper | 384 MB | JobSpy + pandas can spike during large fetches |

Increase these limits in the service files if your machine has more RAM. On a Pi 4 with 2GB, the defaults should still work.
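To raise a limit without editing the shipped unit files, a systemd drop-in survives later updates. A sketch (sudo systemctl edit gogetem-web creates exactly this kind of override file):

```ini
# /etc/systemd/system/gogetem-web.service.d/override.conf
[Service]
MemoryMax=256M
```

Then run sudo systemctl daemon-reload && sudo systemctl restart gogetem-web to apply it.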

Enabling the MCP HTTP endpoint

This lets Claude Code on your laptop talk to GoGetEm on the Pi.

On the Pi

cd /opt/gogetem

# Generate a token
openssl rand -hex 32

# Add to .env
echo 'MCP_TOKEN=<paste-token-here>' >> .env

# Restart
sudo systemctl restart gogetem-web

# Verify — should return 401 (auth required)
curl -s -o /dev/null -w "%{http_code}" -X POST http://127.0.0.1:8080/api/mcp
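As a follow-up to the 401 check, the same request with the token should return anything but 401. A sketch, assuming MCP_TOKEN holds the value from .env (the exact status for an empty POST body depends on the handler, so none is asserted):

```shell
# 401 means the token is wrong or missing; 000 means the server is unreachable.
code="$(curl -s -o /dev/null -w '%{http_code}' -X POST \
  -H "Authorization: Bearer ${MCP_TOKEN:-unset}" \
  http://127.0.0.1:8080/api/mcp || true)"
echo "status: $code"
```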

On your laptop

Add to ~/.zshrc or ~/.bashrc:

export GOGETEM_MCP_URL="http://<pi-ip>:8080"
export GOGETEM_MCP_TOKEN="<paste-token-here>"

The project's .mcp.json reads these env vars automatically — no secrets in the repo.

Finding your Pi's IP: If using Tailscale, run tailscale ip -4 on the Pi. Otherwise use the LAN IP from hostname -I.

Checking health

# Web server status
systemctl status gogetem-web
journalctl -u gogetem-web --no-pager -n 30

# Health endpoint
curl -sf http://127.0.0.1:8080/healthz

# Scraper logs (scraping runs inside the web server process)
journalctl -u gogetem-web --no-pager -n 30 | grep -i scrap

# Database size
du -h db/jobs.sqlite3
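The one-shot health curl above is fine interactively; in scripts (for example, right after a restart) a short poll is more forgiving. A sketch against the default LISTEN_ADDR:

```shell
# Poll /healthz for up to ~10 seconds and report the outcome.
healthy=no
for _ in 1 2 3 4 5; do
  if curl -sf --max-time 2 http://127.0.0.1:8080/healthz >/dev/null 2>&1; then
    healthy=yes
    break
  fi
  sleep 2
done
echo "healthy: $healthy"
```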

Updating

With GitHub Actions (automatic)

If you've set up CI/CD per beta-setup.md, the nightly workflow deploys from dev at 4 AM UTC (skips if no changes). The workflow:

1. Cross-compiles gogetem-web-linux-arm64 and gogetem-mcp-linux-arm64
2. SSHs to the Pi via Tailscale
3. Copies binaries to /opt/gogetem/bin/
4. Restarts gogetem-web.service
5. Runs a health check

For immediate deploys, trigger deploy.yml manually from the GitHub Actions tab.

Manual update

cd /opt/gogetem
git pull
make build-all
sudo systemctl restart gogetem-web

From a remote machine (cross-compile locally)

# On your laptop
make build-pi build-mcp-pi
scp bin/*-linux-arm64 <user>@<pi-host>:/opt/gogetem/bin/
ssh <user>@<pi-host> "sudo systemctl restart gogetem-web"
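If the Makefile isn't at hand, a target like make build-pi boils down to a plain Go cross-compile. In this sketch, the package path ./cmd/gogetem-web and CGO_ENABLED=0 (which assumes a pure-Go SQLite driver) are guesses; check the Makefile for the real recipe:

```shell
# Cross-compile the web binary for a 64-bit Pi from any machine with Go installed.
if [ -d cmd/gogetem-web ]; then
  GOOS=linux GOARCH=arm64 CGO_ENABLED=0 \
    go build -o bin/gogetem-web-linux-arm64 ./cmd/gogetem-web
else
  echo "run from the repo root (cmd/gogetem-web not found)"
fi
```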

File ownership

The user in the systemd unit must own the install directory:

sudo chown -R $USER:$USER /opt/gogetem

Common issues

| Problem | Cause | Fix |
|---|---|---|
| SQLITE_BUSY errors | Concurrent access contention | Both Go and Python use a 5s busy timeout, so this usually self-resolves. If it persists, check for stuck processes |
| Web server OOM-killed | MemoryMax=128M too low | Increase it in the service file, then sudo systemctl daemon-reload && sudo systemctl restart gogetem-web |
| Scraper OOM-killed | Large result set from JobSpy | Increase MemoryMax in the scrape service, or reduce results_wanted in scrape profiles |
| Port 8080 in use | Another service on that port | Change LISTEN_ADDR in .env and restart |
| MCP endpoint returns 404 | MCP_TOKEN not set | Add the token to .env and restart |
| Scores all zero | No match profile configured | Go to /settings and save a match profile |
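For the stuck-process case in the first row, you can check which processes hold the database open. A sketch using fuser (package psmisc on Raspberry Pi OS); lsof db/jobs.sqlite3 gives the same answer:

```shell
# Show PIDs that have the database file open; say so if nobody does.
DB="db/jobs.sqlite3"
if command -v fuser >/dev/null && [ -f "$DB" ]; then
  fuser -v "$DB" || echo "no process has $DB open"
else
  echo "fuser not installed or $DB missing"
fi
```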