Luis-Ruiz

Building in Public: Turning My Orin Nano Into a Worker Appliance for Razzy

5/1/2026, 2:11:52 AM

building in public, ai agents, raspberry pi 5, orin nano, tailscale, nodejs, home lab, ruiztechservices, razzy, worker api

I removed OpenClaw from my NVIDIA Orin Nano and rebuilt the machine as a private, token-protected worker appliance controlled by Razzy, my Raspberry Pi 5 home agent. This is the first stable foundation for my personal home AI architecture.


I am building my home AI architecture in public.

The goal is simple: I want a practical, private, modular system where my Raspberry Pi 5 acts as the main controller for my home agent setup, and my NVIDIA Orin Nano acts as a child worker appliance for heavier tasks.

I am calling the Raspberry Pi 5 agent Razzy.

The Orin Nano is no longer the main agent. It is now a worker.

That distinction matters.


The Problem

I originally had OpenClaw running on my Orin Nano. It worked, but the setup started to feel bulky and redundant.

The Orin Nano has a GPU. That makes it valuable. But it does not need to be the main brain of my home architecture.

I do not want multiple parent agents competing for control.

I want one parent system.

That parent is Razzy.

The Orin should be used when needed, not treated like another full-time agent brain.


The New Architecture

The new design looks like this:

Raspberry Pi 5 / Razzy
  Parent controller
  Owns goals, memory, planning, orchestration

        |
        | Tailscale private network
        | Bearer token authentication
        v

NVIDIA Orin Nano
  Child worker appliance
  Handles approved jobs only

The Raspberry Pi 5 decides what should happen.

The Orin Nano performs approved work and returns the result.

That is the relationship I want.


What I Removed

The first step was removing OpenClaw from the Orin Nano completely.

That included:

  • OpenClaw CLI
  • OpenClaw systemd service
  • OpenClaw user
  • OpenClaw npm package
  • OpenClaw data directories
  • OpenClaw shell completion references
  • OpenClaw ports and running processes

After cleanup, the Orin was no longer acting as an OpenClaw agent.

That gave me a clean foundation.


What I Built Instead

I built a private Node.js worker API on the Orin Nano.

It runs as a systemd service and listens only on the Orin's Tailscale IP:

100.86.175.53:8787

It does not bind to 0.0.0.0.

That means the worker API is not exposed broadly on the local network. It is reachable only through my private Tailscale network.

The worker currently supports:

  • Public status index
  • Protected health endpoint
  • Protected capabilities endpoint
  • Protected job creation
  • Protected job listing
  • GPU probing
  • Workspace creation
  • Workspace listing
  • Append-only job logging

All protected routes require a bearer token.


Current Worker Capabilities

The Orin worker exposes these capabilities:

health_check
capabilities
gpu_probe
token_auth
job_create
job_list
job_get
create_workspace
list_workspaces

The currently allowed job types are intentionally limited:

gpu_probe
create_workspace
list_workspaces

This is deliberate.

I am not adding raw shell execution yet.


Why I Am Avoiding Shell Execution For Now

It would be easy to give Razzy the ability to run arbitrary commands on the Orin.

That would also be reckless.

Before command execution exists, the worker needs safer primitives:

  • Workspace inspection
  • File writing
  • File reading
  • File listing
  • Job logs
  • Policy rules
  • Output limits
  • Timeout enforcement
  • No sudo
  • No interactive shells

I want this system to become powerful, but I do not want it to become sloppy.

The worker should behave like a controlled tool server, not like an open terminal.


The Job Ledger

The Orin worker now writes job records to:

/var/lib/razzy-worker/jobs.jsonl

Each job is stored as one JSON object per line.

The job ledger records both successful and failed jobs.

For example, when I tested an unsupported job type, the worker rejected it and still logged the failed attempt. That is good behavior. Failed jobs matter because they show what the parent controller attempted.

This is simple for now. Eventually, I may move this into SQLite once the job schema stabilizes.


Systemd and Reboot Survival

The worker runs through:

razzy-worker.service

It survives reboot.

I also hit a real startup issue: the service sometimes started before Tailscale had assigned the Orin its Tailscale IP.

The failure looked like this:

EADDRNOTAVAIL: address not available 100.86.175.53:8787

The fix was to add a startup wait script:

/srv/razzy-worker/scripts/wait-for-bind-ip.sh

The systemd service now waits for the Tailscale IP to exist before starting the Node server.

That made startup cleaner and more reliable.
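The unit wiring looks roughly like this. The script path is the one from this post; the service description, `ExecStart` path, and dependency ordering are assumptions about how such a unit is typically written, not a copy of my actual file.

```ini
# Sketch of razzy-worker.service (paths other than the wait script
# are illustrative)
[Unit]
Description=Razzy worker API
After=network-online.target tailscaled.service
Wants=network-online.target

[Service]
# Block startup until the Tailscale IP is assigned.
ExecStartPre=/srv/razzy-worker/scripts/wait-for-bind-ip.sh
ExecStart=/usr/bin/node /srv/razzy-worker/server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

`ExecStartPre` is the important line: systemd will not run `ExecStart` until the wait script exits successfully, which is what closes the race with Tailscale.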


Testing From Razzy

After the Orin worker was stable, I tested control from the Raspberry Pi 5.

Razzy can now reach the Orin over Tailscale, authenticate with the bearer token, and submit jobs.

The Razzy-side test passes:

PASS: GET /
PASS: GET /v1/health
PASS: GET /v1/capabilities
PASS: POST /v1/jobs gpu_probe
PASS: POST /v1/jobs list_workspaces
PASS: GET /v1/jobs
=== Razzy can control Orin worker ===

That is the first real milestone.

Razzy can now control the Orin worker.
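On the Razzy side, submitting a job is one authenticated POST. This is a hypothetical client sketch assuming Node 18+ (global `fetch`); the URL shape matches the endpoints above, but the request body format is an assumption.

```javascript
// Hypothetical Razzy-side client: builds and sends a job request.
// Body shape ({ type, params }) is an assumption, not the worker's spec.
function buildJobRequest(baseUrl, token, type, params = {}) {
  return {
    url: `${baseUrl}/v1/jobs`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ type, params }),
    },
  };
}

// Sending is one fetch call (Node 18+ global fetch):
async function submitJob(baseUrl, token, type, params) {
  const { url, options } = buildJobRequest(baseUrl, token, type, params);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`worker rejected job: ${res.status}`);
  return res.json();
}

// e.g. submitJob("http://100.86.175.53:8787", token, "gpu_probe")
```

Splitting request construction from sending keeps the auth and body logic testable without a live worker on the network.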


Current Status

The foundation is complete.

The Orin Nano is now:

  • Cleaned of OpenClaw
  • Reachable over Tailscale
  • Running a protected Node.js worker API
  • Bound only to its Tailscale IP
  • Using bearer token authentication
  • Running as a systemd service
  • Able to probe its GPU
  • Able to create/list controlled workspaces
  • Writing an append-only job ledger
  • Controlled successfully by Razzy on the Raspberry Pi 5

This is no longer just an idea.

It works.


What Comes Next

The next phase is safe workspace file operations.

I want to add job types like:

get_workspace
delete_workspace
list_workspace_files
read_workspace_file
write_workspace_file

Those need to come before command execution.

The correct progression is:

1. Workspace file operations
2. Policy allowlist
3. Per-job logs
4. Controlled command execution
5. GPU/model-specific jobs

Only after that should I add jobs like:

run_allowed_command
run_python_script
run_node_script
git_clone_project
build_project
run_tests
package_artifacts

Even then, command execution should be restricted to approved workspaces, time-limited, logged, and never allowed to use sudo.


Why This Matters

This project is part of a bigger goal.

I want to build a real home AI architecture that is practical, private, and modular.

Not a toy chatbot.

Not a random pile of scripts.

Not multiple machines all pretending to be the main brain.

The direction is:

Razzy thinks and delegates.
The Orin executes approved work.
Everything is logged.
Everything is controlled.
Everything grows one safe layer at a time.

That is the kind of system I want to build.

This is the first stable foundation.
