From e-waste to smart wall panels: I built my own app with AI in two chat sessions

There’s a specific kind of guilt that only makers and home automation enthusiasts will truly understand. It’s the guilt of a device you bought with the best of intentions, mounted on a wall with great pride, and then… completely forgot about. For me, that was nine cheap Android touch panels. Nine. Mounted across the house. Running the stock launcher. Doing precisely nothing useful.

When I bought them — these little px30_evb units running Android 8.1 — the plan was to run Home Assistant dashboards on them. And for a while, they did! But the available apps felt like a compromise. The UI wasn’t made for small square 480×480 screens. Touch targets were fiddly. The whole thing felt like a port of a desktop dashboard squeezed into a form factor it was never designed for. One by one, I stopped using them. Nine wall panels. Nine expensive-ish light switches.

Then I had a shower thought.

The shower thought that started everything

What if I just built my own app?

I know. I know how that sounds. “Just built my own app” is the kind of thing people say before disappearing into a rabbit hole for six months, emerging pale and blinking into the daylight, clutching a half-working prototype and a new appreciation for why Gradle takes so long. But this time I had a secret weapon: GitHub Copilot. Not the tab-autocomplete kind — the full agentic kind, where you describe what you want, point it at your codebase, and it actually does the work.

Two chat sessions later — I’m not exaggerating, two — I have a working smart home panel platform that I’m genuinely proud of. Let me walk you through how it happened.

What I actually wanted

The panels are square (480×480), touch-only, wall-mounted. They needed to feel like they were made for that form factor — not squeezed into it. My list going in:

  • Room-aware: each panel knows its room and shows the right thing
  • Sonos integration: music controls are the biggest daily use case in every room
  • Lighting controls: quick on/off/dim for the room’s lights
  • Camera feeds: see who’s at the door from any panel in the house
  • Responsive to touch: proper touch targets, swipe gestures, no fiddly tap zones
  • Fleet managed: I can push updates to all nine panels without physically touching each one

I also wanted it to feel like a product, not a side project. Rounded cards, smooth transitions, dark theme, album artwork — the works. My wife’s approval rating (WAR, as I call it) depends heavily on things not looking like I’ve wired a Raspberry Pi to a fire hazard.

The architecture: Hub-as-Orchestrator

Before writing a line of code, I spent time with Copilot sketching out the architecture. The approach we landed on is what I'd call Hub-as-Orchestrator, and it looks like this:

HA ──REST──► Hub ──ADB/WebSocket──► Panel 1...9
                 ◄──Socket.IO────── Admin PWA

There are three components:

1. The Android App (HARoomPanel) — a native Kotlin + Jetpack Compose app installed on each panel. Config-driven: it asks the Hub what pages to show, and renders them. No hardcoded logic per room.

2. The Hub — a TypeScript/Node.js server running on my home server. It talks to Home Assistant over a persistent WebSocket, manages the fleet of panels via ADB, hosts a React admin PWA for configuration, and acts as a proxy so the panels don’t talk to HA directly. One HA connection for all nine panels, not nine.

3. The HA Integration — a Python custom component that exposes hub status, device info, and controls back to HA as proper entities. Optional, but it means I can automate things like “wake the kitchen panel when motion is detected.”

The Hub pattern means everything is centralised. Updates go to the Hub, and the Hub pushes the APK to the fleet. Config changes made in the admin PWA push to devices immediately. If a panel is offline, it syncs when it reconnects. The panels are thin clients — intentionally so.
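
To make "config-driven" concrete, here's a rough sketch of the kind of contract a panel and the Hub could share. The names and endpoint are illustrative, not the project's actual schema; the point is that the panel only ever asks the Hub what to render.

import retrofit2.http.GET
import retrofit2.http.Path

// Illustrative models only; the real HARoomPanel config schema will differ.
data class PageConfig(
    val type: String,                  // "sonos", "lights", "camera", ...
    val settings: Map<String, String>  // page options set in the admin PWA
)

data class PanelConfig(
    val room: String,                  // the room this panel belongs to
    val pages: List<PageConfig>        // ordered pages the panel should render
)

// The panel identifies itself and asks the Hub what to show; no direct HA access needed.
interface HubApi {
    @GET("api/panels/{deviceId}/config")
    suspend fun getConfig(@Path("deviceId") deviceId: String): PanelConfig
}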

The tech stack

I’ll be honest: some of these choices were Copilot’s recommendations, and they were good ones.

  • Android App: Kotlin, Jetpack Compose (Material 3), Hilt, Retrofit, OkHttp WebSocket
  • Hub Backend: TypeScript (strict), Express, Socket.IO, SQLite (sql.js)
  • Hub Frontend: React 18, Vite, Tailwind CSS v4, shadcn/ui, Lucide icons
  • HA Integration: Python 3.11+, aiohttp, DataUpdateCoordinator

Version management uses CalVer (yyyy.MM.dd.HHmm) — both the Hub and Android app carry the same version string, so I can immediately see if a panel is running today’s code or yesterday’s. The Hub auto-deploys via ADB when it detects a panel is running a stale version.
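
The CalVer stamp itself is cheap to generate at build time. Here's a minimal build.gradle.kts sketch of the idea; this is my guess at the shape, not the project's actual build script.

// build.gradle.kts (app module) - sketch only
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// CalVer stamp shared with the Hub, e.g. 2025.05.02.1430
val calVer: String = LocalDateTime.now()
    .format(DateTimeFormatter.ofPattern("yyyy.MM.dd.HHmm"))

android {
    defaultConfig {
        versionName = calVer
    }
}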

Where AI came in — and how it actually worked

I want to be precise about this, because “I built it with AI” gets thrown around very loosely these days, and I’m a little tired of it meaning “Copilot filled in a few function stubs.”

What I actually did was open GitHub Copilot in VS Code in agent mode — the full chat panel — and started a conversation. I described features, shared screenshots of what I wanted the UI to look like, pasted error logs when things broke, and let Copilot write, edit, and iterate across multiple files at once.

In the first session, Copilot:

  • Built the entire Android app skeleton — MainActivity, PanelScreen, page routing, WebSocket connection
  • Built the Hub server with device management, ADB integration, and the config API
  • Built the HA custom component with sensors and switches per panel
  • Built the admin PWA with device list, page library, and config editing
  • Built the APK upload, storage, and fleet push pipeline

That’s not “it helped me write some code.” That’s Copilot running dozens of tool calls, reading the file system, writing Kotlin, TypeScript, Python, and CSS at the same time, running Gradle builds, catching its own errors, and deploying to my actual panels over ADB. I was largely directing, reviewing, and providing context. The typing was mostly mine in the chat box, not in the code editor.

In the second session — the one still running as I write this — we’ve been iterating on the Sonos page specifically. This is where it gets fun.

The Sonos page: from basic to genuinely great

The Sonos page started as a simple now-playing card — artwork, track name, play/pause, volume. It’s come a long way.

Speaker grouping

Sonos lets you group speakers together, with one acting as the group coordinator. The panel now shows grouped speakers in a single card with a left accent bar when playing, and each speaker in the group gets its own volume slider so you can dial in the mix per room. Finding the group coordinator requires reading the group_members[0] field from HA’s media player state — something Copilot figured out from the HA entity attributes without me having to explain the Sonos data model.

There’s a subtle bug we hit here. Solo speakers (ungrouped) report themselves as their own group: group_members = [self]. If you naïvely iterate over all speakers and register “coordinator → speaker” pairs, you’d overwrite genuine group relationships with solo self-references. The fix was a single guard:

// Groups speakers by coordinator (the first entry in group_members).
// Solo speakers self-report as their own group, so the guard keeps a
// singleton only when its lone member really is its own coordinator and
// drops buckets whose coordinator isn't a speaker we know about.
fun bucketSonosGroups(speakers: List<MediaPlayerState>): List<SpeakerGroup> {
    val grouped = speakers.groupBy { speaker ->
        speaker.attributes["group_members"]
            ?.jsonArray?.firstOrNull()?.jsonPrimitive?.content
            ?: speaker.entityId
    }
    return grouped
        .filter { (coordinator, members) ->
            members.size > 1 || members.first().entityId == coordinator
        }
        .mapNotNull { (coordinator, members) ->
            val leader = members.firstOrNull { it.entityId == coordinator }
                ?: return@mapNotNull null // coordinator isn't one of our speakers
            SpeakerGroup(leader = leader, members = members.filter { it.entityId != coordinator })
        }
}

Drag-only volume sliders

Here’s one I’m particularly pleased with. On a scrollable list, standard sliders are a nightmare — you try to scroll between speakers and accidentally turn the kitchen up to full blast at 11pm. I asked Copilot to make the sliders only respond to deliberate horizontal drag, not vertical scroll events. The result is DragOnlyVolumeSlider: it intercepts touch events, calculates drag angle, and only activates when the touch is within ±30° of horizontal. Vertical scrolls pass straight through to the parent list.
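
For the curious, here's roughly what that gesture handling can look like in Compose. This is a simplified sketch with hypothetical names, not the app's actual DragOnlyVolumeSlider: wait for the first touch-slop movement, measure its angle, and only claim the gesture when it's close to horizontal.

import androidx.compose.foundation.gestures.awaitEachGesture
import androidx.compose.foundation.gestures.awaitFirstDown
import androidx.compose.foundation.gestures.awaitTouchSlopOrCancellation
import androidx.compose.foundation.gestures.horizontalDrag
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.input.pointer.positionChange
import kotlin.math.abs
import kotlin.math.atan2

// Hypothetical modifier, not the app's real implementation.
fun Modifier.horizontalDragOnly(onDelta: (Float) -> Unit): Modifier =
    pointerInput(Unit) {
        awaitEachGesture {
            val down = awaitFirstDown(requireUnconsumed = false)
            var accepted = false
            // Wait for the first movement past touch slop and measure its direction.
            val start = awaitTouchSlopOrCancellation(down.id) { change, overSlop ->
                val angle = Math.toDegrees(atan2(abs(overSlop.y), abs(overSlop.x)).toDouble())
                if (angle <= 30.0) {
                    // Near-horizontal: claim the gesture so the list doesn't scroll.
                    accepted = true
                    onDelta(overSlop.x)
                    change.consume()
                }
                // Steeper than 30°: don't consume, so the parent list's scroll wins.
            }
            if (accepted && start != null) {
                horizontalDrag(start.id) { change ->
                    onDelta(change.positionChange().x)
                    change.consume()
                }
            }
        }
    }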

Quick Groups

Via the admin PWA page settings, I can define preset speaker groups with a name — “Whole House”, “Kitchen + Study”, whatever makes sense. On the Sonos page, these appear as pills at the top. Tap one and the speakers in the preset immediately group and start playing. Copilot implemented the activation by calling media_player.join to bring speakers into the coordinator’s group, then media_player.play_media to resume.
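
Sketched out, the activation amounts to two service calls. The client interface below is a stand-in for however the panel proxies calls through the Hub, and the names are illustrative.

// Stand-in for the panel's Hub-proxied Home Assistant client.
interface HaServiceClient {
    suspend fun callService(domain: String, service: String, data: Map<String, Any>)
}

// Join the preset's speakers to the coordinator's group, then start playback.
suspend fun activateQuickGroup(
    ha: HaServiceClient,
    coordinator: String,          // e.g. "media_player.kitchen"
    members: List<String>,        // the other speakers in the preset
    contentId: String,            // what to play once grouped
    contentType: String
) {
    ha.callService(
        domain = "media_player", service = "join",
        data = mapOf("entity_id" to coordinator, "group_members" to members)
    )
    ha.callService(
        domain = "media_player", service = "play_media",
        data = mapOf(
            "entity_id" to coordinator,
            "media_content_id" to contentId,
            "media_content_type" to contentType
        )
    )
}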

Quick Favourites

This one had the most entertaining debug story. I wanted to pin specific Sonos favourites on the media page — tap once and it plays, no browsing required. Simple enough idea, but the Sonos browse API in HA is a three-level hierarchy:

root → Favorites (type=favorites, id="") 
     → category folders (Playlists / Radio / Tracks, type=favorites_folder)
     → playable items (type=favorite_item_id, id=FV:2/xxx)

Getting to the actual favourites meant browsing all three category folders in parallel. Copilot’s first attempt used media_content_id: "favorites" — the wrong string. The correct value is an empty string. I found this by adding a temporary debug endpoint that dumped the raw browse tree from HA and looking at what came back. We fixed it, and now the admin PWA auto-detects a Sonos speaker, fetches all 55 of my favourites with their artwork, and lets me tick which ones to pin on the panel. No copying cryptic IDs like FV:2/146 anywhere.
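
For reference, the resulting browse sequence looks roughly like this. The types and the browseMedia call are illustrative stand-ins for the Hub-proxied media_player browse, not the project's actual code.

import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

// Minimal stand-ins for the browse response and the Hub-proxied browse call.
data class BrowseNode(
    val title: String,
    val mediaContentType: String,
    val mediaContentId: String,
    val children: List<BrowseNode> = emptyList()
)

interface MediaBrowser {
    suspend fun browseMedia(entityId: String, contentType: String, contentId: String): BrowseNode
}

suspend fun fetchSonosFavourites(browser: MediaBrowser, speaker: String): List<BrowseNode> =
    coroutineScope {
        // Level 1: the Favorites root; media_content_id must be "" (not "favorites")
        val root = browser.browseMedia(speaker, contentType = "favorites", contentId = "")
        // Level 2: the category folders (Playlists / Radio / Tracks), fetched in parallel
        val folders = root.children
            .filter { it.mediaContentType == "favorites_folder" }
            .map { folder -> async { browser.browseMedia(speaker, folder.mediaContentType, folder.mediaContentId) } }
            .awaitAll()
        // Level 3: the playable favourite items (ids like FV:2/xxx)
        folders.flatMap { folder -> folder.children.filter { it.mediaContentType == "favorite_item_id" } }
    }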

The media browser

Swipe up on the Sonos page and you get the speaker list. Swipe right and you get the media browser — Sonos favourites, Spotify, with full artwork. Tap a favourite and it plays. All driven by the same HA WebSocket browse API.

The fleet pipeline

One of my favourite parts of the whole project is the deploy pipeline. I genuinely didn’t want to be the person who has to SSH into a server, download an APK, and adb install it nine times. So we built this:

  1. Gradle builds the APK with a CalVer version name (2025.05.02.1430)
  2. The APK uploads to the Hub with version metadata via the API
  3. The Hub marks it as the fleet target
  4. sync-fleet pushes it to all currently-online panels over ADB
  5. Any panel that was offline auto-updates when it reconnects (the Hub spots the version mismatch on WebSocket connect)

The whole thing runs in a single PowerShell block, usually under 90 seconds for a full build and push. Nine panels. No touching any of them. This is the part that would have taken me days to build solo — Copilot built it in an afternoon, including the Hub-side storage, the multipart APK upload endpoint, and the Android-side update receiver.

The git worktree lesson

I’ll finish with a cautionary tale, because this is the kind of thing that burns you exactly once and then you never forget.

The work-in-progress lives in a Git worktree — a separate checkout of the same repo for this feature branch. Halfway through one session, the panels were showing the right version number but not the new features. All the new Kotlin code was definitely in the worktree. The panels were definitely running the new version tag. But they were behaving like the old code.

It turned out the deploy script had a hardcoded path: cd x:\Repos\haroompanel\android. In a worktree session, that path still exists — it just points to the main branch. Gradle was compiling main’s code but stamping it with the new version number we passed in. Version looked right. Code was wrong. No errors anywhere.

The fix, once spotted, is one line:

# Works for both main checkout and git worktrees
$repoRoot = (git rev-parse --show-toplevel) -replace '/', '\'
cd "$repoRoot\android"

git rev-parse --show-toplevel returns the current worktree root, not the main repo. That makes the deploy script location-aware regardless of which worktree you’re running it from. Lesson learned, skill definition updated, never doing that again.

Where it goes from here

The panels are working. The Sonos integration is genuinely great to use daily. Still on the list:

  • Line-in / TV source switching for speakers with AV inputs
  • Per-device page config — set the primary Sonos speaker per room without touching every device config individually
  • Activity logging in the Hub’s admin panel
  • Android TV remote page for the living room

But honestly, even at this point, these nine panels aren’t e-waste anymore. They’re doing exactly what I imagined when I first bought them. They just took about six years and a shower thought to get there.

The thing that still surprises me is how quickly it happened. Two chat sessions. Not six months of weekends. Not a half-finished prototype. A working platform I’m actually using, with a fleet deployment pipeline, Sonos speaker grouping, media browsing, and Jetpack Compose UI on Android 8.1 hardware — all in maybe 12 hours of actual work spread across two days.

That’s what agentic AI feels like when it works properly. Not autocomplete. Not a code suggestion in the margin. A collaborator that can hold the full context of a 2,000-line Kotlin file, a TypeScript backend, a React frontend, a Python integration, and a fleet of Android devices all at once — and make meaningful, coordinated progress on all of them while you drink your coffee ☕.

The panels are on the walls. The music’s playing. Nine fewer pieces of e-waste in the world.


The project is private for now while I tidy it up, but I’ll be publishing it soon. If you’re doing something similar with Android panels and Home Assistant, I’ve learned a lot about ADB fleet management, Sonos WebSocket APIs, and Jetpack Compose on constrained hardware — feel free to reach out.