
From Idea to Infrastructure: How I Designed the Foundation of Arcturus

Arcturus began with a clear goal: to build a productized, project-based system that distilled the core functionalities we found ourselves recreating across various client apps at Globacore. The vision was a modular, plug-and-play platform where content and analytics could be managed centrally, while client applications — whether built in Unity, on the web, or otherwise — could fetch content and interact with a project-scoped API environment. It needed to support real-time data, QR code interactions, gamification, and flexible content tagging — all while staying client-agnostic and scalable.

Defining the Core Concepts

At the heart of Arcturus is a clear distinction between two types of users: internal staff and client applications.

Internal staff are team members with a Google Business account on our domain. They access the Arcturus web application to manage projects, configure settings, and curate content. This includes our producer team, who need to quickly stand up event-ready experiences; on-site techs, who rely on the interface to verify setup and troubleshoot issues during deployment; and developers, who may need to inspect data structures or trigger actions manually during testing. The web app had to be intuitive and fast — purpose-built for mixed technical proficiency, with a focus on clarity, responsiveness, and reducing the chance of user error in high-pressure environments.

Client applications, on the other hand, are machine-to-machine clients — Unity apps, web apps, or other integrations — that communicate directly with the API. When a valid client application token is detected, Arcturus automatically scopes all API activity to the project under which that application was registered. This guarantees strict data isolation and eliminates the risk of cross-project data contamination.
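
The token-to-project scoping described above can be sketched in a few lines. This is a simplified illustration, not Arcturus's actual implementation: the names (`ClientApp`, `resolveProjectScope`, `listCollectibles`) are invented, and the in-memory `Map` stands in for what is really a database lookup.

```typescript
// Hypothetical sketch of project scoping from a client application token.
interface ClientApp {
  token: string;
  projectId: string;
}

// In the real system this mapping lives in the database; a Map stands in here.
const registeredApps = new Map<string, ClientApp>([
  ["app-token-123", { token: "app-token-123", projectId: "project-a" }],
]);

interface Collectible {
  id: string;
  projectId: string;
  name: string;
}

// Resolve the token to its project, or reject the request outright.
function resolveProjectScope(token: string): string {
  const app = registeredApps.get(token);
  if (!app) throw new Error("Unknown client application token");
  return app.projectId;
}

// Every query is filtered by the resolved project. Callers never pass a
// project id themselves, so cross-project reads are structurally impossible.
function listCollectibles(token: string, all: Collectible[]): Collectible[] {
  const projectId = resolveProjectScope(token);
  return all.filter((c) => c.projectId === projectId);
}
```

The key design point is that the project id is derived from the credential, never accepted as input, which is what makes the isolation a guarantee rather than a convention.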

Both user types operate under a shared role-based access control (RBAC) system. Every read and write action is governed by explicit model-level permissions. This makes authorization both flexible and transparent — even as new features or modules are added.
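
A minimal sketch of what model-level permission checks look like, assuming a grants table keyed by role, model, and action. The role and model names here are made up for illustration, and the real checks live in the database rather than application code:

```typescript
// Illustrative model-level RBAC: every read/write funnels through one check.
type Action = "read" | "write";

interface Grant {
  role: string;
  model: string;
  actions: Action[];
}

// Hypothetical grants; in practice these would be rows in a table.
const grants: Grant[] = [
  { role: "producer", model: "collectible", actions: ["read", "write"] },
  { role: "client_app", model: "collectible", actions: ["read"] },
];

// Adding a new module means adding grants, not new authorization code.
function can(role: string, model: string, action: Action): boolean {
  return grants.some(
    (g) => g.role === role && g.model === model && g.actions.includes(action)
  );
}
```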

Authentication and authorization are delegated to a third-party OpenID Connect platform, allowing Arcturus to integrate cleanly with Google for staff login while still supporting secure API token flows for applications.
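
For intuition, here is roughly what a staff check against Google's ID token claims involves. This sketch only inspects claims and deliberately skips signature verification, which a real deployment must do against the provider's JWKS; the domain `example.com` is a placeholder:

```typescript
// Illustrative only: reading OIDC ID token claims. Real validation must
// verify the token signature via the identity provider before trusting claims.
interface IdTokenClaims {
  iss: string;
  hd?: string; // Google Workspace "hosted domain" claim
  email?: string;
}

// A JWT is three base64url segments; the middle one is the claims payload.
function decodeClaims(idToken: string): IdTokenClaims {
  const payload = idToken.split(".")[1];
  if (!payload) throw new Error("Malformed token");
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// Staff must authenticate through Google and belong to our Workspace domain.
function isStaff(claims: IdTokenClaims): boolean {
  return (
    claims.iss === "https://accounts.google.com" && claims.hd === "example.com"
  );
}
```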

Choosing the Tech Stack

Postgres was the obvious choice — not just for its reliability and maturity, but because it allowed me to express business logic as close to the data as possible. I didn’t want to scatter validation, permissions, or transformation logic across an ORM layer that might be inconsistently used or difficult to enforce across multiple applications. Relying on application-layer logic left too much room for data to be mishandled, especially in a system that multiple client apps and staff tools would be interfacing with simultaneously.

Instead, I leaned into writing explicit SQL and defining behavior through database functions, constraints, and policies. This ensured that core rules — like project scoping, permission checks, and data integrity — were universally enforced no matter how the data was accessed.

On top of that, using GraphQL gave our applications the flexibility to request exactly the data they need in the shape they expect — nothing more, nothing less. That pattern is especially useful in a multi-client environment, where a Unity app, a dashboard, and a content editor might each need slightly different slices of the same data.
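
That "different slices of the same data" pattern looks like this in query form. The field names below are invented for illustration and aren't the real schema:

```typescript
// A Unity app only asks for what it renders in-world:
const unityQuery = /* GraphQL */ `
  query UnityCollectibles {
    collectibles {
      nodes { id modelUrl }
    }
  }
`;

// A staff dashboard asks for editorial fields instead:
const dashboardQuery = /* GraphQL */ `
  query DashboardCollectibles {
    collectibles {
      nodes { id name tags updatedAt }
    }
  }
`;
```

Both clients hit the same endpoint and the same underlying table; neither over-fetches the other's fields.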

PostGraphile gave us a thin but powerful layer that brought these two needs together: data-first business logic and a flexible, strongly typed API — generated directly from the database schema, with almost no glue code required.

On the frontend, I used Next.js with the App Router and Server Actions to handle data fetching and mutations on the server — keeping API calls secure and reducing client-side complexity. To ensure type safety throughout the stack, I generated a type-safe SDK using graphql-codegen and graphql-request, giving the frontend strong TypeScript support for every query and mutation. This setup made it easy to work confidently with the API while keeping the surface area between client and server minimal and predictable.
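
To make the SDK pattern concrete, here is a rough, dependency-free sketch of the shape graphql-codegen emits for graphql-request: a `getSdk` factory pairing each operation document with its result type. The `ProjectQuery` type and document are invented for illustration; in the real setup they are generated from the schema and the requester is a `GraphQLClient`:

```typescript
// Generic requester: anything that can send a document and return typed data.
type Requester = <R>(
  document: string,
  variables?: Record<string, unknown>
) => Promise<R>;

// Stand-in for a codegen-generated result type (illustrative fields).
interface ProjectQuery {
  project: { id: string; name: string } | null;
}

const ProjectDocument = /* GraphQL */ `
  query Project($id: ID!) {
    project(id: $id) { id name }
  }
`;

// Each operation gets a fully typed wrapper, so the frontend never
// hand-writes response types or raw fetch calls.
function getSdk(request: Requester) {
  return {
    Project: (variables: { id: string }) =>
      request<ProjectQuery>(ProjectDocument, variables),
  };
}
```

Because the requester is injected, the same SDK works in Server Actions, scripts, and tests (where a fake requester can stand in for the network).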

Schema Design Principles

To keep concerns well-organized and access tightly controlled, the database is split into three main schemas:

  • app_public: This is the primary interface exposed to GraphQL and other external systems. It contains the core tables and functions that are safe and intended for public consumption — things like collectibles, footprints, projects, and any stable APIs that client apps or staff tools rely on.

  • app_hidden: Used for internal data structures and helper logic that support app_public but aren’t meant to be exposed directly. Think of it like internal wiring — necessary for functionality but not part of the public contract. It’s accessible at the same privilege level as app_public, but its contents are kept out of the GraphQL layer by convention.

  • app_private: This is the most restricted layer, intended for sensitive operations and data. Access is locked down via Postgres role-based access control (RBAC), and only granted through carefully scoped SECURITY DEFINER functions. This is where we store secrets, internal audit data, and any logic that shouldn’t be directly callable via the GraphQL API.

This separation helps enforce clear boundaries between what’s public, what’s internal, and what’s sensitive — all while making it easier to reason about permission levels during development.

Developer Ergonomics & Tooling

To support development and testing, I put together a tooling setup that emphasized speed, confidence, and type safety. I use Kanel to generate TypeScript types directly from the PostgreSQL schema. These types are used in the test suite to define the expected structure of data returned by the pg client, as well as to power factory functions that generate mock data. This ensures consistency with the actual database schema and eliminates guesswork when writing or refactoring tests.
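
A factory built on those generated types might look like the following sketch. The `Collectibles` interface stands in for what Kanel would actually generate from the table, and randomness is inlined instead of faker to keep the example dependency-free:

```typescript
import { randomUUID } from "node:crypto";

// Stand-in for a Kanel-generated row type (illustrative fields).
interface Collectibles {
  id: string;
  project_id: string;
  name: string;
  created_at: Date;
}

// Factories accept partial overrides, so each test states only what it
// cares about while the rest stays consistent with the schema's shape.
function collectibleFactory(
  overrides: Partial<Collectibles> = {}
): Collectibles {
  return {
    id: randomUUID(),
    project_id: randomUUID(),
    name: `Collectible ${Math.floor(Math.random() * 1000)}`,
    created_at: new Date(),
    ...overrides,
  };
}
```

If a column is added or renamed, the generated type changes and every factory and test that disagrees with it fails to compile, which is the consistency guarantee described above.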

Test data is seeded in a way that keeps fixtures realistic but flexible, with faker generating randomized mock content. Testing is handled with Jest, which runs against a PostgreSQL test database using pooled connections. This allows tests to execute real SQL queries directly against the schema — making it easy to validate database behavior without relying on mocks.

Package management is handled with pnpm, chosen for its speed, strict dependency resolution, and support for monorepos. In coordination with Turborepo, it allows Arcturus to manage multiple packages — like shared utilities, frontend apps, and tooling — from a single, efficient workspace. This setup reduces duplication, speeds up development, and enables smarter caching during builds and CI.

Infrastructure Choices

Both the GraphQL API and the Next.js application are deployed to Vercel. As a Vercel enterprise customer, we wanted to focus on writing code — not managing or scaling infrastructure. Vercel essentially acts as our DevOps team, handling CI/CD, scaling, caching, and global delivery out of the box. It lets us move quickly and iterate without worrying about server uptime or deployment pipelines.

For the PostgreSQL database, we chose Neon — a serverless Postgres platform built for modern cloud apps. Just like with Vercel, the goal was to offload scaling concerns and stay focused on building features. Neon’s branching and instant snapshots make it easy to spin up isolated dev environments or test migrations safely.

By leaning into serverless platforms, we’ve reduced operational overhead and gained the flexibility to ship quickly, experiment confidently, and scale when needed — without constantly revisiting infrastructure decisions.

What’s Next

In Part 2, I’ll walk through how Arcturus evolved post-launch — not just in terms of features, but in how the platform matured operationally.

One key area was the CI/CD pipeline. With separate services powering the GraphQL API and the frontend app, I needed precise control over how and when each service was deployed. To handle this, I wrote a custom GitHub Action that detects which parts of the codebase have changed and triggers deployments accordingly. This avoids unnecessary deploys, keeps environments stable, and speeds up the dev loop — especially useful in a monorepo setup.
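
The core of that change detection can be sketched as a pure function: given the changed file paths (e.g. from `git diff --name-only`), decide which services to deploy. The directory names below are illustrative, not the actual repo layout:

```typescript
// Hypothetical monorepo layout: each deployable service has a root directory.
const serviceRoots: Record<string, string> = {
  api: "packages/api/",
  web: "apps/web/",
};

// Changes to shared packages trigger every service, since any of them
// may depend on the shared code.
const sharedRoots = ["packages/shared/"];

function servicesToDeploy(changedFiles: string[]): string[] {
  if (changedFiles.some((f) => sharedRoots.some((r) => f.startsWith(r)))) {
    return Object.keys(serviceRoots);
  }
  return Object.entries(serviceRoots)
    .filter(([, root]) => changedFiles.some((f) => f.startsWith(root)))
    .map(([name]) => name);
}
```

A docs-only commit deploys nothing; a frontend commit deploys only the frontend; a shared-package commit deploys everything. That is what keeps deploys minimal without risking a stale service.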

Part 2 will cover how Arcturus scaled over time: supporting multiple concurrent projects, dealing with real-time data, improving observability, and evolving the dev process while staying nimble.


📝 Fun fact: The name Arcturus comes from a long-running joke about our company name, Globacore — which sounds a bit like a fictional evil megacorp, not unlike Globex from The Simpsons. At the end of the episode “You Only Move Twice,” Hank Scorpio (head of Globex) writes a letter to Homer, thanking him for his help and saying, “Project Arcturus could not have succeeded without you.” In that episode, Project Arcturus was a supervillain scheme to seize the East Coast.

Our Arcturus is (thankfully) less nefarious — it’s about governing our applications and giving us the tools to manage content cleanly across projects. Still, with Arcturus by our side, it finally feels like we’re seizing control of our apps in the best way possible.

[Image: Hank Scorpio]