ryOS Docs

Architecture & Build

A Vite + Bun build system with multiple deployment targets, dual API runtime support (Vercel serverless and a standalone Bun server), and optimized chunk splitting for performance.

System Architecture Overview

graph TB
    subgraph Client["Client Layer"]
        Browser[Browser / PWA]
        Desktop[Tauri Desktop App]
    end
    
    subgraph Presentation["Presentation Layer"]
        Apps[24 App Modules]
        Layout[Layout Components]
        UI[UI Components]
        Themes[4 OS Themes]
    end
    
    subgraph State["State Layer"]
        Zustand[30 Zustand Stores]
        Contexts[React Contexts]
    end
    
    subgraph Persistence["Persistence Layer"]
        LocalStorage[(localStorage)]
        IndexedDB[(IndexedDB)]
    end
    
    subgraph API["API Layer (Vercel Node.js + standalone Bun server)"]
        Shared[apiHandler + api/_utils]
        Chat[Chat API]
        Media[Media APIs]
        Rooms[Chat Rooms API]
        Utility[Utility APIs]
    end
    
    subgraph External["External Services"]
        AI[AI Providers<br/>OpenAI, Anthropic, Google]
        Realtime[Real-time<br/>Pusher / Local WS]
        Redis[(Redis<br/>Upstash REST / Standard)]
        ObjectStorage[(Object Storage<br/>Vercel Blob / S3)]
        YouTube[YouTube API]
    end
    
    Browser --> Presentation
    Desktop --> Presentation
    Presentation --> State
    State --> Persistence
    Presentation --> API
    Shared --> Chat
    Shared --> Media
    Shared --> Rooms
    Shared --> Utility
    API --> External
    State --> API

Deployment Targets

| Target | Technology | Description |
|---|---|---|
| Web (PWA) | Vercel | Primary deployment with CDN, serverless functions |
| Desktop | Tauri | Native app for macOS, Windows, Linux |
| Development | Vite + Bun | Local frontend with HMR and optional standalone API proxy |
| Self-hosted API | Bun (Bun.serve) | Run api/ routes without Vercel via scripts/api-standalone-server.ts |
| Docker / Coolify | Dockerfile + Bun | Container deployment with health checks at /health |

graph LR
    subgraph Source
        A[Source Code]
    end
    
    subgraph Build["Vite + Bun"]
        B[Bundle & Optimize]
    end
    
    subgraph Targets
        C[Web PWA]
        D[Desktop]
        E[Dev Server]
    end
    
    A --> B
    B --> C
    B --> D
    B --> E
    
    C --> F[Vercel CDN]
    D --> G[Tauri - macOS/Windows/Linux]
    E --> H[localhost + HMR]

API Runtime Modes

The API layer is shared across two execution environments:

  • Vercel serverless functions: api/ handlers run as Node.js functions for the primary web deployment.

  • Standalone Bun server: scripts/api-standalone-server.ts serves the same api/ handlers via Bun.serve for self-hosted and container deployments.

Shared utilities under api/_utils/ (including api-handler.ts, middleware.ts, request-auth.ts, redis.ts, storage.ts, realtime.ts, runtime-config.ts, constants.ts, _cors.ts, _ssrf.ts, _rate-limit.ts, _validation.ts, _logging.ts, _analytics.ts, _sse.ts, and _memory.ts) provide consistent API patterns across routes.

Frontend clients under src/api/ (auth, rooms, admin, songs, listen, and sync) centralize request logic for app modules.
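The shared-handler idea can be sketched by writing routes against web-standard Request/Response objects, which both the Vercel Node.js runtime and Bun.serve understand. All names below (apiHandler, HandlerOptions, health) are illustrative, not the actual api/_utils exports:

```typescript
// Hypothetical sketch of a runtime-agnostic handler wrapper:
// route logic is written once against web-standard types.
type RouteFn = (req: Request) => Promise<Response> | Response;

interface HandlerOptions {
  methods?: string[]; // allowed HTTP methods, e.g. ["GET", "POST"]
}

function apiHandler(route: RouteFn, opts: HandlerOptions = {}): RouteFn {
  return async (req) => {
    if (opts.methods && !opts.methods.includes(req.method)) {
      return new Response("Method Not Allowed", { status: 405 });
    }
    try {
      return await route(req);
    } catch {
      // Centralized error handling keeps individual routes small
      return new Response("Internal Server Error", { status: 500 });
    }
  };
}

// A route module only exports its logic; the wrapper supplies the rest
export const health = apiHandler(
  () => Response.json({ ok: true }),
  { methods: ["GET"] }
);
```

Because the wrapper returns a plain `(Request) => Response` function, the same route can be mounted by a Vercel function adapter or passed to Bun.serve's fetch handler without changes.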

Chunk Splitting Strategy

The build system uses intelligent chunk splitting to optimize initial load time while enabling on-demand loading of heavy dependencies.

Core Chunks (Immediate Load)

These chunks are loaded on initial page load:

| Chunk | Packages | Size |
|---|---|---|
| react | react, react-dom | ~150KB |
| ui-core | @radix-ui/* (dialog, dropdown, select, etc.) | ~80KB |
| zustand | zustand, persist middleware | ~10KB |
| motion | framer-motion | ~100KB |

Deferred Chunks (Lazy Load)

These chunks are loaded when their corresponding apps are opened:

| Chunk | Contents | Trigger Apps |
|---|---|---|
| audio | tone.js, wavesurfer.js | Soundboard, iPod, Synth |
| tiptap | @tiptap/* (editor framework) | TextEdit |
| three | three.js (3D rendering) | Virtual PC |
| ai-sdk | ai, @ai-sdk/* | Chats, Internet Explorer |

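In Vite, this kind of grouping is expressed through Rollup's manualChunks option (under build.rollupOptions.output in vite.config.ts). The following is a simplified sketch of how the tables above could map to it; the project's actual configuration may differ:

```typescript
// Sketch: route node_modules packages into named chunks.
// The chunk names mirror the tables above; the package lists are illustrative.
const chunkGroups: Record<string, string[]> = {
  react: ["react", "react-dom"],
  "ui-core": ["@radix-ui"],
  audio: ["tone", "wavesurfer.js"],
  tiptap: ["@tiptap"],
  three: ["three"],
  "ai-sdk": ["ai", "@ai-sdk"],
};

export function manualChunks(id: string): string | undefined {
  // App code keeps Vite's default (per-route) chunking
  if (!id.includes("node_modules")) return undefined;
  for (const [chunk, pkgs] of Object.entries(chunkGroups)) {
    if (pkgs.some((pkg) => id.includes(`node_modules/${pkg}/`))) return chunk;
  }
  // Remaining dependencies fall into Rollup's default vendor chunks
  return undefined;
}
```

Keeping the grouping as a plain data table plus one lookup function makes it easy to audit which packages land in which chunk.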
graph TD
    subgraph Core["Core Chunks (Immediate)"]
        R[react]
        UI[ui-core]
        Z[zustand]
        M[motion]
    end
    
    subgraph Deferred["Deferred Chunks (Lazy)"]
        AU[audio]
        TT[tiptap]
        TH[three]
        AI[ai-sdk]
    end
    
    subgraph Apps
        SB[Soundboard]
        IP[iPod]
        SY[Synth]
        TE[TextEdit]
        PC[Virtual PC]
        CH[Chats]
        IE[Internet Explorer]
    end
    
    SB -.->|"on open"| AU
    IP -.->|"on open"| AU
    SY -.->|"on open"| AU
    TE -.->|"on open"| TT
    PC -.->|"on open"| TH
    CH -.->|"on open"| AI
    IE -.->|"on open"| AI

Lazy Component Pattern

Apps use React's lazy loading with a custom wrapper for HMR compatibility:

// Lazy loading with caching for HMR
// (AppProps and LoadSignal are project types from the app framework)
import { lazy, Suspense, type ComponentType } from "react";

// Module-level cache: survives HMR re-evaluation so the same wrapper
// instance is reused instead of remounting the app on every update
const lazyComponentCache = new Map<string, ComponentType<any>>();

function createLazyComponent<T = unknown>(
  importFn: () => Promise<{ default: ComponentType<AppProps<T>> }>,
  cacheKey: string
): ComponentType<AppProps<T>> {
  // Return cached component if it exists (prevents HMR issues)
  const cached = lazyComponentCache.get(cacheKey);
  if (cached) return cached;

  const LazyComponent = lazy(importFn);

  const WrappedComponent = (props: AppProps<T>) => (
    <Suspense fallback={null}>
      <LazyComponent {...props} />
      {/* Signals the window manager once this instance has loaded */}
      <LoadSignal instanceId={props.instanceId} />
    </Suspense>
  );

  lazyComponentCache.set(cacheKey, WrappedComponent);
  return WrappedComponent;
}

PWA Caching Strategy

The service worker implements different caching strategies based on resource type:

| Resource Pattern | Strategy | TTL | Rationale |
|---|---|---|---|
| Navigation (HTML) | NetworkFirst | 1 day | Always get latest app shell |
| JS chunks | NetworkFirst (3s timeout) | 1 day | Fresh code with fast fallback |
| CSS | StaleWhileRevalidate | 7 days | Use cached, update in background |
| Images | CacheFirst | 30 days | Rarely change, prioritize speed |
| Fonts | CacheFirst | 1 year | Never change once deployed |
| API responses | NetworkOnly | - | Always fresh data |
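Strategies like these are commonly declared as Workbox runtime-caching rules, for example via vite-plugin-pwa. The following is a hedged sketch of two rows from the table; the project's actual service worker configuration may be hand-written or differ in detail:

```typescript
// Sketch: Workbox runtimeCaching rules matching the table above
// (cache names and patterns are illustrative).
import { VitePWA } from "vite-plugin-pwa";

export const pwaPlugin = VitePWA({
  workbox: {
    runtimeCaching: [
      {
        // JS chunks: NetworkFirst with a 3-second timeout, 1-day TTL
        urlPattern: ({ request }) => request.destination === "script",
        handler: "NetworkFirst",
        options: {
          cacheName: "js-chunks",
          networkTimeoutSeconds: 3, // fast fallback to cache
          expiration: { maxAgeSeconds: 24 * 60 * 60 }, // 1 day
        },
      },
      {
        // Images: CacheFirst, 30-day TTL
        urlPattern: ({ request }) => request.destination === "image",
        handler: "CacheFirst",
        options: {
          cacheName: "images",
          expiration: { maxAgeSeconds: 30 * 24 * 60 * 60 }, // 30 days
        },
      },
      {
        // API responses: never cached
        urlPattern: ({ url }) => url.pathname.startsWith("/api/"),
        handler: "NetworkOnly",
      },
    ],
  },
});
```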

Cache Invalidation

sequenceDiagram
    participant SW as Service Worker
    participant Cache as Cache Storage
    participant Network as Network
    participant App as Application
    
    App->>SW: Request resource
    alt CacheFirst (images, fonts)
        SW->>Cache: Check cache
        Cache-->>SW: Return if exists
        SW->>App: Serve from cache
    else NetworkFirst (JS, HTML)
        SW->>Network: Fetch (with timeout)
        alt Success within timeout
            Network-->>SW: Response
            SW->>Cache: Update cache
            SW->>App: Serve response
        else Timeout/Failure
            SW->>Cache: Fallback to cache
            Cache-->>SW: Cached response
            SW->>App: Serve cached
        end
    else StaleWhileRevalidate (CSS)
        SW->>Cache: Get cached version
        Cache-->>SW: Cached response
        SW->>App: Serve immediately
        SW->>Network: Fetch update (background)
        Network-->>SW: Fresh response
        SW->>Cache: Update cache
    end

Module Resolution

The project uses TypeScript path aliases for clean imports:

| Alias | Path | Usage |
|---|---|---|
| @/ | src/ | Source code root |
| @/components | src/components/ | UI components |
| @/hooks | src/hooks/ | Custom hooks |
| @/stores | src/stores/ | Zustand stores |
| @/apps | src/apps/ | App modules |
| @/utils | src/utils/ | Utility functions |
| @/lib | src/lib/ | Libraries |
| @/types | src/types/ | TypeScript types |
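All rows reduce to a single `@/* → src/*` mapping, which has to be declared twice: once for the TypeScript compiler and once for Vite's resolver. A typical pairing (the project's actual config may differ):

```typescript
// tsconfig.json (excerpt):
// {
//   "compilerOptions": {
//     "baseUrl": ".",
//     "paths": { "@/*": ["src/*"] }
//   }
// }

// vite.config.ts (excerpt) - mirrors the tsconfig mapping for the bundler
import { fileURLToPath } from "node:url";
import { defineConfig } from "vite";

export default defineConfig({
  resolve: {
    alias: {
      "@": fileURLToPath(new URL("./src", import.meta.url)),
    },
  },
});
```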

Environment Configuration

Development

# Local development with HMR
bun run dev           # Full stack (API + Vite with proxy) — the default
bun run dev:vite      # Vite dev server only (frontend-only, no API)
bun run dev:api       # Standalone Bun API server only (port 3000)
bun run dev:vercel    # Optional: Vercel dev server (parity/debugging only)
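The full-stack `bun run dev` mode pairs the Vite dev server with the standalone Bun API on port 3000, which is typically wired up with a dev-server proxy. A hypothetical sketch of that wiring (the real config may differ):

```typescript
// vite.config.ts (excerpt) - forward API calls from the Vite dev server
// to the standalone Bun server so the frontend can use relative /api URLs
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // Requests to /api/* are proxied to the Bun API on port 3000
      "/api": "http://localhost:3000",
    },
  },
});
```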

Production Build

bun run build         # Production build
bun run preview       # Preview production build

Desktop (Tauri)

bun run tauri:dev     # Development with native shell
bun run tauri:build   # Build native applications

Performance Optimizations

Runtime Stability & App Coordination

  1. Error Boundaries: Desktop-level and app-level boundaries isolate crashes and allow targeted recovery flows.
  2. Typed App Event Bus: src/utils/appEventBus.ts defines typed primitives for app launch/update, window focus, Spotlight/Expose toggles, and file/document events.

Initial Load Optimizations

  1. Code Splitting: Apps loaded on demand via React.lazy
  2. Font Loading: System font stacks with web font fallbacks
  3. Image Optimization: Responsive images, WebP format
  4. CSS Layers: Tailwind with theme-specific overrides
  5. Locale Loading: Non-default locales are loaded lazily via dynamic imports

Runtime Optimizations

  1. Zustand Selectors: Fine-grained subscriptions prevent re-renders
  2. Memo/Callback: Strategic memoization for expensive computations
  3. Virtual Lists: Large lists use virtualization (iPod, Finder)
  4. Debounced Actions: User inputs debounced for performance
  5. Spotlight Offloading: Spotlight indexing/search for dynamic datasets runs in a dedicated Web Worker
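The debouncing mentioned above can be sketched as a small helper; this is illustrative, and the project may use a library implementation instead:

```typescript
// Minimal debounce: the wrapped function runs only after `delayMs` of
// silence, so rapid-fire inputs (typing, resizing) trigger a single update.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // reset the countdown on every call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Example: persist a draft only after the user pauses typing
// const saveDraft = debounce((text: string) => store.setDraft(text), 300);
```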

Audio Optimizations

  1. Shared AudioContext: Single context prevents resource exhaustion
  2. Lazy AudioBuffer Loading: Sounds loaded on first interaction
  3. LRU Cache: Limited audio buffer cache with eviction
  4. Concurrent Source Limiting: Prevents audio overload
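An LRU cache like the one described for audio buffers can be sketched as follows (illustrative; the actual cache code may differ). A Map's insertion order makes least-recently-used eviction straightforward:

```typescript
// Sketch: bounded cache that evicts the least recently used entry.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxEntries: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark this key as most recently used
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first in insertion order)
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }
}
```

For audio, the values would be decoded AudioBuffer objects keyed by sound URL, keeping memory bounded while hot sounds stay resident.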

graph TD
    subgraph Optimization["Performance Optimization Layers"]
        L1[Initial Load<br/>Code Splitting, Lazy Loading]
        L2[Runtime<br/>Selectors, Memoization]
        L3[Audio<br/>Shared Context, Caching]
        L4[Network<br/>PWA Caching, Prefetch]
    end
    
    L1 --> L2 --> L3 --> L4

Build Pipeline

flowchart LR
    subgraph Input
        TS[TypeScript]
        TSX[React TSX]
        CSS[Tailwind CSS]
        Assets[Static Assets]
    end
    
    subgraph Vite["Vite Build"]
        SWC[SWC Compiler]
        Rollup[Rollup Bundler]
        PostCSS[PostCSS]
    end
    
    subgraph Output
        JS[Optimized JS Chunks]
        StyleSheet[Minified CSS]
        Static[Hashed Assets]
        SW[Service Worker]
    end
    
    TS --> SWC
    TSX --> SWC
    CSS --> PostCSS
    SWC --> Rollup
    PostCSS --> Rollup
    Assets --> Static
    Rollup --> JS
    Rollup --> StyleSheet
    Rollup --> SW

Related Documentation