# Architecture & Build
The project uses a Vite + Bun build system with multiple deployment targets, including dual API runtime support (Vercel serverless functions and a standalone Bun server), plus optimized chunk splitting for performance.
## System Architecture Overview
```mermaid
graph TB
    subgraph Client["Client Layer"]
        Browser[Browser / PWA]
        Desktop[Tauri Desktop App]
    end
    subgraph Presentation["Presentation Layer"]
        Apps[24 App Modules]
        Layout[Layout Components]
        UI[UI Components]
        Themes[4 OS Themes]
    end
    subgraph State["State Layer"]
        Zustand[30 Zustand Stores]
        Contexts[React Contexts]
    end
    subgraph Persistence["Persistence Layer"]
        LocalStorage[(localStorage)]
        IndexedDB[(IndexedDB)]
    end
    subgraph API["API Layer (Vercel Node.js + standalone Bun server)"]
        Shared[apiHandler + api/_utils]
        Chat[Chat API]
        Media[Media APIs]
        Rooms[Chat Rooms API]
        Utility[Utility APIs]
    end
    subgraph External["External Services"]
        AI["AI Providers<br/>OpenAI, Anthropic, Google"]
        Realtime["Real-time<br/>Pusher / Local WS"]
        Redis[("Redis<br/>Upstash REST / Standard")]
        ObjectStorage[("Object Storage<br/>Vercel Blob / S3")]
        YouTube[YouTube API]
    end
    Browser --> Presentation
    Desktop --> Presentation
    Presentation --> State
    State --> Persistence
    Presentation --> API
    Shared --> Chat
    Shared --> Media
    Shared --> Rooms
    Shared --> Utility
    API --> External
    State --> API
```
## Deployment Targets
| Target | Technology | Description |
|---|---|---|
| Web (PWA) | Vercel | Primary deployment with CDN, serverless functions |
| Desktop | Tauri | Native app for macOS, Windows, Linux |
| Development | Vite + Bun | Local frontend with HMR and optional standalone API proxy |
| Self-hosted API | Bun (`Bun.serve`) | Runs `api/` routes without Vercel via `scripts/api-standalone-server.ts` |
| Docker / Coolify | Dockerfile + Bun | Container deployment with health checks at `/health` |
```mermaid
graph LR
    subgraph Source
        A[Source Code]
    end
    subgraph Build["Vite + Bun"]
        B[Bundle & Optimize]
    end
    subgraph Targets
        C[Web PWA]
        D[Desktop]
        E[Dev Server]
    end
    A --> B
    B --> C
    B --> D
    B --> E
    C --> F[Vercel CDN]
    D --> G[Tauri - macOS/Windows/Linux]
    E --> H[localhost + HMR]
```
## API Runtime Modes
The API layer is shared across two execution environments:
- Vercel serverless functions: handlers under `api/` run as Node.js functions for the primary web deployment.
- Standalone Bun server: `scripts/api-standalone-server.ts` uses `Bun.serve` and adapts requests/responses to the Vercel handler shape for non-Vercel environments.
Shared utilities under api/_utils/ (including api-handler.ts, middleware.ts, request-auth.ts, redis.ts, storage.ts, realtime.ts, runtime-config.ts, constants.ts, _cors.ts, _ssrf.ts, _rate-limit.ts, _validation.ts, _logging.ts, _analytics.ts, _sse.ts, and _memory.ts) provide consistent API patterns across routes.
Frontend clients under src/api/ (auth, rooms, admin, songs, listen, and sync) centralize request logic for app modules.
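The adaptation in the standalone server can be pictured as wrapping a Vercel-style `(req, res)` handler in a Fetch-style handler. The sketch below is illustrative only: the type names and the minimal `req`/`res` surface are assumptions, not the project's actual adapter.

```typescript
// Sketch: wrap a Vercel-style (req, res) handler so a Fetch-style server
// (e.g. Bun.serve) can call it. All names here are illustrative.
type NodeishReq = {
  method: string;
  url: string;
  headers: Record<string, string>;
  body?: unknown;
};

interface NodeishRes {
  statusCode: number;
  status(code: number): NodeishRes;
  json(payload: unknown): void;
}

type VercelishHandler = (req: NodeishReq, res: NodeishRes) => void | Promise<void>;

// Adapt: Fetch Request in -> Vercel-style handler -> Fetch Response out.
function adaptToFetch(handler: VercelishHandler): (request: Request) => Promise<Response> {
  return async (request: Request) => {
    const req: NodeishReq = {
      method: request.method,
      url: request.url,
      headers: Object.fromEntries(request.headers),
      body: request.body ? await request.json().catch(() => undefined) : undefined,
    };
    return new Promise<Response>((resolve) => {
      const res: NodeishRes = {
        statusCode: 200,
        status(code) {
          this.statusCode = code;
          return this;
        },
        json(payload) {
          resolve(
            new Response(JSON.stringify(payload), {
              status: this.statusCode,
              headers: { "content-type": "application/json" },
            })
          );
        },
      };
      void handler(req, res);
    });
  };
}
```

With `Bun.serve({ fetch: adaptToFetch(handler) })` the same handler code can serve both runtimes; a real adapter would also need to cover headers, streaming, and error paths.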
## Chunk Splitting Strategy
The build system uses intelligent chunk splitting to optimize initial load time while enabling on-demand loading of heavy dependencies.
### Core Chunks (Immediate Load)
These chunks are loaded on initial page load:
| Chunk | Packages | Size |
|---|---|---|
| `react` | react, react-dom | ~150KB |
| `ui-core` | @radix-ui/* (dialog, dropdown, select, etc.) | ~80KB |
| `zustand` | zustand, persist middleware | ~10KB |
| `motion` | framer-motion | ~100KB |
### Deferred Chunks (Lazy Load)
These chunks are loaded when their corresponding apps are opened:
| Chunk | Contents | Trigger Apps |
|---|---|---|
| `audio` | tone.js, wavesurfer.js | Soundboard, iPod, Synth |
| `tiptap` | @tiptap/* (editor framework) | TextEdit |
| `three` | three.js (3D rendering) | Virtual PC |
| `ai-sdk` | ai, @ai-sdk/* | Chats, Internet Explorer |
```mermaid
graph TD
    subgraph Core["Core Chunks (Immediate)"]
        R[react]
        UI[ui-core]
        Z[zustand]
        M[motion]
    end
    subgraph Deferred["Deferred Chunks (Lazy)"]
        AU[audio]
        TT[tiptap]
        TH[three]
        AI[ai-sdk]
    end
    subgraph Apps
        SB[Soundboard]
        IP[iPod]
        SY[Synth]
        TE[TextEdit]
        PC[Virtual PC]
        CH[Chats]
        IE[Internet Explorer]
    end
    SB -.->|"on open"| AU
    IP -.->|"on open"| AU
    SY -.->|"on open"| AU
    TE -.->|"on open"| TT
    PC -.->|"on open"| TH
    CH -.->|"on open"| AI
    IE -.->|"on open"| AI
```
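A split like this is typically expressed through Rollup's `manualChunks` option in `vite.config.ts`. The matching rules below are an illustrative sketch keyed to the tables above, not the project's exact configuration:

```typescript
// Illustrative manualChunks function for vite.config.ts
// (build.rollupOptions.output.manualChunks). The matching rules
// are a sketch; the real config may differ.
function manualChunks(id: string): string | undefined {
  if (!id.includes("node_modules")) return undefined; // app code: default chunking
  if (/node_modules\/(react|react-dom|scheduler)\//.test(id)) return "react";
  if (id.includes("@radix-ui")) return "ui-core";
  if (id.includes("zustand")) return "zustand";
  if (id.includes("framer-motion")) return "motion";
  if (id.includes("tone") || id.includes("wavesurfer")) return "audio";
  if (id.includes("@tiptap")) return "tiptap";
  if (id.includes("three")) return "three";
  if (id.includes("@ai-sdk") || /node_modules\/ai\//.test(id)) return "ai-sdk";
  return undefined; // everything else falls into Rollup's default chunks
}
```

Returning `undefined` lets Rollup decide, so only the named groups above are pinned to stable chunk names.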
### Lazy Component Pattern
Apps use React's lazy loading with a custom wrapper for HMR compatibility:
```tsx
// Lazy loading with caching for HMR
function createLazyComponent<T = unknown>(
  importFn: () => Promise<{ default: ComponentType<AppProps<T>> }>,
  cacheKey: string
): ComponentType<AppProps<T>> {
  // Return cached component if it exists (prevents HMR issues)
  const cached = lazyComponentCache.get(cacheKey);
  if (cached) return cached;

  const LazyComponent = lazy(importFn);
  const WrappedComponent = (props: AppProps<T>) => (
    <Suspense fallback={null}>
      <LazyComponent {...props} />
      <LoadSignal instanceId={props.instanceId} />
    </Suspense>
  );

  lazyComponentCache.set(cacheKey, WrappedComponent);
  return WrappedComponent;
}
```
## PWA Caching Strategy
The service worker implements different caching strategies based on resource type:
| Resource Pattern | Strategy | TTL | Rationale |
|---|---|---|---|
| Navigation (HTML) | NetworkFirst | 1 day | Always get latest app shell |
| JS Chunks | NetworkFirst (3s timeout) | 1 day | Fresh code with fast fallback |
| CSS | StaleWhileRevalidate | 7 days | Use cached, update in background |
| Images | CacheFirst | 30 days | Rarely change, prioritize speed |
| Fonts | CacheFirst | 1 year | Never change once deployed |
| API Responses | NetworkOnly | - | Always fresh data |
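Assuming the service worker is generated with Workbox (for example via `vite-plugin-pwa`), the table above maps to `runtimeCaching` entries roughly like this sketch; the URL patterns and cache names are illustrative, not the project's actual config:

```typescript
// Illustrative Workbox runtimeCaching entries mirroring the table above.
// Patterns, cache names, and TTLs are assumptions for this sketch.
const runtimeCaching = [
  {
    urlPattern: ({ request }: { request: Request }) => request.mode === "navigate",
    handler: "NetworkFirst",
    options: { cacheName: "html", expiration: { maxAgeSeconds: 86400 } },
  },
  {
    urlPattern: /\.js$/,
    handler: "NetworkFirst",
    options: { cacheName: "js", networkTimeoutSeconds: 3, expiration: { maxAgeSeconds: 86400 } },
  },
  {
    urlPattern: /\.css$/,
    handler: "StaleWhileRevalidate",
    options: { cacheName: "css", expiration: { maxAgeSeconds: 7 * 86400 } },
  },
  {
    urlPattern: /\.(png|jpg|webp|svg)$/,
    handler: "CacheFirst",
    options: { cacheName: "images", expiration: { maxAgeSeconds: 30 * 86400 } },
  },
  {
    urlPattern: /\.(woff2?|ttf)$/,
    handler: "CacheFirst",
    options: { cacheName: "fonts", expiration: { maxAgeSeconds: 365 * 86400 } },
  },
  { urlPattern: /\/api\//, handler: "NetworkOnly" },
];
```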
### Cache Invalidation
```mermaid
sequenceDiagram
    participant SW as Service Worker
    participant Cache as Cache Storage
    participant Network as Network
    participant App as Application
    App->>SW: Request resource
    alt CacheFirst (images, fonts)
        SW->>Cache: Check cache
        Cache-->>SW: Return if exists
        SW->>App: Serve from cache
    else NetworkFirst (JS, HTML)
        SW->>Network: Fetch (with timeout)
        alt Success within timeout
            Network-->>SW: Response
            SW->>Cache: Update cache
            SW->>App: Serve response
        else Timeout/Failure
            SW->>Cache: Fallback to cache
            Cache-->>SW: Cached response
            SW->>App: Serve cached
        end
    else StaleWhileRevalidate (CSS)
        SW->>Cache: Get cached version
        Cache-->>SW: Cached response
        SW->>App: Serve immediately
        SW->>Network: Fetch update (background)
        Network-->>SW: Fresh response
        SW->>Cache: Update cache
    end
```
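The NetworkFirst-with-timeout branch boils down to racing the network against a timer and falling back to cache. A minimal generic sketch of that logic (not the actual Workbox internals; `fetchFn` and `cacheGet` are injected placeholders):

```typescript
// Sketch of NetworkFirst with a timeout: race the network against a
// timer; on timeout or network error, fall back to the cache.
async function networkFirst<T>(
  fetchFn: () => Promise<T>,
  cacheGet: () => Promise<T | undefined>,
  timeoutMs: number
): Promise<T> {
  const timeout = new Promise<"timeout">((resolve) =>
    setTimeout(() => resolve("timeout"), timeoutMs)
  );
  try {
    const result = await Promise.race([fetchFn(), timeout]);
    if (result !== "timeout") return result as T; // network won the race
  } catch {
    // network error: fall through to cache
  }
  const cached = await cacheGet();
  if (cached === undefined) throw new Error("offline and not cached");
  return cached;
}
```

The real strategy additionally writes successful network responses back into the cache, as the diagram's "Update cache" step shows.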
## Module Resolution
The project uses TypeScript path aliases for clean imports:
| Alias | Path | Usage |
|---|---|---|
| `@/` | `src/` | Source code root |
| `@/components` | `src/components/` | UI components |
| `@/hooks` | `src/hooks/` | Custom hooks |
| `@/stores` | `src/stores/` | Zustand stores |
| `@/apps` | `src/apps/` | App modules |
| `@/utils` | `src/utils/` | Utility functions |
| `@/lib` | `src/lib/` | Libraries |
| `@/types` | `src/types/` | TypeScript types |
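Aliases like these are typically declared twice, once for the compiler and once for the bundler. A sketch of the usual wiring (assuming a plain alias setup rather than a plugin such as `vite-tsconfig-paths`):

```typescript
// vite.config.ts (sketch): mirror the tsconfig "paths" mapping.
// tsconfig.json side would contain:
//   "baseUrl": ".",
//   "paths": { "@/*": ["src/*"] }
import path from "node:path";
import { defineConfig } from "vite";

export default defineConfig({
  resolve: {
    // "@/..." resolves into src/; sub-paths like "@/components" follow from it.
    alias: { "@": path.resolve(__dirname, "src") },
  },
});
```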
## Environment Configuration
### Development

```bash
# Local development with HMR
bun run dev         # Full stack (API + Vite with proxy); the default
bun run dev:vite    # Vite dev server only (frontend-only, no API)
bun run dev:api     # Standalone Bun API server only (port 3000)
bun run dev:vercel  # Optional: Vercel dev server (parity/debugging only)
```
### Production Build

```bash
bun run build    # Production build
bun run preview  # Preview the production build locally
```
### Desktop (Tauri)

```bash
bun run tauri:dev    # Development with native shell
bun run tauri:build  # Build native applications
```
## Performance Optimizations
### Runtime Stability & App Coordination
- Error Boundaries: Desktop-level and app-level boundaries isolate crashes and allow targeted recovery flows.
- Typed App Event Bus: `src/utils/appEventBus.ts` defines typed primitives for app launch/update, window focus, Spotlight/Expose toggles, and file/document events.
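A typed event bus of this kind can be sketched as a map from event name to payload type, so listeners are type-checked per event. The event names and payload shapes below are illustrative, not the project's actual `AppEventMap`:

```typescript
// Minimal typed event bus sketch; the event map is illustrative.
type AppEventMap = {
  "app:launch": { appId: string; instanceId: string };
  "window:focus": { instanceId: string };
  "spotlight:toggle": { open: boolean };
};

function createEventBus<M extends Record<string, unknown>>() {
  const listeners = new Map<keyof M, Set<(payload: never) => void>>();
  return {
    // Subscribe; the returned function unsubscribes.
    on<K extends keyof M>(event: K, fn: (payload: M[K]) => void): () => void {
      const set = listeners.get(event) ?? new Set();
      set.add(fn as (payload: never) => void);
      listeners.set(event, set);
      return () => set.delete(fn as (payload: never) => void);
    },
    emit<K extends keyof M>(event: K, payload: M[K]): void {
      listeners.get(event)?.forEach((fn) => (fn as (p: M[K]) => void)(payload));
    },
  };
}
```

With this shape, `bus.emit("app:launch", { appId: "finder", instanceId: "1" })` compiles, while a wrong payload type is rejected at compile time.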
### Initial Load Optimizations
- Code Splitting: Apps loaded on-demand via React.lazy
- Font Loading: System font stacks with web font fallbacks
- Image Optimization: Responsive images, WebP format
- CSS Layers: Tailwind with theme-specific overrides
- Locale Loading: Non-default locales are loaded lazily via dynamic imports
### Runtime Optimizations
- Zustand Selectors: Fine-grained subscriptions prevent re-renders
- Memo/Callback: Strategic memoization for expensive computations
- Virtual Lists: Large lists use virtualization (iPod, Finder)
- Debounced Actions: User inputs debounced for performance
- Spotlight Offloading: Spotlight indexing/search for dynamic datasets runs in a dedicated Web Worker
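The selector idea behind the first bullet can be illustrated outside React: a listener fires only when its selected slice actually changes identity. This is a sketch of the concept, not zustand's implementation:

```typescript
// Sketch of selector-based subscriptions (the concept behind zustand
// selectors): listeners are notified only when their slice changes.
function createStore<S extends object>(initial: S) {
  let state = initial;
  const listeners = new Set<() => void>();
  return {
    getState: () => state,
    setState(partial: Partial<S>): void {
      state = { ...state, ...partial };
      listeners.forEach((l) => l());
    },
    subscribeWithSelector<T>(selector: (s: S) => T, onChange: (value: T) => void): () => void {
      let prev = selector(state);
      const listener = () => {
        const next = selector(state);
        // Only notify when the selected slice changes identity.
        if (!Object.is(next, prev)) {
          prev = next;
          onChange(next);
        }
      };
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
  };
}
```

In React terms, the `Object.is` check is what prevents components subscribed to one slice from re-rendering when an unrelated slice updates.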
### Audio Optimizations
- Shared AudioContext: Single context prevents resource exhaustion
- Lazy AudioBuffer Loading: Sounds loaded on first interaction
- LRU Cache: Limited audio buffer cache with eviction
- Concurrent Source Limiting: Prevents audio overload
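An LRU cache with eviction can be built on a `Map`'s insertion-order iteration. The sketch below shows the idea; the entry limit and the use of generic keys/values are assumptions, not the project's actual audio cache:

```typescript
// Sketch: LRU cache suitable for decoded audio buffers, relying on
// Map's insertion-order iteration. maxEntries is an illustrative limit.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private maxEntries = 32) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark this entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```

For audio, `V` would be an `AudioBuffer` and eviction bounds memory while hot sounds stay decoded.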
```mermaid
graph TD
    subgraph Optimization["Performance Optimization Layers"]
        L1["Initial Load<br/>Code Splitting, Lazy Loading"]
        L2["Runtime<br/>Selectors, Memoization"]
        L3["Audio<br/>Shared Context, Caching"]
        L4["Network<br/>PWA Caching, Prefetch"]
    end
    L1 --> L2 --> L3 --> L4
```
## Build Pipeline
```mermaid
flowchart LR
    subgraph Input
        TS[TypeScript]
        TSX[React TSX]
        CSS[Tailwind CSS]
        Assets[Static Assets]
    end
    subgraph Vite["Vite Build"]
        SWC[SWC Compiler]
        Rollup[Rollup Bundler]
        PostCSS[PostCSS]
    end
    subgraph Output
        JS[Optimized JS Chunks]
        StyleSheet[Minified CSS]
        Static[Hashed Assets]
        SW[Service Worker]
    end
    TS --> SWC
    TSX --> SWC
    CSS --> PostCSS
    SWC --> Rollup
    PostCSS --> Rollup
    Assets --> Static
    Rollup --> JS
    Rollup --> StyleSheet
    Rollup --> SW
```
## Related Documentation
- Application Framework - App structure and lifecycle
- State Management - Zustand stores and persistence
- API Architecture - Backend API design
- Self-hosting on VPS - Run API/frontend without Vercel