Architecture Overview
This document provides a high-level overview of the BearFlare architecture and design patterns.
Project Overview
Purpose: Extract and sync Bear.app's ZSFNOTE table schema and data to Cloudflare D1.
Architecture Pattern: Linear Pipeline Architecture - sequential phases with validation gates (schema + data sync pipeline).
Technology Stack: bash, sqlite3, curl, jq, Cloudflare D1 REST API.
Entry Point: bearflare.sh (executable bash script).
Dependencies:
- External: sqlite3, curl, jq
- Optional: bc (used to convert D1 query duration seconds → milliseconds; falls back to raw value if missing)
- Implicit: Bear.app, macOS
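The `bc` fallback noted above can be sketched as follows; `duration_to_ms` is an illustrative name, not a function from the source.

```shell
# Convert a duration in seconds (as reported by D1) to milliseconds.
# Falls back to echoing the raw seconds value when bc is not installed.
duration_to_ms() {
  local seconds="$1"
  if command -v bc >/dev/null 2>&1; then
    # bc handles the fractional seconds D1 reports (e.g. 0.25 -> 250.00)
    printf '%s * 1000\n' "$seconds" | bc
  else
    # Without bc, report the raw value rather than failing
    printf '%s\n' "$seconds"
  fi
}

duration_to_ms 0.25
```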
Architecture Diagram
sequenceDiagram
participant User
participant Script as bearflare.sh
participant Config as Config Files
participant Bear as Bear SQLite DB
participant LogFile as ~/.bearflare/bearflare.log
participant LastSync as ~/.bearflare/.last_sync
participant BearModule as Bear Module
participant D1Module as Cloudflare D1 Module
participant CF as Cloudflare D1 API
User->>Script: Execute with flags
Script->>LogFile: Initialize log file (rotate if needed)
Script->>Config: Load ~/.bearflare/.config
Script->>Config: Load .env
Script->>Script: Validate environment variables
Script->>Script: Check dependencies (sqlite3, curl, jq, optional bc)
Script->>BearModule: Validate database path
BearModule->>Bear: Verify database path
alt Schema Sync Enabled
Script->>BearModule: Extract schema
BearModule->>Bear: Extract ZSFNOTE schema
Script->>Script: Prepare SQL (optional DROP)
Script->>D1Module: POST schema to D1 API
D1Module->>CF: HTTP POST request
CF-->>D1Module: HTTP response
D1Module-->>Script: Success/Error
Script->>LogFile: Log schema sync result + timing
end
alt Data Sync Enabled
alt Incremental Mode
Script->>LastSync: Read last sync timestamp
Script->>BearModule: Query rows modified since timestamp
BearModule->>Bear: Query rows modified since timestamp
else Single Note Mode
Script->>BearModule: Query single note by ZUNIQUEIDENTIFIER
BearModule->>Bear: Query single note by ZUNIQUEIDENTIFIER
else Full Sync
Script->>BearModule: Query all rows (with optional LIMIT/OFFSET)
BearModule->>Bear: Query all rows (with optional LIMIT/OFFSET)
end
Script->>Script: Generate INSERT statements
Script->>Script: Batch INSERTs (default 100 per batch)
loop For each batch
Script->>D1Module: Sync batch to D1
D1Module->>CF: POST batch (with retries/backoff)
CF-->>D1Module: HTTP response
D1Module-->>Script: Success/Error
Script->>LogFile: Log batch result + timing
end
Script->>D1Module: Query D1 row count
D1Module->>CF: POST query request
CF-->>D1Module: Row count
D1Module-->>Script: Row count
Script->>Script: Validate row counts match
alt Incremental Mode
Script->>LastSync: Update timestamp
end
end
Script->>LogFile: Log final summary + total timing
Script-->>User: Success/Error message with timing
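The retry-with-backoff step in the diagram can be sketched as a generic wrapper; `with_retries` and the endpoint/variable names in the usage comment are assumptions, not identifiers from the source.

```shell
# Run a command up to $1 times, doubling the pause between attempts
# (1s, 2s, 4s, ...). Returns 0 on the first success, 1 if all attempts fail.
with_retries() {
  local max="$1" attempt=1 delay=1
  shift
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$attempt" -lt "$max" ]; then
      sleep "$delay"
      delay=$((delay * 2))
    fi
    attempt=$((attempt + 1))
  done
  return 1
}

# Illustrative usage against the D1 query endpoint (the CF_* variable
# names are assumptions):
# with_retries 3 curl -sf -X POST \
#   "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/d1/database/${CF_DATABASE_ID}/query" \
#   -H "Authorization: Bearer ${CF_API_TOKEN}" \
#   -H "Content-Type: application/json" \
#   --data '{"sql":"SELECT COUNT(*) FROM ZSFNOTE;"}'
```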
Modular Architecture
The codebase is modularized into 8 distinct module directories under src/lib/, each with its own documentation. See Modular Architecture for the complete module-level documentation.
Module Categories
- Foundation: `utils/` (core utilities, timing helpers)
- Infrastructure: `logging/`, `config/`
- Interface: `cli/`
- Domain: `bear/`, `cloudflare/d1/`, `data/`
- Orchestration: `sync/`
Each module follows a clear separation of concerns and maintains Bash 3.2 compatibility for macOS. The architecture uses a consistent pattern of sourcing dependencies before dependent modules.
Key Design Patterns
Configuration Priority
- Environment variables (highest priority; checked before config loading)
- `~/.bearflare/.config` (user-global, loaded first)
- `.env` in project root (project-specific, loaded second)
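A minimal sketch of this precedence, assuming a helper that only assigns when the variable is not already set (`set_default` is an illustrative name, not from the source):

```shell
# Assign a value to $key only if the variable is currently unset or empty,
# so values seen earlier (environment, then ~/.bearflare/.config) win
# over values seen later (.env).
set_default() {
  local key="$1" value="$2"
  if [ -z "${!key:-}" ]; then
    eval "$key=\"\$value\""
  fi
}

set_default CF_ACCOUNT_ID "id-from-config-file"   # kept only if not already set
```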
Error Handling
- Fail-fast with `set -euo pipefail`
- Explicit validation at each phase
- Errors accumulated before exiting (e.g., missing env vars)
- Color-coded error messages with contextual hints
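The fail-fast plus error-accumulation combination might look like this sketch (`collect_missing` and the `CF_*` variable names are assumptions, not identifiers from the source):

```shell
#!/usr/bin/env bash
set -euo pipefail   # fail fast: exit on errors, unset variables, and pipe failures

# Print the name of every variable in "$@" that is unset or empty,
# so all missing configuration can be reported in one pass.
collect_missing() {
  local var
  local missing=()
  for var in "$@"; do
    if [ -z "${!var:-}" ]; then
      missing+=("$var")
    fi
  done
  if [ "${#missing[@]}" -gt 0 ]; then
    printf '%s\n' "${missing[@]}"
  fi
}

# Illustrative startup check: list every missing variable, then exit once.
#   missing=$(collect_missing CF_ACCOUNT_ID CF_DATABASE_ID CF_API_TOKEN)
#   [ -z "$missing" ] || { printf 'Missing: %s\n' "$missing" >&2; exit 1; }
```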
Logging
- All operations logged to `~/.bearflare/bearflare.log`
- Automatic rotation when the file exceeds 10MB
- ANSI codes stripped from log entries
- DEBUG logs gated behind the `--debug` flag
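A sketch of these logging behaviors, assuming a single `log_line` helper (the function name is illustrative; the log path and 10MB threshold come from the description above):

```shell
LOG_FILE="${HOME}/.bearflare/bearflare.log"
MAX_LOG_BYTES=$((10 * 1024 * 1024))   # rotate past 10MB

log_line() {
  local msg="$1"
  mkdir -p "$(dirname "$LOG_FILE")"
  # Rotate before writing if the log has grown past the threshold
  # (wc -c is portable to macOS, where stat flags differ from GNU).
  if [ -f "$LOG_FILE" ] && [ "$(wc -c < "$LOG_FILE")" -gt "$MAX_LOG_BYTES" ]; then
    mv "$LOG_FILE" "${LOG_FILE}.1"
  fi
  # Strip ANSI color codes so the log stays plain text
  printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$msg" \
    | sed $'s/\x1b\[[0-9;]*m//g' >> "$LOG_FILE"
}
```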
Sync Workflow
- Schema Sync: Extract schema from Bear database, optionally prepend DROP statements, POST to D1 API
- Data Sync: Extract data rows, batch INSERT statements, sync batches to D1 with retry logic
- Validation: Query D1 row counts, compare with source, validate table existence
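The incremental-mode bookkeeping around `~/.bearflare/.last_sync` might look like this sketch (the function names, and the use of a Unix epoch timestamp, are assumptions; Bear stores Core Data timestamps, so the real script may convert between epochs):

```shell
LAST_SYNC_FILE="${HOME}/.bearflare/.last_sync"

# Read the previous sync timestamp, defaulting to 0 (sync everything)
# when no prior sync has been recorded.
read_last_sync() {
  if [ -f "$LAST_SYNC_FILE" ]; then
    cat "$LAST_SYNC_FILE"
  else
    echo 0
  fi
}

# Record the current time after a successful incremental sync.
write_last_sync() {
  mkdir -p "$(dirname "$LAST_SYNC_FILE")"
  date +%s > "$LAST_SYNC_FILE"
}

# The incremental query would then filter ZSFNOTE rows on the stored value
# (column name is an assumption based on Core Data naming conventions):
#   SELECT * FROM ZSFNOTE WHERE ZMODIFICATIONDATE > :last_sync
```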
Navigation
- Modular Architecture - Complete module hierarchy and dependencies
- CLI Module - Command-line interface
- Config Module - Configuration management
- Logging Module - Logging system
- Bear Module - Bear database operations
- Cloudflare D1 Module - D1 API integration
- Data Module - Data batching and transformations
- Sync Module - Sync orchestration
- Utils Module - Core utilities