Logging Module

This module centralizes log file initialization/rotation and provides a consistent log() API across the codebase.

Module Overview

  • Purpose: Centralize log file initialization/rotation and provide a consistent log() API across the codebase.
  • Primary module:
    • src/lib/logging/logger.sh
  • Designed for: Bash 3.2 compatibility (macOS default).

Dependencies

  • Required:
    • src/lib/utils/core.sh: strip_ansi(), die()
    • date (timestamp formatting)
  • Required for timing logs:
    • src/lib/utils/timing.sh: now_ns() (used by log_with_timing())

Public API

init_logging()

Initialize the log directory and log file, rotating when needed:

  • Creates LOG_DIR if missing (mode 0700)
  • Creates LOG_FILE if missing (mode 0600)
  • Rotates the log file when it exceeds 10MB
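
A minimal sketch of this behavior, assuming LOG_DIR and LOG_FILE are the module's path variables; the default paths and internals shown here are illustrative, not the actual implementation:

```shell
# Illustrative sketch of init_logging(); the 10MB threshold and the
# 0700/0600 modes follow the documented behavior, everything else is
# an assumption. Bash 3.2-compatible.
LOG_DIR="${LOG_DIR:-$HOME/.bearflare/logs}"
LOG_FILE="${LOG_FILE:-$LOG_DIR/bearflare.log}"
MAX_LOG_SIZE=10485760  # 10MB rotation threshold

init_logging() {
  # Create the log directory with restrictive permissions if missing
  if [ ! -d "$LOG_DIR" ]; then
    mkdir -p "$LOG_DIR" && chmod 0700 "$LOG_DIR"
  fi

  # Rotate when the current log exceeds the threshold
  if [ -f "$LOG_FILE" ]; then
    local size
    size=$(wc -c < "$LOG_FILE" | tr -d ' ')
    if [ "$size" -gt "$MAX_LOG_SIZE" ]; then
      mv -f "$LOG_FILE" "${LOG_FILE}.old"  # single .old backup
    fi
  fi

  # Create the log file with restrictive permissions if missing
  if [ ! -f "$LOG_FILE" ]; then
    : > "$LOG_FILE" && chmod 0600 "$LOG_FILE"
  fi
}
```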

log(level, message...)

Write timestamped, ANSI-stripped entries to LOG_FILE. Levels: ALL, TRACE, VERBOSE, DEBUG, INFO, WARN, ERROR.

Note: Old level names (API, TIMING) are deprecated but still supported for backward compatibility:

  • API → mapped to DEBUG
  • TIMING → mapped to INFO
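
The shape of log() can be sketched as follows. The numeric level values follow the mapping documented under Environment Detection (ALL=0, TRACE=5, VERBOSE=7, DEBUG=10, INFO=20, WARN=30, ERROR=40); the helper _level_to_num and the inlined strip_ansi() stand-in (the real one lives in src/lib/utils/core.sh) are assumptions for self-containment:

```shell
# Assumed name-to-number mapping, consistent with the LOG_LEVEL values
# documented under Environment Detection.
_level_to_num() {
  case "$1" in
    ALL)     echo 0 ;;
    TRACE)   echo 5 ;;
    VERBOSE) echo 7 ;;
    DEBUG)   echo 10 ;;
    INFO)    echo 20 ;;
    WARN)    echo 30 ;;
    ERROR)   echo 40 ;;
    *)       echo 20 ;;  # unknown levels fall back to INFO
  esac
}

# Minimal stand-in for strip_ansi() from core.sh, included only so the
# sketch runs on its own.
strip_ansi() {
  printf '%s' "$1" | sed $'s/\x1b\\[[0-9;]*m//g'
}

log() {
  local level="$1"; shift
  # Deprecated level names map to their modern equivalents
  case "$level" in
    API)    level="DEBUG" ;;
    TIMING) level="INFO" ;;
  esac
  # Suppress entries below the active threshold
  [ "$(_level_to_num "$level")" -lt "${LOG_LEVEL:-20}" ] && return 0
  # Timestamped, ANSI-stripped entry: [timestamp] [LEVEL] message
  printf '[%s] [%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$level" \
    "$(strip_ansi "$*")" >> "$LOG_FILE"
}
```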

log_with_timing(operation, start_ns)

Convenience helper that emits a TIMING log entry with the elapsed time in milliseconds.
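
A sketch of how this helper might look; now_ns() comes from src/lib/utils/timing.sh and log() from this module, while the message wording here is an assumption:

```shell
# Illustrative sketch of log_with_timing(); internals are assumptions.
log_with_timing() {
  local operation="$1" start_ns="$2"
  local end_ns elapsed_ms
  end_ns=$(now_ns)
  # Nanoseconds to whole milliseconds (Bash 3.2 integer arithmetic)
  elapsed_ms=$(( (end_ns - start_ns) / 1000000 ))
  log "TIMING" "${operation} completed in ${elapsed_ms}ms"
}
```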

Console/Log Separation Principle

BearFlare follows a strict separation between console output (user-facing visual feedback) and log file output (structured operational records). This separation ensures that log files remain clean, parseable, and useful for debugging, while the console provides rich visual feedback for interactive use.

Core Principle

  • Console Output (printf to stderr): Visual elements, progress indicators, colors, checkmarks, and user-facing messages
  • Log File Output (log() function): Structured entries with timestamps, levels, and operational context

Why This Matters

  • Log files should be machine-readable and contain structured data for debugging, auditing, and analysis
  • Console output should be human-friendly with visual cues, colors, and progress indicators
  • Never duplicate visual decorators or progress bars in log files
  • Never omit operational context from log files

Correct Patterns

Console Output Pattern

Use printf to stderr for visual feedback:

# Visual feedback for users
printf '%s\n' " ${GREEN}✓${RESET} Operation completed" >&2
print_section "Phase Name"
print_timing "operation" "$elapsed_ms"

# Progress indicators
printf '%s' "Processing..." >&2
printf '%s\n' " done" >&2

# Colored status messages
printf '%s\n' "${GREEN}Success${RESET}: All checks passed" >&2

Log File Pattern

Use log() for structured operational logging:

# Structured operational logging
log "INFO" "Operation started with parameters: ${params}"
log "DEBUG" "Detailed diagnostic information"
log "ERROR" "Operation failed: ${error_details}"
log_with_timing "operation" "$start_ns"

Combined Pattern

When you need both console feedback and structured logging:

# Console feedback + structured logging
printf '%s\n' " ${table_name}: $(format_number "$rows") rows extracted"
log_with_timing "Data extraction" "$start_ns"
log "DEBUG" "Extracted ${rows} rows from ${table_name}"

Real-World Examples

Example 1: Validation with visual feedback and logging

# Console: Visual checkmark
printf '%s\n' " ${GREEN}✓${RESET} Validation passed"

# Log: Structured result
log "INFO" "Validation completed: ${validated_count} items validated"

Example 2: Progress with operational context

# Console: Progress indicator
printf '%s' "Sending request..."
# ... API call ...
printf '%s\n' " done"

# Log: API interaction details
log "DEBUG" "API request: POST ${endpoint}"
log "DEBUG" "Response status: ${http_code}"
log_with_timing "API request" "$start_ns"

Example 3: Error reporting

# Console: Colored error message
printf '%s\n' "${RED}Error${RESET}: ${error_message}" >&2

# Log: Structured error with context
log "ERROR" "Operation failed: ${error_message}"
log "DEBUG" "Error context: ${additional_details}"

Anti-Patterns to Avoid

❌ Never Log Visual Decorators

# BAD: Visual elements in log file
log "INFO" "✓ Operation completed" # Checkmark doesn't belong in log
log "INFO" "━━━ Section Header ━━━" # Decorators don't belong in log

❌ Never Duplicate Console Output

# BAD: Duplicating console output to log
printf '%s\n' "Processing..."
log "INFO" "Processing..." # Redundant - console already shows this

❌ Never Log Progress Bars

# BAD: Progress bars in log file
log "INFO" "[████████░░] 80% complete" # Progress bars are console-only

❌ Never Log Raw SQL or Large Payloads Without Truncation

# BAD: Logging huge SQL statements
log "DEBUG" "SQL: ${huge_sql_statement}" # Can bloat log files

# GOOD: Log summary or truncated version
log "DEBUG" "SQL prepared: ${#sql} bytes, ${table_count} tables"

✅ Correct Approach

# Console: Visual progress
printf '%s' "[████████░░] 80% complete"

# Log: Structured metrics
log "INFO" "Progress: 80% (${processed}/${total} items)"
log_with_timing "Processing" "$start_ns"

Guidelines for New Module Development

When creating new modules, follow these guidelines:

  1. Identify the audience: Is this information for the user (console) or for debugging/auditing (log)?

  2. Visual elements → Console: Checkmarks (✓), separators (━━━), progress bars, colors, and formatting go to console via printf

  3. Operational context → Log: Start/end events, timing, errors, metrics, API responses, and diagnostic information go to log file via log()

  4. Use both when needed: If information is relevant to both user and debugging, output to both (but format appropriately for each)

  5. Timing information: Use log_with_timing() for structured timing logs, and print_timing() for console display

  6. Error handling: Always log errors with log "ERROR", and optionally show user-friendly messages on console

Module Review Checklist

When reviewing or creating modules, verify:

  • Visual decorators (✓, ━━━) only appear in console output
  • Progress bars only appear in console output
  • Log file entries follow format: [timestamp] [LEVEL] message
  • Console output uses colors and formatting appropriately
  • Operational context (start/end, timing, errors, metrics) is logged
  • No duplicate messages between console and log (unless intentionally different formats)
  • Large payloads (SQL, API responses) are truncated or summarized in logs

Environment Detection

The logger automatically detects whether it's running from source (development mode) or as a built artifact (production mode). This detection is based on the script filename:

  • Development mode: Script filename ends with .sh (e.g., bearflare.sh)

    • Default log level: DEBUG (10) - shows DEBUG, INFO, WARN, and ERROR messages (VERBOSE, TRACE, and ALL still require explicit verbosity flags)
    • Useful for development and debugging
  • Production mode: Script filename does not end with .sh (e.g., bearflare or dist/bearflare)

    • Default log level: INFO (20) - shows only informational, warning, and error messages
    • Reduces noise in production environments
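
The detection rule above can be sketched as follows; the helper name is an assumption, while the .sh-suffix rule and the DEBUG/INFO defaults are as documented:

```shell
# Illustrative sketch of environment detection (helper name assumed).
detect_default_log_level() {
  case "${1##*/}" in
    *.sh) echo 10 ;;  # development mode: DEBUG
    *)    echo 20 ;;  # production mode: INFO
  esac
}

# Applied once at startup, unless LOG_LEVEL was set explicitly:
LOG_LEVEL="${LOG_LEVEL:-$(detect_default_log_level "$0")}"
```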

Overriding Environment Defaults

Environment-based defaults can be overridden using:

  • Verbosity flags: -v, -vv, -vvv, -vvvv (sets DEBUG, VERBOSE, TRACE, ALL respectively)
  • LOG_LEVEL environment variable: Set to numeric value (0, 5, 7, 10, 20, 30, 40) or level name (ALL, TRACE, VERBOSE, DEBUG, INFO, WARN, ERROR)
  • DEBUG_LOG=1: For backward compatibility, sets log level to DEBUG
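
Since LOG_LEVEL accepts either a number or a level name, resolution might look like the sketch below; the helper name is an assumption, and the name/number mapping follows the values listed above:

```shell
# Illustrative sketch of LOG_LEVEL override resolution.
resolve_log_level() {
  case "$1" in
    ''|*[!0-9]*)  # not purely numeric: treat as a level name
      case "$1" in
        ALL)     echo 0 ;;
        TRACE)   echo 5 ;;
        VERBOSE) echo 7 ;;
        DEBUG)   echo 10 ;;
        INFO)    echo 20 ;;
        WARN)    echo 30 ;;
        ERROR)   echo 40 ;;
        *)       echo 20 ;;  # unknown names fall back to INFO
      esac ;;
    *) echo "$1" ;;  # already numeric: use as-is
  esac
}

# Backward-compatibility shim for DEBUG_LOG=1:
if [ "${DEBUG_LOG:-0}" = "1" ]; then LOG_LEVEL=10; fi
```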

Examples

# Running from source - automatically uses DEBUG level
./bearflare.sh sync

# Running built artifact - automatically uses INFO level
./dist/bearflare sync

# Override with verbosity flag (works in both environments)
./bearflare.sh -v sync # Enables DEBUG level

# Override with environment variable
LOG_LEVEL=ERROR ./bearflare.sh sync # Shows only ERROR messages

Rotation Policy

  • Threshold: 10MB (10485760 bytes)
  • Policy: the current log is renamed to a single backup file (.old), replacing any prior .old

Integration Order

In bearflare.sh (or a root loader), source modules in this order:

  1. src/lib/utils/core.sh
  2. src/lib/utils/timing.sh
  3. src/lib/logging/logger.sh

Then call init_logging() early in startup, before emitting any log entries. init_logging() automatically performs environment detection and sets the appropriate default log level based on whether the script is running from source or as a built artifact.
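
A loader sketch illustrating this order (a bootstrap fragment, assuming the documented paths; the chain matters because logger.sh calls strip_ansi()/die() from core.sh, and log_with_timing() needs now_ns() from timing.sh):

```shell
#!/usr/bin/env bash
# Illustrative root-loader fragment; source order is as documented.
. "src/lib/utils/core.sh"      # strip_ansi(), die()
. "src/lib/utils/timing.sh"    # now_ns()
. "src/lib/logging/logger.sh"  # init_logging(), log(), log_with_timing()

init_logging                   # before any log() call
log "INFO" "Startup: logging initialized"
```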