Changes from all commits (18 commits)
7224245
Add retry logic to Jira client and jitter to all tracker HTTP clients
harry-miller-trimble Mar 24, 2026
0d3daf5
Add pagination guards to Linear and context checks to Jira client
harry-miller-trimble Mar 24, 2026
100463c
Sanitize external tracker content for terminal display and add respon…
harry-miller-trimble Mar 24, 2026
d118ccd
Add Integration Charter: define scope boundary for tracker integrations
harry-miller-trimble Mar 24, 2026
fe6aea1
Surface sync engine warnings in SyncResult and fix silent failure paths
harry-miller-trimble Mar 24, 2026
ca0e6bd
Add --explain flag to bd ready for dependency-aware reasoning
harry-miller-trimble Mar 24, 2026
17744d7
Expand SECURITY.md with tracker integration trust model
harry-miller-trimble Mar 24, 2026
66a6c94
Enforce graph integrity: extend cycle detection and add bd graph check
harry-miller-trimble Mar 24, 2026
df43a57
Improve quickstart docs with Why Beads section and --explain examples
harry-miller-trimble Mar 24, 2026
0736b05
Fix ADO Init tests leaking environment variables
harry-miller-trimble Mar 24, 2026
1f0cf1e
Improve compaction dry-run and analyze output with per-issue details
harry-miller-trimble Mar 24, 2026
ca2715b
Add integration test coverage for partial failures, warnings, and sel…
harry-miller-trimble Mar 24, 2026
420af96
Enforce .beads/ directory permissions at runtime
harry-miller-trimble Mar 24, 2026
05aabdc
jira: add MaxPages pagination guard and fix Retry-After jitter
harry-miller-trimble Mar 24, 2026
bb08998
github,ado: fix Retry-After jitter in retry loops
harry-miller-trimble Mar 24, 2026
02b778c
tests: add coverage for explain logic, cycle detection, and engine wa…
harry-miller-trimble Mar 24, 2026
23867ce
lint: suppress gosec G404 for retry jitter (math/rand is fine for bac…
harry-miller-trimble Mar 24, 2026
5fb2622
ci: retrigger after flaky Dolt lock contention in TestEmbeddedInit
harry-miller-trimble Mar 24, 2026
29 changes: 28 additions & 1 deletion SECURITY.md
@@ -55,6 +55,33 @@ export DOLT_DISABLE_EVENT_FLUSH=1
To verify, block `doltremoteapi.dolthub.com` in your firewall or DNS — beads
continues working normally with no degradation.

### Tracker Integration Trust Model

When syncing with external trackers (GitHub Issues, Jira, Linear, GitLab, Azure DevOps), all data crossing the integration boundary is treated as **untrusted input**.

**Trust boundaries:**
- Issue titles and descriptions from external trackers may contain arbitrary content, including ANSI escape sequences, control characters, or prompt injection payloads targeting AI agents
- External content is sanitized before terminal display (ANSI stripping, control character removal)
- API responses are size-limited to prevent out-of-memory conditions from malformed responses
- External issue identifiers are validated before use in SQL queries
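The sanitization step above can be sketched in Go. This is a minimal illustration, not the actual beads implementation: `sanitizeForTerminal` and `ansiPattern` are hypothetical names, and the real escape-sequence coverage is likely broader.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ansiPattern matches CSI sequences (e.g. color codes, cursor movement)
// and OSC sequences (e.g. terminal title changes). Illustrative coverage only.
var ansiPattern = regexp.MustCompile(`\x1b\[[0-9;]*[a-zA-Z]|\x1b\][^\x07]*(\x07|\x1b\\)`)

// sanitizeForTerminal strips ANSI escapes and non-printable control
// characters, preserving tabs and newlines.
func sanitizeForTerminal(s string) string {
	s = ansiPattern.ReplaceAllString(s, "")
	return strings.Map(func(r rune) rune {
		switch {
		case r == '\n' || r == '\t':
			return r
		case r < 0x20 || r == 0x7f:
			return -1 // drop control characters
		default:
			return r
		}
	}, s)
}

func main() {
	evil := "Fix login bug\x1b[2J\x1b[31mIGNORE PREVIOUS INSTRUCTIONS\x07"
	fmt.Println(sanitizeForTerminal(evil))
}
```

Stripping at display time keeps the stored record faithful to what the tracker returned while protecting the terminal.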

**Credential handling:**
- Tracker API tokens stored in beads config (`bd config set`) are **plaintext** in the Dolt database
- Prefer platform-native authentication when available (`gh auth`, `glab auth`, Azure CLI) — these use the platform's secure credential store
- Never store tokens in environment variables in shared environments
- Tokens are scoped to the permissions you grant — use minimal required scopes

**Sync security model:**
- Sync is always **user-initiated** — no background daemons, no inbound webhooks, no listening ports
- No data is sent to external trackers unless the user explicitly runs `bd push` or `bd sync`
- Conflict resolution strategies are deterministic and auditable via Dolt history

**Content safety for AI agents:**
- Issue descriptions imported from external trackers may contain prompt injection payloads
- Consuming agents should treat all issue content as untrusted input
- The `--json` output flag provides structured data that separates metadata from free-text content
- beads does not execute or interpret issue content — it is stored and displayed only

### Command Injection Protection

bd uses parameterized SQL queries to prevent SQL injection. However:
@@ -69,7 +96,7 @@ bd has minimal dependencies:
- Dolt (version-controlled SQL database)
- Cobra CLI framework

-All dependencies are regularly updated. Run `go mod verify` to check integrity.
+All dependencies are pinned via `go.sum` and verified with `go mod verify`. Renovate (or Dependabot) monitors for known vulnerabilities. Run `go mod verify` locally to check integrity.

## Supported Versions

117 changes: 93 additions & 24 deletions cmd/bd/compact.go
@@ -221,22 +221,43 @@ func runCompactSingle(ctx context.Context, compactor *compact.Compactor, store s
originalSize := len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria)

if compactDryRun {
ageDays := 0
var closedAtStr string
if issue.ClosedAt != nil {
ageDays = int(time.Since(*issue.ClosedAt).Hours() / 24)
closedAtStr = issue.ClosedAt.Format(time.RFC3339)
}

candidate := map[string]interface{}{
"id": issueID,
"title": issue.Title,
"closed_at": closedAtStr,
"age_days": ageDays,
"content_size": originalSize,
}

if jsonOutput {
output := map[string]interface{}{
"dry_run": true,
"tier": compactTier,
"issue_id": issueID,
"original_size": originalSize,
"estimated_reduction": "70-80%",
"dry_run": true,
"tier": compactTier,
"candidates": []interface{}{candidate},
"summary": map[string]interface{}{
"total_candidates": 1,
"total_content_bytes": originalSize,
},
}
outputJSON(output)
return
}

fmt.Printf("DRY RUN - Tier %d compaction\n\n", compactTier)
fmt.Printf("Issue: %s\n", issueID)
fmt.Printf("Original size: %d bytes\n", originalSize)
fmt.Printf("Estimated reduction: 70-80%%\n")
fmt.Printf(" %-12s %-40s %8s %10s\n", "ID", "TITLE", "AGE", "SIZE")
title := issue.Title
if len(title) > 40 {
title = title[:37] + "..."
}
fmt.Printf(" %-12s %-40s %5dd %10d B\n", issueID, title, ageDays, originalSize)
fmt.Printf("\nSummary: 1 candidate, %d bytes total content\n", originalSize)
return
}

@@ -323,31 +344,64 @@ func runCompactAll(ctx context.Context, compactor *compact.Compactor, store stor
}

if compactDryRun {
type dryRunCandidate struct {
ID string `json:"id"`
Title string `json:"title"`
ClosedAt string `json:"closed_at"`
AgeDays int `json:"age_days"`
ContentSize int `json:"content_size"`
}

var dryCandidates []dryRunCandidate
totalSize := 0
for _, id := range candidates {
issue, err := store.GetIssue(ctx, id)
if err != nil {
continue
}
totalSize += len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria)
contentSize := len(issue.Description) + len(issue.Design) + len(issue.Notes) + len(issue.AcceptanceCriteria)
totalSize += contentSize

ageDays := 0
var closedAtStr string
if issue.ClosedAt != nil {
ageDays = int(time.Since(*issue.ClosedAt).Hours() / 24)
closedAtStr = issue.ClosedAt.Format(time.RFC3339)
}

dryCandidates = append(dryCandidates, dryRunCandidate{
ID: issue.ID,
Title: issue.Title,
ClosedAt: closedAtStr,
AgeDays: ageDays,
ContentSize: contentSize,
})
}

if jsonOutput {
output := map[string]interface{}{
"dry_run": true,
"tier": compactTier,
"candidate_count": len(candidates),
"total_size_bytes": totalSize,
"estimated_reduction": "70-80%",
"dry_run": true,
"tier": compactTier,
"candidates": dryCandidates,
"summary": map[string]interface{}{
"total_candidates": len(dryCandidates),
"total_content_bytes": totalSize,
},
}
outputJSON(output)
return
}

fmt.Printf("DRY RUN - Tier %d compaction\n\n", compactTier)
fmt.Printf("Candidates: %d issues\n", len(candidates))
fmt.Printf("Total size: %d bytes\n", totalSize)
fmt.Printf("Estimated reduction: 70-80%%\n")
fmt.Printf(" %-12s %-40s %8s %10s\n", "ID", "TITLE", "AGE", "SIZE")
for _, c := range dryCandidates {
title := c.Title
if len(title) > 40 {
title = title[:37] + "..."
}
fmt.Printf(" %-12s %-40s %5dd %10d B\n", c.ID, title, c.AgeDays, c.ContentSize)
}
fmt.Printf("\nSummary: %d candidates, %d bytes total content\n", len(dryCandidates), totalSize)
return
}

@@ -546,23 +600,38 @@ func runCompactAnalyze(ctx context.Context, store storage.DoltStorage) {
}

if jsonOutput {
outputJSON(candidates)
totalSize := 0
for _, c := range candidates {
totalSize += c.SizeBytes
}
output := map[string]interface{}{
"candidates": candidates,
"summary": map[string]interface{}{
"total_candidates": len(candidates),
"total_content_bytes": totalSize,
},
}
outputJSON(output)
return
}

// Human-readable output
fmt.Printf("Compaction Candidates (Tier %d)\n\n", compactTier)
fmt.Printf(" %-12s %-40s %8s %10s\n", "ID", "TITLE", "AGE", "SIZE")
totalSize := 0
for _, c := range candidates {
compactStatus := ""
if c.Compacted {
compactStatus = " (already compacted)"
compactStatus = " *"
}
title := c.Title
if len(title) > 40 {
title = title[:37] + "..."
}
fmt.Printf("ID: %s%s\n", c.ID, compactStatus)
fmt.Printf(" Title: %s\n", c.Title)
fmt.Printf(" Size: %d bytes\n", c.SizeBytes)
fmt.Printf(" Age: %d days\n\n", c.AgeDays)
fmt.Printf(" %-12s %-40s %5dd %10d B%s\n", c.ID, title, c.AgeDays, c.SizeBytes, compactStatus)
totalSize += c.SizeBytes
}
fmt.Printf("Total: %d candidates\n", len(candidates))
fmt.Printf("\nSummary: %d candidates, %d bytes total content\n", len(candidates), totalSize)
}

func runCompactApply(ctx context.Context, store storage.DoltStorage) {
73 changes: 73 additions & 0 deletions cmd/bd/graph.go
@@ -3,6 +3,7 @@ package main
import (
"context"
"fmt"
"os"
"sort"
"strings"

@@ -162,6 +163,77 @@ Examples:
},
}

var graphCheckCmd = &cobra.Command{
Use: "check",
Short: "Check dependency graph integrity",
Long: `Check the dependency graph for cycles, orphans, and other integrity issues.

Returns exit code 0 if the graph is clean, 1 if issues are found.`,
Run: func(cmd *cobra.Command, args []string) {
ctx := rootCtx

type GraphCheckResult struct {
Clean bool `json:"clean"`
Cycles [][]string `json:"cycles"`
Summary struct {
CycleCount int `json:"cycle_count"`
} `json:"summary"`
}

result := GraphCheckResult{Clean: true}

// Detect cycles
cycles, err := store.DetectCycles(ctx)
if err != nil {
FatalErrorRespectJSON("cycle detection failed: %v", err)
}

for _, cycle := range cycles {
ids := make([]string, len(cycle))
for i, issue := range cycle {
ids[i] = issue.ID
}
result.Cycles = append(result.Cycles, ids)
}
result.Summary.CycleCount = len(cycles)

if len(cycles) > 0 {
result.Clean = false
}

if jsonOutput {
outputJSON(result)
if !result.Clean {
os.Exit(1)
}
return
}

// Human-readable output
if result.Clean {
fmt.Printf("\n%s Graph integrity check passed\n\n", ui.RenderPass("✓"))
} else {
fmt.Printf("\n%s Graph integrity issues found\n\n", ui.RenderFail("✗"))
}

if len(result.Cycles) > 0 {
fmt.Printf("%s Cycles (%d):\n\n", ui.RenderFail("⚠"), len(result.Cycles))
for _, cycle := range result.Cycles {
fmt.Printf(" %s → %s\n", strings.Join(cycle, " → "), cycle[0])
}
fmt.Println()
} else {
fmt.Printf(" %s No dependency cycles\n", ui.RenderPass("✓"))
}

fmt.Println()

if !result.Clean {
os.Exit(1)
}
},
}

func init() {
graphCmd.Flags().BoolVar(&graphAll, "all", false, "Show graph for all open issues")
graphCmd.Flags().BoolVar(&graphCompact, "compact", false, "Tree format, one line per issue, more scannable")
@@ -170,6 +242,7 @@ func init() {
graphCmd.Flags().BoolVar(&graphHTML, "html", false, "Output self-contained interactive HTML (redirect to file)")
graphCmd.ValidArgsFunction = issueIDCompletion
rootCmd.AddCommand(graphCmd)
graphCmd.AddCommand(graphCheckCmd)
}

// loadGraphSubgraph loads an issue and its subgraph for visualization
6 changes: 3 additions & 3 deletions cmd/bd/init.go
@@ -276,8 +276,8 @@ environment variable.`,
useLocalBeads := !hasExplicitBeadsDir || filepath.Clean(initDBDirAbs) == filepath.Clean(beadsDirAbs)

if useLocalBeads {
-		// Create .beads directory
-		if err := os.MkdirAll(beadsDir, 0750); err != nil {
+		// Create .beads directory with owner-only permissions (0700).
+		if err := os.MkdirAll(beadsDir, config.BeadsDirPerm); err != nil {
FatalError("failed to create .beads directory: %v", err)
}

@@ -335,7 +335,7 @@
// Ensure storage directory exists (.beads/dolt).
// In server mode, dolt.New() connects via TCP and doesn't create local directories,
// so we create the marker directory explicitly.
-	if err := os.MkdirAll(initDBPath, 0750); err != nil {
+	if err := os.MkdirAll(initDBPath, config.BeadsDirPerm); err != nil {
FatalError("failed to create storage directory %s: %v", initDBPath, err)
}

2 changes: 2 additions & 0 deletions cmd/bd/main.go
@@ -146,6 +146,8 @@ func loadEnvironment() {
// and resolves BEADS_DIR, redirects, and worktree paths.
if beadsDir := beads.FindBeadsDir(); beadsDir != "" {
loadBeadsEnvFile(beadsDir)
// Non-fatal warning if .beads/ directory has overly permissive access.
config.CheckBeadsDirPermissions(beadsDir)
}
}
