Examples¶
Practical OracleTrace usage patterns for local development and CI.
Basic trace run¶
Best for:
- Quickly identifying heavy functions
- Understanding call flow before optimization
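A minimal sketch of the basic invocation, following the command shape used elsewhere on this page (the script name is illustrative):

```shell
# Run the script under OracleTrace and print the function-level summary
oracletrace my_script.py
```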
Save trace data to JSON¶
Best for:
- Keeping historical performance snapshots
- Sharing results between local and CI environments
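Using the `--json` flag shown in the comparison examples below (the output filename is illustrative):

```shell
# Run the trace and persist the results as a JSON snapshot
oracletrace my_script.py --json results.json
```

The resulting file can be stored as a CI artifact and reused later as a `--compare` baseline.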
Save trace data to CSV¶
Best for:
- Spreadsheet analysis
- Custom dashboard ingestion
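A sketch assuming a `--csv` flag that mirrors `--json` — the exact flag name is an assumption, so check `oracletrace --help` for your version:

```shell
# Export trace data as CSV (--csv is assumed by analogy with --json)
oracletrace my_script.py --csv results.csv
```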
Compare two executions¶
This highlights function-level deltas so you can catch regressions right after a change.
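A typical two-run comparison using the `--json` and `--compare` flags from the examples below:

```shell
# First run: capture a baseline snapshot
oracletrace my_script.py --json baseline.json

# Second run: trace again and diff against the baseline
oracletrace my_script.py --json current.json --compare baseline.json
```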
Compare two code versions¶
```shell
# on version A
oracletrace app.py --json v1.json

# on version B
oracletrace app.py --json v2.json --compare v1.json
```
Great for release validation and refactor checks.
Fail the run on regression¶
```shell
oracletrace my_script.py --json baseline.json
oracletrace my_script.py --json current.json --compare baseline.json --fail-on-regression --threshold 25
```
This is useful in CI when you want the run to fail if any function's performance regresses by more than 25 percent.
Exit code behavior in this case:
- `0` when no regression exceeds the threshold
- `2` when at least one regression exceeds the threshold
Print only regressions¶
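A sketch assuming a flag that filters the comparison output down to regressed functions — the flag name `--regressions-only` is an assumption, so verify it against `oracletrace --help`:

```shell
# Compare against a baseline and show only functions that got slower
# (--regressions-only is an assumed flag name)
oracletrace my_script.py --json current.json --compare baseline.json --regressions-only
```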
Ignore noisy functions or files¶
Use this to remove known noisy paths/functions from summaries and call flow.
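A sketch assuming an `--ignore` flag that accepts function names or path patterns — both the flag name and its matching semantics are assumptions:

```shell
# Exclude third-party code and a known-noisy helper from the summary
# (--ignore is an assumed flag name; pattern syntax may differ)
oracletrace my_script.py --ignore "site-packages/*" --ignore log_helper
```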
Focus on top offenders¶
Useful when you want a concise summary of the heaviest functions.
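A sketch assuming a `--top N` flag that truncates the summary to the N slowest functions — the flag name is an assumption:

```shell
# Show only the ten heaviest functions (--top is an assumed flag name)
oracletrace my_script.py --top 10
```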
Lightweight CI check pattern¶
```shell
# one-time baseline (store artifact)
oracletrace my_script.py --json baseline.json

# in CI pipeline
oracletrace my_script.py --json current.json --compare baseline.json --fail-on-regression --threshold 10
```
Use this when you want a simple, scriptable guardrail before merging changes.
Read the logic flow tree¶
Typical output shape:
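An illustrative sketch only — the function names and timings below are invented, and the real tree reflects your own code:

```text
main()
├── load_data()        12.4 ms
│   └── parse_rows()    9.1 ms
└── process()          30.2 ms
    └── transform()    28.7 ms
```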
This is useful for spotting unexpected call paths after feature updates.
Tips for accurate comparisons¶
- Keep input data consistent between runs
- Capture a stable baseline from a clean environment
- Compare after focused changes (small PRs are easier to diagnose)
- Combine with unit tests for correctness plus performance confidence