Migrating from 42Crunch to middleBrick
What middleBrick covers
- Black-box scanning with no agents or SDK dependencies
- Under-one-minute scan time with prioritized findings
- Authentication support for Bearer, API key, Basic, and cookies
- OpenAPI 3.0/3.1/Swagger 2.0 parsing with $ref resolution
- LLM security probe coverage across multiple tiers
- Continuous monitoring with diff detection and alerts
Purpose and scope of migration
This guide outlines the practical steps to move from 42Crunch to this self-service API security scanner when evaluating a change in tooling. The focus is on what can be exported, how scan definitions and CI wiring can be reconstructed, and the inherent gaps that stem from the difference in deployment model. The tool does not remediate findings; it surfaces prioritized risk with contextual guidance to support informed remediation decisions.
Data export and evidence preservation
Begin by exporting findings and configurations from 42Crunch through its native reporting and API endpoints. Save scan summaries, issue listings, and any custom rule sets you rely on. This data is the source material for rebuilding audit evidence and mapping to frameworks such as PCI-DSS 4.0, SOC 2 Type II, and OWASP API Top 10 (2023). Use the exported artifacts to maintain continuity in audit trails and to compare historical security posture across scanning tools.
For ongoing monitoring, configure recurring exports and store checksums of report artifacts to detect changes over time. When ingesting results into a new workflow, validate that issue severity and categorization align with your internal risk model, because mapping is interpretive and requires manual reconciliation.
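The checksum step above can be scripted with standard tools. A minimal sketch, assuming exported 42Crunch reports land as JSON files in a directory you choose (the directory name and file layout here are illustrative, not prescribed by either tool):

```shell
# Record SHA-256 checksums of exported report artifacts, then verify them later
# to detect silent changes. Requires GNU coreutils (sha256sum).

# Write a checksum manifest for every JSON export in the given directory.
record_checksums() {
  (cd "$1" && sha256sum -- *.json > checksums.sha256)
}

# Re-verify the manifest; exits non-zero if any artifact changed.
verify_checksums() {
  (cd "$1" && sha256sum -c --quiet checksums.sha256)
}
```

Run `record_checksums` right after each export, and `verify_checksums` on a schedule; a non-zero exit signals that an artifact was modified after the fact.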
Rebuilding scan configurations and CI integration
Reconstruct scan coverage by translating API lists and environment definitions into the new tool. Provide the equivalent URLs, authentication schemes, and header allowlists; Bearer tokens, API keys, Basic auth, and cookies are supported once the target domain is verified. Use the CLI to replicate one-off scans, and integrate the GitHub Action to enforce score thresholds in pull requests, failing the build when risk degrades.
Example CLI invocation to mirror a previous scan target:
```shell
middlebrick scan https://api.example.com/v1/openapi.json --auth-type bearer --auth-token ${{ secrets.API_TOKEN }}
```

For CI/CD, adjust job steps to reference the new tool and update thresholds to match your risk appetite. The MCP Server enables AI-assisted scanning from developer environments, which can be aligned with existing code review practices.
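The threshold enforcement described above can also be approximated in any CI system with a small gate script. This is a sketch only: it assumes a hypothetical JSON report containing an integer `score` field, which may not match the scanner's actual output format:

```shell
# CI gate sketch: fail the job when the scan score falls below a threshold.
# ASSUMPTION: the report is JSON with an integer top-level "score" field.
# Requires jq.
ci_score_gate() {
  threshold="$1"
  report="$2"
  score=$(jq -r '.score' "$report")
  if [ "$score" -lt "$threshold" ]; then
    echo "score $score below threshold $threshold" >&2
    return 1
  fi
  echo "score $score meets threshold $threshold"
}
```

Wire `ci_score_gate 80 scan-report.json` into a job step; a non-zero return fails the build, mirroring the GitHub Action's behavior.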
Known gaps and coverage differences
Because this scanner operates as a black-box, read-only system, certain intrusive or behavioral tests present in 42Crunch may not have direct equivalents. It does not perform active SQL injection or command injection, nor does it detect business logic vulnerabilities or blind SSRF, which rely on out-of-band infrastructure or domain-specific knowledge. These classes of checks require manual investigation or specialized tools that can simulate exploitation under controlled conditions.
OpenAPI contract analysis is supported for versions 2.0, 3.0, and 3.1 with recursive $ref resolution, but differences in how each tool interprets extensions or custom security schemes may lead to discrepancies in findings. You should plan for manual review and iterative tuning when migrating detection logic.
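When comparing how each tool resolves references, it helps to enumerate every `$ref` in the contract first. A minimal sketch using jq, assuming the spec is in JSON form (YAML specs would need conversion first, e.g. with yq):

```shell
# List every distinct $ref target in an OpenAPI JSON document.
# Useful for spotting refs that one tool resolves and another skips.
list_refs() {
  jq -r '.. | objects | .["$ref"]? // empty' "$1" | sort -u
}
```

Running this against the same spec you fed to 42Crunch gives a checklist for verifying that resolution discrepancies, rather than genuine finding differences, explain any gaps.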
Operational continuity and monitoring
Shift to continuous monitoring with scheduled rescans at six-hour, daily, weekly, or monthly intervals to maintain visibility. Configure email alerts with hourly rate limiting and HMAC-SHA256-signed webhooks; note that a webhook is automatically disabled after five consecutive delivery failures. This preserves notification workflows without overwhelming recipients.
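Receivers should verify the HMAC-SHA256 signature before trusting a webhook payload. The sketch below shows the general technique with openssl; the signature header name and hex encoding are assumptions, so check the webhook documentation for the actual wire format:

```shell
# Sketch: verify a webhook body against its HMAC-SHA256 signature.
# ASSUMPTIONS: the signature is lowercase hex over the raw request body;
# the shared secret comes from your dashboard configuration.
verify_webhook_sig() {
  secret="$1"
  body_file="$2"      # raw, unmodified request body
  received_sig="$3"   # value taken from the signature header
  expected_sig=$(openssl dgst -sha256 -hmac "$secret" -binary < "$body_file" \
    | od -An -tx1 | tr -d ' \n')
  if [ "$expected_sig" = "$received_sig" ]; then
    echo "signature valid"
  else
    echo "signature mismatch" >&2
    return 1
  fi
}
```

Always compute the HMAC over the raw bytes as received; re-serializing the JSON first will change the digest and cause spurious mismatches.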
Use the dashboard to track score trends, download branded compliance PDFs aligned to PCI-DSS 4.0, SOC 2 Type II, and OWASP API Top 10 (2023), and review diffs between scans to identify new findings or regressions. Data deletion on demand is supported, with stored scan results purged within 30 days of cancellation, and customer data is never used for model training.
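The between-scan diff review can also be reproduced offline against exported reports. A sketch assuming a hypothetical report format where findings appear as a JSON array of objects with an `id` field (adjust the jq path to the real schema):

```shell
# Print finding IDs present in the latest report but not the previous one.
# ASSUMPTION: reports are JSON with a top-level "findings" array of {id, ...}.
# Requires jq.
new_findings() {
  jq -r '.findings[].id' "$1" | sort > /tmp/prev_ids.txt
  jq -r '.findings[].id' "$2" | sort > /tmp/curr_ids.txt
  comm -13 /tmp/prev_ids.txt /tmp/curr_ids.txt
}
```

`comm -13` suppresses lines unique to the first file and lines common to both, leaving only IDs new in the latest scan, i.e. candidate regressions to triage.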