The Email That Costs You Four More Months

You submitted your managed package six weeks ago. You've been watching the partner dashboard, refreshing, waiting. Then the email arrives — a single paragraph from the Salesforce security review team. Your app has been rejected.

The reason is something you could have fixed in an afternoon. A missing with sharing declaration on three Apex classes. CRUD/FLS checks not enforced before two SOQL queries. Nothing architecturally broken — just security patterns that Salesforce's reviewers expect and your scanner didn't catch.

Now you fix it. You resubmit. And you go back to the end of the queue. Another 8–12 weeks. If you were timing this launch for a Dreamforce demo or a partner tier advancement, you've just missed your window. If you have customers waiting on the AppExchange listing to go live, they're still waiting.

This is the standard experience for independent Salesforce ISVs. Not because the security requirements are unreasonable — they're not — but because the gap between what the automated scanner catches and what the human reviewer actually checks is large enough to drive a managed package through. Most developers don't find out about that gap until rejection.

AI changes that equation.

What the Security Review Actually Checks (That Your Scanner Misses)

Salesforce's AppExchange security review has two layers: an automated scan using the Security Source Scanner, and a manual human review against an internal checklist. The scanner finds obvious pattern violations. The human reviewer finds the contextual ones — issues that require understanding how your code actually flows at runtime, not just what it looks like statically.

The most common rejection triggers, based on developer community reports and Salesforce partner documentation:

CRUD and Field-Level Security Violations

This is the most frequent rejection category. Every SOQL query and DML operation that touches Salesforce data needs to enforce the running user's object and field permissions — not just assume admin-level access because your tests run as a system administrator. Reviewers check for describe-based checks such as Schema.sObjectType.Account.isAccessible(), Security.stripInaccessible() usage, WITH USER_MODE or WITH SECURITY_ENFORCED clauses, and whether your Lightning components respect FLS on every data operation.

The trap: most developers know this rule and still fail it, because FLS checking has to happen in the right layer of the stack. Putting it in the controller doesn't help if your wire adapter or Apex REST endpoint bypasses it. The reviewer checks every path.
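A minimal sketch of the enforcement patterns reviewers look for. WITH USER_MODE applies the running user's CRUD/FLS at query time and fails fast; Security.stripInaccessible() is the alternative when you'd rather drop unreadable fields than throw:

```apex
public with sharing class AccountReader {
    public static List<Account> getAccounts() {
        // Throws at runtime if the user lacks read access to Account or Name
        return [SELECT Id, Name FROM Account WITH USER_MODE LIMIT 100];
    }

    public static List<Account> getAccountsStripped() {
        // Alternative: strip fields the user can't read instead of failing
        List<Account> rows =
            [SELECT Id, Name, AnnualRevenue FROM Account LIMIT 100];
        SObjectAccessDecision decision =
            Security.stripInaccessible(AccessType.READABLE, rows);
        return decision.getRecords();
    }
}
```

Either pattern satisfies the check; what fails review is a query path with neither, or enforcement in one layer that an alternate entry point skips.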

SOQL Injection

Dynamic SOQL — queries built by concatenating strings — is vulnerable to injection if untrusted input is included. Salesforce expects either bound variables or explicit escaping via String.escapeSingleQuotes(). The rejection trigger isn't just the pattern — it's anywhere user-supplied input reaches a dynamic query, even indirectly through a wrapper method.
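The two accepted remediations, sketched side by side. The bind variable is preferred because the input is never parsed as SOQL; String.escapeSingleQuotes() is the fallback when the query genuinely has to be assembled dynamically:

```apex
public with sharing class ContactSearch {
    // Vulnerable (don't do this): input concatenated straight into SOQL
    // String q = 'SELECT Id FROM Contact WHERE LastName = \''
    //     + userInput + '\'';

    // Safe: bind variable — input is treated as data, never as query syntax
    public static List<Contact> byLastName(String userInput) {
        return [SELECT Id, LastName FROM Contact WHERE LastName = :userInput
                WITH USER_MODE];
    }

    // If the query must be dynamic, escape the input explicitly
    public static List<Contact> byLastNameDynamic(String userInput) {
        String safe = String.escapeSingleQuotes(userInput);
        return Database.query(
            'SELECT Id, LastName FROM Contact WHERE LastName = \''
            + safe + '\'');
    }
}
```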

XSS in Visualforce and Aura Components

Any output rendered to the browser that includes unescaped user data is a failure. Visualforce has built-in output encoding functions. Aura and LWC auto-escape standard template bindings, but manually managed DOM is your responsibility. Reviewers look for HTMLENCODE() (and JSENCODE, URLENCODE) in Visualforce, lwc:dom="manual" regions in LWC, and anywhere innerHTML or equivalent appears in JavaScript.
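A Visualforce sketch of the pattern, assuming a hypothetical userComment controller property holding user-supplied text:

```html
<!-- Vulnerable: escaping disabled, raw user data reaches the page -->
<!-- <apex:outputText value="{!userComment}" escape="false"/> -->

<!-- Safe: default escaping stays on -->
<apex:outputText value="{!userComment}"/>

<!-- Safe: explicit encoding when escape="false" is unavoidable -->
<apex:outputText value="{!HTMLENCODE(userComment)}" escape="false"/>
```

The rejection trigger is almost always an escape="false" (or an innerHTML assignment in JavaScript) on a value the reviewer can trace back to user input.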

Hardcoded Credentials and Sensitive Data

API keys, passwords, org-specific IDs, and endpoint URLs hardcoded in Apex, JavaScript, or metadata files are automatic failures. This includes values stored in unprotected Custom Settings — the expected patterns are Protected Custom Metadata Types or Protected Custom Settings for secrets and sensitive configuration, and Named Credentials for callout authentication. Reviewers scan all files, not just Apex classes.
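A sketch of the callout pattern reviewers expect, where Billing_API is a hypothetical Named Credential that holds the base URL and authentication so neither appears in code:

```apex
public with sharing class BillingApiClient {
    public static HttpResponse fetchInvoices() {
        HttpRequest req = new HttpRequest();
        // Rejected: req.setEndpoint('https://api.example.com?key=sk_live_...');

        // Expected: the Named Credential supplies the endpoint and credentials
        req.setEndpoint('callout:Billing_API/invoices');
        req.setMethod('GET');
        return new Http().send(req);
    }
}
```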

Sharing Declaration Gaps

Every Apex class must have an explicit sharing declaration: with sharing, without sharing, or inherited sharing. An implicit decision (no keyword) is a rejection. More importantly, the declaration has to be semantically correct — classes handling user data shouldn't be without sharing unless there's a documented justification. Reviewers check both presence and appropriateness.
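A sketch of the three declarations and when each is appropriate (each class would live in its own file):

```apex
// Respects record sharing — the default choice for anything user-facing
public with sharing class CaseService {
    public List<Case> myCases() {
        return [SELECT Id, Subject FROM Case WITH USER_MODE];
    }
}

// Takes the caller's sharing mode — the right choice for utility classes,
// which should never silently escalate to system context
public inherited sharing class CaseUtils { }

// Allowed only with a documented justification, e.g. a maintenance
// job that must see all records regardless of ownership
public without sharing class CaseCleanupJob { }
```

The bare form — public class CaseService — is what reviewers flag, even when the behavior happens to be safe.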

Remote Site Settings and HTTP Callouts

External callouts need validated endpoints. Broad remote site settings that allow wildcard domains are flagged. Any HTTP callout where the endpoint is constructed from user input is a security violation. Reviewers trace callout endpoints back to their sources.
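One way to satisfy the endpoint-tracing check is an explicit allowlist, sketched below with a hypothetical hooks.example.com host — the point is that a caller-supplied URL never reaches setEndpoint() unvalidated:

```apex
public with sharing class WebhookSender {
    public class EndpointNotAllowedException extends Exception {}

    // Allowlist of callout targets; never build an endpoint from raw input
    private static final Set<String> ALLOWED_HOSTS =
        new Set<String>{ 'hooks.example.com' };

    public static void send(String targetUrl, String payload) {
        Url parsed = new Url(targetUrl);
        if (!ALLOWED_HOSTS.contains(parsed.getHost())) {
            throw new EndpointNotAllowedException(
                'Endpoint not allowed: ' + parsed.getHost());
        }
        HttpRequest req = new HttpRequest();
        req.setEndpoint(targetUrl);
        req.setMethod('POST');
        req.setBody(payload);
        new Http().send(req);
    }
}
```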

Platform Encryption Gaps

If your app stores data that should be compatible with Salesforce Shield Platform Encryption, reviewers expect you to have tested for it. Apps that break when encryption is enabled, or that explicitly bypass encryption for sensitive fields, fail this check.

Why the Scanner Isn't Enough

The Security Source Scanner Salesforce provides catches the obvious pattern violations: clearly dynamic SOQL without escaping, output encoding missing on a specific tag. What it doesn't catch:

  • Call chain violations — Your controller enforces CRUD/FLS, but a REST endpoint calls an internal method that skips it. The scanner sees each method in isolation; the reviewer traces the data flow.
  • Configuration-level issues — Sharing declarations, remote site settings, and Custom Settings encryption aren't code — the scanner doesn't evaluate them in context the way a reviewer does.
  • Indirect injection paths — User input that travels through a helper method before reaching a dynamic query. Each step looks clean individually. Together, they're a violation.
  • Missing declarations vs. incorrect declarations — The scanner flags absence. The reviewer flags both absence and semantic incorrectness (a without sharing class that handles PII).
  • Package-level patterns — Reviewers evaluate the package holistically. If your Apex is clean but your Lightning components pass data in ways that bypass the controller's security checks, that's a failure the scanner misses.
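The call-chain case is worth seeing concretely. In this sketch (Invoice__c and the helper class are hypothetical, and each class would be its own file), every method passes a per-method scan, yet the package fails review:

```apex
@RestResource(urlMapping='/invoices/*')
global with sharing class InvoiceRest {
    @HttpGet
    global static List<Invoice__c> doGet() {
        // No CRUD/FLS check here — relies on the helper, which has none either
        return InvoiceHelper.fetchAll();
    }
}

public inherited sharing class InvoiceHelper {
    public static List<Invoice__c> fetchAll() {
        // The reviewer traces the flow from the REST endpoint to this query
        // and flags the missing enforcement (no WITH USER_MODE, no describe check)
        return [SELECT Id, Amount__c FROM Invoice__c];
    }
}
```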

The scanner gives you a first pass. The human reviewer gives you the real pass. The problem is the human reviewer takes 8–12 weeks and doesn't tell you what they found until after the queue wait.

What an AI Pre-Review Triage Report Does

An AI-generated triage report front-loads the human review layer. It doesn't just run pattern matching — it reasons about your code the way a reviewer would, tracing data flows, evaluating context, and mapping findings against the actual rejection checklist.

A well-built triage report for an AppExchange submission would:

Map Every Data Path for CRUD/FLS

Trace every SOQL query and DML statement back through the call chain to identify where enforcement is and isn't present. Flag not just missing checks but misplaced checks — enforced at the wrong layer, bypassed by an alternate code path, or absent in REST endpoints even when present in the controller.

Audit Every Dynamic SOQL Pattern

Find every string concatenation that might reach a SOQL query and trace whether untrusted input can reach it. Flag unbound variables and missing escape calls. Generate remediation snippets showing the bound variable equivalent.

Inventory All Output Rendering

Identify every point where data reaches the browser — Visualforce tags, Aura expressions, LWC template bindings, JavaScript DOM operations — and flag unencoded output. Prioritize by whether the data source is user-controlled.

Scan All Files for Hardcoded Values

Not just Apex classes. JavaScript, metadata XML, custom object definitions, named credentials, remote site settings. Pattern match for strings that look like API keys, passwords, org IDs, and domain-specific endpoints.

Validate Every Sharing Declaration

Check presence across all Apex classes. Flag implicit decisions. Evaluate semantic appropriateness — does the declared sharing mode match what the class actually does with data?

Generate a Pre-Submission Sign-Off Checklist

The final output isn't just a list of issues. It's a checklist formatted the way the Salesforce reviewer would evaluate it — each item marked Pass, Fail, or Review Required, with severity and remediation guidance. Developers can work through the checklist, fix each item, verify, and sign off before submission with actual evidence that the review-critical patterns are addressed.

The Business Case Is Simple

Every rejection costs you the queue wait again. If your first submission has three fixable issues, you spend 16–24 weeks in review instead of 8–12. If you're a solo developer or a small ISV team, that's months of revenue blocked, partner tier advancement delayed, and the competitive window for your launch potentially closed.

The triage report is the difference between submitting with confidence and submitting and hoping. It doesn't guarantee a pass — Salesforce's reviewers have judgment that goes beyond any checklist. But it eliminates the preventable failures. The ones that cost you months for an afternoon's worth of fixes.

For independent ISVs who've been through one rejection cycle, this kind of tool changes the submission calculus entirely. You don't submit until the triage report is clean. You treat the AI report as your internal security reviewer and the Salesforce review as the external audit. The pass rate difference is significant.

Building the Triage Stack

The current approach most ISVs take: run the Security Source Scanner, fix what it flags, submit, and hope. A better stack:

  1. PMD with Salesforce rules — Open-source static analysis with Apex-specific rulesets. Catches many pattern violations. Good first pass, not sufficient on its own.
  2. AI code review pass — Upload your full codebase to a capable AI (Claude, GPT-4) with a system prompt that instructs it to evaluate against the AppExchange security review checklist. Ask for findings by severity and rejection likelihood. Ask for remediation code, not just flags.
  3. Manual checklist review — Use the Salesforce Security Review Checklist from the Partner Community to walk through configuration-level items the AI may not have full context for (named credentials setup, remote site settings intent).
  4. Security Source Scanner final pass — After fixing AI-flagged items, run the official scanner to confirm you haven't introduced new issues and to see what the automated layer of the official review will see.

The AI pass is the highest-leverage step. It finds the contextual, call-chain, and semantic issues that PMD and the scanner miss. It also explains findings in plain language and generates fix examples — which matters when you're a developer who writes functional Apex but doesn't specialize in security patterns.

The Prompt That Gets Results

If you want to run an AI pre-review triage yourself today, the system prompt that works:

"You are a Salesforce AppExchange security reviewer. I'm going to share my managed package codebase. Review it against the Salesforce AppExchange security review checklist and identify every issue that would cause a rejection. For each issue: describe the specific violation, where it appears in the code, why it fails the Salesforce standard, and provide the corrected code snippet. Prioritize by rejection likelihood. Format your output as a pre-submission triage report with a sign-off checklist."

Then paste your Apex classes, LWC components, and metadata XML in context. For larger packages, batch by functional area — all classes related to data access first, then UI components, then configuration metadata.

The report you get back won't be perfect. The AI doesn't know Salesforce's internal reviewer behavior the way a 10-year Salesforce partner does. But it will catch a significant portion of the rejection triggers that the scanner misses — and it will do it in minutes, not weeks.

Submit Once. Ship.

The AppExchange security review isn't going away, and it shouldn't — Salesforce customers trust that listed apps meet a security standard. The review process being slow and opaque is a Salesforce operational problem that ISVs can't fix. What they can fix is the preventable rejection rate.

AI pre-review triage is the tool that closes the gap between what the scanner checks and what the reviewer actually evaluates. Run the triage, fix the findings, submit with a clean report. The developers who start doing this systematically will have dramatically better first-pass rates — and that means shipping months faster than the developers who are still learning what the checklist says from rejection emails.

You built the app. You understand the business problem it solves. Don't let a fixable CRUD/FLS gap or a missing sharing declaration be the reason you're still in the queue when your launch window closes.