AI & the Salesforce Developer: How the Role Is Changing

Updated 30/04/2026

Salesforce development today still rests on the same foundations: the data model, Apex, triggers, governor limits, bulkification, asynchronous patterns, testing, and deployment. Whether you learned those by working through this series, on the job, or somewhere else, every one of them is still something you are expected to write, debug, and reason about. That is still how the platform expects you to think.

But it is no longer how most developers spend most of their time.

AI coding assistants are fundamentally changing the daily work of a Salesforce developer. They can draft Apex classes, generate test suites, scaffold Lightning Web Components, write SOQL queries from natural language, and even execute development workflows inside the IDE. The time from idea to an initial draft has collapsed from days or hours to minutes or seconds.

But “initial draft code” is not production code. AI tools are accelerators, not substitutes (today at least). They don’t understand your org’s history, your governor limit budget, your integration boundaries, or your business rules. The developer’s value is shifting away from typing code line by line, and toward designing systems, curating context, reviewing output, and owning outcomes.

This article explores where AI fits into the Salesforce developer workflow, where it struggles, and how the role itself is evolving, regardless of how you arrived at it.


🧰 The AI Tooling Landscape

In Salesforce Development Fundamentals: Part 1, we briefly introduced AI-assisted development tools. Let’s look at the current landscape in more detail.

  • Agentforce Vibes Extension (previously Agentforce for Developers, formerly Einstein for Developers): The VS Code extension for AI-assisted Salesforce development. It can generate Apex from natural-language prompts, explain existing code, and suggest inline completions with stronger Salesforce context than generic coding assistants.

  • Agentforce Vibes IDE: The browser-based Salesforce development environment with Agentforce built in. It offers the same core AI workflow in a hosted setup and is useful when you want an org-aware environment without local IDE setup.

  • MCP-powered Agentforce tooling (including Agentforce DX / Salesforce-hosted MCP tools): A capability layer that lets the assistant discover and use tools for richer org and metadata context. This is what enables more agentic workflows than simple code autocomplete.

For additional learning, start with Agentforce Vibes IDE: Quick Look for a practical walkthrough of the browser-based IDE. Next, try the Agentforce Vibes Extension module for the VS Code workflow. If you want a more hands-on exercise, continue with the Quick Start: Troubleshoot Code with Agentforce Vibes project. For in-depth documentation covering features, workflows, and platform integration, visit the official Agentforce Vibes Developer Guide.

General-purpose LLM-based generative coding assistants in your IDE (for example GitHub Copilot in VS Code, Cursor, or Claude-powered coding assistants) and in the terminal/CLI workflow (for example command-line assistants that help draft commands, scripts, or deployment steps) work well alongside the Salesforce Extension Pack. These tools don’t have native org metadata awareness by default, but they are strong at broad coding tasks and have seen large amounts of Apex, SOQL, JavaScript, and testing patterns. For common Salesforce patterns (CRUD service methods, trigger handlers, queueable scaffolds, and test factories), they often produce surprisingly competent starting points.

Imagine you need a Queueable job that calls an external API to verify Account addresses. A weak prompt would ask only for “a class that verifies addresses.” A better prompt gives the assistant the platform constraints you already know matter:

“Write a with sharing Queueable Apex class named AccountAddressVerificationJob that implements Database.AllowsCallouts. It should accept a List<Id> of Account IDs, ignore blank input, query only the billing address fields it needs, and call a Named Credential endpoint called Address_Verification_API using the callout: syntax. Send the addresses in one JSON POST request if the API supports bulk verification; if you choose one request per Account, explain how the code avoids the 100-callout limit. For successful verified results, set Verified__c = true. Use Database.update(accountsToUpdate, false) so one failed Account update does not roll back the rest. Include basic handling for non-200 responses and malformed responses, and add an Apex test class using HttpCalloutMock for a success response and a failure response.”

A good AI tool can turn that into a reasonable initial draft in seconds: Queueable, Database.AllowsCallouts, a Named Credential endpoint, request/response wrappers, and a matching test class. But the draft still needs review. You should check:

  • Is the SOQL query selective and limited to the fields the job actually needs?
  • Does the code use one bulk callout where possible, or otherwise guard against the 100-callout-per-transaction limit?
  • Does it handle partial DML failures with Database.update(accounts, false) and inspect the returned SaveResult values?
  • Are non-200 responses, malformed JSON, missing address fields, and empty input handled intentionally?
  • Do the tests use HttpCalloutMock and assert both the successful path and at least one failure path?

Every one of those checks requires platform knowledge: governor limits, callout patterns, bulk DML, error handling, and meaningful assertions.
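To make the review concrete, here is roughly the shape a draft that passes those checks would take. This is a sketch, not a finished implementation: the class, field, and Named Credential names come from the prompt above, the endpoint path and response handling are simplified assumptions, and a real job would add logging and retry behaviour.

```apex
public with sharing class AccountAddressVerificationJob implements Queueable, Database.AllowsCallouts {

    private List<Id> accountIds;

    public AccountAddressVerificationJob(List<Id> accountIds) {
        this.accountIds = accountIds;
    }

    public void execute(QueueableContext context) {
        if (accountIds == null || accountIds.isEmpty()) {
            return; // ignore blank input, as the prompt requires
        }

        // One selective query, limited to the billing fields the job needs
        List<Account> accounts = [
            SELECT Id, BillingStreet, BillingCity, BillingPostalCode, BillingCountry
            FROM Account
            WHERE Id IN :accountIds
        ];

        // One bulk callout via the Named Credential, not one callout per Account
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Address_Verification_API/verify'); // path is illustrative
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(accounts));

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            return; // basic non-200 handling; a real job would log or retry here
        }

        // Simplified: assume the bulk response verified every address
        List<Account> toUpdate = new List<Account>();
        for (Account acc : accounts) {
            acc.Verified__c = true;
            toUpdate.add(acc);
        }

        // Partial success: one failed Account update does not roll back the rest
        Database.update(toUpdate, false);
    }
}
```

Notice that every line the checklist cares about (the single query, the single callout, the partial-success DML) is visible at a glance, which is exactly what you want when reviewing generated drafts.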


✍️ AI-Assisted Coding: Apex, Triggers & Bulk Patterns


AI is strongest at generating structural code: well-documented patterns and scaffolding that appear across many Salesforce codebases.

  • Trigger handler scaffolding: The one-trigger-per-object pattern, with a switch on Trigger.operationType dispatching into a handler class skeleton. This structure is well documented, and AI usually drafts it quickly and correctly.
  • Bulkification patterns: The collect -> query once -> process in memory pattern. AI often produces the Set<Id> collection, one SOQL query into a Map<Id, SObject>, and a single DML operation outside loops.
  • Async boilerplate: Batch Apex (start / execute / finish), Queueable classes (implements Queueable), and Scheduled Apex wrappers. These shapes are highly templated and generally generated well.
  • DML structure: Building a List<SObject> during processing and calling insert, update, or Database.update(..., false) once after the loop. AI is usually strong at this baseline pattern.
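The collect -> query once -> process pattern from the list above looks like this in a handler method. The object and field choices here are illustrative (Account_Industry__c is a hypothetical custom field on Contact), but the three-step shape is the thing to look for in generated code:

```apex
public static void stampAccountIndustry(List<Contact> newContacts) {
    // 1. Collect parent IDs into a Set
    Set<Id> accountIds = new Set<Id>();
    for (Contact con : newContacts) {
        if (con.AccountId != null) {
            accountIds.add(con.AccountId);
        }
    }
    if (accountIds.isEmpty()) {
        return;
    }

    // 2. Query once into a Map, outside any loop
    Map<Id, Account> accountsById = new Map<Id, Account>(
        [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
    );

    // 3. Process in memory, then one DML statement after the loop
    List<Contact> toUpdate = new List<Contact>();
    for (Contact con : newContacts) {
        Account parent = accountsById.get(con.AccountId);
        if (parent != null && parent.Industry != null) {
            // Account_Industry__c is hypothetical; substitute your own field
            toUpdate.add(new Contact(Id = con.Id, Account_Industry__c = parent.Industry));
        }
    }
    Database.update(toUpdate, false);
}
```

Whatever the business logic, the governor-limit math stays the same: one query and one DML statement per transaction step, regardless of whether the trigger received 1 record or 200.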

The failure modes are also predictable:

  • SOQL or DML inside loops. Even though this is a classic anti-pattern, models still produce it when prompts are vague or framed around single-record examples. It passes casual review but fails at scale under governor limits.
  • Ignoring field-change checks with Trigger.oldMap. AI often drafts update logic that runs for every row in Trigger.new without validating whether the relevant field changed. The result is wasted CPU, unnecessary DML, and unexpected automation side effects.
  • Weak null and data-shape handling. AI assumes tidy data. Production orgs have incomplete lookups, optional fields, and edge-case values. Without explicit null checks and guards, generated code fails on realistic records.
  • Wrong async choice for the workload. Models may pick @future where Queueable (or chained Queueables) is more appropriate, or use synchronous logic where Batch Apex is required for volume. Choosing the wrong mechanism hurts reliability and operability.
  • Overconfident tests. AI-generated tests frequently assert only the happy path, skip bulk scenarios, or avoid meaningful failure assertions. The code may show high coverage but still miss real regressions.

🛡️ Solving for These Failures: Skills, Guardrails & Prompt Discipline


These failure modes are predictable, which means they’re preventable. The fix isn’t to stop using AI; it’s to give it better instructions and build review habits that catch what it misses.

1. Embed platform rules directly in your prompts.

Don’t assume the model remembers Salesforce constraints. State them explicitly every time:

“All SOQL queries must be outside loops. Collect IDs into a Set<Id>, query once into a Map<Id, SObject>, and iterate over the map. No DML inside loops. Use Database.update(records, false) for partial success.”

This is repetitive, but it works. Many AI tools support custom instructions, system prompts, or skills files (reusable instruction snippets you attach to your workspace or profile). Write your Salesforce guardrails once and load them into every session. A good baseline skill file includes: bulkification rules, governor limit reminders, WITH USER_MODE for SOQL, CRUD/FLS enforcement, Trigger.oldMap field-change checks, and your preferred error-handling pattern.

2. Require field-change checks in trigger prompts.

When prompting for trigger handler logic, explicitly tell the model to compare Trigger.new against Trigger.oldMap:

“Only process records where Status__c has changed: newRecord.Status__c != Trigger.oldMap.get(newRecord.Id).Status__c. Skip unchanged records.”

If you include this in a reusable skill or instruction file, every trigger prompt inherits the guard automatically.
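In Apex, the guard you are asking for looks like this. Status__c here is the hypothetical field from the prompt above, and My_Object__c stands in for whatever object the trigger runs on:

```apex
// Only process records whose Status__c actually changed in this update
List<My_Object__c> changed = new List<My_Object__c>();
for (My_Object__c newRecord : (List<My_Object__c>) Trigger.new) {
    My_Object__c oldRecord = (My_Object__c) Trigger.oldMap.get(newRecord.Id);
    if (newRecord.Status__c != oldRecord.Status__c) {
        changed.add(newRecord);
    }
}
// Downstream logic iterates over `changed`, skipping unchanged rows entirely
```

This is cheap to write and expensive to forget: without it, every field update on the object re-runs your logic for all rows in the batch.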

3. Specify null and data-shape handling up front.

Tell the model what “messy” looks like in your org:

“The AccountId lookup on Contact can be null. BillingCity may be blank. Guard against null references before accessing parent fields.”

A practical rule to include in your prompt or skill file: “Before accessing any relationship field, check that the lookup is not null. Before using any string field in logic, check for null and blank.”
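Applied to the Contact example above, that rule turns into a pair of guards. This sketch assumes the Contacts were queried with Account.BillingCity in the SELECT list (relationship fields are only populated when queried):

```apex
for (Contact con : contacts) {
    // Guard the lookup before touching any parent field
    if (con.AccountId == null || con.Account == null) {
        continue; // orphan Contact: skip rather than dereference a null parent
    }
    // Guard string fields before using them in logic
    if (String.isBlank(con.Account.BillingCity)) {
        continue; // blank city: nothing to verify for this record
    }
    // Safe: con.Account.BillingCity is non-null and non-blank from here on
}
```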

4. State the async requirement and why.

Instead of letting the model choose the async pattern, tell it which one to use and what constraint drives the choice:

“Use a Queueable, not @future, because this job needs to chain into a second callout step. The method must implement Database.AllowsCallouts.”

If you’re unsure which mechanism fits, include volume context: “This runs on up to 50,000 records nightly” steers the model toward Batch Apex without you needing to name it.

5. Prompt for meaningful test assertions and negative cases.

AI will generate happy-path tests unless you tell it otherwise. Be specific:

“Include a test that inserts 200 records with mixed valid and invalid data. Assert that valid records are updated correctly and invalid records are not modified. Include a test where the user lacks edit permission and assert that the operation respects FLS. Do not use SeeAllData=true.”

A strong skill file includes a test-generation checklist: bulk insert (200+), at least one negative/failure case, at least one permission-aware case, assertions on field values rather than just record counts, and no reliance on org data.
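Here is a hedged sketch of what part of that checklist looks like as an actual test. ContactService.normalizeEmails is a hypothetical method under test; the point is the shape: a 200-record bulk insert with deliberately mixed data, and assertions on field values rather than row counts:

```apex
@isTest
private class ContactServiceTest {

    @isTest
    static void handlesBulkWithMixedValidity() {
        // Bulk scenario: 200 records, odd-indexed rows deliberately missing an Email
        List<Contact> contacts = new List<Contact>();
        for (Integer i = 0; i < 200; i++) {
            contacts.add(new Contact(
                LastName = 'Test' + i,
                Email = (Math.mod(i, 2) == 0) ? 'User' + i + '@Example.com' : null
            ));
        }
        insert contacts;

        Test.startTest();
        ContactService.normalizeEmails(contacts); // hypothetical method under test
        Test.stopTest();

        // Assert field values, not just counts: valid rows changed, invalid rows untouched
        for (Contact con : [SELECT Email FROM Contact WHERE Id IN :contacts]) {
            if (con.Email != null) {
                System.assertEquals(con.Email.toLowerCase(), con.Email,
                    'Valid emails should be lower-cased by normalizeEmails');
            }
        }
    }
}
```

Add the permission-aware case with System.runAs and a minimally-privileged user, per the checklist above.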

6. Use static analysis as an automated safety net.

Prompt discipline catches most issues, but it’s not foolproof. Add automated checks that run regardless of who (or what) wrote the code:

  • PMD for Apex flags SOQL and DML inside loops, missing CRUD checks, and other common violations. Run it in your CI/CD pipeline so every pull request is checked before review.
  • Salesforce Code Analyzer (the sf scanner CLI plugin) bundles PMD rules tuned for Salesforce and catches governor limit risks, security issues, and performance anti-patterns.
  • Make these gates non-negotiable: if the scan fails, the code doesn’t merge, whether it was written by a human or generated by AI.

If you want to turn this section into hands-on practice, follow this Trailhead sequence: start with Quick Start: Apex Coding for Admins, a short project that walks you through writing, running, and deploying basic Apex so you are comfortable with the moving parts before anything agent-related. Next, take Agent Customization with Apex, where you learn to expose Apex (including @InvocableMethod patterns) as well-defined actions that an agent can invoke safely and predictably. Then use Agent Customization Quick Look for a compact overview of how Agentforce fits topics, actions, and instructions together so you can place your Apex work in the full agent model. Finish with Apex for Agentforce Superbadge, a capstone exercise that validates end-to-end design, implementation, and testing of Apex that powers real Agentforce behaviour.


🔎 Natural Language to SOQL

One of AI’s most immediately useful capabilities is translating natural language into SOQL queries. Instead of looking up field API names and relationship syntax, you can describe what you need:

“Find all Contacts where the parent Account’s Industry is ‘Technology’ and the Contact has not been modified in the last 90 days. Return the Contact Name, Email, and the Account Name.”

A good AI tool will produce:

SELECT Name, Email, Account.Name
FROM Contact
WHERE Account.Industry = 'Technology'
AND LastModifiedDate < LAST_N_DAYS:90

That’s correct, and it uses the LAST_N_DAYS date literal from Part 2 and the relationship query syntax you learned. But you still need to evaluate:

  • Selectivity: Is Account.Industry indexed? On a large org, this query might not be selective (Part 3’s guidance on query selectivity).
  • Row limits: How many rows will this return? Will it breach the 50,000 query-row limit?
  • Context: Is this running in a trigger (100 SOQL limit shared across the transaction) or in a Batch start() method (50 million rows allowed)?

For a deeper exploration of AI-driven SOQL workflows, including practical prompting techniques and exporting schema context for AI tools, see the dedicated guide: Revolutionizing SOQL with AI.


🧪 AI-Assisted Testing

Testing is where AI offers enormous productivity gains but also where its limitations are most dangerous.

  • Test data factories. The TestDataFactory pattern from Part 5 is highly templated. AI can generate createAccount, createContacts, and createOpportunities methods with the right required fields and relationships almost perfectly.
  • Test method structure. The Arrange/Act/Assert pattern, @isTest annotations, Test.startTest()/Test.stopTest() wrappers, @TestSetup methods. AI produces these accurately because they’re structurally consistent.
  • Bulk test scaffolding. Generating a loop that creates 200 records, inserts them, and queries results is exactly the kind of repetitive code AI handles best.
  • HTTP mock classes. The HttpCalloutMock pattern from Part 5 is boilerplate heavy. AI generates the mock class, the respond method, and the Test.setMock registration cleanly.
The limitations show up just as predictably:

  • Assertions that don’t assert anything meaningful. AI-generated tests often assert that a record was inserted (trivially true) rather than asserting the business behaviour: that a field was set to the right value, that a validation rule blocked the save, or that a child record was created with the correct parent. Coverage without meaningful assertions is a false safety net.
  • Missing negative test cases. AI tends to test the happy path. It won’t think to test: What happens when the Account has no SLA? What if the Contact’s AccountId is null? What if the user doesn’t have edit permission? You need to prompt for these explicitly, or add them yourself.
  • Test isolation assumptions. AI sometimes generates tests that depend on org data (SeeAllData=true) or assume records exist that don’t. Part 5’s guidance on test data isolation is the check against this.
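As a reference point when reviewing generated mocks, here is a minimal HttpCalloutMock sketch. The class name and response bodies are illustrative; the pattern (a constructor-parameterised status code so one mock covers both the success and failure paths) is the reviewable part:

```apex
@isTest
public class AddressVerificationMock implements HttpCalloutMock {

    private Integer statusCode;

    public AddressVerificationMock(Integer statusCode) {
        this.statusCode = statusCode;
    }

    public HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setStatusCode(statusCode);
        res.setHeader('Content-Type', 'application/json');
        // Illustrative payloads for the success and failure branches
        res.setBody(statusCode == 200
            ? '{"verified": true}'
            : '{"error": "service unavailable"}');
        return res;
    }
}

// In test methods:
// Test.setMock(HttpCalloutMock.class, new AddressVerificationMock(200)); // success path
// Test.setMock(HttpCalloutMock.class, new AddressVerificationMock(503)); // failure path
```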

🧠 Context & Brownfield Orgs: What AI Cannot Know


AI output improves when the tool has the right context, not just a better-sounding prompt. A general-purpose assistant can recognise Apex, SOQL, and LWC patterns, but it cannot automatically know your org’s custom fields, automation, security model, or business rules. Salesforce-native tools and MCP-enabled workflows can discover more metadata for you, but you still need to decide what context matters and check what the tool used.

The core principle: do not give AI everything; give it the smallest complete picture of the work.

  • Object and field schema. Provide the relevant object names, field API names, data types, required fields, picklist values, lookup relationships, and formula fields. This dramatically improves generated Apex, SOQL, and test data.
  • Business rules. A prompt that says “create a trigger on Case” produces generic code. A prompt that says “create a trigger on Case that sets Priority to High when the parent Account’s SLA_Level__c is Platinum, and creates a review Task for the Account Owner” produces something you can actually use.
  • Existing automation and code paths. If a Flow, trigger, validation rule, managed package, or scheduled job already touches the same object or field, include that context. Otherwise, AI may create logic that conflicts with what already runs in the transaction.
  • Security and sharing expectations. State whether the code should run with sharing, use WITH USER_MODE, enforce create/read/update/delete (CRUD) and field-level security (FLS), or behave differently for different users.
  • Volume and governor limit context. Tell the tool where the code runs and at what scale. “This runs in an after update trigger for up to 200 records” leads to different code than “this runs nightly over 50,000 records.”
  • Local project patterns. Include your trigger framework, service-layer conventions, test factory style, naming standards, and error-handling pattern. AI is much more useful when it extends the codebase you already have rather than inventing a new style.

🏗️ Why Brownfield Orgs Are the Hard Case


Most Salesforce development happens in brownfield environments: orgs that have been running for years, configured by multiple admins and developers, with layers of automation that nobody fully documented or that have drifted from whatever documentation once existed.

AI tools assume a clean slate. They don’t know that:

  • A record-triggered Flow updates the same field after insert, so your trigger runs first and the Flow overwrites its value.
  • A validation rule blocks certain field combinations that your trigger bypasses (because triggers run in system mode), but a downstream integration then rejects the record because it expects those rules to hold.
  • A managed package installed its own trigger on the same object and consumes 40 SOQL queries before your code even executes.
  • Multiple custom Apex triggers exist on the same object with no documented execution order, and their interactions produce side effects nobody planned for.

Before asking AI to generate code for an existing org, you need to do the work that AI cannot:

  1. Audit existing automation. Check for triggers, Flows, validation rules, and any other automation on the object you’re touching. Setup → Object Manager → [Object] → each automation tab.
  2. Trace the Order of Execution. Understand where your new code will run relative to everything else (Part 3’s Order of Execution section).
  3. Check governor limit headroom. If existing automation already uses 60 SOQL queries in a typical save, your new trigger handler has a budget of 40, not 100.
  4. Identify undocumented dependencies. Are there scheduled jobs that update these records nightly? Are there integrations that insert or update via the API in bulk? Are there Platform Events or Change Data Capture subscriptions listening for changes on this object?

Only after this mapping can you give AI the context it needs to generate code that works in your org, not just in a clean scratch org.

Previously, you carried org context in your head and expressed it as code. Now, you carry that same context and express it as instructions, examples, metadata, and constraints. The knowledge is the same: data model, limits, execution order, business rules, and security expectations. The output format has changed.

The developers who get the most from AI tools are not just better at writing prompts. They are better at selecting useful evidence: the schema fields, relevant automations, nearby classes, failing test output, and business constraints that turn generic code into org-aware code. This is a skill that improves with practice and platform depth.


🤖 Agentforce & Autonomous Agents

Beyond code-generation assistants, Salesforce is investing heavily in autonomous AI agents through Agentforce. This is a broader shift that changes what developers build, not just how they build it.

An Agentforce agent is an AI system that can reason about a user’s request, decide which actions to take, execute those actions on the Salesforce platform, and respond with results, all without step-by-step human instruction. Think of it as the difference between a calculator (you tell it exactly what to compute) and an assistant (you describe the goal and it figures out the steps).

Developers don’t write the agent’s reasoning. That’s handled by the underlying language model. Instead, developers build and configure the components that agents use:

  • Topics: Define what the agent can help with (e.g., “Order Management”, “Account Support”).
  • Actions: The specific operations an agent can invoke. In practice these are often either Flow actions or Apex invocable actions (methods annotated with @InvocableMethod). You define the action’s behavior; the agent decides when to call it.
  • Instructions: Natural language guardrails that tell the agent what it should and shouldn’t do, how to handle edge cases, and what tone to use.
  • Data access: Agents need to query and sometimes update Salesforce data. The security model (profiles, permission sets, field-level security) you learned in the fundamentals series determines what the agent can see and do.
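A minimal Apex invocable action might be shaped like this. The class, wrapper, and label names are illustrative, but the contract is the point: bulk-safe inputs and outputs (lists in, lists out, in the same order), and a query that respects the running user's access via WITH USER_MODE:

```apex
public with sharing class GetOrderStatusAction {

    public class Request {
        @InvocableVariable(required=true)
        public Id orderId;
    }

    public class Result {
        @InvocableVariable
        public String status;
    }

    @InvocableMethod(label='Get Order Status' description='Returns the Status of an Order')
    public static List<Result> getStatus(List<Request> requests) {
        // Bulk-safe: collect IDs, query once, map results back in request order
        Set<Id> orderIds = new Set<Id>();
        for (Request r : requests) {
            orderIds.add(r.orderId);
        }

        // WITH USER_MODE: the agent only sees what the running user can see
        Map<Id, Order> ordersById = new Map<Id, Order>(
            [SELECT Id, Status FROM Order WHERE Id IN :orderIds WITH USER_MODE]
        );

        List<Result> results = new List<Result>();
        for (Request r : requests) {
            Result res = new Result();
            Order o = ordersById.get(r.orderId);
            res.status = (o == null) ? null : o.Status; // explicit "not found" path
            results.add(res);
        }
        return results;
    }
}
```

The agent decides when to call this action; the contract above decides what can happen when it does.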

🛠️ The Developer’s New Surface Area


This means the developer role expands to include:

  • Defining reliable action contracts (Flow actions and Apex @InvocableMethod actions): clear inputs, predictable outputs, idempotent behavior where possible, and explicit failure paths.
  • Designing guardrails and approval boundaries so high-impact operations (for example deletes, permission changes, or external side effects) require the right level of human review.
  • Testing agent behavior at multiple layers: unit tests for action logic, integration tests for action side effects, and scenario testing for action selection, chaining, and fallback behavior.
  • Applying least-privilege security principles to agent access (data scope, object/field permissions, and external system credentials) and validating those assumptions in realistic test scenarios.

This is a large and rapidly evolving space that deserves a dedicated deep-dive. For now, the key takeaway is that the Apex and automation patterns you’ve learned (especially invocable actions, bulk-safe DML, and robust error handling) are the building blocks agents rely on.

For a broad, high-level introduction to Salesforce’s autonomous agent platform, start with the Agentforce Overview before moving on to the more developer-focused Agentforce for Developers. For architectural guidance on building trustworthy AI solutions, see Salesforce Well-Architected: AI.


🔄 The Evolving Developer Role

The biggest shift is not that AI writes code. It is that the developer moves further into the work that surrounds the code: choosing the right design, feeding the assistant the right context, reviewing the output, and owning what happens after it ships.

Salesforce describes this as developers moving “up the stack” toward system design, quality engineering, and cross-system thinking (Salesforce Developers blog: The Future of the Salesforce Developer in the Agentic AI Era). The shorter way to say it is: wisdom over speed. Typing fast matters less when an assistant can generate a class in seconds. Knowing whether that class should exist, how it fits the org, and what it might break matters more.

In practice, the job starts to feel different in a few concrete ways:

  • You spend less time starting from a blank file and more time shaping the first draft.
  • You spend more time collecting context before you ask AI to build anything.
  • You review more code that you did not personally type line by line.
  • You become more responsible for tests, guardrails, observability, and production behaviour.
  • You sit closer to business decisions because technical choices can now be generated faster than they can be understood.

🧭 From Code Writer to Technical Decision-Maker


AI can produce options quickly. That makes your judgment more important, not less. You are the person deciding what problem is actually being solved and which platform pattern should solve it.

  • Choosing the right tool. AI can draft a trigger, Queueable, Flow, Batch Apex class, or Platform Event subscriber. It cannot reliably decide which one belongs in your org for this business process.
  • Breaking work into safe pieces. Planning prompts and autonomous agents can help map out the work, but strong developers still refine that plan into safe units: service methods, invocable actions, tests, permissions, deployment steps, monitoring, and rollback paths.
  • Owning trade-offs. Synchronous vs async, declarative vs code, package vs source deploy, build vs buy: AI can list options and pros/cons, but you decide what is safe, maintainable, and aligned with the business.

When generation becomes cheap, review becomes the scarce skill. You may review more output, faster, from more tools and more people. The danger is accepting plausible code because it looks clean.

  • Generated code still needs expert review. Check governor limits, bulk safety, null handling, sharing/CRUD/FLS expectations, automation side effects, and whether the code fits the transaction it runs inside.
  • Generated tests still need truth. AI can produce coverage without verification. Your job is to demand assertions that prove business behaviour, bulk scenarios, security assumptions, and failure paths.
  • Guardrails become engineering work. PR checklists, Salesforce Code Analyzer, targeted regression suites, CI validation gates, and deployment validation are part of how teams scale AI safely.
  • You must understand before you merge. AI makes it easier to create code you do not fully understand. A developer is still accountable for the code that reaches production.

AI compresses the time between idea and implementation. That means developers can be pulled into decisions earlier, because the cost of generating a technical option is lower than the cost of choosing the wrong one.

  • Translate business intent into constraints. Vague goals become object names, field names, sharing expectations, validation rules, integration boundaries, acceptance criteria, and test scenarios.
  • Explain risk in business terms. “If this vendor API is down, orders queue and fulfilment slips by hours” is more useful than “this integration is risky.”
  • Design for maintainability. Naming, modularity, documentation, and clear boundaries matter more in an AI-assisted codebase, because future changes may also be generated from partial context.
  • Own production impact. Logs, monitoring, error notifications, data repair paths, and incident response are part of shipping the feature, not someone else’s problem.

⚖️ Limitations, Risks & the Human Developer


AI tools are powerful, but their failure modes are specific and predictable. Understanding them is part of the job.

Hallucinated Apex is code that looks like Salesforce development but is not actually valid Apex, is not available in your org, or is not supported by the API version you are using. This is common because Apex looks similar to Java and C#, while also having its own platform-specific rules.

  • Invented platform APIs. Models can invent plausible helper classes, static utilities, or standard-library methods that do not exist in Apex or in your project. The names often sound reasonable, which is why they can slip through a quick visual review.

  • Java- or C#-style Apex. This includes syntax Apex does not support, such as try-with-resources, var, or Java-style instanceof pattern matching like if (value instanceof Account account). It can also mean calling a real Apex method as if it behaved like another language. For example, Apex String.format uses numbered placeholders and a List<String>, not printf-style placeholders and separate arguments.

  • Wrong platform or timeline. Models may reference features from the wrong Salesforce API version, confuse Apex with another Salesforce runtime, or describe capabilities that are not generally available yet.

The fix is simple but important: compile early, trust the Apex reference over the model, and treat any unfamiliar method, class, or syntax as something to verify before you build on it.
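The String.format example is worth spelling out, because the wrong version looks entirely plausible on a quick read:

```apex
Integer done = 5;
Integer total = 10;

// What a Java-minded model might emit (printf-style; not how Apex String.format works):
// String msg = String.format('Verified %s of %s accounts', done, total);

// The actual Apex signature: numbered placeholders plus a List<String>
String msg = String.format(
    'Verified {0} of {1} accounts',
    new List<String>{ String.valueOf(done), String.valueOf(total) }
);
// msg == 'Verified 5 of 10 accounts'
```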

AI-generated Apex almost never includes CRUD/FLS (Create, Read, Update, Delete / Field-Level Security) checks. In production code that runs in user context, you must add explicit checks such as Schema.sObjectType.Account.isAccessible(), or use WITH USER_MODE in SOQL (Spring ‘23+). AI typically generates code that runs in system mode with full access. A security review will reject it.
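The two enforcement options look like this side by side (the error message and handling are illustrative; choose the pattern your codebase standardises on):

```apex
// Option 1: WITH USER_MODE (Spring '23+) — the query itself enforces CRUD and FLS,
// throwing a runtime exception if the user lacks access to the object or fields
List<Account> visible = [SELECT Id, Name FROM Account WITH USER_MODE];

// Option 2: explicit describe checks before touching the object
if (!Schema.sObjectType.Account.isAccessible()) {
    // Illustrative handling; surface this however your app reports errors
    throw new AuraHandledException('You do not have access to Accounts.');
}
```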

AI tools don’t track cumulative governor limit usage across a transaction. They generate code that works in isolation but may fail when combined with other triggers, Flows, and automation in the same save operation. The developer must maintain the mental model of the full transaction.

Current AI models can only process a limited amount of text at once. A complex Salesforce org might have hundreds of Apex classes, dozens of triggers, and thousands of Flow elements. AI can’t hold all of this context simultaneously, which is why the context curation skill discussed earlier matters so much.

Perhaps the most dangerous trait: AI presents incorrect output with the same confidence as correct output. There’s no “I’m not sure about this” flag. A method that puts SOQL inside a loop reads just as authoritatively as one that doesn’t. Your platform knowledge is the only filter.


🎯 Conclusion: The Expanding Role

The Salesforce developer role is not disappearing. It is expanding.

AI compresses the implementation phase: the time between “I know what to build” and “I have a first draft.” That’s genuinely transformative. But the phases before and after implementation (understanding the problem, designing the architecture, reviewing the output, testing at scale, deploying safely, and maintaining the system over time) are all still human work. In many cases, they’re more work now, because AI enables teams to build faster, which means the volume of code to review, test, and maintain increases.

Be honest with yourself about what this means: organisations will expect more output. AI raises the productivity baseline. The pressure doesn’t disappear; it shifts from “Can you write this code?” to “Can you design, validate, and deliver this solution end-to-end?”

The developers who thrive in this landscape are those who combine:

  • Deep platform fundamentals. Governor limits, execution order, bulk patterns, security model (everything in this series). This is what lets you evaluate AI output instead of blindly trusting it.
  • Architectural judgment. Knowing when to use a trigger vs. a Flow vs. a Queueable vs. a Platform Event. Knowing how systems connect and where they can fail.
  • Context curation. The ability to articulate your org’s constraints, business rules, and existing automation clearly enough that AI tools produce usable output.
  • Quality ownership. Building the review processes, test strategies, and guardrails that ensure all code, whether human written or AI generated, meets production standards.

The tools have changed. The fundamentals haven’t. That’s why you learned them first.