
Salesforce Development Fundamentals: Part 6 - Testing & Deployment

Updated 24/04/2026


You’ve learned the language, you understand limits, you can write bulkified triggers, you know how to move heavy work off the synchronous path with asynchronous Apex, and you can connect Salesforce to external systems through integrations. But before your code can ever see a production environment, Salesforce enforces a strict requirement: you must prove it works.

Unlike many other platforms where testing is optional (even if highly recommended), Salesforce has a hard gate. To deploy Apex code to production, at least 75% of your code lines must be covered by unit tests, and those tests must all pass.

This final article in the fundamentals series covers how to write test classes, how to assert your code behaves correctly, and how to navigate the deployment process. We’ll also briefly look ahead at Lightning Web Components to set up your next learning journey.


A test class is an Apex class you write to execute your code and verify its behavior with assertions.

Test classes are marked with the @isTest annotation. This tells Salesforce to treat the class as test-only code (it does not count against your org’s Apex code size limit) and to run it in an isolated test context. By default, tests do not see your org’s real records. You can explicitly opt in with @isTest(SeeAllData=true) when a test genuinely must read existing org configuration or metadata-dependent records that are impractical to create in setup.

In most cases, keep SeeAllData off. Salesforce recommends creating the data each test needs so the test behaves the same way in a scratch org, sandbox, or production deployment. Tests that depend on existing org data can fail because a record was renamed, deleted, shared differently, or matched differently by a query, and they can also run into shared data locking problems when tests execute in parallel. Any records your test inserts, updates, or deletes exist only for that test run and are rolled back automatically when execution ends.

@isTest
private class AccountTriggerHandlerTest {
    @isTest
    static void testIndustryUpdateSetsRating() {
        // 1. Arrange — set up test data
        Account testAcc = new Account(Name = 'Test Tech', Industry = 'Technology');

        // 2. Act — execute the code (trigger runs automatically on insert)
        Test.startTest();
        insert testAcc;
        Test.stopTest();

        // 3. Assert — verify the results
        Account insertedAcc = [SELECT Rating FROM Account WHERE Id = :testAcc.Id];
        Assert.areEqual('Hot', insertedAcc.Rating, 'Rating should be Hot for Tech accounts.');
    }
}

Every good test method follows a strict AAA pattern: Arrange, Act, Assert.

  1. Arrange (Set Up Data): By default, test classes cannot see data in your org, so you must build every dependency the test relies on within the test class itself, including users, records, and relationships. You can create this data directly in each test method or in a shared @TestSetup method. This keeps tests deterministic and prevents failures caused by someone changing or deleting unrelated sandbox data.
  2. Act (Execute the Code): Call the specific unit you are testing inside the Test.startTest() / Test.stopTest() block. That unit might be a service method (InvoiceService.calculateTotals(...)), a DML statement that fires a trigger, a REST handler, or an async enqueue. All preparation work (record inserts, user creation, etc.) should happen before Test.startTest() so the governor limits inside the block belong entirely to the code under test. In the example above, insert testAcc is the Act step because the trigger we want to verify fires as a side effect of that insert.
  3. Assert (Verify Results): After Test.stopTest(), verify the outcome. Depending on what you tested, that could mean querying records to check field values, inspecting a return value, confirming an exception was thrown, or asserting that an outbound message was enqueued. Use the Assert class (Assert.areEqual, Assert.isTrue, Assert.isNotNull, etc.) to prove the result matches what you expected. Without assertions, all you know is that the code ran without crashing; you have no idea whether it actually did what it was supposed to do.

🏗️ @TestSetup: Run Setup Once Per Class


@TestSetup is an optional method you can add to a test class to create shared test data before any test methods run. Use it when multiple test methods need the same baseline data, such as a shared Account with related Contacts. Salesforce creates that data once, then gives each test method a fresh isolated copy to work with, so one test’s changes cannot affect another. This keeps your test methods focused on the behavior they are verifying instead of repeating the same setup code.

@isTest
private class ExpenseClaimServiceTest {
    @TestSetup
    static void makeData() {
        Account acc = new Account(Name = 'Acme Corp');
        insert acc;

        List<Expense_Claim__c> claims = new List<Expense_Claim__c>();
        for (Integer i = 0; i < 5; i++) {
            claims.add(new Expense_Claim__c(
                Account__c = acc.Id,
                Amount__c = 1000 + i,
                Status__c = 'Draft'
            ));
        }
        insert claims;
    }

    @isTest
    static void testApprovalFlow() {
        // Test method example: reads the @TestSetup data and verifies approval behavior.
    }

    @isTest
    static void testRejectionFlow() {
        // Test method example: reads the @TestSetup data and verifies rejection behavior.
    }
}

You already know tests run in isolation and that @TestSetup can seed a shared baseline once per class. The next friction point is shape: real features rarely need a single row. They need a small graph of related records (for example an Account, a few Contacts, Cases in specific statuses, and maybe custom objects) with fields that satisfy validation rules and lookups. If you rebuild that graph in lots of test methods, you end up with the same setup code in many places. The first required field tweak or validation change shows you how many of those copies you have to fix.

This section is about keeping test data central and reusable, patterns that make setup fast to write, easy to read, and straightforward to update when the data model evolves.

A common and practical pattern is to add a utility class, often called a test data factory. Here we put reusable methods that build the records your tests need. Salesforce documents this approach as common test utility classes for test data creation.

Here’s an example of a reusable test data factory class that centralizes and streamlines your test data creation logic.

@isTest
public class TestDataFactory {
    public static Account createAccount(String name, Boolean doInsert) {
        Account acc = new Account(Name = name);
        if (doInsert) {
            insert acc;
        }
        return acc;
    }

    public static List<Contact> createContacts(Id accountId, Integer numContacts) {
        List<Contact> contacts = new List<Contact>();
        for (Integer i = 0; i < numContacts; i++) {
            contacts.add(new Contact(
                FirstName = 'Test',
                LastName = 'Contact ' + i,
                AccountId = accountId
            ));
        }
        insert contacts;
        return contacts;
    }
}

Now, in your actual test classes, setup is a breeze:

@isTest
private class ContactLogicTest {
    @TestSetup
    static void setupTestData() {
        Account testAccount = TestDataFactory.createAccount('Acme Corp', true);
        TestDataFactory.createContacts(testAccount.Id, 5);
    }

    @isTest
    static void testContactLogic() {
        // Static variables are reset between test methods, so @TestSetup data
        // cannot be shared through class fields — re-query it in each test.
        Account testAccount = [SELECT Id FROM Account WHERE Name = 'Acme Corp' LIMIT 1];
        List<Contact> testContacts = [SELECT Id FROM Contact WHERE AccountId = :testAccount.Id];
        // ... run test
    }
}

A trigger that looks fine with one Account in a test can still blow up in production. The platform can deliver up to 200 records in a single trigger invocation, and non-bulkified patterns (queries or DML inside loops, collections that grow without bounds) show up as governor limit failures or wrong partial results. Tests that only ever insert a single row rarely catch that class of bug.

This section is about simulating bulk DML in tests—building a list at the size you care about (often 200 for triggers), running it inside Test.startTest() / Test.stopTest(), and asserting that every record ended up in the expected state.

@isTest
static void testBulkAccountInsert() {
    List<Account> testAccounts = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
        testAccounts.add(new Account(Name = 'Bulk Account ' + i, Industry = 'Technology'));
    }

    Test.startTest();
    insert testAccounts; // If your trigger isn't bulkified, it will fail here!
    Test.stopTest();

    // Query to verify
    List<Account> insertedAccounts = [SELECT Rating FROM Account WHERE Name LIKE 'Bulk Account%'];
    for (Account acc : insertedAccounts) {
        Assert.areEqual('Hot', acc.Rating, 'All 200 accounts should be updated.');
    }
}

👤 Testing as Different Users with System.runAs


By default, Apex tests run in the system context, so they do not always reflect how code behaves for an ordinary user’s session (record ownership, sharing rules, and user permissions may not be enforced). If your code logic depends on the currently logged-in user, record ownership, sharing, or role hierarchy, you should use System.runAs(user) to execute test code in the context of a specific user. Remember, you need to explicitly create (or retrieve) any test users you want to use for runAs, typically using a factory method or inline user creation in your test setup.

@isTest
static void testStandardUserCannotEditApprovedClaim() {
    User standardUser = TestDataFactory.createStandardUser();
    Expense_Claim__c claim = TestDataFactory.createApprovedClaim();

    System.runAs(standardUser) {
        claim.Status__c = 'Draft';
        // allOrNone=false returns errors on the SaveResult instead of throwing for a failed row
        Database.SaveResult sr = Database.update(claim, false);

        Assert.isFalse(
            sr.isSuccess(),
            'Standard user should not be able to revert an approved claim.'
        );
        Assert.isFalse(
            sr.getErrors().isEmpty(),
            'Expected a DML error when reverting an approved claim.'
        );
    }
}

This is one way to assert negative cases for DML: use Database.update(..., false) (or insert/delete with the same flag), then check SaveResult.isSuccess() and getErrors() instead of parsing exception message text, which breaks easily when labels or language change. For code that truly must throw, a try/catch with Assert.fail('...') if no exception is still valid.
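As a sketch of that exception-based variant — assuming a hypothetical ExpenseClaimService.submit method that throws for invalid input (not a class defined in this series):

@isTest
static void testSubmitRejectsNegativeAmount() {
    try {
        // Hypothetical service method that should throw for a negative amount
        ExpenseClaimService.submit(-100);
        Assert.fail('Expected an exception for a negative amount.');
    } catch (IllegalArgumentException e) {
        // Expected path: assert on the exception type, not its message text
        Assert.isNotNull(e);
    }
}

Catching a specific exception type keeps the test stable even if the error message wording changes later.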

Integration-style tests (HTTP callouts, Apex REST, Platform Events) are written the same way as any other Apex tests: isolate data, exercise the code under Test.startTest() / Test.stopTest(), and assert. The one integration-specific rule is that the platform blocks real HTTP callouts in tests, so you must register an HttpCalloutMock with Test.setMock and return the responses you want to exercise.
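As a minimal sketch of that mock shape (the response body here is a placeholder, not tied to any integration from this series):

@isTest
global class ExampleCalloutMock implements HttpCalloutMock {
    global HttpResponse respond(HttpRequest req) {
        // Build the fake response the code under test will receive
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status":"ok"}');
        res.setStatusCode(200);
        return res;
    }
}

A test registers it with Test.setMock(HttpCalloutMock.class, new ExampleCalloutMock()); before the code under test makes its callout, and the platform routes the request to the mock instead of the network.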

For explanations and worked examples of integration testing, see Salesforce Development Fundamentals: Part 5 — Integrations, in the Testing Integration Code section. There you will find full examples of mocking outbound HTTP with HttpCalloutMock and Test.setMock, exercising a custom Apex REST class by setting RestContext, and publishing a Platform Event then calling Test.getEventBus().deliver() so subscribers run during the test.

For mocking your own Apex types (not HTTP), the Apex Stub API (Test.createStub + System.StubProvider) lets you swap in fake implementations without changing production code.
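A minimal sketch of the Stub API, assuming a hypothetical PricingService class whose real logic we want to bypass:

@isTest
public class PricingServiceStub implements System.StubProvider {
    public Object handleMethodCall(
            Object stubbedObject, String stubbedMethodName, Type returnType,
            List<Type> paramTypes, List<String> paramNames, List<Object> args) {
        // Return a canned value for every stubbed method call
        return 42.0;
    }
}

A test would then build the fake with PricingService svc = (PricingService) Test.createStub(PricingService.class, new PricingServiceStub()); and pass svc wherever a real PricingService is expected, so the code under test never runs the real pricing logic.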

If you’d like to reinforce these testing concepts with a guided, hands-on path, the official Apex Testing Trailhead module is a strong companion to this chapter. It walks you through writing test classes, structuring assertions, building reusable test data, and running tests in a Trailhead Playground org so the patterns you’ve just read about become muscle memory.


You have code that builds and tests that pass. Deployment is how you promote that metadata from the sandbox it was developed in into the target org, most often production. Before we run any deploy commands, it helps to know how the change actually gets there: through source control and a pull request, then a validation, and finally the deploy itself.

On modern Salesforce teams, the Git repository is the source of truth, not any one org. Developers retrieve metadata into a local force-app folder, commit changes on a feature branch, and open a pull request (PR) when the work is ready for review. A CI job typically runs on the PR to compile the code, run the relevant Apex tests, and (often) perform a validation deploy against a sandbox.

The happy path looks like this:

  • Create a feature branch from main (or your team’s integration branch).
  • Push commits and open a pull request for review.
  • CI validation runs: compile, Apex tests, and (optionally) a validation deploy.
  • A reviewer approves; the PR is merged.
  • The release pipeline deploys the merged metadata to the target org.

For a fuller treatment of branching strategy, environments, and CI/CD, see Administration Part 7 → Change Management: The CI/CD Pipeline. This chapter focuses on what the developer does in and around the PR: writing reviewable code, then validating and deploying it.

If you are not on a CI/CD setup yet, you are still doing the same job by hand with the Salesforce CLI or Change Sets: package what needs to move, validate (including tests when the target requires them), then apply the change.

The cheapest place to catch problems is before a PR is merged, long before a production deployment runs every test. The checklist below isn’t exhaustive, but it covers the issues that cause the most production failures:

  • Bulkification: No SOQL or DML inside loops. Collections are used for grouping and lookup.
  • Query selectivity: Queries have selective filters and sensible limits (avoid “query everything” patterns and leading-wildcard searches that scan large tables).
  • CRUD/FLS enforcement: When code must respect user permissions, enforce it intentionally (for example WITH SECURITY_ENFORCED, WITH USER_MODE where supported, and Security.stripInaccessible() before DML).
  • Sharing keywords: Classes declare with sharing, without sharing, or inherited sharing intentionally (avoid accidental defaults).
  • Named Credentials: Callouts use callout: prefix. No hardcoded URLs, API keys, or tokens in Apex.
  • No hardcoded IDs: Record IDs, Profile IDs, and org-specific values are queried or stored in Custom Metadata, never pasted into code.
  • Meaningful assertions: Tests assert behaviour and outcomes, not just that code runs without throwing.
  • Negative test cases: At least one test verifies what happens with bad input, missing data, or insufficient permissions.
  • Bulk test: At least one test exercises the maximum expected batch size (often 200) to prove the code behaves under bulk load.
  • Error handling: Integration code catches expected exceptions and logs failures with enough detail to diagnose.
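To make the CRUD/FLS item above concrete, here is a minimal sketch of the user-mode query plus stripInaccessible pattern (the Rating field is just an illustrative choice):

// Query in user mode so object and field permissions are enforced
List<Account> accs = [SELECT Id, Rating FROM Account WITH USER_MODE LIMIT 200];

// Strip any fields the running user cannot update before performing DML
SObjectAccessDecision decision =
    Security.stripInaccessible(AccessType.UPDATABLE, accs);
update decision.getRecords();

stripInaccessible returns a decision object whose getRecords() copy has the inaccessible fields removed, so the subsequent DML cannot write fields the user lacks permission to edit.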

Once the PR is reviewed and ready to release, the next step is a validation: ask the target org to dry-run the exact bundle you intend to ship. A validation packages the code, sends it to the target org, runs all the required tests, and reports back if it would succeed. No metadata is actually changed.

Using the Salesforce CLI (which you set up in Part 1):

# Validate deployment to production and run all local tests
sf project deploy start --target-org my-production-org --test-level RunLocalTests --dry-run
  • --dry-run: This is what makes it a validation. It means “check, but don’t save.”
  • --test-level RunLocalTests: This tells Salesforce to run all the custom tests in your org, but skip tests belonging to managed packages (which saves a lot of time).
If the validation fails, it is usually for one of these reasons:

  • Missing Test Coverage: Your overall coverage across the org drops below 75%, or an Apex trigger has no coverage. Org-wide coverage is the headline gate, but every trigger must also be exercised by at least one test.
  • Test Failures: One or more tests fail during validation. That can mean an assertion failed, but it can also be an unhandled exception, a missing required field in your test data, or a limit error that only shows up under the full test run.
  • Missing Dependencies: Your Apex (or Flow) references metadata that is not in the deploy (a custom field, a record type, a custom permission, or a Named Credential). This commonly shows up as a compile failure or “entity does not exist” error in the target org.
  • Profile / Permission Set drift: Access models often differ between sandbox and production (licenses, object/field permissions, record types, page layouts). If you deploy permissions as part of the release, mismatches can block the deploy or leave users with different access than you expected. Many teams prefer permission sets (and permission set groups) over profiles to reduce this drift.

Once validation passes, you remove the --dry-run flag to perform the actual deployment.

Because production deployments must run tests, a full validation can take a long time. Salesforce’s Quick Deploy feature lets you skip the test run on the actual deployment if a recent validation already proved the same package passes. The validation must usually be less than 10 days old, and you must deploy the same set of components that was validated.

The CLI flow looks like this:

# 1. Validate (this runs all required tests and returns a deployId)
sf project deploy validate --target-org my-production-org --test-level RunLocalTests
# 2. Quick deploy using that validation's deployId
sf project deploy quick --target-org my-production-org --job-id <deployId>

For a typical release pipeline, this can turn a long deployment window into a few minutes because the heavy lifting happened during the validation run.

You don’t always want to run every test in the org. The CLI supports finer grained options:

# Run only specific test classes during a deployment
sf project deploy start --target-org my-sandbox \
--test-level RunSpecifiedTests \
--tests AccountTriggerHandlerTest --tests ExpenseClaimServiceTest
# Resume monitoring a long-running deploy if your terminal disconnects
sf project deploy resume --job-id <deployId>

For local development, run tests directly without a deployment:

sf apex run test --target-org my-sandbox \
--test-level RunSpecifiedTests \
--tests AccountTriggerHandlerTest \
--code-coverage --result-format human

📦 Beyond Source Deploys: Unlocked Packages


Source format deploys (what we’ve shown above) are the most common starting point and are usually all you need while learning. As teams scale, many graduate to unlocked packages: versioned, dependency aware units of metadata you can install and upgrade across orgs. Unlocked packages are still built from your source (often the same force-app folder tracked in Git), but instead of pushing “whatever changed,” you cut an installable package version (for example with sf package version create) and promote that version through environments. This makes multi-org deployments more repeatable and gives you a clearer place to declare dependencies. Rollback is still a deliberate operation, but packaging gives you more control than ad-hoc source pushes. If you want the deeper platform view, see the Trailhead Architect Journey trailmix below.

You’ll encounter two packaging models in the Salesforce ecosystem. They serve different purposes:

Factor          | Unlocked Package                                     | Managed Package
----------------|------------------------------------------------------|------------------------------------------------------------
Purpose         | Internal team deployment and modular org development | Distributing an app to other orgs (AppExchange, ISVs)
Namespace       | Optional (usually none)                              | Required (globally unique prefix)
Code visibility | Subscribers can view and modify the source           | Code is obfuscated and IP-protected
Upgradeable     | Yes, with versioning                                 | Yes, with stricter upgrade rules
Rollback        | Uninstall or deploy a previous version               | Uninstall only (cannot downgrade a managed package version)
Best for        | Internal dev teams managing modular metadata         | ISVs, consulting firms distributing products

For most internal teams, unlocked packages are the right choice. They give you versioning and dependency management without the rigidity of managed packages. Use managed packages only when you need to distribute IP-protected software to external customers.


You now have the foundation to build backend logic in Salesforce. The natural next step in the platform stack is the UI layer: when configuration and Screen Flows do not give you the experience you need, you write a custom front end.

🖥️ User Interfaces: Visualforce and LWC


Two front-end technologies you’ll encounter on the platform:

  • Visualforce (Legacy): An older, server-side rendering framework using <apex:page> tags and Apex controllers. You will see it in established orgs, but it is not where new development happens.
  • Lightning Web Components (LWC): The modern, standard UI framework for Salesforce. LWC uses standard web technologies — HTML, CSS, modern JavaScript — with three key decorators worth knowing now:
    • @api exposes a property or method as part of the component’s public contract.
    • @track (now mostly automatic) makes object/array changes reactive in the template.
    • @wire declaratively binds a property or function to a Salesforce data source, including any @AuraEnabled(cacheable=true) Apex method you wrote in this series.

LWC is a deep topic in its own right, so it is not covered in this series. A dedicated LWC guide may follow later in the journey.


The leap from Administrator to Developer isn’t about throwing away clicks and writing code for everything. It’s about knowing you have a tool for every job.

You know how to use Flow for orchestration. You now know how to use Apex for heavy lifting, bulk data processing, and complex validation. You know how to test that logic to ensure it doesn’t break, and you know how to navigate the deployment cycle to get it into the hands of your users.

Don’t be afraid of error messages or failing tests. Every NullPointerException or System.LimitException is just the platform teaching you how to build more robust systems.

Welcome to Salesforce Development.

This chapter closes the Salesforce Development Fundamentals series. The recommended next read is a complementary bonus, not another fundamentals chapter: AI & the Salesforce Developer — how AI tools are reshaping the developer role, what they’re good at, where they fall short, and why the fundamentals you just learned matter even more in an AI-assisted workflow.