
Salesforce Development Fundamentals: Part 2 - Apex Fundamentals

Updated 24/04/2026


In Part 1, you set up your development environment and learned how the tools fit together. Now, it’s time to learn the language.

Apex is Salesforce’s proprietary programming language. If you’ve used Java or C#, Apex will look very familiar: it’s strongly typed and object-oriented. But what makes Apex special isn’t its syntax; it’s where it runs.

Apex runs natively on the Salesforce platform, right next to your data. It understands your objects, your fields, and your security model. When you write a query in Apex, the compiler verifies that the object and fields actually exist in your org. If an admin deletes a custom field that your Apex code relies on, Salesforce will block the deletion. This tight coupling makes Apex incredibly robust for business applications.

This article covers the core building blocks of the language: how to store data (variables and collections), how to structure logic (control flow and classes), and how to interact with the database (SOQL and DML). To learn more about what Apex is and where it fits into the platform, check out the Apex Fundamentals for Developers Trailhead module.


Before you can manipulate data, you need to store it in memory. In Apex, every variable must be declared with a specific data type. This section walks through primitives (basic types), sObjects (your org’s records as types), enums, and constants: the vocabulary you’ll use in almost every class.

Just as in Flow, Apex variables have an explicit type, and each type behaves differently. Primitives are the fundamental building blocks: numbers, text, Booleans, dates, times, and Salesforce IDs. They’re straightforward once you’ve seen them once, and they unlock the rest of the language.

The most common primitives you’ll use are:

// Text
String greeting = 'Hello, Salesforce Developer!';
String emptyString = '';
// Whole numbers
Integer recordCount = 150;
Long largeNumber = 2147483648L; // For numbers larger than 2.14 billion
// Numbers with decimal places
Decimal amount = 1500.50; // Use Decimal for currency and precise calculations
Double scientificValue = 3.14159;
// True or false
Boolean isActive = true;
Boolean hasPermission = false;
// Salesforce unique identifiers (15- or 18-character IDs)
Id accountId = '0015g00000XyZ12AAF';
// Dates and Times
Date today = Date.today();
Datetime now = Datetime.now();

Each of these types is used for a specific kind of information. You’ll use them to declare variables, assign values, and pass data between methods. Just as you shouldn’t put text into a number formula field in admin land, you have to pick the right data type for the job in code.

This is where Apex shines. In addition to primitives, Apex treats every standard and custom object in your org as a first-class data type, known collectively as sObjects.

Think about how you add a new record in Salesforce Setup: you click “New,” choose the object, and enter values for each field. In Apex, this same process happens in code: you create (instantiate) an sObject variable for the object type you want, then set its fields by assigning values to its properties:

// Creating a standard object in memory
Account newAccount = new Account();
newAccount.Name = 'Acme Corporation';
newAccount.Industry = 'Technology';
newAccount.NumberOfEmployees = 500;
// Or use the shorthand constructor:
Contact newContact = new Contact(
    FirstName = 'Jane',
    LastName = 'Doe',
    Email = 'jane.doe@example.com',
    AccountId = accountId // Linking to the Account ID defined above
);
// Creating a custom object
Expense_Claim__c claim = new Expense_Claim__c(
    Amount__c = 1500.00,
    Status__c = 'Draft',
    Description__c = 'Client Dinner'
);

Notice the syntax for custom fields: you must include the __c suffix, exactly as it appears in the API Name.

Enums (enumerations) are a way to define a custom data type that has a fixed, restricted set of possible values. They are great for representing states or categories without using fragile string literals.

public enum CaseStatus {
    NEW_CASE,
    WORKING,
    ESCALATED,
    CLOSED
}
CaseStatus currentStatus = CaseStatus.WORKING;
if (currentStatus == CaseStatus.ESCALATED) {
    System.debug('This case needs immediate attention!');
}

When a value should never change after assignment, mark the variable final. Combined with static at the class level, this gives you a constant:

public class ExpenseLimits {
    public static final Decimal VP_THRESHOLD = 10000;
    public static final Decimal DIRECTOR_THRESHOLD = 5000;
}
if (claim.Amount__c > ExpenseLimits.VP_THRESHOLD) {
    // ...
}

Using named constants instead of “magic numbers” makes thresholds obvious in code review and easy to update in one place.


In Salesforce development, you rarely work with just one item at a time. Particularly with sObjects, but with other values as well, you’ll usually handle one or more at once. Because of Governor Limits (which we’ll explore deeply in Part 3), you must process data in batches. Collections are how you manage those batches of sObjects or other values.

There are three types of collections in Apex: Lists, Sets, and Maps.

A List is an ordered collection of elements that can contain duplicates. You can add as many items as needed, and each item is accessed by its position (called an index), starting from 0. Lists are used to group multiple values or records together, like a batch of Accounts, a list of IDs to process, or a set of user-supplied answers. You can add, remove, or change elements by their index, and the order always stays as you defined it. Lists allow duplicate entries: if you add the same value more than once, it simply appears again in the order you added it.

// Create an empty list of strings
List<String> colors = new List<String>();
// Add elements
colors.add('Red');
colors.add('Green');
colors.add('Blue');
colors.add('Red'); // Duplicates are allowed
// Access by index
String firstColor = colors.get(0); // Returns 'Red'
// The most common use case: a list of records
List<Account> accountsToInsert = new List<Account>();
accountsToInsert.add(new Account(Name = 'Company A'));
accountsToInsert.add(new Account(Name = 'Company B'));

A Set is an unordered collection that automatically enforces uniqueness: each element can appear only once. If you try to add a value that’s already in the Set, it will be ignored. Sets are heavily used in Salesforce development, especially for gathering sObject IDs before running SOQL queries, efficiently checking membership, or ensuring that a collection contains only distinct values.

Sets are a go-to tool for de-duplicating data: if you have a list that might contain repeated values and you need only the unique elements, convert it to a Set. For example, turning a list of Opportunity OwnerIds into a Set ensures you only query each user once, not multiple times for repeated owners.

Common patterns include:

  • Preventing duplicate DML operations (like insert/update) on the same record.
  • Collecting record IDs in a trigger to do a single SOQL query outside a loop.
  • Removing duplicates from lists or other input sources by converting them to Sets and back to Lists if needed.
Set<Id> accountIds = new Set<Id>();
accountIds.add('001...AAA');
accountIds.add('001...BBB');
accountIds.add('001...AAA'); // Ignored, already exists
System.debug(accountIds.size()); // Outputs 2
// Checking if an element exists is very fast
Boolean hasAccount = accountIds.contains('001...AAA'); // Returns true
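// De-duplication sketch: passing a List into the Set constructor drops repeats.
// (Hypothetical values shown; placeholder IDs follow the '001...' style above.)
List<Id> ownerIds = new List<Id>{'005...AAA', '005...BBB', '005...AAA'};
Set<Id> uniqueOwnerIds = new Set<Id>(ownerIds); // Duplicates removed automatically
System.debug(uniqueOwnerIds.size()); // Outputs 2
List<Id> dedupedList = new List<Id>(uniqueOwnerIds); // Back to a List if an API requires one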

A Map is a collection of key-value pairs: each entry pairs a unique key with a value (values may be duplicated). You use Maps to associate one set of values with another: for example, mapping Account Ids to Account records, user emails to User records, or custom settings to configuration values. Data is accessed by the key, making Maps extremely useful for instantly retrieving related values without searching or looping through lists.

Maps are heavily used in Salesforce development for:

  • Mapping record IDs to their sObjects after a SOQL query, so you can efficiently access records by ID later.
  • Grouping related data for aggregation, transformation, or quick lookup in bulk processing.
  • Caching values or relationships in memory to avoid running queries or DML inside loops.

If you need to quickly look up something “by key” (like Account by AccountId), or pair related values together for easy access, reach for a Map.

// Map<KeyType, ValueType>
Map<String, String> countryCodes = new Map<String, String>();
countryCodes.put('US', 'United States');
countryCodes.put('UK', 'United Kingdom');
String fullCountryName = countryCodes.get('US'); // Returns 'United States'
// The most powerful pattern: Mapping IDs to sObjects
Map<Id, Account> accountMap = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE CreatedDate = LAST_N_DAYS:30]
);
// Later, we can instantly retrieve an account without querying the database again:
Id specificAccountId = '001xxxxxxxxxxxx'; // Example: initialize with a real Account Id
Account myAccount = accountMap.get(specificAccountId);

If you’re coming from an admin background and want to build a stronger mental model of how objects, classes, and collections fit together, the Object-Oriented Programming for Admins Trailhead module is a great companion to this section:


Variables and collections give your code something to hold. Control flow is what lets it decide. Most real Salesforce logic, like routing approvals, rejecting bad data, or fanning out work across records, is just a handful of decisions and loops applied to the data in front of you.

If you’ve built Flows before, the mental model carries straight over: Decision elements become if/else and switch statements, and Loop elements become for and while loops. The difference is that in Apex you can express richer conditions, nest logic more freely, and run on much larger record volumes than Flow comfortably handles.

This section covers the three constructs you’ll reach for daily: conditional branching (if/else), multi-value branching (switch on), and iteration (loops).

The if/else statement evaluates any expression that results in a Boolean value and runs the matching branch. You can use comparison operators (==, !=, >, >=, <, <=), logical operators (&&, ||, !), or any Boolean expression to define these conditions.

Decimal expenseAmount = 15000;
if (expenseAmount > 10000) {
    System.debug('Requires VP Approval');
} else if (expenseAmount > 5000) {
    System.debug('Requires Director Approval');
} else {
    System.debug('Requires Manager Approval');
}

A pattern you’ll meet constantly in Salesforce work is acting only when a value has changed. In trigger logic (which we’ll cover fully in Part 3), Apex gives you both the new and old versions of each record, so you can compare them and branch accordingly:

// Conceptual snippet — full trigger context comes in Part 3.
Account newAcc = (Account) Trigger.new[0];
Account oldAcc = (Account) Trigger.oldMap.get(newAcc.Id);
if (newAcc.Industry != oldAcc.Industry) {
    System.debug('Industry changed from ' + oldAcc.Industry + ' to ' + newAcc.Industry);
}

This “did the field change?” check is the building block behind almost every “fire automation only when X changes” requirement.

When you need to branch logic based on a single value with several possible options, switch on offers a clearer, more concise alternative to long if/else if/else chains. You can use switch on with data types such as Integer, Long, String, Id, enums, and sObject types.

Id targetOpportunityId = '006xxxxxxxxxxxx'; // Example: initialize with a real Opportunity Id
Opportunity opp = [
    SELECT Id, StageName
    FROM Opportunity
    WHERE Id = :targetOpportunityId
    LIMIT 1
];
switch on opp.StageName {
    when 'Prospecting', 'Qualification' {
        System.debug('Early stage opportunity');
    }
    when 'Proposal/Price Quote' {
        System.debug('Mid-funnel');
    }
    when 'Closed Won' {
        System.debug('Celebrate!');
    }
    when null {
        System.debug('Stage not set');
    }
    when else {
        System.debug('Other stage: ' + opp.StageName);
    }
}

A few things worth noting:

  • A single when can match multiple values by listing them comma-separated (when 'Prospecting', 'Qualification').
  • when null handles a missing value explicitly — useful when fields are optional.
  • when else is the catch-all; include it to make sure unexpected values don’t fall through silently.

switch on is particularly robust when used with enums, because the Apex compiler type-checks each when value against the enum: a typo or an undefined value simply won’t compile. Let’s demonstrate this using the previously defined CaseStatus enum:

CaseStatus currentStatus = CaseStatus.ESCALATED;
switch on currentStatus {
    when NEW_CASE, WORKING {
        System.debug('Case is still in progress');
    }
    when ESCALATED {
        System.debug('Notify the duty manager');
    }
    when CLOSED {
        System.debug('Read-only from here on');
    }
}

Since currentStatus uses the CaseStatus enum type, you benefit from compile-time type safety. For example, the code won’t compile if you make a typo in an enum value or try to use a value not defined in the enum. This reduces the risk of runtime errors compared to using plain strings.

Loops allow you to process multiple values, one at a time, without having to write repetitive code manually. In Apex, loops are essential for working with collections (like Lists, Sets, and Maps), and you have three main types to choose from, each best suited for particular scenarios:

The for-each loop is the go-to for most cases. Use it when you want to perform an action on every element of a collection (List, Set, or even the keys/values of a Map) and you don’t need to know the position or index. This loop is concise, readable, and helps avoid errors.

  • Example use: Logging every Contact name in a List, updating a field on each Account returned by a SOQL query.

The traditional for loop looks more like the classic “for” loops from Java or JavaScript, and is helpful when iteration depends on the position (index) within a list. It’s necessary if you ever need to compare neighboring elements, build output with the index, or skip elements in a non-sequential way.

  • Example use: Pairing two Lists by index, iterating backward, or skipping elements based on position.

Use a while loop when you aren’t simply processing a collection, but looping based on a changing condition that could become true or false at any time. This is good for scenarios like draining a queue, waiting for some condition to be met, or polling until a limit is reached. While loops are less common in day-to-day Salesforce work, but they’re important when you need their flexibility.

  • Example use: Process records until a certain cumulative value is reached, poll for an integration callback, or repeatedly try a task until successful or a maximum count.

Understanding which loop to use, and why, is foundational for writing clear and efficient Apex code, especially whenever you deal with batches of records from SOQL queries or need to transform collections for business logic.

List<String> userNames = new List<String>{'Alice', 'Bob', 'Charlie'};
// 1. For-each loop — the one you'll use 90% of the time.
// Syntax: for (DataType variableName : collection)
for (String name : userNames) {
    System.debug('Processing user: ' + name);
}
// 2. Traditional for loop — when you need the index.
for (Integer i = 0; i < userNames.size(); i++) {
    System.debug('User at index ' + i + ' is ' + userNames.get(i));
}
// 3. While loop — when the exit condition is dynamic, not a collection size.
Integer counter = 0;
while (counter < 3) {
    System.debug('Counter is: ' + counter);
    counter++;
}

You’ll also see two short keywords used inside loops:

  • break exits the loop immediately.
  • continue skips to the next iteration.

Both are useful for guard clauses (“skip records that don’t qualify”), but most Salesforce loops are short enough that a simple if check is just as readable.
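As a quick sketch (the claim amounts here are hypothetical), this is how a continue guard clause and a break threshold read inside a loop:

```apex
List<Decimal> claimAmounts = new List<Decimal>{50, null, 12000, 300};
Decimal runningTotal = 0;
for (Decimal amt : claimAmounts) {
    if (amt == null) {
        continue; // Guard clause: skip values that don't qualify
    }
    runningTotal += amt;
    if (runningTotal > 10000) {
        break; // Stop processing once the threshold is crossed
    }
}
System.debug(runningTotal); // 12050 — the final 300 is never processed
```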

With variables, collections, and control flow in place, you can express almost any business rule in memory. The next step is the part that makes Apex genuinely useful: reading and writing data in the database.

If you’d like hands-on practice with these fundamentals before moving on, the Apex Basics for Admins Trailhead module provides a guided introduction with interactive challenges:


🗄️ Interacting with the Database: SOQL and DML


Everything you’ve written so far lives in memory and disappears the moment execution ends. To build anything lasting, you need to read records from the Salesforce database and write changes back to it. That’s the job of SOQL (for querying) and DML (for inserting, updating, and deleting).

🔍 SOQL (Salesforce Object Query Language)


SOQL is Salesforce’s query language. It’s similar in feel to SQL but designed specifically for the platform’s object model. If you’d like a deeper dive into SOQL on its own, see the SOQL Guide Series. Here, we’ll focus on how you use inline SOQL inside Apex to retrieve data directly into your variables and collections.

// Querying a single record into an sObject variable.
// The query is wrapped in square brackets — this is "inline SOQL."
// LIMIT 1 is crucial here: without it, if multiple Accounts match,
// Apex throws a QueryException because a single variable can't hold more than one row.
Account singleAcc = [
    SELECT Id, Name, Industry
    FROM Account
    WHERE Name = 'Acme Corp'
    LIMIT 1
];
// Querying multiple records into a List.
// Because the result type is List<Contact>, Apex happily returns
// zero, one, or many rows — no LIMIT needed unless you want one.
List<Contact> allContacts = [
    SELECT Id, FirstName, LastName, Email
    FROM Contact
    WHERE Email != null
];

In both cases, the pattern is the same: write a SOQL query inside square brackets, and Apex returns the results directly into the variable on the left. The data type you assign to determines whether Apex expects a single record or a collection. The sObject type of the variable must also match the object you’re querying — an Account query goes into an Account or List<Account>, not a Contact. If you need to be flexible, you can use the generic SObject or List<SObject> type instead, though you’ll lose compile-time field checking.
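If you do opt for the generic type, a minimal sketch looks like this; field values are read dynamically with get(), which takes the field’s API name as a string and is not checked at compile time:

```apex
// Generic assignment: any object's rows fit in List<SObject>
List<SObject> records = [SELECT Id, Name FROM Account LIMIT 5];
for (SObject rec : records) {
    // get() returns an Object, so cast to the field's actual type
    String name = (String) rec.get('Name');
    System.debug(name);
}
```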

So far, our WHERE clauses have used hard-coded values like 'Acme Corp'. In practice, those filter values usually come from elsewhere in your code: a variable passed into a method, a record’s field value, or a collection of IDs gathered in a loop. Apex lets you inject any in-scope variable directly into a SOQL query by prefixing it with a colon (:). These are called bind variables.

This keeps your queries dynamic without resorting to string concatenation, and because the platform handles the substitution, bind variables are inherently safe from SOQL injection.

// Define filter criteria as Apex variables
String targetIndustry = 'Technology';
Decimal minRevenue = 1000000;
// Use bind variables (prefixed with :) in the WHERE clause.
// Apex substitutes the variable values at runtime.
List<Account> techAccounts = [
    SELECT Id, Name, AnnualRevenue
    FROM Account
    WHERE Industry = :targetIndustry
    AND AnnualRevenue > :minRevenue
];

You can bind primitives, IDs, Strings, Dates, and even entire collections. For example, passing a Set<Id> into an IN clause is one of the most common patterns you’ll write:

Set<Id> accountIds = new Set<Id>{'001...AAA', '001...BBB'};
// The :accountIds set expands into an IN list automatically
List<Contact> relatedContacts = [
    SELECT Id, FirstName, LastName
    FROM Contact
    WHERE AccountId IN :accountIds
];

SOQL queries one object (or object family) at a time. When you need a full-text search across many different objects at once (similar to the global search bar in Salesforce), reach for SOSL (Salesforce Object Search Language) instead. For a full walkthrough, see the SOSL Guide.

SOSL returns a List<List<SObject>>, that is, a list of lists. The reason is that a single SOSL search can span multiple objects, and each object’s results come back as a separate inner list. The order of the inner lists matches the order of the objects in your RETURNING clause: the first inner list contains the Account results, the second contains the Contact results, and so on.

// SOSL searches across Account, Contact, and Lead in one call.
// The result is a List of Lists — one inner List per object in the RETURNING clause.
List<List<SObject>> results = [
    FIND 'Acme*' IN ALL FIELDS
    RETURNING Account(Id, Name), Contact(Id, Email), Lead(Id, Company)
];
// results[0] = the Account matches, results[1] = Contact matches, results[2] = Lead matches
List<Account> accounts = (List<Account>) results[0];
List<Contact> contacts = (List<Contact>) results[1];
List<Lead> leads = (List<Lead>) results[2];

Use SOQL when you know the object and your filter is a structured WHERE; use SOSL when the user input is fuzzy or the search spans objects.

Earlier in the Collections section, we saw how a Map<Id, Account> lets you look up a record instantly by its ID. Apex has a shortcut that combines querying and map-building into a single line: pass an inline SOQL query directly into the Map constructor. Apex automatically uses each record’s Id as the key and the record itself as the value, so no manual looping is required.

// Querying directly into a Map
Map<Id, Account> customerAccountsMapById = new Map<Id, Account>([
    SELECT Id, Name, Industry
    FROM Account
    WHERE Industry = 'Technology'
]);
// Now you can do lookups without nested loops or extra SOQL:
if (customerAccountsMapById.containsKey(someAccountId)) {
    Account targetAccount = customerAccountsMapById.get(someAccountId);
    System.debug('Found account: ' + targetAccount.Name);
}

SOQL gets data out of the database, but what about putting data back in? That’s where DML (Data Manipulation Language) comes in. DML statements are how you persist changes: creating new records, updating existing ones, or removing records you no longer need. Apex provides five DML operations (insert, update, upsert, delete, and undelete), and each can work on a single record or a whole List at once.

// --- INSERT ---
List<Account> newAccounts = new List<Account>();
newAccounts.add(new Account(Name = 'New Corp 1'));
newAccounts.add(new Account(Name = 'New Corp 2'));
insert newAccounts; // Inserts both accounts in one database transaction
// --- UPDATE ---
// First, query the records you want to update
List<Contact> contactsToUpdate = [SELECT Id, Title FROM Contact WHERE Title = 'Junior Developer'];
// Modify the records in memory
for (Contact con : contactsToUpdate) {
    con.Title = 'Developer';
}
// Save the changes to the database
update contactsToUpdate;
// --- UPSERT ---
// Upsert creates new records and updates existing ones based on a specified field (or ID by default)
// The explicit field should be an External ID (or another idLookup field).
List<Customer_Reference__c> recordsToUpsert = new List<Customer_Reference__c>();
// ... imagine populating this list ...
upsert recordsToUpsert External_Id__c;
// --- DELETE ---
List<Case> spamCases = [SELECT Id FROM Case WHERE Subject LIKE '%SPAM%'];
delete spamCases;
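// --- UNDELETE ---
// A minimal sketch: undelete restores records from the Recycle Bin.
// ALL ROWS tells SOQL to include soft-deleted rows; IsDeleted = true filters to them.
List<Case> deletedCases = [SELECT Id FROM Case WHERE IsDeleted = true ALL ROWS];
undelete deletedCases;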

Standard DML statements like insert newAccounts; follow an “all or none” rule. If even one record in the list fails (say it trips a validation rule or is missing a required field), the entire operation rolls back and nothing is saved. That’s often what you want (it keeps your data consistent), but sometimes you’d rather save the records that can succeed and deal with the failures separately.

That’s what the Database class methods are for. Each DML keyword has a corresponding Database method: Database.insert(), Database.update(), Database.upsert(), Database.delete(), and Database.undelete(). They accept an optional second parameter called allOrNone. When you set it to false, Salesforce processes each record independently: successful records are committed, and failures are collected into a results array that you can inspect afterwards.

The results come back as an array of result objects: Database.SaveResult[] for insert/update, Database.UpsertResult[] for upsert, and Database.DeleteResult[] for delete. Each result tells you whether that specific record succeeded or failed, and if it failed, exactly why.

List<Account> accs = new List<Account>{
    new Account(Name = 'Valid Account'),
    new Account() // Invalid — missing the required Name field
};
// Pass false as the second argument to allow partial success.
// The first Account will be inserted; the second will fail without
// rolling back the first.
Database.SaveResult[] results = Database.insert(accs, false);
// Loop through the results to see what happened to each record.
for (Database.SaveResult sr : results) {
    if (sr.isSuccess()) {
        // sr.getId() returns the new record's Id
        System.debug('Successfully inserted ID: ' + sr.getId());
    } else {
        // Each failed result can contain multiple errors
        // (e.g., a validation rule AND a required-field error on the same record)
        for (Database.Error err : sr.getErrors()) {
            System.debug('Error: ' + err.getStatusCode() + ' - ' + err.getMessage());
            // err.getFields() returns which fields caused the error
        }
    }
}

When should you use Database methods over plain DML?

  • Data migrations or integrations where incoming data is messy and you expect some records to fail. Choose Database methods when you want the good ones to save while you log and retry the bad ones.
  • User-facing features where you want to show partial feedback (“3 of 5 records saved; here’s what went wrong with the other 2”).
  • Batch Apex where you’re processing thousands of records and a single bad record shouldn’t tank the entire batch.

For most day-to-day trigger and helper logic, the standard all-or-none DML statements are simpler and safer. Reach for Database methods when you have a specific reason to tolerate partial failure.

For additional learning on DML in Apex, take a look at the Apex Basics & Database Trailhead module.


🧱 Classes and Methods: Organizing Your Code


So far, every snippet has been a loose block of code. That works for experiments in Anonymous Apex, but production code needs structure. In Apex, that structure comes from Classes and Methods.

A Class is a named container that groups related data and behaviour together. Methods are the individual actions a class can perform: calculating a value, creating records, calling an API. If variables are your nouns and control flow is your grammar, classes and methods are the paragraphs and chapters that turn raw logic into something readable, testable, and reusable.

You’ve already seen classes implicitly: every trigger handler, every service layer, every test factory in the examples ahead is a class. This section makes the mechanics explicit.

// The access modifier 'public' means other classes can use this one
public class ExpenseHelper {
    // A method that calculates tax.
    // 'public' means it can be called from outside this class.
    // 'static' means you call it on the class itself, not an instance of the class.
    // 'Decimal' is the return type (what the method gives back).
    public static Decimal calculateTax(Decimal amount, Decimal taxRate) {
        if (amount == null || taxRate == null) {
            return 0.00;
        }
        Decimal tax = amount * (taxRate / 100);
        return tax.setScale(2); // Round to 2 decimal places
    }
    // A method that performs an action but doesn't return a value ('void')
    public static void assignComplianceTask(List<Expense_Claim__c> claims) {
        List<Task> tasksToCreate = new List<Task>();
        for (Expense_Claim__c claim : claims) {
            if (claim.Amount__c > 10000) {
                tasksToCreate.add(new Task(
                    Subject = 'Compliance Review Required',
                    WhatId = claim.Id,
                    OwnerId = UserInfo.getUserId() // Use a real user/queue ID in production logic
                ));
            }
        }
        if (!tasksToCreate.isEmpty()) {
            insert tasksToCreate;
        }
    }
}

Because these methods are static, you don’t need to create an instance of ExpenseHelper first; you call them directly on the class name:

Decimal myTax = ExpenseHelper.calculateTax(1000.00, 5.0);

To learn a little more and write your first Apex class, follow the Quick Start: Apex Trailhead project.

🧪 Instance Methods, Constructors, and Properties


The ExpenseHelper methods above are static; they take inputs, produce outputs, and don’t remember anything between calls. That’s perfect for utility functions like “calculate tax.”

But what if you need a class that holds onto configuration and uses it across multiple method calls? For example, a calculator that always applies the same tax rate without you passing it in every time. That’s where instance classes come in: you create an object (an “instance”) using the new keyword, store data inside it via a constructor, and then call methods on that specific object:

public class ExpenseCalculator {
    private final Decimal taxRate;
    // Constructor: runs when you call `new ExpenseCalculator(...)`
    public ExpenseCalculator(Decimal taxRate) {
        this.taxRate = taxRate;
    }
    // Instance method: uses the state stored on `this`
    public Decimal calculateTotal(Decimal amount) {
        return (amount + (amount * (this.taxRate / 100))).setScale(2);
    }
}
// Create an instance with 15% tax (e.g., NZ GST)
ExpenseCalculator gstCalc = new ExpenseCalculator(15);
// Every call to calculateTotal now uses the 15% rate automatically
Decimal lunchTotal = gstCalc.calculateTotal(100); // 115.00
Decimal flightTotal = gstCalc.calculateTotal(820); // 943.00
// Need a different rate? Create a separate instance — no conflict.
ExpenseCalculator ukVatCalc = new ExpenseCalculator(20);
Decimal hotelTotal = ukVatCalc.calculateTotal(200); // 240.00

Notice the payoff: once you’ve created gstCalc, you never have to pass the tax rate again — the object remembers it. And because each instance holds its own state, gstCalc and ukVatCalc can coexist without interfering with each other. This is the core advantage over a static utility method where you’d have to pass the rate into every single call.

A class can have multiple constructors with different parameter lists (for example, one that takes a tax rate and another that also takes a currency code). One constructor can delegate to another using this(...), which avoids duplicating setup logic.
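As a hedged sketch of that pattern, here is the ExpenseCalculator extended with a hypothetical currency code (not part of the original class):

```apex
public class ExpenseCalculator {
    private final Decimal taxRate;
    private final String currencyCode; // Hypothetical field, added for illustration
    // Primary constructor holds all the setup logic
    public ExpenseCalculator(Decimal taxRate, String currencyCode) {
        this.taxRate = taxRate;
        this.currencyCode = currencyCode;
    }
    // Convenience constructor delegates via this(...) with a default currency
    public ExpenseCalculator(Decimal taxRate) {
        this(taxRate, 'NZD');
    }
}
```

Note that the this(...) call must be the first statement in the delegating constructor.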

In the ExpenseCalculator above, taxRate is private. This means nothing outside the class can read or change it. But sometimes you want external code to read a value, or read it but not write it. That’s what properties provide: a field with built-in access rules.

public class ExpenseLine {
    // { get; set; } — anyone can read AND write this field
    public String description { get; set; }
    // { get; private set; } — anyone can read, but only this class can write
    public Decimal amount { get; private set; }
    public ExpenseLine(String description, Decimal amount) {
        this.description = description;
        this.amount = amount;
    }
}
ExpenseLine line = new ExpenseLine('Client Dinner', 85.50);
// Reading works fine from outside the class
System.debug(line.description); // 'Client Dinner'
System.debug(line.amount); // 85.50
// Writing to description is allowed (public set)
line.description = 'Team Lunch';
// Writing to amount would fail to compile (private set)
// line.amount = 100; // ❌ Compile error

Properties give you the simplicity of a public field with the safety of controlled access — no need to write verbose getAmount() / setAmount() methods by hand.

You now know how to build classes with static methods, instance methods, constructors, and properties. But knowing how to write a class is different from knowing what to put in one. As your Apex codebase grows, the difference between a maintainable project and a tangled mess comes down to a few organising principles.

A class should have one reason to change. Look at the ExpenseHelper class from earlier: it calculates tax and creates compliance Tasks. Those are two different responsibilities. If the tax calculation logic changes, you’re editing the same file that handles task creation, and vice versa. In a small org this might feel harmless, but as more people and features depend on the code, it increases the chance of merge conflicts, accidental side effects, and tests that break for unrelated reasons.

A cleaner split:

public class TaxCalculator {
    public static Decimal calculate(Decimal amount, Decimal taxRate) {
        if (amount == null || taxRate == null) {
            return 0.00;
        }
        return (amount * (taxRate / 100)).setScale(2);
    }
}

public class ComplianceTaskService {
    public static void createReviewTasks(List<Expense_Claim__c> claims) {
        List<Task> tasksToCreate = new List<Task>();
        for (Expense_Claim__c claim : claims) {
            if (claim.Amount__c > 10000) {
                tasksToCreate.add(new Task(
                    Subject = 'Compliance Review Required',
                    WhatId = claim.Id,
                    OwnerId = UserInfo.getUserId()
                ));
            }
        }
        if (!tasksToCreate.isEmpty()) {
            insert tasksToCreate;
        }
    }
}

Each class now has a clear, singular purpose. You can test TaxCalculator without inserting records, and ComplianceTaskService without worrying about tax logic. When the tax formula changes, only one file is touched.

In the trigger handler pattern you’ll learn in Part 3, a trigger file delegates work to a handler class. But what should that handler do? The best practice is to keep the handler thin and have it call service classes that contain the actual business logic.

// Trigger handler — thin orchestration layer
public class ExpenseClaimTriggerHandler {
    public static void handleAfterInsert(List<Expense_Claim__c> newClaims) {
        ComplianceTaskService.createReviewTasks(newClaims);
    }
}

The handler knows when to act (after insert) and who to call (the service). The service knows how to do the work. This separation means the same service can be called from a trigger, a Flow (via an Invocable method), a scheduled job, or a test, without duplicating logic.

A factory is a class whose job is to create things so that callers don’t repeat setup logic. You’ll build a full TestDataFactory in Part 6 for generating test records. The same idea applies in production code: if multiple places need to construct a complex object with specific defaults, put that logic in a factory rather than scattering it across callers.
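As a hedged sketch of that idea in production code (the field defaults here are illustrative), a factory might centralise how compliance Tasks are built:

```apex
public class ComplianceTaskFactory {
    // One place to define the defaults every compliance Task should carry
    public static Task newReviewTask(Id relatedRecordId) {
        return new Task(
            Subject = 'Compliance Review Required',
            WhatId = relatedRecordId,
            OwnerId = UserInfo.getUserId(),
            ActivityDate = Date.today().addDays(7)
        );
    }
}
```

If the default due date or subject ever changes, only the factory is touched; every caller picks up the new defaults automatically.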

Sometimes the what stays the same but the how varies. For example, different expense categories might have different approval rules. Rather than a long if/else chain checking the category, you can define an interface and swap implementations:

public interface ApprovalStrategy {
    Boolean requiresManualApproval(Expense_Claim__c claim);
}

public class TravelApprovalStrategy implements ApprovalStrategy {
    public Boolean requiresManualApproval(Expense_Claim__c claim) {
        return claim.Amount__c > 5000;
    }
}

public class EntertainmentApprovalStrategy implements ApprovalStrategy {
    public Boolean requiresManualApproval(Expense_Claim__c claim) {
        return true; // Always requires approval
    }
}

The code that processes claims doesn’t need to know which strategy it’s using; it just calls strategy.requiresManualApproval(claim). You can add new categories without changing existing logic. This is the Strategy pattern, and it builds directly on the interface concepts covered in the next section.
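One common way to select the right implementation, sketched here assuming the claim has a Category__c picklist field (an assumption, not from the original example), is a map keyed by category:

```apex
Map<String, ApprovalStrategy> strategiesByCategory = new Map<String, ApprovalStrategy>{
    'Travel' => new TravelApprovalStrategy(),
    'Entertainment' => new EntertainmentApprovalStrategy()
};

// Look up the strategy for this claim's category and apply it
ApprovalStrategy strategy = strategiesByCategory.get(claim.Category__c);
if (strategy != null && strategy.requiresManualApproval(claim)) {
    // Route the claim for manual approval
}
```

Adding a new category then means adding one map entry and one new strategy class; the processing code never changes.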

These principles aren’t rules to follow rigidly on day one. They’re guides that become more valuable as your org’s codebase grows beyond a handful of classes. Start simple, and reach for these patterns when you notice a class doing too many things or when testing becomes unnecessarily difficult.

If you want to see how these principles come together in a structured architecture, the Apex Enterprise Patterns: Service Layer module on Trailhead walks through building a dedicated service layer that separates business logic from triggers, controllers, and batch jobs.

As your codebase grows, you’ll notice classes that share common behaviour. Inheritance lets you define that shared logic in a parent class and have child classes reuse or override it, so you write the common parts once instead of copying them into every class. Interfaces take a different approach: instead of sharing code, they define a contract, a set of method signatures that any implementing class must provide. This lets you write code that works with any class that fulfils the contract, without knowing (or caring) which specific class it is.

Apex supports single inheritance (extends) and multiple interface implementation (implements):

  • A class marked virtual can be extended; methods marked virtual can be overridden.
  • A class marked abstract cannot be instantiated and may declare methods without a body.
  • An interface defines a contract: a list of methods a class promises to implement.
public interface Approvable {
    Boolean isAutoApprovable();
}

public virtual class BaseExpense implements Approvable {
    public virtual Boolean isAutoApprovable() {
        return false; // Safe default: most expenses need manual review
    }
}

public class TravelExpense extends BaseExpense {
    public override Boolean isAutoApprovable() {
        return true; // Travel under policy auto-approves
    }
}

The real power shows up when you write code against the interface rather than a specific class. Because both BaseExpense and TravelExpense implement Approvable, you can process them in a single loop without caring which concrete type each one is:

List<Approvable> pending = new List<Approvable>{
    new BaseExpense(), // General expense — needs review
    new TravelExpense() // Travel — auto-approves
};

for (Approvable item : pending) {
    if (item.isAutoApprovable()) {
        System.debug('Auto-approved');
    } else {
        System.debug('Routing for manual approval');
    }
}
// Output: "Routing for manual approval", then "Auto-approved"

This is polymorphism: the same method call (isAutoApprovable()) behaves differently depending on the actual object behind it. You can add a MealExpense class tomorrow that returns its own logic, and the loop above doesn’t need to change at all.

Interfaces are how Salesforce platform features such as Schedulable, Queueable, and Database.Batchable plug into your code; you’ll meet them in later chapters.

Every class, method, and variable in Apex has a visibility level that controls who can use it. You’ve already seen public and private in the examples above. Here is the full set:

  • public: Accessible from any Apex code in the same namespace. For most orgs (where everything is unmanaged code), this effectively means “available everywhere.” But if you’re building a managed package, public members are only visible inside that package and not to the subscriber’s code.
  • private: Accessible only within the class where it is defined. This is the default for class members, and ideal for helper methods and internal state that shouldn’t be touched from outside.
  • protected: Accessible within the defining class and any subclass that extends it. Useful when you want child classes to access something but keep it hidden from everything else.
  • global: Accessible across namespace boundaries, meaning subscriber orgs can call it from a managed package and external systems can reach it via web services. Use sparingly; once a global member is published in a managed package, removing it is a breaking change.
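protected is the one modifier not demonstrated elsewhere in this article; a minimal sketch:

```apex
public virtual class BaseProcessor {
    // Visible to this class and its subclasses, hidden from everything else
    protected Integer batchSize = 200;
}

public class ClaimProcessor extends BaseProcessor {
    public void run() {
        // OK: subclasses can read the protected member
        System.debug('Processing in batches of ' + batchSize);
    }
}
```

Code outside this hierarchy that tried to read batchSize would fail to compile.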

🛡️ Sharing Keywords (with sharing / without sharing / inherited sharing)


Salesforce’s security model doesn’t just control which objects and fields a user can access; it also controls which records they can see, through sharing rules, role hierarchy, and manual shares. By default, Apex runs in system context and ignores all of these record-level restrictions. That’s powerful, but dangerous: it means your code could show or modify records the running user shouldn’t be able to touch.

Sharing keywords let you explicitly declare how a class should behave:

  • with sharing: enforces the running user’s sharing rules. Use this for any class that runs on the user’s behalf (controllers, LWC backends, REST endpoints).
  • without sharing: ignores sharing rules. Use only when the operation must legitimately bypass sharing (system-level housekeeping, queue assignment).
  • inherited sharing: takes the sharing mode of the calling class. This is the safest default for shared utility classes: it behaves like with sharing when the class is the entry point (for example, called directly from a Lightning context) but adopts the caller’s sharing mode otherwise.
public with sharing class AccountController {
    @AuraEnabled(cacheable=true)
    public static List<Account> getMyAccounts() {
        // Runs as the logged-in user. Sharing rules and CRUD/FLS still
        // need to be respected via WITH USER_MODE or stripInaccessible.
        return [
            SELECT Id, Name FROM Account
            WITH USER_MODE
            ORDER BY Name LIMIT 50
        ];
    }
}

Things go wrong. DML operations fail validation rules, null values appear unexpectedly, and sometimes runtime assumptions break. Robust code anticipates errors and handles them gracefully. It also sometimes needs to signal errors explicitly so that calling code can react.

The try/catch block lets you attempt an operation and intercept any errors that occur, instead of letting them crash the entire transaction. You can catch specific exception types (like DmlException for database errors) before falling back to a generic Exception catch-all. The optional finally block runs regardless of whether an error occurred. This can be useful for cleanup or logging.

try {
    // Query into a list so "no rows" becomes an expected branch, not an exception.
    List<Account> accounts = [SELECT Id, Industry FROM Account WHERE Name = 'Acme Corp' LIMIT 1];
    if (accounts.isEmpty()) {
        System.debug(LoggingLevel.WARN, 'No matching account found, skipping update.');
        return;
    }
    Account acc = accounts[0];
    acc.Industry = 'Finance';
    update acc;
} catch (DmlException de) {
    // Executes if the update fails (e.g., validation rule fires)
    System.debug('Failed to update account: ' + de.getMessage());
} catch (Exception e) {
    // A catch-all for any other type of error
    System.debug('An unexpected error occurred: ' + e.getMessage());
} finally {
    // This block ALWAYS executes, whether an error occurred or not.
    // Useful for cleaning up resources or logging.
    System.debug('Attempted account update process finished.');
}

Catching errors is only half the picture. Sometimes your code needs to stop execution and tell the caller that something is wrong; maybe a required parameter is missing, a business rule has been violated, or data is in an unexpected state. That’s what throw does: it immediately halts the current method and passes the error up to whoever called it, where it can be caught with a try/catch.

public class AccountNotFoundException extends Exception {}

public static Account getAccountOrFail(Id accountId) {
    if (accountId == null) {
        // Stop right here — the caller made a mistake
        throw new IllegalArgumentException('accountId cannot be null');
    }
    List<Account> accounts = [SELECT Id, Name FROM Account WHERE Id = :accountId LIMIT 1];
    if (accounts.isEmpty()) {
        // Built-in QueryException can't be constructed in user code,
        // so we throw a custom exception (more on these below)
        throw new AccountNotFoundException('No Account found with Id: ' + accountId);
    }
    return accounts[0];
}

For domain-specific errors that don’t fit a built-in type, you can define your own custom exception class. In Apex, any class whose name ends in Exception and extends Exception becomes a throwable type:

public class ExpenseException extends Exception {}

// Usage
if (claim.Amount__c <= 0) {
    throw new ExpenseException('Expense amount must be greater than zero');
}

Custom exceptions make your error handling more expressive. A caller can catch ExpenseException separately from a DmlException, and handle each differently.
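For example, a caller might handle the two failure modes quite differently (the submitClaim method here is a hypothetical helper, and the handling shown is just a sketch):

```apex
try {
    submitClaim(claim); // assumed method that validates and inserts the claim
} catch (ExpenseException ee) {
    // Business-rule failure: the message is safe to surface to the user
    System.debug(LoggingLevel.WARN, ee.getMessage());
} catch (DmlException de) {
    // Database failure: log the technical details for investigation
    System.debug(LoggingLevel.ERROR, de.getMessage());
}
```

Because the more specific catch blocks come first, each failure mode gets its own treatment instead of collapsing into one generic error path.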

System.debug() writes to the debug log, but debug logs are transient: they expire after a few hours, have size limits, and aren’t easy to search across transactions. In production, you’ll want something more durable.

Many teams adopt a logging framework like Nebula Logger (free and open-source), which stores log entries as custom records in Salesforce. This means you can query them, build reports and dashboards, set up alerts, and retain history long-term. A typical pattern in a catch block looks like this:

} catch (DmlException de) {
    Logger.error('Failed to update accounts', de);
    Logger.saveLog(); // Persists the log entry as a record
}

Even if you don’t adopt a framework right away, keep this in mind: System.debug() is fine for development, but production error handling needs persistent, searchable logs. Plan for it early, as retrofitting logging into an existing codebase is tedious work.


When your code isn’t doing what you expect, your primary diagnostic tool is the debug log. A debug log is a detailed, timestamped record of everything that happens during a transaction: your own System.debug() messages, every SOQL query and its row count, every DML statement, workflow and flow evaluations, governor limit consumption, and any exceptions that are thrown. It’s not just a place to print variables. It’s a full execution trace of the platform’s work on your behalf.

You can add your own messages to the log using System.debug():

String status = 'Processing';
System.debug('Current status is: ' + status);
// You can specify log levels to categorise your output
System.debug(LoggingLevel.ERROR, 'Something went terribly wrong!');

In a sandbox environment, you can freely add System.debug() statements to existing code to trace execution paths, inspect variable values at runtime, and confirm which branches your logic is taking.

How to view debug logs:

Option 1: Setup → Debug Logs

  1. In Setup, search for Debug Logs.
  2. Create a new Trace Flag for your user (this tells Salesforce to start capturing logs for your transactions).
  3. Perform the action that triggers your code (e.g., save a record).
  4. Refresh the Debug Logs page and click View on the generated log.

Option 2: Developer Console

  1. Open the Developer Console from the gear icon (⚙️) in the top-right corner of Salesforce.
  2. Logs are captured automatically while the console is open; there’s no need to set up a Trace Flag manually.
  3. Perform the action that triggers your code.
  4. Double-click the log entry in the Logs tab at the bottom to open it in the Log Inspector.

The raw log is a dense wall of text, but it’s structured into labelled event lines that you can scan or filter. Here are the ones you’ll look for most often:

  • USER_DEBUG — your System.debug() output.
  • SOQL_EXECUTE_BEGIN / SOQL_EXECUTE_END — every query, including the number of rows returned. Essential for spotting queries inside loops.
  • DML_BEGIN / DML_END — every insert, update, delete, showing how many rows were affected.
  • LIMIT_USAGE_FOR_NS — a summary of governor limit consumption at the end of the transaction. This is where you check how close you came to the ceiling.
  • EXCEPTION_THROWN / FATAL_ERROR — runtime errors and unhandled exceptions.

In the Developer Console’s Log Inspector, use the Filter bar at the bottom of the Execution Log panel — type USER_DEBUG or SOQL_EXECUTE_BEGIN to instantly narrow thousands of lines down to just the events you care about.

You can also run quick experiments without a trigger using anonymous Apex from the Salesforce CLI:

Terminal window
echo "System.debug('Hello from anonymous Apex');" | sf apex run --target-org my-sandbox

This is the fastest feedback loop for trying out a snippet, inspecting data, or reproducing a bug.


🔧 Working with the Platform: Useful Built-ins


While Apex ships with an extensive standard library of built-in classes and methods, a handful of utilities come up in almost every codebase. Knowing they exist saves a lot of reinvention.

Salesforce has separate types for date-only fields (Date) and timestamp fields (Datetime). Use Date for things like Close Date or Birthdate, and Datetime when you need the time component (Created Date, Last Modified Date, scheduling). Both types have rich built-in methods for arithmetic, comparison, and formatting.

If you receive a date as a string (for example, from an integration), use Date.valueOf('2026-04-30') to parse it.

// Get today's date (date-only, no time component)
Date today = Date.today();
// Add or subtract days, months, or years
Date nextWeek = today.addDays(7);
// Calculate the number of days between two dates
Integer daysUntilQuarterEnd = today.daysBetween(Date.newInstance(today.year(), 12, 31));
// Get the current date AND time (timestamp)
Datetime now = Datetime.now();
// Format a Datetime as a human-readable string in a specific time zone
String formatted = now.format('yyyy-MM-dd HH:mm', 'Pacific/Auckland');
// Parsing a date from a string (common in integrations)
Date parsed = Date.valueOf('2026-04-30'); // returns 2026-04-30
// Comparing dates
if (today.isSameDay(parsed)) {
    System.debug('Dates match');
}

Strings are everywhere in Salesforce: field values, error messages, API responses, log output. Apex’s String class has dozens of built-in methods that save you from writing manual parsing logic. A few you’ll reach for constantly:

// Null-safe blank check; handles null, empty, and whitespace-only strings
if (String.isBlank(inputValue)) {
    System.debug('No value provided');
}
// Splitting, searching, and extracting
List<String> parts = 'one,two,three'.split(','); // ['one', 'two', 'three']
Boolean found = 'Acme Corporation'.containsIgnoreCase('acme'); // true
String domain = 'jane@example.com'.substringAfter('@'); // 'example.com'
// Building strings with placeholders (like String.format in Java/C#)
String msg = String.format('Created {0} records in {1}ms',
    new List<String>{ String.valueOf(count), String.valueOf(elapsed) });

Prefer String.isBlank() over == null or == '': it catches all three cases (null, empty, whitespace) in one call.

Communicating between Apex and external APIs almost always means JSON, and Apex has built-in JSON methods for converting between Apex objects and JSON strings. Use JSON.serialize() to turn an Apex object (a Map, List, or custom wrapper class) into a JSON string, for example when building the body of an outbound API callout. In the other direction, use JSON.deserializeUntyped() when the incoming JSON structure is dynamic or unknown, and JSON.deserialize(json, MyClass.class) when you have a strongly typed wrapper class to parse into; the typed form is the usual approach when handling responses from external API callouts. (For LWC communication, the framework handles JSON conversion automatically behind the scenes when you use @AuraEnabled methods and wrapper classes; you don’t need to call these methods yourself.)

// Build a Map to represent the JSON structure you want to send
Map<String, Object> payload = new Map<String, Object>{
    'amount' => 100,
    'status' => 'Submitted'
};
// Convert the Map into a JSON string (e.g., for an HTTP request body)
String body = JSON.serialize(payload);
// body = '{"amount":100,"status":"Submitted"}'
// Convert a JSON string back into a Map (useful when parsing an API response)
Map<String, Object> parsed = (Map<String, Object>) JSON.deserializeUntyped(body);
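For the strongly typed direction, here is a sketch with an assumed wrapper class whose field names match the JSON keys (PaymentResponse is a hypothetical name, not a platform type):

```apex
public class PaymentResponse {
    public Decimal amount;
    public String status;
}

// Parse JSON directly into the wrapper; fields are matched by name
PaymentResponse resp = (PaymentResponse) JSON.deserialize(
    '{"amount":100,"status":"Submitted"}',
    PaymentResponse.class
);
System.debug(resp.status); // 'Submitted'
```

The typed form gives you compile-time field checking and dot-notation access, instead of casting your way through nested Map<String, Object> structures.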

Use JSON.serializePretty() during development: it produces the same JSON but with line breaks and indentation, making it far easier to scan in debug logs.

👤 UserInfo — the running user’s context


The UserInfo class gives you details about the user running the current transaction without needing a SOQL query: their Id, profile, time zone, locale, and more. This is useful for assigning ownership, logging, or branching logic based on the user’s context.

Id currentUserId = UserInfo.getUserId(); // 18-character Id of the running user
String profileId = UserInfo.getProfileId(); // Id of the user's assigned Profile
String userTimeZone = UserInfo.getTimeZone().getDisplayName(); // e.g., 'New Zealand Standard Time'
String userLocale = UserInfo.getLocale(); // e.g., 'en_NZ'

Most of the time you reference objects and fields directly in your code (Account.Name, Opportunity.StageName) and the compiler checks they exist. But sometimes you need to discover what’s available at runtime; for example, building a generic utility that works across multiple objects, reading picklist values for a dynamic UI, or validating field-level access before displaying data. That’s what the Schema namespace is for: it lets you introspect your org’s metadata programmatically.

// Get metadata about the Account object (label, fields, record types, etc.)
Schema.DescribeSObjectResult describe = Account.SObjectType.getDescribe();
// Get a map of all fields on the object, keyed by API name
Map<String, Schema.SObjectField> fields = describe.fields.getMap();
// Get the picklist values for a specific field (useful for building dynamic UIs)
List<Schema.PicklistEntry> stages = Opportunity.StageName.getDescribe().getPicklistValues();

When you’re working with inheritance hierarchies (like the Approvable interface from earlier) or processing generic SObject lists, you sometimes need to check an object’s concrete type at runtime before casting it. The instanceof keyword returns true if an object is an instance of a specific class or interface, letting you safely branch and cast.

SObject record = [SELECT Id, Name FROM Account LIMIT 1];
if (record instanceof Account) {
    Account acc = (Account) record;
    System.debug('Account name: ' + acc.Name);
}

This comes up most often with polymorphic lookups (like Task.WhatId, which can point to an Account, Opportunity, or any other object) and when writing utility methods that accept the generic SObject type.

📊 Limits — checking governor limit consumption


Salesforce enforces strict governor limits on every transaction (number of SOQL queries, DML statements, CPU time, and more). We’ll cover these limits in depth in Part 3, but it’s worth knowing now that the Limits class lets you check your consumption mid-transaction. Each limit has a pair of methods: one that returns how much you’ve used so far (e.g., Limits.getQueries()) and one that returns the maximum allowed (e.g., Limits.getLimitQueries()). This is invaluable for debugging and for building safeguards into code that processes variable-sized data sets.

// getQueries() = SOQL queries used so far; getLimitQueries() = max allowed (typically 100)
System.debug('Queries used: ' + Limits.getQueries() + ' of ' + Limits.getLimitQueries());
// getDmlStatements() = DML operations used so far; getLimitDmlStatements() = max allowed (typically 150)
System.debug('DML statements: ' + Limits.getDmlStatements() + ' of ' + Limits.getLimitDmlStatements());
// getCpuTime() = milliseconds of CPU consumed so far; getLimitCpuTime() = max allowed (typically 10,000ms)
System.debug('CPU time: ' + Limits.getCpuTime() + 'ms of ' + Limits.getLimitCpuTime() + 'ms');
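As a sketch of such a safeguard, code doing optional extra work might bail out once most of a limit is consumed (the 90% threshold here is an arbitrary illustration, not a platform rule):

```apex
// Leave headroom: skip optional work when 90% of the SOQL query limit is used
if (Limits.getQueries() > Limits.getLimitQueries() * 0.9) {
    System.debug(LoggingLevel.WARN, 'Approaching SOQL limit; deferring optional enrichment.');
    return;
}
```

This kind of check can’t rescue fundamentally unbulkified code, but it can make best-effort features degrade gracefully instead of crashing the whole transaction.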

You now understand the syntax and structure of Apex. You can create variables, loop through collections, query the database, and wrap that logic in reusable classes.

However, writing Apex that compiles is different from writing Apex that scales. Salesforce is a shared environment, and poorly written code can bring a business process to a grinding halt.

In Part 3 — Triggers, Limits & Bulk Patterns, we’ll tackle the most critical concepts in Salesforce development: triggers, governor limits, and bulkification. You’ll learn how to write code that runs automatically when data changes and, more importantly, how to ensure it handles one record just as efficiently as it handles one thousand.