Salesforce Development Fundamentals: Part 3 - Triggers, Limits & Bulk Patterns
Updated 24/04/2026
In Part 2, you learned the building blocks of Apex. You can write classes to perform calculations and manipulate data. But code needs a way to execute. While you can call Apex manually (like clicking a button) or schedule it to run at night, the most common way Apex executes is in response to data changing in the database.
This is the domain of the Apex Trigger.
A trigger is a piece of code that fires automatically before or after records are inserted, updated, deleted, or undeleted. They are incredibly powerful, but with great power comes the responsibility to write code that doesn’t break the system. Because Salesforce is a multitenant platform, you must write triggers that respect Governor Limits and process data in bulk.
⚡ Anatomy of a Trigger
A trigger is defined on a specific object and specifies the events that cause it to fire. The syntax has three parts: a name (by convention, <ObjectName>Trigger), the object it listens on (after the on keyword), and one or more events in parentheses that determine when it fires. The body of the trigger (the code between the curly braces) runs every time one of those events occurs on that object.
```apex
trigger AccountTrigger on Account (before insert, after update) {
    // This code runs every time an Account is about to be inserted
    // OR has just been updated — the two events listed above.
    System.debug('The Account trigger fired!');
}
```

Unlike a class, you don’t call a trigger yourself. Salesforce calls it for you whenever the matching database event happens, whether that’s a user clicking Save, an API call inserting records, or another piece of Apex performing DML on that object.
🎯 Trigger Events
You can hook into the database save process at several distinct moments:
- `before insert`: The record is about to be saved for the first time but isn’t in the database yet. Ideal for validation and setting default field values.
- `after insert`: The record has been saved and now has an ID, but the transaction isn’t fully committed. Ideal for creating related records that need the new record’s ID.
- `before update`: An existing record is about to be saved with new values. Ideal for validation and modifying field values before the save.
- `after update`: The updated record has been saved to the database. Ideal for cross-object updates that depend on the committed state.
- `before delete`: A record is about to be removed. Ideal for validation checks that should prevent deletion.
- `after delete`: The record has been removed from the database. Ideal for cleaning up related data or logging.
- `after undelete`: A record has been restored from the Recycle Bin. Ideal for re-establishing relationships or recalculating summaries.
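As a sketch of how two of these contexts differ in practice: in `before insert` you set fields directly with no DML, while `after insert` is where you create related records using the freshly assigned IDs. The `Invoice__c` object, its `Status__c` field, and activity tracking on it are hypothetical here:

```apex
trigger InvoiceTrigger on Invoice__c (before insert, after insert) {
    if (Trigger.isBefore && Trigger.isInsert) {
        // before insert: no Id yet; assign defaults directly —
        // the pending save persists them without any DML.
        for (Invoice__c inv : Trigger.new) {
            if (inv.Status__c == null) {
                inv.Status__c = 'Draft';
            }
        }
    }
    if (Trigger.isAfter && Trigger.isInsert) {
        // after insert: records now have Ids, so related records
        // can reference them (built in bulk, single insert).
        List<Task> reviewTasks = new List<Task>();
        for (Invoice__c inv : Trigger.new) {
            reviewTasks.add(new Task(
                Subject = 'Review new invoice',
                WhatId = inv.Id
            ));
        }
        insert reviewTasks;
    }
}
```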
📋 Context Variables
When a trigger fires, how do you know which records caused it to fire? Salesforce provides Trigger Context Variables that contain the data in flight.
- `Trigger.new`: A `List<sObject>` containing the new versions of the records. (Available in insert and update triggers.)
- `Trigger.old`: A `List<sObject>` containing the old versions of the records before they were updated. (Available in update and delete triggers.)
- `Trigger.newMap`: A `Map<Id, sObject>` of the new records. (Available in `after insert` and `before`/`after update`.)
- `Trigger.oldMap`: A `Map<Id, sObject>` of the old records.
- `Trigger.isExecuting`, `Trigger.isBefore`, `Trigger.isAfter`, `Trigger.isInsert`, `Trigger.isUpdate`, `Trigger.isDelete`, `Trigger.isUndelete`: Boolean flags that tell you which event fired.
- `Trigger.size`: Number of records currently being processed (1 from a UI save, up to 200 from a typical API call).
- `Trigger.operationType`: A `System.TriggerOperation` enum (e.g. `BEFORE_INSERT`, `AFTER_UPDATE`), handy in `switch on` statements when dispatching to handler methods.
Example: A Simple Validation Trigger
```apex
trigger ExpenseClaimTrigger on Expense_Claim__c (before update) {
    // Iterate over the records being updated
    for (Expense_Claim__c claim : Trigger.new) {
        // Get the old version of this specific claim
        Expense_Claim__c oldClaim = Trigger.oldMap.get(claim.Id);

        // Check if the status changed from Approved back to Draft
        if (oldClaim.Status__c == 'Approved' && claim.Status__c == 'Draft') {
            // Prevent the save and show an error to the user
            claim.addError('You cannot revert an Approved claim back to Draft.');
        }
    }
}
```

Use the Apex Triggers Trailhead module to further your knowledge.
🧩 The Trigger Handler pattern
The example above puts logic directly inside the .trigger file. For a very simple org, this might be fine. But as your org grows, placing logic directly in triggers becomes an anti-pattern for three reasons:
- Order of Execution: If you have multiple triggers on the same object (e.g., `AccountTrigger1` and `AccountTrigger2`), Salesforce does not guarantee which one runs first.
- Reusability: Code stuck inside a trigger can’t be easily called from another class or a Lightning Component.
- Testing: It is much harder to isolate and test logic that is trapped in a trigger file.
✅ The Solution: One Trigger, One Handler
The industry standard is to have exactly one trigger per object, and that trigger does nothing but delegate work to a Handler Class.
1. The “Logic-less” Trigger:
```apex
trigger AccountTrigger on Account (before insert, after update) {
    if (Trigger.isBefore && Trigger.isInsert) {
        AccountTriggerHandler.handleBeforeInsert(Trigger.new);
    }
    if (Trigger.isAfter && Trigger.isUpdate) {
        // Trigger.oldMap is a Map<Id, sObject>, so an explicit cast is
        // needed to pass it as a Map<Id, Account>.
        AccountTriggerHandler.handleAfterUpdate(Trigger.new, (Map<Id, Account>) Trigger.oldMap);
    }
}
```

2. The Handler Class:
```apex
public class AccountTriggerHandler {

    public static void handleBeforeInsert(List<Account> newAccounts) {
        // Validation and defaulting logic goes here
        for (Account acc : newAccounts) {
            if (acc.Industry == 'Technology') {
                acc.Rating = 'Hot';
            }
        }
    }

    public static void handleAfterUpdate(List<Account> newAccounts, Map<Id, Account> oldAccountsMap) {
        // Cross-object logic goes here
        // e.g., if Account Status changes, update related Contacts
    }
}
```

This pattern keeps your org organized and ensures you have complete control over the order in which logic executes.
🔀 Using switch on for Cleaner Dispatch
Remember the `Trigger.operationType` enum from the context variables list? It lets you replace chains of if checks with a clean `switch on` block — especially useful as your trigger grows to handle more events:
```apex
// AccountTrigger.trigger — using switch on operationType
trigger AccountTrigger on Account (
    before insert, before update,
    after insert, after update, after delete
) {
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            AccountTriggerHandler.handleBeforeInsert(Trigger.new);
        }
        when BEFORE_UPDATE {
            AccountTriggerHandler.handleBeforeUpdate(Trigger.new, (Map<Id, Account>) Trigger.oldMap);
        }
        when AFTER_INSERT {
            AccountTriggerHandler.handleAfterInsert(Trigger.new);
        }
        when AFTER_UPDATE {
            AccountTriggerHandler.handleAfterUpdate(Trigger.new, (Map<Id, Account>) Trigger.oldMap);
        }
        when AFTER_DELETE {
            AccountTriggerHandler.handleAfterDelete(Trigger.old);
        }
    }
}
```

This is functionally identical to the if-based version above, but it scales more cleanly and makes it immediately obvious which events are handled. Most trigger frameworks use this pattern (or something similar) under the hood.
🚧 Governor Limits: The rules of multitenancy
Salesforce is a multitenant architecture. Each Salesforce org shares server resources with many other orgs belonging to other customers. To ensure one customer’s bad code doesn’t crash the server for everyone, Salesforce enforces strict runtime limits on Apex.
Think of it like a highway: everyone can drive, but there are speed limits, and you can’t block all the lanes. If your code exceeds these limits, Salesforce terminates the transaction instantly with an uncatchable error (e.g., System.LimitException: Too many SOQL queries).
🔴 The Big Four Limits (Per Transaction)
- SOQL Queries: Your code can execute a maximum of 100 SOQL queries per transaction. Every `[SELECT ...]` statement counts as one, so a query inside a loop can exhaust this limit in seconds.
- DML Statements: Your code can execute a maximum of 150 DML statements (insert, update, delete, or undelete) per transaction. Each individual DML call counts as one statement regardless of how many records it processes.
- CPU Time: All Apex code in a synchronous transaction must complete within 10,000 milliseconds (10 seconds). Complex loops, string manipulation, and JSON parsing are common consumers.
- Heap Size: The total memory used by all variables, collections, and objects in a synchronous transaction cannot exceed 6 MB. Large lists of sObjects or verbose JSON responses are the usual culprits.
A “transaction” starts when a user clicks save (or an API call is made) and ends when the data is finally committed to the database. All triggers, classes, and workflows that fire during that save process share these limits.
📊 Other limits worth knowing
The table below includes some of the Big Four again so you can compare their synchronous and asynchronous thresholds side by side. It also introduces a few additional limits that cause specific failures you should recognise:
| Limit | Synchronous | Asynchronous (Batch / Future / Queueable) |
|---|---|---|
| SOQL queries | 100 | 200 |
| DML statements | 150 | 150 |
| Total query rows returned | 50,000 | 50,000 |
| Records processed per DML | 10,000 | 10,000 |
| HTTP callouts | 100 | 100 |
| @future invocations | 50 | 50 |
| Heap size | 6 MB | 12 MB |
| CPU time | 10,000 ms | 60,000 ms |
Notice that heap size doubles and CPU time jumps from 10 to 60 seconds in asynchronous contexts, and SOQL queries rise from 100 to 200. Asynchronous Apex (@future, Queueable, Batch) is the standard escape hatch when you legitimately need higher limits or callouts from a trigger context. We’ll dedicate the whole of Part 4 to these patterns.
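As a preview of those patterns, here is a minimal hedged sketch of the `@future` escape hatch; the class and method names are hypothetical:

```apex
public class AccountEnrichmentService {

    // Queued during the current transaction, executed later in its own
    // transaction with the higher asynchronous limits shown above.
    // callout=true also permits HTTP callouts, which are not allowed
    // directly in synchronous trigger context.
    @future(callout=true)
    public static void enrichAsync(Set<Id> accountIds) {
        // ... re-query the accounts by Id and perform the callout here ...
    }
}
```

From a trigger handler you would collect the record Ids and call `AccountEnrichmentService.enrichAsync(accountIds)` once, outside any loop, since each call consumes one of the 50 `@future` invocations allowed per transaction.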
For the complete, up-to-date list of every governor limit, see the official Apex Governor Limits reference.
You can also check limits at runtime, which is invaluable when debugging a slow transaction:
```apex
System.debug('SOQL used: ' + Limits.getQueries() + ' / ' + Limits.getLimitQueries());
System.debug('CPU used: ' + Limits.getCpuTime() + ' / ' + Limits.getLimitCpuTime());
```

🛡️ Defensive Coding and Graceful Degradation
Governor limits tell you what the platform won’t allow. Defensive coding is about what you choose to handle before the platform forces an error.
Null Safety
Section titled “Null Safety”The most common runtime error in Apex is NullPointerException: calling a method on a variable that is null. Apex provides the safe navigation operator (?.) to short-circuit these calls:
```apex
// Without safe navigation — throws NullPointerException if con.Account is null
String accountName = con.Account.Name;

// With safe navigation — returns null instead of throwing
String accountName = con.Account?.Name;
```

Use safe navigation when traversing relationships that might not be populated. For critical values that must exist, fail explicitly with a clear message rather than silently returning null:
```apex
if (con.AccountId == null) {
    con.addError('A Contact must be associated with an Account.');
    return;
}
```

Check Before You Spend
When your code path might run expensive operations conditionally, check remaining limits before proceeding:
```apex
public static void enrichAccounts(List<Account> accounts) {
    for (Account acc : accounts) {
        // Stop if we're close to the SOQL limit
        if (Limits.getQueries() >= Limits.getLimitQueries() - 5) {
            System.debug(LoggingLevel.WARN, 'Approaching SOQL limit, stopping enrichment.');
            break;
        }
        // ... perform query-heavy enrichment ...
    }
}
```

This is not a substitute for bulkification (which avoids the problem entirely), but it’s a safety net for code that integrates with third-party libraries or handles unpredictable input volumes.
Log and Continue vs. Throw and Abort
Not every error should crash the transaction. Consider the impact:
- Throw and abort when data integrity is at risk. If a validation fails or a required related record is missing, stopping the transaction is the right choice.
- Log and continue when the failure is isolated. If one record out of 200 has bad data but the other 199 are fine, log the failure and keep processing. Use `Database.insert(records, false)` for partial success, and collect `Database.SaveResult` errors for reporting.
```apex
Database.SaveResult[] results = Database.insert(recordsToInsert, false);

for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        System.debug(LoggingLevel.ERROR,
            'Failed to insert record ' + i + ': ' + results[i].getErrors()[0].getMessage()
        );
    }
}
```

📈 Bulkification: Writing code that scales
Because limits are strictly enforced per transaction, your code must be written to handle 1 record just as efficiently as it handles 200 records. This practice is called Bulkification.
When a trigger fires, Trigger.new is a List. If a user updates one record in the UI, the list has 1 item. If an integration updates 200 records via the API, that exact same trigger fires once, and the list has 200 items.
If your code assumes there will only ever be 1 record, it may fail horribly with multiple records.
🚫 Anti-Pattern 1: SOQL inside a Loop
This is the most common mistake new developers make.
```apex
// BAD EXAMPLE: Do not do this!
public static void handleAfterUpdate(List<Contact> updatedContacts) {
    for (Contact con : updatedContacts) {
        // We need the Account Name for some logic.
        // Doing a SOQL query inside the loop!
        Account relatedAcc = [SELECT Name FROM Account WHERE Id = :con.AccountId];

        if (relatedAcc.Name == 'Acme Corp') {
            // Do something...
        }
    }
}
```

If an integration updates 150 Contacts, this loop runs 150 times. It executes 150 SOQL queries. The limit is 100. The transaction crashes at contact #101.
✅ The Solution: Collect, Query, Process
To bulkify your code, follow this standard three-step pattern:
1. Collect — Loop through the records and gather the IDs you need to query into a `Set`.
2. Query — Perform a single SOQL query outside the loop to get all related data, storing the results in a `Map` for fast lookup.
3. Process — Loop through the records again, using the queried `Map` data instead of making individual queries.
```apex
// GOOD EXAMPLE: Bulkified Code
public static void handleAfterUpdate(List<Contact> updatedContacts) {
    // 1. COLLECT the Account IDs we need
    Set<Id> accountIds = new Set<Id>();
    for (Contact con : updatedContacts) {
        if (con.AccountId != null) {
            accountIds.add(con.AccountId);
        }
    }

    // 2. QUERY once, storing the results in a Map for fast lookup
    Map<Id, Account> accountMap = new Map<Id, Account>([
        SELECT Id, Name FROM Account WHERE Id IN :accountIds
    ]);

    // 3. PROCESS the records using the Map
    for (Contact con : updatedContacts) {
        if (con.AccountId != null) {
            // Retrieve the related Account from memory, not the database
            Account relatedAcc = accountMap.get(con.AccountId);

            if (relatedAcc != null && relatedAcc.Name == 'Acme Corp') {
                // Do something...
            }
        }
    }
}
```

This bulkified version uses exactly 1 SOQL query, regardless of whether updatedContacts has 1 record or 200 records.
🚫 Anti-Pattern 2: DML inside a Loop
The same principle applies to DML statements. Consider this code that creates a follow-up Task for every Opportunity that just closed:
```apex
// BAD EXAMPLE: Do not do this!
public static void handleAfterUpdate(List<Opportunity> updatedOpps, Map<Id, Opportunity> oldOpps) {
    for (Opportunity opp : updatedOpps) {
        Opportunity oldOpp = oldOpps.get(opp.Id);

        if (opp.StageName == 'Closed Won' && oldOpp.StageName != 'Closed Won') {
            Task followUp = new Task(
                Subject = 'Follow up on closed deal: ' + opp.Name,
                WhatId = opp.Id,
                OwnerId = opp.OwnerId,
                ActivityDate = Date.today().addDays(7)
            );
            insert followUp; // DML inside the loop!
        }
    }
}
```

If 150 Opportunities close in a single batch, this fires 150 insert statements. The limit is 150, so you’re right on the edge and might get away with it, but any transaction containing more than 150 qualifying Opportunities will push you over.
✅ The Fix: Collect Records, DML Once
Build a List inside the loop, then perform the DML operation on the entire list after the loop:
```apex
// GOOD EXAMPLE: Bulkified DML
public static void handleAfterUpdate(List<Opportunity> updatedOpps, Map<Id, Opportunity> oldOpps) {
    List<Task> tasksToInsert = new List<Task>();

    for (Opportunity opp : updatedOpps) {
        Opportunity oldOpp = oldOpps.get(opp.Id);

        if (opp.StageName == 'Closed Won' && oldOpp.StageName != 'Closed Won') {
            tasksToInsert.add(new Task(
                Subject = 'Follow up on closed deal: ' + opp.Name,
                WhatId = opp.Id,
                OwnerId = opp.OwnerId,
                ActivityDate = Date.today().addDays(7)
            ));
        }
    }

    // Single DML outside the loop — handles 1 or 200 records equally
    if (!tasksToInsert.isEmpty()) {
        insert tasksToInsert;
    }
}
```

This version uses exactly 1 DML statement no matter how many Opportunities are processed.
🔍 Selective Queries and Indexes
Bulkification keeps your query count low. Query selectivity keeps each individual query fast. As your data volume grows, a non-selective query can time out long before you hit the 50,000-row limit.
A query is considered selective when its WHERE clause efficiently narrows down which records to return. On smaller objects this isn’t something you need to worry about, as Salesforce can scan the entire table quickly. However, on large objects (100,000+ records), the query optimizer needs an indexed field in the WHERE clause that filters the results down to a small percentage of total rows (generally under 10% for standard indexes, or under 5% for custom indexes). If the optimizer can’t find a selective path, Salesforce may perform a full table scan, which can cause timeouts. Salesforce automatically indexes:
- All standard ID and lookup fields
- Custom fields marked `Unique` or `External Id`
- Some standard fields (`Name`, `OwnerId`, `CreatedDate`, etc.)
- Custom fields you’ve requested an index on
```apex
// Likely non-selective on a 5M-row Account table: scans most of the table
List<Account> slowQuery = [SELECT Id FROM Account WHERE Industry = 'Technology'];

// Selective: filters on indexed Id values from upstream context
List<Account> fastQuery = [SELECT Id FROM Account WHERE Id IN :accountIds];
```

When you genuinely need to filter on a non-indexed field at scale, batch the work asynchronously rather than running it in a trigger. For a deeper dive into query performance, indexing strategies, and optimisation techniques, see the SOQL Performance Optimization guide.
🪜 Order of Execution
When a record is saved, whether by a user clicking “Save,” an API call, or any other DML operation, a complex sequence of events occurs. Apex triggers are just one part of it. Understanding the Order of Execution is critical for Salesforce developers, especially when debugging why things don’t behave as expected.
Here’s a simplified version of what happens during a save (insert or update). Don’t worry about memorising every step right now. The key takeaway is knowing where your triggers sit relative to everything else:
1. Original record loaded from the database (for updates only).
2. New field values from the request are applied on top of the existing record.
3. System validations run (required fields, foreign keys, field-format rules) along with most field-level validations.
4. Before-save record-triggered Flows run. These are the fast “before-save” variant designed for setting field values on the same record without a DML call.
5. `before` Apex triggers run. This is your first chance to inspect or modify the record in code.
6. Most custom validation rules run, plus duplicate rules.
7. Record is saved to the database, but the transaction is not yet committed.
8. `after` Apex triggers run. The record now has an ID (for inserts) and is in the database, but the transaction can still be rolled back.
9. Assignment rules, auto-response rules, workflow rules (legacy), and escalation rules run.
10. After-save record-triggered Flows run. (Previously Process Builders also ran here, but Salesforce retired them in favour of Flows.)
11. Roll-up summary fields and cross-object formulas re-evaluate if needed.
12. Sharing rules are recalculated.
13. Commit to database: all changes become permanent and post-commit logic (such as `@future` jobs queued earlier) is dispatched.
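One practical consequence of steps 5 and 8 above: in a `before` trigger you change the record by assigning fields directly, with no DML statement, because the pending save persists the change for you. A minimal sketch:

```apex
trigger AccountTrigger on Account (before update) {
    for (Account acc : Trigger.new) {
        // Step 5 (before trigger): direct assignment is persisted by the
        // pending save itself; no update statement is needed here, and it
        // consumes nothing from the 150-DML-statement limit.
        acc.Description = 'Reviewed on ' + String.valueOf(Date.today());
    }
}
```

By contrast, changing the same record in an `after` trigger requires building a fresh sObject with the record’s Id and issuing an explicit `update`, as the Platinum SLA example later in this article shows.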
🔁 Recursion Prevention
Because triggers can cause other records to update, which fire other triggers (or the same trigger again due to the order of execution), you can accidentally create infinite loops.
A simple way to prevent a trigger handler from running twice in the same transaction is to use a static Boolean variable. Static variables in Apex live for the duration of a single transaction (essentially a single user request or API call) and reset between transactions, which makes them perfect for “have we already run this?” flags.
Most trigger frameworks (such as the Trigger Actions Framework, fflib, and Kevin O’Hara’s TriggerHandler) have recursion prevention built in, so you won’t need to manage these flags manually in a production org. But it’s important to understand the underlying technique:
```apex
public class AccountTriggerHandler {
    // Static variable to track if we've run
    public static Boolean hasRun = false;

    public static void handleAfterUpdate(List<Account> newAccounts) {
        // If we've already run, exit immediately
        if (hasRun) {
            return;
        }

        // Mark that we are running
        hasRun = true;

        // ... perform expensive logic ...
    }
}
```

One caveat: a single DML operation on more than 200 records is processed in chunks of 200, and each chunk re-enters the same trigger within the same transaction. This naive flag would skip every chunk after the first, which is one reason frameworks track processed record Ids instead.

The Triggers and Order of Execution developer guide provides in-depth knowledge on the execution order. There is also an awesome Order of Execution Flowchart diagram.
🔒 Idempotency: Safe to Run Twice
Recursion prevention stops your code from re-entering itself within a single transaction. But what about situations where the same logic runs more than once across separate transactions? This happens more often than you might expect:
- A Platform Event is redelivered after a subscriber failure.
- A batch job is re-run after a partial failure.
- An integration retries a callout that already succeeded (the acknowledgement was lost in transit).
- A user clicks Save twice quickly before the page refreshes.
In these cases, your code should produce the same result whether it runs once or multiple times. This property is called idempotency.
The simplest pattern is check before you act: query for existing results before creating new ones.
```apex
public class ComplianceTaskService {
    public static void createReviewTasks(List<Expense_Claim__c> claims) {
        if (claims == null || claims.isEmpty()) {
            return;
        }

        // 1. Collect IDs for claims that qualify and already have an Id.
        //    This method is intended for after-insert or after-update logic.
        Set<Id> claimIds = new Set<Id>();
        for (Expense_Claim__c claim : claims) {
            if (claim.Id != null && claim.Amount__c != null && claim.Amount__c > 10000) {
                claimIds.add(claim.Id);
            }
        }

        if (claimIds.isEmpty()) {
            return;
        }

        // 2. Check which claims already have a review task.
        Set<Id> claimsWithExistingTask = new Set<Id>();
        for (Task taskRecord : [
            SELECT WhatId
            FROM Task
            WHERE WhatId IN :claimIds
            AND Subject = 'Compliance Review Required'
        ]) {
            claimsWithExistingTask.add(taskRecord.WhatId);
        }

        // 3. Only create tasks for claims that do not already have one.
        List<Task> tasksToCreate = new List<Task>();
        for (Id claimId : claimIds) {
            if (!claimsWithExistingTask.contains(claimId)) {
                tasksToCreate.add(new Task(
                    Subject = 'Compliance Review Required',
                    WhatId = claimId,
                    OwnerId = UserInfo.getUserId()
                ));
            }
        }

        if (!tasksToCreate.isEmpty()) {
            insert tasksToCreate;
        }
    }
}
```

If this method runs a second time for the same claims, the query in step 2 finds the existing Tasks and step 3 skips them. No duplicates are created. In a production org, you would usually use a stronger duplicate marker than the Task subject alone, such as a custom lookup or external reference field.
⚖️ Triggers vs. Flow: When to use what?
You now know how to write a trigger, but when should you? As an admin stepping into development, it’s easy to assume code is always better. It’s not. Salesforce actively encourages a “clicks before code” philosophy, and for good reason: declarative tools are easier to maintain and more accessible to the wider team.
Use Record-Triggered Flows when:
- Updating fields on the same record (Before-Save Flows are extremely fast since they skip DML).
- Creating or updating a small number of related records.
- Sending email alerts or posting to Chatter.
- The logic needs to be visible to admins or is likely to change frequently.
Use Apex Triggers when:
- The logic spans multiple objects with complex relationship queries.
- You’re working with large data volumes where performance is critical (Apex is generally faster at processing bulk lists than Flow).
- You need to query or validate against external objects or make HTTP callouts.
- You need to delete related records (Flow handles deletion poorly).
- You require fine-grained transaction control, such as partial rollbacks or custom error handling.
The Hybrid Approach: In practice, many teams combine both. A Record-Triggered Flow evaluates criteria and, when complex processing is needed, calls an Invocable Apex method. This gives admins control over when the logic runs, while developers own what it does. The Apex side is simply a static method annotated with @InvocableMethod:
```apex
public class ExpenseActions {
    @InvocableMethod(label='Create Compliance Review Task')
    public static void createComplianceTasks(List<Id> claimIds) {
        // bulk-safe logic here
    }
}
```

Once deployed, this method appears in Flow Builder as a callable action that can be used in any Flow.
For a more detailed breakdown of when to choose Flow over Apex (and vice versa), see the official Flow vs Apex Decision Guide from Salesforce Architects.
🧱 Putting it all together
Let’s combine everything from this article into a single, realistic scenario. Imagine a business rule: “When a Case is created, if the parent Account’s SLA_Level__c is ‘Platinum’, set the Case Priority to ‘High’ and create a Task for the Account Owner to review within 24 hours.”
This requires a trigger, a handler, a bulk-safe SOQL query, and a bulk-safe DML operation.
The Trigger:
```apex
trigger CaseTrigger on Case (after insert) {
    switch on Trigger.operationType {
        when AFTER_INSERT {
            CaseTriggerHandler.handleAfterInsert(Trigger.new);
        }
    }
}
```

The Handler:
```apex
public class CaseTriggerHandler {

    public static void handleAfterInsert(List<Case> newCases) {
        setPlatinumCasePriority(newCases);
    }

    private static void setPlatinumCasePriority(List<Case> newCases) {
        // 1. COLLECT — gather the Account IDs we need
        Set<Id> accountIds = new Set<Id>();
        for (Case c : newCases) {
            if (c.AccountId != null) {
                accountIds.add(c.AccountId);
            }
        }

        if (accountIds.isEmpty()) {
            return; // Nothing to do — exit early
        }

        // 2. QUERY — one query to get Account details
        Map<Id, Account> accountMap = new Map<Id, Account>([
            SELECT Id, SLA_Level__c, OwnerId FROM Account WHERE Id IN :accountIds
        ]);

        // 3. PROCESS — build lists for DML
        List<Case> casesToUpdate = new List<Case>();
        List<Task> tasksToInsert = new List<Task>();

        for (Case c : newCases) {
            if (c.AccountId == null) {
                continue;
            }

            Account parentAccount = accountMap.get(c.AccountId);

            if (parentAccount != null && parentAccount.SLA_Level__c == 'Platinum') {
                // Update the Case priority (we need a new sObject with the Id
                // because after-insert triggers receive read-only records)
                casesToUpdate.add(new Case(
                    Id = c.Id,
                    Priority = 'High'
                ));

                // Create a review Task for the Account Owner
                tasksToInsert.add(new Task(
                    Subject = 'Review Platinum SLA Case: ' + c.CaseNumber,
                    WhatId = c.Id,
                    OwnerId = parentAccount.OwnerId,
                    ActivityDate = Date.today().addDays(1)
                ));
            }
        }

        // 4. DML — one update, one insert, outside all loops
        if (!casesToUpdate.isEmpty()) {
            update casesToUpdate;
        }
        if (!tasksToInsert.isEmpty()) {
            insert tasksToInsert;
        }
    }
}
```

Let’s count the governor limit consumption:
| Resource | Count | Limit |
|---|---|---|
| SOQL queries | 1 | 100 |
| DML statements | 2 (one update, one insert) | 150 |
Whether 1 Case fires this trigger or 200, the numbers stay the same. That’s bulkification in action.
🎯 Final Thoughts
You now understand the rules of the road for writing production-quality Apex. Let’s recap what you’ve learned:
- Triggers fire automatically when records are inserted, updated, deleted, or undeleted. You know how to use context variables to access the data in flight.
- The Trigger Handler pattern keeps your org organised with one trigger per object, delegating all logic to testable, reusable handler classes.
- Governor Limits are the guardrails of the multitenant platform: respect them, and your code will scale; ignore them, and the platform will stop you.
- Bulkification ensures your code handles 1 record just as efficiently as 200, using the Collect → Query → Process pattern for SOQL and the collect-then-DML pattern for inserts and updates.
- Order of Execution determines when your trigger runs relative to validation rules, Flows, and other automation. Understanding it is the key to debugging unexpected behaviour.
Every bulk pattern you’ve written here must be proven with test data at volume: inserting 200 records, asserting the results, and confirming you stay within limits. That’s exactly where Part 5 picks up.
But first, there’s one more topic to cover. Your triggers are synchronous: they run while the user waits. What happens when you need to process 50,000 records, make an HTTP callout from a trigger, or schedule a job to run every night at 2 AM?
In Part 4, we’ll tackle Asynchronous Apex: @future methods, Queueable jobs, Batch Apex, and Scheduled Apex, giving you the tools to move heavy work off the synchronous path.
🚀 Next steps
Synchronous code is fine until you need to process 50,000 records, make a callout from a trigger, or run work overnight. In Part 4 — Asynchronous Apex, you’ll learn @future methods, Queueable jobs, Batch Apex, and Scheduled Apex — the patterns that move heavy work off the synchronous path so users don’t have to wait for it.