Mastering Apex Triggers: Patterns That Scale

Stop writing triggers that break at scale. Learn the handler pattern, the one-trigger rule, bulkification, and how to test 200-record scenarios your org will actually see in production.

Why Most Triggers Are Broken

In nearly every Salesforce org I've consulted on, I find the same pattern: triggers written quickly to solve an immediate problem, with no thought to what happens when a data loader processes 50,000 records. They work fine in testing. They fall apart in production.

The core problem is that Apex triggers operate in batches of up to 200 records. Salesforce groups records to minimize server round-trips. If your trigger assumes it's always dealing with a single record — because a user only clicks Save once — you're setting a trap.

Key Concept

Salesforce processes triggers in batches of up to 200 records. Any trigger that assumes Trigger.new[0] is the only record will fail under bulk operations, data migrations, or even a Flow firing on multiple records simultaneously.
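
To make the trap concrete, here is a hypothetical sketch of that single-record assumption (the Rating default is illustrative, not from any real org):

```apex
// ❌ A trigger that assumes one record per invocation.
// Under a bulk insert of 200 Accounts, only the first record is touched;
// the other 199 are silently ignored.
trigger AccountTrigger on Account (before insert) {
  Account acc = Trigger.new[0];  // single-record assumption: the trap
  if (acc.Rating == null) {
    acc.Rating = 'Warm';
  }
}
```

A single-record save in the UI passes every test you run by hand; the bug only surfaces when Trigger.new contains more than one record.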

The One-Trigger-Per-Object Rule

The first thing to enforce: one trigger per object, period. Multiple triggers on the same object execute in an order Salesforce does not guarantee; there is no way to ensure TriggerA runs before TriggerB. As soon as you have two triggers on Account, you've introduced ordering nondeterminism you can't control.

The single trigger acts as a dispatcher — routing execution to appropriate handler methods based on context:

trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete
) {
  AccountTriggerHandler handler = new AccountTriggerHandler();

  if (Trigger.isBefore) {
    if (Trigger.isInsert) handler.beforeInsert(Trigger.new);
    if (Trigger.isUpdate) handler.beforeUpdate(Trigger.new, Trigger.oldMap);
    if (Trigger.isDelete) handler.beforeDelete(Trigger.old);
  }
  if (Trigger.isAfter) {
    if (Trigger.isInsert)   handler.afterInsert(Trigger.newMap);
    if (Trigger.isUpdate)   handler.afterUpdate(Trigger.newMap, Trigger.oldMap);
    if (Trigger.isDelete)   handler.afterDelete(Trigger.oldMap);
    if (Trigger.isUndelete) handler.afterUndelete(Trigger.newMap);
  }
}

The Handler Pattern

The handler class is where the work happens. Each trigger context gets its own method. Business logic is delegated to service classes. The handler only orchestrates:

public class AccountTriggerHandler {

  public void beforeInsert(List<Account> newAccounts) {
    AccountService.setDefaultRating(newAccounts);
    AccountService.validateNames(newAccounts);
  }

  public void afterInsert(Map<Id, Account> newAccountMap) {
    AccountService.createDefaultContacts(newAccountMap);
    AccountService.notifyAccountTeam(newAccountMap.keySet());
  }

  public void beforeUpdate(
    List<Account> newAccounts,
    Map<Id, Account> oldAccountMap
  ) {
    AccountService.preventCriticalFieldChange(newAccounts, oldAccountMap);
  }
}

Tip

Keep service methods static, and have them accept collections: AccountService.setDefaultRating(List<Account>), never a single record. This forces bulkification at the service layer, where it belongs.
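
As a minimal sketch of what such a service method might look like (the 'Warm' default is an assumption for illustration):

```apex
public class AccountService {

  // Runs in before insert context, so field assignments are saved
  // with the triggering DML and no extra update statement is needed.
  public static void setDefaultRating(List<Account> newAccounts) {
    for (Account acc : newAccounts) {
      if (acc.Rating == null) {
        acc.Rating = 'Warm';  // illustrative default value
      }
    }
  }
}
```

Because the method takes a List, it costs nothing extra when called with 200 records, and it cannot be called in a way that hides a per-record assumption.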

Bulkification in Practice

Here's the most common mistake I see — code that looks bulkified but isn't:

// ❌ WRONG — SOQL inside a loop
public void afterInsert(Map<Id, Account> newMap) {
  for (Account acc : newMap.values()) {
    // Fires a SOQL query for EVERY account — hits limits fast
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id];
  }
}

// ✅ CORRECT — Single SOQL, Map lookup in loop
public void afterInsert(Map<Id, Account> newMap) {
  Map<Id, List<Contact>> contactsByAccountId = new Map<Id, List<Contact>>();

  for (Contact c : [SELECT Id, AccountId FROM Contact
                    WHERE AccountId IN :newMap.keySet()]) {
    if (!contactsByAccountId.containsKey(c.AccountId)) {
      contactsByAccountId.put(c.AccountId, new List<Contact>());
    }
    contactsByAccountId.get(c.AccountId).add(c);
  }

  for (Id accountId : newMap.keySet()) {
    List<Contact> contacts = contactsByAccountId.get(accountId);
    if (contacts != null) { /* process */ }
  }
}
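
The same rule applies to DML: collect records inside the loop, then issue one statement outside it. A hedged sketch of how a method like AccountService.createDefaultContacts might follow this pattern (the Contact field values are placeholders, not a prescribed design):

```apex
public static void createDefaultContacts(Map<Id, Account> newAccountMap) {
  List<Contact> contactsToInsert = new List<Contact>();

  for (Account acc : newAccountMap.values()) {
    contactsToInsert.add(new Contact(
      LastName = 'Primary Contact',  // illustrative placeholder
      AccountId = acc.Id
    ));
  }

  // One DML statement for the whole batch; never `insert` inside the loop
  if (!contactsToInsert.isEmpty()) {
    insert contactsToInsert;
  }
}
```

An insert inside the loop would consume one of the 150 allowed DML statements per record and fail partway through any sizable batch.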

Testing for 200 Records

Your test class must prove the trigger works at bulk scale. A test that inserts one record proves almost nothing:

@isTest
private class AccountTriggerHandlerTest {

  @isTest
  static void testBeforeInsert_Bulk() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
      accounts.add(new Account(Name = 'Test Account ' + i, Industry = 'Technology'));
    }

    Test.startTest();
    insert accounts;
    Test.stopTest();

    List<Account> inserted = [SELECT Id, Rating FROM Account WHERE Name LIKE 'Test Account%'];
    System.assertEquals(200, inserted.size(), 'All 200 accounts should be inserted');
    for (Account acc : inserted) {
      System.assertNotEquals(null, acc.Rating, 'Rating should be set by trigger');
    }
  }
}

Production-Ready Checklist

- One trigger per object, acting only as a dispatcher
- Business logic in service classes; the handler only orchestrates
- Service methods static, always accepting collections
- No SOQL or DML statements inside loops
- Tests that insert 200 records and assert on every one of them

Triggers written this way won't just pass code review — they'll still work correctly two years from now when someone runs a bulk data migration touching every Account in the org. That's the real test.
