Salesforce-Platform-Integration-Architect Practice Test Questions

Total 106 Questions


Last Updated On : 28-Aug-2025 - Spring 25 release



Preparing with the Salesforce-Platform-Integration-Architect practice test is essential to ensure success on the exam. This Salesforce SP25 test allows you to familiarize yourself with the Salesforce-Platform-Integration-Architect exam question format and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the Salesforce certification Spring 2025 release exam on your first attempt.

Surveys from different platforms and user-reported pass rates suggest that candidates who use Salesforce-Platform-Integration-Architect practice exams are roughly 30-40% more likely to pass.

Universal Containers (UC) is currently managing a custom monolithic web service that runs on an on-premise server. This monolithic web service is responsible for Point-to-Point (P2P) integrations between:
1. Salesforce and a legacy billing application
2. Salesforce and a cloud-based Enterprise Resource Planning application
3. Salesforce and a data lake.
UC has found that the tight interdependencies between systems are causing integrations to fail. What should an architect recommend to decouple the systems and improve performance of the integrations?



A. Re-write and optimize the current web service to be more efficient.


B. Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.


C. Use the Salesforce Bulk API when integrating back into Salesforce.


D. Move the custom monolithic web service from on-premise to a cloud provider.





B.
  Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.

Explanation:

The scenario describes Universal Containers (UC) relying on a custom monolithic web service to handle multiple integrations between Salesforce, a legacy billing system, an ERP, and a data lake. Because this single service is responsible for all Point-to-Point (P2P) integrations, the result is tight coupling — meaning if one system fails or changes, the entire integration can break. This leads to poor performance, high maintenance costs, and difficulty scaling.
To solve this, the architect’s recommendation should aim to decouple the systems and improve resiliency and performance.

Option Analysis:

A. Re-write and optimize the current web service to be more efficient
This might improve performance temporarily, but it does not solve the core architectural problem: the tight coupling between systems.
The architecture would still be monolithic, which means future failures or changes would continue to impact all integrations.
❌ Not the right long-term solution.

B. Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture
This is the best answer.
By breaking down the monolith into microservices, each integration (Salesforce → billing, Salesforce → ERP, Salesforce → data lake) can run independently.
Microservices communicate via APIs, message queues, or an enterprise service bus (ESB), reducing dependencies and improving scalability, performance, and fault tolerance.
✅ Correct.

C. Use the Salesforce Bulk API when integrating back into Salesforce
The Bulk API is useful for large data volume operations, but it doesn’t address the core issue of tight coupling across systems.
This might help with performance inside Salesforce but doesn’t improve the architecture overall.
❌ Not the right solution for this scenario.

D. Move the custom monolithic web service from on-premise to a cloud provider
Hosting in the cloud could make it more scalable and reliable at the infrastructure level, but the monolithic design problem still remains.
You’d still have tight coupling and cascading failures.
❌ Not the right solution.

Why B?
✅ Microservices architecture decouples integrations by breaking them into smaller, independent services.
✅ Each service can fail or scale without affecting the others.
✅ This modernizes UC’s integration layer and supports long-term growth, agility, and better error isolation.

Why not A, C, D?
➡️ A. only improves efficiency of the existing monolith but doesn’t solve coupling.
➡️ C. improves Salesforce-side performance but not the integration architecture.
➡️ D. changes where the monolith runs, but doesn’t change how it works.
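
As a rough illustration of the decoupling idea, here is a minimal Apex sketch, assuming a hypothetical Order_Change__e custom platform event: Salesforce publishes one event per order change, and the billing, ERP, and data lake microservices each subscribe to and process the event independently, so one failing consumer no longer breaks the others.

```apex
// Minimal sketch, assuming a custom platform event named Order_Change__e
// with OrderId__c and Status__c text fields defined in Setup.
public with sharing class OrderEventPublisher {
    public static void publishOrderChanges(List<Order> orders) {
        List<Order_Change__e> events = new List<Order_Change__e>();
        for (Order o : orders) {
            // One event per order; billing, ERP, and data lake services
            // each subscribe to this event independently.
            events.add(new Order_Change__e(OrderId__c = o.Id, Status__c = o.Status));
        }
        // EventBus.publish returns one SaveResult per event, so publish
        // failures can be logged and retried without blocking anything else.
        for (Database.SaveResult sr : EventBus.publish(events)) {
            if (!sr.isSuccess()) {
                System.debug('Event publish failed: ' + sr.getErrors());
            }
        }
    }
}
```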

References:
Salesforce Architect – Integration Patterns
Salesforce Trailhead – Integration Patterns and Practices
Martin Fowler: Microservices Architecture

Northern Trail Outfitters needs to make synchronous callouts to "available to promise" services to query product availability and reserve inventory during the customer checkout process. Which two considerations should an integration architect make when building a scalable integration solution?
Choose 2 answers



A. The typical and worst-case historical response times.


B. The number of batch jobs that can run concurrently.


C. How many concurrent service calls are being placed.


D. The maximum query cursors open per user on the service.





A.
  The typical and worst-case historical response times.

C.
  How many concurrent service calls are being placed.

Explanation:

The scenario involves Northern Trail Outfitters needing to make synchronous callouts to "available to promise" services during the customer checkout process to query product availability and reserve inventory. This must be done in a scalable way, meaning the solution should handle many users and transactions without slowing down. Let’s analyze the options:

A. The typical and worst-case historical response times
This option is correct. Synchronous callouts depend on external services responding quickly. Knowing the typical and worst-case historical response times helps the architect plan how long the checkout process might take. If response times are slow, it could delay customers, so the architect needs to ensure the system can handle these delays without failing, making this a key consideration for scalability.

C. How many concurrent service calls are being placed
This option is correct. During checkout, many customers might use the system at the same time, leading to multiple concurrent callouts to the service. The architect must ensure the integration can handle this load without crashing or slowing down. This is critical for scalability, as it determines how many users can shop and check out together.

B. The number of batch jobs that can run concurrently
This option is incorrect. Batch jobs are for processing large amounts of data in the background, not for real-time synchronous callouts during checkout. Since the requirement is for immediate responses during the customer process, batch job limits don’t apply here and aren’t relevant to scalability in this context.

D. The maximum query cursors open per user on the service
This option is incorrect. Query cursors relate to how many database queries a user can have open at once in Salesforce, not the external service’s availability or response. This limit applies to internal Salesforce operations, not the synchronous callouts to the "available to promise" service, so it’s not a key factor for this integration.

Why A and C?
Considering the typical and worst-case historical response times (A) ensures the system can handle delays from the external service, keeping the checkout smooth. Evaluating how many concurrent service calls are being placed (C) ensures the solution can scale to support many customers at once. Together, these address performance and capacity, which are essential for a scalable integration.

Why not B and D?
The number of batch jobs (B) doesn’t apply to real-time callouts, which are about immediate action, not background processing. The maximum query cursors (D) is an internal Salesforce limit unrelated to the external service callouts, so it doesn’t impact this integration’s scalability.
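
To make consideration A concrete, here is a hedged Apex sketch that sets an explicit callout timeout sized from the service's worst-case historical response time; the Named Credential name (ATP_Service) and the 10-second budget are illustrative assumptions only.

```apex
// Minimal sketch, assuming a Named Credential called ATP_Service that
// points at the "available to promise" endpoint.
public with sharing class AvailabilityService {
    public static HttpResponse checkAvailability(String productSku, Integer quantity) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:ATP_Service/availability');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{
            'sku' => productSku,
            'quantity' => quantity
        }));
        // Size the timeout from worst-case historical response times so a slow
        // service degrades gracefully instead of tying up the checkout flow.
        req.setTimeout(10000); // milliseconds; Apex allows up to 120,000
        return new Http().send(req);
    }
}
```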

References:
Salesforce Help: Callout Limits and Best Practices – Explains considerations for synchronous callouts and response times.
Salesforce Help: Integration Patterns and Practices – Covers scalability for concurrent callouts in integrations.
Trailhead Module: Apex Integration Services – Discusses building scalable integrations with external services.

Northern Trail Outfitters has recently experienced intermittent network outages in its call center. When network service resumes, Sales representatives have inadvertently created duplicate orders in the manufacturing system because the order was placed but the return acknowledgement was lost during the outage. Which solution should an architect recommend to avoid duplicate order booking?



A. Use Outbound Messaging to ensure manufacturing acknowledges receipt of order.


B. Use scheduled Apex to query the manufacturing system for potential duplicate or missing orders.


C. Implement idempotent design and have Sales Representatives retry order(s) in question.


D. Have scheduled Apex resubmit orders that do not have a successful response.





C.
  Implement idempotent design and have Sales Representatives retry order(s) in question.

Explanation:

The scenario describes a classic problem in distributed systems: ensuring exactly-once processing of a message (in this case, an order) when network failures can cause acknowledgements to be lost. The core issue is that from the perspective of the Salesforce call center, an order was sent but it's unknown if the manufacturing system received and processed it. Retrying the order could lead to a duplicate. Let's analyze the options:

A. Use Outbound Messaging to ensure manufacturing acknowledges receipt of order.
This option does not solve the problem; it is the mechanism that is currently failing. Outbound Messaging is likely the technology already being used to send the orders to the manufacturing system. The problem states that the "return acknowledgement was lost during the outage." Using the same unreliable channel for the acknowledgement does not make the process more resilient. The solution needs to handle the failure of this mechanism, not rely on it working perfectly.

B. Use scheduled Apex to query the manufacturing system for potential duplicate or missing orders.
While this could eventually identify and help clean up duplicates, it is a reactive and complex solution. It requires building a separate polling mechanism, managing reconciliation logic, and handling cleanup after the fact. It does not prevent the duplicates from being created in the first place, which is the architect's goal. This adds operational overhead instead of designing a robust integration.

C. Implement idempotent design and have Sales Representatives retry order(s) in question.
This is the correct solution. An idempotent API or integration means that performing the same operation multiple times has the same effect as performing it once. In this context, the manufacturing system's order booking endpoint should be designed to be idempotent.

How it works: When Salesforce sends an order, it includes a unique identifier (e.g., a unique Order ID from Salesforce). The manufacturing system checks if it has already processed an order with that unique ID.

→ If not, it processes the order and records the ID.
→ If it has, it ignores the new request and simply re-sends the acknowledgement for the original order.

Benefit: This design allows the Sales Representative (or an automated process) to safely retry any order whose status is uncertain after a network outage. The manufacturing system will ensure that each unique order is only booked once, completely eliminating duplicates. This directly solves the stated problem.

D. Have scheduled Apex resubmit orders that do not have a successful response.
This is a dangerous option that would exacerbate the problem. Without an idempotent receiver, blindly resubmitting orders is the very action that creates duplicates. The manufacturing system would have no way of knowing that the new message is a retry and not a brand new, separate order. This automated retry would systematically create duplicates for every order impacted by an outage.

Why C is Correct:
Idempotent design is the standard, robust pattern for handling exactly this type of failure scenario in integrations. It moves the responsibility for duplicate prevention to the receiving system (the manufacturing system), which is the only component that can definitively determine if a request has already been processed. This allows the sender (Salesforce) to retry requests safely without any risk of creating duplicates.
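
As one hedged illustration of the sender side, the Apex sketch below passes the Salesforce Order Id as an idempotency key; the callout:Manufacturing_API Named Credential and the Idempotency-Key header name are assumptions, and the real contract would be agreed with the manufacturing system.

```apex
// Minimal sketch, assuming a Named Credential called Manufacturing_API.
// The receiving system is expected to store the Idempotency-Key and return
// the original acknowledgement if it sees the same key again.
public with sharing class OrderBookingService {
    public static HttpResponse bookOrder(Order ord, String payloadJson) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Manufacturing_API/orders');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        // The Salesforce Order Id is globally unique, so it works as the
        // idempotency key: a retry after a lost acknowledgement is safe.
        req.setHeader('Idempotency-Key', (String) ord.Id);
        req.setBody(payloadJson);
        return new Http().send(req);
    }
}
```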

Why not A, B, and D:
A. relies on the unreliable channel.
B. is a complex, post-hoc cleanup operation.
D. actively causes the problem it's trying to solve.

References:
Integration Patterns: The concept of idempotency is a cornerstone of reliable messaging and integration architecture.
Salesforce Architect Resources: Integration Architecture & Design Patterns modules often emphasize the need for idempotent receivers when dealing with potential duplicate messages from platforms like Salesforce.

An integration architect needs to build a solution that will use the Streaming API, but data loss should be minimized, even when the client re-connects only every couple of days. Which two types of Streaming API events should be considered?
(Choose 2 answers)



A. Generic Events


B. Change Data Capture Events


C. PushTopic Events


D. High Volume Platform Events





B.
  Change Data Capture Events

D.
  High Volume Platform Events

Explanation:

The scenario requires a Streaming API solution that minimizes data loss, even when a client disconnects and reconnects "every couple of days." The key to solving this problem is understanding the event retention policies of the different Streaming API event types. Events are temporarily stored on the Salesforce event bus, and the length of time they are retained determines how long a client can be disconnected and still retrieve missed events upon reconnecting.

B. Change Data Capture Events
This is correct. Change Data Capture (CDC) events are a modern streaming technology used to track record changes in Salesforce. A key feature of CDC is that change events are stored on the event bus for a specific retention period. According to Salesforce documentation, change events are stored for three days. This retention period allows a disconnected client to reconnect within a 72-hour window and retrieve all events it missed using a replay ID, thus minimizing data loss. This makes CDC a perfect fit for a client that reconnects "every couple of days."

D. High Volume Platform Events
This is also correct. High Volume Platform Events are a powerful, scalable event type designed for custom events. Like CDC, they have a durable streaming capability with a significant event retention period. High Volume Platform Events are retained for three days, allowing subscribers to retrieve events published during a disconnection period. This matches the requirement of a client that might reconnect after a few days, ensuring no data loss.

Why A and C are Incorrect?

A. Generic Events: This is incorrect. Generic events are a legacy product with very limited event retention. They are not tied to Salesforce record changes and are primarily used for broadcasting custom messages. Their event retention is only 24 hours, which is insufficient to ensure no data loss for a client that reconnects "every couple of days."

C. PushTopic Events: This is incorrect. PushTopic events are an older, legacy Streaming API technology that publishes notifications for Salesforce record changes based on a SOQL query. A major limitation of PushTopics is their event retention, which is also 24 hours. This short retention window makes them a poor choice for a client that needs to retrieve events after being disconnected for more than a day. Salesforce recommends using Change Data Capture events as a replacement for PushTopics.

References:
Salesforce Help: Streaming API Developer Guide: Message Durability — Explains the event retention policies for different Streaming API types.
Salesforce Help: Change Data Capture Developer Guide: Change Event Storage and Delivery — Confirms the three-day retention period for Change Data Capture events.
Salesforce Help: Platform Events Developer Guide: Platform Event Allocations — Provides details on the retention period for High Volume Platform Events.

An Integration Developer is developing an HR synchronization app for a client. The app synchronizes Salesforce record data changes with an HR system that's external to Salesforce. What should the integration architect recommend to ensure notifications are stored for up to three days if data replication fails?



A. Change Data Capture


B. Generic Events


C. Platform Events


D. Callouts





A.
  Change Data Capture

Explanation:

The scenario is about an HR synchronization app that needs to send record data changes from Salesforce to an external HR system. The critical requirement is:
If the external replication fails, notifications must be stored for up to three days.

This means the solution must be able to:
1. Capture Salesforce data changes automatically.
2. Keep undelivered notifications available for replay for up to three days.

Option Analysis:

A. Change Data Capture (CDC)
CDC publishes events when Salesforce record data changes (create, update, delete, undelete).
These events are stored for up to 72 hours (3 days) in the event bus for subscribers to replay if a failure occurs.
Perfectly fits the requirement: external system can reconnect and replay missed events.
✅ Correct.

B. Generic Events
Generic events are custom events published by applications, but they do not directly track Salesforce record changes.
While generic events also support replay, they are retained for only 24 hours, and the developer would have to manually publish them for every data change, duplicating what CDC already provides out of the box.
❌ Not optimal for this scenario.

C. Platform Events
Platform Events are custom event messages, similar to Generic Events.
They are great for event-driven architecture but are not automatically tied to record changes.
You’d need to write triggers/flows to publish events for HR synchronization, adding overhead.
❌ Not the best fit when CDC already provides built-in record change events.

D. Callouts
Callouts are how Salesforce makes HTTP requests to external systems.
They do not store failed notifications; if the callout fails, the message is lost unless custom retry logic is built.
❌ Incorrect for a guaranteed replay mechanism.

Why A?
Change Data Capture was built exactly for this scenario: synchronizing Salesforce data changes with external systems.
It ensures reliable delivery with 3-day replay capability if the subscriber (HR system) is down or data replication fails.

Why not B, C, D?
➡️ B and C: require custom publishing logic, adding unnecessary overhead.
➡️ D: provides no guaranteed replay or retention of failed events.
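
For reference, a hedged sketch of consuming CDC events inside Salesforce with an Apex change event trigger is shown below, assuming Change Data Capture is enabled for Contact; the external HR subscriber in this scenario would instead subscribe over CometD or the Pub/Sub API and supply a replay ID to catch up on the 3-day event window after an outage.

```apex
// Minimal sketch, assuming Change Data Capture is enabled for Contact in
// Setup. Change event triggers run only in the "after insert" context.
trigger ContactChangeTrigger on ContactChangeEvent (after insert) {
    for (ContactChangeEvent event : Trigger.new) {
        EventBus.ChangeEventHeader header = event.ChangeEventHeader;
        // changeType is CREATE, UPDATE, DELETE, or UNDELETE;
        // recordIds lists the affected Contact records.
        System.debug(header.changeType + ' on ' + header.recordIds);
        // A real implementation might enqueue a Queueable job here to call
        // out to the HR system with the changed field values.
    }
}
```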

References:
Salesforce Help: Change Data Capture Overview
Trailhead: Change Data Capture Basics
Salesforce Docs: Event Retention Window

✅ Final Answer: A. Change Data Capture

Northern Trail Outfitters needs to send order and line items directly to an existing finance application web service when an order is fulfilled. It is critical that each order reach the finance application exactly once for accurate invoicing. What solution should an architect propose?



A. Trigger invokes Queueable Apex method, with custom error handling process.


B. Trigger makes @future Apex method, with custom error handling process.


C. Button press invokes synchronous callout, with user handling retries in case of error


D. Outbound Messaging, which will automatically handle error retries to the service.





D.
  Outbound Messaging, which will automatically handle error retries to the service.

Explanation:

✅ Correct Answer: D. Outbound Messaging, which will automatically handle error retries to the service
Outbound Messaging (OM) is a declarative feature in Salesforce that:
→ Sends a SOAP message to an external web service when a record changes (like an order fulfillment).
→ Provides guaranteed delivery — it keeps retrying until the external system acknowledges receipt with a proper SOAP response.
→ Retries follow an exponential backoff schedule for 24 hours.
→ Each notification carries a unique ID, so the receiving service can detect and discard duplicates, giving reliable once-only processing when the receiver is designed correctly.
Since the scenario requires each order to be delivered exactly once for financial accuracy, OM is the best fit. Salesforce handles retries automatically, which reduces the risk of developer error and makes the integration more robust.

❌ Why not the others?

A. Trigger invokes Queueable Apex with error handling
Queueable Apex allows async processing and custom retries, but you would need to build retry logic yourself.
Risk: duplicate calls or missed retries if not carefully coded.
More complex than necessary.

B. Trigger makes @future Apex method with error handling
Similar issue: @future does not guarantee retries on failure.
No built-in retry mechanism.
Not reliable enough for financial transactions.

C. Button press invokes synchronous callout, with user retries
Relies on the user manually retrying if an error occurs.
Not reliable or scalable for “exactly once” delivery.
Human error could lead to duplicate invoices.

📖 Salesforce Reference:
Salesforce Help: Outbound Messaging
Key point: Outbound Messaging ensures reliable delivery with retries until acknowledgment, which matches the requirement.

✨ Final Answer: D. Outbound Messaging, which will automatically handle error retries to the service.

A US business-to-consumer (B2C) company is planning to expand to Latin America. They project an initial Latin American customer base of about one million, and a growth rate of around 10% every year for the next 5 years. They anticipate privacy and data protection requirements similar to those in the European Union to come into effect during this time. Their initial analysis indicates that key personal data is stored in the following systems:
1. Legacy mainframe systems that have remained untouched for years and are due to be decommissioned.
2. Salesforce Commerce Cloud, Service Cloud, Marketing Cloud, and Community Cloud.
The company's CIO has tasked the integration architect with ensuring that they can completely delete their Latin American customers' personal data on demand.
Which three requirements should the integration architect consider?
(Choose 3 answers)



A. Manual steps and procedures that may be necessary.


B. Impact of deleted records on system functionality.


C. Ability to delete personal data in every system.


D. Feasibility to restore deleted records when needed.


E. Ability to provide a 360-degree view of the customer.





A.
  Manual steps and procedures that may be necessary.

B.
  Impact of deleted records on system functionality.

C.
  Ability to delete personal data in every system.

Explanation:

✅ A. Manual steps and procedures that may be necessary.
Why this matters:
Some systems, especially the legacy mainframe systems mentioned, might not have automated ways to delete data. These old systems may require manual processes, like running specific scripts or accessing the database directly. The integration architect needs to plan for these manual steps to ensure compliance with data deletion requests, as required by privacy laws like GDPR. For example, if a customer asks to be “forgotten,” the architect must ensure there’s a process to remove their data even from systems that don’t support automatic deletion.

Reference:
Salesforce documentation on data privacy emphasizes the need to comply with regulations like GDPR, which includes the “right to erasure.” Manual processes may be needed for non-Salesforce systems (Salesforce Trailhead: Data Protection and Privacy).

✅ B. Impact of deleted records on system functionality.
Why this matters: Deleting a customer’s personal data could affect how systems work. For example, in Salesforce Service Cloud, deleting a customer’s contact record might break links to case histories or affect reporting in Marketing Cloud. In the legacy mainframe, removing data might cause errors if other systems rely on it. The architect needs to understand these impacts to avoid disrupting business operations while meeting deletion requirements.

Example:
If a customer’s order history is deleted from Commerce Cloud, it might affect analytics or customer service processes.

Reference:
Salesforce’s Data Management documentation highlights the importance of understanding record relationships and dependencies before deletion (Salesforce Help: Data Deletion Considerations).

✅ C. Ability to delete personal data in every system.
Why this matters:
Privacy laws, like those similar to GDPR, require that all personal data about a customer can be deleted upon request. The architect must ensure that every system—legacy mainframe, Commerce Cloud, Service Cloud, Marketing Cloud, and Community Cloud—can delete personal data completely. This might be challenging, especially for legacy systems that weren’t designed with modern privacy laws in mind, or for Salesforce clouds where data is spread across multiple objects.

Example:
In Marketing Cloud, personal data might exist in data extensions, and the architect needs to ensure all instances are removed.

Reference:
Salesforce’s GDPR compliance guide stresses the need for comprehensive data deletion across all systems holding personal data (Salesforce GDPR Resources).

❌ Why Not the Other Options?

❌ D. Feasibility to restore deleted records when needed.
Why this is not a priority:
Privacy laws like GDPR focus on the permanent deletion of data when requested, not restoring it. Restoring deleted data could even violate compliance if it’s done without customer consent. While some businesses might want to recover data for operational reasons, this isn’t a key requirement for the architect in the context of privacy-driven deletion requests.

Example:
If a customer requests deletion, restoring their data later could breach GDPR-like regulations unless they explicitly agree.

❌ E. Ability to provide a 360-degree view of the customer.
Why this is not relevant:
A 360-degree view of the customer is about combining data to understand customer interactions across systems, which is useful for marketing or service but not directly related to deleting personal data. While it might help identify where customer data exists, it’s not a requirement for ensuring data deletion.

Example:
A 360-degree view might show a customer’s purchase history, but the focus here is on deleting that data, not viewing it.

Summary:
The integration architect needs to focus on manual steps (A) for systems like the legacy mainframe, the impact of deletion on functionality (B) to avoid breaking systems, and the ability to delete data in every system (C) to comply with privacy laws. These three requirements ensure the company can meet data deletion demands while maintaining system stability.

An Enterprise Customer is planning to implement Salesforce to support case management. Below is their current system landscape diagram. Considering Salesforce capabilities, what should the Integration Architect evaluate when integrating Salesforce with the current system landscape?



A. Integrating Salesforce with Order Management System, Email Management System and Case Management System.


B. Integrating Salesforce with Order Management System, Data Warehouse and Case Management System.


C. Integrating Salesforce with Data Warehouse, Order Management and Email Management System.


D. Integrating Salesforce with Email Management System, Order Management System and Case Management System.





D.
  Integrating Salesforce with Email Management System, Order Management System and Case Management System.

Explanation:

The key to this question lies in the business objective: "to implement Salesforce to support case management." An Integration Architect must evaluate systems that will be directly involved in the end-to-end case management process to provide a unified agent experience and a complete customer view.
Salesforce Service Cloud is a full-featured Case Management system. Therefore, integrating it with the existing Case Management System would be redundant and create data duplication, conflicting processes, and a poor agent experience. The architect's goal is to consolidate case management into Salesforce, not to integrate two parallel case systems.
Here’s why the systems in option D are the correct ones to evaluate for integration:

1. Email Management System: This is a critical integration. Cases are often created from customer emails. Salesforce must integrate with the existing email system to:
→ Ingest emails and automatically create cases in Salesforce.
→ Send outbound emails from within Salesforce (e.g., agent responses, notifications) using the corporate email system.
→ Track email threads and attachments associated with a case record.
Without this integration, the case management process would be siloed and inefficient.

2. Order Management System: To effectively support customers, service agents need context. A common reason for a customer to open a case is to inquire about an order (e.g., status, return, problem). Integrating Salesforce with the Order Management System allows agents to:
→ View order history, status, and details directly on the case layout in Salesforce.
→ Initiate processes like returns or exchanges directly from the case.
This integration is essential for providing fast, informed, and effective customer service.

3. Data Warehouse (Not a primary integration for case management): While a Data Warehouse is important for analytics and historical reporting, it is not part of the real-time, operational flow of case management. Pushing data to the warehouse is typically a separate, asynchronous process (e.g., nightly ETL jobs) and is not required for the core functionality of creating, updating, and resolving cases. Therefore, it is a lower priority for this specific evaluation.

Why the other options are incorrect:

A. Integrating Salesforce with Order Management System, Email Management System and Case Management System: This is incorrect because it includes integrating with the existing Case Management System. Since Salesforce is the new case management system, integrating with the old one suggests a co-existence strategy, which is architecturally unsound for this scenario. The goal should be to decommission the old system, not integrate with it.

B. Integrating Salesforce with Order Management System, Data Warehouse and Case Management System: This is incorrect for two reasons. It includes the redundant Case Management System and prioritizes the Data Warehouse over the more critical Email Management System. Email is a direct channel for case creation, while the data warehouse is for reporting.

C. Integrating Salesforce with Data Warehouse, Order Management and Email Management System: This option correctly includes Email and Order Management, but it swaps the existing Case Management System for the Data Warehouse. Even though Salesforce will replace the legacy case system, the architect still needs to evaluate it for data migration and decommissioning, which makes it a more relevant consideration than the Data Warehouse. Option D's list is therefore the most directly relevant to the operational process.

Key Architectural Principle:
An Integration Architect must first identify systems of record and systems of engagement. For this project:
→ Salesforce is becoming the System of Engagement for service agents and the System of Record for Cases.
→ The Order Management System remains the System of Record for orders.
→ The Email System is a System of Engagement for communication.
The integration strategy focuses on bringing the data from these systems of record into the system of engagement to empower agents.

Reference:
Salesforce Integration Architecture Guidelines: The evaluation focuses on "key master data" and "operational systems" that are part of the business process being implemented in Salesforce.
Trailhead Module: "Define Your Integration Strategy" emphasizes understanding the business process (Case Management) and identifying which systems hold the data needed to support that process.

Which two requirements should Salesforce Community Cloud support for self-registration and SSO?
Choose 2 answers



A. SAML SSO and Registration Handler


B. OpenID Connect Authentication Provider and Registration Handler


C. SAML SSO and just-in-time provisioning


D. OpenID Connect Authentication Provider and just-in-time provisioning





B.
  OpenID Connect Authentication Provider and Registration Handler

C.
  SAML SSO and just-in-time provisioning



Explanation:

1. SAML SSO and Just-in-Time Provisioning
SAML (Security Assertion Markup Language) is a standard for exchanging authentication and authorization data between an identity provider (IdP) and a service provider (SP).
➡️ SSO (Single Sign-On): It allows users to log in to one application (the IdP) and then access other applications (the SP, in this case, Salesforce Community Cloud) without needing to re-enter their credentials.
➡️ Just-in-Time (JIT) Provisioning: This is a method of user provisioning that works with SAML SSO. Instead of pre-creating user accounts, a user record is automatically created in Salesforce the first time a user logs in via SAML, using the attributes from the SAML assertion. This satisfies the self-registration requirement.

2. OpenID Connect Authentication Provider and Registration Handler
OpenID Connect (OIDC) is an identity layer built on top of the OAuth 2.0 framework. It is often used for social logins.
➡️ Authentication Provider: Salesforce can act as a service provider and use an external identity provider (like Google, Facebook, or a custom OIDC provider) for authentication.
➡️ Registration Handler: When a user logs in for the first time via an OIDC provider, Salesforce uses a custom Apex Registration Handler class. This handler can be configured to either create a new user account (self-registration) or link to an existing one. This provides a flexible way to handle the user provisioning process and meets the self-registration requirement.
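
For illustration, here is a minimal, hedged Apex sketch of an Auth.RegistrationHandler for an OpenID Connect Authentication Provider; the profile name, alias, and username suffix are placeholder assumptions, and a production handler for a community would also associate the new user with a Contact and Account.

```apex
// Minimal sketch of a registration handler for an OpenID Connect
// Auth. Provider. Profile name, alias, and username suffix are illustrative.
global class CommunityRegistrationHandler implements Auth.RegistrationHandler {

    global User createUser(Id portalId, Auth.UserData data) {
        // Called on first login: self-register the external user.
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Customer Community User' LIMIT 1];
        return new User(
            FirstName = data.firstName,
            LastName  = data.lastName,
            Email     = data.email,
            Username  = data.email + '.community',
            Alias     = 'extuser',
            ProfileId = p.Id,
            EmailEncodingKey  = 'UTF-8',
            LanguageLocaleKey = 'en_US',
            LocaleSidKey      = 'en_US',
            TimeZoneSidKey    = 'America/Los_Angeles'
        );
    }

    global void updateUser(Id userId, Id portalId, Auth.UserData data) {
        // Called on subsequent logins: keep the existing user in sync.
        User u = new User(Id = userId, Email = data.email);
        update u;
    }
}
```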

Incorrect Answers:

A. SAML SSO and Registration Handler: SAML typically uses Just-in-Time provisioning for user creation, not a separate Apex Registration Handler. The Registration Handler is specifically for authentication providers like OpenID Connect and OAuth.

D. OpenID Connect Authentication Provider and Just-in-Time provisioning: OpenID Connect uses a Registration Handler for provisioning, not the "just-in-time provisioning" feature that is natively associated with SAML.

Universal Containers is a global financial company that sells financial products and services. There is a daily scheduled Batch Apex job that generates invoices from a given set of orders. UC requested building a resilient integration for this Batch Apex job in case the invoice generation fails. What should an integration architect recommend to fulfill the requirement?



A. Build Batch Retry & Error Handling in the Batch Apex Job itself.


B. Batch Retry & Error Handling report to monitor the error handling.


C. Build Batch Retry & Error Handling using BatchApexErrorEvent.


D. Build Batch Retry & Error Handling in the middleware.





C.
  Build Batch Retry & Error Handling using BatchApexErrorEvent.



Explanation:

✅ Correct Answer: C. Build Batch Retry & Error Handling using BatchApexErrorEvent
Salesforce introduced the BatchApexErrorEvent platform event (from Winter ’19) specifically for error handling in batch jobs.
→ If a batch execution fails with an unhandled exception (and the batch class implements the Database.RaisesPlatformEvents interface), Salesforce automatically publishes a BatchApexErrorEvent.
→ This event captures details such as the async job ID, the records in scope, the phase, and the exception message.
→ Developers can subscribe to this event (via a trigger or platform event subscriber) to take actions such as:
⇒ Retrying the failed records
⇒ Sending alerts
⇒ Logging to monitoring systems
This makes the integration resilient because failures are detected automatically and recovery can be automated, instead of silently failing.
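
A minimal, hedged sketch of the pattern is shown below; the batch class and query are simplified placeholders, and the trigger only logs the failure where a real implementation might re-enqueue the affected records or send alerts.

```apex
// The batch job opts in to error events by implementing
// Database.RaisesPlatformEvents (invoice logic omitted for brevity).
public with sharing class InvoiceGenerationBatch
        implements Database.Batchable<SObject>, Database.RaisesPlatformEvents {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Order');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Invoice generation logic goes here; an unhandled exception in this
        // method causes Salesforce to publish a BatchApexErrorEvent.
    }

    public void finish(Database.BatchableContext bc) {}
}
```

And a trigger that subscribes to the error event:

```apex
// Subscriber: react to batch failures (log, alert, or retry).
trigger OnBatchApexError on BatchApexErrorEvent (after insert) {
    for (BatchApexErrorEvent evt : Trigger.new) {
        System.debug('Batch job ' + evt.AsyncApexJobId + ' failed in phase '
            + evt.Phase + ': ' + evt.Message);
        // evt.JobScope holds the record Ids in the failed batch, which a
        // custom retry routine could re-process.
    }
}
```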

❌ Why not the others?

A. Build Batch Retry & Error Handling in the Batch Apex Job itself
You could build custom try/catch and retry logic, but it’s manual and error-prone.
Doesn’t leverage Salesforce’s native event-driven failure handling.

B. Batch Retry & Error Handling report
A report only gives visibility.
It does not provide actual resilience or retry logic.

D. Build Batch Retry & Error Handling in the middleware
Middleware can handle retries if the integration call fails.
But here the failure is in the Batch Apex job itself inside Salesforce, so middleware won’t help with resilience at the Salesforce side.

📖 Salesforce Reference:
Salesforce Docs: Handle Batch Apex Errors with BatchApexErrorEvent

✨ Final Answer: C. Build Batch Retry & Error Handling using BatchApexErrorEvent
