Total 106 Questions
Last Updated On: 2-Jun-2025
Preparing with the Integration-Architect practice test is essential to success on the exam. This Salesforce SP25 (Spring 2025 release) practice test lets you familiarize yourself with the Integration-Architect question format and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the certification exam on your first attempt; surveys from different platforms and user-reported pass rates suggest that practice-exam users are roughly 30-40% more likely to pass.
Universal Containers (UC) is currently managing a custom monolithic web service that runs on an on-premise server. This monolithic web service is responsible for Point-to-Point (P2P) integrations between:
1. Salesforce and a legacy billing application
2. Salesforce and a cloud-based Enterprise Resource Planning application
3. Salesforce and a data lake.
UC has found that the tight interdependencies between systems are causing integrations to fail.
What should an architect recommend to decouple the systems and improve performance of the integrations?
A. Re-write and optimize the current web service to be more efficient.
B. Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.
C. Use the Salesforce Bulk API when integrating back into Salesforce.
D. Move the custom monolithic web service from on-premise to a cloud provider.
Answer: B. Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.
Explanation:
A tightly coupled, monolithic web service becomes a single point of failure and performance bottleneck. By decomposing it into a set of independently deployable microservices—each handling one integration use-case—you achieve fault isolation, independent scaling, and shorter development cycles. Each microservice can own a bounded context (e.g., “Billing sync”, “ERP orders”, “Data lake bridge”) and publish events or expose APIs for the rest of the ecosystem. This aligns with modern architecture principles that maximize decoupling, resiliency, and team autonomy while improving performance over a single on-premise endpoint.
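For illustration only, here is a minimal Apex sketch of what one decomposed integration could look like from the Salesforce side: a dedicated billing-sync microservice called through a Named Credential instead of the monolithic on-premise endpoint. The credential name Billing_Service and the /invoices/sync path are assumptions, not part of the scenario.

    // Hypothetical client for one bounded-context microservice ("Billing sync").
    public with sharing class BillingSyncService {
        public static HttpResponse syncInvoice(String orderId) {
            HttpRequest req = new HttpRequest();
            // The Named Credential keeps the endpoint and auth outside the code, per service.
            req.setEndpoint('callout:Billing_Service/invoices/sync');
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(new Map<String, Object>{ 'orderId' => orderId }));
            return new Http().send(req);
        }
    }

Each decomposed service gets its own endpoint, credentials, and deployment lifecycle, which is what delivers the fault isolation and independent scaling described above.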
Northern Trail Outfitters needs to make synchronous callouts to "available to promise" (ATP) services to query product availability and reserve inventory during the customer checkout process. Which two considerations should an integration architect make when building a scalable integration solution?
Choose 2 answers
A. The typical and worst-case historical response times.
B. The number of batch jobs that can run concurrently.
C. How many concurrent service calls are being placed.
D. The maximum number of query cursors open per user on the service.
Answers:
A. The typical and worst-case historical response times.
C. How many concurrent service calls are being placed.
Explanation:
When designing real-time "available to promise" (ATP) callouts, you must dimension both performance and scale against Salesforce's own limits and your external service's SLAs. First, measure the external system's typical and worst-case response times: Apex callouts default to a 10-second timeout and can be raised to a maximum of 120 seconds, and a synchronous callout blocks the user's UI until it returns. Second, track concurrent callout volume: Salesforce allows at most 100 callouts per Apex transaction, and synchronous requests that run longer than 5 seconds of elapsed time count toward the org's limit of 10 concurrent long-running requests. Knowing both metrics lets you decide whether you need middleware, caching layers, or asynchronous patterns to avoid hitting timeouts and concurrency governors under load.
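As a rough sketch (the ATP_Service Named Credential and the endpoint path are assumptions), the callout below sets an explicit timeout sized from the measured worst-case response time; Apex allows values up to 120,000 milliseconds.

    public with sharing class AtpClient {
        public static HttpResponse checkAvailability(String sku, Integer quantity) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:ATP_Service/availability?sku='
                + EncodingUtil.urlEncode(sku, 'UTF-8') + '&qty=' + quantity);
            req.setMethod('GET');
            // 30 s: above the measured worst case, well under the 120 s platform maximum.
            req.setTimeout(30000);
            return new Http().send(req);
        }
    }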
Northern Trail Outfitters has recently experienced intermittent network outages in its call center. When network service resumes, Sales representatives have inadvertently created duplicate orders in the manufacturing system because the order was placed but the return acknowledgement was lost during the outage. Which solution should an architect recommend to avoid duplicate order booking?
A. Use Outbound Messaging to ensure manufacturing acknowledges receipt of order.
B. Use scheduled Apex to query manufacturing system for potential duplicate or missing orders.
C. Implement idempotent design and have Sales Representatives retry order(s) in question.
D. Have scheduled Apex resubmit orders that do not have a successful response.
Answer: C. Implement idempotent design and have Sales Representatives retry order(s) in question.
Explanation:
When a network drop causes a lost acknowledgement, retries can inadvertently create a second order unless the integration is idempotent, meaning a repeated submission has exactly the same effect as a single one. By assigning each order a unique message ID (or idempotency key) that the manufacturing system tracks, repeated submissions with the same key are recognized and ignored. Salesforce integration patterns and REST best-practice guides recommend this idempotent receiver pattern so that retried deliveries never produce duplicate effects. Sales representatives can then simply retry without fear of duplicates, and the design provides robust fault tolerance without custom polling or batch-reconciliation jobs.
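A minimal sketch of the pattern, assuming the external service accepts an Idempotency-Key header (the header name and the Manufacturing_API Named Credential are assumptions): the Salesforce order Id is reused as the key on every retry, so the receiver can detect and discard a duplicate submission after a lost acknowledgement.

    public with sharing class OrderSubmitter {
        public static HttpResponse submitOrder(Order ord) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Manufacturing_API/orders');
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            // The same key is sent on every retry of this order, so the receiver can de-duplicate.
            req.setHeader('Idempotency-Key', String.valueOf(ord.Id));
            req.setBody(JSON.serialize(new Map<String, Object>{
                'orderNumber' => ord.OrderNumber,
                'amount'      => ord.TotalAmount
            }));
            return new Http().send(req);
        }
    }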
An integration architect needs to build a solution using the Streaming API, where data loss must be minimized even when the client reconnects only every couple of days. Which two types of Streaming API events should be considered? Choose 2 answers
A. Generic Events
B. Change Data Capture Events
C. PushTopic Events
D. High Volume Platform Events
Answers:
B. Change Data Capture Events
D. High Volume Platform Events
Explanation:
To minimize data loss across days-long disconnects, you need durable, high-retention channels. Change Data Capture (CDC) and High-Volume Platform Events are both implemented on the High-Volume Streaming API, offering a 72-hour retention window and replay-ID-based durable subscriptions. In contrast, Generic or PushTopic events (standard-volume) expire after 24 hours and have lower throughput. Choosing CDC and high-volume Platform Events ensures that even if a client reconnects infrequently, it can reliably replay missed changes without data loss.
An Integration Developer is developing an HR synchronization app for a client. The app synchronizes Salesforce record data changes with an HR system that's external to Salesforce. What should the integration architect recommend to ensure notifications are stored for up to three days if data replication fails?
A. Change Data Capture
B. Generic Events
C. Platform Events
D. Callouts
Answer: C. Platform Events
Explanation:
Of all event types, High-Volume Platform Events provide the longest built-in retention—up to 72 hours—so subscribers can reconnect within three days and still retrieve missed messages. Change Data Capture events are implemented as high-volume PEs behind the scenes, but the exam choice is explicitly “Platform Events.” Generic and standard-volume events only persist for 24 hours, and Apex callouts can’t buffer for days. By modeling your notifications as high-volume Platform Events, you get guaranteed at-least-once delivery with multi-day replay support.
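A minimal publishing sketch, assuming a hypothetical custom high-volume platform event HR_Change_Notification__e with a Payload__c text field. Once published, high-volume events stay on the event bus for up to 72 hours, so the HR system's subscriber can reconnect within three days and replay anything it missed.

    public with sharing class HrNotificationPublisher {
        public static void publishChange(Id recordId, String changeType) {
            HR_Change_Notification__e evt = new HR_Change_Notification__e(
                Payload__c = JSON.serialize(new Map<String, Object>{
                    'recordId' => recordId, 'changeType' => changeType })
            );
            Database.SaveResult sr = EventBus.publish(evt);
            if (!sr.isSuccess()) {
                // Synchronous publish failure: log it and let the caller decide whether to retry.
                System.debug(LoggingLevel.ERROR, 'Event publish failed: ' + sr.getErrors());
            }
        }
    }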
Northern Trail Outfitters needs to send orders and line items directly to an existing finance application web service when an order is fulfilled. It is critical that each order reach the finance application exactly once for accurate invoicing. What solution should an architect propose?
A. Trigger invokes Queueable Apex method, with custom error handling process.
B. Trigger invokes @future Apex method, with custom error handling process.
C. Button press invokes synchronous callout, with user handling retries in case of error.
D. Outbound Messaging, which will automatically handle error retries to the service.
Answer: D. Outbound Messaging, which will automatically handle error retries to the service.
Explanation:
Salesforce Outbound Messaging delivers a SOAP message to your finance endpoint and automatically retries on failure for up to 24 hours, giving you built-in delivery guarantees without custom Apex. Each message carries its own unique notification ID, and the listener must return a positive acknowledgement (the Ack element in its SOAP response); otherwise Salesforce queues the message and retries until it succeeds or the retry window expires. This "fire-and-forget" pattern offloads retry logic to the platform and yields effectively exactly-once processing, as long as the endpoint uses the notification ID to treat duplicate deliveries idempotently.
A US business-to-consumer (B2C) company is planning to expand to Latin America. They project an initial Latin American customer base of about one million, and a growth rate of around 10% every year for the next 5 years. They anticipate privacy and data protection requirements similar to those in the European Union to come into effect during this time. Their initial analysis indicates that key personal data is stored in the following systems:
1. Legacy mainframe systems that have remained untouched for years and are due to be decommissioned.
Which three requirements should the integration architect consider?
Choose 3 answers
A. Manual steps and procedures that may be necessary.
B. Impact of deleted records on system functionality.
C. Ability to delete personal data in every system.
D. Feasibility to restore deleted records when needed.
E. Ability to provide a 360-degree view of the customer.
Answers:
B. Impact of deleted records on system functionality.
C. Ability to delete personal data in every system.
D. Feasibility to restore deleted records when needed.
Explanation:
Under GDPR‐style data protection laws, “erasure” isn’t just a one-click delete—it requires careful coordination across every data store and backup to ensure compliance and operational continuity. First, you must be able to delete personal data in every system (CRM, Commerce Cloud, ERP, legacy mainframes, backups) so that a deletion request truly removes the subject’s information everywhere. Second, you need to assess the impact of deleted records on system functionality—for example, will orphaned orders, service cases, or analytic summaries break if the customer record is purged? This assessment drives exception handling and fallback designs. Third, you must evaluate the feasibility to restore deleted records to recover from accidental erasures or to comply with other legal holds—this includes designing audit logs or isolated recovery copies that respect data-minimization while still enabling rollback when legitimately needed.
An enterprise customer is planning to implement Salesforce to support case management. Below is their current system landscape diagram. Considering Salesforce capabilities, what should the Integration Architect evaluate when integrating Salesforce with the current system landscape?
A. Integrating Salesforce with Order Management System, Email Management System and Case Management System.
B. Integrating Salesforce with Order Management System, Data Warehouse and Case Management System.
C. Integrating Salesforce with Data Warehouse, Order Management and Email Management System.
D. Integrating Salesforce with Email Management System, Order Management System and Case Management System.
Answer: D. Integrating Salesforce with Email Management System, Order Management System and Case Management System.
Explanation:
When Salesforce becomes the central case management platform, it must exchange data with:
→ Email Management System – to capture inbound customer emails as cases and push outbound responses back into users’ mailboxes.
→ Order Management System – so agents can reference order history, shipment details, and billing context when resolving order-related cases.
→ Existing Case Management System – to migrate legacy case records or synchronize ongoing cases, ensuring seamless continuity and archival access.
Other landscape elements like data warehouses are downstream analytics targets rather than part of transactional case workflows. Order Management and Email are mission-critical for day-to-day support operations, while the legacy Case Management system holds the historical data that agents still need. Choosing these three ensures you address both the operational inputs (emails, orders) and the data migration/synchronization requirements for cases.
Which two requirements should the Salesforce Community Cloud support for self-registration and SSO?
Choose 2 answers
A. SAML SSO and Registration Handler
B. OpenId Connect Authentication Provider and Registration Handler
C. SAML SSO and just-in-time provisioning
D. OpenId Connect Authentication Provider and just-in-time provisioning
Answers:
C. SAML SSO and just-in-time provisioning
D. OpenId Connect Authentication Provider and just-in-time provisioning
Explanation:
To provide instant community access on first login, Salesforce must auto-provision users when they authenticate via SSO.
→ SAML SSO + JIT: You configure a SAML identity provider and enable Just-in-Time provisioning so that Salesforce consumes assertion attributes (e.g., Federation ID) to create the user, contact, and profile in one transaction.
→ OpenID Connect + JIT: You set up an OpenID Connect Authentication Provider in Setup and implement a Registration Handler class (Auth.RegistrationHandler) that Salesforce invokes on login, using the ID token claims to spin up the user record automatically.
Without JIT, you’d force users through a manual registration flow or pre-provisioning process, delaying access and complicating self-registration.
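Below is a skeleton registration handler for the OpenID Connect Authentication Provider path, offered as a sketch only: the profile name and field defaults are assumptions, and a real Experience Cloud user must also be linked to a Contact before it is returned.

    global class CommunityRegistrationHandler implements Auth.RegistrationHandler {
        // Called on first login: build the user from the ID token claims.
        global User createUser(Id portalId, Auth.UserData data) {
            Profile p = [SELECT Id FROM Profile WHERE Name = 'Customer Community User' LIMIT 1];
            User u = new User(
                ProfileId = p.Id,
                Username  = data.email + '.community',
                Email     = data.email,
                FirstName = data.firstName,
                LastName  = data.lastName,
                Alias     = data.lastName.left(5),
                EmailEncodingKey  = 'UTF-8',
                LanguageLocaleKey = 'en_US',
                LocaleSidKey      = 'en_US',
                TimeZoneSidKey    = 'America/Los_Angeles'
            );
            // NOTE: for community (Experience Cloud) users, also create or match an
            // Account/Contact and set u.ContactId here before returning.
            return u;
        }
        // Called on subsequent logins: keep the user record in sync with the IdP claims.
        global void updateUser(Id userId, Id portalId, Auth.UserData data) {
            User u = new User(Id = userId, Email = data.email,
                              FirstName = data.firstName, LastName = data.lastName);
            update u;
        }
    }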
Universal Containers is a global financial company that sells financial products and services. A daily scheduled Batch Apex job generates invoices from a given set of orders. UC has requested a resilient integration design for this Batch Apex job in case invoice generation fails. What should an integration architect recommend to fulfill the requirement?
A. Build Batch Retry & Error Handling in the Batch Apex Job itself.
B. Batch Retry & Error Handling report to monitor the error handling.
C. Build Batch Retry & Error Handling using BatchApexErrorEvent.
D. Build Batch Retry & Error Handling in the middleware.
Answer: C. Build Batch Retry & Error Handling using BatchApexErrorEvent.
Explanation:
Salesforce's BatchApexErrorEvent is a built-in platform event that fires whenever a batch Apex job fails or throws an unhandled exception, provided the batch class implements the Database.RaisesPlatformEvents marker interface. By subscribing to this event, either in Apex (a trigger on the event) or via middleware, you can automatically detect failures in the daily invoice generation and implement retry logic or alerting without embedding complex error handling inside the batch itself. This decouples business logic from the resilience framework and leverages Salesforce's event-driven model. A pure "in-job" retry loop risks hitting governor limits or masking systemic issues, and a simple report can't react in real time. Events give you both visibility and automation around failure recovery for robust, scalable batch processing.
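A minimal sketch of the pieces involved (class and trigger names and the retry strategy are placeholders). The batch class opts in with Database.RaisesPlatformEvents, and a separate trigger file subscribes to BatchApexErrorEvent to log and react to failures.

    // Batch class: implements the marker interface so failures publish BatchApexErrorEvent.
    public with sharing class InvoiceGenerationBatch implements
            Database.Batchable<SObject>, Database.RaisesPlatformEvents {
        public Database.QueryLocator start(Database.BatchableContext bc) {
            return Database.getQueryLocator('SELECT Id FROM Order WHERE Status = \'Activated\'');
        }
        public void execute(Database.BatchableContext bc, List<SObject> scope) {
            // Invoice generation logic goes here (omitted).
        }
        public void finish(Database.BatchableContext bc) {}
    }

    // Separate trigger file: reacts to any unhandled failure of the batch.
    trigger InvoiceBatchErrorTrigger on BatchApexErrorEvent (after insert) {
        for (BatchApexErrorEvent evt : Trigger.new) {
            System.debug(LoggingLevel.ERROR, 'Batch ' + evt.AsyncApexJobId + ' failed in phase '
                + evt.Phase + ': ' + evt.Message + ' | scope: ' + evt.JobScope);
            // Retry or alerting logic (e.g. enqueue a Queueable to reprocess evt.JobScope) goes here.
        }
    }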