Total 106 Questions
Last Updated On: 2-Jun-2025
Preparing with the Integration-Architect practice test is essential to success on the exam. This Salesforce SP25 practice test lets you familiarize yourself with the Integration-Architect question format and identify your strengths and weaknesses. Thorough practice maximizes your chances of passing the Salesforce Spring '25 release certification exam on your first attempt. Surveys across platforms and user-reported pass rates suggest that practice-exam users are roughly 30-40% more likely to pass.
An Architect is asked to build a solution that allows a service to access Salesforce through the API. What is the first thing the Architect should do?
A.
Create a new user with System Administrator profile.
B.
Authenticate the integration using existing Single Sign-On.
C.
Authenticate the integration using existing Network-Based Security.
D.
Create a special user solely for integration purposes.
Create a special user solely for integration purposes.
Explanation:
The architect should first create a dedicated integration user (D) rather than using an existing admin user (A) or relying on SSO (B) or network security (C). A dedicated integration user follows the principle of least privilege, ensuring the service has only the necessary permissions. This approach improves security (reduced attack surface), auditability (clear separation of integration activities), and stability (avoids disruptions from credential changes). While SSO or network-based authentication might supplement this, they aren't substitutes for a properly scoped integration user. Salesforce best practices explicitly recommend dedicated integration users for API access to avoid coupling integrations with human user accounts.
A company's cloud-based single-page application consolidates data local to the application with data from on-premises and third-party systems. The diagram below typifies the application's combined use of synchronous and asynchronous calls. The company wants to use the average response time of its application's user interface as a basis for certain alerts. For this purpose, the following occurs:
1. Log every call's start and finish date and time to a central analytics data store.
2. Compute response time uniformly as the difference between the start and finish date and time — A to H in the diagram.
Which computation represents the end-to-end response time from the user's perspective?
A.
Sum of A to H
B.
Sum of A to F
C.
Sum of A, G, and H
D.
Sum of A and H
Sum of A and H
Explanation:
The user-perceived response time is the delta between the initial request (A) and the final UI update (H). Steps B–G represent backend asynchronous processes (e.g., parallel API calls to on-premise/3rd-party systems) that don't block the UI. While these steps contribute to data freshness, they don't affect the user's perception of responsiveness. The diagram implies A and H are the only synchronous touchpoints from the user's perspective. This aligns with frontend performance monitoring principles, where "time to first render" (A) and "time to final interaction" (H) are critical metrics.
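A minimal sketch of the computation, assuming the analytics store yields one duration per segment A through H (the segment durations in milliseconds below are invented for illustration). Only A (the initial synchronous request) and H (the final UI update) block the user, so only they contribute to the perceived response time:

```python
# Illustrative sketch: computing user-perceived response time from logged
# call segments A-H. Durations (ms) are hypothetical example values.

def user_perceived_ms(segments: dict[str, float]) -> float:
    """Sum only the synchronous, user-facing segments (A and H)."""
    return segments["A"] + segments["H"]

def backend_total_ms(segments: dict[str, float]) -> float:
    """Sum of the asynchronous backend segments (B-G), which don't block the UI."""
    return sum(v for k, v in segments.items() if k in "BCDEFG")

durations = {"A": 120.0, "B": 300.0, "C": 250.0, "D": 400.0,
             "E": 180.0, "F": 220.0, "G": 90.0, "H": 60.0}

print(user_perceived_ms(durations))  # 180.0 (A + H)
print(backend_total_ms(durations))   # 1440.0 (runs in the background)
```

Note how summing all of A to H (option A) would heavily overstate what the user actually waits for.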
Northern Trail Outfitters (NTO) uses Salesforce to track leads and opportunities and to capture order details. However, Salesforce isn't the system that holds or processes orders. After the order details are captured in Salesforce, an order must be created in the remote system, which manages the order's lifecycle. The Integration Architect for the project is recommending that the remote system subscribe to a platform event defined in Salesforce. Which integration pattern should be used for this business use case?
A.
Remote Call In
B.
Request and Reply
C.
Fire and Forget
D.
Batch Data Synchronization
Fire and Forget
Explanation:
When Salesforce publishes a Platform Event and a remote system subscribes to it, the communication follows a Fire and Forget pattern. Salesforce emits the event without waiting for a response from the subscriber. This decouples systems and supports scalability, but also means there's no delivery guarantee or acknowledgment within the platform. It's suitable for event-driven architectures where real-time responsiveness is desired, and the receiving system is responsible for error handling and retries.
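The essence of the pattern can be sketched with a minimal in-memory event bus (plain Python, not Salesforce APIs; the event name and payload are invented). The key property is that `publish` returns nothing to the publisher: no acknowledgment, no delivery guarantee, and subscriber errors stay on the subscriber's side:

```python
# Minimal Fire and Forget sketch: the publisher emits an event and moves on;
# subscribers own their error handling and retries. Not actual Salesforce code.
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, event_name: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(event_name, []).append(handler)

    def publish(self, event_name: str, payload: dict) -> None:
        # Fire and forget: deliver to current listeners, swallow handler
        # errors, return nothing to the publisher.
        for handler in self._subscribers.get(event_name, []):
            try:
                handler(payload)
            except Exception:
                pass  # the subscriber, not the publisher, is responsible for retries

bus = EventBus()
received = []
bus.subscribe("Order_Submitted__e", lambda p: received.append(p["order_id"]))
bus.publish("Order_Submitted__e", {"order_id": "801XX0000001"})
print(received)  # ['801XX0000001']
```

This decoupling is exactly why the pattern scales well: the publisher never blocks on, or even knows about, the subscribing order system.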
Northern Trail Outfitters (NTO) uses different shipping services for each of the 34 countries it serves. Services are added and removed frequently to optimize shipping times and costs. Sales Representatives serve all NTO customers globally and need to select between valid service(s) for the customer's country and request shipping estimates from that service. Which two solutions should an architect propose?
Choose 2 answers
A.
Use Platform Events to construct and publish shipper-specific events.
B.
Invoke middleware service to retrieve valid shipping methods.
C.
Use middleware to abstract the call to the specific shipping services.
D.
Store shipping services in a picklist that is dependent on a country picklist.
Invoke middleware service to retrieve valid shipping methods.
Use middleware to abstract the call to the specific shipping services.
Explanation:
Since services vary frequently across countries, hardcoding options (like picklists) isn't scalable. Middleware offers a flexible, centralized abstraction layer that hides the complexity of integrating with multiple shipping providers. It can dynamically return the available options based on country and invoke appropriate services without requiring Salesforce to manage service-specific logic. Platform Events are not suited for this synchronous UI-based interaction, and picklists lack the dynamism needed.
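A hypothetical sketch of what the middleware abstraction layer provides: Salesforce calls one stable interface, and the middleware resolves which country-specific services apply and how to get an estimate from each. All country codes, service names, and rate formulas below are invented for illustration:

```python
# Hypothetical middleware abstraction for shipping services. Adding or removing
# a provider only changes these registries, never the Salesforce side.

SERVICES_BY_COUNTRY = {
    "US": ["FastShip", "EconoPost"],
    "DE": ["EuroParcel"],
}

ESTIMATORS = {
    "FastShip":   lambda weight_kg: 12.0 + 2.5 * weight_kg,
    "EconoPost":  lambda weight_kg: 5.0 + 1.0 * weight_kg,
    "EuroParcel": lambda weight_kg: 8.0 + 1.8 * weight_kg,
}

def valid_services(country: str) -> list[str]:
    """What the 'retrieve valid shipping methods' middleware call returns (B)."""
    return SERVICES_BY_COUNTRY.get(country, [])

def estimate(service: str, weight_kg: float) -> float:
    """The abstracted 'get shipping estimate' call (C): one entry point,
    provider-specific logic hidden behind it."""
    return ESTIMATORS[service](weight_kg)

print(valid_services("US"))        # ['FastShip', 'EconoPost']
print(estimate("EconoPost", 2.0))  # 7.0
```

Contrast this with a dependent picklist: every provider change would require a Salesforce metadata deployment instead of a middleware configuration update.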
A company is planning on sending orders from Salesforce to a fulfillment system. The integration architect has been asked to plan for the integration. Which two questions should the integration architect consider?
Choose 2 answers
A.
Can the fulfillment system create new addresses within the Order Create service?
B.
Can the fulfillment system make a callback into Salesforce?
C.
Can the fulfillment system implement a contract-first Outbound Messaging interface?
D.
Is the product catalog data identical at all times in both systems?
Can the fulfillment system make a callback into Salesforce?
Is the product catalog data identical at all times in both systems?
Explanation:
Callbacks into Salesforce are critical if you plan to use asynchronous communication patterns like Outbound Messaging or Platform Events that require acknowledgment or updates. Meanwhile, ensuring product catalog consistency is essential for order accuracy. If catalogs are misaligned, users might create orders with invalid or outdated items. Questions about address creation and contract-first design are more implementation details and less about initial architectural feasibility.
A developer has been tasked by the integration architect to build a solution based on the Streaming API. The developer has researched the different event implementations in Salesforce (PushTopic events, Change Data Capture, Generic Streaming, Platform Events) but is unsure how to proceed with the implementation. The developer asks the system architect for guidance. What should the architect consider when making the recommendation?
A.
Push Topic Event can define a custom payload.
B.
Change Data Capture does not have record access support.
C.
Change Data Capture can be published from Apex.
D.
Apex triggers can subscribe to Generic Events.
Change Data Capture does not have record access support.
Explanation:
The architect should note that:
→ CDC Limitations: Change Data Capture (B) doesn't support filtering by record access permissions (unlike PushTopics).
→ Platform Events (Not Listed): Would be better for custom event publishing.
PushTopics (A) allow query-based payloads but are legacy. The correct guidance depends on whether row-level security is needed for the streaming data.
A customer imports data from an external system into Salesforce using Bulk API. These jobs have batch sizes of 2,000 and run in parallel mode. Batches fail frequently with the error "Max CPU time exceeded", and a smaller batch size will fix this error. Which two options should be considered when using a smaller batch size? Choose 2 answers
A.
Smaller batch size may cause record-locking errors.
B.
Smaller batch size may increase time required to execute bulk jobs.
C.
Smaller batch size may exceed the concurrent API request limits.
D.
Smaller batch size can trigger "Too many concurrent batches" error.
Smaller batch size may increase time required to execute bulk jobs.
Smaller batch size can trigger "Too many concurrent batches" error.
Explanation:
When using Salesforce Bulk API, large batch sizes can exceed CPU limits during processing—especially if triggers, flows, or validation rules are intensive. Reducing the batch size is a logical mitigation step, as smaller chunks reduce CPU time per execution unit. However, this increases the number of batches needed to complete the job.
More batches mean longer total execution time (B), since each batch must be queued, processed, and possibly retried. Additionally, Salesforce limits how many batches can be processed concurrently; the exact ceiling depends on the org and the processing mode. Exceeding it results in "Too many concurrent batches" errors (D), halting or delaying processing.
While it’s tempting to reduce batch sizes drastically, it’s important to balance performance and limit thresholds. Options A and C are incorrect: smaller batch sizes reduce locking issues, and they don’t inherently violate concurrent API request limits, which are separate from batch execution concurrency.
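The trade-off is easy to quantify. A back-of-the-envelope sketch, with an illustrative record count (the figures are examples, not official Salesforce limits):

```python
# Batch-size trade-off: smaller batches reduce CPU time per batch but
# multiply the number of batches that must be queued and processed.
import math

def batch_count(total_records: int, batch_size: int) -> int:
    """Number of Bulk API batches needed to load total_records."""
    return math.ceil(total_records / batch_size)

records = 1_000_000
print(batch_count(records, 2000))  # 500 batches at the original size
print(batch_count(records, 200))   # 5000 batches: 10x the queuing overhead
```

Dropping the batch size tenfold creates ten times as many batches, which is what lengthens total job time (B) and pushes the org toward its concurrent-batch ceiling (D).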
Northern Trail Outfitters (NTO) has recently changed their Corporate Security Guidelines. The guidelines require that all cloud applications pass through a secure firewall before accessing on-premise resources. NTO is evaluating middleware solutions to integrate cloud applications with on-premise resources and services. What are two considerations an Integration Architect should evaluate before choosing a middleware solution?
Choose 2 answers
A.
The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.
B.
An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.
C.
The middleware solution enforces the OAuth security protocol.
D.
The middleware solution is able to interface directly with databases via an ODBC connection string.
The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.
An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.
Explanation:
When integrating Salesforce (a cloud platform) with on-premise resources, the architect must overcome challenges like firewall restrictions, network security, and data governance. Middleware becomes a bridge, often deployed in a DMZ to allow limited, controlled access from external systems while maintaining a strong internal security posture.
A key requirement is that the middleware can act as a secure API gateway (A)—this enables controlled exposure of internal services to Salesforce or other cloud platforms. The ability to deploy components of the middleware inside the DMZ (B) is critical. It enables routing or proxying of requests while ensuring that no direct access is granted to internal systems.
Options C and D are less critical or incorrect: OAuth (C) is typically used for user authentication, not always for middleware; and direct ODBC connections (D) from Salesforce via middleware are rarely recommended due to security and scalability issues.
Which WSDL should an architect consider when creating an integration that might be used for more than one Salesforce organization with different metadata?
A.
Corporate WSDL
B.
Partner WSDL
C.
SOAP API WSDL
D.
Enterprise WSDL
Partner WSDL
Explanation:
Salesforce offers two main WSDLs for SOAP integrations: Enterprise and Partner. The Enterprise WSDL is strongly typed and tightly coupled with a specific org’s metadata (custom objects, fields, etc.). This means it must be regenerated if metadata changes, and is not portable across orgs.
In contrast, the Partner WSDL is loosely typed and uses a more flexible schema. It represents objects and fields as generic name-value pairs (like sObject and fieldsToNull), which makes it ideal for cross-org integrations where metadata varies or changes frequently.
For ISVs or scenarios where the integration must be reusable across different Salesforce environments (e.g., dev, staging, production, or multiple clients), the Partner WSDL is the better choice. It’s also better suited for dynamic scenarios like schema discovery or integration with systems that don't maintain tight data models.
Thus, Partner WSDL provides maximum flexibility, making it the preferred option when metadata cannot be guaranteed to be identical across orgs.
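A conceptual sketch of the difference (plain Python, not actual SOAP client code): an Enterprise WSDL client generates one concrete class per object, tied to a single org's schema, while a Partner-style client treats every record as a generic sObject, i.e. a type name plus name-value pairs, so the same code works against any org:

```python
# Partner-WSDL-style loose typing, sketched as a generic record builder.
# Object and field names below are examples; Shipment__c is a hypothetical
# custom object that would only exist in some orgs.

def make_sobject(sobject_type: str, fields: dict[str, str]) -> dict:
    """Generic, Partner-style record: nothing is tied to one org's schema."""
    return {"type": sobject_type, "fields": fields}

# The same helper handles a standard object and an org-specific custom object:
acct = make_sobject("Account", {"Name": "Acme"})
custom = make_sobject("Shipment__c", {"Tracking_Number__c": "XZ-42"})
print(acct["type"], custom["type"])  # Account Shipment__c
```

With the Enterprise WSDL, the second call would instead require a generated `Shipment__c` class, forcing a regenerated WSDL for every org whose metadata differs.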
A company's security assessment noted vulnerabilities in the unmanaged packages in its Salesforce orgs, notably secrets that are easily accessible in plain text, such as usernames, passwords, and OAuth tokens used in callouts from Salesforce. Which two persistence mechanisms should an integration architect require to ensure that secrets are protected from deliberate or inadvertent exposure?
Choose 2 answers
A.
Encrypted Custom Fields
B.
Named Credentials
C.
Protected Custom Metadata Types
D.
Protected Custom Settings
Named Credentials
Protected Custom Metadata Types
Explanation:
Salesforce provides multiple mechanisms to store secrets securely and avoid hardcoding sensitive data like OAuth tokens, API keys, and credentials. The best practice is to use Named Credentials (B), which securely store authentication settings (e.g., username/password, OAuth tokens) and abstract them from Apex code. This ensures secrets aren't exposed in code or config and simplifies endpoint management.
Protected Custom Metadata Types (C) allow you to store config data like endpoints, keys, or feature flags. Marking metadata records as "protected" ensures they are not visible outside managed packages, shielding secrets from org admins and preventing accidental exposure.
Encrypted Custom Fields (A) are better for storing secure business data (e.g., SSNs), not integration secrets. Protected Custom Settings (D) are legacy tools and lack the security enforcement of Protected Custom Metadata—making them less safe for storing secrets.
Using Named Credentials and Protected Metadata Types ensures compliance with secure coding practices and reduces risk from accidental disclosure or misuse of sensitive integration data.