Salesforce-Platform-Integration-Architect Practice Test Questions

Total 106 Questions


Last Updated On: 3-Nov-2025 (Spring '25 Release)



Preparing with the Salesforce-Platform-Integration-Architect practice test is essential to ensure success on the exam. This Salesforce Spring '25 (SP25) practice test allows you to familiarize yourself with the Salesforce-Platform-Integration-Architect exam question format and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the Salesforce Spring '25 release certification exam on your first attempt.

Surveys from different platforms and user-reported pass rates suggest Salesforce-Platform-Integration-Architect practice exam users are ~30-40% more likely to pass.



An Architect is asked to build a solution that allows a service to access Salesforce through the API. What is the first thing the Architect should do?



A.

Create a new user with System Administrator profile.


B.

Authenticate the integration using existing Single Sign-On.


C.

Authenticate the integration using existing Network-Based Security.


D.

Create a special user solely for the integration purposes.





D.
  

Create a special user solely for the integration purposes.



Explanation

When an external service needs to access Salesforce via API, the very first step an Integration Architect must take is to create a dedicated integration user. This is a foundational security best practice in Salesforce and is emphasized across official documentation and the Integration Architect exam objectives.

Why D is the correct first step:

A dedicated integration user (e.g., integration.api@company.com) ensures:
Clear ownership and traceability of all API actions in logs and audit trails.
Application of the principle of least privilege — the user gets only the permissions needed (via permission sets or a custom profile), never full admin access.
Isolation of risk — if the integration is compromised, only API access is affected, not a human administrator or shared account.
Support for automation — this user can be used with OAuth 2.0 JWT Bearer Flow, Named Credentials, or Connected Apps without relying on interactive login.
This user will later be associated with a Connected App and used in authentication flows such as JWT or Web Server OAuth.

Salesforce explicitly states:
“Use a dedicated Salesforce user account for each integration. Do not use a user account that belongs to a person.”
— Salesforce Integration Best Practices

Why the other options are incorrect as the first step:

A. Create a new user with System Administrator profile
This violates least privilege and creates a critical security risk. Admin profiles should never be used for integrations — they grant far more access than needed.

B. Authenticate the integration using existing Single Sign-On
SSO (like SAML or OpenID Connect) is designed for interactive human logins, not headless service-to-service API access. Integrations cannot complete SSO login flows without user interaction.

C. Authenticate the integration using existing Network-Based Security
Network-based security (e.g., IP allowlisting) is a supplementary control applied after authentication. It does not authenticate the integration — it only restricts from where a session can originate.

Recommended Next Steps (After Creating the Integration User):

Create a Connected App with appropriate OAuth scopes.
Assign a custom profile or permission set with “API Enabled” and minimal object/field access.
Use Named Credentials or JWT Bearer Flow for secure, passwordless authentication.
Enforce IP restrictions and login hours via profile or session policies.
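
To illustrate the Named Credential step above, here is a minimal Apex sketch of a callout that runs in the dedicated integration context and resolves its endpoint and secret through a Named Credential. The credential name "Fulfillment_API" and the /ping path are placeholders, not part of the exam scenario.

```apex
public with sharing class FulfillmentCalloutExample {
    // Sketch only: assumes a Named Credential called Fulfillment_API has been
    // configured with the remote endpoint and an OAuth/JWT-based credential.
    public static void sendPing() {
        HttpRequest req = new HttpRequest();
        // The callout: prefix tells Salesforce to inject the endpoint and
        // authentication from the Named Credential, so no secrets live in code.
        req.setEndpoint('callout:Fulfillment_API/ping');
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        System.debug('Fulfillment service responded with status ' + res.getStatusCode());
    }
}
```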

References:
Salesforce Help: Integration Security
https://help.salesforce.com/s/articleView?id=sf.security_integration_best_practices.htm&type=5
Architect Journey – Integration Security
https://architect.salesforce.com/design/integration/security
Trailhead: Secure Your Integration
(Module in Integration Architect learning path)

Key Takeaway:
Always begin API integrations by creating a dedicated, non-human, least-privileged user; never jump straight to authentication mechanisms or admin users. This is the first and most critical decision in secure integration design.

A company's cloud-based single-page application consolidates data local to the application with data from on-premises and third-party systems. The diagram below typifies the application's combined use of synchronous and asynchronous calls. The company wants to use the average response time of its application's user interface as a basis for certain alerts. For this purpose, the following occurs:
1. Log every call's start and finish date and time to a central analytics data store.
2. Compute response time uniformly as the difference between the start and finish date and time — A to H in the diagram.
Which computation represents the end-to-end response time from the user's perspective?



A.

Sum of A to H


B.

Sum of A to F


C.

Sum of A, G, and H


D.

Sum of A and H





D.
  

Sum of A and H



Explanation

The question is about measuring the end-to-end response time from the user's perspective. From the user's point of view, the response time is the total time between when they initiate a request (e.g., by clicking a button) and when the user interface (UI) is fully updated and they can interact with it again.

Let's break down the timeline in the diagram:

Point A:
This marks the start of the user's request. It is the moment the user action triggers the initial call from the client-side application.

Points B to G:
These represent various internal, back-end, and third-party processes.

These can include:

Synchronous calls to the application's own server (B-C).
Asynchronous calls to on-premise systems (D-E).
Asynchronous calls to third-party systems (F-G).

Point H:
This marks the finish from the user's perspective. It is the moment when the final callback is executed, the UI is updated with all the consolidated data, and the single-page application is ready for the next user interaction.

Why the Other Options Are Incorrect

A. Sum of A to H:
This would be incorrect because it double-counts time. In a typical single-page application architecture, many of these processes (like the on-premise and third-party calls) happen concurrently (in parallel), not sequentially. Adding all the individual durations together would grossly overstate the total time the user actually waits.

B. Sum of A to F:
This option ends at point F, which is the finish of a third-party asynchronous call. This call's completion does not, by itself, update the UI. The application still needs to receive the callback and process the data (G-H) before the user sees the result.

C. Sum of A, G, and H:
This is also incorrect. Although it includes the initial request (A) and the final step (H), it adds G (the start of the final callback) as if it were a separate duration. The entire journey from the user's click to the final UI update is already captured by the total elapsed time between A and H, so summing intermediate measurements either double-counts concurrent work or misstates the wait the user actually experiences.

Key Concept
The key concept tested here is User-Perceived Response Time in an asynchronous, service-oriented architecture.
An Integration Architect must understand that from an end-user's viewpoint, performance is defined by the total latency of a business process, not the sum of its individual, often parallel, components. Monitoring and optimizing for this end-to-end elapsed time is critical for ensuring a positive user experience in composite applications that leverage multiple systems.
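
As a minimal sketch (with assumed timestamp values), the user-perceived measure is a single elapsed interval from the start of A to the finish of H, not an accumulation of the individual call durations:

```apex
// Hypothetical timestamps logged to the analytics store for one UI interaction.
Datetime startOfA  = Datetime.newInstance(2025, 1, 1, 10, 0, 0);  // user clicks
Datetime finishOfH = Datetime.newInstance(2025, 1, 1, 10, 0, 4);  // UI fully updated
// End-to-end response time is the difference between these two points;
// the parallel calls B to G are already contained within this interval.
Long elapsedMs = finishOfH.getTime() - startOfA.getTime(); // 4000 ms
System.debug('User-perceived response time (ms): ' + elapsedMs);
```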

Reference
This concept is central to the design principles covered in the Salesforce Integration Patterns documentation, particularly patterns involving composite services and parallel processing. The official Salesforce study guide for the Platform Integration Architect credential emphasizes the importance of designing and monitoring integration solutions with a focus on the overall business process latency and user experience, rather than just individual service-level agreements (SLAs).

Northern Trail Outfitters (NTO) uses Salesforce to track leads and opportunities and to capture order details. However, Salesforce isn't the system that holds or processes orders. After the order details are captured in Salesforce, an order must be created in the remote system, which manages the order lifecycle. The Integration Architect for the project is recommending that the remote system subscribe to the platform event defined in Salesforce. Which integration pattern should be used for this business use case?



A.

Remote Call In


B.

Request and Reply


C.

Fire and Forget


D.

Batch Data Synchronization





C.
  

Fire and Forget



Explanation:

In this scenario:

Salesforce is used to capture order details, but it does not process or manage orders.
Once an order is captured in Salesforce, it must be communicated to a remote system that handles the full order lifecycle.
The remote system subscribes to a platform event in Salesforce.
This is a classic case of asynchronous, event-driven integration.

The key points are:

Salesforce is the publisher – it publishes an event (Platform Event) whenever an order is created.
Remote system is the subscriber – it listens for the platform event and processes the order independently.
No synchronous response is required – Salesforce doesn’t wait for the remote system to confirm the order creation.
This matches the Fire and Forget integration pattern, which is designed for one-way, asynchronous communication where the sender does not wait for a response and the receiver processes the message independently.

Correct Option:

C. Fire and Forget:
Salesforce publishes a Platform Event for every new order.
The external system subscribes and creates the order without Salesforce needing to wait for a response.
Ensures decoupled, scalable, and real-time processing.
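
A minimal sketch of the publish side, assuming a hypothetical platform event Order_Captured__e with a single Order_Number__c text field (the event definition itself would be created declaratively in Setup):

```apex
trigger OrderCapturedPublisher on Order (after insert) {
    List<Order_Captured__e> events = new List<Order_Captured__e>();
    for (Order o : Trigger.new) {
        // Build one event per captured order; field names here are illustrative.
        events.add(new Order_Captured__e(Order_Number__c = o.OrderNumber));
    }
    // Fire and forget: publish to the event bus and continue without waiting
    // for the subscribing order-management system to respond.
    List<Database.SaveResult> results = EventBus.publish(events);
}
```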

Incorrect Options:

A. Remote Call In:
Used when an external system calls Salesforce to retrieve or modify data.
Not applicable here because Salesforce initiates the communication, not the external system.

B. Request and Reply:
This is synchronous communication. Salesforce sends a request and waits for a response before proceeding.
Not suitable here because order creation does not require an immediate response from the external system.

D. Batch Data Synchronization:
Involves periodic bulk data transfers, typically scheduled.
Not appropriate for real-time, event-driven processing where every order must be handled as it occurs.

Reference:
Salesforce Integration Patterns and Practices – Event-Driven Architecture
Salesforce Platform Events Overview – Platform Events Developer Guide
This Fire and Forget pattern ensures that the integration is loosely coupled, reliable, and scalable, which is crucial for handling order processing across multiple systems without impacting Salesforce performance.

Northern Trail Outfitters (NTO) uses different shipping services for each of the 34 countries it serves. Services are added and removed frequently to optimize shipping times and costs. Sales Representatives serve all NTO customers globally and need to select between valid service(s) for the customer's country and request shipping estimates from that service. Which two solutions should an architect propose?
Choose 2 answers



A.

Use Platform Events to construct and publish shipper-specific events.


B.

Invoke middleware service to retrieve valid shipping methods.


C.

Use middleware to abstract the call to the specific shipping services.


D.

Store shipping services in a picklist that is dependent on a country picklist.





B.
  

Invoke middleware service to retrieve valid shipping methods.



C.
  

Use middleware to abstract the call to the specific shipping services.



Explanation

This scenario describes a need for dynamic integration with multiple external systems (34 different shipping services) that are frequently changing. The Integration Architect should design a solution that decouples the Salesforce application (Sales Representatives' workflow) from the complexity and volatility of the external services.

C. Use middleware to abstract the call to the specific shipping services.

Abstraction and Decoupling:
Middleware (like Mulesoft or a dedicated Enterprise Service Bus/Integration Platform) is the ideal solution to handle the complexity of 34 different services. It can act as a single, consistent interface for Salesforce. Salesforce calls one endpoint on the middleware, and the middleware handles the logic of determining the correct service, applying any necessary data transformations, and invoking that specific service's API. This isolates Salesforce from changes to the external service APIs.

B. Invoke middleware service to retrieve valid shipping methods.

Dynamic Data Retrieval:
The "valid service(s) for the customer's country" is a dynamic and frequently changing piece of information. Storing this directly in Salesforce (like in a picklist, as in option D) would require constant manual or complex automated maintenance. The best practice is for the Salesforce application to call the middleware (which is already integrating with all services and has the logic for "validity") to dynamically retrieve the current valid shipping options for a given country. This ensures the Sales Rep always sees up-to-date information.

❌ Why the Other Options are Incorrect

A. Use Platform Events to construct and publish shipper-specific events.

Use Case Mismatch:
Platform Events are an excellent solution for asynchronous, fire-and-forget, event-driven communication (e.g., notifying external systems after an Order is created). Requesting an estimate and a list of valid methods is a synchronous requirement—the Sales Rep needs the answer immediately to proceed. Middleware invoked via an outbound callout (e.g., using Apex or External Services) is the correct pattern.

D. Store shipping services in a picklist that is dependent on a country picklist.

Maintenance Nightmare:
With services "added and removed frequently," managing this through standard Salesforce configuration like dependent picklists would be highly error-prone, require constant manual updates, and likely violate the principle of having a single source of truth for dynamic, external data. The data should be retrieved dynamically from the integration layer (middleware).

📚 Reference
This solution aligns with the principles of the Integration Layer/Middleware Pattern, which is fundamental for the Integration Architect role.
Pattern: Middleware / Enterprise Service Bus (ESB)
Principle: Decoupling and Abstraction. A central layer should shield the Salesforce application from the complexity, volatility, and heterogeneity of multiple backend systems.
Source: Salesforce Integration Architecture Designer Trailmix (specifically modules covering integration patterns).

A company is planning on sending orders from Salesforce to a fulfillment system. The integration architect has been asked to plan for the integration. Which two questions should the integration architect consider?

Choose 2 answers



A.

Can the fulfillment system create new addresses within the Order Create service?


B.

Can the fulfillment system make a callback into Salesforce?


C.

Can the fulfillment system implement a contract-first Outbound Messaging interface?


D.

Is the product catalog data identical at all times in both systems?





B.
  

Can the fulfillment system make a callback into Salesforce?



D.
  

Is the product catalog data identical at all times in both systems?



Explanation

When planning a Salesforce-to-fulfillment system integration for sending orders, the Integration Architect must focus on data synchronization, system capabilities, and interaction patterns. The two most critical questions are B and D, as they directly impact integration design, reliability, and data consistency.

B. Can the fulfillment system make a callback into Salesforce?
Why this is correct:

Many fulfillment workflows require bidirectional communication.
Example: After receiving an order, the fulfillment system may need to update order status (e.g., “Shipped”, “Backordered”) or send tracking numbers back to Salesforce.

If callbacks are supported, the architect can design asynchronous updates using:
REST/SOAP APIs from fulfillment → Salesforce
Platform Events or Outbound Messages (if Salesforce initiates)
Apex Callouts + Named Credentials
If not supported, the solution must rely on polling, batch sync, or middleware (e.g., MuleSoft, Boomi), increasing complexity and latency.
This question determines whether real-time status sync is feasible — a common business requirement.
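
If callbacks are supported, one option is a lightweight Apex REST endpoint that the fulfillment system calls as the dedicated integration user. The sketch below uses assumed names; the /orderstatus mapping and the status values are illustrative, not part of the exam scenario.

```apex
@RestResource(urlMapping='/orderstatus/*')
global with sharing class OrderStatusCallback {
    // The fulfillment system POSTs JSON such as
    // {"orderId":"801...","status":"Shipped"} to /services/apexrest/orderstatus.
    @HttpPost
    global static void updateStatus(String orderId, String status) {
        Order o = [SELECT Id, Status FROM Order WHERE Id = :orderId LIMIT 1];
        o.Status = status; // assumes matching picklist values exist; tracking data could be mapped similarly
        update o;
    }
}
```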

D. Is the product catalog data identical at all times in both systems?
Why this is correct:

Orders reference Product SKUs, Prices, Descriptions, Taxes, etc.
If catalog data diverges (e.g., price changes in Salesforce but not in fulfillment), it leads to:
Rejected orders
Pricing disputes
Reconciliation issues

The architect must clarify:
Which system is the source of truth for products?
Is real-time sync needed (via API, Platform Events, CDC)?
Or is nightly batch sync acceptable?

This drives decisions on:
Master Data Management (MDM)
Data mapping and transformation
Error handling for mismatches

Mismatched catalog data is one of the top causes of integration failures in order-to-fulfillment scenarios.

Why the other options are incorrect:

A. Can the fulfillment system create new addresses within the Order Create service?

This is a secondary detail, not a planning priority.
Address creation is typically handled in Salesforce (source of truth for customer data).
Even if supported, it’s a feature, not a core architectural decision.
This comes up during API contract design, not initial planning.

C. Can the fulfillment system implement a contract-first Outbound Messaging interface?

Outbound Messaging is a Salesforce-specific push mechanism using SOAP.
It requires the external system to host a public SOAP endpoint — rare in modern APIs.
Most fulfillment systems expect REST, not SOAP.
Contract-first applies to WSDL, but Outbound Messaging is Salesforce-initiated, not a mutual contract.
Better alternatives: Platform Events, Apex REST callouts, or middleware.
This is a tactical implementation question, not a strategic planning one.

References:
Trailhead – Integration Architect:
Plan Your Integration → “Ask: What data needs to flow in which direction?”
https://trailhead.salesforce.com/content/learn/modules/integration-architect-planning
Salesforce Integration Patterns:
Remote Process Invocation – Request and Reply vs. Fire and Forget: https://architect.salesforce.com/design/decision-guides/remote-process-invocation
Data Synchronization Best Practices:
https://help.salesforce.com/s/articleView?id=sf.integration_data_synchronization.htm

A developer has been tasked by the integration architect to build a solution based on the Streaming API. The developer has done some research and found there are different implementations of events in Salesforce (PushTopic Events, Change Data Capture, Generic Streaming, Platform Events), but is unsure how to proceed with the implementation. The developer asks the system architect for some guidance. What should the architect consider when making the recommendation?



A.

Push Topic Event can define a custom payload.


B.

Change Data Capture does not have record access support.


C.

Change Data Capture can be published from Apex.


D.

Apex triggers can subscribe to Generic Events.





C.
  

Change Data Capture can be published from Apex.



Explanation

The question centers on guiding a developer on the correct use of Streaming API events. The key differentiator among the options is which feature is true and impactful for making an architectural decision.

Let's evaluate each option:

A. Push Topic Event can define a custom payload.
This is incorrect. PushTopics are based on a SOQL query, and the payload is the result of that query. You cannot define a fully custom, free-form payload with a PushTopic. Platform Events are designed for that purpose.

B. Change Data Capture does not have record access support.
This is misleading and generally incorrect. Change Data Capture events honor the sharing and field-level security of the subscribing user. The event payload will only contain fields and records that the user is permitted to see. Therefore, it does have record access support.

C. Change Data Capture can be published from Apex.
This is correct. While Change Data Capture is primarily an automatic service that publishes events on standard object record changes (create, update, delete, undelete), you can also publish Change Data Capture-like events for standard objects programmatically using the EventBus.publish method in Apex. This is a powerful feature that allows for simulating or forcing change events, which is crucial for testing and certain replication scenarios.

D. Apex triggers can subscribe to Generic Events.
This is incorrect. Apex triggers cannot act as subscribers for any Streaming API event (PushTopic, Generic, Platform Event, or Change Data Capture). Subscribers are always external clients (using CometD), Lightning components, or, in the case of Platform Events and Change Data Capture, Process Builder, Flow, or Apex Triggers that are fired when the event is received. The key is that the trigger is on the event object itself (e.g., My_Event__e), not that the trigger "subscribes" to a generic channel.
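
For reference, the subscription mechanism described above looks like this in Apex: a trigger defined directly on a hypothetical platform event object (My_Event__e is a placeholder), which fires after insert when events are delivered from the event bus.

```apex
trigger MyEventSubscriber on My_Event__e (after insert) {
    for (My_Event__e evt : Trigger.new) {
        // Each delivered event carries a ReplayId that identifies its position
        // on the event bus; process the payload fields here.
        System.debug('Received event with replay ID ' + evt.ReplayId);
    }
}
```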

Therefore, the architect should recommend based on the accurate and powerful feature that Change Data Capture events can be published programmatically, which is a critical piece of information for implementation and testing.

Key Concept
The key concept is understanding the capabilities, use cases, and limitations of the different Streaming API event types (PushTopics, Generic Events, Platform Events, and Change Data Capture). An Integration Architect must be able to select the right event-based mechanism based on requirements like event source (data change vs. business event), payload flexibility, and how the event is published and consumed.

Reference
This distinction is covered in the Salesforce documentation on "Choose an Event Type for Your Use Case." Specifically, the documentation for Change Data Capture states that while it's automatic, you can "publish change events for standard objects" using Apex. This is a defining characteristic that differentiates it from other automated data-centric events and is essential knowledge for an architect designing a solution.

A customer imports data from an external system into Salesforce using Bulk API. These jobs have batch sizes of 2,000 and are run in parallel mode. The batch fails frequently with the error "Max CPU time exceeded". A smaller batch size will fix this error. Which two options should be considered when using a smaller batch size?
Choose 2 answers



A.

Smaller batch size may cause record-locking errors.


B.

Smaller batch size may increase time required to execute bulk jobs.


C.

Smaller batch size may exceed the concurrent API request limits.


D.

Smaller batch size can trigger "Too many concurrent batches" error.





B.
  

Smaller batch size may increase time required to execute bulk jobs.



D.
  

Smaller batch size can trigger "Too many concurrent batches" error.



Explanation:

The job is failing with the error "Max CPU time exceeded", which often occurs when processing too many records per batch triggers complex automation (triggers, flows, validation rules, rollups, etc.). Reducing the batch size helps distribute processing and avoid exceeding CPU limits—but it introduces other trade-offs.

Below is the impact analysis of using a smaller batch size:

✅ Correct Options

B. Smaller batch size may increase time required to execute bulk jobs.
With more batches required to process the same number of records, the total execution time increases.
More batches = more overhead for setup, API calls, commit operations.

D. Smaller batch size can trigger "Too many concurrent batches" error.
Bulk API allows a maximum of 100 batches queued or processing at once.
Reducing the batch size increases the batch count, which can exceed this limit and cause the error.
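For example (illustrative numbers only): loading 1,000,000 records requires 500 batches at the original batch size of 2,000, but 5,000 batches if the size is reduced to 200. That is ten times as many batches to queue, schedule, and commit, which is what drives both the longer total run time (B) and the greater exposure to batch-count limits (D).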

❌ Incorrect Options

A. Smaller batch size may cause record-locking errors.
Actually, the opposite is true: smaller batch sizes reduce record-locking issues because fewer records are processed together.

C. Smaller batch size may exceed the concurrent API request limits.
Bulk API operations consume fewer API calls because each job counts as a single call + minimal batch overhead.
Smaller batch sizes do not significantly affect API request limits in most cases.

✅ Final Answer:

B and D

Reference:
Salesforce Bulk API Limits & Best Practices
Introduction to Bulk API 2.0 and Bulk API
Record Locking & CPU Limits Considerations

Northern Trail Outfitters (NTO) has recently changed their Corporate Security Guidelines. The guidelines require that all cloud applications pass through a secure firewall before accessing on-premise resources. NTO is evaluating middleware solutions to integrate cloud applications with on-premise resources and services. What are two considerations an Integration Architect should evaluate before choosing a middleware solution?
Choose 2 answers



A.

The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.


B.

An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.


C.

The middleware solution enforces the OAuth security protocol.


D.

The middleware solution is able to interface directly with databases via an ODBC connection string.





A.
  

The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.



B.
  

An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.



Explanation

The core requirement is to pass all cloud application traffic through a secure firewall before accessing on-premise resources. This is a classic perimeter security and network topology challenge that must be addressed by the middleware infrastructure.

B. An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.

Perimeter Security:
The DMZ (Demilitarized Zone) is the standard network segment placed between the internal, trusted network and the external, untrusted network (the internet/cloud). To satisfy the requirement of passing traffic through a secure firewall, the API Gateway (a core component of modern integration/middleware) that receives external requests must be strategically placed behind the external firewall in the DMZ. This allows for strict control, logging, and inspection of all inbound traffic before it ever reaches the internal resources.

A. The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.

Centralized Control and Security:
The API Gateway is the component that enforces security policies, handles throttling, performs message transformation, and ensures a secure connection (like TLS/SSL) between the cloud application (Salesforce) and the on-premise services. The middleware solution must inherently include or support a robust API Gateway to meet the secure access requirement.

❌ Why the Other Options are Incorrect

C. The middleware solution enforces the OAuth security protocol.

Too Specific:
While OAuth is a great, common security protocol, the requirement only states secure firewall access. Many other secure methods like mutual TLS (mTLS), JWT validation, or Basic Auth over HTTPS might be used depending on the endpoint. OAuth is a capability the gateway should have, but the fundamental architectural evaluation must focus on the network placement (DMZ) and component (API Gateway).

D. The middleware solution is able to interface directly with databases via an ODBC connection string.

Architectural Anti-Pattern:
A best practice is to never expose databases directly to integration middleware. Integration should be done via services and APIs (e.g., REST, SOAP) that enforce business logic, security, and transactionality. Directly connecting to an on-premise database via ODBC or JDBC bypasses the security layer and is highly discouraged.

📚 Reference
This relates to the Integration Security and Network Topology topics of the Integration Architect exam:

Key Concept:
Hybrid Integration Architecture. This requires an integration component (often called an Agent, Runtime, or Gateway) to be deployed on-premise, typically within a DMZ, to act as a secure bridge between the cloud and the internal network.

DMZ:
The role of the Demilitarized Zone in protecting the private network while allowing controlled access to services from an untrusted network.

Which WSDL should an architect consider when creating an integration that might be used for more than one Salesforce organization and different metadata?



A.

Corporate WSDL


B.

Partner WSDL


C.

SOAP API WSDL


D.

Enterprise WSDL





B.
  

Partner WSDL



Explanation

The key requirement in the question is an integration that "might be used for more than one salesforce organization and different metadata." This directly points to the need for a generic, dynamic WSDL that is not tied to the specific configuration (custom objects or fields) of a single Salesforce org.

Let's evaluate each option:

A. Corporate WSDL:
This is incorrect. The Corporate WSDL (also known as the Enterprise WSDL) is strongly-typed and specific to a single Salesforce organization. It includes all of that org's custom objects, fields, and settings in its structure. If the metadata changes, the WSDL must be re-generated and the client code recompiled. This makes it unsuitable for use across multiple, different orgs.

B. Partner WSDL:
This is correct. The Partner WSDL is a single, generic, loosely-typed WSDL that works for any Salesforce organization. It represents sObjects and fields as generic types (e.g., sObject and XmlElement), allowing the client application to discover the metadata of any org at runtime. This makes it the ideal choice for ISVs building packaged applications or for companies building a single integration to connect to multiple Salesforce orgs with different configurations.

C. SOAP API WSDL:
This is a distractor. "SOAP API WSDL" is a generic term that describes the API itself. In practice, when you generate a WSDL for the SOAP API in Setup, you are explicitly choosing between the Partner WSDL and the Enterprise WSDL. This option is not specific enough.

D. Enterprise WSDL:
This is incorrect and is simply another name for the Corporate WSDL (Option A). It has the same limitation of being tightly coupled to a single org's metadata.

Key Concept
The key concept is understanding the critical architectural choice between a strongly-typed WSDL (Enterprise) and a loosely-typed WSDL (Partner).

Enterprise WSDL:
Used for stable, point-to-point integrations with a single, known Salesforce org. It provides the benefit of compile-time type checking.

Partner WSDL:
Used for dynamic, multi-tenant integrations that must work across multiple Salesforce orgs with varying metadata. It requires more complex client-side code to handle the generic sObjects but offers ultimate flexibility.

Reference
This is a foundational topic for Salesforce integrations. The official Salesforce documentation, specifically the "Generate the Enterprise WSDL" and "Generate the Partner WSDL" pages, clearly distinguishes these two types. The Partner WSDL is explicitly described as the correct choice for "an independent software vendor (ISV) who is creating a client application for multiple organizations" because it is not affected by organization-specific metadata.

A company's security assessment noted vulnerabilities in the unmanaged packages in their Salesforce orgs, notably secrets that are easily accessible and in plain text, such as usernames, passwords, and OAuth tokens used in callouts from Salesforce. Which two persistence mechanisms should an integration architect require to be used to ensure that secrets are protected from deliberate or inadvertent exposure?
Choose 2 answers



A.

Encrypted Custom Fields


B.

Named Credentials


C.

Protected Custom Metadata Types


D.

Protected Custom Settings





B.
  

Named Credentials



C.
  

Protected Custom Metadata Types



Explanation:

During a security review, the company discovered hard-coded secrets (usernames, passwords, OAuth tokens) in unmanaged package components. To prevent exposure of credentials in code, configuration, or metadata, Salesforce recommends secure storage mechanisms that encrypt or restrict visibility of secrets.

The solution should ensure:

No plain text credentials in org metadata
Restricted visibility to administrators only
Secure handling of authentication for outbound callouts

✅ Correct Options

B. Named Credentials
Best practice for protecting secrets used in callouts
Securely stores OAuth tokens, passwords, and authentication endpoints
Credentials are never exposed in plain text to developers or subscribers
Supports OAuth 2.0, AWS IAM, and External Credential Framework
Simplifies callouts: no need to handle tokens manually in Apex

C. Protected Custom Metadata Types
Data marked as protected is hidden from subscribers of unmanaged or managed packages
Only visible in the packaging org
Secure choice when deploying credentials via a managed package configuration
Can store configuration securely without exposing sensitive fields
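
As an illustration of how protected configuration is consumed, here is a minimal sketch that reads a hypothetical protected custom metadata type Integration_Secret__mdt with a Token__c field. The names are placeholders; in a packaged solution the records would be hidden from subscriber orgs.

```apex
public with sharing class SecretReader {
    public static String readToken() {
        // getInstance returns the record by DeveloperName without a SOQL query;
        // it returns null if no record with that name exists.
        Integration_Secret__mdt secret =
            Integration_Secret__mdt.getInstance('Default_Endpoint');
        return (secret != null) ? secret.Token__c : null;
    }
}
```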

❌ Incorrect Options

A. Encrypted Custom Fields
Only protects data at rest
Admins can still view decrypted values
Not intended for integration secrets or programmatic authentication

D. Protected Custom Settings
Although protected settings limit visibility to subscribers, Salesforce recommends Custom Metadata Types over Custom Settings for secure configuration in packages
Custom Settings are no longer the recommended mechanism for new secure configuration use cases in packages

✅ Final Answer:

B. Named Credentials
C. Protected Custom Metadata Types

Reference:
Salesforce Security Guide – Handling Secrets in Integrations
https://developer.salesforce.com/docs
Named Credentials Overview
Named Credentials as Callout Endpoints
Protected Custom Metadata for Managed Packaging
ISVforce Guide: Build and Distribute AppExchange Solutions


Experience the Real Salesforce-Platform-Integration-Architect Exam Before You Take It

Our new timed practice test mirrors the exact format, number of questions, and time limit of the official Salesforce-Platform-Integration-Architect exam.

The #1 challenge isn't just knowing the material; it's managing the clock. Our new simulation builds your speed and stamina.



Enroll Now

Ready for the Real Thing? Introducing Our Real-Exam Simulation!


You've studied the concepts. You've learned the material. But are you truly prepared for the pressure of the real Salesforce-Platform-Integration-Architect exam?

We've launched a brand-new, timed practice test that perfectly mirrors the official exam:

✅ Same Number of Questions
✅ Same Time Limit
✅ Same Exam Feel
✅ Unique Exam Every Time

This isn't just another Salesforce-Platform-Integration-Architect practice exam. It's your ultimate preparation engine.

Enroll now and gain the unbeatable advantage of:

  • Building Exam Stamina: Practice maintaining focus and accuracy for the entire duration.
  • Mastering Time Management: Learn to pace yourself so you never have to rush.
  • Boosting Confidence: Walk into your exam knowing exactly what to expect, eliminating surprise and anxiety.
  • A New Test Every Time: Our question pool ensures you get a different, randomized set of questions on every attempt.
  • Unlimited Attempts: Take the test as many times as you need. Take it until you're 100% confident, not just once.

Don't just take a test once. Practice until you're perfect.

Don't just prepare. Simulate. Succeed.

Enroll For Salesforce-Platform-Integration-Architect Exam