Salesforce-MuleSoft-Platform-Architect Practice Test Questions (2026)

Total 152 Questions


Last Updated On: 7-Apr-2026




An API client calls one method from an existing API implementation. The API implementation is later updated. What change to the API implementation would require the API client's invocation logic to also be updated?



A. When the data type of the response is changed for the method called by the API client


B. When a new method is added to the resource used by the API client


C. When a new required field is added to the method called by the API client


D. When a child method is added to the method called by the API client





C.
  When a new required field is added to the method called by the API client

Explanation:

This question tests the understanding of what constitutes a breaking change versus a non-breaking change in an API contract. A breaking change forces the client to update their invocation logic, while a non-breaking change does not.

Why C is Correct:
Adding a new required field, either to the request payload or as a required query or header parameter, is a breaking change. Existing client requests will now be invalid because they do not include the newly required information. The API will likely return a 400 Bad Request or 422 Unprocessable Entity error. The client must be updated to provide this new field to successfully call the method. This changes the contract in a way that fails existing, unchanged clients.

Why A is Incorrect:
Changing the response's data type, for example from a string to an integer, or restructuring a JSON object, is also a breaking change that would require a client update. Strictly speaking, both A and C break the contract, so this option works as a distractor. When only one answer is expected, however, C is the classic exam answer: a new required field breaks clients through stricter validation rules rather than an obvious structural change, making it the subtler and more commonly tested scenario in API versioning discussions.

Why B is Incorrect:
Adding a new method to a resource is a non-breaking, backward-compatible change. Existing clients that invoke the original method continue to function without modification. This is a standard way to extend an API’s functionality.

Why D is Incorrect:
Adding a child method, such as a new nested endpoint like /resource/{id}/newChild, is also a non-breaking change. It introduces a new endpoint without altering the behavior of any existing endpoints used by clients.

Clarification on Option A vs. C:
In a strict interpretation, both A and C represent breaking changes. However, in the context of typical MuleSoft certification exams, option C is the quintessential example of a breaking change because it violates backward compatibility through stricter validation rules rather than an obvious structural change. Adding required fields is a very common real-world mistake that breaks clients, making it the expected answer.

Best Practice:
Any change that causes an existing, valid client request to become invalid is a breaking change and requires a MAJOR version increment, for example moving from version 2.1.0 to 3.0.0, along with proper client coordination.
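The MAJOR-version rule above can be sketched as a small helper. This is a minimal illustration, assuming a made-up set of change-type names (`add_required_field`, `add_method`, etc.); it is not part of any real versioning tool.

```python
# Hypothetical sketch: mapping API change types to semantic-version bumps.
# The change categories below are illustrative labels, not a real library's API.

BREAKING = {"add_required_field", "change_response_type", "remove_method"}
NON_BREAKING = {"add_method", "add_child_endpoint", "add_optional_field"}

def bump_version(version: str, change: str) -> str:
    """Return the next version (MAJOR.MINOR.PATCH) for a given change type."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change in BREAKING:
        return f"{major + 1}.0.0"          # breaking change -> MAJOR bump
    if change in NON_BREAKING:
        return f"{major}.{minor + 1}.0"    # backward-compatible addition -> MINOR bump
    return f"{major}.{minor}.{patch + 1}"  # fixes only -> PATCH bump

print(bump_version("2.1.0", "add_required_field"))  # -> 3.0.0
print(bump_version("2.1.0", "add_method"))          # -> 2.2.0
```

Note how the example from the text, 2.1.0 to 3.0.0, falls out of the MAJOR branch for a newly required field, while adding a method only increments MINOR.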

Reference:
MuleSoft’s API design guidance and the Semantic Versioning specification classify adding new required parameters or fields as a MAJOR breaking change because it breaks backward compatibility.

A company uses a hybrid Anypoint Platform deployment model that combines the EU control plane with customer-hosted Mule runtimes. After successfully testing a Mule API implementation in the Staging environment, the Mule API implementation is set with environment-specific properties and must be promoted to the Production environment. What is a way that MuleSoft recommends to configure the Mule API implementation and automate its promotion to the Production environment?



A. Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs


B. Modify the Mule API implementation's properties in the API Manager Properties tab, then promote the Mule API implementation to the Production environment using API Manager


C. Modify the Mule API implementation's properties in Anypoint Exchange, then promote the Mule API implementation to the Production environment using Runtime Manager


D. Use an API policy to change properties in the Mule API implementation deployed to the Staging environment and another API policy to deploy the Mule API implementation to the Production environment





A.
  Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs

Explanation:

In a hybrid deployment with a cloud-hosted control plane and customer-hosted or on-premises Mule runtimes, environment-specific configurations such as endpoints and credentials cannot be managed directly through the Runtime Manager UI Properties tab. That feature is limited to CloudHub deployments.

Recommended Approach:
MuleSoft recommends the following pattern for customer-hosted runtimes:

Use YAML or .properties files bundled inside the Mule application’s deployable JAR.
Configure the application to load the correct configuration file based on an environment variable or classifier, for example mule.env=prod.
Automate deployment and promotion using the Anypoint CLI such as anypoint-cli runtime-mgr:deploy or the Runtime Manager REST APIs, typically integrated into CI/CD pipelines like Jenkins or Azure DevOps.

This approach enables consistent, repeatable, and automated promotion from Staging to Production without manual intervention.
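The per-environment file selection step can be sketched as follows. This is an illustrative Python sketch, not Mule runtime code: the `config-<env>.properties` naming and the `mule.env` variable mirror the convention described above, but are assumptions for the example.

```python
# Illustrative sketch of the bundled per-environment properties pattern:
# select a properties file by environment name, as a Mule app does via mule.env.
import os
import tempfile

def load_properties(env: str, config_dir: str) -> dict:
    """Parse a simple key=value properties file selected by environment name."""
    path = os.path.join(config_dir, f"config-{env}.properties")
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

# Demo: create a bundled environment file, then select it via an env variable.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config-prod.properties"), "w") as fh:
        fh.write("orders.host=orders.internal.example.com\norders.port=8443\n")
    env = os.environ.get("mule.env", "prod")   # e.g. passed as -Dmule.env=prod
    props = load_properties(env, d)
    print(props["orders.host"])
```

In a real Mule application the equivalent is a `configuration-properties` element whose file path interpolates `${mule.env}`, with the files packaged inside the deployable JAR.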

Why the other options are incorrect:

B. Modify properties in API Manager Properties tab
API Manager properties are used for API-level configuration such as policies and governance, not application runtime properties. Additionally, the Runtime Manager Properties tab is unavailable or limited for customer-hosted runtimes.

C. Modify properties in Anypoint Exchange
Anypoint Exchange is used for sharing assets such as API specifications, connectors, and examples. It is not designed for configuring or deploying runtime application properties.

D. Use API policies
API policies enforce governance on incoming requests, such as rate limiting or security controls. They cannot be used to configure application properties or to deploy applications.

Reference:
MuleSoft documentation on deploying to customer-hosted runtimes and hybrid deployments recommends bundling environment-specific configuration files and using the Anypoint CLI or Runtime Manager REST APIs for automated deployments in CI/CD pipelines. This is a standard pattern for on-premises and hybrid environments in MuleSoft Architect certifications.

What Anypoint Connectors support transactions?



A. Database, JMS, VM


B. Database, 3MS, HTTP


C. Database, JMS, VM, SFTP


D. Database, VM, File





A.
  Database, JMS, VM

Explanation:

In MuleSoft, transactions are used to ensure that a group of operations either all succeed or all fail together, maintaining consistency and reliability. Mule runtime supports transactional resources through specific connectors that can participate in local transactions or XA transactions.

Connectors that support transactions:

Database Connector:
Supports transactional operations when interacting with relational databases. Multiple SQL statements can be grouped into a single transaction, ensuring rollback if one fails.

JMS Connector:
Supports transactions when interacting with message queues. JMS can participate in XA transactions, ensuring that message consumption and database updates occur atomically.

VM Connector:
Supports transactional message handling within Mule applications. VM queues can be transactional, ensuring reliable delivery and rollback in case of failures.

These connectors are explicitly designed to integrate with Mule’s transaction management framework, allowing developers to configure transactional scopes using transaction elements in flows.
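The all-or-nothing behavior these connectors provide can be illustrated outside Mule with a plain database transaction. The sketch below uses SQLite only as a stand-in for the Database connector's transactional scope; a failed statement rolls back every statement in the same transaction.

```python
# Minimal illustration of transactional all-or-nothing semantics (analogous to
# grouping operations inside a Mule transaction scope with the Database connector).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT NOT NULL)")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on exception
        conn.execute("INSERT INTO orders (id, status) VALUES (1, 'NEW')")
        conn.execute("INSERT INTO orders (id, status) VALUES (1, 'DUP')")  # PK violation
except sqlite3.IntegrityError:
    pass  # both inserts are undone together

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # -> 0: the first insert was rolled back along with the failed one
```

This is exactly the rollback semantics that HTTP, File, and SFTP cannot offer: once a request is sent or a file is written, there is no equivalent of the rollback above.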

Why other connectors do not support transactions:

Connectors such as HTTP, File, or SFTP do not support transactions. They operate in a stateless, request-response or file-based manner, where rollback semantics are not applicable. For example, once an HTTP request is sent or a file is written to disk, the action cannot be rolled back like a database insert or a JMS message acknowledgment.

Correct Answer:
Option A: Database, JMS, VM

❌ Option B
Database, 3MS, HTTP
Incorrect. "3MS" is a typo, likely intended to be JMS. HTTP does not support transactions because requests cannot be rolled back.

❌ Option C
Database, JMS, VM, SFTP
Incorrect. SFTP does not support transactional semantics. File transfers cannot be rolled back once executed.

❌ Option D
Database, VM, File
Incorrect. The File connector does not support transactions. Once a file is written or deleted, rollback is not possible.

📖 References
MuleSoft Documentation: Transactions in Mule
MuleSoft Documentation: Database Connector Transactions
MuleSoft Certified Platform Architect I Exam Guide — Transactional Resources section

👉 In summary:
Option A is correct because only Database, JMS, and VM connectors support transactions in MuleSoft. Other connectors such as HTTP, File, and SFTP do not provide transactional rollback semantics.

An API implementation is being designed that must invoke an Order API, which is known to repeatedly experience downtime. For this reason, a fallback API is to be called when the Order API is unavailable. What approach to designing the invocation of the fallback API provides the best resilience?



A. Search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API


B. Create a separate entry for the Order API in API Manager, and then invoke this API as a fallback API if the primary Order API is unavailable


C. Redirect client requests through an HTTP 307 Temporary Redirect status code to the fallback API whenever the Order API is unavailable


D. Set an option in the HTTP Requester component that invokes the Order API to instead invoke a fallback API whenever an HTTP 4xx or 5xx response status code is returned from the Order API





A.
   Search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API

Explanation:

✅ Why A is correct:
For maximum resilience, MuleSoft recommends handling fallback logic explicitly in the application flow, rather than relying on implicit platform behavior or redirects.

A resilient design typically includes:
- Primary API invocation
- Explicit error handling / circuit-breaker logic
- Fallback API invocation when the primary API fails

By searching Anypoint Exchange for an existing fallback or alternative API (e.g., a cached, degraded, or read-only service) and invoking it when the primary Order API is unavailable, you:
- Maintain control over fallback behavior
- Avoid tight coupling or hidden runtime behavior
- Align with API-led connectivity and reuse principles

This is the most reliable and architecturally correct approach.

❌ Why the other options are incorrect:

B. Create a second API in API Manager and invoke it as fallback
API Manager is for governance and policy enforcement, not dynamic fallback routing. Creating a second API instance does not inherently provide resilience.

C. Redirect using HTTP 307
Redirecting clients pushes responsibility to the consumer and breaks abstraction. Clients may not support or expect redirects, which violates good API design practices.

D. Use an HTTP Requester option to auto-fallback
The Mule HTTP Requester does not provide a built-in fallback option for handling 4xx/5xx responses. Error handling and fallback logic must be explicitly implemented in the flow using patterns like on-error-continue, choice, or circuit breaker.

✅ Summary:
The most resilient and MuleSoft-aligned approach is to explicitly design fallback behavior in the application logic, typically using an alternative API discovered via Anypoint Exchange.

A system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. A process API is a client to the system API and is being rate limited by the system API, with different limits in each of the environments. The system API's DR environment provides only 20% of the rate limiting offered by the primary environment. What is the best API fault-tolerant invocation strategy to reduce overall errors in the process API, given these conditions and constraints?



A. Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the system API deployed to the DR environment


B. Invoke the system API deployed to the primary environment; add retry logic to the process API to handle intermittent failures by invoking the system API deployed to the DR environment


C. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment; add timeout and retry logic to the process API to avoid intermittent failures; add logic to the process API to combine the results


D. Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke a copy of the process API deployed to the DR environment





A.
  Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the system API deployed to the DR environment

Explanation:

In this scenario, the system API is deployed in two environments:
- Primary environment with full rate limits.
- Disaster Recovery (DR) environment with only 20% of the rate limiting capacity of the primary.

The process API consumes the system API and must be resilient to failures. The challenge is to design a fault-tolerant invocation strategy that reduces errors while respecting the constraints of rate limits and DR capacity.

The best approach is to prioritize the primary environment and only fall back to the DR environment when necessary. This is achieved by:
- Invoking the primary system API first.
- This ensures the process API benefits from the higher rate limits and avoids overwhelming the DR environment unnecessarily.
- Adding timeout and retry logic.
- Timeouts prevent the process API from hanging indefinitely when the primary system API is unresponsive.
- Retries handle transient failures (e.g., network glitches, temporary overloads).
- Failover to the DR environment only if retries fail.
- This ensures the DR environment is used sparingly, preserving its limited capacity.
- The DR environment acts as a safety net, not a primary path.

This strategy aligns with MuleSoft’s resilience best practices:
- Fail fast, retry smartly, and fallback gracefully.
- Avoid parallel invocation (Option C), which would overwhelm the DR environment and waste resources.
- Avoid invoking DR too early (Option B), which risks hitting rate limits quickly.
- Avoid deploying duplicate process APIs (Option D), which adds unnecessary complexity and does not solve the rate limiting issue.

By keeping the DR environment as a last resort, the process API minimizes errors while ensuring continuity of service during outages in the primary environment.
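The primary-first strategy with timeout, retry, and last-resort DR failover can be sketched as follows. This is a hedged illustration: `call_primary` and `call_dr` are placeholder callables standing in for HTTP requests to the two DNS names, and the retry counts and backoff values are arbitrary.

```python
# Sketch of the failover strategy above: call the primary with bounded retries,
# and only fall back to the limited-capacity DR endpoint once retries are exhausted.
import time

def invoke_with_failover(call_primary, call_dr, retries=2, backoff=0.01):
    last_error = None
    for attempt in range(retries + 1):
        try:
            return call_primary()          # a request timeout would be set on the client
        except Exception as exc:           # transient failure: back off and retry
            last_error = exc
            time.sleep(backoff * (2 ** attempt))
    try:
        return call_dr()                   # last resort: preserve scarce DR capacity
    except Exception:
        raise last_error

flaky_calls = {"n": 0}
def flaky_primary():
    flaky_calls["n"] += 1
    raise TimeoutError("primary unavailable")

print(invoke_with_failover(flaky_primary, lambda: "DR response"))  # -> DR response
print(flaky_calls["n"])  # -> 3: one initial attempt plus two retries before failover
```

The key property matching the analysis above is that DR is reached only after the primary's retry budget is spent, so the DR environment's 20% rate-limit capacity is consumed only during genuine primary outages.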

❌ Option B
Retry logic that immediately invokes DR
Incorrect. This would quickly consume the DR environment’s limited rate limit capacity, leading to errors.

❌ Option C
Parallel invocation of primary and DR
Incorrect. This doubles traffic and overwhelms the DR environment unnecessarily. It also complicates result handling.

❌ Option D
Invoke a copy of the process API in DR
Incorrect. Duplicating the process API does not solve the rate limiting issue. It adds complexity without resilience benefits.

📖 References
MuleSoft Documentation: Resilience Patterns
MuleSoft Blog: Designing Fault-Tolerant APIs with Retry and Fallback
MuleSoft Certified Platform Architect I Exam Guide — Resilience and DR Strategies section

👉 In summary:
Option A is correct because the most resilient strategy is to invoke the primary system API with timeout and retry logic, and only failover to the DR environment if the primary fails completely. This minimizes errors and respects the DR environment’s limited rate limits.

An organization is deploying their new implementation of the OrderStatus System API to multiple workers in CloudHub. This API fronts the organization's on-premises Order Management System, which is accessed by the API implementation over an IPsec tunnel. What type of error typically does NOT result in a service outage of the OrderStatus System API?



A. A CloudHub worker fails with an out-of-memory exception


B. API Manager has an extended outage during the initial deployment of the API implementation


C. The AWS region goes offline with a major network failure to the relevant AWS data centers


D. The Order Management System is inaccessible due to a network outage in the organization's on-premises data center





B.
  API Manager has an extended outage during the initial deployment of the API implementation

Explanation:

MuleSoft separates management functions from data processing to ensure resilience:
- Independence of Planes: Once a Mule application is deployed and its policies are cached locally in the runtime, it no longer requires a continuous connection to API Manager to function.
- Initial Deployment vs. Runtime: Even if API Manager experiences an outage during a deployment attempt, it typically only prevents management actions (like updating policies or viewing analytics). If the workers have already pulled the application and its policies, the API will remain online and serve requests. If the "outage" occurs just as the deployment is initiated, the deployment might fail to start, but it does not cause a "service outage" for an API that is intended to be running.
- High Availability (HA): Deploying to multiple workers in CloudHub provides horizontal scale and redundancy. If one worker fails, others continue to handle traffic, avoiding a total service outage.

Why Other Options DO Result in Service Outages:
- A (Worker Failure): While multiple workers provide redundancy, if a worker fails with an Out-of-Memory (OOM) error, that specific node is out of service. While not a total outage (if other workers are healthy), it represents a partial failure. However, the question asks what typically does not result in an outage. An API Manager outage is the most "disconnected" from actual request processing.
- C (AWS Region Offline): CloudHub runs on AWS. If a major AWS region or data center goes offline, the workers hosted there will fail, causing a complete service outage unless you have a multi-region disaster recovery plan.
- D (Backend System Inaccessible): The OrderStatus API is a "front" for the Order Management System (OMS). If the OMS or the IPsec tunnel goes down, the API can no longer fulfill its primary purpose. Every request will return an error (like 503 Service Unavailable), which constitutes a functional service outage.

Key Takeaway:
For the Platform Architect exam, remember that the Control Plane (API Manager, Runtime Manager) is for Management, while the Runtime Plane (CloudHub Workers) is for Execution. An outage in the Control Plane does not stop the Runtime Plane from processing existing traffic.

Reference:
MuleSoft CloudHub Architecture

A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?



A. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response


B. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment using a scatter-gather configured with a timeout, and then merge the responses


C. Invoke the system API deployed to the primary environment, and if it fails, invoke the system API deployed to the DR environment


D. Invoke ONLY the system API deployed to the primary environment, and add timeout and retry logic to avoid intermittent failures





A.
  In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response

Explanation:

This question presents a different optimization goal than the previous disaster recovery question. Here, the primary goal is the least possible response time for the Process API, and the System API has a guaranteed SLA of 100ms. The presence of a DR environment is a secondary fact to be leveraged for performance, not just resilience.

Why A is Correct:
This implements a "fastest response" or "race" pattern, which is optimal for minimizing latency when you have multiple, functionally equivalent endpoints.
- Parallel Invocation: The Process API sends requests to both the primary and DR System API endpoints simultaneously.
- Use First Response: It immediately returns the result from whichever endpoint responds first, discarding the slower response. This statistically guarantees the lowest possible latency for the client, as it eliminates the risk of the chosen endpoint being temporarily slower. It turns the DR environment into a performance asset, not just a resilience backup.

Why B is Incorrect:
Using a scatter-gather to merge responses adds unnecessary complexity and increases latency. Scatter-gather waits for all parallel routes to complete (or timeout) before proceeding to merge results. This means the response is delayed until the slowest of the two endpoints responds, which is the opposite of the goal to get the fastest possible response. It's used for aggregating different data, not for speed.

Why C is Incorrect:
This is a sequential, primary-first failover strategy. It is excellent for resilience and conserving DR capacity, but it is poor for minimizing response time. If the primary is slow (but not failing), the client still waits for the primary's timeout before even trying the DR, resulting in higher overall latency. It optimizes for reliability, not speed.

Why D is Incorrect:
Invoking only the primary with retries is the baseline approach. It does nothing to leverage the DR environment to improve speed. Timeout and retry logic adds latency on failure but doesn't improve the best-case response time. It fails to use available resources to meet the stated goal.

Trade-off Consideration:
Pattern A (Race): Optimizes for minimum latency but doubles the load on the backend systems (both primary and DR get every request). This is acceptable only if both environments are scaled to handle 100% of the traffic, which may have cost implications.
Pattern C (Failover): Optimizes for resource efficiency and resilience but accepts higher latency in failure scenarios.

Given the explicit goal of "respond in the least possible time," the race pattern (A) is the architecturally correct choice.

Implementation in Mule 4:
This can be implemented using a Scatter-Gather where each route calls a different endpoint, but with a critical difference: you would not aggregate. Instead, you would use Error Handling and Choice logic to capture the first successful response and cancel the other route, or use a custom aggregation strategy that picks the first successful result. More elegantly, it can be done with the async scope and competing callbacks.
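The "first successful response wins" behavior can be sketched in plain code. This is an illustrative Python sketch using thread-based concurrency in place of Mule constructs; both endpoints are simulated as callables with fixed latencies.

```python
# Sketch of the "race" pattern: invoke both equivalent endpoints in parallel
# and return the first successful response, discarding the slower one.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fastest_response(*calls, timeout=1.0):
    with ThreadPoolExecutor(max_workers=len(calls)) as pool:
        futures = [pool.submit(call) for call in calls]
        for future in as_completed(futures, timeout=timeout):
            if future.exception() is None:
                return future.result()     # first successful response wins
    raise RuntimeError("no endpoint responded in time")

def primary():   # simulated slower primary round trip (~90 ms)
    time.sleep(0.09)
    return "primary"

def dr():        # simulated faster DR round trip (~30 ms)
    time.sleep(0.03)
    return "dr"

print(fastest_response(primary, dr))  # -> dr: the faster endpoint's response is used
```

Note that every request is still sent to both endpoints, which is the doubled-load trade-off called out above; the race only changes which response is returned, not how much backend traffic is generated.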

Reference:
This pattern is known as a "parallel request" or "hedged request" in integration design. It is a standard technique for reducing latency when idempotent calls can be made to multiple identical endpoints. MuleSoft's documentation on performance optimization discusses parallel processing for reducing overall flow execution time.

An Anypoint Platform organization has been configured with an external identity provider (IdP) for identity management and client management. What credentials or token must be provided to Anypoint CLI to execute commands against the Anypoint Platform APIs?



A. The credentials provided by the IdP for identity management


B. The credentials provided by the IdP for client management


C. An OAuth 2.0 token generated using the credentials provided by the IdP for client management


D. An OAuth 2.0 token generated using the credentials provided by the IdP for identity management





D.
  An OAuth 2.0 token generated using the credentials provided by the IdP for identity management

Explanation:

When an Anypoint Platform organization is configured with an external identity provider (IdP), authentication and authorization are delegated to that IdP. This means that both users and clients authenticate against the IdP rather than directly against Anypoint’s native identity system.

For tools such as Anypoint CLI, which execute commands against the Anypoint Platform APIs, the CLI must authenticate using an OAuth 2.0 token. This token is generated by the IdP and represents the authenticated user’s identity and permissions.

Key points:
- Identity management via IdP: The IdP issues OAuth 2.0 tokens for users. These tokens are then used by Anypoint CLI to call Anypoint Platform APIs.
- Client management via IdP: This applies to API client applications (e.g., apps consuming APIs via Client ID/Secret). It is not relevant for CLI authentication, which requires user identity tokens.
- OAuth 2.0 token usage: The CLI does not use raw credentials (username/password) directly. Instead, it requires a valid OAuth 2.0 token issued by the IdP.
- Why identity management, not client management: CLI commands are executed on behalf of a user, not an API client application. Therefore, the token must come from the IdP’s identity management flow, not client management.

This aligns with MuleSoft’s best practices:
- CLI authentication → OAuth 2.0 token from IdP (identity management).
- API client authentication → Client ID/Secret from IdP (client management).

Thus, the correct answer is Option D, because the CLI requires an OAuth 2.0 token generated using IdP credentials for identity management.

Option A
The credentials provided by the IdP for identity management — Incorrect. Raw credentials (username/password) are not used directly; they must generate an OAuth 2.0 token.

Option B
The credentials provided by the IdP for client management — Incorrect. These are for API client applications, not CLI user authentication.

Option C
An OAuth 2.0 token generated using the credentials provided by the IdP for client management — Incorrect. This applies to API clients, not CLI users. CLI requires identity tokens.

📖 References:
MuleSoft Documentation: External Identity Providers
MuleSoft Documentation: Anypoint CLI Authentication
MuleSoft Certified Platform Architect I Exam Guide — Identity and Access Management section

👉 In summary:
Option D is correct because Anypoint CLI requires an OAuth 2.0 token from the IdP’s identity management flow, not client management credentials.

A company has started to create an application network and is now planning to implement a Center for Enablement (C4E) organizational model. What key factor would lead the company to decide upon a federated rather than a centralized C4E?



A. When there are a large number of existing common assets shared by development teams


B. When various teams responsible for creating APIs are new to integration and hence need extensive training


C. When development is already organized into several independent initiatives or groups


D. When the majority of the applications in the application network are cloud based





C.
  When development is already organized into several independent initiatives or groups

Explanation:

A Center for Enablement (C4E) is designed to shift the IT operating model from a centralized, bottleneck-prone delivery team to a cross-functional enablement team. When choosing an organizational model for the C4E, the primary deciding factor between centralized and federated is the existing structure and independence of the business units:

Organizational Context: If an organization is large and its various lines of business (LoBs) or development teams already operate with a high degree of autonomy (independent budgets, schedules, and specific domain initiatives), a centralized C4E would likely become a bottleneck and face resistance.

The Federated Advantage: A federated C4E model works by embedding C4E principles and "champions" directly into these existing independent groups. This allows the organization to maintain central standards (the "hub") while letting local teams (the "spokes") maintain the speed and agility required for their specific initiatives.

Efficiency: Coordinating multiple independent initiatives from a single centralized team requires significantly more manual process effort than allowing those teams to self-serve through a federated structure.

Why Other Options are Incorrect:
A: A large number of existing common assets is a result of a mature C4E rather than a reason to choose a federated model over a centralized one.
B: If teams are new to integration and need extensive training, a centralized or highly guided model is often better initially to ensure standards are established before decentralizing into a federated model.
D: Whether applications are cloud-based or on-premises is a technical deployment detail and does not dictate the organizational or human reporting structure of the C4E.

Key Takeaway:
For the Platform Architect exam, remember that federation is the architectural answer for autonomy and scale. Choose the federated model when you need to enable disparate groups to work independently within a unified set of guardrails.

Due to a limitation in the backend system, a system API can only handle up to 500 requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?



A. Rate limiting


B. HTTP caching


C. Rate limiting - SLA based


D. Spike control





D.
  Spike control

Explanation:

When an API must be protected from exceeding a specific backend capacity (like 500 requests per second), the choice of policy depends on whether you want to reject excess traffic or smooth it out.

Correct Answer:
Option D: Spike control
The Spike Control policy is specifically designed to protect backend systems from being overwhelmed by traffic surges. Unlike a standard rate-limiting policy, which enforces a "hard" quota and rejects traffic immediately upon exceeding it, Spike Control uses a Sliding Window Algorithm to smooth out traffic.

Smoothing/Queuing: If the backend limit is reached, Spike Control can queue requests and retry them after a short delay rather than failing them immediately. This ensures that the backend never sees more than the allowed 500 requests per second while still allowing some of the "burst" traffic to be processed successfully with slightly higher latency.

Backend Protection: MuleSoft documentation explicitly recommends Spike Control for backend protection scenarios where the primary goal is to ensure the system does not crash or degrade due to sudden volume increases.
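The reject-versus-smooth distinction can be illustrated with a toy model. The sliding-window bookkeeping below is a simplified sketch to show the contrast in behavior, not MuleSoft's actual policy implementation.

```python
# Illustrative contrast: a hard rate limit rejects excess requests outright,
# while spike control defers them until the window frees up (traffic smoothing).
from collections import deque

class HardRateLimit:
    """Rejects any request beyond max_requests per sliding window (429-style)."""
    def __init__(self, max_requests, window=1.0):
        self.max_requests, self.window, self.stamps = max_requests, window, deque()

    def allow(self, now):
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()          # drop timestamps outside the window
        if len(self.stamps) >= self.max_requests:
            return False                   # over quota: rejected immediately
        self.stamps.append(now)
        return True

class SpikeControl(HardRateLimit):
    """Delays excess requests instead of rejecting them."""
    def admit(self, now):
        delay = 0.0
        while not self.allow(now + delay):
            delay = (self.stamps[0] + self.window) - now  # wait for oldest to expire
        return delay                       # caller sleeps `delay`, then proceeds

limiter = SpikeControl(max_requests=2, window=1.0)
delays = [limiter.admit(now=0.0) for _ in range(3)]
print(delays)  # -> [0.0, 0.0, 1.0]: third request deferred into the next window
```

With a hard rate limit the third request would simply fail with 429; with the smoothing behavior it succeeds after a delay, which is why the backend never sees more than the configured throughput.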

Incorrect Answers:
Option A: Rate limiting
While Rate Limiting can protect a backend by setting a hard cap, it is primarily used for accountability and quota management. It uses a "fixed-window" algorithm and rejects all requests with a 429 Too Many Requests error once the quota is hit. This provides a "jagged" traffic profile to the backend compared to the "smooth" profile of Spike Control.

Option B: HTTP caching
Caching improves performance and reduces backend load by serving frequent requests from memory. However, it is not a traffic management policy. If 1,000 unique requests hit the API per second, caching would not prevent those requests from reaching the backend.

Option C: Rate limiting - SLA based
This is used when you want to offer different performance tiers (e.g., Bronze, Silver, Gold) to different Client Applications. It requires clients to provide a Client ID and Secret. Like standard rate limiting, it enforces a hard quota per client but does not provide the traffic-smoothing "spike" protection needed to safely guard a backend system's specific physical limit.

References:
MuleSoft Documentation: Spike Control Policy – "The Spike Control policy ensures that the backend server does not serve more requests than it can handle... it protects the backend by smoothing traffic."
MuleSoft Documentation: Rate Limiting Policy – "Use Rate Limiting for accountability... to enforce a hard limit."
MCPA Exam Guide: Section 1: Explaining the Application of the Anypoint Platform (Traffic Management Policies).

