Total 60 Questions
Preparing with the Salesforce-MuleSoft-Developer-II practice test is essential to ensure success on the exam. This Salesforce SP25 practice test lets you familiarize yourself with the Salesforce-MuleSoft-Developer-II exam question format and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the Salesforce certification Spring 2025 release exam on your first attempt. Surveys from different platforms and user-reported pass rates suggest that Salesforce-MuleSoft-Developer-II practice exam users are roughly 30-40% more likely to pass.
When implementing a synchronous API where the event source is an HTTP Listener, a developer needs to return the same correlation ID back to the caller in the HTTP response header.
How can this be achieved?
A. Enable the auto-generate CorrelationID option when scaffolding the flow
B. Enable the CorrelationID checkbox in the HTTP Listener configuration
C. Configure a custom correlation policy
D. No action is needed, as the correlation ID is returned to the caller in the response header by default
Explanation:
MuleSoft automatically generates a correlation ID when an event is received by an HTTP Listener.
If the incoming request includes an X-Correlation-ID header, Mule will use that value.
If not, Mule generates a new one using its correlation ID generator.
This ID is stored in the event context and is automatically propagated in the response headers unless explicitly disabled.
So, no manual configuration is required to return it; Mule does this by default for traceability and logging purposes.
Want to verify it?
You can inspect the response headers using a tool like Postman or curl and look for a header named X-Correlation-ID.
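As a rough illustration (the flow name, path, and configuration reference are made up, and Mule 4 syntax is assumed), a flow like the following returns the correlation ID without any extra configuration; the logger is only there to make the ID visible in the application log:

<flow name="correlation-demo-flow">
  <!-- The HTTP Listener is the event source; Mule reuses an incoming
       X-Correlation-ID header or generates a new correlation ID -->
  <http:listener config-ref="HTTP_Listener_config" path="/ping"/>
  <logger level="INFO" message="#['Correlation ID: ' ++ correlationId]"/>
  <set-payload value="#[output application/json --- { status: 'ok' }]"/>
  <!-- No extra step is needed here: the response sent back to the caller
       carries the same X-Correlation-ID header by default -->
</flow>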
Why the Other Options Are Incorrect:
A. Auto-generate CorrelationID when scaffolding
No such scaffolding option exists — correlation ID is runtime behavior.
B. CorrelationID checkbox in HTTP Listener
There’s no checkbox for this in the HTTP Listener config.
C. Custom correlation policy
Only needed if you want to override the default behavior — not required for basic propagation.
A Mule application uses API autodiscovery to access and enforce policies for a RESTful implementation. What should the flowRef attribute of the autodiscovery configuration reference?
A. Nothing, because flowRef is an optional attribute that can be passed at runtime
B. The name of the flow that has the APIkit Console to receive all incoming RESTful operation requests
C. Any of the APIkit-generated implementation flows
D. The name of the flow that has the HTTP Listener to receive all incoming RESTful operation requests
Explanation:
API autodiscovery is used to associate a Mule application with an API instance in Anypoint Platform, allowing policies (e.g., rate limiting, security) to be applied. The flowRef attribute in the autodiscovery configuration identifies which flow in the application that API instance is bound to.
Role of flowRef:
The flowRef must reference the flow containing the HTTP Listener that accepts incoming requests. This ensures that autodiscovery correctly ties the API instance to the flow where policies are enforced and requests are routed.
APIkit Context:
When using APIkit to implement a RESTful API, it generates flows (e.g., main flow) with an HTTP Listener to handle all operations. The flowRef should point to this flow, which acts as the root for receiving and dispatching requests to other APIkit-generated flows.
Why Option D:
The name of the flow that has the HTTP Listener to receive all incoming RESTful operation requests
This is correct because the flowRef must specify the flow with the HTTP Listener that serves as the entry point for the RESTful API. In an APIkit project, this is typically the main flow (e.g., api-main) generated by APIkit, which listens for incoming requests and routes them to the appropriate operation flows. Autodiscovery uses this reference to apply policies and track the API.
Why Not the Other Options?
Option A:
Nothing, because flowRef is an optional attribute that can be passed at runtime
This is incorrect. The flowRef attribute is not optional in the autodiscovery configuration; without a valid flow reference, the API instance cannot be linked to the application and policies cannot be enforced.
Option B:
The name of the flow that has the APIkit Console to receive all incoming RESTful operation requests
This is incorrect. The APIkit Console is a tool for testing and interacting with the API, not a flow that receives requests. The flowRef should point to a flow with an HTTP Listener, not one associated with the console, which is a separate component hosted by the Mule runtime.
Option C:
Any of the APIkit-generated implementation flows
This is incorrect. APIkit generates multiple flows for each operation defined in the RAML/OAS (e.g., get:\resource, post:\resource), but these are implementation flows that handle specific endpoints. The flowRef must point to the main entry flow with the HTTP Listener, not any arbitrary implementation flow, to ensure proper request routing and policy enforcement.
Detailed Behavior
Autodiscovery Setup:
The autodiscovery element is configured in the Mule application with the API instance ID from API Manager and a flowRef pointing to the flow that contains the HTTP Listener.
Example Flow:
In an APIkit project, the main flow might look like a single HTTP Listener with a base path (e.g., /api/*), which routes requests to operation-specific flows. The flowRef ties this entry point to the autodiscovery configuration.
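As a rough sketch (the API instance ID property, flow names, and listener path are illustrative), an APIkit-based application configured for autodiscovery commonly looks like this:

<!-- Autodiscovery ties this application to an API instance in API Manager;
     flowRef must name the entry flow that owns the HTTP Listener -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="api-main"/>

<flow name="api-main">
  <http:listener config-ref="HTTP_Listener_config" path="/api/*"/>
  <apikit:router config-ref="api-config"/>
</flow>

<!-- APIkit-generated implementation flows handle the individual operations -->
<flow name="get:\commodities:api-config">
  <set-payload value="#[output application/json --- []]"/>
</flow>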
Key Considerations for the MuleSoft Developer II Exam
API Autodiscovery:
Understand the role of the autodiscovery configuration and its flowRef attribute in linking the application to an API instance.
APIkit:
Know how APIkit generates flows and the importance of the main flow with the HTTP Listener.
Policy Enforcement:
Recognize that autodiscovery links the API to Anypoint Platform for governance.
Configuration:
Be aware that flowRef must match an existing flow name in the application.
Refer to the exhibit.
A Mule application's pom.xml configures the Maven Resources plugin to exclude parsing binary files in the project's src/main/resources/certs directory.
Which configuration of this plugin achieves a successful build?
A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Key Problem:
The Mule application needs to exclude binary files (e.g., .p12, .jks, .crt) in src/main/resources/certs from Maven resource filtering to prevent corruption during the build.
Correct Configuration (Option C):
The Maven Resources Plugin must:
Enable filtering for text resources (so placeholders such as ${env} are still resolved)
Exclude the binary files from filtering (for example via a nonFilteredFileExtensions list or an excludes entry) so they are copied without modification
Why Option C is Correct?
The configuration in Option C (from the exhibit):
Explicitly lists binary certificate formats to skip filtering.
Matches the files in src/main/resources/certs (.p12, .jks, etc.).
Why Other Options Fail?
Option A:
Missing the exclusion of the binary certificate files, so filtering would process and corrupt them.
Option B:
Incorrect file extensions (e.g., p1, pen).
Option D:
May exclude wrong files or lack critical extensions.
Key Maven Concept:
Filtering replaces placeholders (e.g., ${env}) in files—but must not process binaries (they get corrupted).
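Because the exhibit itself is not reproduced here, the pom.xml fragment below is only an approximation of a working configuration in the spirit of Option C; the listed extensions are assumptions based on the certs directory described above:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <configuration>
    <!-- Copy these binary files verbatim instead of filtering them -->
    <nonFilteredFileExtensions>
      <nonFilteredFileExtension>p12</nonFilteredFileExtension>
      <nonFilteredFileExtension>jks</nonFilteredFileExtension>
      <nonFilteredFileExtension>crt</nonFilteredFileExtension>
    </nonFilteredFileExtensions>
  </configuration>
</plugin>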
Which command is used to convert a JKS keystore to PKCS12?
A. keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS
B. keytool -importkeystore -srckeystore keystore.p12 -srcstoretype JKS -destkeystore keystore.p12 -deststoretype PKCS12
C. keytool -importkeystore -srckeystore keystore.jks -srcstoretype JKS -destkeystore keystore.p13 -deststoretype PKCS12
D. keytool -importkeystore -srckeystore keystore.jks -srcstoretype PKCS12 -destkeystore keystore.p12 -deststoretype JKS
Explanation:
The question asks for the correct keytool command to convert a JKS (Java KeyStore) to a PKCS12 keystore. JKS is a Java-specific format, while PKCS12 is a standard format for storing cryptographic keys and certificates. The keytool -importkeystore command is used, requiring parameters like -srckeystore (source file), -srcstoretype (source type), -destkeystore (destination file), and -deststoretype (destination type). The correct command must specify a JKS source and a PKCS12 destination.
✅ Correct Answer: Option B
Command: keytool -importkeystore -srckeystore keystore.p12 -srcstoretype JKS -destkeystore keystore.p12 -deststoretype PKCS12
➡️ Option B is correct because it specifies the source keystore as JKS (-srcstoretype JKS) and the destination as PKCS12 (-deststoretype PKCS12), aligning with the requirement to convert from JKS to PKCS12. Although the source file name keystore.p12 is unconventional for a JKS file (typically .jks), the -srcstoretype JKS parameter explicitly defines the source format as JKS, ensuring the command works. The destination file keystore.p12 matches the PKCS12 format, making this the correct command despite the naming ambiguity.
Incorrect Options:
Option A: keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS
➡️ Option A is incorrect because it converts a PKCS12 keystore to JKS, which is the opposite of the required conversion (JKS to PKCS12). The command specifies the source as keystore.p12 with -srcstoretype PKCS12, indicating a PKCS12 source, and the destination as keystore.jks with -deststoretype JKS, indicating a JKS output. This reverses the desired process, as the question explicitly asks for converting a JKS keystore to a PKCS12 keystore, making this command unsuitable for the task.
Option C: keytool -importkeystore -srckeystore keystore.jks -srcstoretype JKS -destkeystore keystore.p13 -deststoretype PKCS12
➡️ Option C is incorrect due to the non-standard destination file extension keystore.p13. While it correctly specifies the source as JKS (-srcstoretype JKS) and the destination as PKCS12 (-deststoretype PKCS12), the .p13 extension is not a recognized standard for PKCS12 files, which typically use .p12 or .pfx. This deviation could cause compatibility issues or errors in tools expecting standard PKCS12 extensions. A proper PKCS12 file extension is critical for correct recognition and usage, rendering this option invalid.
Option D: keytool -importkeystore -srckeystore keystore.jks -srcstoretype PKCS12 -destkeystore keystore.p12 -deststoretype JKS
➡️ Option D is incorrect because it specifies the source as PKCS12 (-srcstoretype PKCS12) and the destination as JKS (-deststoretype JKS), which is the reverse of the required JKS-to-PKCS12 conversion. Despite the source file name keystore.jks suggesting a JKS file, the -srcstoretype PKCS12 incorrectly defines it as PKCS12. This command would attempt to convert a PKCS12 keystore to JKS, failing to meet the question’s requirement to convert a JKS keystore to a PKCS12 keystore.
Reference:
Oracle Documentation on keytool: Java SE 8 keytool Documentation
General guide on keystore conversion: Baeldung - Convert JKS to PKCS12
PKCS12 and JKS format details: Java KeyStore API
A developer has created the first version of an API designed for business partners to work with commodity prices. What should the developer do to allow more than one major version of the same API to be exposed by the implementation?
A. In Design Center, open the RAML and modify each operation to include the major version number
B. In Anypoint Studio, generate scaffolding from the RAML, and then modify the
C. In Design Center, open the RAML and modify baseUri to include a variable that indicates the version number
D. In Anypoint Studio, generate scaffolding from the RAML, and then modify the flow names generated by APIkit to include a variable with the major version number
Explanation:
The question asks how a developer can expose multiple major versions of an API (designed for commodity prices) within the same implementation. The API is defined using RAML (RESTful API Modeling Language) in MuleSoft’s Design Center, and the implementation is likely managed in Anypoint Studio with APIkit. The correct approach must allow different major versions (e.g., v1, v2) to coexist. The solution involves modifying the API specification to support versioning, ideally in a scalable and standard way.
Correct Answer: Option C
✅ Option C: In Design Center, open the RAML and modify baseUri to include a variable that indicates the version number.
Option C is correct because modifying the baseUri in the RAML file to include a version variable (e.g., baseUri: http://api.example.com/{version}/commodities) allows multiple major versions of the API to be exposed. This approach embeds the version (e.g., v1, v2) in the URI, a common RESTful practice. APIkit in Anypoint Studio uses the baseUri to route requests to the appropriate versioned flows, enabling the implementation to handle multiple versions without duplicating the entire API specification or codebase.
Incorrect Options:
❌ Option A: In Design Center, open the RAML and modify each operation to include the major version number.
Option A is incorrect because modifying each operation in the RAML to include the major version number is impractical and non-standard. Adding version numbers to individual operations (e.g., endpoints or methods) violates RESTful principles, as versioning is typically handled at the API level via the URI or headers. This approach would require redundant changes across all endpoints, increase maintenance complexity, and make the API less intuitive for consumers, leading to a poorly designed API specification.
❌ Option B: In Anypoint Studio, generate scaffolding from the RAML, and then modify the
Option B is incorrect because modifying the
❌ Option D: In Anypoint Studio, generate scaffolding from the RAML, and then modify the flow names generated by APIkit to include a variable with the major version number.
Option D is incorrect because modifying flow names in Anypoint Studio to include a version variable does not effectively expose multiple API versions. Flow names are internal to the Mule application and do not influence the API’s external URI structure. This approach would not update the API’s routing or endpoint structure, failing to meet the requirement of exposing multiple versions to clients. Versioning should be handled in the RAML specification, not in implementation-specific flow names.
Reference:
MuleSoft Documentation on API Versioning: MuleSoft API Versioning Best Practices
RAML Specification: RAML 1.0 Specification
MuleSoft APIkit Overview: APIkit Documentation
A healthcare customer wants to use hospital system data, which includes code that was developed using legacy tools and methods. The customer has created reusable Java libraries in order to read the data from the system. What is the most effective way to develop an API that retrieves the data from the hospital system?
A. Refer to JAR files in the code
B. Include the libraries while deploying the code into the runtime
C. Create the Java code in your project and invoke the data from the code
D. Install libraries in a local repository and refer to it in the pom.xml file
Explanation:
When integrating legacy Java code (e.g., reusable JAR libraries) in a MuleSoft application, the best practice is to manage dependencies through Maven. MuleSoft uses Maven to manage dependencies, plugins, and build lifecycles.
Option D is correct because:
Installing the legacy JAR files into a local Maven repository (or a shared one like Nexus/Artifactory) and referencing them in the pom.xml file allows:
➝ Reusability across multiple projects
➝ Proper dependency management and versioning
➝ Seamless integration during build and deployment processes
This approach follows MuleSoft and Java ecosystem best practices.
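For example, after installing the JAR into the local repository (for instance with mvn install:install-file), it can be referenced from the pom.xml like any other dependency; the coordinates below are purely hypothetical:

<!-- Hypothetical coordinates for the customer's legacy hospital library -->
<dependency>
  <groupId>com.example.hospital</groupId>
  <artifactId>legacy-data-reader</artifactId>
  <version>1.0.0</version>
</dependency>

From there, the classes in the library can be invoked from the Mule application (for example through the Java module) without copying or rewriting the legacy code.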
❌ Let's review why the other options are incorrect:
A. Refer to JAR files in the code
Not scalable or maintainable.
Hardcoding or manually including JARs is not recommended in modern build systems.
B. Include the libraries while deploying the code into the runtime
This is not maintainable, especially for CI/CD environments.
Increases risk of version conflicts and harder debugging.
C. Create the Java code in your project and invoke the data from the code
Rewriting legacy code into your Mule project is not efficient or maintainable.
Ignores the reuse of already-built and tested components.
📘 Reference:
MuleSoft Docs - Maven Dependency Management
MuleSoft - How to Use Custom Java Classes and JARs
A custom policy needs to be developed to intercept all outbound HTTP requests made by Mule applications. Which XML element must be used to intercept outbound HTTP requests?
A. It is not possible to intercept outgoing HTTP requests, only inbound requests
B. http-policy:source
C. http-policy:operation
D. http-policy:processor
Explanation:
Correct Answer: C. http-policy:operation
This XML element is used to intercept outbound HTTP requests in a MuleSoft custom policy. When a Mule application acts as a client (making a request to an external system), the request passes through the operation scope. The http-policy:operation tag allows developers to apply logic (such as logging, header injection, authentication, or rate limiting) to outgoing HTTP calls. This scope is part of the MuleSoft API Gateway policy framework and is essential when enforcing security, audit, or transformation policies on outgoing traffic.
For example, if an API calls another service to fetch patient records from an external system, http-policy:operation is the place to define any logic that must run during that call.
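A minimal sketch of the relevant part of a custom policy template is shown below; the policy name and log message are illustrative, and the surrounding template boilerplate (namespaces, configuration properties) is omitted:

<http-policy:proxy name="outbound-audit-policy">
  <http-policy:operation>
    <!-- Runs before the outbound HTTP request leaves the application -->
    <logger level="INFO" message="Intercepted an outbound HTTP request"/>
    <!-- Continues with the actual outbound call -->
    <http-policy:execute-next/>
    <!-- Processors placed here would run after the response is received -->
  </http-policy:operation>
</http-policy:proxy>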
📘 Reference:
MuleSoft Docs – Custom Policy XML Elements
❌ Incorrect Answers:
A. It is not possible to intercept outgoing HTTP requests, only inbound requests
This is incorrect. MuleSoft provides the ability to intercept both inbound and outbound requests via custom policies. Intercepting outbound requests is done through the http-policy:operation scope. Therefore, this statement is false.
B. http-policy:source
This is used to intercept inbound requests — that is, when an API receives a request from a client. It's typically used for tasks such as authentication, request validation, and logging at the entry point. It cannot be used for outbound calls from the Mule app to another service.
D. http-policy:processor
This is not a valid element in MuleSoft’s policy XML configuration. It doesn't exist in the official schema and will result in an error if used. The valid scopes for HTTP policy elements are http-policy:source and http-policy:operation.
Summary:
Use http-policy:operation for outbound requests.
Use http-policy:source for inbound requests.
Only these two are valid scopes in custom HTTP policies.
A Mule API receives a JSON payload and updates the target system with the payload. The developer uses JSON schemas to ensure the data is valid. How can the data be validated before posting to the target system?
A. Use a DataWeave 2.0 transform operation, and at the top of the DataWeave script, add:
%dw 2.0
import json-module
B. Using a DataWeave if/else condition, test the values of the payload against the examples included in the schema
C. Apply the JSON Schema policy in API Manager and reference the correct schema in the policy configuration
D. Add the JSON module dependency and add the validate-schema operation in the flow, configured to reference the schema
Explanation:
To validate a JSON payload against a JSON schema before posting it to the target system in a Mule API, the most appropriate approach is to use the validate-schema operation provided by the JSON module in Mule. This operation is specifically designed to validate JSON payloads against a defined schema, ensuring the data is valid before further processing or posting to the target system.
Here’s why D is the correct choice:
➤ JSON Module Dependency: The JSON module in MuleSoft provides operations like validate-schema, which can be used to validate a JSON payload against a JSON schema. Adding this module as a dependency in the Mule project is a prerequisite.
➤ Validate-Schema Operation: This operation allows developers to reference a JSON schema (stored in the Mule project, typically in the src/main/resources folder) and validate the incoming payload against it. If the payload does not conform to the schema, an error is thrown, preventing invalid data from being sent to the target system.
➤ Configuration: The validate-schema operation is configured in the Mule flow to point to the specific JSON schema file, ensuring that validation is performed seamlessly within the flow.
Why not the other options?
A. Use a DataWeave 2.0 transform operation with %dw 2.0 import json-module:
This option is incorrect because DataWeave is primarily used for data transformation, not schema validation. While DataWeave can manipulate JSON data, it does not provide a built-in mechanism for JSON schema validation. Additionally, the syntax %dw 2.0 import json-module is incorrect and not a valid way to import a JSON schema validation module in DataWeave.
B. Using the DataWeave if-else condition to test payload values against schema examples:
This approach is not practical or recommended. Manually testing payload values using if-else conditions in DataWeave against examples in the schema is error-prone, inefficient, and does not leverage the full power of JSON schema validation. Schema validation should be done using a dedicated mechanism, not manual checks.
C. Apply the JSON Schema policy in API Manager:
While API Manager policies can enforce certain rules, the JSON Schema policy is typically applied at the API gateway level to validate incoming requests before they reach the Mule flow. However, the question focuses on validating the data before posting to the target system, which implies validation within the Mule flow after the payload has been received. Using a policy in API Manager would not address validation within the Mule application’s processing logic.
Correct Approach (D):
➞ Add the JSON module dependency to the Mule project (via the Mule Palette in Anypoint Studio or by updating the pom.xml file).
➞ Place the JSON schema file in the src/main/resources folder of the Mule project.
➞ Add the validate-schema operation to the Mule flow, configuring it to reference the schema file.
➞ If the payload is invalid, the operation will throw an error, which can be handled using Mule’s error-handling mechanisms.
➞ If the payload is valid, the flow can proceed to post the data to the target system.
This approach ensures robust, reusable, and accurate validation of the JSON payload against the schema before it is sent to the target system.
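As a rough sketch (flow, configuration, and schema names are illustrative, and the error type is assumed to be the one the JSON module raises when validation fails):

<flow name="update-target-system-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/commodities"/>
  <!-- Validate the incoming JSON payload against a schema stored in src/main/resources -->
  <json:validate-schema schema="schemas/commodity-schema.json"/>
  <!-- Only a valid payload reaches the outbound call to the target system -->
  <http:request config-ref="Target_System_config" method="POST" path="/records"/>
  <error-handler>
    <on-error-propagate type="JSON:SCHEMA_NOT_HONOURED">
      <set-payload value="#[output application/json --- { error: 'Payload failed schema validation' }]"/>
    </on-error-propagate>
  </error-handler>
</flow>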
A system API that communicates with an underlying MySQL database is being deployed to CloudHub. The DevOps team requires a readiness endpoint to monitor all system APIs. Which strategy should be used to implement this endpoint?
A. Create a dedicated endpoint that responds with the API status and reachability of the underlying systems
B. Create a dedicated endpoint that responds with the API status and health of the server
C. Use an existing resource endpoint of the API
D. Create a dedicated endpoint that responds with the API status only
Explanation:
When deploying a system API to CloudHub, the DevOps team requires a readiness endpoint to monitor the health and availability of the API and its dependencies. A readiness endpoint is typically used in cloud environments (like CloudHub) to indicate whether the application is ready to handle requests. For a system API that communicates with an underlying MySQL database, the readiness endpoint should not only confirm the API's operational status but also verify the reachability of the underlying systems (e.g., the MySQL database). This ensures that the API is fully functional and capable of processing requests.
Here’s why A is the correct choice:
➤ Dedicated Endpoint: A readiness endpoint should be a separate, dedicated endpoint (e.g., /health or /readiness) to provide a clear and standardized way for monitoring tools to check the API’s status. This aligns with best practices in microservices and cloud-native applications.
➤ API Status and Reachability: The endpoint should return information about the API’s operational status (e.g., "UP" or "DOWN") and the reachability of the underlying MySQL database (e.g., whether the database connection is active). This ensures that the DevOps team can confirm both the API and its dependencies are functioning correctly.
➤ CloudHub Monitoring: CloudHub uses readiness and liveness probes to monitor applications. A readiness endpoint that includes both API status and database reachability provides comprehensive monitoring, enabling CloudHub to determine if the application is ready to serve traffic.
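A minimal sketch of such a readiness endpoint follows; the flow name, path, and configuration references are illustrative, and the trivial query exists only to prove that the MySQL connection works:

<flow name="readiness-check-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/health/readiness"/>
  <!-- A lightweight query confirms the underlying MySQL database is reachable -->
  <db:select config-ref="Database_Config">
    <db:sql>SELECT 1</db:sql>
  </db:select>
  <set-payload value="#[output application/json --- { status: 'UP', database: 'REACHABLE' }]"/>
  <error-handler>
    <!-- Propagating the error lets the caller receive a non-200 response when the database is down -->
    <on-error-propagate type="DB:CONNECTIVITY">
      <set-payload value="#[output application/json --- { status: 'DOWN', database: 'UNREACHABLE' }]"/>
    </on-error-propagate>
  </error-handler>
</flow>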
❌ Why not the other options?
B. Create a dedicated endpoint that responds with the API status and health of the server:
While checking the health of the server (e.g., CPU, memory, or disk usage) is useful for liveness probes, it does not fully address the readiness requirement. Readiness endpoints focus on whether the application and its dependencies (e.g., the MySQL database) are ready to process requests. Server health alone does not confirm database connectivity, which is critical for a system API.
C. Use an existing resource endpoint of the API:
Using an existing resource endpoint (e.g., /users or /orders) is not a good practice for readiness checks. Resource endpoints are designed for business logic and may require specific inputs, authentication, or database queries, which could add unnecessary complexity or fail for reasons unrelated to readiness. A dedicated endpoint is preferred for monitoring purposes.
D. Create a dedicated endpoint that responds with the API status only:
While a dedicated endpoint is appropriate, reporting only the API status (e.g., "API is running") does not provide enough information for a readiness check. The endpoint must also verify the reachability of the underlying MySQL database to ensure the API can process requests successfully.
Reference:
MuleSoft Documentation: CloudHub Health Check Endpoints – Explains how CloudHub uses health check endpoints (liveness and readiness probes) to monitor applications.
MuleSoft Best Practices: API Monitoring Best Practices – Discusses the importance of dedicated health endpoints for API monitoring.
Kubernetes Readiness Probes (relevant for CloudHub, which aligns with cloud-native practices): Kubernetes Documentation on Readiness Probes – Provides context on readiness probes, which CloudHub adapts for Mule applications.
Refer to the exhibit.
The flow is named "implementation", and the exhibit shows the code for the MUnit test case.
When the MUnit test case is executed, what is the expected result?
A. The test case fails with an assertion error
B. The test throws an error and does not start
C. The test case fails with an unexpected error type
D. The test case passes
Explanation:
Since the question refers to an exhibit that contains the Mule flow named "implementation" and the code for an MUnit test case, but the exhibit is not provided, this explanation is based on typical MUnit test case scenarios in MuleSoft and the provided answer choices. MUnit is MuleSoft's testing framework used to test Mule flows, and assertion errors typically occur when an assertion in the test case (e.g., using the assert-that or assert-equals operation) fails to validate the expected outcome.
Here’s a reasoned analysis of why A is the most likely answer:
MUnit Test Case Execution: An MUnit test case typically includes a Mule flow (in this case, named "implementation") and assertions to verify the flow’s behavior. The test case will execute the flow and compare the actual output (e.g., payload, attributes, or variables) against expected values defined in the test.
Assertion Error: An assertion error occurs when the actual output of the flow does not match the expected output defined in the MUnit test case. For example, if the assert-equals operation checks that the payload is "expectedValue" but the flow produces "actualValue", the test will fail with an assertion error.
Common Scenario: In MuleSoft MUnit tests, assertion errors are common when:
➜ The flow logic produces an unexpected payload, variable, or attribute.
➜ The test case is configured with incorrect expected values.
➜ The flow has a bug that causes it to deviate from the expected behavior.
Given the answer choices, A. The test case fails with an assertion error is the most specific and aligns with a typical MUnit failure caused by a mismatch between expected and actual results.
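Because the exhibit is not reproduced here, the test below is only an illustrative shape (names and the expected value are made up); with a structure like this, the test fails with an assertion error whenever the payload produced by the implementation flow does not equal the expected value:

<munit:test name="implementation-test" description="Asserts the payload produced by the implementation flow">
  <munit:execution>
    <!-- Runs the flow under test -->
    <flow-ref name="implementation"/>
  </munit:execution>
  <munit:validation>
    <!-- Fails with an assertion error if the payload does not match -->
    <munit-tools:assert-that expression="#[payload]" is="#[MunitTools::equalTo('expectedValue')]"/>
  </munit:validation>
</munit:test>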
Why not the other options?
B. The test throws an error and does not start:
This is unlikely because MUnit tests are designed to start unless there is a severe configuration issue (e.g., invalid XML, missing dependencies, or a syntax error in the test case). If the test case is well-formed, it will start and execute the flow, even if it fails due to an assertion. This option suggests a pre-execution failure, which is less common in MUnit.
C. The test case fails with an unexpected error type:
This option is vague and less likely. An "unexpected error type" could imply a runtime exception (e.g., a NullPointerException or a database connection error) in the flow or test case. However, MUnit tests are typically designed to handle expected error scenarios, and assertion errors are the standard failure mode for validation issues. Without specific evidence of an unexpected error in the flow or test, this is not the best choice.
D. The test case passes:
If the test case passes, it means the flow’s output matches all assertions in the MUnit test. However, since the question implies a failure (by offering multiple failure-related options), and without the exhibit confirming a perfect match between expected and actual results, this is unlikely.
Reference:
➜ MuleSoft MUnit Documentation: MUnit Testing Framework – Explains how MUnit tests Mule flows and uses assertions to validate outcomes.
➜ MUnit Assertions: MUnit Assertions Documentation – Describes assertion operations like assert-equals and assert-that, which throw assertion errors when validation fails.
➜ MuleSoft Best Practices: Testing Mule Applications – Discusses common testing scenarios and failure modes in MUnit.