Total 61 Questions
Last Updated On : 11-Sep-2025 - Spring 25 release
Preparing with the Marketing-Cloud-Intelligence practice test is essential to ensure success on the exam. This Salesforce SP25 test lets you familiarize yourself with the Marketing-Cloud-Intelligence exam question format and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the Salesforce Spring 2025 release certification exam on your first attempt. Surveys from different platforms and user-reported pass rates suggest that practice exam users are roughly 30-40% more likely to pass.
After uploading a standard file into Marketing Cloud Intelligence via Total Connect, you noticed that the number of rows uploaded (to the specific data stream) is NOT equal to the number of rows present in the source file. What are two reasons that could cause this gap?
A. All mapped Measurements for a given row have values equal to zero
B. Main entity is not mapped
C. The source file does not contain the mediaBuy entity
D. The file does not contain any measurements (dimension only)
Explanation:
When uploading a standard file into Salesforce Marketing Cloud Intelligence via Total Connect, discrepancies between the number of rows in the source file and those uploaded to a specific data stream can occur due to data processing rules.
A. All mapped Measurements for a given row have values equal to zero: This is correct. Marketing Cloud Intelligence may exclude rows where all mapped measurement values (e.g., Impressions, Revenue) are zero, as these are often considered invalid or incomplete data points during the upload process, leading to a reduced row count.
B. Main entity is not mapped: This is correct. The main entity (e.g., Campaign Key or Media Buy Key) is essential for structuring the data stream. If it is not mapped correctly or is missing, the system may reject or filter out rows, causing a gap between the source file and uploaded rows.
C. The source file does not contain the mediaBuy entity: This is incorrect. The absence of a Media Buy entity might affect specific analyses but does not inherently cause rows to be excluded during upload, as long as other required entities or mappings are present.
D. The file does not contain any measurements (dimension only): This is incorrect. A file with only dimensions (no measurements) can still be uploaded if properly mapped, though it may not contribute to measurement-based insights; it wouldn’t necessarily reduce row count unless other validation rules fail.
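The two filtering conditions in answers A and B can be illustrated with a small pandas sketch. This is only a conceptual illustration of the validation behavior described above, not Marketing Cloud Intelligence's actual ingestion code, and the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical miniature "standard file" about to be ingested.
rows = pd.DataFrame({
    "Campaign Key": ["C1", "C2", None, "C4"],   # main entity key
    "Impressions":  [100,    0,    50,    0],
    "Clicks":       [  5,    0,     2,    0],
})

# Rule 1: rows whose main entity key is missing/unmapped are not loaded.
has_key = rows["Campaign Key"].notna()

# Rule 2: rows where every mapped measurement equals zero are not loaded.
measurements = ["Impressions", "Clicks"]
has_nonzero_measurement = rows[measurements].ne(0).any(axis=1)

ingested = rows[has_key & has_nonzero_measurement]
print(f"{len(rows)} source rows -> {len(ingested)} ingested rows")  # 4 source rows -> 1 ingested rows
```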
Reference:
This aligns with Marketing Cloud Intelligence’s data ingestion and validation rules, as described in the platform’s documentation for file uploads via Total Connect.
A client ingested the following file into Marketing Cloud Intelligence:
The mapping of the above file can be seen below:
Date —> Day
Media Buy Key —> Media Buy Key
Campaign Name —> Campaign Name
Campaign Group —> Campaign Custom Attribute 01
Clicks —> Clicks
Media Cost —> Media Cost
Campaign Planned Clicks —> Delivery Custom Metric 01
The client would like to have a "Campaign Planned Clicks" measurement.
This measurement should return the "Campaign Planned Clicks" value per Campaign, for
example:
For Campaign Name "Campaign AAA", the "Campaign Planned Clicks" should be 2000, rather than 6000 (the sum across its Media Buy Keys).
In order to create this measurement, the client considered multiple approaches. Please review the different approaches and answer the following question:
Which two options will yield a false result?
A. Option 2
B. Option 5
C. Option 3
D. Option 4
E. Option 1
Explanation:
The client wants the "Campaign Planned Clicks" measurement to return the value per Campaign, not aggregated across multiple Media Buy Keys. For example:
Campaign "AAA" has a planned value of 2000, regardless of how many Media Buy Keys it contains.
Currently, the raw data has multiple rows per campaign (e.g., "Campaign AAA" appears 3 times with the same planned value of 2000).
Let's evaluate each option:
Option 1: Change Aggregation Function to SUM
This would sum the planned clicks across all Media Buy Keys. For "Campaign AAA", it would return 2000 + 2000 + 2000 = 6000, which is incorrect (should be 2000).
Option 2: Change Aggregation Function to AVG
This would average the planned clicks across Media Buy Keys. For "Campaign AAA", it would return (2000 + 2000 + 2000) / 3 = 2000, which accidentally gives the right number in this case because all values are identical.
However, this is unreliable. If a campaign had different planned values for different Media Buys (e.g., 1000 and 3000), the average would be 2000, which might not be the intended per-campaign value. The client explicitly wants the value "per Campaign", not the average. Thus, this method is flawed and yields a false result in general.
Option 3: MAX at Media Buy Key Granularity
This calculates the maximum planned clicks per Media Buy Key. Since each Media Buy Key has only one value (e.g., each row has 2000 for "Campaign AAA"), this returns the same value. When rolled up to Campaign level, it will correctly show 2000 for "Campaign AAA". This works.
Option 4: MIN at Media Buy Key Granularity
Similar to Option 3, since each Media Buy Key has the same value for a given campaign, the min is also 2000. When rolled up to Campaign, it remains 2000. This also works.
Option 5: AVG at Campaign Key Granularity
This averages the planned clicks per Campaign Key. Since all Media Buy Keys under the same campaign have the same value (2000), the average is 2000. This returns the correct result.
Why Options 1 and 2 are False:
Option 1 (SUM) clearly gives 6000 for "Campaign AAA", which is wrong.
Option 2 (AVG) seems correct only by coincidence because all values are identical. If the planned values were not uniform (e.g., if a campaign had values 1000, 2000, 3000), the average would be 2000, but the true "per campaign" value might be defined as 3000 or 1000 in the business logic. Since the client wants the value "per Campaign" (which is 2000 in this case), using AVG is not robust and is considered a false approach.
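A short pandas sketch of the aggregation behavior discussed above; the rows are a hypothetical reconstruction of the sample data, and the real options are configured through the platform's measurement aggregation settings rather than code:

```python
import pandas as pd

# Three Media Buy Keys under one campaign, each repeating the
# campaign-level planned value of 2000.
df = pd.DataFrame({
    "Campaign Name":           ["Campaign AAA"] * 3,
    "Media Buy Key":           ["MB1", "MB2", "MB3"],
    "Campaign Planned Clicks": [2000, 2000, 2000],
})

planned = df.groupby("Campaign Name")["Campaign Planned Clicks"]
print(planned.sum().iloc[0])   # 6000 -> Option 1 (SUM) over-counts
print(planned.mean().iloc[0])  # 2000 -> Option 2 (AVG) is right only because the values are identical
print(planned.max().iloc[0])   # 2000 -> MAX/MIN return the repeated campaign-level value
```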
Conclusion:
Options 1 and 2 are incorrect because they do not reliably return the intended per-campaign value. Thus, the two options that yield a false result are A. Option 2 and E. Option 1.
Your client would like to create a new harmonization field - Exam Topic. The below table represents the harmonization logic from each source.
The client suggested to create, without any mapping manipulations, several patterns via the harmonization center that will generate two Harmonized Dimensions:
Exam ID
Exam Topic
Given the above information, which statement is correct regarding the ability to implement this request with the above suggestion?
A. The above Patterns setup will not work for this use case.
B. The solution will work - the client will be able to view Exam Topic with Email Sends.
C. Only if 5 different Patterns are created, from 5 different fields - the solution will work.
D. The Harmonized field for Exam ID is redundant. One Harmonized dimension for Exam Topic is enough for a sustainable and working solution
Explanation:
Why:
Harmonization patterns can extract a value from a field and map it to a single harmonized dimension per stream, but they don’t create relationships between two harmonized dimensions (e.g., Exam ID → Exam Topic) and they don’t propagate a value across streams that don’t contain that field.
Here, Source B (Messaging) has Exam ID only; it has no Exam Topic field to extract. If you only create patterns for Exam ID and Exam Topic, Source B will still lack Exam Topic, so you cannot slice Email Sends by Exam Topic as required.
What would work instead (conceptually):
Create a small parent lookup (or a parent stream) with Exam ID → Exam Topic and set up a Parent-Child connection on Exam ID so Exam Topic is inherited by all streams, including Source B; then you can report Cost, Email Sends, and Video Views by Exam Topic.
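Conceptually, the Parent-Child connection behaves like a lookup join on Exam ID. The sketch below uses hypothetical IDs, topics, and send counts purely to illustrate the inheritance; in the platform this is configured as a parent data stream, not code:

```python
import pandas as pd

# Hypothetical parent lookup: one row per Exam ID with its Exam Topic.
parent = pd.DataFrame({
    "Exam ID":    ["E-100", "E-200"],
    "Exam Topic": ["Data Ingestion", "Harmonization"],
})

# Source B (Messaging) carries Exam ID but has no Exam Topic field.
source_b = pd.DataFrame({
    "Exam ID":     ["E-100", "E-200", "E-100"],
    "Email Sends": [500, 300, 150],
})

# The Parent-Child connection effectively performs this left join,
# so every child stream inherits Exam Topic from the parent.
enriched = source_b.merge(parent, on="Exam ID", how="left")
print(enriched.groupby("Exam Topic")["Email Sends"].sum())
```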
A technical architect is provided with the logic and Opportunity file shown below:
The opportunity status logic is as follows:
For the opportunity stages “Interest”, “Confirmed Interest” and “Registered”, the status should be “Open”.
For the opportunity stage “Closed”, the opportunity status should be “Closed”. Otherwise, return null for the opportunity status.
Given the above file and logic and assuming that the file is mapped in a GENERIC data stream type with the following mapping:
“Day” —> Standard “Day” field
“Opportunity Key” —> Main Generic Entity Key
“Opportunity Stage” —> Generic Entity Key 2
“Opportunity Count” —> Generic Custom Metric
A pivot table was created to present the count of opportunities in each stage. The pivot table is filtered on January (entire month). What is the number of opportunities in the Interest stage?
A. 1
B. 3
C. 2
D. 0
Explanation:
Counting Opportunities in the Interest Stage for January
To determine the number of opportunities in the "Interest" stage within January, we need to filter the provided "Opportunity File" based on two criteria: the "Day" falling within January and the "Opportunity Stage" being "Interest".
Step 1: Filter by Month (January)
We examine the "Day" column in the "Opportunity File" and identify all entries that occurred in January:
06-Jan (123AA01, 123AA02, 123AA03)
08-Jan (123AA01)
09-Jan (123AA02)
10-Jan (123AA01, 123AA02)
14-Jan (123AA02, 123AA01)
All listed dates are indeed within January.
Step 2: Filter by Opportunity Stage ("Interest")
Next, from the January entries identified in Step 1, we filter further to include only those with an "Opportunity Stage" of "Interest":
06-Jan 123AA01 - Interest
06-Jan 123AA02 - Interest
06-Jan 123AA03 - Interest
The subsequent entries for January (08-Jan, 09-Jan, 10-Jan, and 14-Jan) have "Opportunity Stage" values other than "Interest" (e.g., "Confirmed Interest", "Registered", "Rejected", "Closed").
Step 3: Count the Filtered Opportunities
Finally, we count the number of opportunities that satisfy both conditions (January and "Interest" stage). Based on Step 2, there are three such opportunities: 123AA01, 123AA02, and 123AA03, all on 06-Jan.
Answer:
The number of opportunities in the Interest stage in January is (B) 3.
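To verify the count mechanically, here is a pandas sketch over a partial reconstruction of the file; the year is a placeholder, and the later rows are included only to show that they fall outside the "Interest" stage:

```python
import pandas as pd

df = pd.DataFrame({
    "Day": pd.to_datetime(["2025-01-06", "2025-01-06", "2025-01-06",
                           "2025-01-08", "2025-01-10", "2025-01-14"]),
    "Opportunity Key":   ["123AA01", "123AA02", "123AA03",
                          "123AA01", "123AA01", "123AA01"],
    "Opportunity Stage": ["Interest", "Interest", "Interest",
                          "Confirmed Interest", "Registered", "Closed"],
    "Opportunity Count": [1, 1, 1, 1, 1, 1],
})

# Filter to January, then pivot on stage and sum the opportunity count.
january = df[df["Day"].dt.month == 1]
counts = january.pivot_table(index="Opportunity Stage",
                             values="Opportunity Count", aggfunc="sum")
print(counts.loc["Interest", "Opportunity Count"])  # 3
```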
What are two potential reasons for performance issues (when loading a dashboard) when using the CRM data stream type?
A. When a data stream type 'CRM - Leads' is created, another complementary 'CRM - Opportunity' is created automatically.
B. Pacing - daily rows are being created for every lead and opportunity keys
C. No mappable measurements - all measurements are calculated
D. The data is stored at the workspace level.
Explanation:
Pacing (B)
The Salesforce CRM connector, by default, is a pacing-type connector. This means that instead of ingesting a single record for a lead or opportunity and updating it, it creates a new row for the same record every day to capture changes. This can lead to a massive number of rows being stored in the database over time, significantly slowing down dashboard loading times as the system has to process a huge volume of data.
No Mappable Measurements (C)
When you have no direct, mappable measurements (e.g., Number of Leads, Revenue) and instead rely on calculated measurements, you can run into performance issues. Calculated measurements are computed on the fly, which can be resource-intensive, especially when applied across a large dataset. If all your key performance indicators (KPIs) are based on these calculated fields, it can put a heavy strain on the system every time a dashboard is loaded, leading to slow performance.
Why the other options are incorrect:
A. When a data stream type 'CRM - Leads' is created, another complementary 'CRM - Opportunity' is created automatically. This is not a standard feature of the CRM connector and would not cause performance issues. You must manually create each data stream type.
D. The data is stored at the workspace level. This statement is fundamentally incorrect. All data ingested into Marketing Cloud Intelligence is stored at the data stream level and then harmonized at the workspace level for reporting. The storage location is not the direct cause of performance issues; it's the sheer volume and the way the data is being stored (pacing) that creates the problem.
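A back-of-the-envelope illustration of the pacing effect described in option B (the record counts and retention window are purely hypothetical):

```python
leads, opportunities = 50_000, 10_000
days_retained = 365

snapshot_rows = leads + opportunities                    # one row per record
pacing_rows = (leads + opportunities) * days_retained    # one row per record per day

print(f"{snapshot_rows:,} rows vs {pacing_rows:,} rows")  # 60,000 rows vs 21,900,000 rows
```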
A client has provided you with sample files of their data from the following data sources:
1. Google Analytics
2. Salesforce Marketing Cloud
The link between these sources is on the following two fields:
Message Send Key
A portion of: web_site_source_key
Below is the logic the client would like to have implemented in Datorama:
For ‘web site medium’ values containing the word “email” (in all of its forms), the section after the “_” delimiter in ‘web_site_source_key’ is a 4 digit number, which matches the 'Message Send Key’ values from the Salesforce Marketing Cloud file. Possible examples of this can be seen in the following table:
Google Analytics:
In order to achieve this, what steps should be taken?
A. Within both files, map the desired value to Custom Classification Key as follows: Salesforce Marketing Cloud: map the entire Message Send Key to Custom Classification Key. Google Analytics: map the extraction logic to Custom Classification Key.
B. Create a Web Analytics Site custom attribute and populate it with the extraction logic. Create a Data Fusion between the newly created attribute and the Message Send Key.
C. Upload the two files and create a Parent-Child relationship between them. The Override Media Buy Hierarchy checkbox is checked in Google Analytics.
D. Create a Web Analytics Site Source custom attribute and populate it with the extraction logic. Create a Data Fusion between the newly created attribute and the Message Send Key.
Explanation:
The client’s objective is to link data from Google Analytics and Salesforce Marketing Cloud based on 'Message Send Key' and a portion of 'web_site_source_key' (specifically the 4-digit number after the "_" delimiter when 'web site medium' contains "email"). The goal is to visualize mutual key values alongside measurements from both files in a table.
A. Within both files, map the desired value to Custom Classification Key as follows: Salesforce Marketing Cloud: map entire Message Send Key to Custom Classification Key. Google Analytics: map the extraction logic to Custom Classification Key: This is correct. Custom Classification allows mapping and harmonizing data across sources. For Salesforce Marketing Cloud, mapping the entire 'Message Send Key' (e.g., 6783) to a Custom Classification Key ensures it’s available as a harmonized field. For Google Analytics, the extraction logic (extracting the 4-digit number after "_" from 'web_site_source_key' when 'web site medium' contains "email") can be applied to map to the same Custom Classification Key, establishing the link. This approach aligns the data for visualization.
B. Create a Web Analytics Site custom attribute and populate it with the extraction logic. Create a Data Fusion between the newly created attribute and the Message Send Key: This is incorrect. While a custom attribute could hold the extraction logic, Data Fusion is used for merging datasets, not for establishing a key-based link. This wouldn’t directly achieve the table visualization requirement.
C. Upload the two files and create a Parent-Child relationship between them. The Override Media Buy Hierarchy checkbox is checked in Google Analytics: This is incorrect. A Parent-Child relationship with hierarchy override is used for hierarchical data integration (e.g., Media Buy structures), not for linking based on a mutual key like 'Message Send Key'. This approach doesn’t address the extraction logic or table visualization.
D. Create a Web Analytics Site Source custom attribute and populate it with the extraction logic. Create a Data Fusion between the newly created attribute and the Message Send Key: This is incorrect. Similar to B, using a Web Analytics Site Source custom attribute for extraction is possible, but Data Fusion is not the appropriate method for linking keys; it’s meant for combining datasets, not aligning specific fields for visualization.
Option A leverages Custom Classification to harmonize the keys effectively, meeting the client’s requirement to link and visualize the data.
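The Google Analytics side of the extraction logic can be expressed as follows. This Python sketch only illustrates the rule; in the platform it would be written as a mapping formula (e.g. with EXTRACT-style functions), and the sample values other than 6783 are hypothetical:

```python
import re

def extract_message_send_key(web_site_source_key: str, web_site_medium: str):
    """Return the 4-digit key after the '_' delimiter when the medium contains 'email'."""
    if "email" in web_site_medium.lower():
        match = re.search(r"_(\d{4})$", web_site_source_key)
        if match:
            return match.group(1)
    return None

print(extract_message_send_key("newsletter_6783", "Email"))        # '6783'
print(extract_message_send_key("spring_sale", "organic search"))   # None
```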
Reference:
This aligns with Salesforce Marketing Cloud Intelligence’s harmonization and classification features, as described in the platform’s documentation.
An implementation engineer has been asked to perform a QA for a newly created harmonization field, Color, implemented by a client.
The source file that was ingested can be seen below:
A. A Harmonized dimension was created via a pattern over the Creative Name.
B. A calculated dimension was created with the formula: EXTRACT([Creative_Name], #1)
C. An EXTRACT formula (for Color) was written and mapped to a Media Buy custom attribute.
D. An EXTRACT formula (for Color) was written and mapped to a Creative custom attribute.
Explanation:
Let's analyze the QA pivot table and the options:
The QA Pivot Table Result: The pivot table shows In View Impressions aggregated by Media Buy Key, Media Buy Name, and the new Color field. Importantly, for Media Buy Key = MBK1, the two original rows (for "Creative#Red" and "Creative#Green") have been rolled up into a single row for each color: "Red" with 25 impressions and "Green" with 20 impressions.
However, the final pivot table shown only displays "Red" for MBK1, which is inconsistent with the source data that has two creatives under MBK1. This might be a filtering or display issue in the example, but the key point is that Color is acting as a dimension that can be used alongside Media Buy Key.
Why D is Correct: The Creative Name is a field that exists at the Creative entity level. To create a new attribute from this field (like extracting the color), the logical place to map it is to a Creative Custom Attribute. This creates a new dimension (Color) that is part of the Creative hierarchy. This allows the new dimension to be used for slicing and dicing metrics like In View Impressions at the Creative level, and it will roll up correctly to higher levels like Media Buy, as seen in the pivot table.
Why the Other Options are Incorrect:
A. A Harmonized dimension was created via a pattern over the Creative Name. Harmonized dimensions are typically used to unify values from the same field across multiple data sources (e.g., unifying "Campaign Name" from Facebook and Google). In this case, the client is creating a new field from an existing one within a single data source, which is not the primary purpose of harmonization.
B. A calculated dimension was created with the formula: EXTRACT([Creative_Name], #1). A Calculated Dimension is created in the reporting layer, not during data mapping. If this were a Calculated Dimension, it would not be available as a discrete dimension to map in the data stream setup for QA in the way described. Calculated Dimensions are built from existing mapped fields after data ingestion.
C. An EXTRACT formula (for Color) was written and mapped to a Media Buy custom attribute. This is incorrect because the source of the data is the Creative Name field, which belongs to the Creative entity, not the Media Buy entity. Mapping a Creative-level field to a Media Buy-level attribute would cause data duplication and incorrect results. For example, a single Media Buy (MBK1) has multiple creatives ("Red" and "Green"), so mapping the extracted color to a Media Buy attribute would force one value to overwrite the other or create a conflict, preventing the correct breakdown seen in the QA table.
Conclusion:
The only method that correctly creates a new attribute from the Creative Name field at the proper entity level (Creative) is to use an EXTRACT transformation and map it to a Creative Custom Attribute.
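The string logic behind such an EXTRACT mapping is simply "take the token after the '#' delimiter". A minimal sketch, assuming creative names follow the Creative#Color pattern seen in the QA data:

```python
# Conceptual equivalent of the EXTRACT formula mapped to the
# Creative custom attribute "Color".
creative_names = ["Creative#Red", "Creative#Green"]

colors = [name.split("#", 1)[1] for name in creative_names]
print(colors)  # ['Red', 'Green']
```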
Your client provided the following sources:
Source 1:
As can be seen, the Product values present in Sources 2 and 3 are similar and can be linked with the first extraction from ‘Media Buy Name’ in Source 1.
The end goal is to achieve a final view of Product Group alongside Clicks and Sign Ups, as described below:
Which two options will meet the client’s requirement and enable the desired view?
A. Custom Classification:
Source 1: Custom Classification key will be populated with the extraction of the Media Buy Name.
Source 2: ‘Product’ will be mapped to Custom Classification key and ‘Product Group’ to a Custom Classification level.
Source 3: ‘Product’ will be mapped to Custom Classification key.
B. Overarching Entities:
Source 1: custom classification key will be populated with the extraction of the Media Buy Name.
Source 2: ‘Product’ will be mapped to Product field and ‘Product Group’ to Product Name.
Source 3: ‘Product’ will be mapped to Product field.
C. Parent Child:
All sources will be uploaded to the same data stream type - Ads. The setup is the following:
Source 1: Media Buy Key —> Media Buy Key, extracted product value —> Media Buy Attribute.
Source 2: Product —> Media Buy Key, Product Group —> Media Buy Attribute.
Source 3: Product —> Media Buy Key.
D. Harmonization Center:
Patterns from sources 1 and 3 generate the harmonized dimension ‘Product’. A Data Classification rule, using source 2, is applied on top of the harmonized dimension.
Explanation:
The client’s goal is to link Clicks (from Source 1) and Sign Ups (from Source 3) to Product Groups (from Source 2) using the Product name as the common key. The challenge is that Product is embedded in the Media Buy Name in Source 1 and appears as a standalone field in Sources 2 and 3.
✅ A. Custom Classification
This approach works because:
You extract Product from Media Buy Name in Source 1 and populate a Custom Classification key.
In Source 2, you map Product to the same key and Product Group to a Custom Classification level.
In Source 3, Product is also mapped to the same key.
This allows you to link all three sources via the classification and aggregate metrics like Clicks and Sign Ups by Product Group.
✅ D. Harmonization Center
This method is also valid and scalable:
You create patterns in Sources 1 and 3 to extract the Product into a Harmonized Dimension.
Then, you apply a Data Classification rule using Source 2 to map each Product to its Product Group.
This enables you to slice and visualize Clicks and Sign Ups by Product Group in dashboards.
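A hypothetical illustration of approach D: the product names, groups, and metric values below are invented, and in the platform the join is achieved through harmonization patterns and a Data Classification rule rather than code.

```python
import pandas as pd

source1 = pd.DataFrame({"Product": ["Shoes", "Hats"], "Clicks":   [120, 80]})
source3 = pd.DataFrame({"Product": ["Shoes", "Hats"], "Sign Ups": [12, 5]})
source2 = pd.DataFrame({"Product": ["Shoes", "Hats"],
                        "Product Group": ["Apparel", "Accessories"]})

# The harmonized "Product" dimension links sources 1 and 3; the
# classification rule attaches Product Group from source 2.
combined = (source1.merge(source3, on="Product", how="outer")
                   .merge(source2, on="Product", how="left"))
print(combined.groupby("Product Group")[["Clicks", "Sign Ups"]].sum())
```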
❌ Why the Others Don’t Work:
B. Overarching Entities
Misuses entity mapping — mapping Product Group to Product Name is structurally incorrect and breaks the data model.
C. Parent Child
Tries to force all sources into the Ads stream type and uses Media Buy Key as a linking key, which doesn’t exist in Sources 2 and 3 — this breaks the relationship logic.
📘 Reference:
You can explore these harmonization strategies in the Meet the Harmonization Center Trailhead unit and the Salesforce Help documentation on Custom Classifications.
A client created a new KPI: CPS (Cost per Sign-up).
The new KPI is mapped within the data stream mapping and is populated with the following logic: (Media Cost) / (Sign-ups)
As can be seen in the table below, CPS was created twice and was set with two different aggregations:
A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
Why:
From the rows:
Media Buy 35462: $2.00 / 11 ≈ $0.18
Media Buy 33311: $1.00 / 4 = $0.25
Totals: Media Cost $3.00 and Sign-ups 15.
CPS #1 total shows $0.20, which equals $3.00 / 15. That means its aggregation recomputes the formula on the aggregated components → AUTO.
CPS #2 total shows $0.43, which equals $0.18 + $0.25 → that’s a SUM of the row values.
(If CPS #1 were AVG, the total would be ≈ $0.22, not $0.20.)
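The two totals can be reproduced with a few lines of arithmetic over the row values quoted above:

```python
rows = [
    {"media_buy": "35462", "media_cost": 2.00, "sign_ups": 11},
    {"media_buy": "33311", "media_cost": 1.00, "sign_ups": 4},
]

# AUTO: re-apply the formula to the aggregated components.
cps_auto = sum(r["media_cost"] for r in rows) / sum(r["sign_ups"] for r in rows)  # 3.00 / 15

# SUM: add up the row-level CPS values.
cps_sum = sum(r["media_cost"] / r["sign_ups"] for r in rows)  # ~0.18 + 0.25

print(round(cps_auto, 2), round(cps_sum, 2))  # 0.2 0.43
```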
A technical architect is provided with the logic and Opportunity file shown below: The opportunity status logic is as follows:
For the opportunity stages “Interest”, “Confirmed Interest” and “Registered”, the status should be “Open”.
For the opportunity stage “Closed”, the opportunity status should be closed.
Otherwise, return null for the opportunity status.
Given the above file and logic and assuming that the file is mapped in a GENERIC data stream type with the following mapping:
“Day” —> Standard “Day” field
“Opportunity Key” —> Main Generic Entity Key
“Opportunity Stage” —> Generic Entity Key 2
A pivot table was created to present the count of opportunities in each stage. The pivot table is filtered on Jan 7th-11th. Which option reflects the stage(s) the opportunity key 123AA01 is associated with?
A. Interest & Registered
B. Confirmed Interest
C. Interest
D. Confirmed Interest & Registered
Explanation:
Understand the Filter:
The pivot table is filtered to show only records from January 7th to January 11th. Any records outside this date range are excluded.
Identify Records for Opportunity Key 123AA01:
06-Jan: Interest - Excluded (before the filter range)
08-Jan: Confirmed Interest - Included (within Jan 7-11)
10-Jan: Registered - Included (within Jan 7-11)
14-Jan: Closed - Excluded (after the filter range)
Result in the Pivot Table:
The pivot table counts the number of opportunities in each stage. For key 123AA01, during the filtered period:
It appears as "Confirmed Interest" on January 8th.
It appears as "Registered" on January 10th.
Therefore, the opportunity key 123AA01 is associated with both "Confirmed Interest" and "Registered" stages within the filtered dates.
Why the Other Options are Incorrect:
A. Interest & Registered:
"Interest" occurred on January 6th, which is outside the filter range.
B. Confirmed Interest:
This is incomplete, as it misses the "Registered" stage on January 10th.
C. Interest:
"Interest" is outside the filter range and is not included.