Total 100 Questions
Last Updated On: 26-Nov-2025
A customer plans to do an in-place upgrade of their single node Tableau Server from 2023.1 to the most recent version.
What is the correct sequence to prepare for an in-place upgrade?
A. * In the production environment:
* Disable scheduled tasks.
* Uninstall Tableau Server 2023.1.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.
B. * In the production environment:
* Disable scheduled tasks.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.
C. * In a non-production environment:
* Install the most recent version of Tableau Server.
* Back up the existing production environment.
* Restore settings and backup into the non-production environment.
* Confirm everything works as expected and test new features.
* Redirect user traffic from the production environment to the non-production environment.
D. * In a non-production environment:
* Clone a copy of existing production environment to create a VM snapshot.
* Restore the VM snapshot into the non-production environment.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.
* Redirect user traffic from the production environment to the non-production environment.
Explanation:
For an in-place upgrade of a single-node Tableau Server (from 2023.1 to the latest version, such as 2025.3 as of November 2025), the process is performed directly on the production server to minimize downtime and avoid the need for traffic redirection or a separate environment. This method installs the new version side-by-side with the existing one, then uses an upgrade script to migrate configurations, data, and settings seamlessly. Key steps include:
Disable scheduled tasks: Before upgrading, pause jobs such as extract refreshes and subscriptions (for example, by suspending the relevant schedules from the server's web interface) to prevent interruptions or data inconsistencies during the process.
Run the upgrade script: After running the new version's setup program (which installs side by side and detects the existing installation), execute the upgrade-tsm script from the new version's scripts directory to migrate configuration, data, and settings.
Confirm and test: Start the server with tsm start, then validate functionality and test critical dashboards, data sources, and any new capabilities.
This sequence keeps the upgrade controlled; taking a tsm maintenance backup beforehand provides a rollback path if anything goes wrong. The process typically takes 1–2 hours for a single node, depending on data volume.
Why the other options are incorrect:
A: Uninstalling the old version before running the upgrade script is invalid—Tableau's process requires the existing version to remain in place until the script migrates everything. Uninstalling prematurely would cause data loss or require a full restore.
C: This describes a fresh install and restore approach in a non-production environment (e.g., cloning via backup/restore), not an in-place upgrade. It involves redirecting traffic, which adds complexity and downtime unsuitable for in-place scenarios.
D: Cloning via VM snapshots and upgrading in non-production is a blue/green deployment for zero-downtime upgrades or major OS changes, not a standard in-place process on the production node. It also requires traffic redirection, which contradicts the in-place intent.
Reference:
Tableau Help: Upgrading Tableau Server (Single-Node)
Best Practices: Upgrade Planning Checklist
2025 Release Notes: What's New in Tableau Server (includes upgrade impact filters)
During a Tableau Cloud implementation, a Tableau consultant has been tasked with implementing row-level security (RLS). The client has already invested in implementing RLS within their own database for their legacy reporting solution. The client wants to know whether they will be able to leverage their existing RLS after the Tableau Cloud implementation.
Which two requirements should the Tableau consultant share with the client? Choose two.
A. The Tableau Cloud username must exist in the database.
B. Both live and extract connections can be used.
C. Only live data connections can be used.
D. The RLS in database option must be configured in Tableau Cloud.
✅ Explanation
If a customer already uses row-level security (RLS) inside their database, Tableau Cloud can leverage that same RLS only when using a live connection and only if the database can authenticate/identify the Tableau Cloud user.
To reuse existing database-level RLS, two requirements must be met:
✔ A. The Tableau Cloud username must exist in the database.
Correct.
Database-level RLS typically relies on a field such as username, email, or user ID to filter data. For Tableau Cloud to pass the user identity correctly, the database must recognize the user.
This is usually done via:
- SAML / OAuth passthrough
- Initial SQL (passing USERNAME() into the DB)
- Database mapping tables using Tableau username/email
✔ C. Only live data connections can be used.
Correct.
Tableau Cloud cannot enforce database-managed RLS inside extracts, because an extract is a static snapshot taken with a single set of credentials; the database never sees the individual viewer's identity at query time.
To reuse database-side RLS:
You must use live connections so the database can apply security at query time.
❌ Why the others are incorrect
B. Both live and extract connections can be used.
Incorrect—extracts cannot leverage dynamic database RLS.
D. The RLS in database option must be configured in Tableau Cloud.
Incorrect—there is no such setting in Tableau Cloud.
RLS is defined and enforced in the database, not in Tableau Cloud.
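The database-side pattern described above can be sketched in Python; the `username` column and the row shape are illustrative assumptions, not Tableau or database APIs:

```python
def rows_for_user(rows, tableau_username):
    """Database-style RLS sketch: each row carries the username entitled
    to see it, and the filter runs at query time. This is why a live
    connection is required: an extract would freeze one user's query
    result instead of re-evaluating the filter per viewer.
    ('username' is an assumed column name.)"""
    return [r for r in rows if r["username"] == tableau_username]
```

The key point the sketch makes: the filter needs the current viewer's identity as an input, which only exists when the database is queried live.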
A client wants to see data for only the last day in a dataset, and the last day is always yesterday. The date is represented with the field Ship Date.
The client is not concerned about the daily refresh results. The volume of data is so large that performance is their priority. In the future, the client will be able to move the calculation to the underlying database, but not at this time.
The solution should offer the best performance.
Which approach should the consultant use to produce the desired results?
A. Filter MONTH/DAY/YEAR on [Ship Date] field and use an option to filter to the latest date value when the workbook opens.
B. Filter on calculation [Ship Date]=TODAY()-1.
C. Filter on Ship Date field using the Yesterday option.
D. Filter on calculation [Ship Date]={MAX([Ship Date])}.
Explanation:
Correct Approach: Use a Filter with the Calculation [Ship Date] = TODAY() - 1 (Option B)
This is the highest-performing solution that fully satisfies the client’s requirements. The calculation TODAY()-1 is a simple, deterministic, row-level Boolean test that always resolves to yesterday’s date, regardless of when the extract refreshes or the workbook is opened. Because it contains no aggregate functions and no LOD expressions, Tableau can push this filter all the way down into the Hyper extract creation process as an extract filter. When the extract is built or refreshed, only rows where Ship Date equals yesterday are physically written into the .hyper file. This dramatically reduces the extract size and makes every subsequent query (including dashboard load, filter actions, and mark rendering) lightning-fast—even on datasets with billions of rows. Since the client explicitly prioritizes performance over everything else and is comfortable with daily refreshes, this approach delivers the best possible speed today while remaining easy to replace later when the logic moves to the database.
Why Option A Is Incorrect and Much Slower
Option A suggests breaking Ship Date into MONTH/DAY/YEAR components and then using a relative-date or “latest date value when workbook opens” filter. This forces Tableau to scan the entire dataset on every single query to determine what the latest date is before applying the filter. On a massive dataset, this extra scan adds seconds or even minutes to every dashboard load. It also prevents the filter from becoming a true extract filter, so the full historical dataset remains in the extract, wasting storage and slowing down rendering.
Why Option C Is Close but Still Not the Recommended Answer
Option C uses the built-in “Yesterday” relative-date filter on the Ship Date field. Internally, Tableau translates this to something very similar to TODAY()-1, and performance is excellent in most cases. However, the Analytics-Con-301 exam (and Tableau’s own best-practice documentation) consistently favors the explicit calculation TODAY()-1 as the answer for performance-critical scenarios because it gives the author full control and guarantees the filter can be converted into a data-source or extract filter without ambiguity. Many real-world implementations also prefer the calculation form for clarity in version control and future maintenance.
Why Option D Performs Poorly on Large Data
Option D uses a fixed LOD expression {MAX([Ship Date])} to find the single latest date in the data and then filters to that date. While this would technically show only the most recent day, the LOD forces Tableau to run a separate subquery to compute the global maximum before applying the row-level filter. On very large extracts this subquery adds noticeable overhead, and—most importantly—it prevents the filter from being materialized as an extract filter during refresh. The result is a significantly larger extract and slower query performance compared to the simple row-level TODAY()-1 test, making it the wrong choice when raw speed is the top priority.
In summary, for a huge dataset where the client needs exactly yesterday’s data and performance is non-negotiable, the consultant must implement an extract or data-source filter using the calculation [Ship Date] = TODAY() - 1. This is the officially recommended, exam-correct, and real-world fastest solution.
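The row-level test behind option B can be mimicked in Python; the `ship_date` key is an illustrative stand-in for the [Ship Date] field:

```python
from datetime import date, timedelta

def filter_to_yesterday(rows, today=None):
    """Mirror of the Tableau filter [Ship Date] = TODAY() - 1: a plain
    row-level equality against a value known before the query runs,
    so no pre-scan for MAX(date) is needed and the predicate can be
    pushed down as an extract filter."""
    today = today if today is not None else date.today()
    yesterday = today - timedelta(days=1)
    return [r for r in rows if r["ship_date"] == yesterday]
```

Contrast this with option D's `{MAX([Ship Date])}`, which would first require a pass over all rows to find the maximum before any row could be filtered.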
A client wants to grant a user access to a data source hosted on Tableau Server so that the user can create new content in Tableau Desktop. However, the user should be restricted to seeing only a subset of approved data.
How should the client set up the filter before publishing the hyper file so that the Desktop user follows the same row-level security (RLS) as viewers of the end content?
A. Data Source Filter
B. Context Filter
C. Apply Filter to All Using Related Data Sources
D. Extract Filter
Explanation:
The goal is to ensure that a specific user, when connecting from Tableau Desktop, is permanently restricted to seeing only a predefined subset of data. This security filter must be inherent to the data source itself and cannot be something the user can modify or bypass in Desktop.
Here’s why a Data Source Filter is the correct and only robust choice for this scenario:
Embedded in the Data Source Definition: A Data Source Filter is applied at the connection level and becomes a fundamental part of the data source's definition. When this filtered data source is published to Tableau Server, the filter is preserved.
Enforced in Tableau Desktop: When a user in Tableau Desktop connects to this published data source, the Data Source Filter is applied immediately and automatically. The user cannot see, modify, or remove this filter. They can only build workbooks on top of the already-filtered dataset.
Consistency with End-Content Viewers: Because the same published data source is used to create workbooks and is then used by viewers on Tableau Server, the RLS is consistent. Both the content creator (in Desktop) and the final consumer (on Server) see the exact same, security-trimmed view of the data.
Why the other options are incorrect:
B. Context Filter: A context filter is a worksheet-level filter used for performance optimization. It is part of a workbook's specific view and is not part of the data source definition. A user in Tableau Desktop can easily modify or remove a context filter, so it provides no reliable security.
C. Apply Filter to All Using Related Data Sources: This is an action within a workbook that applies a filter across multiple sheets. It is a dashboard interaction feature and has nothing to do with defining a secure data source for publishing.
D. Extract Filter: While an extract filter does create a subset of data, it is applied during the creation of a .hyper extract file. The key distinction is its behavior after publishing:
If you publish an extract to Server, the filter is "baked in" and the user in Desktop would see the subset.
However, the question specifies the user will "create new content in Tableau Desktop." If the user connects to the published data source and creates a new extract locally in Desktop, they could potentially configure the extract filter differently, bypassing the intended security. A Data Source Filter is more secure because it governs both live connections and any extracts created from it on Server.
Key Concept:
Feature: Data Source Filters for Row-Level Security (RLS).
Core Concept: To enforce a data-level security policy that is consistent for both content creators (in Tableau Desktop) and consumers (on Tableau Server), the filter must be applied at the data source level. This embeds the security directly into the connection, making it immutable by the end-user and ensuring it is the foundation for all workbooks built from that data source.
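As a sketch of the distinction, assuming a toy `region` column (not a Tableau API): a data source filter is baked into the published object, so downstream queries can only narrow it further, never widen it:

```python
class PublishedSource:
    """Toy model of a published data source carrying a data source filter.
    The restriction is applied before any consumer query runs, so a
    Desktop author building new content cannot see or remove it;
    workbook-level filters can only narrow the already-filtered rows."""
    def __init__(self, rows, source_filter):
        self._rows = [r for r in rows if source_filter(r)]

    def query(self, workbook_filter=lambda r: True):
        return [r for r in self._rows if workbook_filter(r)]
```

A workbook filter asking for excluded data simply gets nothing back, which is the behavior the question requires of the Desktop user.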
A consultant is tasked with improving the performance of a large workbook that contains multiple dashboards, each of which leverages a separate data source. What is one way to improve performance?
A. Convert Data Source filters to Quick Filters.
B. Convert any extracted data sources to live data sources.
C. Restrict the users who can access the workbook.
D. Split the workbook into multiple workbooks.
✅ Explanation
When a single workbook contains:
Many dashboards
Each using different data sources
Large amounts of data
…it becomes heavy and slow because every data source must be loaded and queried when the workbook opens, even if only one dashboard is being viewed.
✔ Splitting the workbook improves performance by:
Reducing the number of data sources loaded at once
Reducing initial load time
Reducing query workload per workbook
Making the dashboards more modular and easier to manage
This is a commonly recommended Tableau performance optimization technique, especially for large, multi-dashboard workbooks.
❌ Why the other options are incorrect
❌ A. Convert Data Source Filters to Quick Filters
Converting data source filters into Quick Filters actually hurts performance rather than improving it. Data source filters are applied at the data source level and efficiently reduce the amount of data Tableau needs to load or query. Quick Filters, on the other hand, are interactive, user-facing filters that require Tableau to compute all possible filter values and dynamically requery or recalculate whenever they are changed. This creates additional overhead, especially when dealing with large datasets. Instead of reducing the workload, Quick Filters increase processing demands, slow dashboard responsiveness, and often significantly increase the workbook’s rendering time.
❌ B. Convert Extracted Data Sources to Live Data Sources
Switching from extracts to live data sources generally results in slower performance, particularly for large datasets or complex dashboards. Extracts (.hyper files) are highly optimized, compressed snapshots stored in Tableau’s high-performance engine, designed to return results quickly. Live connections, however, rely on external databases, which may struggle under heavy or inefficient query loads. Live queries can be impacted by network latency, database performance limitations, resource bottlenecks, and concurrent user traffic. Unless the underlying database is extremely powerful and well-tuned, live connections rarely outperform extracts. Therefore, converting extracts to live connections is counterproductive when the goal is to improve speed.
❌ C. Restrict the Users Who Can Access the Workbook
Limiting which users can access the workbook does nothing to improve the workbook’s actual performance. Performance issues are related to factors such as data volume, query complexity, number of data sources, dashboard design, and hardware capacity. Reducing user access does not reduce the computational load required to open or render the workbook for the users who still have access. Tableau performance is driven by processing work, not by the size of the audience. Even if fewer people use the workbook, the underlying queries and visualizations will not run any faster. As a result, restricting access is a security decision—not a performance optimization strategy.
Sales managers use a daily extract from Snowflake to see the previous day’s snapshot.
Sales managers should only see statistics for their direct reports.
The company has Tableau Data Management on Tableau Cloud.
A consultant must design a centralized, low-maintenance RLS strategy.
What should the consultant implement?
A. Built-in RLS security in Snowflake
B. Data policy
C. Manual user filter
D. Dynamic user filter
Explanation:
This scenario is a textbook case for using a Data Policy, a core feature of the Tableau Data Management offering. The requirements make this the clear and optimal choice:
Centralized: The security logic is defined and managed in a single place: the virtual connection in Tableau Cloud.
Low-maintenance: Once configured, it requires no manual intervention. The policy is enforced automatically at query time against whatever data the latest refresh delivered.
Uses a Daily Extract: This is the critical detail. Data policies work with both live and extract connections, so the daily extract from Snowflake is fully supported.
The company has Tableau Data Management: This is the licensing prerequisite for using Data Policies.
Here’s how a Data Policy works and why it fits perfectly:
Policy Definition: The consultant defines a data policy on a virtual connection. The policy uses a rule, for example: a user can see rows where the [Manager ID] field matches their own USERNAME() (or an entry in a centrally maintained entitlement table).
Application at query time: Whenever a workbook queries the virtual connection, Tableau applies the policy before returning results, so no per-user copies of the data need to be created or maintained.
User Experience: When a sales manager opens a workbook, they connect through this secured virtual connection. Tableau Cloud automatically serves them their own filtered view, showing only the data for their direct reports. This happens instantly and transparently.
Why the other options are incorrect:
A. Built-in RLS security in Snowflake: This is a powerful solution, but only for live connections. Since the client is using a daily extract, the connection to Snowflake's live security context is broken the moment the data is snapped into the .hyper file. The extract is a static snapshot, and Snowflake's RLS cannot filter it after the fact.
C. Manual user filter: This involves creating a complex filter with a hard-coded list of usernames and the data they can see (e.g., a long OR statement). This is the exact opposite of "low-maintenance." Every organizational change would require a manual update to the filter, which is error-prone, unsustainable, and not centralized.
D. Dynamic user filter: This typically refers to a worksheet-level calculated field filter (e.g., [Manager ID] = USERNAME()). While this is a valid RLS method, it is not centralized or robust. It must be manually added to every single worksheet that needs this security. It is fragile, as a user could create a new sheet and forget the filter, potentially exposing all data. A Data Policy is a server-enforced, data-source-level solution that eliminates this risk.
Key Concept:
Feature: Data Policies (part of Tableau Data Management).
Core Concept: Data policies, applied through virtual connections, are the premier centralized method for implementing row-level security on both live and extract data in Tableau Cloud/Server. The policy is enforced at query time, giving each user a personalized view without the maintenance overhead of manual filters and without requiring a live connection back to the database.
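The entitlement-table pattern a data policy typically encodes can be sketched as follows; the table shape and the `user`/`manager_id` column names are hypothetical:

```python
def apply_data_policy(rows, entitlements, user):
    """Sketch of the entitlement-table pattern used in data policies:
    a central mapping of user -> manager_id is joined to the data at
    query time, so each sales manager sees only their direct reports.
    (Column names 'user' and 'manager_id' are illustrative.)"""
    allowed = {e["manager_id"] for e in entitlements if e["user"] == user}
    return [r for r in rows if r["manager_id"] in allowed]
```

Because the entitlement table is the single source of truth, an organizational change means updating one table, not every workbook: the "centralized, low-maintenance" property the question asks for.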
A business analyst needs to create a view in Tableau Desktop that reports data from both Excel and MSSQL Server.
Which two features should the business analyst use to create the view? Choose two.
A. Relationships
B. Cross-Database Joins
C. Data Blending
D. Union
✔ Explanation
When a business analyst needs to report on data from multiple sources—such as Excel and MSSQL Server—in a single Tableau view, Tableau offers two primary ways to combine data at the row or logical level:
Relationships:
Introduced in Tableau 2020.2, relationships allow analysts to combine data from different sources without physically joining tables, preserving the granularity of each source.
Relationships are flexible and support combining heterogeneous sources (Excel + SQL Server) in a way that Tableau dynamically generates queries when building visualizations.
Cross-Database Joins:
Cross-database joins allow a physical join across different data sources, combining rows from multiple sources into a single table.
This is useful for creating a unified dataset from Excel and SQL Server when you need a joined view at the row level.
Both options are valid ways to combine data from multiple sources to produce the required view.
❌ Why the other options are incorrect
❌ C. Data Blending
Data blending was the legacy method for combining data from multiple sources in Tableau, typically used when the sources could not be joined physically. While still available, it is less flexible and less performant than relationships or cross-database joins, especially for modern Tableau workflows. Relationships now provide a more robust and dynamic solution, making blending unnecessary in most cases.
❌ D. Union
A union stacks rows from multiple tables vertically, requiring that the tables have compatible columns. In this scenario, the analyst needs to combine Excel and SQL Server data horizontally to create a comprehensive view, not to append rows. Therefore, a union is not appropriate.
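Conceptually, a cross-database join matches rows across the two sources on a shared key, as in this Python sketch (the `id` key and row shapes are assumptions for illustration):

```python
def cross_source_join(excel_rows, sql_rows, key):
    """Row-level inner join across two heterogeneous sources, analogous
    to a Tableau cross-database join between an Excel sheet and a SQL
    Server table: build an index on one side, then merge matching rows."""
    index = {r[key]: r for r in sql_rows}
    return [{**e, **index[e[key]]} for e in excel_rows if e[key] in index]
```

A relationship achieves a similar combination logically, but defers the actual join until a visualization queries the fields, preserving each table's native granularity.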
An analyst needs to interactively set a reference date to drive table calculations without leaving a view.
Which action should the analyst use?
A. Running action
B. Filter action
C. Parameter action
D. Highlight action
Explanation:
Correct Solution: Use a Parameter Action (Option C)
The analyst should use a Parameter Action because it is the only native Tableau feature that allows an end-user to interactively change the value of a parameter directly from within a dashboard view—without opening the parameter control or leaving the dashboard. A parameter action can be configured so that clicking or selecting a mark (e.g., a specific date on a timeline or a date pill) instantly writes that selected value into a target parameter. Since table calculations (like moving average, percent difference, or index relative to a reference date) frequently rely on parameters to define the reference point, a parameter action provides the exact interactive experience requested: users dynamically set the reference date on the fly, and all dependent table calculations update instantly across the dashboard.
Why Running Action (Option A) Is Incorrect
Running actions do not exist in Tableau. There is no action type called “Running action”—this is a distractor.
Why Filter Action (Option B) Is Incorrect
Filter actions can change what data is shown or hidden, but they cannot directly set or update the value of a parameter. Table calculations often need a specific fixed reference value (not just filtered data), so a filter action cannot drive the logic in the required way.
Why Highlight Action (Option D) Is Incorrect
Highlight actions only visually emphasize related marks across sheets; they have no ability to change parameter values or affect calculations.
In summary, when the requirement is to interactively set a reference date (or any value) that drives table calculations without leaving the view, the only correct and exam-expected answer is Parameter Action (C). This has been a standard Analytics-Con-301 question pattern since parameter actions were introduced in Tableau 2019.2.
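The mechanism can be sketched numerically: the selected mark writes a reference date into a parameter, and the dependent calculation re-evaluates against it (dates and values below are made up):

```python
def percent_diff_from_reference(series, reference_date):
    """Table-calc style computation driven by a reference value: the
    percent difference of each point versus the value at the
    user-selected reference date (the value a parameter action
    would write into the parameter)."""
    ref = series[reference_date]
    return {d: round(100.0 * (v - ref) / ref, 1) for d, v in series.items()}
```

Selecting a different mark changes only `reference_date`; every downstream value updates immediately, which is exactly the in-view interactivity the question describes.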
A stakeholder has multiple files (CSV/tables) saved in a single location. A few files from the location are required for analysis. Data transformation (calculations) is required for the files before designing the visuals. The files have the following attributes:
. All files have the same schema.
. Multiple files have something in common among their file names.
. Each file has a unique key column.
Which data transformation strategy should the consultant use to deliver the best optimized result?
A. Use join option to combine/merge all the files together before doing the data transformation (calculations).
B. Use wildcard Union option to combine/merge all the files together before doing the data transformation (calculations).
C. Apply the data transformation (calculations) in each required file and do the wildcard union to combine/merge before designing the visuals.
D. Apply the data transformation (calculations) in each required file and do the join to combine/merge before designing the visuals.
Explanation:
This is a classic data preparation scenario. The key to choosing the best strategy lies in the file attributes provided:
"All files have the same schema."
"Multiple files have something in common among their file names."
"Each file has a unique key column."
Let's analyze why a wildcard union is the optimal first step:
Purpose of a Union: A UNION operation is designed to append rows from multiple tables or files. It stacks data vertically. This is the perfect operation when you have multiple files with the exact same column structure (same schema) that you want to combine into a single, larger table.
Efficiency of Wildcard Union: The "wildcard" part automatically finds and unions all files in a folder that match a specific pattern in their file names. Since the problem states that the required files have something in common in their names, a wildcard union is the fastest, most efficient, and least error-prone way to combine them. You set up the pattern once, and Tableau does the rest.
Optimized Workflow: Performing the union first is the most optimized approach. You create one single, clean, consolidated data source. You then apply your data transformations (calculations) once to this unified dataset. This is far more efficient and maintainable than applying the same calculations individually to dozens of separate files before combining them (as suggested in options C and D).
Why the other options are incorrect:
A. Use join option to combine/merge all the files...: A JOIN is used to combine tables horizontally by matching values in a key column. It is completely the wrong operation here. Since each file's key column is described as "unique," joining on it would result in no matches. Furthermore, since the schemas are the same, a join would create a massive, meaningless table with a huge number of duplicate columns (e.g., Sales_File1.CustomerID, Sales_File2.CustomerID, etc.).
C. Apply the data transformation (calculations) in each required file and do the wildcard union...: While this method would technically work, it is highly inefficient and not optimized. You would have to manually create the same set of calculated fields for every single individual file. This violates the "Don't Repeat Yourself (DRY)" principle, is a maintenance nightmare, and is error-prone. The union-first approach is superior.
D. Apply the data transformation (calculations) in each required file and do the join to combine/merge...: This option combines the flaws of both A and C. It incorrectly uses a JOIN for a scenario that requires a UNION, and it applies calculations in the least efficient way possible.
Key Concept:
Data Combination Method: Union (specifically Wildcard Union).
Core Concept: When you have multiple data files with the same structure (schema), the most efficient and logical way to combine them is by using a union to append the rows. Performing data preparation and transformation after the union is a best practice for workflow optimization and maintainability. A join is used for combining different types of data based on a key, not for consolidating identical datasets.
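The union-first workflow can be sketched in Python; the `sales`/`cost` column names and the margin calculation are illustrative:

```python
def wildcard_union(tables):
    """Append rows from same-schema tables (the effect of a wildcard
    union), then apply the transformation once to the combined result.
    In Tableau the file list would come from a filename pattern;
    here the tables are passed in directly."""
    combined = [dict(row) for table in tables for row in table]
    for row in combined:
        row["margin"] = row["sales"] - row["cost"]  # calc defined once, post-union
    return combined
```

Defining the calculation once after the union is the maintainability win over options C and D, where the same logic would be duplicated per file.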
From the desktop, open the CC workbook.
Open the Incremental worksheet.
You need to add a line to the chart that shows the cumulative percentage of sales contributed by each product to the incremental sales.
From the File menu in Tableau Desktop, click Save.
Explanation:
Open the CC workbook in Tableau Desktop and go to the worksheet named Incremental.
Sort the product dimension in the view in descending order by SUM(Sales) so the largest contributors appear first.
Hold Ctrl and drag the SUM(Sales) pill on Rows to the right of itself to create a second copy of the measure.
Right-click the second SUM(Sales) pill and choose Quick Table Calculation → Running Total.
Right-click the same pill again and choose Edit Table Calculation:
- Check Add secondary calculation.
- Set the Secondary Calculation Type to Percent of Total.
- For both calculations, set Compute Using to Specific Dimensions and check the product dimension, so the calculation runs across products in the sorted order.
Right-click the second SUM(Sales) pill and choose Dual Axis. Do not synchronize the axes: the bars are in sales units while the line is a percentage.
On the Marks card, set the first measure's mark type to Bar (incremental sales) and the second measure's mark type to Line, then format the line (for example, red and slightly thicker).
Clean up: format the right-hand axis as a percentage, add an axis title such as "Cumulative % of Sales", and hide any headers you do not need.
From the menu, click File → Save.
You now have the classic Pareto chart: bars showing incremental sales by product (sorted descending) and a line showing the cumulative percentage of sales contributed by each product. This matches the requirement for the hands-on exam section.
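Numerically, the Pareto line is just a running total converted to percent of total over the descending-sorted sales values, as this small sketch shows:

```python
def cumulative_percent(sales_desc):
    """Cumulative % of total sales, matching Running Total + Percent of
    Total applied to values sorted descending: these are the y-values
    of the Pareto line."""
    total = sum(sales_desc)
    running, out = 0, []
    for s in sales_desc:
        running += s
        out.append(round(100.0 * running / total, 1))
    return out
```

For example, products contributing 50, 30, and 20 units of sales yield line values of 50%, 80%, and 100%.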