Several changes were applied to improve security. See ADV-2017-17, ADV-2017-19, ADV-2017-20 and ADV-2017-21 in Security Bulletins for more details.
In some cases, connecting to Marketo would not return your full data set.
Connections to Excel files saved in Excel version 1707 would fail with the error "Errors occurred while loading the data source."
SAML authentication from Tableau Desktop to Tableau Server would fail in some environments.
With recent updates to the Amazon Redshift driver, connections could fail with a "no pg_hba.conf entry for host" error. A workaround was to republish workbooks or data sources with Require SSL checked in the data connection.
Extracts would unexpectedly refresh when you switched between the Data Source tab and a sheet tab.
When source and target sheets were based on different data sources, filter actions failed to trigger sheet navigation when the action was set up to use selected fields.
In some cases, an internal error would occur when you attempted to sort a calculated field containing conditional statements.
When you were connected to Snowflake and cast a string dimension to an integer or float, a "Numeric value is not recognized" error would occur.
Query execution errors would occur when you connected to Cloudera Impala using custom SQL with a date parameter.
In some cases, when you joined multiple tables and performed a pivot, null values would incorrectly appear in Pivot Field Names.
An internal error would occur when you renamed a sheet and then tried to use the controls on the Pages shelf in the sheet.
Printing to PDF was noticeably slower in Tableau Desktop 10.0 and higher compared to prior versions.
When the filmstrip was used to navigate between sheets in a workbook, interactions in sheets would cause Tableau Desktop to gradually consume more memory and eventually crash.
When you saved a workbook in presentation mode and then re-opened the workbook, the view would not be correctly centered.
When connected to a data source using custom SQL, if you changed the data type of a field and then created an extract or saved the workbook, extra line feed characters would be injected into the custom SQL query.
In certain cases involving parameter-based calculated fields used as filters, incorrect or inefficient SQL queries would be generated when connected to Google BigQuery, Apache Drill, or Amazon Redshift, resulting in errors or poor performance.
In some European locales, double-precision numbers were returned as #ERROR or null.
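The locale issue above typically stems from the comma decimal separator used in many European locales: a parser that assumes a dot separator rejects values like "3,14". A minimal sketch of the failure mode and a normalizing fix (plain Python, not Tableau's actual code; `parse_double` is a hypothetical helper):

```python
def parse_double(text: str, decimal_sep: str = ".") -> float:
    """Parse a decimal string, normalizing a locale-specific separator first."""
    return float(text.replace(decimal_sep, "."))

# A dot-only parser fails on a comma-formatted value:
try:
    float("3,14")
    dot_parser_ok = True
except ValueError:
    dot_parser_ok = False  # this branch is taken

# Normalizing the separator makes the same value parse cleanly:
value = parse_double("3,14", decimal_sep=",")
```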
In some cases, aggregated calculated fields would return no data after an extract was created.
When you connected to MySQL, case-sensitive table collation was ignored when custom SQL was used. Custom SQL connections to MySQL now default to the database collation.
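As a rough illustration of why ignoring a case-sensitive collation changes results (plain Python standing in for collation behavior; the `utf8_bin`/`utf8_general_ci` analogy is ours, not from the fix notes):

```python
values = ["Orders", "orders", "ORDERS"]

# Under a case-sensitive collation (e.g. MySQL's utf8_bin),
# all three strings are distinct values:
case_sensitive = set(values)

# Under a case-insensitive collation (e.g. utf8_general_ci),
# they compare equal and collapse to one value:
case_insensitive = {v.casefold() for v in values}
```

Treating a case-sensitive table as case-insensitive (or vice versa) therefore silently merges or splits rows in joins and filters, which is why the connection now honors the database collation.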