| The Complete Weekly Roundup of SQL Server News | Hand-picked content to sharpen your professional edge |
| Human frailty and the custodianship of data

I was sitting through a rather tedious meeting discussing the minutiae of data security when it occurred to me that IT was an easy target. When the GDPR panic swept through international corporate organisations, managers managed to calm themselves down with the comforting thought that this was purely an IT problem, and therefore a convenient stick with which to beat IT. Not so. When you look through the successful prosecutions that have happened since the GDPR passed into European law, you see that human stupidity and carelessness account for many of them, and organisational malice the bulk of the rest. The IT data breaches are spectacular in scale, and we are right to pore over the details and 'put our own house in order'. However, IT are, in general, the wrong target.

In terms of stupidity, we have as examples insurance companies accidentally distributing their entire client base, along with eye-wateringly personal details, embedded unwittingly in a spreadsheet. We have embarrassing personal information in an email chain that eventually went outside the company. We have a social care organisation dumping old files, full of case notes, in the basement of an unsecured and unoccupied house. We have an employee of a care home loading an entire database of medical notes and personal information, taking it home, and then having the laptop stolen.

Reading through accounts of successful prosecutions is fascinating. There are so many ways of 'leaking' confidential data. I was reminded of the time I'd just visited the local amenity site for disposing of rubbish. As I dumped all my old monitors, I noticed a group of people disassembling all the dumped computers. Intrigued, I asked around, and discovered that they were given a guaranteed price for every hard drive. It was, surprisingly, well over the scrap price.
What I'm driving at is that the broader obligations that organisations have for the responsible curation of information can't be palmed off. It affects everyone in the organisation. When any sort of 'leakage' happens, you'll get a visit from the representative of your regulatory authority wearing a charcoal grey suit. The first thing he or she will ask for is your DPIA, or Data Protection Impact Assessment. All organisations that trade in Europe must formally assess the likelihood of data being 'leaked' or breached, and the impact it would have. This is a formal document that is signed off at board level.

It doesn't matter where data may be concealed; like Schrödinger's cat, it is both breached and safe until you investigate and audit it. Where it is stored or hosted is irrelevant. You must prove that you are taking responsible custody of it. The board of directors, governors or trustees cannot claim they know nothing, because it is now their legal obligation to know. If they haven't done this assessment, and the data is breached, even through human foolishness, it could lead, in Europe, to administrative fines of up to 2% of the organisation's annual global turnover or €10 million, whichever is the greater.

Phil Factor

Join the debate, and respond to the editorial on the forums |
The Weekly News | All the headlines and interesting SQL Server information that we've collected over the past week, and sometimes even a few repeats if we think they fit. |
Vendors/3rd Party Products |
Should you always use EXISTS rather than COUNT when checking for the existence of any correlating rows that match your criteria? Does the former really offer "superior performance and readability"? Louis Davidson investigates. |
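As a minimal T-SQL sketch of the two patterns being compared (the `Customers`/`Orders` tables and their columns are illustrative, not from the article):

```sql
-- Pattern 1: COUNT asks the engine to tally every matching row,
-- even though we only care whether at least one exists
SELECT c.CustomerId
FROM   dbo.Customers AS c
WHERE  (SELECT COUNT(*)
        FROM   dbo.Orders AS o
        WHERE  o.CustomerId = c.CustomerId) > 0;

-- Pattern 2: EXISTS expresses a semi-join and can stop
-- at the first matching row
SELECT c.CustomerId
FROM   dbo.Customers AS c
WHERE  EXISTS (SELECT 1
               FROM   dbo.Orders AS o
               WHERE  o.CustomerId = c.CustomerId);
```

In practice the optimizer can often rewrite the `COUNT(*) > 0` form into a semi-join anyway, which is presumably part of what the article investigates; EXISTS states the intent directly either way.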
Tony Davis reviews the major new features of SQL Prompt 10, including improved 'ranking' of its code auto-completion suggestions, tab history improvements to make it easier to find 'lost' code, and auto-fixing of code that breaks code analysis rules. |
With growth on the horizon for Fiducia & GAD, hiring more DBAs was not an option. With SQL Monitor they were able to scale at an unprecedented rate: 150 new servers and 70+ applications in one year. |
In the latest version of SQL Prompt, we’ve made improvements to all the most popular features. Our new ranked suggestions algorithm prioritizes the suggestions most relevant to you, tab history improvements let you find old tabs easily and star favorites, and new auto-fixes help you resolve code issues quickly. With support for key features of SQL Server 2019, which was made available this week at Microsoft Ignite, SQL Prompt 10 gives you the latest tools to develop faster, improve code quality, and boost team productivity. |
Aaron Bertrand highlights some of the important changes coming in the installation process for SQL Server 2019, such as a new command-line argument, /SQLMAXDOP, allowing you to specify this value for automated installs. |
Azure Synapse (SQL Data Warehouse and Data Lake) |
Data virtualization can be the solution to overcoming the shortcomings of a centralized repository. |
James Serra summarizes new and forthcoming features for the enterprise data warehousing side of Azure Synapse Analytics (formerly Azure SQL Data Warehouse). |
Many have covered how you should back up your SQL Server database to Azure storage (also known as backup to URL), but what about restoring? |
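For reference, restoring from Azure blob storage is the mirror image of backup to URL. A hedged sketch (the storage account, container, and database names are placeholders, and the SAS token is deliberately elided):

```sql
-- A credential named after the container URL, holding a Shared Access
-- Signature, must exist before the restore can read the blob
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<SAS token>';

-- Restore directly from the blob, just as you would from a local .bak
RESTORE DATABASE MyDatabase
FROM URL = 'https://myaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH RECOVERY;
```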
Conferences, Classes, Events, and Webinars |
SQL Saturday comes to Bangladesh on Nov 16. |
The PowerShell to create a PowerShell Notebook, and to create a PowerShell Notebook that will create a PowerShell Notebook! |
Solving numerical puzzles using PowerShell |
Examining three different ways to create a database in PowerShell, using the Invoke-SqlCmd cmdlet, SMO, and dbatools. |
If you’re writing a service that produces some JSON from your database model, what do you need? DTOs, factories, factory builders, Jackson, an XML and JSON mapper library? No, just some SQL. |
Data Mining / Data Analysis |
Sentiment analysis is a classification problem where data teams attempt to predict whether text is positive or negative in tone. |
DevOps and Continuous Delivery (CI/CD) |
Robert Sheldon discusses ten guidelines that will help organisations implement DevOps successfully. |
ETL/SSIS/Azure Data Factory/Biml |
Mapping Data Flows is similar in look and feel to SQL Server Integration Services (SSIS). If you’re coming from an SSIS development background, Mapping Data Flows is a lot like the Data Flow tab. ADF Pipelines are a lot like the Control Flow tab. |
The Raw File source in SQL Server Integration Services (SSIS) can read data faster than other file sources because it doesn't need to translate or parse the data when processing the file. This tip shows how to create Raw Files using SSIS that can also be used as source files, to take advantage of this storage format. |
HA/DR/Always On/Clustering |
Discussing the pros and cons of the different approaches, and explaining why you might want to use AKS for your Big Data Cluster deployments. |
How to create a multi-subnet Availability Group |
Performance Tuning SQL Server |
SQL Server 2019 provides another option to deal with the last-page insert contention – OPTIMIZE_FOR_SEQUENTIAL_KEY. It improves throughput for high-concurrency inserts into an ascending clustered index. |
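The option is applied per index. A minimal sketch of enabling it on a table whose clustered key is an ascending identity (the table and column names are illustrative):

```sql
-- SQL Server 2019+: every insert lands on the last page of this
-- clustered index, so concurrent inserts contend for that page.
-- OPTIMIZE_FOR_SEQUENTIAL_KEY manages that contention for better throughput.
CREATE TABLE dbo.Orders
(
    OrderId   int IDENTITY(1, 1) NOT NULL,
    OrderDate datetime2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);
```

The same option can be set on an existing index with `ALTER INDEX … SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)`.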
If you want to make "lightweight" indexes, the big question isn’t, “Should this column be in the key or the includes?” It's “Should this column even be in the index at all?” |
The index recommendations in the query plan and the ones in the DMVs both suffer from some pretty serious drawbacks. Brent Ozar lists them out. |
Possible duplicate indexes are those whose key and included columns very closely match. Why do you care? Indexes have to be maintained. |
PowerPivot/PowerQuery/PowerBI |
What is the Common Data Service, what does it mean for you if you are a user of Power Apps or Power BI, and how can you use it? |
Paul Turley steps you through some essential mechanics of the query script language of Power Query, known as “M” or the Microsoft Data Mashup language. |
Paginated reports in Power BI offer a rich set of capabilities for printing, and the generation of report documents. Paginated reports can be exported to multiple document formats, and those exports can be scheduled and delivered via email. Unfortunately, for the moment at least, paginated reports require the use of a dedicated capacity, which can be cost prohibitive for some. This article describes a pattern that will help minimize the cost of using paginated reports for subscriptions. |
At Ignite 2019 Microsoft announced the public preview of large datasets in Power BI Premium, allowing the possibility of deploying organizational semantic models to Power BI. Having tested it with some large datasets, Teo Lachev shares some thoughts. |
Scott Murray explains the new Power BI Filter Pane and how to use it. |
Creating a star schema in Power BI can improve performance and more importantly, ensure accurate results! This article shows why a star schema can fix some of the issues in your report. |
Deploying Power BI content through a CI/CD pipeline makes it easy to move artifacts (pbix files) from one workspace to another, and even roll-back to an earlier version of the artifact if required, based on use of versioning in Git. |
A data intake process to leave you scratching your head. |
There are times when you need to compute row numbers in no particular order; in other words, based on nondeterministic order. This could be across the entire query result, or within partitions. Examples include assigning unique values to result rows, deduplicating data, and returning any row per group. |
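The standard T-SQL idiom for this is ordering the window by a constant expression. A brief sketch, using an illustrative `Orders` table:

```sql
-- ROW_NUMBER requires an ORDER BY, but (SELECT NULL) is a constant,
-- so the engine is free to number rows in any order it likes
SELECT OrderId,
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
FROM   dbo.Orders;

-- The same trick within partitions: keep any one row per customer
WITH numbered AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY CustomerId
                              ORDER BY (SELECT NULL)) AS rn
    FROM   dbo.Orders
)
SELECT *
FROM   numbered
WHERE  rn = 1;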
Strange behavior caused by a space at the start of a password. |
In part 4 of his series, Erik Darling gets you started exploring your new, venti-edition Stack Overflow column store database. |
How to capture and store the number of records inserted, updated or deleted from a T-SQL Merge statement. |
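The usual mechanism is the `OUTPUT` clause's `$action` pseudo-column. A hedged sketch (the `Target`/`Source` tables and columns are illustrative placeholders):

```sql
-- Collect the action taken for each row the MERGE touches
DECLARE @actions TABLE (MergeAction nvarchar(10));

MERGE dbo.Target AS t
USING dbo.Source AS s
      ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Val = s.Val
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Val) VALUES (s.Id, s.Val)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action INTO @actions (MergeAction);

-- Aggregate into one count per action (INSERT / UPDATE / DELETE)
SELECT MergeAction, COUNT(*) AS Rows
FROM   @actions
GROUP BY MergeAction;
```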
Erik Darling tackles a “Make Big” script to produce a large Stack Overflow database that is all clustered column store. |
Erin Stellato explains some enhanced controls over Query Store behavior in SSMS 18.4. |
Tests help developers eliminate defects, build confidence, practice good design, and ideally all three. They also take time to write, run, and update: time that’s no longer available for other development tasks. High-value testing seeks to maximize the return on that investment. |
Grant Fritchey explains why database professionals can "no longer justify ignoring" these three tools: Docker, Git and DBATools. |
This email has been sent to {email}. To be removed from this list, please click here. If you have any problems leaving the list, please contact webmaster@sqlservercentral.com. This newsletter was sent to you because you signed up at SQLServerCentral.com. Note: This is not the SQLServerCentral.com daily newsletter list, and unsubscribing from this newsletter will not stop you receiving the SQL Server Central daily newsletters. If you want to be removed from that list, you can follow the instructions on the daily newsletter. |
|
|