Building the Enterprise DW/BI System with SQL Server PDW

Most readers considering a Parallel Data Warehouse already have a data warehouse in place and are looking for ways to handle growing data volumes and performance demands. Many of these next-generation, large-scale data warehouse/business intelligence systems are evolving from existing DW/BI systems designed around the Kimball approach. In that case, the transition to SQL Server PDW will be straightforward.

In this section we run through the basic steps for converting an existing SMP-based Kimball data warehouse to a Parallel Data Warehouse server, including the impact of SQL Server PDW on the DBA. We’ll also explore additional roles SQL Server PDW can play, including serving as the central source or hub, in a distributed data warehouse environment, as an ETL transformation engine, and as a platform for providing real-time analytic data.

Preparation and Installation

The SQL Server PDW system must live in a data center and involves at least two racks, so you should do some planning with your server management group before the truck shows up on installation day. Since the appliance uses InfiniBand, any other servers you want to benefit from its fast data transfer functions will need InfiniBand connections and must be located close enough to the SQL Server PDW server to stay within cable length limits.

Vendor installation is usually part of the purchase and takes a few days depending on what issues show up.

Part of planning should include some consideration of your overall conversion strategy. The options range from directly converting the existing data warehouse to completely re-architecting the system as part of the migration process. We will focus on the direct conversion approach in this section and discuss the re-architecting options later in this paper.

Data Migration

Once the machine is up and running, the next step is to create the new database, instantiate the target objects and their properties, and copy over the data. The Parallel Data Warehouse database is a SQL engine, but it is a bit different from the SMP-based SQL Server database. This is mostly because it is a parallel processing system, and some things don’t work quite the same. Certain functions have an underlying assumption of serial processing that doesn’t work in a parallel environment. Other functions, such as distributing data across nodes for parallel execution, don’t exist in the SMP environment.

If you are converting an existing SMP SQL Server database to SQL Server PDW, you can use a tool the Microsoft PDW team has built to help. It creates tables, adjusts indexes and partitioning, suggests distribution strategies for the fact tables, identifies problems such as data types that do not have direct equivalents in SQL Server PDW, and generates the actual BCP out scripts to get data from SQL Server and load scripts to load data into SQL Server PDW.
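
To give a sense of what those generated scripts might look like, here is a minimal sketch of a BCP export from the SMP SQL Server source and a corresponding dwloader load into SQL Server PDW. The server names, logins, file paths, table names, and delimiters are placeholders, and the dwloader option names shown should be verified against the loader documentation for your appliance version.

REM Export a table from the existing SMP SQL Server (run from a command prompt).
bcp MyEDW.dbo.SalesFact out \\LandingZone\loads\SalesFact.dat -S SmpSqlServer -T -c -t "|"

REM Load the exported file into the matching SQL Server PDW table via the Landing Zone.
dwloader.exe -S PdwControlNode -U LoaderLogin -P ********** -i \\LandingZone\loads\SalesFact.dat -T MyEDW.dbo.SalesFact -t "|" -M append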

If your existing data warehouse is not SQL Server, the initial data migration is still fairly straightforward as long as you have a solid set of dimensional models. It shouldn’t take more than a few hours depending on the number of tables involved.

One big advantage of the SQL Server PDW system from the DBA’s perspective is the simplification it brings to physical data management. The physical location of data, including filegroups, disk layout, LUNs, and tempdb location, is all handled automatically as part of the core SQL Server PDW system.

There is one high-level physical decision to be made when moving to a massively parallel environment: how the tables should be split up across the nodes. There are two primary ways to physically instantiate tables in SQL Server PDW: replicated or distributed. The CREATE TABLE DDL includes a distribution clause where this is specified.

Replicated Tables

A replicated table looks like a single table to anyone who accesses SQL Server PDW, but it is actually replicated out to all compute nodes on the server. That is, there is one copy of the table on each node.

The purpose of replicating tables is to improve performance by having local copies of data on each node to support local joins. Replicated tables are generally used for dimensions and lookup tables to support local joins to the fact tables.

The replicated tables are managed by the system transparently. From the DBA’s perspective, the CREATE TABLE syntax is pretty simple:

CREATE TABLE Customer (
    CustomerKey int NOT NULL,
    Name varchar(50),
    ZipCode varchar(10))
WITH
    (DISTRIBUTION = REPLICATE);

The default is REPLICATE if the distribution clause is omitted.

Distributed Tables

The rows of a distributed table are spread across all nodes as evenly as possible. Each row is written out to a distribution, which is a storage location on a node. There are eight distributions on each compute node, each with its own disks. In other words, no copies are made; each row in the source table ends up in only one distribution on one compute node. The rows are mapped to the distributions using a hash function on a column from the table.

The goal of distribution is to improve performance by maximizing parallel processing. Fact tables are usually the largest tables in the data warehouse, and are usually distributed.

Figure 9 shows a simplified version of the distribution of a Sales Fact table across eight compute nodes based on the CustomerKey column.

Figure 9: Fact table distribution

The CustomerKey from each row of the incoming Sales Fact data in the upper left is put through a hash function. Each hashed value maps to a single distribution on a single node. For example, the row for customer key 44 hashes to 0x1C, which maps to the last distribution of the first compute node. Here is the DDL for the distributed table shown in Figure 9:

CREATE TABLE SalesFact (
    DateKey INT NOT NULL,
    CustomerKey INT,
    DollarAmount MONEY)
WITH
    (DISTRIBUTION = HASH(CustomerKey));

The choice of the distribution column is key, so to speak. If a few customers accounted for a large percentage of sales, using Customer Key would lead to an imbalance in the data distribution. One or two distributions would end up with a larger percentage of the data. This imbalance is called data skew. One or a few distributions with 10% more rows than average may cause problems, and a difference of greater than 30% will lead to poor performance. This makes sense because each query has to wait for all nodes to complete, and any node with significantly more data will take longer than the others when processing queries involving skewed data.
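
If you want to check for skew after loading a table, SQL Server PDW provides the DBCC PDW_SHOWSPACEUSED command, which reports row counts and space usage for each distribution of a table. A minimal example against the sample fact table defined above:

-- Report rows and space for each distribution of the distributed fact table;
-- row counts that vary widely between distributions indicate data skew.
DBCC PDW_SHOWSPACEUSED("SalesFact");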

The primary criteria for selecting a good column for distribution are high cardinality and even row counts. There are other considerations for choosing the distribution column. For example, it’s not a good idea to choose a column that is often constrained to a single value in user queries. If users typically constrain on a single day, then the DateKey column is not a good candidate because all the rows for that day will end up in a single distribution. Other factors come into play when selecting a distribution key, such as distributing multiple fact tables that may need to be joined together to support certain analytics.

The parallel processing power of the SQL Server PDW system allows you to test your distribution key choice. Pick a distribution key, load the table, and run some distribution queries and a representative set of user queries against it. If you find a problem, you can create another version of the distributed table from the first version by using the CREATE TABLE AS SELECT statement and changing the column in the DISTRIBUTION = HASH () clause. This is generally much faster than you would expect because of the parallel processing. Of course, you need enough space to make multiple copies of your large fact tables, even if they are only experimental.
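
A minimal sketch of that approach, using the sample fact table and switching to DateKey purely to show the mechanics of changing the hash column; the RENAME OBJECT swap at the end assumes that statement is available on your appliance version.

-- Build a copy of the fact table hashed on a different column, then swap names.
CREATE TABLE SalesFact_Redistributed
WITH (DISTRIBUTION = HASH(DateKey))
AS
SELECT DateKey, CustomerKey, DollarAmount
FROM SalesFact;

RENAME OBJECT SalesFact TO SalesFact_Old;
RENAME OBJECT SalesFact_Redistributed TO SalesFact;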

Dealing with Very Large Dimensions

As we said, dimensions are almost always replicated in the SQL Server PDW data warehouse. As a rule of thumb, dimension tables that are 5 GB uncompressed or smaller should be replicated. You do have to allow for space on each node for the replicated table. A 5-GB dimension would compress to around 2 GB, which would take up a total of 20 GB once it is replicated across a 10-node rack. By the way, compression is automatic and mandatory in SQL Server PDW.

Just to get a sense of the dimension table size that qualifies for replication, a Product dimension like the one shown in Figure 5 with a 500-byte uncompressed row size could hold about 10 million rows before you might consider other options.
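
If you want to run this kind of arithmetic against a real table, a simple query along the following lines gives a rough uncompressed size to compare against the 5 GB guideline; the 500-byte average row width is an assumption you would replace with your own estimate.

-- Rough uncompressed size estimate for the Product dimension, assuming an
-- average row width of 500 bytes (1 GB = 1073741824 bytes).
SELECT COUNT_BIG(*) AS row_count,
       COUNT_BIG(*) * 500 / 1073741824.0 AS est_uncompressed_gb
FROM Product;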

Dimensions larger than 5 GB uncompressed are not unheard of, especially when dealing with big data. If you have a dimension that exceeds the replication threshold, you have two main options in a parallel environment: distribution or normalization.

Distributing a large dimension leverages the same parallel processing power as the fact table. However, if the dimension rows needed to resolve a query are not on the same node as the associated fact rows, the required dimension data must be “shuffled” between nodes. SQL Server PDW is designed to move data rapidly when necessary for a query processing step, but it’s always faster to stay local.

In some cases, it may be possible to distribute the dimension using the same surrogate key as the fact table. This shared distribution key means the joins remain local because the required dimension rows are on the same nodes as corresponding fact rows.
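
As a sketch, the Customer dimension from the earlier example could be distributed on the same CustomerKey used to hash the fact table:

CREATE TABLE Customer (
    CustomerKey int NOT NULL,
    Name varchar(50),
    ZipCode varchar(10))
WITH
    (DISTRIBUTION = HASH(CustomerKey));

Because both tables hash on CustomerKey, a join between them resolves locally on each node without shuffling rows.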

The second option is to normalize very large dimensions to reduce their size and make their replication less burdensome. If the product dimension shown in Figure 5 had 10 million rows, it would require about 2 GB (depending on column widths and compression ratios), which is around the replication boundary. The normalized product table shown in Figure 4 would only require about 325 MB to hold 10 million rows. Obviously 325 MB is going to be easier to copy out to 10 or 20 nodes than 2 GB.

If most queries against a large dimension only return or constrain against a few columns, consider creating an outrigger dimension. That is, the core dimension will contain the commonly used columns. The rest of the columns are put into a separate table, called an outrigger, with the same surrogate key. The core dimension can then be replicated, and will join locally to the fact tables. Queries that require the less common attributes can bring them in with a single join to the replicated outrigger dimension. This is an easy way to get back into the range where replication works without having to completely normalize the dimension.

You can insulate users from the complexity of a normalized or outrigger design by providing views that re-combine the normalized columns back into a single dimension. Test these views to make sure they do not negatively impact performance.
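
Here is a minimal sketch of the outrigger pattern with hypothetical column names: the commonly used attributes stay in a replicated core Product dimension, the bulky or rarely used attributes move to an outrigger on the same surrogate key, and a view presents the two as one dimension.

CREATE TABLE Product (
    ProductKey int NOT NULL,
    ProductName varchar(100),
    Category varchar(50))
WITH
    (DISTRIBUTION = REPLICATE);

CREATE TABLE ProductOutrigger (
    ProductKey int NOT NULL,
    LongDescription varchar(1000),
    MarketingText varchar(2000))
WITH
    (DISTRIBUTION = REPLICATE);

-- View that presents the two tables as a single dimension for users.
CREATE VIEW ProductAll AS
SELECT p.ProductKey, p.ProductName, p.Category,
       o.LongDescription, o.MarketingText
FROM Product p
JOIN ProductOutrigger o ON o.ProductKey = p.ProductKey;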

You may have heard that normalization is a requirement for MPP systems. Some historical context might help explain this. Early MPP systems had tighter space constraints, lower bandwidth between nodes, and more costly storage. This led to a default practice of normalizing the dimensions in order to reduce the amount of data replicated onto each node. MPP vendors glossed over this need to normalize by arguing that you should use a normalized model because it is the “industry standard” for an enterprise data warehouse. Do not be fooled by this reverse logic. Normalizing dimensions is an MPP design choice made to improve performance by reducing the amount of data that must be replicated and stored across the nodes. Again, the need to normalize a dimension has been a rare occurrence in SQL Server PDW implementations to date.

Additional DDL

There are a few additional design decisions to make in defining the data warehouse tables. There are typically far fewer indexes on an MPP system because they are not needed. Do use clustered indexes where it makes sense. In most cases, this means creating a clustered index on the surrogate key of the dimension tables, and on the same column used for partitioning the fact tables. Use non-clustered indexes with care. In many cases, they are not needed because of the parallel processing speed, and they add maintenance, slow the load process, and take up space.

Fact tables may be partitioned for the same reasons you would partition on an SMP system, such as rolling window management or load isolation that uses a SWITCH operation. Partitioning is conceptually simpler in SQL Server PDW because it is fully specified as part of the table creation DDL rather than through a separate partition function and scheme.
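
Pulling these choices together, a sketch of the sample fact table DDL with a clustered index on DateKey, hash distribution on CustomerKey, and inline date-range partitioning might look like this (the boundary values are placeholders):

CREATE TABLE SalesFact (
    DateKey INT NOT NULL,
    CustomerKey INT,
    DollarAmount MONEY)
WITH
    (CLUSTERED INDEX (DateKey),
     DISTRIBUTION = HASH(CustomerKey),
     PARTITION (DateKey RANGE RIGHT FOR VALUES (20110101, 20110201, 20110301)));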

Create an ETL System to Load the Target Model

Once you have tables defined in SQL Server PDW, the next step is to load data into them. The initial data transfers will most likely use scripts to bulk copy the existing data warehouse history into the SQL Server PDW. Moving forward, if you were using SQL Server Integration Services, your ETL system should function essentially the same with SQL Server PDW as it did with your prior data warehouse. For example, SQL Server PDW has its own source and destination connections you will use in your Integration Services packages. However, there are a few product differences that will impact your ETL system.

Surrogate Key Assignment

The IDENTITY property of an integer column is not supported in SQL Server PDW. This makes sense when you realize rows in a distributed table will be inserted across many separate nodes. The cost of keeping track of incremental identity assignments across multiple nodes in a parallel process would dramatically slow any insert process. If you were using the IDENTITY property to assign surrogate keys to your dimensions, you will need to manage this either in your ETL process, by keeping surrogate key values in a table and assigning them incrementally, or in the INSERT statement, by using the ROW_NUMBER function.
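
A sketch of the second approach, with an assumed staging table and business key column: new surrogate keys are generated by offsetting ROW_NUMBER values from the current maximum key in the dimension.

-- Assign surrogate keys without IDENTITY: add ROW_NUMBER() to the current
-- maximum key already in the Customer dimension.
INSERT INTO Customer (CustomerKey, Name, ZipCode)
SELECT m.MaxKey + ROW_NUMBER() OVER (ORDER BY s.CustomerBusinessKey),
       s.Name,
       s.ZipCode
FROM Customer_Stage s
CROSS JOIN (SELECT ISNULL(MAX(CustomerKey), 0) AS MaxKey FROM Customer) m;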

Cached Lookups Only

If you use Integration Services Lookup transformations in your existing ETL packages, make sure you select Full cache in the Cache mode section when querying SQL Server PDW, which pre-populates the lookup cache. Using the Lookup transformation to perform a non-cached SELECT operation against incoming Integration Services pipeline rows is inefficient with SQL Server PDW.

The Landing Zone

The SQL Server PDW system has a separate staging server as part of the control rack called the Landing Zone. Incoming data from the Integration Services connections or the SQL Server PDW bulk loader (DWLoader.exe) flows through the Landing Zone prior to being distributed to the compute nodes for permanent storage. The Landing Zone quickly reads incoming rows from files or Integration Services and sends them off to compute nodes in a round-robin fashion using a module called the Data Movement System (DMS) which, not surprisingly, handles data movement around the system. On each compute node, a DMS instance hashes the rows and sends them back out to the DMS instance of the node to which they map. The receiving DMS instance inserts the rows into a staging table, where any sorting and indexing takes place. The final step uses SELECT INTO to copy the data from the staging table to the target table. All this happens behind the scenes and is managed by the system.

This whole flow keeps data loading in a highly parallel fashion and minimizes any processing work actually performed on the Landing Zone.

One benefit of parallel processing is that the load process can run while users are querying the data. The loader processes get lower priority, so they have little impact on user queries. This means you can process yesterday’s load without having to limit user access. It also means you could do near-real-time data loads to give access to current data where it’s needed.

Transact-SQL Compatibility

SQL Server PDW has its own variant of SQL with extensions to support parallel processing. Some functions in the SMP SQL Server product have not been implemented in SQL Server PDW. Some of these were omitted because they are functions that do not translate well into a parallel environment. For example, the IDENTITY property is not supported as described in the ETL section.

Transact-SQL compatibility with SQL Server SMP is not yet fully complete, and Microsoft continues to add functionality through frequent updates. You will want to test any existing scripts or stored procedures that are part of your current operations against the latest functionality provided by SQL Server PDW.

System Management and Tuning

SQL Server PDW has its own Central Administration console that provides easy management and monitoring of the system. The console is a tool similar to SQL Server Management Studio, but it is aware of the multi-node nature of the system and monitors sessions, queries, loads, backups, node activity, and alerts and errors.

From a tuning perspective, it’s best to take a simple approach on SQL Server PDW, starting with minimal indexes as described in the physical design section and testing performance with a representative set of user queries once you get the data loaded. If it works, no problem. If not, you can use the Central Administration console to inspect individual query plans to see where the bottlenecks are. For example, your fact table distribution may be skewed, so most of the processing is on a single node. In this case, you can try a different distribution column using the CREATE TABLE AS SELECT statement as described earlier. You may need indexes for specific types of queries. For example, one customer needed to query individual phone numbers from the Customer dimension for some of their lookup reports. A non-clustered index on phone number did the trick. This is something you would typically consider for queries that are used often and have a large impact on the user community.
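
A sketch of that fix, with an assumed column name:

-- Non-clustered index to support frequent single-phone-number lookups.
CREATE INDEX IX_Customer_PhoneNumber ON Customer (PhoneNumber);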

Additional Opportunities

There are several additional roles and requirements Parallel Data Warehouse can take on beyond hosting the enterprise data warehouse. From an enterprise information perspective, SQL Server PDW can integrate with existing systems by serving as the system of record for analytic data and providing that data to downstream bulk consumers. From an ETL processing perspective, SQL Server PDW can act as a large-scale ETL engine to manage the bulk transformation of big data sets. SQL Server PDW can also support near real-time data warehousing, which is critical for certain analytics.

Integration with Existing Systems

There are many situations where the enterprise data warehouse needs to feed large data sets to downstream systems. In many cases, these are extensions of the DW/BI system in the form of data marts, which can be fed from SQL Server PDW in a hub-and-spoke fashion. The definition of data mart is quite fluid; it often describes a component that exists for historical and/or political reasons and adds significant work without adding much value. Data marts and other downstream data consumers can also include purpose-built architectural components. For example, it may make sense to create a subset of enterprise data on a separate server to allow integration with business unit or divisional data. We’ve also seen large chunks of data exported from the EDW to support research or data mining on a dedicated server. Operational systems, such as a sales force automation or customer relationship management system, often import large subsets of the EDW to provide context to their processes. We’ve also seen subsets created for business-specific reasons. For example, one company wanted to provide sales data to their customers, but decided to create a separate data mart for each customer for security reasons.

If you need to integrate with existing systems, SQL Server PDW can help. Remote Table Copy is a high-speed table copying function that can transfer tables from the SQL Server PDW to SQL Server running on a locally connected SMP server. Data transfer rates can be as fast as 400 GB per hour. Once the data is in the target SQL Server machine, you would complete the ETL process to properly integrate it into the database with appropriate indexes, partitioning, and any other required constraints.
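
Remote Table Copy is invoked with the CREATE REMOTE TABLE statement. The sketch below uses placeholder server, login, and database names and an illustrative subset query; confirm the exact syntax and options against your PDW documentation.

-- Copy a subset of the EDW to a SQL Server database on an InfiniBand-connected
-- SMP server (connection string values are placeholders).
CREATE REMOTE TABLE MarketingMart.dbo.WestRegionSales
AT ('Data Source = SmpServerName, 1433; User ID = LoadUser; Password = **********;')
AS
SELECT f.DateKey, f.CustomerKey, f.DollarAmount
FROM SalesFact f
JOIN Customer c ON c.CustomerKey = f.CustomerKey
WHERE c.ZipCode LIKE '9%';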

An Opportunity for Improvement

If you have downstream data marts that were created for historical performance and/or political reasons, and which no longer serve a true business need, we encourage you to examine them carefully. This multi-layered, multi-model approach adds significant work, time, redundancy, and cost to the enterprise DW/BI system implementation. Implementing a SQL Server PDW system offers a chance to re-architect these leftover appendages into a more efficient and effective enterprise information environment.

This platform improvement strategy seeks to replace the existing DW/BI system by unplugging the existing data marts and redirecting or rewriting BI queries and reports to pull directly from the SQL Server PDW. This approach is usually more disruptive and requires more effort, but ultimately it leads to a simpler, more robust, more responsive enterprise information resource. Simply integrating SQL Server PDW into the existing environment sounds appealing because it is low impact in the short term. However, in the long term, you may be perpetuating systems that are inefficient, confusing, and costly.

SQL Server PDW as the Transformation Engine

Organizations dealing with particularly large data sets and operating with narrow load windows may not have time to use a separate ETL system to process the data before loading it into the SQL Server PDW. In these cases, SQL Server PDW can serve as a large-scale transformation engine as part of an overall EDW architecture. This approach generally involves loading the data directly into staging tables in the SQL Server PDW database, and then performing ETL lookups as INSERT-SELECT operations that join the staging tables to dimension tables to look up surrogate keys in bulk. This approach applies the full power of the parallel environment to the core ETL processes.
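
A minimal sketch of this pattern, with assumed staging and dimension table names: incoming rows land in a staging table, and a single INSERT-SELECT joins them to the dimensions to resolve surrogate keys in bulk.

-- Bulk surrogate key lookup inside SQL Server PDW: join staged rows to the
-- replicated dimensions and insert the resolved rows into the fact table.
INSERT INTO SalesFact (DateKey, CustomerKey, DollarAmount)
SELECT d.DateKey,
       c.CustomerKey,
       s.DollarAmount
FROM SalesFact_Stage s
JOIN DateDim d ON d.FullDate = s.SaleDate
JOIN Customer c ON c.CustomerNumber = s.CustomerNumber;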

Real Time Options

While most of the analytic data in the data warehouse does not need to be loaded more than once every 24 hours, some business opportunities require more frequent data loads. SQL Server PDW’s parallel load process supports near-real-time loading under the Read Uncommitted isolation level (dirty reads). Loads can run while users query the tables, and these data loads have a low impact on the overall performance of concurrently running queries.

Conclusion

SQL Server Parallel Data Warehouse offers a viable platform for supporting large-scale data warehouses into the hundreds of terabytes. The appliance nature of the system makes it relatively easy to configure, install, tune, manage, and expand. SQL Server PDW provides parallel processing of queries against dimensional models on atomic data to address the Kimball approach’s goals of query performance, usability, and flexibility on an enterprise information resource.

For more information:

http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/pdw.aspx: Parallel Data Warehouse on SQL Server Web Site

http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/fast-track.aspx: Fast Track Data Warehouse on SQL Server Web site

http://www.microsoft.com/sqlserver/en/us/solutions-technologies/Appliances/HP-bdw.aspx: HP Business Data Warehouse Appliance on SQL Server Web site

http://www.microsoft.com/sqlserver/en/us/solutions-technologies/Appliances/HP-ssbi.aspx: HP Business Decision Appliance at SQL Server Web site

http://www.microsoft.com/sqlserver/: SQL Server Web site

http://technet.microsoft.com/en-us/sqlserver/: SQL Server TechCenter

http://msdn.microsoft.com/en-us/sqlserver/: SQL Server DevCenter
