The Stretch Database Retirement

  • Comments posted to this topic are about the item The Stretch Database Retirement

  • "There's also a weird mention of Fabric in the announcement, which doesn't seem to fit with the objective of the rest of article."

    I think they just mention Fabric at every opportunity.

  • Maybe it's a footer in all blogs 😛

  • I've always thought the pricing model and feature limitations of Stretch Database suggest that Microsoft uses Synapse Analytics to host the data, but I've never had anyone confirm or deny that.

    I'm currently working on a project that archives data to Parquet format in Azure storage containers and then integrates it with the on-prem database using PolyBase external tables (a rough sketch of the setup follows below). It's a very cost-effective (but poorly performing) solution that might be a fit depending on your use case for accessing historical data. In this case, the assumption is that the data will be used very rarely but must still be accessible online, without DBA intervention, when needed by data analysts or auditors.
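
    For anyone curious what the on-prem side looks like, here's a minimal sketch of the external-table setup, assuming SQL Server 2022-style PolyBase syntax; the storage account, container, SAS token, and table definition are placeholders, not our actual objects:

    ```powershell
    # Minimal sketch: a PolyBase external table over the archived Parquet files.
    # Assumes SQL Server 2022 PolyBase syntax and an existing database master key;
    # storage account, container, SAS token, and table names are placeholders only.
    Import-Module SqlServer

    $ddl = @'
    -- Credential used to reach the storage container (SAS token is a placeholder).
    CREATE DATABASE SCOPED CREDENTIAL ArchiveBlobCredential
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<sas-token>';
    GO

    CREATE EXTERNAL DATA SOURCE ArchiveBlobStorage
    WITH (
        LOCATION   = 'abs://mystorageaccount.blob.core.windows.net/archive',
        CREDENTIAL = ArchiveBlobCredential
    );
    GO

    CREATE EXTERNAL FILE FORMAT ParquetFormat
    WITH (FORMAT_TYPE = PARQUET);
    GO

    -- External table that reads the exported .parquet files in place.
    CREATE EXTERNAL TABLE dbo.SalesHistoryArchive
    (
        SaleId   BIGINT,
        SaleDate DATE,
        Amount   DECIMAL(18, 2)
    )
    WITH (
        LOCATION    = '/sales-history/',
        DATA_SOURCE = ArchiveBlobStorage,
        FILE_FORMAT = ParquetFormat
    );
    '@

    # Run the DDL against the on-prem database that should see the archived data.
    Invoke-Sqlcmd -ServerInstance 'MyOnPremServer' -Database 'SalesDB' -Query $ddl
    ```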

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I think that's the direction most people are moving. Getting data from an OLTP database into Parquet is the challenging part, but I think things like Synapse Link will become more common and make this easier.

  • In case anyone is interested, I'm using a PowerShell module written by Andrey Mirskiy that functions as a wrapper for the Parquet.Net and ParquetSharp libraries (it can use either one). It's very easy to use: just pass in parameters for the server, database, SQL query, and destination file. I then use the AzCopy command-line tool to upload the .parquet files to Azure storage; a rough sketch follows the links below.

    https://github.com/AMirskiyMSFT

    https://github.com/microsoft/AzureSynapseScriptsAndAccelerators/tree/main/Migration/SQLServer/2B_ExportSourceDataToParquet
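
    Here's the rough call shape; Export-ToParquet and its parameters are just stand-ins for the module's actual command (the linked repos have the real names), and the storage URL and SAS token are placeholders:

    ```powershell
    # Rough sketch of the export-and-upload step. Export-ToParquet is a hypothetical
    # stand-in for the wrapper module's actual command (see the linked repos); server,
    # database, query, paths, and the SAS URL are placeholders.
    $exportParams = @{
        ServerName   = 'MyOnPremServer'
        Database     = 'SalesDB'
        Query        = "SELECT * FROM dbo.SalesHistory WHERE SaleDate < '2015-01-01'"
        DestFilePath = 'C:\ParquetExports\SalesHistory_pre2015.parquet'
    }
    Export-ToParquet @exportParams   # hypothetical name -- substitute the module's own cmdlet

    # Upload the exported files to the archive container with AzCopy (SAS URL is a placeholder).
    azcopy copy 'C:\ParquetExports\*.parquet' `
        'https://mystorageaccount.blob.core.windows.net/archive/sales-history/?<sas-token>'
    ```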

    Cool tier storage is billed at $0.01 per GB per month and the Cold tier at $0.0036 per GB per month, which is appropriate for rarely used archival data; at the Cold rate, a terabyte runs well under $4 a month. It gets even cheaper with the Archive tier, if you are willing to wait a few hours for data to be rehydrated from offline storage.

    This project is still in the proof-of-concept stage, and the data analysts are considering which data lake solution would meet their needs for querying, or whether simply accessing the data through an Azure VM running SQL Server PolyBase will suffice. Maybe I'll write up an article here once it solidifies a bit more.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
