It is day 2 of the summit already, and I am at the bloggers' table again, waiting for the rest of the bloggers to arrive. Yesterday was a great day of training and networking, ending with the SentryOne party. I had so much fun that it was hard getting up early today, but I managed it. Please follow me here for today's keynote updates – it looks like an exciting day already!
8:17 AM: Wendy Pastrick from the PASS Board of Directors is up on stage. She talks about the benefits of networking and non-training activities at PASS, and about how PASS makes financially wise, data-supported decisions.
8:24 AM: Tim Ford takes the stage to explain marketing decisions. He calls out volunteer effort as the backbone of PASS, and we observe a moment of silence for those who have passed on. PASS Summit content has expanded to include hands-on pre-con sessions on DevOps and content management. Session evaluations are important and valuable, so take time to do them.
8:30 AM: Mark Souza and team are coming up to reflect on 25 years of SQL Server.
Four product managers, starting with Ron Soukup (1989-1995): he talks of 16-bit OS/2 and an Intel 386 server with 60 MB of RAM. In 1990 Windows 3.0 was released – a watershed moment – and a Win16 DLL was provided to write Windows apps with SQL Server in the background. The sales failure of OS/2 made Windows more successful. SQL Server on Windows NT was released in August 1993, within 30 days of NT shipping. It was a 32-bit OS, so a lot more RAM was available. Unix didn't have threads but Windows NT did, so that was used, and development of SQL '95 started. The industry mocked it and called it SQL Infinity. In the summer of 1995 SQL Server 6.0 shipped with a GUI, replication in the box, and NT perfmon integration. In the fall of 1995, audited TPC-C benchmarks showed 2,500 transactions per second at a quarter of the price of Sybase on OS/2. There were 17 people on the development team.
Paul Flessner (1995-2005): Oracle and DB2 had hundreds of developers while SQL Server had 65. They had a lot of 'bogus datatypes'; we didn't. We needed more people on the team, more software engineers. Oracle had educated the market that everyone needed row-level locking. The Sybase code base was over 10 years old and was built for a single-SMP Unix environment. SQL Server 7.0 was then released with row-level locking and significant changes to the engine.
Ted Kummert (2005-2014): He was asked if he'd like to run the SQL Server business. We have to be great at the craft of engineering, not do what others are doing – 'don't screw it up'. Memory got big enough that we could rethink things. 'Managed self-service business intelligence' was a big theme. For the cloud, 'we're all in' was the model when it was proposed. Since they have to run it at scale, they are doing a much better job delivering features.
Rohan Kumar (2016-today): Paul sent the team an email on certain things to remember about how to succeed as a team – it is still on his desk. Ted was very thoughtful and principled, and the result of all this is 25 years of dedication.
Thank you to the community from the Microsoft team.
Next generation data platform, Raghu Ramakrishnan: It takes a community to raise a student. Microsoft has over 10 exabytes of data, does over 10 exabytes of I/O, and runs a big diversity of data systems.
The changing landscape of data and the vision for data management: Azure SQL DB Hyperscale, the design points it tries to address, and how it works. The cloud changes everything in the equation in terms of scale: scale, heterogeneity, many engines, many workloads. Elastic compute with elastic storage; a cloud-native design means unbounded, elastic storage. Size-of-data operations are slow and long recovery times are painful – 60% of customers need quick recovery, and today it takes a long time. The goal is to increase scale and availability with costs reflecting workload dimensions, while masking network latencies (an AZ is a ring of T2 rings – network neighborhoods and latencies). Relational/OLTP data, warehouse DW/OLAP data, and the data lake – files, docs, videos, telemetry, graphs – have different workload characteristics, different requirements, and different database systems. Users need to move data across systems, which is slow and complicates governance, or use query federation to go across silos. Can we break free of data silos? The parts of a database system are walked through. A cloud-native design separates compute from storage management and breaks open silos, allowing data from other systems to be accessible at a much deeper level than popular query federation.
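To make that compute/storage split concrete, here is a minimal T-SQL sketch (the database name and service objectives are placeholders I picked, assuming an existing Azure SQL database) of moving a database to the Hyperscale tier and then resizing compute independently of storage:

-- Sketch: [MyAppDb] and the service objectives below are placeholder names.
-- Move an existing Azure SQL database to the Hyperscale tier, where storage is
-- effectively unbounded and managed separately from compute.
ALTER DATABASE [MyAppDb]
    MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');

-- Later, scale compute up or down without a size-of-data operation on storage.
ALTER DATABASE [MyAppDb]
    MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');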
OLTP systems: ACID properties, transactional updates, a high velocity of data changes, and tight SLAs as key design targets. How you store data versus how you scale compute matters if you are heading toward a cloud future; the network simply extends the database. Resilient Buffer Pool Extension. Rather than a local log they use a shared log service – log size is unbounded and XLOG owns quorum. Can we optimize storage at failover targets? Tempdb traditionally holds the version information needed for concurrency control. MVCC means multi-version concurrency control with timestamps; SQL Server was originally based on two-phase locking, but for main-memory databases lock-free data structures and MVCC rock (Hekaton!). Persistent Version Store – PVS is the new SQL Server mechanism for persisting row versions in the database itself instead of the original tempdb version store. Each secondary has an independent cache, and the read workload can be auto-partitioned across them efficiently. There are many deployment options, e.g. in different AZs, and backups are done automatically. In a cloud architecture, 'retry' is like CTRL-ALT-DEL.
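A quick way to see PVS at work is a sketch like the one below, assuming a SQL engine build that exposes this DMV and a database using PVS-based versioning; it only reports space, nothing more:

-- Sketch: check how much space the Persistent Version Store is using, now that
-- row versions are persisted in the user database instead of tempdb.
SELECT DB_NAME(database_id) AS database_name,
       persistent_version_store_size_kb / 1024.0 AS pvs_size_mb
FROM sys.dm_tran_persistent_version_store_stats;

-- Reads can be routed to a secondary replica (each with its own cache) by
-- adding ApplicationIntent=ReadOnly to the connection string.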
The traditional log is a circular buffer: a large transaction runs it out of space, a long-running transaction runs it out of space. With an unbounded log, the log is the database and no log backup is required; the design takes advantage of the highly skewed access pattern for the log. SQL Hyperscale offers tier-1 scale and performance with a cloud-native architecture, and it is the foundation for making OLTP data directly accessible to DW/BI and more.
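For contrast, this is the kind of housekeeping a conventional circular log forces on you and which the unbounded log is meant to remove – a sketch only, with a placeholder database name and backup path:

-- Sketch of the traditional routine: watch log space in the current database
-- and back up the log so it can be truncated and reused.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

BACKUP LOG [MyAppDb] TO DISK = N'\\backupshare\MyAppDb.trn';

-- With Hyperscale's unbounded log, the service manages log truncation and
-- backups automatically, so neither step is needed.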