June 20, 2019 at 12:00 am
Comments posted to this topic are about the item Technology Flows Downstream
June 20, 2019 at 7:22 am
This is following a trend. At both AWS re:Invent and Microsoft Ignite, Snowflake played a prominent role.
Similarly, we have Vertica running in Eon Mode.
The local caching strategies on the compute nodes make or break these sorts of systems. The separation of compute from storage means that you could have two or more completely separate data marts, each with very different load and access patterns, accessing the same data. At the same time you can have a third "data warehouse" in Snowflake terms, or "subcluster" in Vertica terms, that is used for ETL work. AWS S3 storage offers eleven nines of durability, so that is a step up.
As the compute nodes are focused on compute and don't have to worry about storage, you can see a noticeable performance improvement. Again, I stress that it all depends on how the data is cached on the compute nodes.
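To make the caching point concrete, here is a toy sketch of a per-node LRU block cache in Python. It is purely illustrative, not how Snowflake or Vertica actually implement their caches: two independent "data marts" read the same shared storage, but each node's hit rate depends entirely on whether its own access pattern fits its local cache. All names (`NodeCache`, `fetch_from_storage`, the block IDs) are made up for the example.

```python
from collections import OrderedDict

class NodeCache:
    """Toy LRU cache: a compute node keeps hot blocks locally so repeated
    reads avoid a round trip to shared object storage (e.g. S3)."""

    def __init__(self, capacity, fetch_from_storage):
        self.capacity = capacity          # max blocks held locally
        self.fetch = fetch_from_storage   # fallback: read from shared storage
        self.blocks = OrderedDict()       # block_id -> data, oldest first
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # mark as recently used
            self.hits += 1
            return self.blocks[block_id]
        self.misses += 1                        # cold read: go to storage
        data = self.fetch(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

# Two separate "data marts" reading the same shared data, each with its
# own local cache shaped by its own access pattern.
storage = {b: f"data-{b}" for b in range(10)}   # stand-in for S3
mart_a = NodeCache(capacity=3, fetch_from_storage=storage.__getitem__)
for b in [1, 2, 1, 2, 1, 2]:        # narrow, repetitive workload
    mart_a.read(b)
print(mart_a.hits, mart_a.misses)   # hot working set fits: 4 hits, 2 misses
```

A scan-heavy workload over the same storage would see mostly misses with the same cache size, which is the "makes or breaks" point: the shared storage is identical, the cache fit is not.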
June 20, 2019 at 3:53 pm
An important trend as workloads increase. Most of us would prefer more machines to grow a load than a larger single machine.
June 20, 2019 at 5:32 pm
I'm not sure if a current or near future SQL edition will do this, but in Cosmos DB, throughput is provisioned using request units per second (RU/s), so the number of compute nodes elastically scale automatically without the need for purchasing vCores or choosing a DTU tier. It will even horizontally scale (automatically redistributing) data partitions across replicas.
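The RU/s model can be sketched as a per-second budget that every operation draws down, a bit like a token bucket. This is a simplified illustration of the idea, not the Cosmos DB service logic; the class name, the 5 RU per-operation cost, and the refill behaviour are assumptions for the example (400 RU/s is the real minimum provisioned tier, and a throttled request really does get an HTTP 429 from Cosmos DB).

```python
class ThroughputBudget:
    """Toy model of provisioned throughput in request units per second
    (RU/s). Each operation has an RU cost; when a second's budget is
    exhausted, further requests are rejected (Cosmos DB returns 429)."""

    def __init__(self, provisioned_rus):
        self.provisioned = provisioned_rus
        self.remaining = provisioned_rus

    def new_second(self):
        self.remaining = self.provisioned   # budget refills each second

    def try_request(self, ru_cost):
        if ru_cost <= self.remaining:
            self.remaining -= ru_cost
            return True                     # request served
        return False                        # rate limited

budget = ThroughputBudget(provisioned_rus=400)  # minimum Cosmos DB tier
# 100 operations at a hypothetical 5 RU each = 500 RU of demand.
served = sum(budget.try_request(5) for _ in range(100))
print(served)   # 80 requests fit the 400 RU budget; the rest throttle
```

Autoscale and partition redistribution then amount to raising `provisioned_rus` and spreading the budget across more physical partitions behind the scenes, without the user picking vCores or a DTU tier.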
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho