January 9, 2019 at 4:46 pm
Comments posted to this topic are about the item "Files with larger disk consumption".
January 10, 2019 at 9:11 am
So.... how do you find out what's causing the problem?
--Jeff Moden
Change is inevitable... Change for the better is not.
January 10, 2019 at 3:43 pm
Jeff Moden - Thursday, January 10, 2019 9:11 AM
So.... how do you find out what's causing the problem?

Hi Jeff,
Did you refer to slowness issues in reading the data?
January 10, 2019 at 10:21 pm
Junior Galvão - MVP - Thursday, January 10, 2019 3:43 PM
Hi Jeff,
Did you refer to slowness issues in reading the data?
No. I'm talking about finding high values in the things you measured. How do you find what is causing those high values? The article doesn't really provide a clue as to how to determine if it's code, hardware, or simply a temporary "data storm". I realize that you're "just" providing a script but it seems to me that at least mentioning that it could be one of those 3 would make the script article more valuable because being able to identify a problem doesn't really help if you don't know what the actual problem really is.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 11, 2019 at 6:49 am
Jeff Moden - Thursday, January 10, 2019 10:21 PM
No. I'm talking about finding high values in the things you measured. How do you find what is causing those high values?
January 11, 2019 at 8:00 am
Junior Galvão - MVP - Friday, January 11, 2019 6:49 AM
Jeff,
I really just shared the script, which I use all the time in my consulting work here in Brazil. I never set out to write an article that illustrates or explains how to track down what produces these values; maybe I will do that later. If you like, you can visit my humble blog, pedrogalvaojunior.wordpress.com, to get a sense of what I do and a little of what I aim to do.
There are several possibilities that can produce these values, and it depends very much on what is being executed or processed by the instance, server, or hardware we are analyzing. When this script is run in an environment and the values presented are close to the thresholds I highlighted, we then have to start analyzing the possible causes. When faced with a supposedly slow read process, the main causes are typically:
- the hard disk performing poorly when retrieving data;
- fragmentation in our tables and indexes;
- a lack of indexes on our tables;
- columns in the WHERE clause that keep the data from being retrieved efficiently; and
- the query being executed itself.
At this point, these are the items to consider as possible causes of slow reads of data sitting on disk. But let me reinforce again that this script is meant to alert you to the values being presented at the time of its execution; the causes or reasons will vary from scenario to scenario.
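For the first item in that list, the per-file I/O stall counters that SQL Server exposes through `sys.dm_io_virtual_file_stats` are a reasonable first check. This is a minimal sketch, not part of the original script; the DMV and its columns are documented, but any latency threshold you apply to the results is a judgment call for your own environment:

```sql
-- Average read/write latency per database file, computed from SQL Server's
-- cumulative I/O statistics (since the last service restart).
-- High read latency points toward storage or scan-heavy workloads; rule out
-- fragmentation and missing indexes before blaming the hardware.
SELECT DB_NAME(vfs.database_id)                  AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads
       END                                       AS avg_read_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END                                       AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;
```

Consistently high `avg_read_ms` on data files suggests the storage side; if latency looks healthy, the remaining items in the list (fragmentation, missing indexes, the query itself) become the more likely suspects.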
I get and very much appreciate all of that, Junior. If you would have stated the very things in the preamble of your script as possibly "Here are the things that can go wrong with your system...." followed by the rest of what you wrote as a first measure to see if you even have a problem (which you did very well!), people would see the value in your script even more.
It's meant as a suggestion because there are too many people that write scripts out there without emphasizing why the script is valuable.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 11, 2019 at 1:22 pm
Jeff Moden - Friday, January 11, 2019 8:00 AM
I get and very much appreciate all of that, Junior. If you would have stated the very things in the preamble of your script as possibly "Here are the things that can go wrong with your system...." followed by the rest of what you wrote as a first measure to see if you even have a problem (which you did very well!), people would see the value in your script even more.
It's meant as a suggestion because there are too many people that write scripts out there without emphasizing why the script is valuable.
Jeff, thanks for understanding. As far as possible, I will try to share my experiences and knowledge.
January 15, 2019 at 6:25 am
Having read through this quickly I thought it was valuable. Performance tools are always welcome!
I wondered if the title would be better as