cannot read the next data row for dataset x

  • Hi, a peer asked me today to help with this SSRS error.   There is a lot going on in this report, but the dataset in question uses a table meant just for SSRS and is filtered in the report by the userid of the person running the report.   When I run the query in SSMS, with a valid userid or without one, I get the expected results.  But the report gets the error you see below even when I rig it to use the only userid currently present in the source table.  I poked around on the internet, and the only culprits I found have to do with data types.  But I don't believe a data type has anything to do with this, as the only variables are 1) the userid passed from SSRS, and 2) two table variables, which don't appear to interfere when I run the query in SSMS with and without a valid userid.  Does the community recognize this error as having anything to do with something other than data types?  (A rough sketch of the query's shape is below.)
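
    For context, the offending dataset's query is shaped roughly like this.  Every object and column name here is made up for illustration; only the pattern (userid filter plus two table variables) matches what the report actually does:

        -- very rough shape of the offending dataset's query (all names hypothetical)
        DECLARE @UserId varchar(50) = 'the-only-userid-in-the-table';   -- stands in for the SSRS report parameter

        DECLARE @Limits  TABLE (PartId int, UpperSpecLimit decimal(18,4));   -- the two table variables
        DECLARE @Targets TABLE (PartId int, TargetValue    decimal(18,4));

        SELECT  v.PartId, v.MeasuredValue, l.UpperSpecLimit, t.TargetValue
        FROM    dbo.SsrsManufacturingValues v      -- the table that exists just for SSRS
        JOIN    @Limits  l ON l.PartId = v.PartId
        JOIN    @Targets t ON t.PartId = v.PartId
        WHERE   v.UserId = @UserId;                -- filtered by the user running the report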

    [screenshot: "cannot read the next data row for dataset x" (rsErrorReadingNextDataRow) error]

    Here is one of the posts I found on this subject.   It's hard for me to make the leap between a division and SQL being unable to read a (next) row: https://stackoverflow.com/questions/57414474/rsprocessingerror-reporting-services-error-rserrorreadingnextdatarow

    Here is another one.   I'm going to look closer at my report for some sort of intentional or unintentional cast: https://www.sqlteam.com/forums/topic.asp?TOPIC_ID=180809

    I started dividing and conquering and believe I found the culprit.   I ...

    1. Hardcoded the userid parameter that is passed to the offending dataset as a filter on a table of manufacturing values.  This was important to me because the table used to generate that dataset currently has only one userid value across about 128 rows.  This didn't help.
    2. Stopped passing the parameter altogether and instead hardcoded the userid value in the query.  No help.
    3. Got rid of the table variables in favor of temp tables.  No help.
    4. Got rid of two mathematical total-like expressions at the bottom of the tablix in question.  They occupied their own rows and referred to the value in the first row of another dataset.  No help.
    5. Tore apart a three-part UNION ALL in the query by including only one branch at a time.  When added back, the third (which was the most complicated of the three) seemed to be the culprit, but it only brought down the report, not the same query running exactly as-is in SSMS.
    6. Pulled out the third query and instead inserted it into a new temp table, planning to just UNION ALL the new temp table in.  When I ran the insert into that new temp table, I caught the fact that a divide by zero was occurring, I believe on an average (the divisor) that the query had cast to just two decimal positions (see the sketch after this list).
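
    For what it's worth, step 6 looked roughly like this.  The table and column names are made up; the point is that materializing the third branch by itself surfaces the arithmetic error that was hidden inside the report:

        -- rough recreation of step 6 (names hypothetical): run the third UNION ALL branch
        -- on its own into a temp table so any arithmetic problem surfaces by itself
        IF OBJECT_ID('tempdb..#ThirdBranch') IS NOT NULL DROP TABLE #ThirdBranch;

        DECLARE @UserId varchar(50) = 'the-only-userid-in-the-table';   -- hardcoded per step 2

        SELECT  v.PartId,
                CAST(STDEV(v.MeasuredValue) AS decimal(18,2)) AS StdDev2dp,          -- the early 2-decimal cast
                (MAX(v.UpperSpecLimit) - AVG(v.MeasuredValue))
                    / (3 * CAST(STDEV(v.MeasuredValue) AS decimal(18,2))) AS Cpk     -- divide by zero once the cast std dev is 0.00
        INTO    #ThirdBranch
        FROM    dbo.SsrsManufacturingValues v
        WHERE   v.UserId = @UserId
        GROUP BY v.PartId;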

    So my takeaway was that SSMS was forgiving the divide by zero but that, partnered with SSRS, SQL wasn't.  Disturbing, to say the least.  I think there is a way in SQL to be stricter about what is forgiven; I believe the settings involved are ARITHABORT and ANSI_WARNINGS.
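
    As far as I can tell (this is just a quick demo I put together, not anything from the report), those two settings control whether a divide by zero errors out or quietly comes back as NULL:

        SET ANSI_WARNINGS OFF;
        SET ARITHABORT OFF;
        SELECT 1.0 / 0.0;        -- returns NULL with only a "Division by zero occurred" warning

        SET ANSI_WARNINGS ON;    -- the usual client default
        SELECT 1.0 / 0.0;        -- Msg 8134: Divide by zero error encountered

    From what I can tell, ANSI_WARNINGS is ON by default for both SSMS and the connections SSRS makes, so the divide by zero should really error in both places, which fits what I found later.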

    My plan, after comparing notes with my peers, is simply to either cast that value to more decimal places, or show a blank in the report when the divisor is so small that the division generates a value too high to be useful to our business.
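
    Something along these lines, just as a sketch (names made up as before; NULLIF makes the division come back NULL, which shows as a blank in the report):

        -- option 1: keep more precision in the divisor instead of decimal(18,2)
        -- option 2: return NULL (a blank in the report) when the divisor rounds away to zero
        SELECT  v.PartId,
                (MAX(v.UpperSpecLimit) - AVG(v.MeasuredValue))
                    / NULLIF(3 * CAST(STDEV(v.MeasuredValue) AS decimal(18,9)), 0) AS Cpk
        FROM    dbo.SsrsManufacturingValues v      -- hypothetical table name
        GROUP BY v.PartId;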

  • Got some details? Like the parameter values you're trying to pass, the parameter definition, the stored procedure text, the related column types and sizes... you know, something so we can actually help?

    Maybe show us the parameter values and types you're trying to pass, and some sample data?

    Thanks pietlinden, and sorry for the delay; I was travelling and temporarily distracted.

    I am convinced of what the problem was, and I think maybe I now understand what MS's bizarre message means.   "Can't read next row" probably means "the previous row had a problem, so I (T-SQL) am not going to try getting more rows."  It would have been nice if I had simply gotten a "divide by zero" message (see below) for that previous row.  None of us had any idea what "can't read next row" meant until maybe now.

    And to make things worse, this report INSERTS into a table, keyed by the userid of the user running the report, for reading by the dataset that had the issue.   It first deletes anything for this user that might have been there previously (roughly the pattern sketched below).  Each time the report aborted, it rolled back those rows, and I was fooled (in SSMS) into thinking data from one other user was what was causing the offending dataset to abort.   We tried to ask the user we were working with to tell us what userid he logged in with when the report aborted, but the conversation went south.  Bottom line: I thought SSMS wasn't aborting (i.e., was forgiving) on the same query and data as SSRS was.   I was wrong.
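
    Roughly this kind of pattern (again, names are made up, and I'm not sure whether the real code wraps it in an explicit transaction; the point is that when the report failed later on, the freshly inserted rows were gone again):

        -- @UserId is the report parameter for the user running the report
        BEGIN TRANSACTION;

        DELETE FROM dbo.SsrsManufacturingValues        -- clear whatever this user staged last time
        WHERE  UserId = @UserId;

        INSERT INTO dbo.SsrsManufacturingValues (UserId, PartId, MeasuredValue, UpperSpecLimit)
        SELECT @UserId, s.PartId, s.MeasuredValue, s.UpperSpecLimit
        FROM   dbo.SourceMeasurements s;               -- hypothetical source of the manufacturing values

        COMMIT TRANSACTION;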

    What is happening is as follows... the original author (for cosmetic and simplicity reasons) wanted only the first two decimal places of a standard deviation to show.  But instead of waiting until display time, he/she did an early cast of that figure to decimal(18,2), which wouldn't have been terrible even for this std dev, which is something like .000000004...... But unfortunately another statistic called Cpk (process capability index) was calculated by dividing one of two numerator choices by 3 * that cast std dev.
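
    A minimal repro of just the arithmetic (the numerator values are made up; the std dev is roughly the one above):

        DECLARE @StdDev    float         = 0.000000004;                    -- roughly the real std dev
        DECLARE @StdDev2dp decimal(18,2) = CAST(@StdDev AS decimal(18,2)); -- the early cast; rounds to 0.00

        SELECT (10.0 - 9.5) / (3 * @StdDev2dp) AS Cpk;                     -- Msg 8134: Divide by zero error encountered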

    What I did was eliminate that cast and show the results (as ugly as they are) in a test SSRS report, to create better communication between myself and the real statisticians/engineers.    The Cpk value is astronomical and really meaningless in this situation when calculated using std dev = .000000004....

    What they have decided visually, then, is for me to show "< 0.01" for the std dev when it is that small (rather than the exponent form a float otherwise shows in SSRS), and as a two-decimal number otherwise; to show "> 4.0" in the Cpk text box if the number of data points is 10 or more and the value is that large, or "N/A" if the number of data points is less than 10; and to add a message below the grid explaining that Cpk can't be calculated for a sampling of fewer than 10 data points, when that is indeed the case.
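
    Something like this CASE logic, whether it ends up in the query or as SSRS expressions.  The thresholds come from the description above; the column and table names are assumptions:

        SELECT  CASE WHEN StdDevFull < 0.01
                     THEN '< 0.01'
                     ELSE FORMAT(StdDevFull, 'N2')        -- plain 2-decimal number, no exponent (FORMAT needs SQL Server 2012+)
                END AS StdDevDisplay,
                CASE WHEN DataPoints < 10 THEN 'N/A'      -- too few points to calculate Cpk
                     WHEN Cpk > 4.0       THEN '> 4.0'    -- cap the astronomically large values
                     ELSE FORMAT(Cpk, 'N2')
                END AS CpkDisplay
        FROM    #CpkResults;                              -- hypothetical holding table for the computed values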
