Temporary tables not recognised by Crystal Reports V9.2

  • I am in the process of converting an application to use temporary tables. The SQL Server side of this has been fine; however, my Crystal reports fail to connect to the parameterized stored procedure that uses the temporary table.

    The Crystal error is: "Not supported", ADO code 0x80040E37, from the Microsoft OLE DB Provider for SQL Server.

    Description: Invalid object name '#Treflist'

    SQL State: 42S02

    Native error: 208

    Does anyone know if there is a version of the OLE DB provider that supports temporary tables?

    Thanks,

    Keith

  • It may be due to persistence... can you convert the #tables into ##tables? There's a practical difference in terms of the length of their existence (scope); read BOL for details.
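
    Roughly, the scope difference looks like this (a minimal sketch; the table and column names are just placeholders):

        -- Local temp table: visible only to the connection that created it.
        -- Any second connection (e.g. the one Crystal opens) will get
        -- "Invalid object name" when it references it.
        CREATE TABLE #TRefList (RefID uniqueidentifier NOT NULL);

        -- Global temp table: visible to every connection, and kept alive until
        -- the creating session ends AND no other session is still using it.
        CREATE TABLE ##TRefList (RefID uniqueidentifier NOT NULL);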

  • Thanks Andrew.

    I was aware that local temporary tables only survive the current connection's session, but I had thought that the OLE DB provider would be able to resolve the temporary table name.

    I've changed over to the global temporary table and the reports now run correctly. 🙂
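
    For reference, the pattern that now works looks roughly like this (a sketch only; the procedure and object names are illustrative, and ##TRefList stands in for the old #Treflist):

        CREATE PROCEDURE dbo.rpt_SearchRefList
            @SearchID uniqueidentifier
        AS
        BEGIN
            -- Because the reference list is now a global temp table (##),
            -- the separate session Crystal opens can still resolve the name.
            SELECT RefID
            FROM   ##TRefList
            WHERE  SearchID = @SearchID;
        END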

  • Sounds like your server is splitting the report over multiple sessions. By converting these to global temp tables you are now limiting it so that you CANNOT run more than one instance of your report at the same time (well, most likely), because they will step on each other.

  • Thanks for the input Bob.

    I think we will be all right with multiple reports.

    I would have preferred a local temp table, but the table in question is a list of reference GUIDs returned from a search, together with a GUID identifying the current search. All the reports are filtered on the search identity GUID.

    It was all working OK as a permanent table but led to many, many inserts and deletes on this search result reference table.
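
    Because every row carries the search's GUID, concurrent searches and reports keep to their own rows. A rough sketch of the idea (the schema and the dbo.Items source table are only illustrative):

        -- One shared global temp table; every row is tagged with the GUID of
        -- the search run that produced it.
        CREATE TABLE ##TRefList
        (
            SearchID uniqueidentifier NOT NULL,   -- identity of the search run
            RefID    uniqueidentifier NOT NULL    -- reference returned by the search
        );

        -- Each search inserts its results under its own SearchID.
        DECLARE @SearchID uniqueidentifier;
        SET @SearchID = NEWID();

        INSERT INTO ##TRefList (SearchID, RefID)
        SELECT @SearchID, ItemID
        FROM   dbo.Items                          -- hypothetical source of the search hits
        WHERE  Description LIKE '%widget%';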

  • Out of curiosity, could you tell me why you chose not to create a view?

    OK. If it worked as a permanent table, then I understand.

    However, I am not sure what you are gaining. Temp tables are real tables, just created when they are needed and dropped when they are no longer needed.

    You could just truncate the table at the same point you would have created it.

    I'm thinking that you aren't getting the benefit out of temp tables that you think you are. They are NOT table variables; they are just tables created in tempdb.
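
    To put the two options side by side (object names are placeholders):

        -- Option 1: keep the permanent table and simply clear it (or DELETE just
        -- the relevant rows) at the point where the temp table would otherwise
        -- have been created.
        TRUNCATE TABLE dbo.TRefList;

        -- Option 2: a temp table is still a real table; it just lives in tempdb
        -- and is dropped automatically when the session or scope ends.
        CREATE TABLE #TRefList (RefID uniqueidentifier NOT NULL);
        -- ...populate and use it...
        DROP TABLE #TRefList;   -- optional; it would be dropped automatically anyway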

  • Hi Kenna,

    This is a search result table and may hold just the user-'tagged' items from the search result query. A standard stored proc, which underlies a number of different reports, joins a view across the table in question (there is a rough sketch of that pattern at the end of this post).

    For Bob,

    This reference list table is written to and deleted from by all the users all the time, possibly writing thousands of records at a go without adding or removing any 'real' data. On one of the forums recently I read that, when working on real tables, inserts and deletes are all tracked in the transaction log, which would explain why ours can get very big, very fast, and that using temporary tables would avoid this. My preference would have been a local temp table, and if I could use one I might be able to optimize the design further.

    As indicated earlier, using a global temp table may well achieve my main objective and, if it is not writing to the transaction log, also bring a performance improvement. So far I can say the system seems to run at least as well as before, but I have yet to test the performance implications.

    Thanks for the questions and if my understanding or assumptions are flawed please advise.
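
    The join pattern referred to above is roughly the following (the view and column names are illustrative; a view cannot itself reference a temp table, so the proc does the join at query time):

        CREATE PROCEDURE dbo.rpt_TaggedItems
            @SearchID uniqueidentifier
        AS
        BEGIN
            -- The shared reporting view is joined to the reference list for
            -- one particular search run.
            SELECT v.*
            FROM   dbo.vwReportData AS v
            JOIN   ##TRefList       AS r ON r.RefID = v.ItemID
            WHERE  r.SearchID = @SearchID;
        END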

  • I am not saying that your choice is good or bad; I just want to make sure you know what you are actually getting.

    Temp tables are real tables stored in tempdb. The recovery model for tempdb is SIMPLE. The transactions are still tracked, but the log is automatically truncated. So it is a maintenance gain, but not necessarily a performance gain. And if your other database is also using the SIMPLE recovery model, there is no gain at all.
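
    You can check which recovery model each database is actually using with something like this (the application database name is a placeholder):

        -- Works on SQL Server 2000 and later.
        SELECT DATABASEPROPERTYEX('tempdb',  'Recovery') AS tempdb_recovery,
               DATABASEPROPERTYEX('MyAppDB', 'Recovery') AS app_db_recovery;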

  • Thanks for the input Bob and I value the questioning.

    The maintenance gain is what I am looking for; for us, with lots of small sites and no local DBAs, the transaction log is a bit of a headache. There does seem to be a quite significant performance benefit as well, and I think it occurs when the old database needs to expand its transaction log.

    The simple test I've just run, for a single 12,000-record search result, expanded the transaction log on the old version by 27 MB each time it ran.

    The new version did not affect the transaction log and completed consistently faster, on occasion in a third of the time.

    I am aware that in the production environment the Transaction log space may already be large enough to cope and not need to expand every time.
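
    For anyone repeating the test, a quick way to watch log size and usage before and after a run is:

        -- Reports log file size and percent used for every database.
        DBCC SQLPERF(LOGSPACE);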
