September 8, 2011 at 9:06 am
I have many tables containing similar data.
These tables have names like AA, FR, L, etc.
All the products with product code AA% are in table AA, and so on.
To look up product number AA1234 I need to select from table AA.
But how do I do this?
Here's what I have so far, but it is not working.
I get error Msg 1087, Level 15, State 2.
[Code]
Declare @X as Bigint
Declare @Y as Bigint
Declare @Lookup as Varchar(10)
Declare @Subtable VARCHAR(2)
SET @Lookup ='AA1234'
IF ISNUMERIC(SUBSTRING(@LookUp,2,1))=1
SET @subtable=LEFT(@LookUp,1)
ELSE SET @subtable=left(@LookUp,2)
SELECT @subtable.Prodcode, Supplier.Supplier, ProdType.ProdType, Mnufctr.Mnufctr
FROM @subtable INNER JOIN
Supplier ON @subtable.SupplierID = Supplier.ID INNER JOIN
Mnufctr ON @subtable.MnufctrID = Mnufctr.ID INNER JOIN
ProdType ON @subtable.ProdTypeID = ProdType.ID
WHERE (@subtable.Prodcode = @Lookup)
[/code]
September 8, 2011 at 9:19 am
Some older languages used to perform "macro expansion" which swapped the variable for the value at runtime to construct a runnable statement. SQL Server doesn't. Instead, you have to construct a string representation of the statement and use one of two commands to execute the string, like this:
DECLARE @SQLStatement VARCHAR(500)
SET @SQLStatement =
'SELECT st.Prodcode, s.Supplier, p.ProdType, m.Mnufctr'
+ ' FROM ' + @subtable + ' st'
+ ' INNER JOIN Supplier s ON st.SupplierID = s.ID'
+ ' INNER JOIN Mnufctr m ON st.MnufctrID = m.ID'
+ ' INNER JOIN ProdType p ON st.ProdTypeID = p.ID'
+ ' WHERE (st.Prodcode = ''' + @Lookup + ''')'  -- quote the varchar value inside the string
PRINT @SQLStatement   -- check the generated statement before executing it
EXEC(@SQLStatement)
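If you go this route, a safer variant (not from the original reply, just a sketch reusing the table and column names posted in this thread) is to build only the table name into the string and pass the lookup value as a real parameter via sp_executesql, which takes an NVARCHAR statement:
[Code]
-- Sketch only: table and column names are taken from the posts above,
-- everything else is illustrative. Only the table name is concatenated;
-- the lookup value travels as a parameter, so there are no quoting problems.
DECLARE @subtable SYSNAME
DECLARE @Lookup   VARCHAR(10)
DECLARE @SQL      NVARCHAR(500)

SET @subtable = 'AA'
SET @Lookup   = 'AA1234'

SET @SQL = N'SELECT st.Prodcode, s.Supplier, p.ProdType, m.Mnufctr'
         + N' FROM ' + QUOTENAME(@subtable) + N' st'
         + N' INNER JOIN Supplier s ON st.SupplierID = s.ID'
         + N' INNER JOIN Mnufctr  m ON st.MnufctrID  = m.ID'
         + N' INNER JOIN ProdType p ON st.ProdTypeID = p.ID'
         + N' WHERE st.Prodcode = @Lookup'   -- parameter, not concatenated text

EXEC sp_executesql @SQL, N'@Lookup VARCHAR(10)', @Lookup = @Lookup
[/Code]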
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
September 8, 2011 at 9:19 am
You can't use a variable as the table name in a SELECT statement; you will need to use dynamic SQL.
Start here: Dynamic SQL
September 8, 2011 at 11:01 am
I have to ask: If all of the tables have similar data, why aren't you simply using one big table?
The query would be trivial if all your data was in one place.
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
September 9, 2011 at 2:15 am
The Dixie Flatline (9/8/2011)
I have to ask: If all of the tables have similar data, why aren't you simply using one big table? The query would be trivial if all your data was in one place.
Hi Bob
I figured that once the OP had seen how wretched this scheme is to code against, he would change 😉
September 9, 2011 at 10:44 am
Thanks for your kind assistance, and apologies for my delayed response.
Actually the data is all in one table, but there are nearly two million records.
I was looking to make the process of searching more efficient.
There are about 150 prefixes (QW, QB, A, etc.), each containing between 5,000 and 20,000 records.
I figured that if I could take the data from the main table,
put it into subtables (named according to the character prefix),
then perform the lookup in the appropriately named subtable,
that would be more efficient.
But hey, I'm just learning. What can I say!
I do have a further question, though.
I follow the logic of building an SQL string, and I got it to work.
But what do I do about my @X and @Y variables (declared at the top)?
How do I set @X and @Y?
Before, I was doing this (sorry I didn't post this bit earlier):
[Code]
SET @Lookup = 'QB16785'
SET @X = (SELECT Weight FROM Prodcode WHERE Prodcode = @Lookup)
SET @Y = (SELECT Length FROM Prodcode WHERE Prodcode = @Lookup)
[/Code]
Then doing this against another table:
[Code]
SELECT PackID
, PackType
FROM dbo.Packages
WHERE MinWeight <= @X
AND MaxWeight >= @X
AND MinLen <= @Y
AND MaxLen >= @Y
[/Code]
But this doesn't work, does it! :w00t:
[Code]
SET @X = (exec SQLstr)
[/Code]
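For what it's worth, the usual way to get a scalar value back out of a dynamically built statement is sp_executesql with OUTPUT parameters. A minimal sketch, reusing the table and column names posted above (everything else is illustrative):
[Code]
-- Sketch only: Prodcode, Weight and Length come from the post above;
-- the pattern shown is sp_executesql with OUTPUT parameters.
DECLARE @X BIGINT
DECLARE @Y BIGINT
DECLARE @Lookup VARCHAR(10)
DECLARE @SQL NVARCHAR(500)

SET @Lookup = 'QB16785'
SET @SQL = N'SELECT @X = Weight, @Y = Length'
         + N' FROM Prodcode WHERE Prodcode = @Lookup'
-- If the table name had to be dynamic, it would be concatenated into @SQL
-- here, just like in the earlier example.

EXEC sp_executesql @SQL,
     N'@Lookup VARCHAR(10), @X BIGINT OUTPUT, @Y BIGINT OUTPUT',
     @Lookup = @Lookup, @X = @X OUTPUT, @Y = @Y OUTPUT

-- @X and @Y now hold Weight and Length and can feed the Packages query
[/Code]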
September 9, 2011 at 11:12 am
Hi, instead of creating separate tables to hold the data and writing dynamic SQL queries, I recommend keeping the original table (the one with over 2M records) and, on that table, creating the appropriate set of indexes for your queries. Having the data properly indexed will speed up your queries, and having "static" queries and stored procedures will allow SQL Server to reuse cached query plans, giving better performance for your data access.
As a step further down the road, I could recommend partitioned tables, but you will need to learn a little bit more about SQL first (no offence) 😀
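For example (a sketch only, with the table and column names guessed from the earlier posts), a single nonclustered index on the product code column lets a lookup seek straight to the matching rows without any per-prefix tables:
[Code]
-- Sketch only: dbo.Prodcode and its columns are assumed from earlier posts.
-- A nonclustered index on the product code, covering the columns the
-- lookup queries return, lets WHERE Prodcode = @Lookup do an index seek
-- across all two million rows instead of a scan.
CREATE NONCLUSTERED INDEX IX_Prodcode_Prodcode
    ON dbo.Prodcode (Prodcode)
    INCLUDE (SupplierID, MnufctrID, ProdTypeID, Weight, Length);
[/Code]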
September 12, 2011 at 12:53 am
Thanks, grasshopper. I will stick with my big table for now and find out about partitioned tables.
September 12, 2011 at 2:49 pm
Sonny, if correctly designed and coded, SQL is efficient up into millions and millions of rows. FYI, partitioning is a technique for segmenting a table (for example on item number ranges) while still allowing you to code against it as a single table. How many total items are you talking about?
September 13, 2011 at 7:33 am
There are just under 1.8 million rows in one of the DBs I'm playing with, which is product IDs,
and 1.32 million in another DB, which is legal precedents.
They're both things I'm playing with rather than working on.
I have had a quick Google about for tutorials on the subject of partitioning,
but any links anyone can add would be much appreciated.
September 13, 2011 at 9:13 am
At my last place, the product transaction table was about 70 million rows. A requirement was to extract the transactions for a single product - about 30k to 70k rows - aggregate them to a particular level, reducing the row count to about 5k, and then calculate quantities by location / transaction type, requiring a second and far more complex level of aggregation down to about 50 rows. Around 12 other tables had to be joined in.
The final sproc took around a second to run and consisted of an aggregated extract to a temp table, indexing the temp table, then joining to the remaining tables for the final aggregation to generate the results. Not bad.
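The general shape of that approach, sketched below with made-up object names (none of these are the actual tables from that system):
[Code]
-- Sketch of the pattern only; all table and column names are invented.
DECLARE @ProductID INT
SET @ProductID = 12345

-- 1. Extract and pre-aggregate the rows for one product into a temp table.
SELECT  t.ProductID,
        t.LocationID,
        t.TranType,
        SUM(t.Qty) AS Qty
INTO    #ProductTran
FROM    dbo.ProductTransaction t
WHERE   t.ProductID = @ProductID
GROUP BY t.ProductID, t.LocationID, t.TranType;

-- 2. Index the temp table to support the joins that follow.
CREATE CLUSTERED INDEX cx ON #ProductTran (LocationID, TranType);

-- 3. Join the small, indexed temp table to the remaining tables for the
--    final, more complex aggregation.
SELECT  l.LocationName,
        tt.TranTypeName,
        SUM(pt.Qty) AS TotalQty
FROM    #ProductTran pt
JOIN    dbo.Location l  ON l.LocationID = pt.LocationID
JOIN    dbo.TranType tt ON tt.TranType  = pt.TranType
GROUP BY l.LocationName, tt.TranTypeName;
[/Code]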
September 13, 2011 at 3:11 pm
About partitioning. Look at example "H" here in BOL:
http://msdn.microsoft.com/en-us/library/ms174979.aspx
Partitions are subsets of a single table defined by a range of values. Each partition points to a different filegroup. (So you could put each partition on a separate set of disk resources if you wanted to.) However all partitions belong to a single table. You simply query that table just as if it were not partitioned. Depending on the value of a specified column, the query knows to jump to the appropriate partition to get the data. This is analogous to your scheme to create separate tables for each range of product numbers.
To create a partitioned table, you first create the filegroups to hold the data. Then you create a partition function that logically divides the table based on ranges of values. Then you create a partition scheme that associates each range of values with one of the filegroups. In the CREATE PARTITION SCHEME statement, the partition scheme is tied to the partition function by the AS clause. (The scheme is named myRangePS1, and the function is myRangePF1. There is nothing sacred about the names used.)
Finally you create the table. Instead of specifying that the table is created on PRIMARY, you specify the partition scheme on which it is to be created (in the example: myRangePS1). You also specify the column that is to be passed into the partition function.
In your case, the function would accept varchar input instead of an integer, and the ranges might be separated by 'B','C','D' etc. When you create your table, the column specified in the ON clause would be your product number column.
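Translated into a rough sketch for product-code prefixes (the object names and boundary values below are illustrative, not taken from the BOL example or from this thread):
[Code]
-- Illustrative sketch only; filegroup, function, scheme, and table names
-- are made up, and the boundary values are just examples.

-- 1. Partition function: RANGE RIGHT means each boundary value starts a
--    new partition, so codes from 'B' up to (but not including) 'C' land
--    in the second range, and so on.
CREATE PARTITION FUNCTION pfProdcode (VARCHAR(10))
AS RANGE RIGHT FOR VALUES ('B', 'C', 'D');   -- four partitions: <'B', 'B'-'C', 'C'-'D', >='D'

-- 2. Partition scheme: map each range to a filegroup (here everything goes
--    to PRIMARY for simplicity; each range could go to its own filegroup).
CREATE PARTITION SCHEME psProdcode
AS PARTITION pfProdcode ALL TO ([PRIMARY]);

-- 3. Create the table on the scheme, naming the partitioning column.
CREATE TABLE dbo.ProdcodePartitioned
(
    Prodcode   VARCHAR(10) NOT NULL,
    SupplierID INT,
    MnufctrID  INT,
    ProdTypeID INT,
    Weight     BIGINT,
    Length     BIGINT
)
ON psProdcode (Prodcode);
[/Code]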
Don't get confused by old articles that talk about partitioning as views that UNION ALL multiple tables. That's an old way of doing it before this functionality was created.
October 12, 2011 at 12:26 pm
Have you solved your query issue? I guess the guys on the forum are still willing to help.
Just let us know. :-)
October 13, 2011 at 10:50 am
Cheers mate. No, I haven't solved it yet, in the sense that I haven't had time to really look into partitioning;
it was more an investigation on my part.
I just carried on with everything in one table.
There doesn't seem to be a way to mark the thread as solved.