December 24, 2008 at 4:02 am
Does anyone know how to put a dynamically-built query (using EXECUTE or sp_executesql) inside a user-defined function?
My problem is that I need a table output to join with other tables, for which a user-defined function seemed like the best approach. However, the query that produces the table output is also built dynamically.
Then I found out that sp_executesql and EXECUTE cannot be used inside a user-defined function, so my best strategy turns out to be impossible (as far as I know).
Is it really impossible, or is there another way around it? Thanks for any responses.
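For reference, the usual workaround is to run the dynamic query in a stored procedure instead and capture its result set with INSERT ... EXEC into a temp table, which can then be joined like any other table. A rough sketch only; the procedure, table, and column names below are illustrative, not from the original question:
--sketch of the stored-procedure workaround (illustrative names only):
--dynamic SQL is allowed inside a procedure, and the caller can capture its output
create procedure dbo.GetDynamicOutput
as
begin
    declare @sql nvarchar(max)
    set @sql = N'select ItemID, GroupID from dbo.SomeSource'  --built dynamically in practice
    exec sp_executesql @sql
end
go
create table #DynamicOutput (ItemID int, GroupID int)
insert #DynamicOutput (ItemID, GroupID)
exec dbo.GetDynamicOutput
--now join the captured output like a normal table
select t.*
from #DynamicOutput d
inner join dbo.Transactions t on t.ItemID = d.ItemID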
Regards,
Suparurk S
December 24, 2008 at 5:31 am
Depending on what you are actually trying to do, you may be able to accomplish it with a CLR procedure.
You may want to post some information about what you are really trying to accomplish. Once you have gotten to the point in a design where you need to join dynamically generated query output tables, you are very likely to have strayed from the path of good design (don't take that personally, everyone has strayed from that path a few times). It may be possible to do some design re-work and solve the problem in a much more SQL Server friendly manner.
December 24, 2008 at 10:13 pm
Thank you for your reply. My task: I have master data (say, TableA) with 4x,xxx records. Users need to map sets of data into groups for reporting purposes, such as:
Code 00000-10000, 20000-30000 except 25000-25999 = Group 1
Code 10001-19999 = Group 2
The mappings may change each month.
I decided to create a four-column table (TableB) -- Group, BeginCode, EndCode, IsExcluded -- to keep the mapping data, so that users do not need to map all 4x,xxx records one by one.
Then I generate a WHERE clause from TableB for each Group and use it in dynamic SQL to map TableA data to its Group in a temporary table (sketched below), so that I can join this temporary table with the transaction data and get results broken out by Group.
Actually, it already seems too complicated :unsure:. The requirement is that users can change the Groups and their code ranges, but they cannot map TableA to Groups by hand or write SQL queries themselves (too many rows). Any suggestions are welcome.
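To make the current approach concrete, here is a rough sketch of building the per-group WHERE clause described above, using FOR XML PATH to concatenate the ranges from TableB. The TableB column names come from the description above; the Code column of TableA and the variable names are assumptions. The resulting string would then be embedded in dynamic SQL that fills the temporary mapping table.
declare @GroupID int, @where nvarchar(max)
set @GroupID = 1
--included ranges, OR'd together
select @where = '(' + stuff(
    (select ' or Code between ' + convert(varchar(10), BeginCode)
            + ' and ' + convert(varchar(10), EndCode)
     from TableB
     where [Group] = @GroupID and IsExcluded = 0
     for xml path('')), 1, 4, '') + ')'
--excluded ranges, if any
select @where = @where + isnull(' and not (' + stuff(
    (select ' or Code between ' + convert(varchar(10), BeginCode)
            + ' and ' + convert(varchar(10), EndCode)
     from TableB
     where [Group] = @GroupID and IsExcluded = 1
     for xml path('')), 1, 4, '') + ')', '')
--for Group 1 this should produce something like:
--(Code between 0 and 10000 or Code between 20000 and 30000) and not (Code between 25000 and 25999)
print @where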
Regards,
Suparurk S
December 26, 2008 at 1:45 am
It doesn't seem to me that you'll need a UDF; a stored procedure with a cursor would solve your problem. If performance is a concern, then you should adopt what Michael suggested and use a CLR procedure 🙂
cheers 🙂
jon
December 26, 2008 at 5:54 am
Cursors are evil, aren't they? 🙂
Here's a solution that seems to work without cursors or CLR.
create table TableA(itemid int identity(1, 1), itemname sysname)
go
set rowcount 500
--populate tableA. if you need more rows, change rowcount and uncomment references to tables b and c
-- in the query
insert TableA (itemname)
select 'test item ' + convert(varchar, row_number() over (order by a.object_id))
from sys.all_objects a --, sys.all_objects b--, sys.objects c
set rowcount 0
go
create table TableB(groupid int, minrange int, maxrange int, isexcluded bit)
go
--populate tableB. add a few ranges included and excluded
insert TableB (groupid, minrange, maxrange, isexcluded)
values(1, 0, 100, 0)
insert TableB (groupid, minrange, maxrange, isexcluded)
values(1, 200, 300, 0)
insert TableB (groupid, minrange, maxrange, isexcluded)
values(1, 250, 299, 1)
insert TableB (groupid, minrange, maxrange, isexcluded)
values(1, 210, 220, 1)
insert TableB (groupid, minrange, maxrange, isexcluded)
values(1, 50, 60, 1)
--group 2 for testing..
insert TableB (groupid, minrange, maxrange, isexcluded)
values(2, 101, 199, 0)
insert TableB (groupid, minrange, maxrange, isexcluded)
values(2, 301, 310, 0)
go
--display contents of TableB
select * from TableB
--select rows for group 1
declare @groupid int
set @groupid = 1
select a.*
from TableA a
inner join
    (select minrange, maxrange from TableB
     where isexcluded = 0 and groupid = @groupid) b
    on a.itemid between b.minrange and b.maxrange
where a.itemid not in
    (select c.itemid
     from TableA c
     inner join
        (select minrange, maxrange from TableB
         where isexcluded = 1 and groupid = @groupid) d
        on c.itemid between d.minrange and d.maxrange)
go
--clean up
drop table TableA
go
drop table TableB
Regards
Piotr
...and your only reply is slàinte mhath
December 26, 2008 at 6:49 am
In a situation like this, I'd make a table like this instead:
Table: CodeGroups
CodeID
GroupID
IsExcluded
Store a groupID for every codeID. Then, you can write your queries just by linking through this table. Something like:
SELECT x
FROM SomeTable ST
INNER JOIN CodeGroups CG ON ST.CodeID = CG.CodeID
WHERE CG.GroupID = 4
You will have to build a small utility for your users to update the codes table, but that's a lot easier, because you can just make a simple stored procedure that does something like:
UPDATE CodeGroups
SET GroupID = @GroupID,
IsExcluded = @IsExcluded
WHERE CodeID BETWEEN @MinRange AND @MaxRange
This approach gives you a normalized table.
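A slightly fuller sketch of that layout, purely for illustration; the data types, the nullable GroupID (for codes that have not been mapped yet), and the procedure name are assumptions rather than anything from the post:
create table CodeGroups (
    CodeID int not null primary key,
    GroupID int null,        --null until a user assigns the code to a group
    IsExcluded bit not null default 0
)
go
--the small utility procedure mentioned above
create procedure dbo.usp_SetCodeGroup
    @MinRange int,
    @MaxRange int,
    @GroupID int,
    @IsExcluded bit = 0
as
begin
    update CodeGroups
    set GroupID = @GroupID,
        IsExcluded = @IsExcluded
    where CodeID between @MinRange and @MaxRange
end
go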
December 26, 2008 at 10:51 am
Thanks for all the suggestions.
For Piotr's solution, I have never written a JOIN like the one you suggested. It would be worth testing to see if it works in my case.
For Garadin's solution, it seems very simple to do, but I have another problem -- users can also update TableA. I may have to create a trigger attached to TableA so that the list can be updated automatically.
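If it helps, a minimal sketch of such a trigger, building on the CodeGroups layout sketched earlier; it assumes TableA rows carry a CodeID column and that new codes simply land in CodeGroups unmapped until a user assigns them:
create trigger trg_TableA_NewCodes on TableA
after insert
as
begin
    set nocount on
    --register each new code with no group yet; the mapping utility assigns it later
    insert CodeGroups (CodeID, GroupID, IsExcluded)
    select i.CodeID, null, 0
    from inserted i
    where not exists (select 1 from CodeGroups cg where cg.CodeID = i.CodeID)
end
go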
Michael and Jon, I am old school and still can't write CLR procedures. :crying: Well, if none of the other options work, maybe it's time for me to learn.
Many ideas to work on...:satisfied: