April 7, 2016 at 2:52 pm
Good Day
I'm sure my issue isn't unique. I'm trying to add a table to my SQL Server diagram, but SQL Server only allows you to reference tables in the same DB. I thought I could cheat and get around that with synonyms, but that's not an option either.
Shy of recreating the table in my DB and having our ETL process feed two tables (= time), does anyone know how to get around this?
I haven't tested just adding a check constraint using T-SQL code, but I'm not even sure that's an option now.
Thanks in advance for your responses.
April 7, 2016 at 10:21 pm
ttdeveloper (4/7/2016)
Good Day
I'm sure my issue isn't unique. I'm trying to add a table to my SQL Server diagram, but SQL Server only allows you to reference tables in the same DB. I thought I could cheat and get around that with synonyms, but that's not an option either.
Shy of recreating the table in my DB and having our ETL process feed two tables (= time), does anyone know how to get around this?
I haven't tested just adding a check constraint using T-SQL code, but I'm not even sure that's an option now.
Thanks in advance for your responses.
You can only use tables from the same database in a SQL Server diagram.
If you have multiple databases that contain the objects you need, SQL Server diagrams are simply not the tool.
If you need to enforce relationships etc. between tables in different databases, that gets a little tricky. Normally, a check constraint can only reference columns from the SAME row of the table. You can work around this by using a UDF - I'm not sure, but the UDF may be able to reference another database (either directly or through a synonym). I would lean towards using a synonym so that when you migrate to a new environment the code does not need to change; only the synonym(s) would need to change.
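Something along these lines is what I have in mind - completely untested, and every object name below (the RefDB database, DimCustomer, FactSales, the column names) is just a placeholder for illustration:
[code]
-- Synonym pointing at the parent (dimension) table in the other database
CREATE SYNONYM dbo.DimCustomer_Syn FOR RefDB.dbo.DimCustomer;
GO

-- Scalar UDF that reports whether a given key exists in the parent table
CREATE FUNCTION dbo.fn_CustomerExists (@CustomerKey int)
RETURNS bit
AS
BEGIN
    RETURN CASE WHEN EXISTS (SELECT 1
                             FROM dbo.DimCustomer_Syn
                             WHERE CustomerKey = @CustomerKey)
                THEN 1 ELSE 0 END;
END;
GO

-- Check constraint on the child (fact) table that calls the UDF
ALTER TABLE dbo.FactSales WITH CHECK
    ADD CONSTRAINT CK_FactSales_CustomerExists
    CHECK (dbo.fn_CustomerExists(CustomerKey) = 1);
[/code]
When you move environments, only the CREATE SYNONYM statement would need to point somewhere else.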
April 8, 2016 at 6:44 am
Good Morning Happy
Great suggestion on the UDF. To be honest, I didn't think about that option. I'll give it a shot this morning and see what the outcome is. To your point, the SQL Server diagram is somewhat limited in this capacity. Shame this hasn't been thought through, but I'm sure Microsoft have their reasons. I'll test and report back if it works.
April 8, 2016 at 7:02 am
happycat59 (4/7/2016)
You can work around this by using a UDF - not sure but the UDF may be able to reference another database (either directly or using a synonym).
Be careful there. While it sounds great, it doesn't actually work properly.
With a foreign key on Table1 referencing Table2, you can't insert a row into Table1 that doesn't have a matching row in Table2, and you can't delete rows from Table2 that still have matching rows in Table1.
If you put a check constraint using a UDF on Table1 to emulate that foreign key, you still can't insert rows into Table1 that don't have a matching row in Table2 (the constraint on Table1 prevents that), but you can delete rows from Table2 that have matching rows in Table1, because there's no check constraint on Table2.
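In other words, using made-up names (Table1 is the child with the UDF-based check constraint, Table2 the parent):
[code]
-- Fails, as intended: the check constraint on Table1 rejects an orphan row
INSERT INTO dbo.Table1 (ParentKey) VALUES (999);   -- no such key in Table2

-- Succeeds, which a real foreign key would prevent: nothing on Table2
-- re-checks the rows already sitting in Table1
DELETE FROM dbo.Table2 WHERE ParentKey = 1;        -- key 1 is still referenced by Table1
[/code]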
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 8, 2016 at 7:27 am
So here's the thought process I had in mind, and thanks for bringing that to light.
In the function I create (not yet coded), I was going to essentially emulate the ON DELETE and ON UPDATE actions the same way I would have in a diagram (i.e. cascade, set null or no action). In addition to that, I was also going to check for the existence of a key in the parent table (in this case the external table) before any insert into the FACT, else fail the insert.
I understand this is long-winded and prone to rows slipping through in some scenario I haven't thought of. So are we saying the only viable way of having SQL manage this is to recreate the table in the DB? That sounds somewhat redundant, no?
April 8, 2016 at 7:40 am
Provided you have properly written check constraints on both sides, it should work (checking on inserts isn't enough; you need to consider updates as well), emphasis on 'should'.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 8, 2016 at 8:02 am
OK, thanks Gila and Happy. Sucks that this is the only option I have, but it is what it is.
Thanks again
April 9, 2016 at 10:30 pm
ttdeveloper (4/8/2016)
Good Morning Happy
Great suggestion on the UDF. To be honest, I didn't think about that option. I'll give it a shot this morning and see what the outcome is. To your point, the SQL Server diagram is somewhat limited in this capacity. Shame this hasn't been thought through, but I'm sure Microsoft have their reasons. I'll test and report back if it works.
Heh... they did think it through once upon a time. They provided a limited tool to whet your appetite for the more expensive tool known as Visio. Not sure that does reverse engineering anymore, though.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 9, 2016 at 10:39 pm
I believe I'd be tempted to use a trigger to do inter-database referential integrity. Specifically, an "Instead of" trigger. From BOL...
[font="Arial Black"]DML Triggers[/font]
DML triggers are frequently used for enforcing business rules and data integrity. SQL Server provides declarative referential integrity (DRI) through the ALTER TABLE and CREATE TABLE statements. However, DRI does not provide cross-database referential integrity. Referential integrity refers to the rules about the relationships between the primary and foreign keys of tables. To enforce referential integrity, use the PRIMARY KEY and FOREIGN KEY constraints in ALTER TABLE and CREATE TABLE. If constraints exist on the trigger table, they are checked after the INSTEAD OF trigger execution and before the AFTER trigger execution. If the constraints are violated, the INSTEAD OF trigger actions are rolled back and the AFTER trigger is not fired.
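To give a rough idea of what I mean (completely untested, and every name here is made up for illustration; the synonym points at the parent table in the other database):
[code]
CREATE TRIGGER trg_FactSales_InsteadOfInsert
ON dbo.FactSales
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Reject the whole batch if any incoming row has no matching parent key
    IF EXISTS (SELECT 1
               FROM inserted i
               WHERE NOT EXISTS (SELECT 1
                                 FROM dbo.DimCustomer_Syn d
                                 WHERE d.CustomerKey = i.CustomerKey))
    BEGIN
        RAISERROR ('CustomerKey not found in the reference database.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    -- Every row checked out, so do the insert the trigger intercepted
    INSERT INTO dbo.FactSales (CustomerKey, SaleAmount)
    SELECT CustomerKey, SaleAmount
    FROM inserted;
END;
[/code]
You'd need a matching trigger on the parent side to catch deletes and key updates over there, per Gail's earlier point.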
--Jeff Moden
Change is inevitable... Change for the better is not.
April 11, 2016 at 7:25 am
Hey Jeff
Thanks for this suggestion. I did consider using triggers across our databases, but one thing has me worried about this.
Performance. Our design will essentially be querying millions of staged records against reference (dimension) tables and creating foreign keys in a FACT table (pretty standard). For every query, having a trigger also check records raises a performance worry - but I'm definitely not ruling this option out.
Guess I'm trying to ascertain which option will give the most bang for the buck:
1. Recreating/moving the table into the said DB and having SSIS feed that table too.
2. Creating a function to check RI, which I know may miss records on insert and/or update.
3. A trigger firing on every insert/update.
This seems to be a big miss by MS.
April 11, 2016 at 7:54 am
ttdeveloper (4/11/2016)
This seems to be a big miss by MS.
No, it's not. Neither is the DB diagram being restricted to a DB a sign of a lack of forethought.
The database is the unit for enforced integrity, backups, restores, etc. It's a container for related things. You're choosing explicitly to go against that, to have related, interconnected databases in two different containers, which means you're going to have to put in manual work to get integrity and to back them up together and restore them to a consistent state.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 11, 2016 at 3:56 pm
ttdeveloper (4/11/2016)
Hey Jeff
Thanks for this suggestion. I did consider using triggers across our databases, but one thing has me worried about this.
Performance. Our design will essentially be querying millions of staged records against reference (dimension) tables and creating foreign keys in a FACT table (pretty standard). For every query, having a trigger also check records raises a performance worry - but I'm definitely not ruling this option out.
Guess I'm trying to ascertain which option will give the most bang for the buck:
1. Recreating/moving the table into the said DB and having SSIS feed that table too.
2. Creating a function to check RI, which I know may miss records on insert and/or update.
3. A trigger firing on every insert/update.
This seems to be a big miss by MS.
I think that Gail summed it up pretty nicely...
The database is the unit for enforced integrity, backups, restores, etc. It's a container for related things. You're choosing explicitly to go against that, to have related, interconnected databases in two different containers, which means you're going to have to put in manual work to get integrity and to back them up together and restore them to a consistent state.
For those times that you choose to relate databases, there are tools that do work. For example, if the DRI you describe is a must, then put the table in the database where you need to do the DRI and use one of the many tools available. Triggers aren't the slow monsters you may think they are if they're properly written, as a home-grown solution. If you want a not-so-home-grown solution, you could always set up something like transactional replication. The fact that databases can talk to each other at all is a small miracle and a huge feature to begin with, but... it's not necessarily a good idea to do such a thing, because if you ever need to copy the database some place else, you may also have to copy several other databases just to get the one to work because of its dependencies on other databases.
There are some other major considerations for you to think about. For example... let's say that FKs could span databases. What do you suppose would happen to the child database if the database containing the primary tables was down or in the process of a restore?
Triggers wouldn't be much better in that case, although they would imply that a copy of the FK parent table is kept locally, which would allow the database to possibly survive autonomously if the parent database were down.
There may also be another option. How big is the parent table, and how much delay can you withstand before changes made to the parent table show up in the child database's copy?
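To say what I'm thinking out loud: keep a local copy of the parent table in the fact database, refresh it on a schedule (SQL Agent job, SSIS, whatever), and put a real FK against that local copy. A rough, untested sketch follows; every object name is made up for illustration and the synonym points at the table in the other database.
[code]
-- Periodic refresh of a local copy of the dimension/parent table.
-- dbo.DimCustomer_Local lives in this database and the fact table has a real FK to it;
-- dbo.DimCustomer_Syn is a synonym for the table in the other database.
MERGE dbo.DimCustomer_Local AS tgt
USING dbo.DimCustomer_Syn   AS src
    ON tgt.CustomerKey = src.CustomerKey
WHEN MATCHED AND tgt.CustomerName <> src.CustomerName THEN
    UPDATE SET tgt.CustomerName = src.CustomerName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerKey, CustomerName)
    VALUES (src.CustomerKey, src.CustomerName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;   -- the real FK will stop this if fact rows still reference the key
[/code]
The trade-off is how stale you can let that copy get, which is why I asked about the size of the table and the delay you can live with.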
--Jeff Moden
Change is inevitable... Change for the better is not.
April 12, 2016 at 7:26 am
No, it's not. Neither is the DB diagram being restricted to a DB a sign of a lack of forethought.
The database is the unit for enforced integrity, backups, restores, etc. It's a container for related things. You're choosing explicitly to go against that, to have related, interconnected databases in two different containers, which means you're going to have to put in manual work to get integrity and to back them up together and restore them to a consistent state.
I feel like I'm too low on the food chain to argue with anything Microsoft puts out; however, there are instances, in my case specifically, where databases do have to remain separate from an application build standpoint, and it's just not feasible to have it all in one. In my case, tables need to be shared across different processes or access levels, and I wouldn't think it uncommon or untoward for a need like this to arise. A simple fix of allowing RI across DBs would still let SQL do its job and also provide the flexibility for table management to stay in its place.
But to your point, I couldn't sell this to the boss, so we have decided to consolidate databases and, subsequently, tables. Thankfully we are migrating over to 2012 and still have time to plan and play.
The kicker here, which I think answers Jeff's point, was that there were going to be too many parts to manage (triggers, UDFs or some FK spanning several DBs).
This site provides some really good insight and has been a go-to place for me as a professional in this game, so hands down, I have to say I appreciate all the help I've received.
April 12, 2016 at 7:46 am
Thank YOU for the feedback on what you're doing. It's great to have some insight as to the needs of others.
For what it's worth, I also sometimes run across things in SQL Server where I scratch my head and say to myself (and, sometimes, out loud), "Really? What would have been so difficult about making it do <insert some feature here>?" Shrinks and index maintenance are one (actually, many... I've got lots of complaints there for larger databases) of those things for me.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 12, 2016 at 7:59 am
ttdeveloper (4/12/2016)
A [s]simple[/s] incredibly complicated fix with a lot of side effects and additional impacts of allowing RI across DBs would still let SQL do its job and also provide the flexibility for table management to stay in its place
Fixed for you. 😉
Yes, there are times when it makes sense to spread the data across multiple databases, but then you're choosing to go against the 'database as a unit of integrity' concept that's built into SQL, and, as at any time when you choose to use a tool against its design, you end up putting in additional effort to get things working the way you want.
I can take a sports car for some off-road racing because it's more comfortable than a jeep, but I'm going to pay for doing so later.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability