February 5, 2018 at 9:16 am
Since my last visit, I've gotten by with just using my "Teach yourself SQL" book and lots of web searches. Now it's time to 'talk' to real DBAs again. I am working on creating Stored Procedures for a process that I do every single month. I have two done already and want to make another.
I have the following query lines that work, but I can only seem to run them if I select two lines at a time and then press F5. What am I missing that would let these six ALTER statements run all in a row and then proceed with the statements that come after (creating a new table based off these smaller tables that were imported from CSV files)?
The errors I get are similar to these:
Msg 102, Level 15, State 1, Line 22
Incorrect syntax near ''.
Msg 102, Level 15, State 1, Line 24
Incorrect syntax near ''.
What does it not like about my single quotes?
--Remove providers that are not on the CPC+ Accepted Physician List (Oct 2017)
delete from Mem_BCBSMCom where NPI not in (Select NPI from CMS_Accepted_Providers)
--Add column Program column to table
alter table [Mem_BCBSMCom] add [Program] varchar(8);
update [Mem_BCBSMCom] set [Program] = 'BCBSMCom';
--Add column Program column to table
alter table [Mem_BCBSMMA] add [Program] varchar(7);
update [Mem_BCBSMMA] set [Program] = 'BCBSMMA';
--Add column Program column to table
alter table [Mem_BCN] add [Program] varchar(3);
update [Mem_BCN] set [Program] = 'BCN';
--Add column Program column to table
alter table [Mem_HAP] add [Program] varchar(3);
update [Mem_HAP] set [Program] = 'HAP';
--Add column Program column to table
alter table [Mem_Humana] add [Program] varchar(6);
update [Mem_Humana] set [Program] = 'Humana';
--Remove providers that are not on the CPC+ Accepted Physician List (Oct 2017)
delete from Mem_Priority where [PCP NPI] not in (Select NPI from CMS_Accepted_Providers)
--Add column Program column to table
alter table [Mem_Priority] add [Program] varchar(8);
update [Mem_Priority] set [Program] = 'Priority';
-Mark
MSSQL 2019 Standard, Azure Hosted. Techie/Sysadmin by trade; Three years as a "DBA" now.
February 5, 2018 at 10:05 am
I don't get any syntax errors. That error must be because of some text/code before or after the main code.
But you will likely get execution errors, since the column won't yet exist in the table at the time SQL Server parses the proc.
In these types of situations, I often use dynamic SQL instead:
--Remove providers that are not on the CPC+ Accepted Physician List (Oct 2017)
delete from Mem_BCBSMCom where NPI not in (Select NPI from CMS_Accepted_Providers)
--Add column Program column to table
exec('alter table [Mem_BCBSMCom] add [Program] char(8);')
exec('update [Mem_BCBSMCom] set [Program] = ''BCBSMCom'';')
--Add column Program column to table
exec('alter table [Mem_BCBSMMA] add [Program] char(7);')
exec('update [Mem_BCBSMMA] set [Program] = ''BCBSMMA'';')
--Add column Program column to table
exec('alter table [Mem_BCN] add [Program] char(3);')
exec('update [Mem_BCN] set [Program] = ''BCN'';')
--etc.
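If you're running this as a plain script in SSMS rather than inside a proc, batch separators do the same job: each GO ends a batch, so the ALTER is compiled and run before the UPDATE that references the new column is parsed. A sketch for the first table only:

```sql
-- Each GO ends a batch, so the new column already exists
-- by the time the following UPDATE is compiled.
alter table [Mem_BCBSMCom] add [Program] varchar(8);
GO
update [Mem_BCBSMCom] set [Program] = 'BCBSMCom';
GO
```

Note that GO is an SSMS/sqlcmd batch separator, not a T-SQL statement, so it can't be used inside a stored procedure; there you need the dynamic SQL approach above.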
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
February 5, 2018 at 10:39 am
While I am sure that there are good reasons for doing so on occasion, embedding DDL that modifies your database schema inside a stored procedure is not best practice. For one thing, it stuffs up source control (you can no longer compare your source project with the target database to check for drift, for example). Wherever practically possible, I'd recommend keeping your DDL away from your DML.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
February 5, 2018 at 12:01 pm
Maybe you have a BEGIN without END? Or something similar.
February 5, 2018 at 12:27 pm
Phil Parkin - Monday, February 5, 2018 10:39 AM
While I am sure that there are good reasons for doing so on occasion, embedding DDL that modifies your database schema inside a stored procedure is not best practice. For one thing, it stuffs up source control (you can no longer compare your source project with the target database to check for drift, for example). Wherever practically possible, I'd recommend keeping your DDL away from your DML.
Thank you for this insight. For this particular monthly occasion, I was originally adding the columns that I needed into the CSV file, but I found doing it that way each month was tedious through Excel, or through a command line in Windows. I found the DDL method much easier. I haven't come across another reason to do it this way in anything else yet. I will keep this in mind though moving forward.
Side note - I had to look up DDL and DML again because I couldn't remember what they were. 🙂
-Mark
MSSQL 2019 Standard, Azure Hosted. Techie/Sysadmin by trade; Three years as a "DBA" now.
February 5, 2018 at 1:32 pm
I suggest you mask actual table/db/server names in future posts.
February 5, 2018 at 1:35 pm
Maybe you should read something on relational database design, independent of SQL. If you need to constantly add columns to your tables, then your problem is of a different sort. An MS SQL database is not used for the same things as CSV or Excel files. Whatever you are doing to CSV or Excel files should not be mechanically reproduced in any database system. That is a bigger problem than a syntax error in some script.
February 5, 2018 at 1:53 pm
Zidar - Monday, February 5, 2018 1:35 PM
Maybe you should read something on relational database design, independent of SQL. If you need to constantly add columns to your tables, then your problem is of a different sort. An MS SQL database is not used for the same things as CSV or Excel files. Whatever you are doing to CSV or Excel files should not be mechanically reproduced in any database system. That is a bigger problem than a syntax error in some script.
Zidar,
At the moment I'm following what the previous "DBA" was doing. Every month we have to assemble a report, and it is significantly easier to assemble it using T-SQL queries. I attempted to do it with Excel at one point and it was a fruitless effort; the sheets became too large. The data in the database is not used for storage of any kind. I'm just using SQL to manipulate the data because, as the previous person found, trying to do it with Excel was just a waste of time. All the data imported into SQL comes from CSVs or spreadsheets, though, which are quite large: 50,000 rows or more. In a nutshell, this is the process:
0. Remove the prior month's tables.
1. Download membership 'rosters' CSVs from the insurance companies (which all have different column headers).
2. Import the rosters into their own tables.
3. Add the "Program" column to each table and fill it with the program name for that particular roster.
4. Filter the membership rosters to remove certain records.
5. Join the membership rosters into one large roster.
6. Use unions and joins to create two new tables on certain data columns.
7. Export and send records from the two new tables created in step 6 to the parties requiring them.
I realize this is not glorious DBA (and I use the title DBA loosely) work, but it is far more efficient than Excel.
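In T-SQL terms, steps 3 through 6 boil down to something like this for each roster (table and column names here are illustrative, not the real schema):

```sql
-- Step 3: tag each imported roster with its program name
alter table [Mem_BCN] add [Program] varchar(3);
GO
update [Mem_BCN] set [Program] = 'BCN';

-- Step 4: filter out providers not on the accepted list
delete from [Mem_BCN]
where NPI not in (select NPI from CMS_Accepted_Providers);

-- Steps 5-6: combine the tagged rosters into one large roster table
select NPI, MemberID, [Program]
into [Roster_All]
from [Mem_BCN]
union all
select NPI, MemberID, [Program]
from [Mem_HAP];
```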
-Mark
MSSQL 2019 Standard, Azure Hosted. Techie/Sysadmin by trade; Three years as a "DBA" now.
February 6, 2018 at 6:16 am
usererror - Monday, February 5, 2018 1:53 PM
Zidar - Monday, February 5, 2018 1:35 PM
Maybe you should read something on relational database design, independent of SQL. If you need to constantly add columns to your tables, then your problem is of a different sort. An MS SQL database is not used for the same things as CSV or Excel files. Whatever you are doing to CSV or Excel files should not be mechanically reproduced in any database system. That is a bigger problem than a syntax error in some script.
Zidar,
At the moment I'm following what the previous "DBA" was doing. Every month we have to assemble a report, and it is significantly easier to assemble it using T-SQL queries. I attempted to do it with Excel at one point and it was a fruitless effort; the sheets became too large. The data in the database is not used for storage of any kind. I'm just using SQL to manipulate the data because, as the previous person found, trying to do it with Excel was just a waste of time. All the data imported into SQL comes from CSVs or spreadsheets, though, which are quite large: 50,000 rows or more. In a nutshell, this is the process:
0. Remove the prior month's tables.
1. Download membership 'rosters' CSVs from the insurance companies (which all have different column headers).
2. Import the rosters into their own tables.
3. Add the "Program" column to each table and fill it with the program name for that particular roster.
4. Filter the membership rosters to remove certain records.
5. Join the membership rosters into one large roster.
6. Use unions and joins to create two new tables on certain data columns.
7. Export and send records from the two new tables created in step 6 to the parties requiring them.
I realize this is not glorious DBA (and I use the title DBA loosely) work, but it is far more efficient than Excel.
Sounds like a job for an SSIS package, rather than a stored procedure.
(Please note, I'm not an SSIS developer, but I've got developers who do similar workloads of importing data)
As for the 0 step, why remove the prior tables? I'm presuming that means you drop them? Why not simply truncate the tables instead, which would keep the table structures (columns, etc.), reset any identity values, and save some log thrashing? Or do the headers in the files you're importing change from month to month?
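On that last point, if the headers might change, a quick check after each import can flag it before anything downstream breaks. A sketch against INFORMATION_SCHEMA (the expected column list here is illustrative):

```sql
-- Any rows returned mean the vendor added or renamed a column
-- compared to last month's expected file layout.
select c.COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS as c
where c.TABLE_NAME = 'Mem_BCN'
  and c.COLUMN_NAME not in ('NPI', 'MemberID', 'MemberName');
```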
February 6, 2018 at 6:27 am
jasona.work - Tuesday, February 6, 2018 6:16 AM
Sounds like a job for an SSIS package, rather than a stored procedure.
(Please note, I'm not an SSIS developer, but I've got developers who do similar workloads of importing data)
As for the 0 step, why remove the prior tables? I'm presuming that indicates you drop them? Why not simply truncate the tables, which would keep the table structures (columns, etc.), reset any ID values, and save some log thrashing? Or, do the headers in the files you're importing change from month-to-month?
JasonA -
I'm not quite there yet on my skillset. I have tested a few SSIS import packages (for other purposes) but they stopped working after our IT department reimaged my workstation. The difficulty I had with my previous SSIS packages is that our SQL 2016 server is remotely hosted at Microsoft's Azure site. I cannot get our IT vendor to budge on setting up a VPN tunnel to Azure and joining our SQL Server to our local domain. So I have no AD authentication. It's extremely irritating at times to have to do everything with SQL Authentication, but I digress...
I had been removing the previous tables just because it was easier and I am, again, just not there yet with my skillset. I'm able to produce the results our team needs on a daily basis, but the time to learn is not always there; the focus is usually on getting done what I need done quickly. I'll get there; it's probably going to take years. 🙂 In theory the headers on the source tables should be consistent each month, but one vendor did alert us that some changes are coming up later this year.
-Mark
MSSQL 2019 Standard, Azure Hosted. Techie/Sysadmin by trade; Three years as a "DBA" now.
February 6, 2018 at 6:33 am
usererror - Tuesday, February 6, 2018 6:27 AM
jasona.work - Tuesday, February 6, 2018 6:16 AM
Sounds like a job for an SSIS package, rather than a stored procedure.
(Please note, I'm not an SSIS developer, but I've got developers who do similar workloads of importing data)
As for the 0 step, why remove the prior tables? I'm presuming that indicates you drop them? Why not simply truncate the tables, which would keep the table structures (columns, etc.), reset any ID values, and save some log thrashing? Or, do the headers in the files you're importing change from month-to-month?
JasonA -
I'm not quite there yet on my skillset. I have tested a few SSIS import packages (for other purposes) but they stopped working after our IT department reimaged my workstation. The difficulty I had with my previous SSIS packages is that our SQL 2016 server is remotely hosted at Microsoft's Azure site. I cannot get our IT vendor to budge on setting up a VPN tunnel to Azure and joining our SQL Server to our local domain. So I have no AD authentication. It's extremely irritating at times to have to do everything with SQL Authentication, but I digress...
I had been removing the previous tables just because it was easier and I am, again, just not there yet with my skillset. I'm able to produce the results our team needs on a daily basis, but the time to learn is not always there; the focus is usually on getting done what I need done quickly. I'll get there; it's probably going to take years. 🙂 In theory the headers on the source tables should be consistent each month, but one vendor did alert us that some changes are coming up later this year.
Regarding your last paragraph, I do suggest that this would be a very quick and worthwhile change to learn about and implement.
Instead of
DROP TABLE x
SELECT ... INTO x FROM ...
(SQL Server has no CREATE TABLE AS; SELECT INTO is the equivalent), just
TRUNCATE TABLE x
(assuming no foreign keys reference the table being truncated ...)
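For the tables in this thread, that looks like the following (with a fallback in case a foreign key ever gets added):

```sql
-- Keep the table structure and just empty it each month.
truncate table [Mem_BCN];

-- TRUNCATE fails if another table references this one via a foreign key;
-- in that case, fall back to a (slower, fully logged) DELETE:
-- delete from [Mem_BCN];
```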
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin