January 13, 2023 at 11:02 pm
Well, the flip side of that question is: if you're passing the information on to a system that wants to read JSON, do you store the relationally shredded data in your database and convert it back to JSON every time it's requested, or just store the JSON and pass that along? Especially given the somewhat volatile nature of JSON/XML/etc.: if your initial parse of the input was incomplete for whatever reason, that information is now lost.
JSON has not changed data modeling at all as far as I'm concerned. From a design and admin point of view, how many NVARCHAR(MAX) columns belong in a table? IMO, not more than one. Whenever possible I've normalized NVARCHAR(MAX) columns into their own table with a surrogate key and a foreign key reference back to the containing table. As Jeff wrote, though, it depends.
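A minimal sketch of that layout (table and column names are invented for illustration): the wide NVARCHAR(MAX) column moves to its own "sister" table, keyed 1:1 back to the parent:

```sql
-- Parent table keeps only the short, frequently read columns.
CREATE TABLE dbo.Document
(
    DocumentID int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Document PRIMARY KEY CLUSTERED,
    Title      nvarchar(200) NOT NULL,
    CreatedOn  datetime2(0)  NOT NULL
        CONSTRAINT DF_Document_CreatedOn DEFAULT (SYSUTCDATETIME())
);

-- Sister table holds the single NVARCHAR(MAX) column, 1:1 with the parent.
CREATE TABLE dbo.DocumentBody
(
    DocumentID int NOT NULL
        CONSTRAINT PK_DocumentBody PRIMARY KEY CLUSTERED
        CONSTRAINT FK_DocumentBody_Document
            REFERENCES dbo.Document (DocumentID),
    Body       nvarchar(max) NOT NULL
);
```

Scans of dbo.Document stay narrow and page-dense; the LOB pages are only touched when a body is actually requested.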
No one shall be able to expel us from the paradise that Cantor created for us.
December 14, 2023 at 6:26 pm
Bump. Maybe the prior posting was never visible
No one shall be able to expel us from the paradise that Cantor created for us.
December 14, 2023 at 8:26 pm
Bump. Maybe the prior posting was never visible
This site has had the problem of not showing the first post on subsequent pages until an additional post is made, and it's been that way for a very long time now. I do wish they'd fix that, along with the broken "jump to a given post" links, the incorrect page counts caused by the SPAM removals, and a whole lot more. Steve recently said they were going to work on some of these soon.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 14, 2023 at 8:32 pm
ZZartin wrote: Well, the flip side of that question is: if you're passing the information on to a system that wants to read JSON, do you store the relationally shredded data in your database and convert it back to JSON every time it's requested, or just store the JSON and pass that along? Especially given the somewhat volatile nature of JSON/XML/etc.: if your initial parse of the input was incomplete for whatever reason, that information is now lost.
JSON has not changed data modeling at all as far as I'm concerned. From a design and admin point of view, how many NVARCHAR(MAX) columns belong in a table? IMO, not more than one. Whenever possible I've normalized NVARCHAR(MAX) columns into their own table with a surrogate key and a foreign key reference back to the containing table. As Jeff wrote, though, it depends.
I actually set the table option to have "large value types" (LOBs) be stored "out of row" (to prevent "trapped short rows" and other page-density issues), which is effectively the same thing you've done, except I take it a step further. I've also recently taken to using the TEXTIMAGE_ON option when creating a table to have the large objects stored on a different filegroup, which helps prevent a wealth of issues.
Of course, that doesn't work on existing tables, and so I'm also doing the swap'n'drop thing on some of my larger tables in some of my databases.
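For reference, a sketch of both options (the table and filegroup names are illustrative; TEXTIMAGE_ON can only be specified at CREATE TABLE time, which is why existing tables need the rebuild):

```sql
-- Force large value types (n/varchar(max), varbinary(max), xml) out of row
-- on an existing table. Note: this only affects values written after the
-- option is set; existing in-row values stay in-row until they're updated.
EXEC sp_tableoption 'dbo.Document', 'large value types out of row', 1;

-- For a new table, send the LOB pages to a separate filegroup.
CREATE TABLE dbo.Document2
(
    DocumentID int NOT NULL CONSTRAINT PK_Document2 PRIMARY KEY CLUSTERED,
    Title      nvarchar(200) NOT NULL,
    Body       nvarchar(max) NULL
)
ON [PRIMARY]
TEXTIMAGE_ON [LOB_FG];  -- assumes a filegroup named LOB_FG already exists
```

The swap'n'drop for an existing table is then: create the new table with these options, copy the rows across, drop the old table, and rename.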
--Jeff Moden
Change is inevitable... Change for the better is not.
December 14, 2023 at 9:56 pm
I actually set the table option to have "large value types" (LOBs) be stored "out of row" (to prevent "trapped short rows" and other page-density issues), which is effectively the same thing you've done, except I take it a step further. I've also recently taken to using the TEXTIMAGE_ON option when creating a table to have the large objects stored on a different filegroup, which helps prevent a wealth of issues.
Of course, that doesn't work on existing tables, and so I'm also doing the swap'n'drop thing on some of my larger tables in some of my databases.
Both of those table options are unfortunately unavailable in Azure SQL Database, where the DBs I manage reside. Azure SQL doesn't have FILESTREAM either. For most cases we try not to store the JSON at all; in the best-case scenario, JSON is a convenient, disposable medium of exchange. Instead of writing custom C# methods specific to the parameters of the stored procedure(s) referenced, if you pass the procedure parameters as JSON and the data as JSON, the whole data-access "ORM" type-mismatch problem can be eliminated in the best possible way. From a .NET API, if you map HTTP headers, URL parameters, and request content bodies to NVARCHAR(MAX) input parameters of stored procedures, you can handle ANY request without needing custom code, using a common set of .NET methods which can be rigorously tested and have really good exception and error handling.
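A sketch of that pattern (the procedure name and JSON property names are invented): a single NVARCHAR(MAX) parameter carries the whole request body, and OPENJSON shreds it inside the procedure. OPENJSON requires database compatibility level 130+ (SQL Server 2016 and later; always available in Azure SQL Database):

```sql
CREATE OR ALTER PROCEDURE dbo.Customer_Upsert
    @Payload nvarchar(max)  -- entire request body passed through as-is
AS
BEGIN
    SET NOCOUNT ON;

    -- Shred the JSON into a relational shape with an explicit schema.
    SELECT  j.CustomerID, j.Name, j.Email
    INTO    #Incoming
    FROM    OPENJSON(@Payload)
            WITH
            (
                CustomerID int           '$.customerId',
                Name       nvarchar(100) '$.name',
                Email      nvarchar(256) '$.email'
            ) AS j;

    -- ...MERGE/INSERT from #Incoming into the real tables here...
END;
```

The .NET side never needs to know the column types; the WITH clause does the type mapping, and a malformed payload fails inside one well-tested code path.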
In the other topic, with the looping through hierarchical C# objects to create non-hierarchical JSON (only to convert it back into hierarchically structured data in SQL), the .NET developer is likely complaining to his colleagues about how working with SQL Server data types is obtuse. I don't think that's necessarily the case anymore.
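That round trip doesn't need hand-rolled loops on either side; since SQL Server 2016 the engine can emit hierarchical JSON directly with FOR JSON. A sketch with invented order tables:

```sql
-- One JSON object per order, with its lines nested as a JSON array.
SELECT  o.OrderID,
        o.CustomerID,
        Lines = ( SELECT  ol.ProductID, ol.Qty, ol.Price
                  FROM    dbo.OrderLine AS ol
                  WHERE   ol.OrderID = o.OrderID
                  FOR JSON PATH )
FROM    dbo.[Order] AS o
FOR JSON PATH;
```

The correlated subquery with its own FOR JSON PATH is what produces the nesting, so the flatten-then-rebuild step disappears entirely.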
No one shall be able to expel us from the paradise that Cantor created for us.
December 15, 2023 at 3:52 am
Heh... lordy. OK... two more reasons why I'm glad I don't have to deal with Azure. "Trapped short rows" can cause relatively large decreases in performance. Now I see why you used a "sister table".
Heh... and SQL data types are "obtuse". That's not the first time I've heard front-end developers make that claim and then resort to the likes of JSON or XML. If they just sent regular TSV data, that would work a treat compared to JSON.
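For a single tab-delimited line, the split is one built-in call. This sketch assumes SQL Server 2022 or Azure SQL for the ordinal argument of STRING_SPLIT (earlier versions accept only two arguments and don't guarantee output order):

```sql
-- A sample TSV line; CHAR(9) is the tab character.
DECLARE @tsv nvarchar(max) = N'42' + CHAR(9) + N'Widget' + CHAR(9) + N'19.95';

SELECT  s.ordinal,  -- column position within the line
        s.value     -- column value as text
FROM    STRING_SPLIT(@tsv, CHAR(9), 1) AS s;  -- 1 = enable_ordinal (2022+)
```

No escaping rules, no nesting, and no per-property name repeated on every row, which is a big part of why TSV is so much lighter on the wire than JSON.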
--Jeff Moden
Change is inevitable... Change for the better is not.