September 11, 2015 at 7:56 pm
Installing the Jet engine on each PC would reintroduce the same issues you had with a shared MDB: you're executing "unfiltered" DB calls over the wire and filtering on the other end. Your locking would go to hell, corruption would go back up, etc.
Having played such games for a long time, yes, you can make it work, but all in all, having the actual server was worth the investment. As Wayne mentioned, you're building your own *everything* and you're the only one who can hold it together. Might be nice for a while if you're a consultant, but even then someone will eventually get wise and figure out that there are easy ways to schedule a lot of the things you ended up having to code from scratch. The disk management aspects (i.e. being able to put your temp space, your system DBs and user DBs on different disks, easily, without a LOT of coding), the backup options (ever tried to implement incremental/differential backups on a Jet application?), and things like replication, an internal scheduler, and a full-up reporting engine which is *separate and distinct* from the app you built - well, the list goes on. Oh, and an ETL tool, a BI tool, etc. And yes, you can export whatever you wish to Excel for stat purposes.
One big complaint I would have in parting is that the users need to have physical access to the DB stores. Sure, you can play games and obfuscate, etc., but the fact is they have physical access to all of the data AND the DB files AND the lock files, and by the way, they happen to have rights to modify the content and the files too. To me that's the kiss of death (and I was on the receiving end of this): a user sees the content and wants to work late, so they "drag" a copy of this onto their laptop before heading home. Except they didn't copy it, they moved it, and now the file's not where it needs to be. All of your data just left the building, and you're left HOPING that was a happy accident and not someone pissed off at the organization.
That level of physical access isn't something you get with SQL Server (at least not if your DBAs aren't complete morons). You actually have to go out of your way to grant the users access to the actual files, and even then they'd need to have SA-level access to take the files off of the server.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
September 11, 2015 at 11:52 pm
erichansen1836 (9/11/2015)
... run the JetComp.EXE compact/repair utility on a *.MDB Jet 4.x file, the file is re-optimized for query speed
Is that an "online" task? or an exclusive one? We have databases used 24/7/365, so the Maintenance (rebuilding indexes to improve performance by removing fragmentation etc.) has to be "online"
Referential Integrity:
Can be enforced with MS-Jet 4.x either by manually performing cascading deletes and updates (parent-child relationships), or by using MS-Access 2007 SQL syntax (via ODBC) to place constraints on tables in your database after you have created them with the ODBC Administrator utility (Windows 7)
Dunno about JET, but if I create a Foreign key in SQL I can never, accidentally, have an orphaned child record.
We use mechanically generated code for CRUD, so in principle our code would always warn a user if they attempted to delete a Parent record that had Child records, rather than actually deleting it (or attempting to). So we SHOULD be safe!! But there will be bugs in our code, and someone will delete something "direct" in the database when fixing something. SQL will protect me from all of that grief ...
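For illustration only (the table and column names below are invented, not from this thread), a foreign key in T-SQL looks something like this; once it exists, the engine itself rejects any delete that would orphan a child row, regardless of bugs in the application code:
-- Hypothetical parent/child tables
CREATE TABLE dbo.Customer
(
    CustomerID int NOT NULL CONSTRAINT PK_Customer PRIMARY KEY
);
CREATE TABLE dbo.CustomerOrder
(
    OrderID    int NOT NULL CONSTRAINT PK_CustomerOrder PRIMARY KEY,
    CustomerID int NOT NULL
        CONSTRAINT FK_CustomerOrder_Customer
        REFERENCES dbo.Customer (CustomerID)   -- no orphaned orders possible
);
-- This fails with a foreign key violation if the customer still has orders:
DELETE dbo.Customer WHERE CustomerID = 42;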
erichansen1836 (9/11/2015)
But for locking down the database for restore during business hours... (semaphore blocking users during restore task) ... a system-wide message would pop up on their display informing them the database is locked out until such-and-such a time.
That would be a deal breaker for us. We can do the following:
Take a full backup of the database (no user disruption during this time; the backup includes any transactions that were initiated, and completed, during the backup, so a restore is "complete" as of the backup finish time. You might well have something similar in Jet).
I can restore the database on the target server. Let's assume it's a big backup file and it takes a long time ... I restore it using the NORECOVERY option (which means that the target database sits there waiting for more, additional, backup files to be restored rather than being "ready for use").
I then take a Differential Backup and restore that ... still using NORECOVERY.
I then take a LOG backup and restore that. When I am ready I can then block users (put up a holding page, whatever), set the source database to READONLY to prevent any accidental subsequent changes, take a final Log backup and restore that (using the RECOVERY option this time), then set the target database to READWRITE (it inherited the READONLY from the restore of the final log backup, natch!), change the DNS to point to the new server and remove the holding page.
We've done this to upgrade hardware on websites with hundreds of concurrent users taking thousands of orders a day (database was huge, as you might imagine). Total actual downtime was less than 2 minutes and users who stayed around to watch the holding page (which told them to wait!!) did not even lose their session.
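Roughly, and with invented database and file names (none of these are from the post above), the T-SQL for that sequence looks something like this:
-- On the source server: full backup while users keep working
BACKUP DATABASE Sales TO DISK = 'D:\Backups\Sales_full.bak';
-- On the target server: restore it, but leave it in the "restoring" state
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales_full.bak' WITH NORECOVERY;
-- Differential backup, restored the same way
BACKUP DATABASE Sales TO DISK = 'D:\Backups\Sales_diff.bak' WITH DIFFERENTIAL;
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales_diff.bak' WITH NORECOVERY;
-- Block users, freeze the source, then take and restore the final log backup
ALTER DATABASE Sales SET READ_ONLY WITH ROLLBACK IMMEDIATE;   -- on the source
BACKUP LOG Sales TO DISK = 'D:\Backups\Sales_final.trn';
RESTORE LOG Sales FROM DISK = 'D:\Backups\Sales_final.trn' WITH RECOVERY;
-- Make the new copy writable again and repoint DNS / the application
ALTER DATABASE Sales SET READ_WRITE;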
The whole Log Backup thing is critical for us. I can remember the days of key-to-disk where someone came to the house, read my electric meter and wrote it on his sheet, two people then keyed that in and there was a 4-tape-drive sort (I wrote one back in the 70's at college!) and then a merge of customer data with the meter reading data. Just like a Bond movie with tapes writing back-and-forth and lots of flashing LEDs everywhere!!
Nowadays everything is real-time using the phone or email etc. for "input". There is absolutely no way to recreate any data from "original documents", unlike the old days. So we have to have a target of "zero data loss".
We take a Log backup every few minutes. In the event of a hardware failure SQL has structures for the Data and Logs which are designed to be fault-tolerant. For example: we store the Log and Data files on physically separate hardware, so we are unlikely to lose both at once (short of the building being destroyed). If the data file is trashed we can still make a "Tail" backup of the Log and then restore the last Full backup, a subsequent DIFF backup if we have one, and then all the Log backups since, including the tail backup. Zero data loss.
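A sketch of that "tail" backup, again with invented names (the restore chain then proceeds exactly as in the earlier sketch, finishing with the tail):
-- Data file is trashed, but the log file survives on its separate drive;
-- back up whatever is still sitting in the log
BACKUP LOG Sales TO DISK = 'L:\Backups\Sales_tail.trn' WITH NO_TRUNCATE;
-- Restore: last full, last diff (if any) and all log backups WITH NORECOVERY,
-- then the tail backup last, recovering the database - zero committed data lost
RESTORE LOG Sales FROM DISK = 'L:\Backups\Sales_tail.trn' WITH RECOVERY;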
We can also restore Log backups using "STOPAT" to a specific point in time - e.g. just before some twit deleted half the customers by mistake!! ... we can easily restore that to a new, temporary, database and then copy the records over to the Live database.
-- Copy back only the customers that no longer exist in the live database
INSERT INTO LIVE.dbo.CUSTOMER
SELECT *
FROM MyTempDB.dbo.CUSTOMER AS S   -- S = the point-in-time restored copy
WHERE NOT EXISTS
(
    SELECT *
    FROM LIVE.dbo.CUSTOMER AS D   -- D = the live table
    WHERE D.ID = S.ID
)
Any one of the juniors has the knowledge to write that command (although I suspect they'd be stood shaking in my office and insisting that I did it - and take the heat!), so no critical-knowledge required, per se.
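For reference, and with the same invented names plus a made-up timestamp and logical file names, the point-in-time restore into that temporary database is just an ordinary restore with STOPAT on the final log:
-- Restore the live database's backups into a scratch database, stopping
-- just before the accidental delete
RESTORE DATABASE MyTempDB FROM DISK = 'D:\Backups\Sales_full.bak'
    WITH MOVE 'Sales'     TO 'D:\Data\MyTempDB.mdf',
         MOVE 'Sales_log' TO 'L:\Logs\MyTempDB.ldf',
         NORECOVERY;
RESTORE LOG MyTempDB FROM DISK = 'L:\Backups\Sales_log_01.trn'
    WITH STOPAT = '2015-09-11T14:31:00', RECOVERY;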
For something more robust than that there is Replication - every update is duplicated to a second (or several!) server(s), which can then take over (within milliseconds) if the primary fails.
It would be designed such that the RESTORE POINT application utility you design could bypass certain SQL updates in those SQL log files.
That sounds like a LOT of work ... and testing ... with SQL I don't need to do anything, other than to enable Log Backups, to have complete recoverability to any point-in-time.
Following a disaster, time-to-recover is critical. Not just recovering the database, but getting everything up and running on the failover site. Everyone has lost work time during the disaster and they have work piling up ... so they won't want any additional hassle / downtime whilst we try to figure out how to recover the data / debug the process, etc.
You could bypass updates during a certain time period, by a certain User, by multiple Users updating from the same PC NodeID, etc.
I can't imagine why you would? Surely that would wreck the integrity of the database? Record-B relies on how Record-A was at the time Record-B was created/updated. They might not have a formal association; it might just be that the user checked (e.g. looked up an address, or a "file note") and on that basis did X instead of Y ... take away the update to the address or "file note" and the update to Record-B no longer makes sense ...
I HAVE not implemented the restore logs yet for anyone
Given the size of your databases, how are the customers protected against a catastrophic disaster? Is the only option "recover from last night's backup"? (That could well be fine for your clients, but I can't think of a single client that we have, nowadays, who could recreate a day's data from hardcopy, or "memory"!)
September 12, 2015 at 6:08 am
You've put tons of time and effort into this and it sounds like you've done amazing work.
However, tons and tons of the work you've done is just replicating the functionality that's already in SQL Server. I understand that this is possible. Heck, it's possible to just build your own relational engine from scratch. I'm just not sure why it's necessary. Yeah, SQL Server is not cheap. However, what is months of your time worth? All that time replicating logging and point in time recovery, which has been baked right into the product for 20 years... I don't see the return on investment.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 12, 2015 at 11:34 am
I think my main concern would be maintainability. YOU understand what you've done, and it seems to work well for your organization. I hate to use the Bus example, but if you're hit by a bus tomorrow, will someone be able to come in and maintain the system? How upgradable is it for someone who is not you to do it when new information has to be accommodated?
Yes this is a good PRO for SQL Server for many folks/companies.
However, if you don't use Win32 Perl, but instead use a programming language most in your IT Dept know how to program with, then maintainability should not be any more of a problem than on other platforms. Some will disagree.
Just be sure you have a manual (soft and hard copies) that shows the DB system architecture, the programming rules to follow, etc., so that everyone maintaining it or developing for it is on the same page. And it is a good idea to hold a CODE REVIEW meeting with programmers for all new code to be introduced. This may be old school to some, but it is good practice and may be a documented part of the SDLC.
September 12, 2015 at 11:56 am
You've put tons of time and effort into this and it sounds like you've done amazing work.
However, tons and tons of the work you've done is just replicating the functionality that's already in SQL Server. I understand that this is possible. Heck, it's possible to just build your own relational engine from scratch. I'm just not sure why it's necessary. Yeah, SQL Server is not cheap. However, what is months of your time worth? All that time replicating logging and point in time recovery, which has been baked right into the product for 20 years... I don't see the return on investment.
This has been a process for me since 2002. And Necessity is the Mother of Invention.
The most recent piece of knowledge came to me in 2014: a database does not have to be confined to a single *.MDB file. No longer confined to roughly 2 GIG/10 million rows, but able to scale to billions of rows, approaching 1 Terabyte or even more.
That's when I started capacity testing in Jan 2015 for multi-user (simulated) concurrent processing, by running over 500 SQL query processes as independent "detached" background processes on my Windows 7 Home Premium, 3 GIG memory laptop against the same TABLE. Then I ran 66 concurrent UPDATE processes against the same table. I did these tests over and over. I did not run into any issues with ODBC or the Jet Engine. But again, I did have to set the THREADS property in the ODBC FILE DSN from the default of 3 to 512 for this to be successful. One MS-Jet Engine can handle all that concurrent traffic. That is 510 open connections at the same time placing demands on a single *.MDB file. But even with that ability, my Win32 Perl database user-interface would manage open connections to 100s of *.MDB files (acting as partial tables) so that the connections are not persistent, opened just long enough to fetch or write from/to the database. That should put very little stress on the Jet Engine in a multi-user concurrent environment.
FYI, the *.MDB files are NOT "linked" via OLE. It is a matter of ODBC accessing the correct *.MDB file in the 500 *.MDB file system (or larger, perhaps up to 1500 *.MDB files). I use a file NAMING CONVENTION for this, plus segregation of data, but there are other ways. The SQL statements are dynamically built by the Win32 Perl/ODBC application interface, so this is possible. Also, my ODBC FILE DSN does not contain the name of a particular *.MDB file. This is supplied programmatically each time the ODBC connection is opened to any particular *.MDB file.
What I don't know at this point is how well it will work for 100s of Jet Engines (1 on each PC) accessing a 500-file *.MDB database file system located on a Network, with heavy multi-user concurrent ROW maintenance activity throughout the business day. I believe it would work fine.
To date, I have only used this system on a Network for Departments of 15 people (or fewer) hitting a single (1) *.MDB file via my Win32 Perl/ODBC user-interface. This worked flawlessly (credit to David Roth's stable Win32::ODBC module). I was asked to write a system to replace the MS-Access Forms-based application system one Dept. was using, which had concurrency issues daily and needed the *.MDB file recovered using the compact/repair utility provided with MS-Access.
September 12, 2015 at 11:59 am
erichansen1836 (9/12/2015)
I think my main concern would be maintainability. YOU understand what you've done, and it seems to work well for your organization. I hate to use the Bus example, but if you're hit by a bus tomorrow, will someone be able to come in and maintain the system? How upgradable is it for someone who is not you to do it when new information has to be accommodated?
Yes this is a good PRO for SQL Server for many folks/companies.
However, if you don't use Win32 Perl, but instead use a programming language most in your IT Dept know how to program with, then maintainability should not be any more of a problem than on other platforms. Some will disagree.
Just be sure you have a manual (soft and hard copies) that shows the DB system architecture, the programming rules to follow, etc., so that everyone maintaining it or developing for it is on the same page. And it is a good idea to hold a CODE REVIEW meeting with programmers for all new code to be introduced. This may be old school to some, but it is good practice and may be a documented part of the SDLC.
Perhaps - but just because we happen to write applications in C# doesn't mean I want to write my DBMS in that language.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
September 12, 2015 at 12:22 pm
To put this in perspective, you can make this as simple or complicated as you want.
You're in control.
If you want a 15-billion-row MS-Jet Engine database (contained within 1500 *.MDB files) for something like a DSS (Decision Support System), then your programming should be relatively simple, since the database is READ ONLY to end-users. You just have to write code to load the external data into the *.MDB files.
Then build a GUI user-interface to allow your Enterprise's users to build their own customized reports by selecting the criteria for the reports from screen form(s) you've provided them, complete with all the widgets for user selection of data and date ranges.
Win32 Perl and VBA programmers (et al.) can use COM Automation to connect to MS-Excel, for example, to send selected database data directly to a formatted spreadsheet for viewing/printing. Of course, MS-Excel must be installed, which your organization likely has anyway alongside MS-Word.
READ/WRITE databases will require more programming for a user-interface.
But not much if the database is used by only 1 person at a time.
The most complicated would be building a very secure user-interface for multi-user concurrent READ/WRITE access over a Network. But once it's written, you can clone it for future database implementations.
NOTE: Stable hard-wired Local Area Networks ONLY - not recommended for Wide Area Networks, although it may work. And be sure to use ODBC Commit/Rollback Transactions to surround related Table maintenance events to enforce referential integrity, i.e. either ALL or NONE of the maintenance is performed.
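As a sketch of that all-or-nothing pattern, expressed in T-SQL purely for illustration (the table names are invented; against Jet you would drive the same pattern through your ODBC connection's transaction calls rather than T-SQL):
-- Either both the child and the parent changes commit, or neither does
BEGIN TRANSACTION;
BEGIN TRY
    DELETE dbo.OrderLine   WHERE OrderID = 1001;   -- children first
    DELETE dbo.OrderHeader WHERE OrderID = 1001;   -- then the parent
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;   -- any failure undoes everything
    THROW;
END CATCH;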
Is your database going to be a relational database with parent/child 1-to-many relationships between tables?
Or is your database going to be very, very basic, with perhaps 1 Main Table and a few Lookup Tables used to normalize the database to cut down on redundancy?
MS-Jet may be the best choice for the simplest of READ ONLY databases (small to huge), but it has the ability to go the extra mile to the most difficult of READ/WRITE multi-user Network database systems, as long as you put in the programming ONCE (cloned thereafter).
******** PLEASE GO BACK AND RE-READ MY EARLIER POSTS (page 3) WHERE I DISCUSS JetComp.EXE, etc. I have UPDATED my comments with better information that greatly reduces the need for *.MDB file re-optimization on a HUGE READ/WRITE database. This should make it MUCH more practical to have a 500+ *.MDB database file system.
PLS SEE section: "Update Statistics/Query Analyzer/Database Tuning:" on Page 3 of this Thread
September 12, 2015 at 1:50 pm
One big complaint I would have in parting is that the users need to have physical access to the DB stores. Sure, you can play games and obfuscate, etc., but the fact is they have physical access to all of the data AND the DB files AND the lock files, and by the way, they happen to have rights to modify the content and the files too. To me that's the kiss of death (and I was on the receiving end of this): a user sees the content and wants to work late, so they "drag" a copy of this onto their laptop before heading home. Except they didn't copy it, they moved it, and now the file's not where it needs to be. All of your data just left the building, and you're left HOPING that was a happy accident and not someone pissed off at the organization.
My plan is to get with the System Admin person and setup a directory structure on the Network hosting the *.MDB file system, so that:
(1) The *.MDB files can be made invisible
(2) The directories can have group permission to prevent COPY, DELETE, MOVE of *.MDB files and lock files, etc.
(3) The Win32 Perl/ODBC user interface would accept login credentials from each user, and if authorized, the user would be logged into the Menu System of the Database Interface. Each user has their login credentials stored in an ADMIN table (*.MDB file) for that purpose. It also records the ACCESS LEVEL they have to Maintenance Programs (Menu Options).
(4) When a user is successfully logged in to the database user-interface software, that user is then logged in (by this database user-interface) as a database user with group permissions to the Network Database directory structure and files. Users cannot access the Database directory structure on the Network to screw around with the files or the data they contain; they only have indirect and controlled access to these files through the database user-interface, which dynamically builds syntactically correct SQL statements to process and limits the number of Result Set rows returned.
September 12, 2015 at 3:34 pm
Well. The issue is, especially in technology, words and terms do have meaning and in order for us to communicate well, we need to understand the shared meaning. NOSQL is a pretty well defined term in IT. I understand that you can come up with your own terminology, but unfortunately, everyone else may be using something else. That's just going to lead to miscommunication on both sides.
Perhaps, but let's also keep in mind that Microsoft may be responsible for leading people astray by not being totally transparent about the MS-Jet Engine. This may be a MARKETING and ADMINISTRATIVE issue within Microsoft Corporation; I'm sure the Engineers are aware of the Jet Engine's capabilities. Microsoft offers/promotes MS-Access and SQL Server - for-profit database systems. That may be their prerogative, however individuals, corporations and companies have for years invested plenty in Microsoft software and O/S. Where does it end? SQL Server definitely does have its place though.
Besides MS-Access and SQL Server, Microsoft quietly provides us with MS-Jet (Blue) and MS-Jet (Red) databases too. All components necessary (Jet Engine, MS-Access ODBC Driver, Text/CSV ODBC Driver, ODBC Administrator utility) come factory installed on the Windows 7 O/S.
FYI, the CSV ODBC Driver is AWESOME for importing/exporting CSV data files from within your computer programs, because it internally does all the parsing/formatting to deal with embedded double quotes and embedded commas - a difficult subroutine to build on your own, so don't reinvent the wheel. Be aware that the CSV/TEXT ODBC Driver is for READ ONLY purposes, so you have to write back out to another file any data changes you make.
All you need is a programming language of your choice to create the database user-interface tying it all together. I use a FREE Perl-for-Windows distribution. The Win32::GUI module (by Aldo Calpini) is a native Windows graphical user-interface toolset. Most folks will probably not use Perl, but I love it. It makes sense to me, just like MS-Jet databases make sense to me. I can't administer or program in Visual Basic or SQL Server for some reason.
Oh, and you also need to download the FREE, Jet 4.x-compatible, standalone JetComp.EXE compact/repair utility to optimize/reorganize/compact/repair your Jet 4.x *.MDB database files. This can be run in GUI or BATCH mode. Thank God for BATCH mode!!
DATABASE NETWORK SECURITY
Suggested Reading...
Real World Microsoft Access Database Protection and Security
by Garry Robinson (2004) APress, The Expert's Voice(R)
Chapter 12 - Protecting and Securing Your Database with the Operating System
[including Windows 2000 Pro, XP Pro, 2000 Server, 2003 Server]
September 14, 2015 at 9:25 am
I am not interested in rolling my own DBMS even if part of it is done for me, the Jet engine. Everything you have been writing from scratch is already available to me in SQL Server (or Oracle, or Sybase, or PostgreSQL, or MySQL). I don't have to test the locking features to ensure ACID properties of the database, or write the transaction logging capabilities to ensure that I can recover my databases to a point in time after hardware or network failures; I don't have to write replication code to replicate data between database systems, or try to figure out how to mirror two databases. I leave all that work to the people who enjoy that type of work and provide me with a stable (yes, there are issues at times) environment in which I can provide my employer or customers with a database product that meets their needs.
You want to do all that work, go for it and more power to you. The only thing I have to say is quit saying SQL Server can't do what you do. It is an amazing product and probably does a lot more than your system currently does. Yes, it is not cheap (SQL Server Express aside, which uses the same engine, just throttled down), especially SQL Server Enterprise Edition. So if an application outgrows SQL Server Express and the customer still wants to use it, they just need to upgrade to a purchased version of SQL Server. The database migrates over seamlessly and you are up and running.
What happens to your system should you retire, or worse? Are there people who can pick up and support what you have written or are your customers left with an unsupportable system? I can drop off the face of the earth and my employer can hire someone with knowledge of SQL Server and keep on working. Yes, they would lose my institutional knowledge, but a new hire would learn what I have learned and be a contributor to the organization.
September 14, 2015 at 10:07 am
Lynn Pettis (9/14/2015)
I am not interested in rolling my own DBMS even if part of it is done for me, the Jet engine. Everything you have been writing from scratch is already available to me in SQL Server (or Oracle, or Sybase, or PostgreSQL, or MySQL). I don't have to test the locking features to ensure ACID properties of the database, or write the transaction logging capabilities to ensure that I can recover my databases to a point in time after hardware or network failures; I don't have to write replication code to replicate data between database systems, or try to figure out how to mirror two databases. I leave all that work to the people who enjoy that type of work and provide me with a stable (yes, there are issues at times) environment in which I can provide my employer or customers with a database product that meets their needs. You want to do all that work, go for it and more power to you. The only thing I have to say is quit saying SQL Server can't do what you do. It is an amazing product and probably does a lot more than your system currently does. Yes, it is not cheap (SQL Server Express aside, which uses the same engine, just throttled down), especially SQL Server Enterprise Edition. So if an application outgrows SQL Server Express and the customer still wants to use it, they just need to upgrade to a purchased version of SQL Server. The database migrates over seamlessly and you are up and running.
What happens to your system should you retire, or worse? Are there people who can pick up and support what you have written or are your customers left with an unsupportable system? I can drop off the face of the earth and my employer can hire someone with knowledge of SQL Server and keep on working. Yes, they would lose my institutional knowledge, but a new hire would learn what I have learned and be a contributor to the organization.
+10000
The only thing I would potentially counter on is this: nothing's free. Building your own to avoid a licensing fee can be a VERY expensive decision.
The time you and your dev team spend building out your own RDBMS isn't "free": it's time you're not spending on building out actual useful functionality (the stuff your business users actually need). Building out something like this is expensive, especially if you plan to do it right. You may end up breaking even or perhaps coming out a little ahead, but only by sacrificing a lot of useful features that you really will need at some point. I severely doubt that a home-grown solution would stand up to an objective TCO analysis versus licensing a COTS product. It's kind of like DIY home surgery - technically possible on paper, but I doubt you'd end up with the same quality of outcome.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
September 14, 2015 at 10:27 am
Matt Miller (#4) (9/14/2015)
The time you and your dev team spend building out your own RDBMS isn't "free": it's time you're not spending on building out actual useful functionality (the stuff your business users actually need).
Perhaps it depends on how many units you actually sell? 1,000 sales where your sticker price is a few $ cheaper than the competition, but you pocket 100% of the money because there is no SQL license to include, and the sums might work out in your favour.
APPs including MySQL were popular on that basis. We lost sales to competitors making a sale on that basis (although I never really understood if the MySQL license was "free" as in "air"). At the time (maybe even now?) the competitors had lousy Transaction Log ability and a range of other issues (storing 30-Feb for example), and although our target customers were sophisticated buyers, with no shortage of money, the few-bucks-saved was compelling compared to having to spend weeks, instead of minutes, sorting data out when in disaster recovery.
September 14, 2015 at 10:38 am
Kristen-173977 (9/14/2015)
Matt Miller (#4) (9/14/2015)
The time you and your dev team spend building out your own RDBMS isn't "free": it's time you're not spending on building out actual useful functionality (the stuff your business users actually need). Perhaps it depends on how many units you actually sell? 1,000 sales where your sticker price is a few $ cheaper than the competition, but you pocket 100% of the money because there is no SQL license to include, and the sums might work out in your favour.
APPs including MySQL were popular on that basis. We lost sales to competitors making a sale on that basis (although I never really understood if the MySQL license was "free" as in "air"). At the time (maybe even now?) the competitors had lousy Transaction Log ability and a range of other issues (storing 30-Feb for example), and although our target customers were sophisticated buyers, with no shortage of money, the few-bucks-saved was compelling compared to having to spend weeks, instead of minutes, sorting data out when in disaster recovery.
Except, what you're ignoring is the time it takes for developers to implement the features you are paying for in SQL Server. Potentially also the cost of poor data quality, or the issues if your application fails to deal with an FK, or if you have problems with a particular MDB.
I wouldn't dismiss this out of hand, nor would I say someone who has implemented this should have it ripped out and replaced with SQL Server. That would also be a waste of resources. But I wouldn't necessarily allow someone to start a project with a Jet engine like this. Too much of what needs to be built to ensure this works already exists, and a $10k SQL license can be a low cost compared to the developer time to implement things well. If we are at the point where it's a $25k/4-core license, I'm thinking harder, but that's roughly 1/4 of a man-year in the US. That's a lot of developer time to spend on infrastructure that isn't necessarily helping the business.
The DR argument makes sense, but it's small. Not a lot of DR that I've seen. I wouldn't risk things with a bigger company, but with a small one, I wouldn't worry too much about this. However, I also know that retrofitting this later can be a huge pain, so if there isn't a simple, easy backup solution, I wouldn't bother. Jet can do file backups, so that's not something I'm overly worried about, but if I had any issues of synchronization between files, I would abandon this instantly.
September 14, 2015 at 10:46 am
Make it 10,000 sales or 100,000 to amortise over then. All the clients we have, even large ones, wince at the cost of SQL "on top of the cost of the APP" as they see it ...
I wouldn't want it any other way, I think MS SQL is superb for the database size and marketplace I am in, but maybe I should be using PostgreSQL or something else "cheaper". If you are at the DEV end, and not at the Sales end (I get to do both), maybe you don't get to see the Client's reaction?
I can easily, or so I think, make a case for "Why MS SQL" ... but we will continue to lose sales to other companies using free / open-source tools that reduce their sticker price. The functionality of our products is definitely superior to our competitors' (perhaps because we spend less time faffing around re-inventing the wheel?!!), and I don't think our sales team are rubbish ... but next time we are all having a beer I'll ask them if THEY are the problem 🙂
September 14, 2015 at 10:55 am
Kristen-173977 (9/14/2015)
Matt Miller (#4) (9/14/2015)
The time you and your dev team spend building out your own RDBMS isn't "free": it's time you're not spending on building out actual useful functionality (the stuff your business users actually need). Perhaps it depends on how many units you actually sell? 1,000 sales where your sticker price is a few $ cheaper than the competition, but you pocket 100% of the money because there is no SQL license to include, and the sums might work out in your favour.
APPs including MySQL were popular on that basis. We lost sales to competitors making a sale on that basis (although I never really understood if the MySQL license was "free" as in "air"). At the time (maybe even now?) the competitors had lousy Transaction Log ability and a range of other issues (storing 30-Feb for example), and although our target customers were sophisticated buyers, with no shortage of money, the few-bucks-saved was compelling compared to having to spend weeks, instead of minutes, sorting data out when in disaster recovery.
Also keep in mind that you're not restricted to using SQL Server for just this specific application. Your TCO analysis would have to compare the fractional licensing cost of SQL Server for this particular app against everything you end up having to build for this app to work.
You do have a point, however. If you were a software company building out a licensed product, you might find a way to justify it. Otherwise, though, for an in-house app, I have a hard time envisioning any use case where the savings justify the pain and cost you're putting your user base through.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?