MS-Jet Engine 4.x ODBC enabled databases to 1 Terabyte/5 billion rows (or) SQL SERVER?

  • erichansen1836 (9/23/2015)


    For backup/restore for example, you have not answered how you would accomplish a Point in Time restore of your database.

    Yes I have. I mentioned writing out successfully committed SQL maintenance statements (those not ROLLED BACK) to a restore log file (which can be an MDB file with no indexing, or a fixed-length or variable-length delimited text file).

    I would have {DATE, TIME, USER, PC_NODE_ID, SQL_STATEMENT} as fields in these records.

    I would write a utility that could apply all or any of these SQL STATEMENTS to the database which have been submitted since the last backup/restore, with the ability for certain records (SQL STATEMENTS) to be *bypassed and others applied.

    * From an exclusion list based upon time of day, user name, and the location of the user (PC NODE ID).
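
    For illustration only, a minimal sketch of such a log-and-replay utility (Python/pyodbc here; the log file path, table name, and exclusion lists are hypothetical, and the fields match the {DATE, TIME, USER, PC_NODE_ID, SQL_STATEMENT} layout above):

    ```python
    # Sketch: replay committed SQL maintenance statements from a restore-log MDB
    # against the database, skipping entries that match an exclusion list.
    import pyodbc

    JET = r"Driver={Microsoft Access Driver (*.mdb)};Dbq="

    def replay_log(log_mdb, target_mdb, excluded_users=(), excluded_nodes=()):
        log_cn = pyodbc.connect(JET + log_mdb)
        tgt_cn = pyodbc.connect(JET + target_mdb, autocommit=False)
        try:
            rows = log_cn.cursor().execute(
                "SELECT [DATE], [TIME], [USER], PC_NODE_ID, SQL_STATEMENT "
                "FROM RestoreLog ORDER BY [DATE], [TIME]").fetchall()
            cur = tgt_cn.cursor()
            for d, t, user, node, stmt in rows:
                if user in excluded_users or node in excluded_nodes:
                    continue              # bypassed, per the exclusion list
                cur.execute(stmt)         # re-apply the maintenance statement
            tgt_cn.commit()               # all replayed statements, or none
        except pyodbc.Error:
            tgt_cn.rollback()
            raise
        finally:
            log_cn.close()
            tgt_cn.close()
    ```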

    This is not as sophisticated as SQL SERVER, but serves the purpose well.

    I mentioned writing out successfully committed SQL maintenance statements (those not ROLLED BACK) to a restore log file (which can be an MDB file with no indexing, or a fixed-length or variable-length delimited text file).

    So, a write-behind log: you wouldn't log changes unless they had successfully completed. That means that if the system fails between the commit of the actual data to your MDB files and the writing of the log, you could lose successful updates/inserts/deletes.

    Not only that, this does not capture the changes to the data, just the code that was used to change the data.

    ... With the ability for certain records (SQL STATEMENTS) to be *bypassed and others applied

    This is dangerous, as you are saying you could possibly skip transactions that other transactions depend on having been completed.

    This does not ensure the integrity of the database or the data stored in it.

    All of this leads to how you would handle transaction processing, multiple updates to multiple tables that must be done as a single atomic unit.

  • Sounds like this would take a lot of programming and testing before it could be considered safe and reliable. So much for your earlier comment saying this would require no manhours.

    As CIO, I would consider your proposal for using this as company database rejected.

    You forget that I have been doing this since 1998 and already have the existing code to tweak/clone.

    Only a few things have I not implemented, but for those things, such as SQL Restore Logs, that is not an addition that would take much time.

    Coding is only part of the story. Methodology is the other.

    I can share the methodology with any Company in an organized fashion, as I have done in bits and pieces here in an unorganized fashion. Any ODBC programmer is going to have much of their own code in place to tweak and clone from, too. Basically, it is just a matter of implementing the small amount of logic needed to access multiple MDB files as partial tables, and segregating the data logically between the files in whatever way best fits the business needs of the company.

    And let me ask you this: does SQL SERVER provide all the front-end application user-interfaces to drive your business? NO, of course not. You're going to have to write program code for that, or purchase more software and train your employees. SQL SERVER is not a turn-key solution for every business, government agency, or non-profit.

    Although you reject it, many have hired me to do this, including:

    ZALE Corporation (Retail fine Jewelry)

    Amerwood International (Broom/Brush/Mop Wholesale Distribution & Manufacturing)

    Metrocall (now USA Mobility) Telecom Service Provider

    Control Point Solutions, Telecom Expense Management Industry

    Applications:

    Retail POS file system security monitoring/reporting

    Warehouse inventory systems

    Order fulfillment systems (Order confirmation, invoices, purchase orders, warehouse orders)

    AP/AR systems

    Issue Tracking Systems

    EDI/ETL operations

  • Lynn Pettis said...

    All of this leads to how you would handle transaction processing, multiple updates to multiple tables that must be done as a single atomic unit.

    That is what COMMITs and ROLLBACKs do: you wrap them around the parent/child table relationship updates you are about to perform, so that ALL or NONE of them occur. I gave an example (ODBC/SQL code) in an earlier post of writing over 30,000 rows to the database so that ALL of them are COMMITTED, or NONE of them are if the MDB file becomes too full.

    ROLLBACKS DO NOTHING TO THE DATABASE. COMMITS DO.

    A group of executed SQL maintenance requests is held in memory and applied in one fell swoop. BLAM!
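
    As a rough sketch of the shape of that code (Python/pyodbc here rather than the language of the earlier post; the census file and table/column names are hypothetical):

    ```python
    # Sketch: wrap a large batch of INSERTs in one ODBC transaction so that ALL
    # rows are committed, or NONE are if anything fails (e.g. the MDB hitting
    # its size limit mid-load).
    import pyodbc

    rows_to_load = [(1, "Hansen", "Eric"), (2, "Pettis", "Lynn")]  # stand-in for 30,000+ rows

    conn = pyodbc.connect(
        r"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\data\US_Census_2010_TX_A.mdb",
        autocommit=False)                 # the file is not modified until COMMIT
    cur = conn.cursor()
    try:
        for person_id, last_name, first_name in rows_to_load:
            cur.execute(
                "INSERT INTO Census (PersonID, LastName, FirstName) VALUES (?, ?, ?)",
                person_id, last_name, first_name)
        conn.commit()                     # BLAM! the whole batch at once
    except pyodbc.Error:
        conn.rollback()                   # ROLLBACK: the MDB is left untouched
        raise
    finally:
        conn.close()
    ```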

    Also, there may be the ability to place CONSTRAINTS on tables to enforce referential integrity (e.g. cascading deletes and updates). I have not tried that, because I do it manually with ODBC transactions.

    But the SQL syntax I believe is supported (See MS-ACCESS 2007 SQL Syntax documents at Microsoft website).

    It might be safer to use ODBC transactions than CONSTRAINTS?

    For one thing, I don't see CONSTRAINTS working if you keep 1 TABLE (parent) in 1 MDB file, and the other TABLE (child) in another MDB file. With ODBC, you can keep the related tables in separate MDB files, and use ODBC transactions to wrap around their collective UPDATES.
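
    A sketch of what I have in mind for the cross-file case (Python/pyodbc; the loan/collateral file and table names are hypothetical, and the Jet SQL IN 'file' clause is used to reach the second MDB from the one connection — worth testing that the transaction really does cover both files before relying on it):

    ```python
    # Sketch: parent row in Loans.mdb, child row in Collateral.mdb, updated as
    # one atomic unit under a single ODBC transaction on one Jet connection.
    import pyodbc

    conn = pyodbc.connect(
        r"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\data\Loans.mdb",
        autocommit=False)
    cur = conn.cursor()
    try:
        cur.execute("UPDATE Loans SET Balance = ? WHERE LoanID = ?", 9500.00, 1001)
        cur.execute(
            "INSERT INTO Collateral IN 'C:\\data\\Collateral.mdb' "
            "(LoanID, Description, AppraisedValue) VALUES (?, ?, ?)",
            1001, "2014 pickup truck", 14000.00)
        conn.commit()        # both files updated ...
    except pyodbc.Error:
        conn.rollback()      # ... or neither is touched
        raise
    finally:
        conn.close()
    ```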

    I'll try and do a test of this (i.e. CONSTRAINTS) for you all tonight and report on it tomorrow.

    Also, someone asked HOW LONG it takes to run JetComp.exe on an MDB file maxed out at 10 million rows.

    I'll run that as well and report tomorrow.

    FYI, I can run 500 of these JetComp.exe operations (on all 500 MDB files) concurrently instead of sequentially. I will try a series of concurrent runs too, to see if the runtime per file is less than when each file is run sequentially in succession.
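
    For illustration, a sketch of running those compactions concurrently (Python; the JetComp install path and its -src:/-dest: switches are written from memory here, so verify them against the JetComp documentation before use):

    ```python
    # Sketch: compact many MDB files at once by launching several JetComp.exe
    # processes concurrently instead of one after another.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    JETCOMP = r"C:\tools\JETCOMP.EXE"      # hypothetical install location
    SRC_DIR = Path(r"D:\census")           # the 500 US_Census_*.mdb files
    OUT_DIR = Path(r"D:\census_compacted")

    def compact(mdb: Path) -> int:
        dest = OUT_DIR / mdb.name
        return subprocess.run([JETCOMP, f"-src:{mdb}", f"-dest:{dest}"]).returncode

    with ThreadPoolExecutor(max_workers=8) as pool:   # e.g. 8 at a time, not all 500
        results = list(pool.map(compact, sorted(SRC_DIR.glob("*.mdb"))))
    print(f"{results.count(0)} of {len(results)} files compacted cleanly")
    ```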

  • Steve Jones - SSC Editor (9/23/2015)


    - Recovery and restore is very important in the data world. ... Respectfully disagree if this is your issue.

    I would like to properly understand what is possible, and what is not. I am quite happy being told that something is not possible, but I would like to get to that knowledge 🙂

    erichansen1836 (9/23/2015)


    If daily backups are made, a company is pretty safe from hardware or file system failure or vandalism.

    I am surprised that daily would be enough in most OLTP database scenarios. A worst-case failure then means a whole day's work, for every data entry person in the organisation, has to be repeated.

    I would like to be sure that I understand that the only backup possible is:

    Full backup, when there are no connected users

    Some sort of "transaction" backup which involves creating a log of database changes which is under application control - i.e. it is not provided as part of the JET mechanics, per se. Have I got that right?

    I mentioned using SQL restore logs with restore points in time for JET databases in earlier posts.

    My understanding and recollection of what you said is that you would need to program some sort of "log" of data changes. Please correct me if I have misunderstood.

    Assuming that is the case, I see a huge problem with this (apart from the time & effort to build it). The transaction log has to be able to be "replayed" in exactly the same order as the original transactions were created. If a transaction is rolled back (assuming that is actually possible in Jet?) then that has to be an integral part of the logging/playback process. No subsequent transaction can rely on a previous one which was rolled back. Absolutely every transaction must be logged, and reliably. My worry, for a solution that is part of the APP, is whether at some point in time the logging would fail to faithfully record 100% of the transactions, all in perfect chronological order. Because transaction logs in SQL Server are built at the grass-roots level, I imagine it is much easier to ensure that "everything, and absolutely everything" has been included, and is in strict chronological order. But I am very interested to know if there is something inbuilt in Jet that facilitates this (or whether the only solution is a "home brew" one within the APP code itself).

    This is not the same as periodic exports of changed data. I can see that it would be possible to take a differential backup (or an incremental one) every, say, 10 minutes. Maybe that would do instead of a Log Backup? It clearly would not allow point-in-time restore, nor recovery from a corrupted database, but I can see that it could be used to reduce the data-loss time to, say, 10 minutes (i.e. the frequency of the incremental backup).

    You said in a subsequent post:

    "I mentioned writing out successful comitted SQL maintenance statements (those not ROLLED BACK) to a restore log file (can be a MDB file with no indexing, can be a fixed-length or var-length delimited text file).

    I would have {DATE, TIME, USER, PC_NODE_ID, SQL_STATEMENT} as fields in these records."

    I can envisage that. However I am having trouble envisaging how you can create this to be sure that:

    a) you capture 100% of absolutely all changes

    b) it is strictly in chronological order of the sequence in which the updates actually "hit" the database. For two users running a multi-statement batch, or a loop, there is potential interaction between them. The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing, which decreases the Quantity on Hand. Running them in the wrong order will cause free stock to fall below zero, which a constraint on the database would then cause to fail. A similar situation arises if one user creates a Customer Account and another creates an Order for that new Customer. The Order cannot succeed if [on playback] the Customer Account failed to be created.

    "I would write a utility that could apply all or any of these SQL STATEMENTS to the database which have been submitted since the last backup/restore."

    This part I am happy with. Given a list of SQL statements, in strictly chronological order, then I agree that could be replayed easily enough.

    However, the playback must start at exactly the point that the backup was taken. I suppose that (if my understanding is correct that the Full backup can only be done with the database offline) this is a given - a "new" log file can start whenever the database is brought online (e.g. after a backup or a power cut / restart).

    "With the ability for certain records (SQL STATEMENTS) to be *bypassed and others applied.

    * From an exclusion list based upon time of day, user name, and the location of the user (PC NODE ID)."

    I am not understanding how this would work. I can see that it is possible to identify the user / date / time etc. But surely?? NOT processing SOME of the logged transactions will cause a different outcome in the data. IF the excluded user's transactions include "Create new customer Blogs" and there is then a subsequent transaction, from a different user who is not blocked, that says "Create order for Blogs", then that Record Insert is going to fail. I can't see how you can have "some-and-some" on the SQL Transaction Log playback. You've mentioned this a couple of times, so I am sure you have an example in your mind where this would be necessary, and it would help me to understand a potential scenario. Happy for anyone to suggest ways in which this would be useful, and viable!, in the real world.

    This is not as sophisticated as SQL SERVER

    I have no problem with that 🙂 I just want to properly understand what is possible out of the box, and what is possible with programming effort (and, if that is the solution, how 100% bullet-proof it can possibly be made)

    erichansen1836 (9/23/2015)


    My understanding is that the latest version of the MS-Jet Engine fixes many data file corruption issues on the fly as it is being used, and the JetComp.exe utility could correct most of the remainder. But you will not likely see data corruption, as I have not seen data corruption, if you follow certain guidelines.

    I just cannot get my head around how this would be possible.

    Imagine the scenario where a faulty Disk Controller intermittently writes strings of zeros to the disk, instead of the data it is supposed to write.

    As is the way of such disasters it is not noticed for some time (minutes, hours ... maybe even days or weeks - let's hope sooner-rather-than-later)

    Surely there is no way that Jet can fix the strings of zeros? It has no idea what data should have been there. However, this is a scenario that can be fixed with SQL Server given that the Log File is written to a different disk, using a different disk controller, and does not also have corrupted data. If both are corrupted then all bets are off, but if the database is corrupted and the log file NOT corrupted, then a zero-data-loss restore is possible.

  • erichansen1836 (9/23/2015)


    Sounds like this would take a lot of programming and testing before it could be considered safe and reliable. So much for your earlier comment saying this would require no manhours.

    As CIO, I would consider your proposal for using this as company database rejected.

    You forget that I have been doing this since 1998 and already have the existing code to tweak/clone.

    Only a few things have I not implemented, but for those things, such as SQL Restore Logs, that is not an addition that would take much time.

    And let me ask you this: does SQL SERVER provide all the front-end application user-interfaces to drive your business? NO, of course not. You're going to have to write program code for that, or purchase more software and train your employees. SQL SERVER is not a turn-key solution for every business, government agency, or non-profit.

    Although you reject it, many have hired me to do this, including:

    ZALE Corporation (Retail fine Jewelry)

    Amerwood International (Broom/Brush/Mop Wholesale Distribution & Manufacturing)

    Metrocall (now USA Mobility) Telecom Service Provider

    Control Point Solutions, Telecom Expense Management Industry

    Applications:

    Retail POS file system security monitoring/reporting

    Warehouse inventory systems

    Order fulfillment systems (Order confirmation, invoices, purchase orders, warehouse orders)

    AP/AR systems

    Issue Tracking Systems

    EDI/ETL operations

    Hi Eric...hope you are well

    am trying to understand what you are proposing ....I get the impression that what you deliver is a "write once/ read many" scenario......as opposed to a realtime OLTP system.....can you clarify please?

    The reason I ask is that from looking at your Linkedin profile it seems that you used to be heavily involved in reporting a few years back now....but not what I would call transactional data.

    maybe this is why you seem a little hesitant about point in time restores?

    https://www.linkedin.com/in/msjetdatabasedba

    am also concerned about your comments about using this over a WAN.....surely in this day and age access and response time should be considered over any network

    ________________________________________________________________
    you can lead a user to data....but you cannot make them think
    and remember....every day is a school day

  • Full backup, when there are no connected users

    Some sort of "transaction" backup which involves creating a log of database changes which is under application control - i.e. it is not provided as part of the JET mechanics, per se. Have I got that right?

    Correct. However, depending on how your business operation works, you might find it practical to shut down logical data groupings of the database file system, and leave others open, while you back up/restore certain sections.

    You could do this for backups and restores.

    THIS IS A DIFFERENT WAY OF THINKING from what many of you are familiar with. And of course, it is a matter of personal preference whether to use this or a SERVER-style system. Those coming from a file-share database system such as Informix Standard Engine, Microsoft PC COBOL, CONDOR 3 RDBMS (CP/M O/S), Microsoft Access or Jet Engine, etc. may find this a normal way of thinking?

    Example given earlier:

    US_Census_2010_TX_A.mdb through US_Census_2010_TX_Z.mdb for TEXAS citizens only

    ...

    US_Census_2010_CA_A.mdb through US_Census_2010_CA_Z.mdb for CALIF citizens only

    If you wanted to rotate the backup/restore operations throughout the day, so that only a few STATES' data was inaccessible for an hour while the remaining 47 STATES' data was accessible at any given moment during the business day, you could try that.
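
    For illustration, a minimal sketch of how the file naming scheme maps a row to its MDB "partial table" and how one state's files could be backed up while the rest stay online (Python; paths are hypothetical, and the state's files are assumed to have been shut down to users before the copy):

    ```python
    # Sketch: resolve which MDB file holds a citizen's row (state code plus first
    # letter of surname), and back up one state's files while other states
    # remain accessible.
    import shutil
    from pathlib import Path

    DATA_DIR = Path(r"D:\census")
    BACKUP_DIR = Path(r"E:\census_backup")

    def mdb_for(state: str, surname: str) -> Path:
        """US_Census_2010_TX_A.mdb holds TEXAS citizens whose surname starts with A."""
        return DATA_DIR / f"US_Census_2010_{state.upper()}_{surname[0].upper()}.mdb"

    def backup_state(state: str) -> None:
        """Copy one state's MDB files; files for the other 49 states stay open."""
        for mdb in DATA_DIR.glob(f"US_Census_2010_{state.upper()}_*.mdb"):
            shutil.copy2(mdb, BACKUP_DIR / mdb.name)

    # e.g. mdb_for("TX", "Hansen") -> D:\census\US_Census_2010_TX_H.mdb
    ```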

    REMEMBER PLEASE: MS-Access may still have a 2 or 4 GIG limit, and SQL SERVER EXPRESS 10 GIG.

    For larger databases in the 11 to 100 GIG range, this MS-Jet database system may be a very practical answer. For those who want to push it to 1 Terabyte, it should still work fine. For those who want to push it to 5 Terabytes, you are going to be fine with a single-user database, but that is perhaps pushing it way too far for a multi-user READ/WRITE database on a Network share, even if the network is a quite stable, hard-wired LAN. If you have a huge database, but with only small and controlled SQL result sets being returned (by the user-interface) to sporadically logged-in end-users making indirect SQL query requests at any given moment in time, then larger databases up to 5 Terabytes might still be workable given the low traffic. And you could distribute 1 Terabyte to each of 5 Windows O/S Servers, making the system robust for heavier traffic.

  • Hi Eric...hope you are well

    am trying to understand what you are proposing ....I get the impression that what you deliver is a "write once/ read many" scenario......as opposed to a realtime OLTP system.....can you clarify please?

    The reason I ask is that from looking at your Linkedin profile it seems that you used to be heavily involved in reporting a few years back now....but not what I would call transactional data.

    maybe this is why you seem a little hesitant about point in time restores?

    https://www.linkedin.com/in/msjetdatabasedba

    am also concerned about your comments about using this over a WAN.....surely in this day and age access and response time should be considered over any network

    Hi J Livingston,

    Regarding write once/read many scenario you mentioned.

    The end-user database interfaces I design are only going to write a handful of records at a time (per logged-in maintenance user), such as changes to a parent/child table/row relationship. Example: a single bank loan record being the parent, and the multiple items of collateral securing the installment loan being the child records. When changes/deletes are made, it is done collectively, to keep the related TABLE rows in sync and prevent orphaned rows. I would wrap these updates in an ODBC transaction and COMMIT or ROLLBACK based on whether or not an SQL error, ODBC error, or Jet Engine error occurred (I trap these errors).

    But I can also write BATCH applications to UPDATE or INSERT or DELETE many rows in the database, which I would run SERVER side during off hours when users were not logged in. These would be considered OPERATIONS tasks to refresh the database (if READONLY) or to make mass maintenance changes to the database from external input files (EDI, XML, fixed-length text, delimited text, Excel spreadsheets, etc.). They could also include changes not originating from an external file, such as adjusting something like a TAX RATE column which you wanted to change in all rows.

    This would not be a system where end-users could run lengthy SQL queries that HOG Network resources, nor one that allows them to make MASS updates to the database affecting many rows.

    Users can read small collections of rows (perhaps 1000 rows meeting similar criteria) into a GUI ListView for scanning and selecting a single row to EDIT/DELETE, or to INSERT a new row. These changes would then be reflected in the database and the scrolling ListView. A ListView is similar to an Excel spreadsheet, without the ability to edit the data in the rows/columns directly. A separate data entry screen would pop up to allow changes to the data in a particular ListView item, which are subsequently saved to the database. There are no concurrency issues to worry about in such a system whereby users STEP ON each other's changes. Only one person could have a particular PARENT row open for edit at any given moment, along with access to edit the associated CHILD rows. I use my own record locking strategy, something quite simple.
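
    As a rough sketch of the kind of bounded query behind such a ListView (Python/pyodbc; table and column names are hypothetical):

    ```python
    # Sketch: the GUI only pulls a small, bounded result set (Jet SQL TOP 1000)
    # into the ListView; the user then picks one row to open in an edit screen.
    import pyodbc

    conn = pyodbc.connect(
        r"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\data\Loans.mdb")
    cur = conn.cursor()
    cur.execute(
        "SELECT TOP 1000 LoanID, BorrowerName, Balance "
        "FROM Loans WHERE BranchCode = ? ORDER BY BorrowerName", "TX01")
    listview_rows = cur.fetchall()   # handed to the ListView widget for display
    conn.close()
    ```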

    I also use TreeViews, with Parent/Child nodes, to allow my end-users to see huge amounts of data logically segregated by expanding and collapsing these NODES. The users can click on these groupings of rows in the database, which are then displayed in a scrolling ListView or RichEdit widget, allowing users to view all detailed information and to select individual rows for edit.

    AND IT IS NOT JUST ME, any database ODBC/GUI developer, whether MS-JET or SQL SERVER, is going to do something similar for a controlled database interface environment.

  • erichansen1836 (9/23/2015)


    Hi Eric...hope you are well

    am trying to understand what you are proposing ....I get the impression that what you deliver is a "write once/ read many" scenario......as opposed to a realtime OLTP system.....can you clarify please?

    The reason I ask is that from looking at your Linkedin profile it seems that you used to be heavily involved in reporting a few years back now....but not what I would call transactional data.

    maybe this is why you seem a little hesitant about point in time restores?

    https://www.linkedin.com/in/msjetdatabasedba

    am also concerned about your comments about using this over a WAN.....surely in this day and age access and response time should be considered over any network

    Hi J Livingston,

    Regarding write once/read many scenario you mentioned.

    The end-user database interfaces I design are only going to write a handful of records at a time (per logged-in maintenance user), such as in a parent/child relationship. Example: a single bank loan record being the parent, and the multiple items of collateral securing the installment loan being the child records. When changes/deletes are made, it is done collectively, to keep the related TABLE rows in sync and prevent orphaned rows. I would wrap these updates in an ODBC transaction and COMMIT or ROLLBACK based on whether or not an SQL error, ODBC error, or Jet Engine error occurred (I trap these errors).

    But I can also write BATCH applications to UPDATE or INSERT or DELETE many rows in the database, which I would run SERVER side during off hours when users were not logged in. These would be considered OPERATIONS tasks to refresh the database (if READONLY) or to make mass maintenance changes to the database from external input files (EDI, XML, fixed-length text, delimited text, Excel spreadsheets, etc.). They could also include changes not originating from an external file, such as adjusting something like a TAX RATE column which you wanted to change in all rows.

    This would not be a system where end-users could run lengthy SQL queries that HOG Network resources, nor one that allows them to make MASS updates to the database affecting many rows.

    Users can read small collections of rows (perhaps 1000 rows meeting similar criteria) into a GUI ListView for scanning and selecting a single row to EDIT/DELETE, or to INSERT a new row. These changes would then be reflected in the database and the scrolling ListView. A ListView is similar to an Excel spreadsheet, without the ability to edit the data in the rows/columns directly. A separate data entry screen would pop up to allow changes to the data in a particular ListView item, which are subsequently saved to the database. There are no concurrency issues to worry about in such a system whereby users STEP ON each other's changes. Only one person could have a particular PARENT row open for edit at any given moment, along with access to edit the associated CHILD rows.

    I also use TreeViews, with Parent/Child nodes, to allow my end-users to see huge amounts of data logically segregated by expanding and collapsing these NODES. The users can click on these groupings of rows in the database, which are then displayed in a scrolling ListView or RichEdit widget, allowing users to view all detailed information and to select individual rows for edit.

    AND IT IS NOT JUST ME, any database ODBC/GUI developer, whether MS-JET or SQL SERVER, is going to do something similar for a controlled database interface environment.

    thanks for your response Eric....have a better feel for this now.

    what are your thoughts on using the solution over a WAN.

    thanks

    ps...am struggling to read your recent post, cos you keep editing it .....makes it difficult to respond in a timely manner

    ________________________________________________________________
    you can lead a user to data....but you cannot make them think
    and remember....every day is a school day

  • Kristen asked...

    You said in a subsequent post:

    "I mentioned writing out successful comitted SQL maintenance statements (those not ROLLED BACK) to a restore log file (can be a MDB file with no indexing, can be a fixed-length or var-length delimited text file).

    I would have {DATE, TIME, USER, PC_NODE_ID, SQL_STATEMENT} as fields in these records."

    I can envisage that. However I am having trouble envisaging how you can create this to be sure that:

    a) you capture 100% of absolutely all changes

    b) it is strictly in chronological order of the sequence in which the updates actually "hit" the database. For two users running a multi-statement batch, or a loop, there is potential interaction between them. The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing, which decreases the Quantity on Hand. Running them in the wrong order will cause free stock to fall below zero, which a constraint on the database would then cause to fail. A similar situation arises if one user creates a Customer Account and another creates an Order for that new Customer. The Order cannot succeed if [on playback] the Customer Account failed to be created.

    I do not allow concurrent conflicting batch updates to the MS-Jet database which attempt to STEP ON each other's changes. Batch operations only occur SERVER side when end-users are logged out, such as during NIGHT operations tasks. Any concurrent BATCH operations launched would be segregated so that they did not affect the same logical group of records. Mostly I would likely use sequential processing, but with the ability to use concurrent processing if it proved a more efficient way of doing it.

    You may have a point about how trustworthy it is to log SQL statements to a central log file.

    Perhaps each user should have a log file of their own, SERVER side, that can be merged after EOB into one central restore file, sorted by DATE/TIME. That would prevent records from perhaps being dropped when concurrent writes are attempted to this log file. This is one reason I mentioned perhaps writing restore log records to a central MS-JET *.MDB file (non-indexed), to let the Jet Engine juggle any concurrent user writes safely. I would think that an unindexed *.MDB file containing only a single TABLE object is going to be a very stable data containment mechanism for a restore log.
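
    For illustration, a minimal sketch of that end-of-business merge (Python; it assumes each user's SERVER-side log is a tab-delimited text file laid out as DATE, TIME, USER, PC_NODE_ID, SQL_STATEMENT, with DATE/TIME stored in a sortable form such as YYYYMMDD/HHMMSS — all names are hypothetical):

    ```python
    # Sketch: merge per-user restore logs into one central file, ordered by DATE
    # then TIME, after end of business.
    import csv
    from pathlib import Path

    LOG_DIR = Path(r"\\SERVER\jetdb\restore_logs")     # hypothetical share
    MERGED = LOG_DIR / "restore_log_merged.txt"

    records = []
    for user_log in LOG_DIR.glob("restore_*.txt"):      # one log file per user
        with user_log.open(newline="") as fh:
            records.extend(csv.reader(fh, delimiter="\t"))

    records.sort(key=lambda rec: (rec[0], rec[1]))       # DATE, then TIME

    with MERGED.open("w", newline="") as fh:
        csv.writer(fh, delimiter="\t").writerows(records)
    ```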

    FYI, each USER is allowed only 1 logged in instance to the database user-interface.

    I think Microsoft recommends this for MS-Jet engine databases.

    SQL SERVER may be able to handle multiple instances for each user.

    Something to consider.

    If I were to COMMIT rows to the database from a client app, it would only be for 1 parent and several-to-many corresponding child rows. No other user could be attempting changes to those rows at the same time to conflict, the way my client app is designed. SERVER side, I would still be able to COMMIT 1000s of unrelated rows collectively or ROLL them back, during NIGHT OPERATIONS tasks.

    To clarify, a ROLLBACK is really doing nothing at all but clearing memory.

    It does not touch the database; only COMMIT does that.

    An ODBC transaction wrapping around multiple maintenance tasks holds this all in memory.

    Then, if COMMIT is issued, all the changes held in memory are saved in one fell swoop to the database.

    BLAM! It is an ALL or NOTHING operation.

  • thanks for your response Eric....have a better feel for this now.

    what are your thoughts on using the solution over a WAN.

    thanks

    J Livingston, I have never tried this, but from reading the recommendations of Microsoft Corp, their Partners, and seasoned DBA/Developers, it is not recommended to access MS-Jet databases over a WAN, or over a wireless LAN.

    It is recommended that they be used over a hard-wired LAN, with a robust enough Server (for the expected traffic) and quality hardware. The Network bandwidth should also be sufficient for the traffic. You also want to guard against accidental shutdown of the Server by having battery backup power in place. There is also the recommendation to disable opportunistic locking on the Server, and to dedicate the Server to the JET database only.

    Consider a peer-to-peer network with a Windows Professional O/S for Network hosted databases with fewer concurrent users and less network traffic.

    SOMEONE is going to want a 5 Billion row database they can work with just by themselves as part of their job duties, and located/maintained by them on their own desktop PC or Laptop.

    MS-Jet databases may be the most beneficial for this.

  • The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing which decrease the Quantity on Hand.

    So what do we do when inventory goes negative? Just toss the sales order?

  • I just cannot get my head around how this would be possible.

    Imagine the scenario where a faulty Disk Controller intermittently writes strings of zeros to the disk, instead of the data it is supposed to write.

    As is the way of such disasters it is not noticed for some time (minutes, hours ... maybe even days or weeks - let's hope sooner-rather-than-later)

    Surely there is no way that Jet can fix the strings of zeros? It has no idea what data should have been there. However, this is a scenario that can be fixed with SQL Server given that the Log File is written to a different disk, using a different disk controller, and does not also have corrupted data. If both are corrupted then all bets are off, but if the database is corrupted and the log file NOT corrupted, then a zero-data-loss restore is possible.

    MS-JET guidelines:

    To be sure, you must have high quality hardware on the Server, have your sysADMIN perform scheduled checks on hardware soundness, etc. Have the necessary bandwidth. Have a hard-wired network (no WAN or wireless LAN). Have battery backup to sustain the Server if electricity is shut off. Turn off opportunistic locking. Dedicate the Server to MS-Jet database hosting, with no other high demand software running competing for resources. Have plenty of quality MEMORY on the Server.

    SQL SERVER sounds much more sophisticated to handle unexpected failures.

    Others have said: 99.99% of MS-Jet Engine database file corruption is attributed to MS-Access front-end OBJECTS (macros, forms, reports, etc.) contained in the same MDB files as the backend database. I avoid this completely by using only TABLE objects in MDB files. My front-end is not contained in the MDB files.

  • The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing which decrease the Quantity on Hand.

    So what do we do when inventory goes negative? Just toss the sales order?

    Patrick, who are you addressing your question to please?

    And can you clarify a bit more please?

  • SOMEONE is going to want a 5 Billion row database they can work with just by themselves as part of their job duties, and located/maintained by them on their own desktop PC or Laptop.

    MS-Jet databases may be the most beneficial for this.

    have you heard about SQL Developer edition?

    SQL Server 2014 Developer Edition lets developers build almost any kind of database application on top of SQL Server. It's licensed for use as a development and test system, not as a production server, and is an ideal choice for people who build and test database applications.

    $59.95

    maybe worth a look

    ________________________________________________________________
    you can lead a user to data....but you cannot make them think
    and remember....every day is a school day

  • erichansen1836 (9/23/2015)


    The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing which decrease the Quantity on Hand.

    So what do we do when inventory goes negative? Just toss the sales order?

    Patrick, who are you addressing your question to please?

    And can you clarify a bit more please?

    Sorry, I was addressing Kristen-173977, but the question is probably off topic anyways. No biggy if it isn't answered, but not making provisions for a negative inventory is a giant pet peeve of mine!
