September 23, 2015 at 2:43 pm
have you heard about SQL Developer edition?
SQL Server 2014 Developer Edition lets developers build almost any kind of database application on top of SQL Server. It's licensed for use as a development and test system, not as a production server, and is an ideal choice for people who build and test database applications. $59.95, may be worth a look
I checked on this at:
But found one unhappy purchaser who wrote:
I bought it as a Developer Edition, but it only has the functionality of Express: a 5 GB database limit. I can't work with it, so I am going back to SQL Server 2008 R2.
I had written:
SOMEONE is going to want a 5 billion row database they can work with just by themselves as part of their job duties, one located and maintained by them on their own desktop PC or laptop.
MS-Jet databases may be the most beneficial for this.
September 23, 2015 at 2:51 pm
erichansen1836 (9/23/2015)
have you heard about SQL Developer edition?
SQL Server 2014 Developer Edition lets developers build almost any kind of database application on top of SQL Server. It's licensed for use as a development and test system, not as a production server, and is an ideal choice for people who build and test database applications. $59.95, may be worth a look
I checked on this at:
But found one unhappy purchaser who wrote:
I bought it as a Developer Edition, but it only has the functionality of Express: a 5 GB database limit. I can't work with it, so I am going back to SQL Server 2008 R2.
I had written:
SOMEONE is going to want a 5 billion row database they can work with just by themselves as part of their job duties, one located and maintained by them on their own desktop PC or laptop.
MS-Jet databases may be the most beneficial for this.
https://msdn.microsoft.com/library/cc645993.aspx
not far from the top of the above page
For features supported by Evaluation and Developer editions see the SQL Server Enterprise feature set.
if I am wrong then I apologise... I can't vouch for this
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
September 23, 2015 at 3:31 pm
erichansen1836 (9/23/2015)
have you heard about SQL Developer edition?
SQL Server 2014 Developer Edition lets developers build almost any kind of database application on top of SQL Server. It's licensed for use as a development and test system, not as a production server, and is an ideal choice for people who build and test database applications. $59.95, may be worth a look
I checked on this at:
But found one unhappy purchaser who wrote:
I bought it as a Developer Edition, but it only has the functionality of Express: a 5 GB database limit. I can't work with it, so I am going back to SQL Server 2008 R2.
I had written:
SOMEONE is going to want a 5 billion row database they can work with just by themselves as part of their job duties, one located and maintained by them on their own desktop PC or laptop.
MS-Jet databases may be the most beneficial for this.
Developer's Edition has the full functionality of Enterprise Edition. That unhappy customer probably connected to the wrong instance on his/her computer. Disregard that customer.
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
September 23, 2015 at 3:34 pm
Alvin Ramard (9/23/2015)
erichansen1836 (9/23/2015)
have you heard about SQL Developer edition?
SQL Server 2014 Developer Edition lets developers build almost any kind of database application on top of SQL Server. It's licensed for use as a development and test system, not as a production server, and is an ideal choice for people who build and test database applications. $59.95, may be worth a look
I checked on this at:
But found one unhappy purchaser who wrote:
I bought it as a Developer Edition, but it only has the functionality of Express: a 5 GB database limit. I can't work with it, so I am going back to SQL Server 2008 R2.
I had written:
SOMEONE is going to want a 5 billion row database they can work with just by themselves as part of their job duties, one located and maintained by them on their own desktop PC or laptop.
MS-Jet databases may be the most beneficial for this.
Developer's Edition has the full functionality of Enterprise Edition. That unhappy customer probably connected to the wrong instance on his/her computer. Disregard that customer.
and of course there is the Evaluation Edition that has the same functionality as Enterprise.....but is FREE..for 180 days
________________________________________________________________
you can lead a user to data....but you cannot make them think
and remember....every day is a school day
September 23, 2015 at 3:50 pm
patrickmcginnis59 10839 (9/23/2015)
The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing, which decreases the Quantity on Hand.
So what do we do when inventory goes negative? Just toss the sales order?
Sorry, wasn't very clear. I am assuming that Quantity on Hand is not allowed to be negative (and there is a constraint which enforces that)
Transactions have been posted today which honour that constraint - sufficient stock receipts have been processed before any orders that then depleted the stock. Those transactions were logged.
The point I wanted to make was if the Log Playback fails to replay the transactions in exactly the same order we might wind up with a situation where the Orders are processed before the Stock Receipts.
At the end of the run the Quantity on Hand will be the same whatever the sequence, but because the transactions are replayed in a different order, the constraint not allowing Quantity on Hand to be negative would have kicked in. That transaction is aborted, so the final result of the replayed run differs from the original.
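To make that ordering problem concrete, here is a minimal, hypothetical replay sketch in Perl. Nothing in it comes from the proposed solution: the seq field, the pipe-delimited log layout, and the open Win32::ODBC handle $db are all assumptions. The point is that replay has to be ordered by a monotonic sequence number assigned at log-write time; a wall-clock timestamp alone cannot guarantee that receipts replay before the orders that consumed them.
#-- Hypothetical restore-log replay: records of seq|user|node|sql.
open(my $log, '<', 'restore.log') or die "cannot open restore.log: $!";
my @entries;
while (my $line = <$log>) {
    chomp $line;
    my ($seq, $user, $node, $sql) = split /\|/, $line, 4;
    push @entries, [ $seq, $sql ];
}
close $log;
#-- Sort on the assumed monotonic sequence number, not on timestamps,
#-- so the replayed order matches the order the changes originally applied in.
for my $e (sort { $a->[0] <=> $b->[0] } @entries) {
    my $err = $db->Sql($e->[1]);   #-- $db: an open Win32::ODBC handle (assumed)
    die "replay failed at seq $e->[0]: " . $db->Error() if $err;
}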
September 23, 2015 at 3:52 pm
erichansen1836 (9/23/2015)
Lynn Pettis said...All of this leads to how you would handle transaction processing, multiple updates to multiple tables that must be done as a single atomic unit.
That is what Commits and Rollbacks do: you wrap them around the parent/child table relationship updates you are to perform, so that ALL or NONE of these occur. I gave an example (ODBC/SQL code in an earlier post) of writing over 30,000 rows to the database so that ALL or NONE of them are COMMITTED if the MDB file becomes too full.
ROLLBACKS DO NOTHING TO THE DATABASE. COMMITS DO.
A group of executed SQL maintenance requests is held in memory and applied in one fell swoop. BLAM!
Also, there may be the ability to place CONSTRAINTS on tables to enforce referential integrity (i.e. cascading deletes and updates). I have not tried that because I do it manually with ODBC transactions.
But I believe the SQL syntax is supported (see the MS-ACCESS 2007 SQL syntax documents at the Microsoft website).
It might be safer to use ODBC transactions than CONSTRAINTS?
For one thing, I don't see CONSTRAINTS working if you keep one TABLE (parent) in one MDB file and the other TABLE (child) in another MDB file. With ODBC, you can keep the related tables in separate MDB files and use ODBC transactions to wrap their collective UPDATEs.
I'll try and do a test of this (i.e. CONSTRAINTS) for you all tonight and report on it tomorrow.
Also, someone asked HOW LONG it takes to run JetComp.exe on an MDB file maxed out at 10 million rows.
I'll run that as well and report tomorrow.
FYI, I can run 500 of these JetComp.exe operations (on all 500 MDB files) concurrently instead of sequentially. I will also try a series of concurrent runs to see whether the runtime per file is less than when each file is processed sequentially in succession.
This:
ROLLBACKS DO NOTHING TO THE DATABASE. COMMITS DO.
A group of executed SQL maintenance requests is held in memory and applied in one fell swoop. BLAM!
Actually, SQL Server will start making changes to the data while the transaction is processing. These data changes may occur only in memory, but even those changes can be written to disk prior to the actual commit. This is why all changes are written to the write-ahead log first, so that the new and previous versions of the data are captured in case of a rollback.
I am beginning to think you don't have a real idea of how a relational database management system works to support database changes and keep the data in a consistent and valid state. (But then again, I could be wrong.)
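For readers following along, here is a toy illustration of the write-ahead idea Lynn describes. It is a sketch only, not SQL Server's actual implementation: the log record, carrying both the before and after images, is forced to disk before the data change itself is applied, so a rollback can always restore the prior version.
use IO::Handle;

#-- Toy write-ahead logging: flush the before/after images first, then change the data.
sub wal_update {
    my ($log_fh, $data, $key, $new_value) = @_;
    my $old_value = $data->{$key};
    #-- 1. Log the old and new versions and force them to disk BEFORE the change.
    print {$log_fh} "UPDATE|$key|old=$old_value|new=$new_value\n";
    $log_fh->flush;   #-- a real engine would fsync here
    #-- 2. Only now apply the change to the "data page" (here, just a hash).
    $data->{$key} = $new_value;
    return $old_value;   #-- retained so a rollback can undo the change
}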
September 23, 2015 at 4:18 pm
erichansen1836 (9/23/2015)
I do not allow concurrent conflicting batch updates to the MS-Jet database which attempt to STEP ON each other's changes. Batch operations only occur SERVER side when end-users are logged out, such as NIGHT operations tasks. Any concurrent BATCH operations launched would be segregated so that they did not affect the same logical group of records. Mostly I would likely use sequential processing, but with the ability to use concurrent processing if it proved a more efficient way of doing it.
I was actually thinking of something much more simple than that.
Two users are editing records which are in some way related. (One is adding stock, the other is subtracting stock).
They both press SAVE at exactly the same time. The reality is that one will be processed before the other.
Let's say that the stock is added first and then the stock withdrawal.
All that is fine, and exactly how we would expect it to be.
But how will you ensure that the transactions which are written to the log are guaranteed to be in the right order, when in fact things happen concurrently?
If you do this in the application then my worry is that:
The first transaction updates the database and then there is a short delay, perhaps the CPU switches processes, whatever.
So the second transaction finishes updating the database and gets to write the log record before the first transaction (particularly given that this is happening in the application, rather than the core of the database)
Debugging this, or perhaps more correctly "building a framework to test this" is very difficult. Hours and hours of testing may never show up a subtle timing issue like this.
Re-play of logs is going to be used almost never. So it will get very little actual real-world usage. And yet it has to work right, first time, every time, and is the most critical piece of the code.
I would be too scared to attempt to write it, that's for sure!
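One conventional way to close the window Kristen describes, sketched here as an assumption rather than anything the proposal includes, is to hold a single exclusive lock across both the database update and the log append, so a second transaction can never interleave between them. In Perl, with flock:
use Fcntl qw(:flock);

#-- Hypothetical sketch: serialize "update + log append" under one exclusive lock
#-- so the log order always matches the order in which the updates actually applied.
sub update_and_log {
    my ($db, $sql, $logfile) = @_;
    open(my $lk, '>>', "$logfile.lock") or die "lock: $!";
    flock($lk, LOCK_EX) or die "flock: $!";   #-- the second saver blocks here
    my $err = $db->Sql($sql);                 #-- apply the change...
    if (!$err) {
        open(my $lg, '>>', $logfile) or die "log: $!";
        print {$lg} time() . "|$sql\n";       #-- ...and append the log record
        close $lg;                            #-- before anyone else can start
    }
    close $lk;   #-- releases the lock
    return $err;
}
The cost is that every saving user serializes through one lock, which is exactly the throughput problem a real database engine's internal logging is engineered to avoid.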
September 23, 2015 at 5:44 pm
Just to ask the question again...
Jeff Moden (9/22/2015)
@erichansen1836 Just a quick question and apologies for not following/reading all of the posts that have occurred since the last time I dropped in on this thread...
Do you actually have a system like what you've been talking about up and running in a production environment?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 24, 2015 at 9:22 am
Kristen-173977 (9/23/2015)
patrickmcginnis59 10839 (9/23/2015)
The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing, which decreases the Quantity on Hand.
So what do we do when inventory goes negative? Just toss the sales order?
Sorry, wasn't very clear. I am assuming that Quantity on Hand is not allowed to be negative (and there is a constraint which enforces that)
Fair enough. Aside from my objection about negative inventory, you have a really good point.
Transactions have been posted today which honour that constraint - sufficient stock receipts have been processed before any orders that then depleted the stock. Those transactions were logged.
The point I wanted to make was if the Log Playback fails to replay the transactions in exactly the same order we might wind up with a situation where the Orders are processed before the Stock Receipts.
At the end of the run the Quantity on Hand will be the same whatever the sequence, but because the transactions are replayed in a different order, the constraint not allowing Quantity on Hand to be negative would have kicked in. That transaction is aborted, so the final result of the replayed run differs from the original.
If he's strictly recording transactions, he can let inventory go south of 0 and be good. If he has realtime needs, then not so much!
On a related note, I wonder if SQL Server's "roll forward" or "roll back" of transactions also has to be single threaded?
Googled it and I think it needs to be:
http://dba.stackexchange.com/questions/5233/is-rollback-a-fast-operation
@JackDouglas pointed to this article which describes one of the situations where rollback can take significantly longer than the original operation. The example being a 14 hour transaction, inevitably using parallelism, that takes 48+ hours to rollback because rollback is mostly single threaded. You would most likely also be churning the buffer pool repeatedly, so you're no longer reversing changes to in-memory pages.
The "this article" url:
Boy, if the OP wants to duplicate functionality like this, he sure does have his work cut out for him 🙂
September 24, 2015 at 9:28 am
patrickmcginnis59 10839 (9/24/2015)
Kristen-173977 (9/23/2015)
patrickmcginnis59 10839 (9/23/2015)
The classroom solution is one user adding stock arrivals, which increase the Quantity on Hand, and the other user doing Sales Order Processing, which decreases the Quantity on Hand.
So what do we do when inventory goes negative? Just toss the sales order?
Sorry, wasn't very clear. I am assuming that Quantity on Hand is not allowed to be negative (and there is a constraint which enforces that)
Fair enough. Aside from my objection about negative inventory, you have a really good point.
Transactions have been posted today which honour that constraint - sufficient stock receipts have been processed before any orders that then depleted the stock. Those transactions were logged.
The point I wanted to make was if the Log Playback fails to replay the transactions in exactly the same order we might wind up with a situation where the Orders are processed before the Stock Receipts.
At the end of the run the Quantity on Hand will be the same whatever the sequence, but because the transactions are replayed in a different order, the constraint not allowing Quantity on Hand to be negative would have kicked in. That transaction is aborted, so the final result of the replayed run differs from the original.
If he's strictly recording transactions, he can let inventory go south of 0 and be good. If he has realtime needs, then not so much!
On a related note, I wonder if SQL Server's "roll forward" or "roll back" of transactions also has to be single threaded?
Googled it and I think it needs to be:
http://dba.stackexchange.com/questions/5233/is-rollback-a-fast-operation
@JackDouglas pointed to this article which describes one of the situations where rollback can take significantly longer than the original operation. The example being a 14 hour transaction, inevitably using parallelism, that takes 48+ hours to rollback because rollback is mostly single threaded. You would most likely also be churning the buffer pool repeatedly, so you're no longer reversing changes to in-memory pages.
The "this article" url:
Boy, if the OP wants to duplicate functionality like this, he sure does have his work cut out for him 🙂
Comes down to: why would I want to re-invent the wheel? Especially by myself. Microsoft has a team of developers working on this stuff.
September 24, 2015 at 10:10 am
In the below while loop, as each SQL INSERT is performed (not committed yet), I would build my RESTORE log within an array. Then, if the complete set of INSERTs is COMMITTED, I would output the array of INSERT statements executed successfully to the RESTORE log file. As I build this array, I obtain the date/time and place it within the array along with USER, PC_NODE, and SQL_STATEMENT. This way, the SQL statements are kept in chronological order for use as a RESTORE POINT going forward, or within a range such as between 10 AM and 10:30 AM only.
BELOW CODE is a SNIPPET showing an input file being loaded into an MS-Jet database, placing multiple copies (343 copies) of the same file (King James Bible verses) within the database and using a bogus Bible translation number (1-343) to make the primary index unique.
for ($tr=1; $tr<=343; $tr++) {
    open(BIB, $infile) || do { print "Input file not opened (tr=$tr)\n"; die; };
    $cnt = 0;
    $ret = 0;   #-- initialize to false, i.e. no ODBC, SQL, or Jet engine error yet
    while ($rec = <BIB>) {
        $cnt++;
        if (($cnt % 1000) == 0) {
            print "Processed $cnt rows from the input file (tr=$tr)\n";
        }
        chomp($rec);
        @fields = split(/\|/, $rec);   #-- pipe-delimited input record
        $bk = $fields[0]; $chp = $fields[2]; $ver = $fields[3]; $txt = $fields[4];
        $txt =~ s/'/''/g;   #-- double embedded single quotes so the SQL literal stays valid
        $sqltxt = "INSERT INTO Bible (tr,bk,chp,ver,txt) VALUES ($tr,$bk,$chp,$ver,'$txt')";
        $ret = $db->Sql($sqltxt);   #-- Win32::ODBC returns a true error code on failure
        if ($ret) {
            $error = $db->Error();
            print "$error - input line $cnt: $sqltxt\n";
            last;
        }
    }
    close(BIB);
    #-- Rollback or Commit the entire 31,102-row set of INSERTs for each copy of the Bible.
    #-- If the *.MDB file size limit is exceeded, an error would have been generated ($ret = true).
    if ($ret) {
        $db->Transact($db->SQL_ROLLBACK);
        print "Aborted Import Operation (tr=$tr)\n";
    } else {
        $db->Transact($db->SQL_COMMIT);
        print "Committed $cnt rows to Database (tr=$tr)\n";
    }
}
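As a minimal sketch of the restore-log array described at the top of this post (the field layout, the flush_restore_log name, and the USERNAME/COMPUTERNAME environment variables are illustrative assumptions, not the author's actual code):
#-- Hypothetical companion to the loop above: accumulate each successful INSERT
#-- with its timestamp/user/node, and only write the batch out after SQL_COMMIT.
my @restore_log;

sub log_statement {
    my ($sqltxt) = @_;
    my ($s, $mi, $h, $d, $mo, $y) = localtime();
    my $stamp = sprintf("%04d-%02d-%02d %02d:%02d:%02d",
                        $y + 1900, $mo + 1, $d, $h, $mi, $s);
    push @restore_log, "$stamp|$ENV{USERNAME}|$ENV{COMPUTERNAME}|$sqltxt";
}

sub flush_restore_log {
    my ($file) = @_;
    open(my $fh, '>>', $file) or die "cannot open restore log: $!";
    print {$fh} "$_\n" for @restore_log;
    close $fh;
    @restore_log = ();   #-- start fresh for the next transaction
}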
erichansen1836 (9/23/2015)
I do not allow concurrent conflicting batch updates to the MS-Jet database which attempt to STEP ON each other's changes. Batch operations only occur SERVER side when end-users are logged out, such as NIGHT operations tasks. Any concurrent BATCH operations launched would be segregated so that they did not affect the same logical group of records. Mostly I would likely use sequential processing, but with the ability to use concurrent processing if it proved a more efficient way of doing it.
I was actually thinking of something much more simple than that.
Two users are editing records which are in some way related. (One is adding stock, the other is subtracting stock).
They both press SAVE at exactly the same time. The reality is that one will be processed before the other.
Let's say that the stock is added first and then the stock withdrawal.
All that is fine, and exactly how we would expect it to be.
But how will you ensure that the transactions which are written to the log are guaranteed to be in the right order, when in fact things happen concurrently?
If you do this in the application then my worry is that:
The first transaction updates the database and then there is a short delay, perhaps the CPU switches processes, whatever.
So the second transaction finishes updating the database and gets to write the log record before the first transaction (particularly given that this is happening in the application, rather than the core of the database)
Debugging this, or perhaps more correctly "building a framework to test this" is very difficult. Hours and hours of testing may never show up a subtle timing issue like this.
Re-play of logs is going to be used almost never. So it will get very little actual real-world usage. And yet it has to work right, first time, every time, and is the most critical piece of the code.
I would be too scared to attempt to write it, that's for sure!
As far as two users editing related records at the same time goes, this would not happen in my user interface, because I have custom manual record locking enforced whereby related records are opened for edit by one person, locking out others until the user holding the lock saves or cancels the updates.
This is going to cover parent/child relationships between tables and rows.
I would not use table CONSTRAINTS (other than INDEXES) because they might interfere with my manual logic to enforce referential integrity in some way. ALSO, I am not sure at the moment that, if I used CONSTRAINTS, I could build a RESTORE log correctly, since I would be doing some TABLE updates manually and allowing Jet to do some through CONSTRAINTS. Can't see it working that way.
Besides, I like the ALL or NOTHING approach wrapping groups of related updates within an ODBC Transaction.
ALL changes take place, or NONE do.
That may be a safer approach for a database like JET used across a NETWORK.
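For what it is worth, here is one hypothetical way such manual record locking is often done against a shared MDB: a Locks table whose unique index on (tbl, rec_key) turns the INSERT into an atomic test-and-set. The table name and columns are assumptions, not the author's design.
#-- Hypothetical manual record lock via a Locks table in the shared MDB.
sub acquire_lock {
    my ($db, $tbl, $key, $user) = @_;
    my $err = $db->Sql("INSERT INTO Locks (tbl, rec_key, locked_by, locked_at) " .
                       "VALUES ('$tbl', '$key', '$user', Now())");
    return !$err;   #-- a duplicate-key error means someone else holds the lock
}

sub release_lock {
    my ($db, $tbl, $key) = @_;
    $db->Sql("DELETE FROM Locks WHERE tbl='$tbl' AND rec_key='$key'");
}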
September 24, 2015 at 10:22 am
erichansen1836 (9/24/2015)
In the below while loop, as each SQL INSERT is performed (not committed yet), I would build my RESTORE log within an array. Then, if the complete set of INSERTs is COMMITTED, I would output the array of INSERT statements executed successfully to the RESTORE log file. As I build this array, I obtain the date/time and place it within the array along with USER, PC_NODE, and SQL_STATEMENT. This way, the SQL statements are kept in chronological order for use as a RESTORE POINT going forward, or within a range such as between 10 AM and 10:30 AM only. BELOW CODE is a SNIPPET showing an input file being loaded into an MS-Jet database, placing multiple copies (343 copies) of the same file (King James Bible verses) within the database and using a bogus Bible translation number (1-343) to make the primary index unique.
for ($tr=1; $tr<=343; $tr++) {
    open(BIB, $infile) || do { print "Input file not opened (tr=$tr)\n"; die; };
    $cnt = 0;
    $ret = 0;   #-- initialize to false, i.e. no ODBC, SQL, or Jet engine error yet
    while ($rec = <BIB>) {
        $cnt++;
        if (($cnt % 1000) == 0) {
            print "Processed $cnt rows from the input file (tr=$tr)\n";
        }
        chomp($rec);
        @fields = split(/\|/, $rec);   #-- pipe-delimited input record
        $bk = $fields[0]; $chp = $fields[2]; $ver = $fields[3]; $txt = $fields[4];
        $txt =~ s/'/''/g;   #-- double embedded single quotes so the SQL literal stays valid
        $sqltxt = "INSERT INTO Bible (tr,bk,chp,ver,txt) VALUES ($tr,$bk,$chp,$ver,'$txt')";
        $ret = $db->Sql($sqltxt);   #-- Win32::ODBC returns a true error code on failure
        if ($ret) {
            $error = $db->Error();
            print "$error - input line $cnt: $sqltxt\n";
            last;
        }
    }
    close(BIB);
    #-- Rollback or Commit the entire 31,102-row set of INSERTs for each copy of the Bible.
    #-- If the *.MDB file size limit is exceeded, an error would have been generated ($ret = true).
    if ($ret) {
        $db->Transact($db->SQL_ROLLBACK);
        print "Aborted Import Operation (tr=$tr)\n";
    } else {
        $db->Transact($db->SQL_COMMIT);
        print "Committed $cnt rows to Database (tr=$tr)\n";
    }
}
erichansen1836 (9/23/2015)
I do not allow concurrent conflicting batch updates to the MS-Jet database which attempt to STEP ON each other's changes. Batch operations only occur SERVER side when end-users are logged out, such as NIGHT operations tasks. Any concurrent BATCH operations launched would be segregated so that they did not affect the same logical group of records. Mostly I would likely use sequential processing, but with the ability to use concurrent processing if it proved a more efficient way of doing it.
I was actually thinking of something much more simple than that.
Two users are editing records which are in some way related. (One is adding stock, the other is subtracting stock).
They both press SAVE at exactly the same time. The reality is that one will be processed before the other.
Let's say that the stock is added first and then the stock withdrawal.
All that is fine, and exactly how we would expect it to be.
But how will you ensure that the transactions which are written to the log are guaranteed to be in the right order, when in fact things happen concurrently?
If you do this in the application then my worry is that:
The first transaction updates the database and then there is a short delay, perhaps the CPU switches processes, whatever.
So the second transaction finishes updating the database and gets to write the log record before the first transaction (particularly given that this is happening in the application, rather than the core of the database)
Debugging this, or perhaps more correctly "building a framework to test this" is very difficult. Hours and hours of testing may never show up a subtle timing issue like this.
Re-play of logs is going to be used almost never. So it will get very little actual real-world usage. And yet it has to work right, first time, every time, and is the most critical piece of the code.
I would be too scared to attempt to write it, that's for sure!
Have you actually implemented something like this where multiple users are accessing a single database (not one .mdb file that is part of the database) and doing updates/deletes/inserts/selects? Have you been able to successfully use it to restore a database?
September 24, 2015 at 10:27 am
Have you actually implemented something like this where multiple users are accessing a single database (not one .mdb file that is part of the database) and doing updates/deletes/inserts/selects? Have you been able to successfully use it to restore a database?
Comes down to: why would I want to re-invent the wheel? Especially by myself. Microsoft has a team of developers working on this stuff.
Lynn Pettis
This type of question has been asked before.
You'll need to be more specific at what you are driving at for me to answer your concerns.
I know you don't plan on using this because of your preference for SQL SERVER.
Are you asking for someone else?
Please clarify which concern in the overall concept is not clear.
September 24, 2015 at 10:44 am
erichansen1836 (9/23/2015)
I checked on this at:
But found one unhappy purchaser who wrote:
I bought it as a Developer Edition, but it only has the functionality of Express: a 5 GB database limit. I can't work with it, so I am going back to SQL Server 2008 R2.
I had written:
It seems you found one case that doesn't work and you believe that person over others. Developer Edition allows more than a 10GB database. Perhaps the purchaser didn't install it correctly.
Please don't treat the rumors or notes a few people have reported as truthful. I'd have to look back at all of your posts, but I know there have been a few things you've put out there with regard to file limitations and scale that are not true. You can't claim this solution works great based on things you believe, yet stay vague about its limitations.
Attached is my Developer Edition. Got a db to 12GB with random data. BOL also shows Developer Edition as having all the capabilities of Enterprise; only the license is different. All the capabilities are the same.
September 24, 2015 at 10:52 am
erichansen1836 (9/24/2015)
Have you actually implemented something like this where multiple users are accessing a single database (not one .mdb file that is part of the database) and doing updates/deletes/inserts/selects? Have you been able to successfully use it to restore a database?
Lynn Pettis
This type of question has been asked before.
You'll need to be more specific at what you are driving at for me to answer your concerns.
I know you don't plan on using this because of your preference for SQL SERVER.
Are you asking for someone else?
Please clarify which concern in the overall concept is not clear.
I am asking for all of us reading this thread. How much more specific can I be? What don't you understand regarding the question?
You are right, I would not use your solution. You wanted a discussion about the pros and cons of your solution. The only way to have that is to know how you have implemented the features of your solution that correspond to the features already built into SQL Server that allow us to build robust, database-centric solutions.
So far you have said nothing more than "You can write" statements, which indicates to me that you haven't. What specific features have you successfully implemented in your solution, and how does it support multiple users in an OLTP-type solution?