September 17, 2015 at 11:55 am
So now we're saying 32 bit processors can't seek past 4 gigs? Tell me more!
edit: Seriously, I don't write big files, so I haven't had to deal with this; where's the limitation?
It is an operating system issue. Your operating system has to support Huge Files and it has to support DOUBLE INTEGERS. Windows 7 Home Premium evidently does not, as I was not able to get them to work.
September 17, 2015 at 12:06 pm
erichansen1836 (9/17/2015)
So now we're saying 32 bit processors can't seek past 4 gigs? Tell me more!
edit: Seriously, I don't write big files, so I haven't had to deal with this; where's the limitation?
It is an operating system issue. Your operating system has to support Huge Files and it has to support DOUBLE INTEGERS. Windows 7 Home Premium evidently does not, as I was not able to get them to work.
File System, not OS. FAT32 caps out at 4GB files but I'd really really really hope you're not using that.
Most people out there now with Windows 7 will have page files bigger than 4GB.
September 17, 2015 at 12:14 pm
<shaking head>
I am going to the Tent in the Desert, and you sir are not invited.
You still REFUSE to answer my questions. You aren't worth wasting any more time on.
September 17, 2015 at 12:19 pm
<shaking head>
I am going to the Tent in the Desert, and you sir are not invited.
You still REFUSE to answer my questions. You aren't worth wasting any more time on.
When the going gets tough, those that aren't tough get going.
Or to put it another way...
Only the boring are bored. And the bored are boring too.
Bye Lynn!
September 17, 2015 at 12:37 pm
PHYData DBA (9/17/2015)
patrickmcginnis59 10839 (9/17/2015)
PHYData DBA (9/17/2015)
erichansen1836 (9/17/2015)
I have not been limited to 4 GB text files since the late 90's.
The amount of terrible analogies, guessed solutions, and zero fact-checked statements in your posts is starting to become very boring...
If you want to reinvent the wheel then please do it. But may I suggest you not go to a message board run by people who make their living from the wheel and post only arguments about how the wheel is wrong.
Perhaps, but that may be because you have figured out a way to get around the INTEGER limitations of 2 GIG.
When you use READ/WRITE/SEEK/TELL types of file I/O operations on fixed-length record TEXT files, in order to SEEK to a particular record location you have to give the location in bytes to the SEEK statement, which accepts a signed INTEGER value limited to 2 GIG.
So what I have done is SEEK up to 2 GIG from the top of the file, but also up to 2 GIG from the bottom of the file, to build my SDBM indexes. I store a positive integer byte offset in the Key/Value pairs of my SDBM indexes to SEEK to from the TOP OF FILE, but if the record's location is past 2 GIG bytes, I store a NEGATIVE INTEGER byte offset from which to SEEK from the END OF FILE.
No, I figured out how to stop using 386 memory architecture. <edited to add the eye roll>
So now we're saying 32 bit processors can't seek past 4 gigs? Tell me more!
edit: Seriously, I don't write big files, so I haven't had to deal with this; where's the limitation?
Was ready to flame on then I read your edit...
Seriously, I do not know what he is talking about, but the 4 gig file limitation was a limitation of the 386 architecture, which could not address more than 4 gig of memory.
The OP was talking about files, not memory, and nothing on the wiki page you referenced discusses the filesize limitation.
I also don't remember the OP mentioning that this is an OS limitation (edit: OK, now he has called it an OS limitation). He actually mentioned #1, the Jet limitation, and #2, Perl's limitation with access using file pointers bigger than 32 bits (I think he specifically mentioned a text file that the SDBM indexes point into, SDBM being a variety of Berkeley DB available to his Perl, I'm guessing).
I'm also guessing that there are some Perl distributions that do have the 2 GB limit, because you actually have a "bigfile" setting you can set when building Perl from source, and it specifically discusses the 2 GB limit. While Berkeley DB itself can probably handle the big files, the OP is talking about his situation, and maybe his Perl distribution shipped without bigfile support and that's the limit he's running into.
My other point is that this post constantly reminds me of something a debate coach used to say:
"Make certain everything you say comes from a place of facts and knowledge."
It really sounds like the OP is actually discussing facts with this issue. He probably is encountering genuine limits with the ActiveState Perl distribution he's using.
From CPAN:
Since Perl 5.6.0, Perl has supported large files (files larger than 2 gigabytes), and in many common platforms like Linux or Solaris this support is on by default.
This is both good and bad. It is good in that you can use large files, seek(), stat(), and -s them. It is bad in that if you are interfacing Perl using some extension, the components you are connecting to must also be large file aware: if Perl thinks files can be large but the other parts of the software puzzle do not understand the concept, bad things will happen.
http://search.cpan.org/~shay/perl-5.20.3/INSTALL#Large_file_support
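One quick way to check whether a given Perl binary was built with large file support is the core Config module (a minimal sketch; the same values also show up in the output of perl -V):
use Config;
#-- uselargefiles is 'define' when the binary was compiled with large-file support.
print "uselargefiles = ", (defined $Config{uselargefiles} ? $Config{uselargefiles} : 'undef'), "\n";
#-- lseeksize is the size of the seek offset in bytes; 8 means 64-bit offsets.
print "lseeksize = $Config{lseeksize}\n";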
September 17, 2015 at 12:41 pm
erichansen1836 (9/17/2015)
<shaking head>
I am going to the Tent in the Desert, and you sir are not invited.
You still REFUSE to answer my questions. You aren't worth wasting any more time on.
When the going gets tough, those that aren't tough get going.
Or to put it another way...
Only the boring are bored. And the bored are boring too.
Bye Lynn!
Okay, sorry, back.
You have got to be kidding ME! You respond to my giving up on your absolute arrogance regarding answering my questions, but you won't even take the time to try to answer them?
Answer my questions! If you can't, ADMIT your ignorance regarding the questions. There is NOTHING WRONG with making such an admission.
September 17, 2015 at 4:39 pm
This is just a ridiculous and extended troll. Unsubscribing. There's no point to any of this.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 17, 2015 at 7:23 pm
Lynn Pettis (9/17/2015)
<shaking head>I am going to the Tent in the Desert, and you sir are not invited.
You still REFUSE to answer my questions
Ditto ... so I'll bring my Billy Can too ...
September 18, 2015 at 12:19 pm
Here is an MS-Jet Engine/ODBC/Win32 Perl database code example, which was requested.
I fill up a single *.MDB database file with as many copies of the King James Version of the Bible as it will hold.
I didn't have various translations of the Bible to do this with, so I chose to use multiple copies of the KJV.
This script reads a delimited text input file of KJV verses for each book and chapter. In order to be able to INSERT multiple copies, I have included a pseudo version number to represent different translations (tr) of the Bible. Once this single *.MDB file is filled, I could start another and another, filling each in turn.
If you want to build a database end-user interface to this database, you could have an ADMIN table that tells the application which MDB file to look in, based upon which translation of the Bible the user wishes to access.
Example:
ADMIN table (determines which *.MDB file to open an ODBC connection to, based on the translation number):

TranslationFrom   TranslationTo   MDB_FileName
1                 343             Bible4x_1to343.mdb
344               600             Bible4x_344to600.mdb
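Below is a sketch of how the application might route on that table (a hypothetical helper; it assumes the ADMIN rows have already been read into an array of hashes):
#-- Hypothetical routing helper: pick the *.MDB file covering a translation number.
my @admin = (
    { from => 1,   to => 343, mdb => 'Bible4x_1to343.mdb'   },
    { from => 344, to => 600, mdb => 'Bible4x_344to600.mdb' },
);
sub mdb_for_translation {
    my ($tr) = @_;
    foreach my $row (@admin) {
        return $row->{mdb} if $tr >= $row->{from} && $tr <= $row->{to};
    }
    return undef; #-- no *.MDB file covers this translation number
}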
#-- ActiveState Win32 Perl source code using Dave Roth's ODBC implementation for Perl.
use Win32;
use Win32::ODBC;
use IO::Handle;
$PWD=Win32::GetCwd(); #-- get current perl application working directory location
$infile="$PWD\\BibleVersesOrig.txt"; #-- pipe delimited text file containing Bible Verses
#-- Example: 1|Genesis|1|1|In the beginning God created the heaven and the earth.
#-- open the MS-Jet Engine ODBC connection to the appropriate *.MDB file.
#-- I could have referenced an ODBC FILEDSN or an MDB file on a network share folder
#-- instead of in the current working directory on the client-side PC.
$db = new Win32::ODBC("FILEDSN=$PWD\\Bible4x.dsn; DBQ=$PWD\\Bible4x.mdb");
if (! $db) {
    print "Bible Database Not Opened\n";
    $error=Win32::ODBC::Error();
    print $error . "\n";
    die;
}
#-- turn off automatic SQL write commits to the database. We will do them manually.
$db->SetConnectOption($db->SQL_AUTOCOMMIT,$db->SQL_AUTOCOMMIT_OFF);
############################################################################################
#-- King James Version of Bible contains 31102 verses of text.
#-- Fill a 3.x or 4.x format *.MDB file with as many copies of the Bible as it will hold.
#-- Because Bible verse text can exceed 255 characters, we store verses in a MEMO data type.
#-- 3.x holds 175 and 4.x holds 343 complete copies of the KJV Bible.
#-- 3.x file 0.99 Gig held 175 * 31102 = 5,442,850 rows which includes the primary index.
#-- 4.x file 1.99 Gig held 343 * 31102 = 10,667,986 rows which includes the primary index.
#-- Note: I only committed entire Bibles (all 31102 rows) or else a Bible was rolled back.
#-- Note: 4.x Memo column compressed using the special COMP or COMPRESSION syntax in the table create.
#-- When creating the table, the FILEDSN needs ExtendedAnsiSQL=1 (the default is off, 0).
#-- 3.x files don't gain any compression.
#-- 4.x has unicode support, so if you don't compress, file is twice as large as 3.x
#-- with same number of rows.
############################################################################################
for ($tr=1; $tr<=343; $tr++) {
    $ret="Y";
    open(BIB,$infile) || do {$ret="N";};
    if ($ret eq "N") { print "Input file not opened (tr=$tr)\n"; die; }
    $cnt=0;
    $ret=0; #-- initialize to False, i.e. no ODBC or SQL or Jet Engine error
    while ($rec=<BIB>) {
        $cnt++;
        if (($cnt % 1000) == 0) {
            print "Processed $cnt rows from the input file (tr=$tr)\n";
        }
        chomp($rec);
        @fields=split(/\|/,$rec);
        $bk=$fields[0]; $chp=$fields[2]; $ver=$fields[3]; $txt=$fields[4];
        $txt =~ s/'/''/g; #-- double up embedded single quotes so the Jet SQL INSERT doesn't break
        $sqltxt="INSERT INTO Bible (tr,bk,chp,ver,txt) VALUES ($tr,$bk,$chp,$ver,'$txt')";
        $ret=$db->Sql($sqltxt);
        if ($ret) {
            $error=$db->Error();
            print "$error - input line $cnt\n$sqltxt\n";
            last;
        }
    }
    close(BIB);
    #-- Rollback or Commit the entire 31102-row set of INSERTs for each copy of the Bible.
    #-- If the *.MDB file row limit was exceeded, an error would have been generated ($ret=TRUE).
    if ($ret) {
        $db->Transact($db->SQL_ROLLBACK);
        print "Aborted Import Operation (tr=$tr)\n";
    } else {
        $db->Transact($db->SQL_COMMIT);
        print "Committed $cnt rows to Database (tr=$tr)\n";
    }
}
#############################################################################################
$db->Close(); undef $db;
exit;
#-- this END block is always executed, on normal or abnormal exit
END {
    sleep 5; #-- so we can read any error messages printed above
    if ($db) {
        $db->Close(); #-- ensures the database connection is closed upon exit or forced exit
        undef $db;    #-- free up memory
    }
}
Here is the code to create the Bible database table and indexes.
Just to be clear: you create the empty *.MDB files using the ODBC Administrator utility, then
create your tables, views, stored queries, indexes, and constraints using ODBC/SQL syntax from a program like Win32 Perl with ODBC support (the Win32::ODBC module). FYI, I used the SQL syntax documented in the Microsoft Access 2007 SQL documents downloaded from the Microsoft Access Developer Network.
use Win32;
use Win32::ODBC;
$PWD=Win32::GetCwd();
$db = new Win32::ODBC("FILEDSN=$PWD\\Bible4x.dsn; DBQ=$PWD\\Bible4x.mdb");
if (! $db) {
    print "Bible Database Not Opened\n";
    $error=Win32::ODBC::Error();
    print $error . "\n";
    die;
}
###############################################################################################################################
$sqltxt="CREATE TABLE Bible (tr INTEGER, bk INTEGER, chp INTEGER, ver INTEGER, rowid AUTOINCREMENT, txt MEMO WITH COMPRESS)";
$ret=$db->Sql($sqltxt);
if ($ret) {
    $error=$db->Error();
    print "$error\n$sqltxt\n";
} else {
    print "Bible Table Created\n";
}
print "##########################################################\n";
###############################################################################################################################
$sqltxt="ALTER TABLE Bible ADD CONSTRAINT Bible_idx1 PRIMARY KEY (tr, bk, chp, ver)";
$ret=$db->Sql($sqltxt);
if ($ret) {
    $error=$db->Error();
    print "$error\n$sqltxt\n";
} else {
    print "Bible Primary (Unique) Index (tr, bk, chp, ver) Created\n";
}
print "##########################################################\n";
###############################################################################################################################
$sqltxt="ALTER TABLE Bible ADD CONSTRAINT Bible_idx2 UNIQUE (rowid)";
$ret=$db->Sql($sqltxt);
if ($ret) {
    $error=$db->Error();
    print "$error\n$sqltxt\n";
} else {
    print "Bible (rowid - AutoInc) Unique Index Created\n";
}
print "##########################################################\n";
###############################################################################################################################
exit;
END {
    if ($db) {
        $db->Close();
        undef $db;
    }
    sleep 5;
}
September 18, 2015 at 1:13 pm
Grant Fritchey (9/17/2015)
This is just a ridiculous and extended troll. Unsubscribing. There's no point to any of this.
Come, come now man. You don't see the joy of maintaining 500+ .mdb files in a 32 bit environment instead of one file in a 64 bit environment? Where's your sense of adventure? 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
September 18, 2015 at 1:19 pm
I'm also guessing that there are some Perl distributions that do have the 2 GB limit, because you actually have a "bigfile" setting you can set when building Perl from source, and it specifically discusses the 2 GB limit. While Berkeley DB itself can probably handle the big files, the OP is talking about his situation, and maybe his Perl distribution shipped without bigfile support and that's the limit he's running into.
Exactly, thanks for this input.
My guess is that my binary distribution of ActiveState Win32 Perl 5.6.1, build 638, was not compiled with bigfile support. If 5.6.0 supports it, I'm not sure why the 5.6.1 binary distribution was not compiled with that support.
I actually can use 4 GIG files with 5.6.1, but the file I/O SEEK statement only accepts INTEGER values up to 2 GIG, not recognizing DOUBLE INTEGER values.
However,
I am happy enough with my workaround of seeking up to 2 GIG from either the top or the bottom of the file, which increases my effective storage range from 2 GIG to 4 GIG for the fixed-length record text files (as partial tables) in my database file system (Joint Database Technology, i.e. text files for huge record data storage, and Perl SDBM files as random-access indexes to those records' byte offsets).
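For anyone curious, a minimal sketch of that seek logic (illustrative names, not my production code; the sign of the stored offset picks which end of the file to seek from):
use Fcntl qw(SEEK_SET SEEK_END);
#-- $offset comes out of the SDBM index; $reclen is the fixed record length.
sub fetch_record {
    my ($fh, $offset, $reclen) = @_;
    if ($offset >= 0) {
        seek($fh, $offset, SEEK_SET) or return undef; #-- positive: from top of file
    } else {
        seek($fh, $offset, SEEK_END) or return undef; #-- negative: from end of file
    }
    my $rec;
    read($fh, $rec, $reclen) == $reclen or return undef;
    return $rec;
}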
FYI, anyone curious about this type of database system might be pleased to know that Win32 Perl offers an inline-bitmap conversion utility to turn bitmaps into inline text. Bitmaps can be stored in text files, with the text then displayed as a picture within your Perl applications. This would be a secure way to store picture files as text if you do not want folks to be able to view them as bitmaps.
Example:
############################################################################
#-- Rainbow icon
############################################################################
$Bitmap2 = newIcon Win32::GUI::BitmapInline( q(
AAABAAEAEBAAAAAAAABoAwAAFgAAACgAAAAQAAAAIAAAAAEAGAAAAAAAQAMAAAAAAAAAAAAAAAAA
AAAAAAAQGRoMFhcWICQYJCsYJisaJzAYKTYSJjQSISgcMTwWLjgZLzobLz0RKDIRKjMNKDIoLS8z
NjUuMzIjLjQjLjYeKS02PUViZG9RV15ma3dtcXtiZ3NVWmNpZ3N8cnloZm1JSUtZV1hnZGRcWF1U
WF5taW5+eX2NiYyTj5SHg4d6dnt4dXyjl57FrbTPtrrWyr9YWGBpZ295dn94dn9nbHSQjJORjJSX
lJyemqODgolGTlA8R0eDfH7Yur/az8bN5NBjZm9wcHiGhY+AhIxpb3iWlqGkoKirpa6tp6+5srvC
tr3Mtrjhwsbp1M/W7NS49+1gYWU3OTs4Ojs5Pj5XW151cnmqoqespam0rbC+s7fRt7rWt7npzMnj
6dG99umr//5dXWFdXF4lLCoOGRcbJyUvNjWSjY28tbWhmptwbW2vmZjkw8Dh38bB8t+p/f6R5v1q
a3KEgYlwbXQ8RUdDS0tgYmOBfH5hX182Pjw6Pz+umJbl2b/C6s6s/vqT5f58sv5rbHB7eoCVkpid
l56qoaebkpU2PDojLisuNDNdWFrRw7DF48Gu+/KX6/59sv12lPJtbHGBfYKUj5SblpymnaKsoqWY
jY2eiou/nqHIsqW907Gt9OWb7v9+t/53ku13hsdzbnSGgIeXkZefl5ynnqa0panIqKjDoKfJrKTF
0Kuw7dic8P+Au/1zk+x6g8ODgqhzb3aIgIiZk5mhmJuqnqG+paTAnaPGpaPBv6Ct28Kc4viBuPd3
k+J2g759gJ2DfI10cHiIg4mXlJuimJuznJ26mqLDoKDLwKGx2LqY1+Z+ruhyjNF3gLN7eph7doJ7
cntycXeKho6dl56pmZ+xkpu7mp/KuKG02bef6O2Du/ZyjdN0fq96eJR3cn56b3R6cXNwcHaIhYyl
l5yvlZ20lZjAsqG10LSi4eSIyPh0mN9zgLt5eJl2cH90bHJ5bW56b29ubnOIgoahj5Wtj5a5pZyz
xKWj3s6P0vZ2pehwh8R0eqV3dYZxbXR1amt4a2pyaGgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
) );
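A hedged sketch of how the inline icon might then be used (this assumes Win32::GUI's SetIcon method and standard Window/Dialog calls; not tested against my old build):
use Win32::GUI();
#-- $Bitmap2 is the Win32::GUI::BitmapInline icon built above.
my $main = Win32::GUI::Window->new(
    -name   => 'Main',
    -text   => 'Inline icon demo',
    -width  => 200,
    -height => 120,
);
$main->SetIcon($Bitmap2); #-- hang the inline icon on the window
$main->Show();
Win32::GUI::Dialog();     #-- enter the message loop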
September 18, 2015 at 2:11 pm
erichansen1836 (9/18/2015)
... I am happy enough with my workaround ...
Definition of workaround: Red flag telling you that you should consider a different solution. 😎
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
September 18, 2015 at 2:25 pm
Definition of workaround: Red flag telling you that you should consider a different solution
Not necessarily.
When Gene Kranz, NASA flight director for the ill-fated Apollo 13 moon mission, said
"I don't care what anything was designed to do, I care about what it can do,"
he was pushing the Apollo engineers to use ingenuity.
"We designed the LEM to land on the moon, not fire its engines for CSM course correction,"
one engineer argued back to Gene.
"Well, we're not going to the moon anymore, are we?", Gene argued back.
LEM = Lunar Excursion Module
CSM = Command/Service Module combined (CM = Command Module, SM = Service Module)
There was an explosion aboard the CSM which forced the moon mission to end early.
Gene considered it too risky to light the engine on the crippled Service Module to return to Earth.
He had the Apollo astronauts fire the LEM engine intermittently to gain speed and correct course for the return trip to Earth.
It was like flying with a dead elephant on the astronauts' backs (i.e. the LEM attached to the CSM), but the astronauts managed.
September 18, 2015 at 3:32 pm
erichansen1836 (9/18/2015)
Definition of workaround: Red flag telling you that you should consider a different solution
Not necessarily.
When Gene Kranz, NASA flight director for the ill-fated Apollo 13 moon mission, said
"I don't care what anything was designed to do, I care about what it can do,"
he was pushing the Apollo engineers to use ingenuity.
"We designed the LEM to land on the moon, not fire its engines for CSM course correction,"
one engineer argued back to Gene.
"Well, we're not going to the moon anymore, are we?", Gene argued back.
LEM = Lunar Excursion Module
CSM = Command/Service Module combined (CM = Command Module, SM = Service Module)
There was an explosion aboard the CSM which forced the moon mission to end early.
Gene considered it too risky to light the engine on the crippled Service Module to return to Earth.
He had the Apollo astronauts fire the LEM engine intermittently to gain speed and correct course for the return trip to Earth.
It was like flying with a dead elephant on the astronauts' backs (i.e. the LEM attached to the CSM), but the astronauts managed.
And yet again you still won't answer my questions.
They are vital and important questions that need to be considered. You are talking to database professionals, and our job is to defend and protect the data. You have not answered a SINGLE question relevant to this area, yet you are the one who wanted a discussion about "downsizing" from an enterprise-class RDBMS (leaving SQL Server Express out of this).
Get real. Start answering the questions you are asked; they mean something to many of us.
September 18, 2015 at 5:11 pm
Of course, Gene was also smart enough to not ask the engineers to make the LEM out of even the best of all tin cans. 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.