Lots of data in RAM

  • Comments posted to this topic are about the item Lots of data in RAM

  • This was a "feature" of the 2nd or 3rd PC I owned--I have kept the badge around for sentimental reasons:

    [image: HPIM2591 (photo of the badge)]

  • In 1974 I began my fourth position in IT as manager of a brand-new installation at a company that was entirely new to IT. We started out operating 24 hours a day, doing data entry on 4 CRTs using an IBM System/3 Model 10 with 32K of memory, a typewriter console, and a processor box the size of two large refrigerator/freezer units side by side. Three removable disk drives, each the size of a modern washing machine, stored 1.25 meg of data apiece on six platters larger than 33 1/3 LP records. Two magnetic tape drives, each also the size of a large refrigerator, with 11" reels of 1/2 inch tape, gave us backup and sequential processing capability. The hard disk units were regularly swapped in and out of the drives depending on which application was needed, running only a single application while data entry people sat and waited.

    We ran with an operating system that consumed 6K of the available memory, leaving 24K for our programs. Since there was no such thing as VM, we wrote RPG and COBOL programs with hard-coded overlays that sectioned code into usable and disposable pieces that were loaded as needed and then swapped out.
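    For anyone who never had to work this way, below is a rough, data-only analogue in C of how an overlay scheme behaved: one fixed region of memory, with segments read in from disk on demand and simply overwritten when the next phase needs the space. The segment file names and the 8K region size are hypothetical illustrations; real System/3 overlays swapped compiled RPG/COBOL code under compiler and linker control, not data blobs.

    /* Simplified overlay-style loader: one shared region, one resident segment. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define OVERLAY_REGION_SIZE (8 * 1024)     /* the single region all segments share */

    static unsigned char overlay_region[OVERLAY_REGION_SIZE];
    static int resident_segment = -1;          /* which segment currently occupies it */

    /* Read segment `id` into the shared region, evicting whatever was there. */
    static int load_overlay(int id)
    {
        char path[64];

        if (id == resident_segment)
            return 0;                          /* already resident, nothing to do */

        snprintf(path, sizeof path, "segment%d.bin", id);  /* hypothetical file names */
        FILE *f = fopen(path, "rb");
        if (!f) {
            perror(path);
            return -1;
        }
        memset(overlay_region, 0, sizeof overlay_region);
        size_t n = fread(overlay_region, 1, sizeof overlay_region, f);
        fclose(f);

        resident_segment = id;
        printf("loaded segment %d (%zu bytes) into the overlay region\n", id, n);
        return 0;
    }

    int main(void)
    {
        /* Each "phase" of the program needs a different segment, but only one
         * ever occupies memory at a time -- the essence of overlaying. */
        for (int phase = 0; phase < 3; phase++)
            load_overlay(phase);
        return 0;
    }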

    Data entry folks manually keyed orders and printed hard-copy picking lists for goods that the warehouse picked and loaded on trucks around the clock, after which we printed three-part carbon-paper invoices to go out with the drivers. Hundreds of hard-copy invoices were manually posted daily to ledger cards by posting machine operators and then filed in folders in a rank of something like twenty-five 4-drawer file cabinets.

    Today the laptop I'm on is many times faster and more powerful than the huge box system we ran the whole business on. It has 16GB of memory and 2TB of hard disk internally, and I'm considering replacing it soon. It runs SQL Server very nicely for me.


    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

  • As it would turn out, I am actively in the process of doing just this. I also think that the current market dynamics will not allow the serious developer to avoid these considerations.

    I have a desktop and a laptop, with 64GB and 32GB respectively. To be candid, a lot of that is currently wasted because I did not think it through well enough. I had been using the desktop for a remote gig (laptop neglected), then went through a RIF and found myself needing to take a computer with me to client locations (desktop neglected). The result: misplaced code and a glaring realization that I needed additional developer discipline. All of this new discipline, such as source control, file management, and so forth, now had to be managed not by another group, but by me.

    And we now need to think in terms of multi-platform development processes, plus the cloud, distributed teams, and so on. The desktop + laptop combination introduces an additional layer of potential problems, essentially centered on keeping the internal development cycle synchronized. The new, more powerful tools have helped immensely, but they also bring the DevOps thought process to the developer. For example, setting up a home office that allows for both private, non-work-related activity and gainful employment: separate zones, security, and all of the rest. Also included in this process were the emerging changes due to integrating communication via mobile phone, that is, not losing calls on one hand, but also being able to control who can contact whom.

    In other words, I am currently of the opinion that more RAM is only one part of the challenge developers ought to consider in today's world of devolving and atomizing realities. Think of the developer and the dev process in terms of IoT. We are now components. We should prepare. I am also of the opinion that remoting will only increase.

    My current thoughts (and I'd greatly appreciate any suggestions or advice to the contrary) are as follows; this is an incomplete list:

    • Laptop with 128GB minimum

      • Docking station
      • VM of some sort

        • I have VMWare Pro 15
        • Bare metal?
        • [fill in the blank]

      • Multiple Operating Systems

        • Windows 8, 10 at a minimum

          • Configured with average amounts of disk and memory

        • Mac

          • I have a Hackintosh running; thinking of a separate real Mac for real testing

        • *nix

          • What flavors, etc.

      • Large RAM disk.

        • An old trick that might add a bit of speed (a quick way to measure whether it actually helps is sketched after this list).

      • Bluetooth capability
      • Automated complete imaging, probably offsite.

    • Home-based PBX

      • Asterisk
      • VOIP
      • Google Voice for call routing

        • Would allow for VOIP calls through BT while onsite
        • Would allow for voice-to-text
        • Would aid in managing unwanted calls via a whitelist

    • VPN

      • Home system VPN-capable.

        • I am out and about and need something

    • Zones in the Home

      • If possible, get a synchronous feed.

        • I currently have a 1Gb synchronous feed; I could get 10Gb, but no current justification exists.
        • I really like the Ubiquiti equipment.

          • They allow for PBX integration with Google Voice (well, the last time I looked; who knows now 🙂 )
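    On the RAM disk item above: below is a minimal sketch in C of how to check whether it actually buys anything for a given workload. It times the same sequential write against two paths passed on the command line, one on normal storage and one on the RAM disk. The paths, the 256 MiB test size, and /dev/shm as a typical Linux tmpfs mount are all assumptions for illustration; your RAM disk tooling will differ by platform.

    /* Compare sequential write speed on two paths, e.g. ./ramtest /tmp/x /dev/shm/x */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define CHUNK (1 << 20)          /* 1 MiB per write */
    #define TOTAL_MIB 256            /* total data written per target */

    /* Write TOTAL_MIB of data to `path`, flush it, and return elapsed seconds. */
    static double time_write(const char *path)
    {
        static char buf[CHUNK];
        memset(buf, 0xA5, sizeof buf);

        FILE *f = fopen(path, "wb");
        if (!f) {
            perror(path);
            return -1.0;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < TOTAL_MIB; i++)
            fwrite(buf, 1, sizeof buf, f);
        fflush(f);
        fsync(fileno(f));            /* push past the page cache for a fairer comparison */
        fclose(f);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <disk-path> <ramdisk-path>\n", argv[0]);
            return 1;
        }
        for (int i = 1; i < 3; i++) {
            double s = time_write(argv[i]);
            if (s > 0)
                printf("%-30s %6.2f s  (%.0f MiB/s)\n", argv[i], s, TOTAL_MIB / s);
        }
        return 0;
    }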

    So, YES, get as much RAM as you need. If at all possible, plan for growth, but realize that the 'bigger picture' exists and, to the extent you are able, cover all of your bases.

    Regards,

    Doug

  • It might seem that cheap huge memory, multi-core client processors, and all-flash/SSD storage have met all our hardware performance objectives. But along the route from twenty years ago to today, we took a dead-end path on memory. Back then, memory was desperately needed to reduce IO to an achievable level. Today, flash-NVMe storage systems can drive 1M IOPS. While there is strong value in having large memory to reduce IOPS to 100K, the incremental value of huge memory to further drive IOPS down to 10K is only minor. Anything below 30K IOPS is really just noise.

    So is there anything wrong with the massive memory typical in recent-generation systems (client and server)? Probably not, nor does reducing memory have a positive effect, aside from cost. The main deficiency in recent-generation systems is that most of the compute capability of modern processors is wasted in dead cycles waiting for round-trip memory accesses. Many years ago, DRAM manufacturers told system vendors that the multi-bank approach in SDRAM and DDR could scale memory bandwidth going forward at a moderate cost burden. The choice was between: 1) having the minimum number of banks necessary to support the target bandwidth, or 2) having many more banks to allow for low latency as well as bandwidth capability. System vendors, thinking heavily in terms of the needs of database transaction processing, were unanimously(?) of the opinion that only the lowest-cost option that could meet the bandwidth target was required.

    Today, we have essentially a mono-culture in the choice of DRAM for main memory, in that all (contemporary) products of a given DDRx generation employ the same number of banks. Currently, there are 16 banks in DDR4. Over twenty years, memory latency at the DRAM interface has been essentially unchanged at around 45ns for the random-access full cycle time (tRC). There are distinct memory products for graphics, network switches, and low power, each targeted to the objectives of their respective environments.

    We need to admit that the direction of more memory capacity at the lowest cost, followed for the last twenty years, has been pushed far beyond the true requirements to ridiculous levels. Consider a system with 2 x 28-core processors and the standard 24 x 64GB DIMMs for 1.5TB of memory. A year ago, the 64GB DIMM was $1,000-$1,300 each. Today the 64GB DIMM is under $400 each. In reality, even very heavy workloads could probably run on 256GB of memory with proper tuning and flash-NVMe storage. Regardless of whether the 64GB DDR4 ECC module is $1,300 or $330, the cost difference between 1.5TB and 256GB of system memory (roughly twenty 64GB DIMMs, or about $8,000 at today's prices) is inconsequential after factoring in the database engine per-core licensing cost.

    If our budget for memory were on the order of $10,000 and we admit that we probably only need about 256GB, i.e., allowing a cost budget of about 4X greater per GB than conventional DDR4 memory, what would be possible? Some years ago, there was an RL-DRAM product/technology that had 16 banks when DDR3 had 8, at perhaps a 2X die-area penalty per unit capacity. A modern version of RL-DRAM would probably have 64 banks. If I understand correctly, the full-cycle latency of RL-DRAM was under 10ns? The Intel eDRAM used for graphics memory actually has 128 banks with a cycle time of 3ns? Both are much better than the conventional DRAM cycle times of 40ns+.

    Would the benefit of employing very low latency memory outweigh the trade-off of lower capacity and higher cost per capacity? In the typical multi-processor (socket) system, probably not. This is because the overall system memory latency to the individual core has large elements beyond the latency at the DRAM chip interface, even with integrated memory controllers. However, a single-die processor in a single-socket system could benefit substantially from low latency memory, even at much smaller capacity. The difference in cost between conventional large-capacity, high-latency memory and smaller-capacity, low-latency memory is probably a wash, but the value of being able to meet a specific performance objective with many fewer cores results in a big gain from reduced software per-core licensing. In addition, anomalous performance quirks are far less likely on a system with uniform memory than on non-uniform memory.

    Note: a good chunk of overall memory latency occurs in the L3 cache. On the Intel Xeon SP (Skylake and Cascade Lake), L3 introduces about 18ns of latency. Perhaps the only meaningful purpose of the L3 in Intel processors is to facilitate cache coherency, as it probably has little net cache-hit-rate benefit. Intel did mention that they are going to re-think the L3 strategy for a future-generation processor?
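    To make the dead-cycle point concrete, here is a small pointer-chasing sketch in C: each load depends on the previous one, so the processor cannot hide the round trip to memory, and the time per hop approximates the loaded memory latency (L3 plus memory controller plus DRAM cycle effects). The 64 MiB buffer and the hop count are arbitrary assumptions; shrink the buffer to fit inside L3 and the reported number drops sharply.

    /* Pointer-chase microbenchmark: measures average latency of dependent loads. */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024 / sizeof(size_t))   /* ~64 MiB of slots, well past L3 */
    #define HOPS 20000000L                          /* dependent loads to time */

    int main(void)
    {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Build one random cycle over all slots (Sattolo's algorithm) so every
         * hop lands on an unpredictable cache line and prefetchers can't help. */
        for (size_t i = 0; i < N; i++)
            next[i] = i;
        srand(12345);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (((size_t)rand() << 15) ^ (size_t)rand()) % i;  /* j < i keeps it one cycle */
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        volatile size_t p = 0;                      /* volatile keeps the loop from being optimized away */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long h = 0; h < HOPS; h++)
            p = next[p];                            /* serially dependent loads: no overlap possible */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / HOPS;
        printf("average latency per dependent load: %.1f ns\n", ns);

        free(next);
        return 0;
    }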

  • I had the opportunity to build my first computer. I hand-soldered all the diodes, resistors, capacitors, transistors, and IC sockets. It had an RCA CDP1802 8-bit processor, 256 bytes of memory (NO G, NO M, not even a K), a hexadecimal keypad for input, and two 7-segment LEDs as the output. I have been a computer junkie ever since.

    Nowadays I have a three-year-old ASUS ROG GL752VW-DH74 that I have juiced up to 32GB of RAM, a 512GB M.2 SSD, and a 512GB SATA III SSD (both Samsung EVOs).

    Yes, I am still very happy with this system, but I will probably get a new one in about 6-8 months.

  • I had that same machine, an RCA COSMAC ELF--from an ad in Popular Electronics! Then an IMSAI 8080.  Sounds like we ate a lot of the same electrons...

  • Joe,

    I would prefer to spend a little more on faster throughput (the L3 cache you noted) and, of course, run some ROI calculations, just to be sure. I suppose many of those types of decisions would also depend on the type of processing taking place. And, in this case, I am presuming this scenario would be more appropriate for the production and QA servers. I would want to have a good grasp of how things ran on a server built exactly like the production version. But then we have Erv, Bradley, and Nordlaw, the accounting trolls. 🙂

    For a development box, it strikes me that I would either simulate a slower computer or use older ones as testbeds to aid in gauging the client experience.

    Overall, after 30 years, speed is still king.  OK, ECC RAM too. 🙂

     

  • Software developers making apps blazingly fast is the hardware improvement I most want to see.

  • Good ole TRS-80 days. I remember them well. I upgraded from 4K to 16K and thought I had all the memory I'd ever need, until I got the expansion interface and plugged in another 32K (at a cost of about $800: the 16K x 1 chips were $50 each, and it took 8 of them per 16K). 48K of RAM, a 70K floppy disk, and a 10MB hard drive that weighed over 4.5 pounds and cost me $1,900 (I still had the receipt until a couple of years ago, when it faded so much it was just a yellow piece of paper). What more could a guy want!!!!

  • Michael,

    I agree. As hardware becomes faster and faster, the software being developed becomes more and more bloated! The "hello world" program, which compiled into 1K of code 20 years ago, is now 100MB because of all the included (and unused) libraries. I realize that software today has many more capabilities than what we had 20-30 or more years ago, but I recall that back in the CP/M days, WordStar and MagicWand could do a lot of what today's word processors can, and they fit on a single floppy disk. Now you install your word processor from a DVD because all the code won't fit on a CD.

  • The future of database applications is cloud hosting, virtualization, and horizontal scaling.

    It's kind of like debating whether to invest $,$$$ in a riding lawnmower, hoping it will cut your weekly chore down from two hours to one - versus hiring a couple of guys to do the job for $$ every two weeks.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Hi Steve,

    Ah yes, nostalgia. I remember using a 110-baud acoustic-cup modem and everything in between up to a "modern" 56K Zoom. I remember buying a 40MB SCSI HDD for my Amiga 2000. It cost a king's ransom at the time. I owned just about every Commodore product before that other than the 1000 and the Smithsonian-relic PET. Having worked with diskless 286 machines that booted from an EEPROM to the Novell server, and with the blazingly fast 486DX boxes at the company I worked for at the time, my first PC was a 486DX2-66MHz from Gateway. I believe it had 4MB of SIMM memory total, too. Remember having to set the jumpers on cards for IRQ and DMA, then fiddling with AUTOEXEC.BAT and CONFIG.SYS to get everything loading in the right order for the expansion cards to work? Yeah, those were the days.

    Fast forward to the present day, and to answer your curiosity about how many crave more powerful machines: for turning a milestone age, I bought a new 17" UHD laptop with a 6-core i7, 64GB of RAM, and three 1TB NVMe SSD modules to replace my old Alienware. Not for gaming, but rather as a development machine (Docker, SQL Server, etc.).

  • Doing a lot of Power BI work lately. Too expensive RAM? Sure, that's a thing. Too much RAM? Not a thing.

    Brian

  • Ahhh, Computer Shopper! Thanks for the memory! I remember I had an "Internet Directory" book that looked like a phone book with all of the websites listed (supposedly) in it. Yuk yuk ...
