These drives are software that simulates a storage drive using the computer's system memory. New computers using DDR4 memory can move data at rates of 17GB/s to 25.6GB/s, or 136Gb/s to 205Gb/s. So, RAM Drives are WAY faster than any other storage media.
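To see where those gigabit figures come from, here is the simple conversion, using the DDR4 bandwidth numbers above (a byte is 8 bits):

```python
# Convert memory bandwidth from gigabytes/s to gigabits/s (1 byte = 8 bits).
def gbytes_to_gbits(gb_per_s):
    return gb_per_s * 8

for gb in (17, 25.6):
    print(f"{gb} GB/s = {gbytes_to_gbits(gb)} Gb/s")
# 17 GB/s  -> 136 Gb/s
# 25.6 GB/s -> 204.8 Gb/s, rounded to 205 Gb/s
```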
The problem with RAM Drives comes at computer startup and shutdown. Since a RAM Drive lives in volatile memory, turning the computer off erases the memory and whatever it held is lost. To make them useful, software writes the RAM Drive's contents to SSD or hard disk storage at shutdown, which takes time and slows shutdown. Then at startup it has to recreate the drive and copy the contents back from storage, lengthening startup time.
A Second Life viewer cache can use 10GB of space. That requires 10GB of memory. You still need system memory for Windows. The least Windows can run in is about 2GB; 4GB is preferred, and more is better. So, you need a motherboard that supports a healthy amount of RAM to consider using a RAM Drive. It is common for new motherboards to support 64GB of RAM.
That 64GB of RAM can cost US$255 at the low end to $820 at the faster high end. A 128GB M.2 SSD currently tops out around $130. So, we have a challenge deciding on a price-benefit ratio we like. SSDs can cost about $1/GB of storage. A RAM Drive is going to cost $3.98/GB to $12.81/GB of storage. That is 4 to 12 times more money for, in my case, roughly 38 times more performance. Not a bad deal.
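The per-gigabyte figures above fall straight out of the prices quoted. A quick sketch of the arithmetic, using those same prices:

```python
# Prices from the comparison above: 64GB RAM kits at $255 (low end)
# and $820 (high end); a 128GB M.2 SSD at roughly $130.
ram_capacity_gb = 64
for kit_price in (255, 820):
    print(f"RAM: ${kit_price / ram_capacity_gb:.2f}/GB")
# RAM: $3.98/GB
# RAM: $12.81/GB

ssd_price, ssd_capacity_gb = 130, 128
print(f"SSD: ${ssd_price / ssd_capacity_gb:.2f}/GB")
# SSD: $1.02/GB
```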
The advantage that RAM Drives have in price:performance ratio is going to change. We are at the point where getting more RAM on the motherboard is cost prohibitive. Motherboards, CPUs, memory controllers, and space limitations all conspire to discourage adding more RAM. It will increase. But, I think we will see SSDs moving ahead faster. Motherboards already use layers of printed circuitry. Adding another layer for more PCIe channels, or adding Thunderbolt to replace PCIe, is already happening. SSDs are getting faster, bigger, and cheaper. I expect to soon see SSDs with half the speed of RAM Drives at 1/4 the cost.
It is possible to get too smart with SSD tech. At a time when storage space was expensive, data compression was used to save space. When CPUs became way faster than storage devices, compression was used far more often to reduce the amount of data written to and read from storage. A cost vs speed thing. We see that happening in today's SSD devices, but with a twist.
Some SSD drives have built-in compression. You'll never even know it is there. Some manufacturers never even mention it in their promotional material. You find out about it in the technical reviews or spec sheets.
In general, we want to get compression out of the way to squeeze out the last bit of speed. We don't want our CPU spending time compressing and decompressing data. But SSDs are slower at writing data than at reading it, and the controller chips in the SSD can compress data and write the smaller result faster than they can skip compression and write the uncompressed data. The same holds on read. So, in this case we want the compression because overall it is faster.
There is no need to use the operating system's data compression features with SSDs. The result would often be a double compression, with the second compression being work for very little return and a probable loss of performance.
An initial compression can significantly reduce the amount of data in a reasonable time. The second compression is working with 'hard' data, stuff that is difficult to compress further, meaning it can only shrink it a little while requiring about the same time as the initial pass. Result: a few percent decrease in data volume at the cost of roughly doubling the time spent compressing.
The takeaway on compression is: avoid using system compression with your SSDs. Only consider it if you are CERTAIN your SSD is not already compressing the data it stores.