
The Long-Term Future of Solid State Storage


September 22, 2015, SNIA Storage Developer Conference, Santa Clara, CA—Jim Handy from Objective Analysis talked about the future of solid state storage and the changing nature of both memory and storage. New technologies will enable architectures better suited for the data needs of the future.

In the early days of computers, the industry established a distinct difference between memory and storage. Memory was configured in bytes and was used for main memory and cache. Storage was organized in blocks for disc, tape, DVD, SAN/NAS, and now cloud. The role of flash changed some of these divisions by allowing persistent memory to be used for read-often, write-rarely functions, and also as fast storage.

The divisions between bytes and blocks, or memory and storage, only make new architecture development harder. Storage blocks are 512-4,096 bytes, which requires some processing on every store operation to package loose bytes into blocks. Even memory is not truly byte oriented: DRAM moves data in 32-64 byte bursts, and CPU cache lines are 64 bytes. Except for internal processing, almost nothing is done at the byte level.
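As a rough illustration (mine, not the speaker's), the C sketch below shows the read-modify-write that block storage forces on a small update: patching five bytes still moves two full blocks across the interface. The block device here is simulated with an in-memory array.

    /* A minimal sketch (my illustration, not from the talk): on a block
     * device, updating even a few bytes forces a read-modify-write of a
     * whole block. The "drive" is simulated with an in-memory array. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 4096               /* common block size; 512 B also used */

    static uint8_t disk[16 * BLOCK_SIZE]; /* stand-in for a block device */

    static void read_block(uint64_t lba, uint8_t *buf)
    {
        memcpy(buf, disk + lba * BLOCK_SIZE, BLOCK_SIZE);
    }

    static void write_block(uint64_t lba, const uint8_t *buf)
    {
        memcpy(disk + lba * BLOCK_SIZE, buf, BLOCK_SIZE);
    }

    int main(void)
    {
        uint8_t block[BLOCK_SIZE];
        uint64_t offset = 5000;           /* byte address to patch */
        const char patch[] = "hello";

        read_block(offset / BLOCK_SIZE, block);           /* whole block in  */
        memcpy(block + offset % BLOCK_SIZE, patch, sizeof patch - 1);
        write_block(offset / BLOCK_SIZE, block);          /* whole block out */

        printf("updated %zu bytes, transferred %d bytes\n",
               sizeof patch - 1, 2 * BLOCK_SIZE);
        return 0;
    }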

The other major difference between storage and memory is that storage is persistent, while memory, after the era of magnetic cores, is volatile. Flash is again an outlier, as it provides non-volatile storage that is orders of magnitude faster than the other storage technologies. The entry of storage-class memory (SCM) built on the new non-volatile devices is going to change everything.

Solid state storage devices start with SSDs, non-volatile memory behind a storage interface. The concept has been around since the late 1970s, with products following the normal Moore's law price/density curves. Storage Tek produced a 45 MB, 600 us SSD in 1978 which cost $400K. In 1989, EMC released a 4 MB, 500 IOPS, 100 us drive for $34K. In 1997, Quantum created a family of devices between 134 MB and 3.2 GB with 9K IOPS and 50 us access for under $55K, and in 1998 Texas Memory Systems came out with a 16 GB drive delivering 50K IOPS per channel for $50K.

To date, one of the primary contributions to latency in HDDs and SSDs has come from the storage drive itself. HDDs are limited by the rotational speed of the disc, and SSDs by read, transfer, and miscellaneous SSD functions. A smaller set of contributions comes from link transfer, platform and adapter, and software. Even in an SSD, these functions add tens of microseconds to the transfer.

These delays can be reduced significantly by changing the chip interface, system interface, and media. Starting with a base of MLC NAND on SATA 3 and ONFi 2, a change to ONFi 3 shrinks the transfer time. Replacing the SATA interface with a PCIe interface shrinks the link transfer, platform, and adapter times, which accounts for the faster transfers from the same MLC NAND storage media.

In the future, next-generation NVM devices on PCIe will shrink total latency to under 20 us, leaving the bulk of the latency in software. Standards groups such as NVMe and STA will address the link transfer and the platform and adapter components, while SNIA will be working on the software aspects.
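To see why software becomes the dominant term, consider the back-of-the-envelope C sketch below; the component latencies are illustrative assumptions of mine, not figures from the talk.

    /* A back-of-the-envelope latency sketch. All component values are
     * illustrative assumptions, not figures from the talk; they only show
     * how the software share grows as media and link latencies shrink. */
    #include <stdio.h>

    struct budget {
        const char *config;
        double media_us, link_us, platform_us, software_us;
    };

    int main(void)
    {
        struct budget b[] = {              /* hypothetical numbers */
            { "MLC NAND, SATA 3",     90.0, 25.0, 20.0, 15.0 },
            { "MLC NAND, PCIe/NVMe",  90.0,  3.0,  5.0, 15.0 },
            { "Next-gen NVM, PCIe",    2.0,  3.0,  5.0, 15.0 },
        };

        for (int i = 0; i < 3; i++) {
            double total = b[i].media_us + b[i].link_us
                         + b[i].platform_us + b[i].software_us;
            printf("%-22s total %6.1f us, software share %3.0f%%\n",
                   b[i].config, total, 100.0 * b[i].software_us / total);
        }
        return 0;
    }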

These changes show a path ahead to free flash from drive interfaces. PCIe and NVMe have made progress in these areas and are laying the groundwork for newer technologies. As a result, storage will be designed for cost, not persistence, in the future.

Another reason for the drive to change technologies is that DRAM transfer rates have hit a plateau. DDR4 is the last generation to see increases in transfer rates, and those came at the cost of very low signaling voltages and point-to-point signals. The follow-on technologies offer little in the way of additional bandwidth and add increasing limitations on performance and signaling. The best approaches appear to be the Hybrid Memory Cube (HMC) or High Bandwidth Memory (HBM).

As a result, everything is pointing towards fixed memory sizes, and DRAM is no longer the only memory upgrade path. NAND is cheaper on a per-bit basis, and the new NV memories will fit between DRAM and NAND. Future memory and storage systems will include everything: DRAM, NVM, NAND, and HDDs. One technology will not kill off the others.

The new memories are faster than NAND, and most offer more symmetric read and write times than flash. The new NV memories are also merging the functions of storage and memory, creating opportunities for more agile architectures. (See figure.)


The different technologies fill a range of user storage needs with various performance, capacity, and cost parameters.


The result of all the changes in technologies and architectures is a new component, storage-class memory. SCM will combine the benefits of solid-state memory, the performance and robustness of DRAM, and the archival capability and low cost of hard-disc magnetic storage. These persistent memories will disrupt the entrenched thinking on possible and available latency budgets for storage.

One example is a change from today's app-to-file-system-to-disc-driver-to-disc access flow to one where the app directly accesses memory-mapped files in persistent memory, eliminating the file system and disc driver latencies. The computer of tomorrow will have a fixed DRAM size, made of stacked packaging of DRAM dice, and upgradeable NVM that will be the equivalent of DIMMs.
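A minimal C sketch of that direct-access model appears below. The pmem file path is hypothetical, and a real system would map a file on a DAX-capable file system, but the load/store flow is the point: after mmap(), each access bypasses the file system and block-driver layers entirely.

    /* A minimal sketch of the memory-mapped model. The path /mnt/pmem/data
     * is hypothetical; a real system would map a file on a DAX-capable
     * file system. After mmap(), the app uses plain loads and stores,
     * with no file system or block-driver work on each access. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/pmem/data", O_RDWR);   /* hypothetical pmem file */
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(p, "persistent update", 17);        /* direct store, no write() */
        msync(p, len, MS_SYNC);                    /* flush to the medium      */

        munmap(p, len);
        close(fd);
        return 0;
    }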

The storage system will include both flash and disc, with the flash on PCIe or a bus of its own. Magnetic drives will continue to exist for mass storage, as there is no foreseeable price crossover for high-density, long-term storage. Finally, the SCM software will eventually get to the point where it will contribute to overall system performance.
 

