There has been much talk of all-Flash arrays and of hybrid arrays, which combine Flash with mechanical disks. On the server side, however, solid-state drive (SSD) deployments currently dominate: they are one of the easiest ways to put Flash storage into service.
SAS/SATA drive formats, PCI Express (PCIe) cards, NVMe-compatible devices (Non-Volatile Memory Express), Dual Inline Memory Modules (DIMMs): there is more than one way to deploy server-side Flash storage. That storage can serve as persistent capacity or as cache, and multiple servers in the same cluster can share it. New features and new form factors, such as NVDIMM, keep appearing.
Drive form factors remain common and come in three sizes: 3.5, 2.5 and 1.8 inches. These SSDs fit into the same bays as mechanical hard drives and are generally hot-swappable.
Some SSDs have the same thickness as mechanical disks; others are thinner. In servers, 2.5-inch SSDs are the most common.
Dell recently announced a rack server model that supports 1.8-inch SSDs. Nine of these drives occupy the same physical space as two 3.5-inch drives. If you are looking to pack a large number of IOPS into a minimal footprint, 1.8-inch SSDs are worth a close look.
Capacities are increasing too. Samsung, for example, offers a 2.5-inch enterprise-class SSD with 3.8 TB of capacity, and in 2015 other manufacturers are expected to follow with high-capacity SSDs of 2 TB and more. Enterprise-class SSDs now outperform 10,000 and 15,000 rpm enterprise hard drives.
For servers, the PCI Express (PCIe) card is another popular format. Installed in a PCIe slot, these cards provide very fast access to storage.
These cards are usually described by their capacity, but also by their physical size, a factor that matters in compact servers: they come in "full height, full length" and "half height, half length" variants, abbreviated FHFL and HHHL respectively.
Performance is excellent: connected directly to the PCIe bus, these cards exhibit extremely low latency.
Their disadvantage? They are confined to a single server, which must be powered off to install or remove them. Many of these SSDs require a PCIe 2.0 x8 slot, although some of the latest products fit into a PCIe 3.0 x4 slot.
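Those slot requirements come down to raw link bandwidth. As a rough sketch, using simplified, assumed per-lane figures (PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding; PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding), a PCIe 2.0 x8 slot and a PCIe 3.0 x4 slot offer nearly the same usable throughput:

```python
# Illustrative PCIe link bandwidth model (simplified, assumed figures):
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (80% efficient);
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding (~98.5%).

def pcie_usable_bandwidth(generation: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link, in GB/s."""
    if generation == 2:
        per_lane = 5.0 * (8 / 10) / 8      # = 0.5 GB/s per lane
    elif generation == 3:
        per_lane = 8.0 * (128 / 130) / 8   # ~= 0.985 GB/s per lane
    else:
        raise ValueError("only PCIe 2.0 and 3.0 are modeled here")
    return per_lane * lanes

print(round(pcie_usable_bandwidth(2, 8), 2))  # 4.0  (PCIe 2.0 x8)
print(round(pcie_usable_bandwidth(3, 4), 2))  # 3.94 (PCIe 3.0 x4)
```

This is why newer cards can drop to four lanes without losing bandwidth: PCIe 3.0 nearly doubles the per-lane rate while also reducing encoding overhead.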
More and more enterprise servers are configured to boot from SSDs. At Demartek, we have been doing this in our lab since 2010, and we appreciate how quickly the operating system starts. Applications also feel more responsive when loaded from SSDs.
Boot drives do not require the same performance levels as mission-critical volumes, so you might want to consider using less expensive SSDs as server boot drives. The performance boost an SSD boot drive provides can also help extend the useful life of a server.
M.2 is a newer format designed for various types of devices mounted inside enclosures, including SSDs. It is a card 22 mm wide whose length varies from 30 to 110 mm. It fits into an M.2 PCIe slot and offers up to 480 GB of capacity, more than enough for a boot drive. This format is already available for desktops and laptops.
Another format comparable to M.2, but a little older, is mSATA. These SSDs are mounted on a card roughly the size of a business card, installed inside the system. mSATA was also originally designed for laptops; while it can be used in servers, it will probably be replaced by M.2 over time.
Another consumer format, the microSD card, is being adopted in the server market. This storage technology is already used in mobile phones and other small electronics, and we should soon find it as a boot device in some servers. Server implementations will likely include two microSD cards for redundancy.
The SuperDOM form factor, short for Supermicro SATA Disk On Module, is a proprietary format available on Supermicro servers. It is a tiny Flash module that plugs into a special SATA socket on the motherboard of newer Supermicro servers. With capacities up to 64 GB, it can serve as a boot drive.
Flash storage connected directly to the memory channel
There are currently two Flash formats on the memory channel: NVDIMM non-volatile memory modules and Memory Channel Storage (MCS).
Both formats use the memory channel for read and write operations on the device. They both fit into standard DIMM slots and provide storage space, but each in their own way.
The NVDIMM module includes DRAM and Flash components, control logic, and an independent power source, typically supercapacitors. The module functions as ordinary DRAM and, in the event of an unexpected power failure or system crash, saves the contents of DRAM to flash. When power returns, the data is restored from flash to DRAM.
NVDIMMs are currently available in capacities from 4 to 16 GB. These relatively low capacities make it difficult to use NVDIMM modules as large storage resources. They are particularly useful for caching, metadata storage, in-memory databases, in-memory queuing, and similar workloads that require the full performance of DRAM, but with persistence.
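The save-and-restore behavior described above can be sketched as a toy model. This is illustrative only: real NVDIMMs perform the copy in hardware, triggered by a power-fail signal, and the class and method names below are hypothetical.

```python
# Toy model of NVDIMM behavior (illustrative, not a real driver API):
# DRAM serves all traffic; on power loss the controller dumps DRAM to
# flash, and at the next power-on it restores the saved image.

class NVDIMM:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # fast, volatile working memory
        self.flash = bytes(size)      # persistent backup area

    def write(self, offset: int, data: bytes) -> None:
        self.dram[offset:offset + len(data)] = data  # normal DRAM write

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.dram[offset:offset + length])

    def power_loss(self) -> None:
        # Supercapacitors keep the module alive just long enough
        # to copy DRAM contents into flash.
        self.flash = bytes(self.dram)
        self.dram = bytearray(len(self.dram))  # DRAM contents are lost

    def power_on(self) -> None:
        # The controller restores the saved image from flash into DRAM.
        self.dram = bytearray(self.flash)

nv = NVDIMM(64)
nv.write(0, b"journal")
nv.power_loss()
nv.power_on()
print(nv.read(0, 7))  # b'journal' survives the outage
```

The key point the sketch captures is that applications see only DRAM speeds; the flash is touched exclusively during the backup and restore steps.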
Memory channel storage uses flash memory in DIMM form as a storage device. These devices are available in capacities of up to 400 GB, with latencies below 10 microseconds. The technology is particularly useful for applications that require extremely low latency.
However, to use memory channel storage, the BIOS or UEFI firmware on the motherboard must be able to determine whether a DIMM slot holds memory or storage, and treat each accordingly.
Some server makers already produce motherboards with this capability. However, Diablo Technologies and Netlist, two major manufacturers in this space, are locked in a legal dispute: the arrival of these products on the market may be delayed pending the court's decision.
Some SSD device formats use multiple interfaces, such as SATA, SAS, and PCIe / NVMe. Others use a single interface, such as SATA or PCIe.
For several years, the SATA interface has been the standard for single-device storage. The classic SATA technology we know today has reached its peak with the 6 Gbit/s interface (roughly 600 MB/s of usable throughput) and will not evolve towards higher speeds.
It will be succeeded by SATA Express, which uses up to two lanes of a PCIe interface to reach 2 GB/s with PCIe 3.0 and 1 GB/s with PCIe 2.0.
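To see where the classic SATA ceiling comes from, here is a minimal calculation, assuming the usual 8b/10b line encoding in which only 8 of every 10 bits on the wire carry data:

```python
# Usable throughput of an 8b/10b-encoded SATA link (simplified model;
# ignores protocol framing overhead beyond the line encoding itself).

def sata_throughput_mbps(line_rate_gbit: float) -> float:
    """Approximate usable throughput in MB/s for a given line rate."""
    bits_per_s = line_rate_gbit * 1e9
    data_bits_per_s = bits_per_s * 8 / 10   # 8b/10b: 80% of bits are data
    return data_bits_per_s / 8 / 1e6        # 8 bits per byte, scale to MB/s

print(sata_throughput_mbps(6.0))  # 600.0 -> the 6 Gbit/s SATA ceiling
print(sata_throughput_mbps(3.0))  # 300.0 -> the older 3 Gbit/s generation
```

SATA Express sidesteps this ceiling by moving to PCIe lanes rather than pushing the serial link faster.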
SATA technology can be used with the M.2, mSATA and SuperDOM drive formats.
Serial Attached SCSI (SAS) has been used as a storage interface for several years, and new versions are on the way.
The current version of SAS supports 12 Gbit/s and allows multiple devices to be connected. Its throughput is expected to double to 24 Gbit/s in the future. The term SAS covers both the SCSI protocol and the underlying physical interface.
It should also take advantage of the PCIe physical interface through SCSI Express, which carries the same SCSI protocol over up to four PCIe lanes. SAS is used primarily in the drive form factor; both SSDs and mechanical hard drives are available with the 12 Gbit/s SAS interface.
NVMe is a software interface designed for SSDs that use the PCIe physical interface. It therefore applies to the drive format, to PCIe SSD cards and to any newer PCIe format such as M.2.
NVMe replaces the traditional SATA and SAS control protocols with a streamlined protocol that runs over the PCIe bus. Performance improves dramatically and latency drops significantly. In our real-world tests, we have seen several gigabytes per second from single NVMe drives and PCIe SSD cards.
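On Linux, you can tell these device classes apart from sysfs attributes: NVMe devices get `nvme*` names, and the `queue/rotational` flag distinguishes SSDs from mechanical drives. The sketch below parameterizes the sysfs root so the logic can be exercised anywhere; the function name is our own.

```python
# Hedged sketch: classify a Linux block device as NVMe, SSD, or HDD
# using the naming convention and the sysfs queue/rotational attribute.
from pathlib import Path

def classify_block_device(sys_block: Path, name: str) -> str:
    """Return 'nvme', 'ssd', or 'hdd' for a device under sys_block."""
    if name.startswith("nvme"):   # NVMe devices are named nvme0n1, nvme1n1, ...
        return "nvme"
    rotational = (sys_block / name / "queue" / "rotational").read_text().strip()
    return "hdd" if rotational == "1" else "ssd"

# Against the real tree this would be called as:
#   classify_block_device(Path("/sys/block"), "nvme0n1")
```

Note that `rotational == 0` only says the device is non-rotating; the `nvme` name prefix is what indicates the streamlined PCIe protocol rather than SATA or SAS.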