
Now where did I put that flash?

For those of us who have been around the IT industry for a while, we have seen a lot of change and an ever-growing set of complexities as we ask our environments to continually do more for us. This has led to increasingly sophisticated software tools that manipulate data in new and interesting ways: SQL databases, columnar databases, GIS, computational analysis, statistical analysis, CAD/CAM, gaming, and the list goes on. Accelerating application performance in these complex environments has become a difficult task, but there are technologies that can speed up your access to disk. These technologies are based on flash and can be implemented in your architecture in various ways to benefit different types of workloads; they are found in the server, the Storage Area Network (SAN), and the storage array.

Why am I focusing on storage? Let’s take a brief look at what is happening in the typical data center:

  • A 2.0 GHz CPU clock cycle is roughly 0.5 nanoseconds (ns)
  • Memory latency ranges from 50 ns to 200 ns
  • Average latency to a storage array is 5 milliseconds (ms)

If we change the scale and make 0.5 ns = 1 second we then have:

  • CPU clock cycle = 1 sec
  • Memory latency  = 1.7 to 6.7 minutes
  • Disk latency    = 116 days
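The scaled figures above are simple unit conversions; a quick sketch (using the post's illustrative numbers, not measurements) reproduces them:

```python
# Rescale real latencies so that one 0.5 ns CPU clock cycle = 1 second.

CYCLE_NS = 0.5                     # one clock cycle of a 2.0 GHz CPU
SCALE = 1.0 / CYCLE_NS             # real nanoseconds -> "scaled seconds"

def scaled_seconds(latency_ns: float) -> float:
    """Convert a real latency in ns to the 1-cycle-equals-1-second scale."""
    return latency_ns * SCALE

# 50-200 ns of memory latency becomes minutes; 5 ms (5e6 ns) becomes days.
print(f"Memory: {scaled_seconds(50) / 60:.1f} to {scaled_seconds(200) / 60:.1f} minutes")
print(f"Disk:   {scaled_seconds(5e6) / 86_400:.0f} days")
```

At this scale a 5 ms disk access really does stretch to roughly 116 days, which is why the processor's view of a disk trip is "an eternity."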

When you look at the numbers in a comparative sense you start to understand why storage becomes the area of focus. To a processor, every trip to a disk drive is like an eternity and a lot of clock cycles are wasted.

The types of flash available are based on either Single-Level Cell (SLC) or Multi-Level Cell (MLC) technology. SLC offers higher performance but lower capacity; MLC offers higher capacity but lower performance. These flash technologies are packaged either in standard memory form factors or as Solid State Drives (SSDs). Both forms are used to speed up access to the back-end disk and can be deployed at various points in the architecture.

As a general rule, the closer you get to the processor, the faster the access will be. However, it is also true that the closer you get to the processor, the more expensive the solution will be. Since this is the case, I will start near the processor and work my way out.

When you create data, it is typically created in the server's memory and then copied out to disk. It will remain in memory until aging, or newer data that needs the space, pushes it out; if you access it again before that happens, your access time will be very good. You can also add memory on the server's PCI bus. This can take different forms, Random Access Memory (RAM) or an SSD, and the form determines the access protocol the CPU uses to reach this memory. A number of manufacturers today offer this type of PCI-based flash, and some also provide software that enhances performance in various ways. The enhancements can be one or a combination of the following:

  • Deduplication – saving only one copy of the actual data; all secondary copies become pointers to this copy. The single copy is maintained until no pointers remain.
  • Compression – compressing the data being stored in the device; the compression ratio will vary with the type of data being stored. This is sometimes used in conjunction with deduplication and can be done before or after the deduplication process.
  • Extension of array-based pre-fetch algorithms – the array where the data will eventually be stored takes this memory into account in its pre-fetch algorithms and sends the data it believes will be accessed next to this memory to save access time.
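As a rough illustration of the deduplication idea described above, here is a toy content-addressed store; the class and its layout are hypothetical, and real devices do this on fixed-size blocks inline in firmware:

```python
import hashlib

class DedupStore:
    """Toy deduplicating store: each unique payload is kept once; duplicate
    writes just add a reference (pointer) to the stored copy, which is
    reclaimed only when the last reference is deleted."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> stored data
        self.refs = {}     # fingerprint -> reference count

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = data                  # first copy: store the data
        self.refs[fp] = self.refs.get(fp, 0) + 1    # later copies: just a pointer
        return fp

    def delete(self, fp: str) -> None:
        self.refs[fp] -= 1
        if self.refs[fp] == 0:                      # last pointer gone: reclaim
            del self.blocks[fp], self.refs[fp]

store = DedupStore()
a = store.write(b"same payload")
b = store.write(b"same payload")    # deduplicated: only one block stored
assert a == b and len(store.blocks) == 1
```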

The next location out from the processor is the SAN. There are devices that reside in the SAN, between the server and the storage array, and cache data in their own memory to avoid a trip all the way to the array. These devices have their own software and algorithms, and because they sit in the data path they can observe data access patterns and determine which data to keep in cache.
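A minimal sketch of the caching behavior such an in-path device relies on, written here as a simple least-recently-used (LRU) read cache; the names and eviction policy are illustrative assumptions, not any vendor's implementation:

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache standing in for a SAN caching appliance that
    sits between server and array. Hits are served locally; misses go to the
    back-end array and the least recently used entry is evicted when full."""

    def __init__(self, capacity, backend_read):
        self.capacity = capacity
        self.backend_read = backend_read   # function: fetch a block from the array
        self.cache = OrderedDict()         # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # hit: no trip to the array
            return self.cache[block_id]
        data = self.backend_read(block_id)     # miss: traverse to the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

# Demo: the second read of block 7 is served from cache, not the array.
reads = []
cache = ReadCache(capacity=2, backend_read=lambda b: reads.append(b) or f"block-{b}")
cache.read(7)
cache.read(7)
assert reads == [7]
```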

The next location is in the storage array itself. There are different forms of flash in the array that can be accessed as well. All arrays have some level of standard memory as an acceleration point that works in conjunction with the pre-fetch algorithms. The amount of standard memory varies from array to array, and some arrays allow for different configurations: some vendors offer a daughter-card setup using standard memory to expand the amount of system RAM a storage array can have. Another use of flash in the array is an SSD installed as an extension to memory. In this scenario, the SSD is implemented in a protected RAID set dedicated to this function. Again, the sizes of the allowed implementations vary by vendor and model.

The final place flash is used to speed up access to data is as a tier of storage. This can be an all-flash array or a tiered storage array that allows SSDs to be used as endpoint disks. The SSDs can form a tier of their own, where all data destined for a given LUN resides on that tier, or they can be part of a tiered pool of storage where the array moves data from one tier to another based on data access patterns.
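The tier-movement idea can be sketched as a toy access-count policy; the threshold, names, and per-cycle rebalance here are assumptions for illustration, since real arrays use much richer heat maps and scheduled relocation windows:

```python
from collections import Counter

class TieredPool:
    """Toy auto-tiering pool: extents accessed at least `hot_threshold` times
    in a measurement cycle are promoted to the SSD tier at rebalance; the
    rest are demoted to (or stay on) the HDD tier."""

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.hits = Counter()    # extent -> accesses in the current cycle
        self.tier = {}           # extent -> "ssd" or "hdd"

    def access(self, extent):
        self.hits[extent] += 1
        self.tier.setdefault(extent, "hdd")   # new data lands on the HDD tier

    def rebalance(self):
        for extent in self.tier:
            hot = self.hits[extent] >= self.hot_threshold
            self.tier[extent] = "ssd" if hot else "hdd"
        self.hits.clear()                      # start a fresh measurement cycle

pool = TieredPool(hot_threshold=2)
pool.access("db-index"); pool.access("db-index"); pool.access("db-index")
pool.access("cold-archive")
pool.rebalance()
assert pool.tier == {"db-index": "ssd", "cold-archive": "hdd"}
```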

As you can see, there are many things to think about when it comes to flash technology and it can be daunting to undertake. When the time comes to start looking at where flash can help, remember that OnX has the professionals and the process to help you along the way.