10 Tips for Reducing Storage TCO
Relying on big vendors to design and implement your storage systems is often considered a safe approach. However, it also hands those vendors control of your storage systems and budget. New products on the market have reached a level of maturity that lets administrators take back control of their storage systems’ design and significantly reduce costs. This article describes 10 ways to reduce Total Cost of Ownership (TCO) using the tools now available.
1. Use Heterogeneous Storage
Vendor lock-in is a major contributor to the cost of storage systems. Organizations often find themselves dependent on their storage vendor not only to supply the storage systems, but also to design the Storage Area Network (SAN). It is only natural for storage providers to recommend solutions that are profitable for them, and the lack of competition drives prices up.
Many vendors now offer storage virtualization solutions that enable organizations to work seamlessly with heterogeneous storage devices.
In some cases, storage vendors attempt to intimidate customers by threatening to revoke their warranty and/or stop supporting them if they use heterogeneous storage. When the customer insists, however, the vendor will typically offer its own storage virtualization solution instead. One major concern with heterogeneous storage is the interoperability of storage services software such as snapshots and mirroring. It is therefore recommended to select a storage virtualization solution that also supports storage services across heterogeneous storage.
2. Use Multi-Tiered Storage Devices
Every organization runs multiple applications, each with different storage requirements. For example, ERP software, which is at the core of many businesses, may require very fast and reliable storage devices, while email systems may be served by mid-tier storage and development applications by lower-tier storage. The costly solution, often suggested by storage vendors and integrators, is to place all data on high-end storage devices. The outcome is paying high-end prices for applications that do not require high-end storage. In some cases the argument is that “we have spare capacity on the high-end device”; however, when a real need for high-end capacity arises, the extra space is already occupied.
The cost of storage can be reduced simply by checking the real availability and performance requirements of each application and purchasing only what you need. Since storage requirements change over time (the central concern of Information Lifecycle Management, ILM), it is also recommended to have an online volume migration tool that can move volumes from one storage device to another while applications are up and running. Naturally, this software must be able to migrate between storage devices from different vendors.
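As an illustration of what such online migration looks like in practice, here is a minimal sketch that uses Linux LVM’s pvmove command in place of a commercial virtualization layer. The volume group, logical volume, and device names (vg_pool, lv_erp, the *_lun0 devices) are hypothetical.

    # Sketch: online volume migration between tiers, using Linux LVM as one
    # concrete example. A commercial virtualization layer would expose an
    # equivalent "migrate volume" operation through its own interface.
    import subprocess

    def run(cmd):
        """Run a command and fail loudly on a non-zero exit code."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # The mid-tier array LUN joins the same virtual pool (volume group) ...
    run(["vgextend", "vg_pool", "/dev/mapper/midtier_lun0"])

    # ... then the extents of the ERP volume are moved, block by block, from
    # the high-end LUN to the mid-tier LUN while the filesystem on
    # /dev/vg_pool/lv_erp stays mounted and the application keeps running.
    run(["pvmove", "-n", "lv_erp",
         "/dev/mapper/highend_lun0", "/dev/mapper/midtier_lun0"])

    # Once empty, the high-end LUN can be released for workloads that
    # actually need premium capacity.
    run(["vgreduce", "vg_pool", "/dev/mapper/highend_lun0"])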
3. Allocate Storage According to Actual Requirements
Storage allocation is a complex task, so storage administrators typically allocate capacity based on estimates of application growth. This often leads organizations to purchase much more storage than is actually needed at the time of implementation. Over time, some applications require more than was initially allocated for them, while others hold extra capacity that cannot be reclaimed.
Storage virtualization solutions enable the creation of logical “storage pools” that can be expanded dynamically by adding more physical storage, with logical volumes carved out of those pools. To the application, these logical volumes look like any other volume, but they can be easily expanded by the virtualization software. As a result, an organization can buy just the storage it needs, expand volumes on demand, and add physical storage only when required. The cost of storage drops, because the organization buys only what is needed at each point in time and very little storage sits unused.
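The following sketch shows this pool-plus-logical-volume model using Linux LVM thin provisioning as a stand-in for a commercial storage virtualization product. The pool name, volume names, and sizes are hypothetical.

    # Sketch: "allocate on demand" provisioning with LVM thin pools.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a 2 TB pool backed by the physical disks already in vg_pool ...
    run(["lvcreate", "--size", "2T", "--thinpool", "tpool", "vg_pool"])

    # ... and hand the application a 10 TB logical volume. Only the blocks
    # the application actually writes consume space from the pool.
    run(["lvcreate", "--virtualsize", "10T", "--thin", "vg_pool/tpool",
         "-n", "lv_app"])

    # When the pool approaches its physical limit, buy disks then and grow
    # the pool online; the application volume never has to be touched.
    run(["vgextend", "vg_pool", "/dev/mapper/new_shelf_lun0"])
    run(["lvextend", "--size", "+2T", "vg_pool/tpool"])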
4. Implement Central Storage Management
Today, many software applications are used to manage storage. In some cases a single vendor offers several applications, each for a different purpose (e.g., one for snapshots and one for mirroring), not to mention applications from different vendors. Implementing each of these applications requires its own training and expertise, which drives up the cost of maintenance. By selecting storage management software that provides centralized management for all storage devices and services, an organization can significantly reduce maintenance costs.
5. Use Low-Capacity Snapshots to Protect Against Logical Failures
Market research shows that 93% of failures are logical. Logical failures include virus attacks, corrupted file systems, and accidental file deletion by users. Recovering from logical failures using tapes is a long and exhausting process: tapes must be ordered from the remote vault, data extraction is time consuming, and if the data cannot be found or is damaged, the whole process has to be repeated with a different set of tapes.
With low-capacity snapshots, recovery from a logical failure takes only a few minutes: mount a recovery server to a snapshot when data has been lost, or roll back the volume to a snapshot and remount it when the entire volume is damaged.
Low-capacity snapshots consume very little storage space (on average about 5% of the volume size per day). The cost of implementation is therefore minimal, while the savings in recovery time and resources are substantial.
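A quick back-of-the-envelope calculation, using the roughly 5%-per-day figure above, shows why the cost is so low. The volume size and retention window below are hypothetical inputs, not measurements.

    # Compare a week of daily low-capacity snapshots with a week of full copies.
    volume_gb = 1000          # protected volume size
    daily_change = 0.05       # space consumed per daily snapshot (~5%)
    retention_days = 7        # keep one snapshot per day for a week

    snapshot_space = volume_gb * daily_change * retention_days
    full_copy_space = volume_gb * retention_days

    print(f"Low-capacity snapshots: {snapshot_space:.0f} GB")   # ~350 GB
    print(f"Full daily copies:      {full_copy_space:.0f} GB")  # 7000 GB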
6. Instant Volume Replication and Low-Capacity Snapshots for Application Testing
Many organizations develop applications, and in most cases development teams need real data for development and testing. Replicating a volume can take a long time, since the entire data set must be copied. Sometimes several teams need the data for independent tests at the same time, resulting in multiple replicas that consume a substantial amount of storage space.
A few software vendors provide an “instant copy” option, in which the copy is ready for use within seconds and the data is then copied in the background. In addition, there are low-capacity snapshot solutions that allow the same snapshot to be assigned to multiple servers simultaneously, each with independent read and write access.
Using instant physical copies together with such low-capacity snapshots, a replica can be created quickly and snapshots assigned in parallel to multiple servers, each used by a different tester or developer for an independent purpose. This saves developers’ time as well as a significant amount of storage space.
If a test fails, the tester or developer will typically want to rerun it on the same data for debugging. With this solution, a fresh snapshot of the replicated data can simply be assigned.
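As a sketch of this workflow, the snippet below uses LVM writable snapshots to give each team its own independent view of the same replicated data set. The team names, volume names, and snapshot sizes are hypothetical, and in a real SAN the snapshots would be presented to the test servers over the fabric rather than mounted locally.

    # Sketch: one writable snapshot per test team over a replicated volume.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    teams = ["team_a", "team_b", "team_c"]
    for team in teams:
        # Each snapshot tracks only the blocks that team changes.
        run(["lvcreate", "--size", "50G", "--snapshot",
             "--name", f"snap_{team}", "vg_pool/lv_testdata"])

    # A failed test run is simply discarded and replaced with a fresh
    # snapshot of the unchanged replica.
    run(["lvremove", "-y", "vg_pool/snap_team_a"])
    run(["lvcreate", "--size", "50G", "--snapshot",
         "--name", "snap_team_a", "vg_pool/lv_testdata"])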
7. Cost-Effective Disaster Recovery Site
Disaster recovery solutions are usually very expensive. The most straightforward solution requires a mirrored site with the same equipment as the original site, connected via Fibre Channel lines, with its data synchronously mirrored to the remote site.
While synchronous mirroring guarantees no data loss, there are cases in which the mirrored data is unusable because it lacks integrity, and some applications cannot use data unless its integrity is guaranteed. In those cases, and in the many cases where some data loss is acceptable (Recovery Point Objective, RPO > 0), asynchronous mirroring may be used. Asynchronous mirroring allows the use of much cheaper IP lines, which in many cases already exist. Snapshot-based mirroring reduces the required bandwidth (and cost) even further, since it transfers only the changed data, regardless of how many times it was changed during the snapshot period.
Storage at the DR site does not need to be the same as at the original site; the DR site may use cheaper storage and operate at somewhat lower performance.
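The bandwidth saving from shipping only the final state of changed blocks can be illustrated with a rough calculation; all of the numbers below are hypothetical.

    # Blocks rewritten many times within one snapshot period are shipped only once.
    block_kb = 64
    writes_per_hour = 200_000          # total write operations
    distinct_blocks_touched = 40_000   # unique blocks behind those writes
    period_hours = 1                   # snapshot (and transfer) interval

    per_write_gb = writes_per_hour * block_kb / 1024 / 1024
    per_snapshot_gb = distinct_blocks_touched * block_kb / 1024 / 1024

    print(f"Mirror every write:       ~{per_write_gb:.1f} GB/hour")    # ~12.2
    print(f"Ship changed blocks only: ~{per_snapshot_gb:.1f} GB/hour") # ~2.4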
8. Use Snapshot-Based Backup
As the amount of data to be backed up increases, backup becomes more costly. To minimize disruption to production servers, backups are executed at night, unattended. This can lead to unnoticed failures, or alternatively organizations pay extra for a night shift to supervise the process. With growing production requirements and data volumes, backup windows are often no longer long enough to complete the job, which leads to large infrastructure upgrade expenses (e.g., servers, LAN, faster tape drives).
With low-capacity snapshots for backup, there is no disturbance to production servers or the LAN: a dedicated backup server can be mounted to a snapshot and perform the backup at any time. This eliminates the need to upgrade servers or the LAN and removes the need for a night shift to supervise the backups.
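A minimal sketch of one backup cycle is shown below, again using LVM snapshot commands to stand in for the storage system’s own snapshot feature. The volume names, mount point, and archive target are hypothetical; a real deployment would stream to a backup application or tape library instead of a local archive.

    # Sketch: snapshot-based backup cycle, run from the backup host.
    import subprocess, datetime

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    stamp = datetime.date.today().isoformat()

    # 1. Take a point-in-time snapshot; production keeps writing to lv_data.
    run(["lvcreate", "--size", "20G", "--snapshot",
         "--name", "lv_data_bkp", "vg_pool/lv_data"])

    # 2. Mount the snapshot read-only and copy it out, entirely off the
    #    production servers and the production LAN path.
    run(["mount", "-o", "ro", "/dev/vg_pool/lv_data_bkp", "/mnt/backup_view"])
    run(["tar", "-czf", f"/backup/lv_data-{stamp}.tar.gz", "/mnt/backup_view"])

    # 3. Clean up; the snapshot is no longer needed once the copy is safe.
    run(["umount", "/mnt/backup_view"])
    run(["lvremove", "-y", "vg_pool/lv_data_bkp"])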
9. Use of a Disaster Recovery Site for Backups
When a disaster recovery site exists, it can also be used for backups. In this case there is no need for a dedicated backup server, since the DR site already has servers standing by for a failover that can be used for other tasks. Backup tapes are frequently shipped to a remote vault; the DR site can serve as that remote vault, eliminating the shipping costs.
10. Use of a Disaster Recovery Site for Application Development and Testing
Disaster recovery sites typically hold a very current mirror of the data, a large amount of equipment (e.g., servers, terminals, networks, printers, desks) and people who are simply “waiting” for a disaster to occur. Development and testing teams can use this equipment for their work, saving the cost of purchasing duplicate equipment at the main site.