Bernie Spang

by Joshua Whitney Allen

Solving the data storage dilemma with software-defined storage

A Q&A with Bernie Spang, Director of Marketing, IBM, Part I

Published May 05, 2015

As the amount of data continues to grow exponentially from mobile, social, and the Internet of Things, companies must also expand their storage options to keep that data secure and accessible. In response to this demand for storage, at once fundamental and urgent, IBM earlier this year announced IBM Spectrum Storage, a software-defined storage offering intended to put the massive amount of data associated with a hybrid cloud at the user’s fingertips. The company has also pledged to spend more than $1 billion over the next five years to diversify its storage portfolio.

Bernie Spang, Director of Marketing at IBM, is a part of the team responsible for Spectrum Storage. “Everybody is talking about how data pools have grown into data lakes,” he says. “The reality is that it’s oceans of data, and the investment in software-defined storage and flash storage is really focused on addressing the challenges and changing the data economics.”

Insights Magazine sat down with Spang to discuss software-defined storage, flash storage, and how IBM clients’ storage needs have shaped the company’s preparation of the Spectrum Storage portfolio.

Insights Magazine: Can you describe the two types of storage options related to IBM’s big Spectrum Storage announcement in February: software-defined storage and flash storage?

Bernie Spang: With software-defined infrastructure, there are three words to remember: abstraction, automation, and optimization. Software-defined is about abstracting the intelligence that traditionally has been available only [when] integrated with servers or storage systems, and delivering it as software that can be used across heterogeneous, diverse hardware environments. When we do that [by] implementing open industry standards, you then enable a higher level of automation through those interfaces, through that flexible software. Then you can implement analytics-driven optimization of the placement of the application workflows, tying together the resources—the compute, storage, and networking resources—so you get optimal performance at the highest level of efficiency.
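
To make those three words concrete, here is a minimal sketch of what analytics-driven placement can look like; the pool names, metrics, and thresholds are hypothetical illustrations, not IBM Spectrum Storage interfaces:

    # Illustrative only: a toy "analytics-driven placement" rule.
    # Pool names, metrics, and thresholds are made up for this example.
    POOLS = {
        "flash": {"latency_ms": 0.2,  "cost_per_gb": 2.00, "free_gb": 500},
        "disk":  {"latency_ms": 8.0,  "cost_per_gb": 0.30, "free_gb": 20000},
        "cloud": {"latency_ms": 40.0, "cost_per_gb": 0.05, "free_gb": 1000000},
    }

    def place(workload):
        """Pick the cheapest pool that still meets the workload's latency and size needs."""
        candidates = [
            (meta["cost_per_gb"], name)
            for name, meta in POOLS.items()
            if meta["latency_ms"] <= workload["max_latency_ms"]
            and meta["free_gb"] >= workload["size_gb"]
        ]
        if not candidates:
            raise RuntimeError("no pool satisfies the workload's requirements")
        return min(candidates)[1]

    print(place({"size_gb": 200,  "max_latency_ms": 1.0}))   # -> flash
    print(place({"size_gb": 5000, "max_latency_ms": 50.0}))  # -> cloud

A real software-defined layer applies this kind of rule continuously and across many more dimensions, but the shape of the decision is the same.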

IM: What is notable about flash storage?

Spang: Moving [flash] to Tier 1 storage deployments [enables] you to deliver cost-effective performance, a smaller footprint, power savings, and space savings—all while delivering a significant performance boost not only to the new analytics workflows but also to the traditional workflows. As we get more of the systems of engagement throwing off more data, and more clients wanting instantaneous, real-time access to information and doing business through mobile and other devices, it drives greater demand pressure on the back-end systems.

For software-defined storage, we have clients who are consolidating many different kinds of workloads like high performance computing, big data analytics, and the range of new types of workloads: Spark and Cassandra and Mongo and all the application frameworks du jour. [This type of] resource pool, optimized continuously in a dynamic way, gives you a level of agility and efficiency that traditional approaches haven’t been able to deliver.

IM: How does this conversation begin with a client? Say it’s someone who doesn’t have a prior relationship with IBM.

Spang: It starts with listening to what their challenges are and probing about their application workloads; where their problems are now with respect to performance bottlenecks or never-ending cost growth because of storage growth or the compute; and then helping them understand how to apply this new generation of technology to their particular environment. Sometimes it’s a discussion about the storage environment and wanting to think in terms of a new generation storage infrastructure, and that’s why we’re really focused on delivering storage solutions. It’s not about storage software or storage hardware. It’s about integrated, highly optimized storage solutions.

IM: How do you approach all this in a tactical sense?

Spang: The way to think of it relative to software-defined infrastructure is [similar to how] virtualization came about to rein in server sprawl—the growth of under-utilized servers that were being deployed for each application. All these different workloads run on clusters of compute and storage. If you set up those [clusters] as silos, they are inefficient and you’re over-provisioning. If you don’t have it as a shared resource pool, managed by the software-defined infrastructure, you have no way to dynamically shift resources from workload to workload. You have to optimize the environment.
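
As a back-of-the-envelope illustration of the silo problem, compare the same 300 units of capacity carved into silos versus managed as one pool during a spike; the workload names and numbers are invented for this example:

    # Illustrative only: why a shared pool beats static silos during a peak.
    silo_capacity = {"hpc": 100, "analytics": 100, "mobile": 100}   # provisioned per silo
    demand        = {"hpc": 150, "analytics": 30,  "mobile": 60}    # actual demand right now

    # Siloed: each workload is capped by its own silo, even while neighbors sit idle.
    served_siloed = {w: min(demand[w], silo_capacity[w]) for w in demand}

    # Shared pool: the same 300 units are one pool, so spare capacity absorbs the spike.
    pool = sum(silo_capacity.values())
    served_shared = dict(demand) if sum(demand.values()) <= pool else None

    print(served_siloed)   # {'hpc': 100, 'analytics': 30, 'mobile': 60} -> hpc starved
    print(served_shared)   # {'hpc': 150, 'analytics': 30, 'mobile': 60} -> demand met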

IM: IBM is heavily promoting its cloud offerings. Are you obligated to prioritize the cloud option here?

Spang: No.

IM: But looking at the messaging coming out of IBM in the context of the market, it’s all about the hybrid cloud.

Spang: It’s definitely about the hybrid cloud. The ‘hybrid’ word is critically important there, because ‘hybrid’ means you’re still running on-premises. One of the things that is different about our software-defined approach is that we make the intelligence available as software that you can deploy on your existing or commodity hardware, and we make it available as a service or services in the IBM cloud and through a growing community of cloud provider partners. We still integrate that software with hardware in highly optimized and integrated systems—for instance, the new FlashSystem V9000 that we recently announced. It gives you a flash-based, high-performance storage environment—a level of virtualization that then can be the front end to an existing heterogeneous SAN environment. It could be IBM storage or non-IBM storage.

There are definitely workloads and data that make sense to move into the cloud for a level of agility and cost savings. For instance, our Platform Computing software and our Spectrum Scale (formerly GPFS Elastic Storage) have been available in the IBM cloud for several months now. We have one client who has made the decision to run all of their workloads—their high performance computing and big data analytics workloads—in the IBM cloud in this environment. We have others who are using the cloud for development and test environments, to test new ideas and try to build out new solutions without having to make a capital purchase and manage the infrastructure on premises. They intend, once they deploy in production, to do that on premises. We have others that are truly looking at the hybrid cloud to be able to burst out during peak periods, and by offering the same compute and storage environment, a high-performance environment on both sides, we’re giving clients the flexibility to move the workload where it makes the most sense.
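
A minimal sketch of that bursting pattern, under the assumption of a simple utilization threshold (the capacity figure, threshold, and job names are hypothetical):

    # Illustrative only: a toy cloud-bursting rule for peak periods.
    ON_PREM_CAPACITY = 1000      # e.g., concurrently runnable job slots on premises
    BURST_THRESHOLD  = 0.85      # burst once the on-prem pool is 85% busy

    def dispatch(job, on_prem_busy):
        """Run work on premises by default; burst to the cloud during peaks."""
        utilization = on_prem_busy / ON_PREM_CAPACITY
        if utilization < BURST_THRESHOLD:
            return "on-prem"
        return "cloud"           # same compute and storage environment on both sides

    print(dispatch({"name": "nightly-analytics"}, on_prem_busy=400))   # -> on-prem
    print(dispatch({"name": "quarter-end-close"}, on_prem_busy=950))   # -> cloud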

Click here to read part 2 of this Q&A with Bernie Spang.