Larger IT budgets, more data: Is software-defined storage the answer?

A Q&A with Bernie Spang, Director of Marketing, IBM, Part 2

by Joshua Whitney Allen

Published May 12, 2015


Bernie Spang, Director of Marketing at IBM, is part of the team responsible for IBM’s Spectrum Storage offering. Clients are investing in technology with more confidence, and the flood of data from newly connected devices and services is driving companies to rethink their storage plans.

“The Internet of Things means we have more ways to capture data than ever before,” says Spang. “The new generation of analytics capabilities means we actually have a reason to capture a lot of this stuff, even beyond what the regulations say, because we want to be able to get new intelligence and insights out of it.”

Insights Magazine sat down with Spang to discuss software-defined storage, rising data volumes, and how IBM clients’ storage needs have shaped Big Blue’s preparation of the Spectrum Storage portfolio.

Insights Magazine: What’s the level of sophistication that you see in your clients’ understanding of storage and security?

Bernie Spang: They’re educating us as to the reality of their environment and where they want to go. In fact, the recent addition of a set of services that aren’t traditionally thought of as high-performance computing or big data analytics (MongoDB, Cassandra, Spark, and similar application frameworks) really came from the clients we’re working with. They say they love moving their workloads to this highly efficient grid, and they want to move more of their workloads there. They don’t want to have to spin off another cluster to build these new environments for whatever the new application framework is.

For Spectrum Storage, clients tell us we have all the pieces. We have competitive technology in these areas. Let’s bring it together, let’s simplify it and streamline it. You see the commitment to that: a more than $1 billion investment over the next five years, and the rebranding and expansion of the portfolio.

With our clients, we’re doing the storage work across those various pieces. A good amount of it is consultative: we do design workshops, architectural workshops, deployment, and skills transfer.

IM: We’ve seen in surveys of our readership a commitment to spend more money on IT this year. Is there a feeling that, with all this data coming, let’s not lose it, and let’s spend a little more money to capture it?

Spang: The Internet of Things means we have more ways to capture data than ever before. The new generation of analytics capabilities means we actually have a reason to capture a lot of this stuff, even beyond what the regulations say, because we want to be able to get new intelligence and insights out of it.

Plus, [there is also] the regulatory compliance need for long-term storage. I’ve spent a lot of time talking with clients recently who have petabytes of data off on tape somewhere. The metadata for that data was who owned it and when it was backed up to their archive, [which is] not enough intelligence for them to know what’s in there. Do they need to keep [the data] to comply with regulation? Or should it really have been deleted to be compliant? They don’t know. Part of this is just vanilla data lifecycle management, which doesn’t sound so sexy to us sometimes, but I see it increasingly.

Clients are saying: we have all this new capability, we have enterprise content management to do workflows, and we can do metadata tagging and all this good stuff. Now we’ve got to take that data out, add that intelligence to it, and move it to this new generation of workloads.

Our clients are seeing that story here. They’re starting to see that if that [capability] had existed 15 to 20 years ago, they’d actually have that information. They’re starting to think about transitioning to this kind of workload going forward.

IM: We hear a lot about the industrial Internet and concepts where utilities, for example, can monitor performance of a turbine or a plant. What’s the storage need in that context?

Spang: I recently met with a partner who has an Internet of Things management platform that runs on SoftLayer on the IBM Cloud. I was talking to them about our hybrid storage solutions and asking whether they envision having big data challenges and long-term data storage needs. And they do. In fact, they’re wrestling with this now, and what I described from our roadmap was very exciting to them because it ties in on the cloud side.

IM: To extend the data ocean analogy, you just threw them a life ring.

Spang: That’s exactly what you say: we need to capture and retain all this data.

IM: And they have autonomy in managing their own data? This isn’t a function that IBM performs? You’re giving them the capability to make the right decisions.

Spang: Correct, and in a way that enables them to set policies that then can trigger automated movement amongst the storage tiers on premises and out to the cloud. Stay tuned for more fun later this year.
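
Spang doesn’t detail the mechanics here, but the pattern he describes, policies evaluated against file metadata that trigger automated movement between storage tiers, can be sketched briefly. Below is a minimal sketch in Python; the FileRecord type, the tier names, and the idle-time thresholds are all invented for illustration and are not an IBM Spectrum Storage API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical file record; a real system would read this from
# filesystem or archive metadata.
@dataclass
class FileRecord:
    path: str
    tier: str            # e.g. "disk", "tape", "cloud" (invented names)
    last_access: datetime

# Illustrative policies: files idle 90+ days move from disk to tape;
# files idle a year or more move from tape out to cloud storage.
POLICIES = [
    {"from_tier": "disk", "to_tier": "tape", "idle_days": 90},
    {"from_tier": "tape", "to_tier": "cloud", "idle_days": 365},
]

def apply_policies(files: list[FileRecord]) -> list[tuple[str, str]]:
    """Return the (path, target_tier) moves implied by the policies."""
    now = datetime.now()
    moves = []
    for f in files:
        for rule in POLICIES:
            idle = now - f.last_access
            if f.tier == rule["from_tier"] and idle > timedelta(days=rule["idle_days"]):
                moves.append((f.path, rule["to_tier"]))
                break  # apply at most one rule per file per pass
    return moves

# Example: one cold file and one recently used file, both on disk.
files = [
    FileRecord("/data/logs/2014.tar", "disk", datetime.now() - timedelta(days=400)),
    FileRecord("/data/reports/q1.csv", "disk", datetime.now() - timedelta(days=5)),
]
print(apply_policies(files))  # [('/data/logs/2014.tar', 'tape')]
```

In a production system, a policy engine like this would run periodically, and the resulting moves would be executed by the storage layer rather than printed; the point of the sketch is only the rule-per-tier structure Spang alludes to.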

Read Part 1 of this Q&A with Bernie Spang.
