Sunday, March 25, 2012

Why modern computing will kill traditional storage

"This company is dead. ... You know why? Fiber optics. New technologies. Obsolescence. We're dead alright. We're just not broke. And you know the surest way to go broke? Keep getting an increasing share of a shrinking market. Down the tubes. Slow but sure. You know, at one time there must've been dozens of companies making buggy whips. And I'll bet the last company around was the one that made the best goddamn buggy whip you ever saw. Now how would you have liked to have been a stockholder in that company?"

-Danny DeVito (as Lawrence Garfield) in Other People's Money (1991)

At the end of the day, IT boils down to three things: computing, storage and networking. Major revolutions in IT (e.g., the move to client/server, the ongoing move to cloud) require major changes in all three areas.

It should be no surprise that computing has changed dramatically over the past 10 years. Just as no one would invest in a buggy whip company, you would be hard-pressed to find anyone interested in investing in a company that manufactured big iron computing systems (mainframes, supercomputers, and the like). This is obviously not because the need for reliable, powerful, centralized computing power has decreased. Rather, people have realized that computing became more robust, more reliable, more manageable and more economical as the following transformations occurred:

- proprietary software systems were replaced by open technologies like Linux and the rest of the LAMP stack;

- dedicated, single-use systems were replaced by virtualized architectures, which let multiple apps run on the same computer or let a single application run on multiple computers; and

- large, monolithic, scale-up architectures were replaced by scale-out architectures, which let you build power by combining large numbers of redundant, small elements.
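The scale-out idea in that last bullet can be sketched in a few lines. This is a minimal illustration, not anything from the article: objects are placed on small, redundant nodes by hashing their keys, so capacity grows by adding nodes rather than by buying a bigger box. The node names, replica count and function are all hypothetical.

```python
import hashlib

# Hypothetical pool of small commodity nodes; add more to grow capacity.
NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2  # each object lives on two nodes for redundancy

def place(key: str, nodes=NODES, replicas=REPLICAS):
    """Pick `replicas` distinct nodes for an object by hashing its key."""
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# Deterministic: the same key always maps to the same pair of nodes,
# and no single big, monolithic box holds everything.
replicas = place("invoice-2012-03.pdf")
```

Real scale-out systems refine this with consistent hashing so that adding a node reshuffles only a small fraction of the data, but the principle is the same.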


In other words, computing is now treated as a scale-out, virtualized, commoditized and centrally managed pool. An organization can own its own pool (a private cloud) or rent space in a pool someone else owns (a public cloud). In either case, the pool approach works and people are diving in.

Of course, if computing is going this way, storage and networking need to go this way as well. From an architectural standpoint, storage needs to support the new computing paradigm. It doesn't do you much good to move your applications around dynamically to take advantage of any spare CPU cycle if the application data is still locked inside an expensive, inflexible box. It's not surprising that many are now citing storage as the Achilles' heel of true data center virtualization. The situation gets even worse when one considers the challenges of deploying hybrid clouds, where the ease of moving virtual machines between data centers runs smack against the challenges of moving terabytes and petabytes of application data economically and efficiently between disparate data centers over (relatively) low-bandwidth connections.

Storage needs to do much more than just support the new computing paradigm. Inevitably, storage must begin to LOOK much more like computing: scale-out, open source, commoditized, virtualized and present in the cloud.

However, the cloud movement will demand much more fundamental changes. Not only must storage be delivered in increments (i.e. scale-out), it also must be delivered in a way that untethers the fundamental storage functions from any particular hardware or from a particular vendor. You can't define what storage hardware will be available in the cloud. Instead, storage must be treated as a software problem -- with a software solution.
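To make the "software problem, software solution" point concrete, here is a minimal sketch (all class and method names are hypothetical, invented for this illustration) of storage functions defined purely in software, with whatever hardware happens to be available plugged in underneath as interchangeable backends.

```python
class Backend:
    """Stand-in for whatever hardware the cloud provides: a local disk,
    a commodity node, an object-store bucket -- the software above
    doesn't care which."""
    def __init__(self):
        self._blocks = {}

    def write(self, key, data):
        self._blocks[key] = data

    def read(self, key):
        return self._blocks[key]


class SoftwareStorage:
    """Storage logic untethered from any particular hardware or vendor:
    put/get are defined once, in software, over any mix of backends."""
    def __init__(self, *backends):
        self.backends = backends

    def put(self, key, data):
        for b in self.backends:   # replicate across every backend
            b.write(key, data)

    def get(self, key):
        return self.backends[0].read(key)


# Usage: swap or mix backends freely; the storage behavior is unchanged.
store = SoftwareStorage(Backend(), Backend())
store.put("report.txt", b"quarterly numbers")
```

The point of the sketch is the separation: the replication policy lives entirely in `SoftwareStorage`, so no backend (and no vendor) is load-bearing.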

Source:
http://blogs.computerworld.com/18372/why_modern_computing_will_kill_traditional_storage
