The Next Gen Digital Transformation: Cloud-Native Data Platforms

The software world is rapidly transitioning to agile development built on microservices and cloud-native architectures, and there is no turning back for companies that want to remain competitive in this digital transformation.

Evolution in the application space has a significant impact on the way we build and manage infrastructure. Cloud-native applications in particular require shared cloud-native data services.

The Old Apps and Storage

To better understand this new approach, let’s take a quick look back.

With legacy applications, servers had disk volumes that held the application data. As markets matured and services changed, things shifted to clouds and infrastructure-as-a-service (IaaS), which meant virtualized servers (virtual machines, or VMs) were mapped to virtual disks (vDisks) in a 1:1 relationship. Storage vendors took pools of disks from one or more nodes, added redundancy, and provisioned them as virtual logical unit numbers (LUNs).

Then came hyper-converged infrastructure. This technology wave enhanced the process by pooling disks from multiple nodes. Real security wasn't required; instead, the solution relied on isolation to ensure that only the relevant server talked to its designated LUN, a practice also known as zoning.


The New Apps and Data

As the evolution continues, the new phase is platform-as-a-service (PaaS). Rather than managing virtual infrastructure such as virtual machines, teams now manage apps; rather than managing virtual disks, they manage data. Because the applications are elastic and distributed, they don't store any data or state internally. Instead, they use a set of persistent, shared data services to store data objects, streams, logs, and records.
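The stateless pattern described above can be sketched in a few lines. This is a minimal illustration, not a real product's API: `SharedDataService` is a hypothetical in-memory stand-in for an external object store, stream, or NoSQL service, and the handler keeps no local state, so any replica of the app could serve any request.

```python
# Sketch of a stateless app using a shared data service.
# "SharedDataService" is a hypothetical stand-in, not a real library.

class SharedDataService:
    """In-memory stand-in for a shared, persistent data service."""
    def __init__(self):
        self._objects = {}   # object store: key -> value
        self._streams = {}   # append-only streams: name -> list of records

    def put_object(self, key, value):
        self._objects[key] = value

    def get_object(self, key):
        return self._objects.get(key)

    def append_record(self, stream, record):
        self._streams.setdefault(stream, []).append(record)


def handle_request(service, user_id, payload):
    """Stateless handler: all state lives in the shared service,
    never inside the application process itself."""
    service.put_object(f"user/{user_id}/latest", payload)
    service.append_record("audit-log", {"user": user_id, "payload": payload})
    return service.get_object(f"user/{user_id}/latest")
```

Because every read and write goes through the shared service, the application tier can scale out or be rescheduled freely, which is exactly what makes it "elastic and distributed."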

These new data services and NoSQL technologies do not require legacy storage stacks, since resiliency, compression, and data layout are built into the data services themselves. This means that traditional features such as redundant arrays of independent disks (RAID) and deduplication are redundant and, in some cases, potentially harmful.

This new model must address the data sharing challenge since many apps access the same data – and from far-flung places, such as mobile devices or remote sensors. In addressing this new reality, here are some important aspects for consideration:


• Security must evolve from isolation to tighter management of who is allowed to access what and how; it must be able to verify the identity of remote devices and containerized applications; and it must automatically detect breaches.
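The shift from isolation to identity-based access control can be sketched as a simple policy check. This is an illustrative assumption, not any vendor's mechanism: the policy is a set of allowed (identity, action, resource) triples, and the breach heuristic (flagging repeated denials) is a deliberately crude placeholder for real anomaly detection.

```python
# Hedged sketch: identity-based authorization replacing zoning/isolation.
# The policy format and the denial-count heuristic are assumptions
# made for illustration only.

DENY_THRESHOLD = 3  # assumed number of denials before raising an alert

class AccessController:
    def __init__(self, policy):
        # policy: set of (identity, action, resource) triples that are allowed
        self._policy = policy
        self._denials = {}

    def authorize(self, identity, action, resource):
        """Return True if this identity may perform this action."""
        allowed = (identity, action, resource) in self._policy
        if not allowed:
            count = self._denials.get(identity, 0) + 1
            self._denials[identity] = count
            if count >= DENY_THRESHOLD:
                self.alert(identity)
        return allowed

    def alert(self, identity):
        # Stand-in for automated breach detection/notification.
        print(f"possible breach: repeated denials for {identity}")
```

The key contrast with zoning is that the decision is made per identity and per request, not per network path, so the same check works for a container, a mobile device, or a remote sensor.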


