The Next Gen Digital Transformation: Cloud-Native Data Platforms


The software world is rapidly transitioning to agile development built on microservices and cloud-native architectures, and for companies that want to remain competitive in the digital transformation, there is no turning back.

Evolution in the application space has a significant impact on the way we build and manage infrastructure. Cloud-native applications in particular require shared cloud-native data services.

The Old Apps and Storage

To better understand this new approach, let’s take a quick look back.

With legacy applications, servers had disk volumes that held the application data. As the market matured and services changed, workloads shifted to clouds and infrastructure-as-a-service (IaaS): virtualized servers (virtual machines, or VMs) were mapped to virtual disks (vDisks) in a 1:1 relationship. Storage vendors took pools of disks from one or more nodes, added redundancy, and provisioned them as virtual logical unit numbers (LUNs).

Then came hyper-converged infrastructure, a wave that refined the model by pooling disks from multiple nodes. Real security wasn't required; instead, these solutions relied on isolation, a practice known as zoning, to ensure that only the relevant server talked to its designated LUN.


The New Apps and Data

As the evolution continues, the next phase is platform-as-a-service (PaaS). Rather than managing virtual infrastructure such as virtual machines, we now manage apps; rather than managing virtual disks, we now manage data. Because the applications are elastic and distributed, they store no data or state internally; instead, they use a set of persistent, shared data services to store data objects, streams, logs, and records.
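The stateless pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular platform's API: `SharedKVStore` is a hypothetical in-memory stand-in for a real shared data service (an object store, stream, or NoSQL database), and the handler keeps no state of its own.

```python
class SharedKVStore:
    """Stand-in for a shared, persistent data service.

    In a real deployment this would be an object store, stream, or
    NoSQL database reachable by every application replica.
    """

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def handle_request(store, user_id, event):
    """A stateless handler: any replica can serve any request,
    because all state lives in the shared store, not in the process."""
    history = store.get(user_id) or []
    history.append(event)
    store.put(user_id, history)
    return len(history)


store = SharedKVStore()
handle_request(store, "user-1", "login")
count = handle_request(store, "user-1", "click")
print(count)  # 2 — state survived across calls because it lives in the store
```

Because the handler holds nothing between calls, replicas can be added or removed freely, which is exactly what makes these applications elastic.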

These new data services and NoSQL technologies do not require legacy storage stacks, since resiliency, compression, and data layout are built into the data services themselves. This means that traditional features such as redundant arrays of independent disks (RAID) and deduplication are unnecessary and, in some cases, potentially harmful.

This new model must address the data sharing challenge since many apps access the same data – and from far-flung places, such as mobile devices or remote sensors. In addressing this new reality, here are some important aspects for consideration:


• Security must evolve from isolation to tighter management of who is allowed to access what, and how; it must be able to verify the identity of remote devices and containerized applications; and it must include the means to detect breaches automatically.
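The shift from isolation to identity-based access control can be sketched as a simple policy check. Everything here is illustrative rather than a specific product's API: a request carries an identity, a policy table says who may access what and how, and denied requests are logged as raw material for automated breach detection.

```python
# (identity, resource) -> set of permitted operations.
# Illustrative policy: a sensor may write its stream; an analytics
# app may only read it. Anything not granted is denied.
ACL = {
    ("sensor-42", "telemetry-stream"): {"write"},
    ("analytics-app", "telemetry-stream"): {"read"},
}


def is_allowed(identity, resource, operation):
    """Return True only if the policy explicitly grants the operation."""
    return operation in ACL.get((identity, resource), set())


def audit(identity, resource, operation, allowed):
    """Log denied requests; repeated denials from one identity
    could feed an automated breach-detection pipeline."""
    if not allowed:
        print(f"DENY: {identity} attempted {operation} on {resource}")


allowed = is_allowed("sensor-42", "telemetry-stream", "read")
audit("sensor-42", "telemetry-stream", "read", allowed)
print(allowed)  # False: the sensor may write but not read
```

The default-deny stance is the key design choice: unlike zoning, where anything inside the zone is trusted, access here exists only where the policy grants it.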

 



