As organizations gather and manage ever-larger volumes of big data, here are 10 best practices for protecting all of that information from hackers and data breaches.
Apache Hadoop, the distributed storage and processing framework, is increasingly popular for storing massive amounts of data. By default, Hadoop runs in non-secure mode, with no authentication required. When service-level authentication is turned on, Hadoop end-users must be authenticated by Kerberos – the widely used computer network authentication protocol.
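As an illustration, switching Hadoop from its default "simple" mode to Kerberos is typically a matter of setting two properties in core-site.xml (a sketch; exact steps vary by distribution, and you still need keytabs and a KDC configured for your realm):

```xml
<!-- core-site.xml: switch Hadoop from the default "simple" auth to Kerberos -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>  <!-- default is "simple" (no authentication) -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>  <!-- enable service-level authorization checks -->
</property>
```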
The major Hadoop distribution providers – including Cloudera, Hortonworks and MapR – also offer various security solutions that support authentication, authorization, encryption and more.
Relational databases and SQL-oriented systems still manage the majority of enterprise data, according to a recent survey from Dell and Unisphere. To safeguard a relational database, focus on five areas of breach prevention – authentication and authorization; database firewalls; encryption; data redaction and masking; and patch management – according to Layer Seven Security. The firm also advises making sure your organization understands four areas of breach detection: data discovery and classification; privilege analysis; configuration management; and logging and auditing.
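Of those prevention areas, data redaction and masking is the easiest to illustrate. The sketch below uses Python's built-in sqlite3 module with a hypothetical customers table, masking card numbers at query time so application code never sees the full value:

```python
import sqlite3

# Throwaway demo database with a hypothetical table of sensitive card numbers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card_number TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', '4111111111111111')")

# Dynamic data masking: expose only the last four digits to the application.
masked = conn.execute(
    "SELECT name, '****-****-****-' || substr(card_number, -4) "
    "FROM customers"
).fetchall()
print(masked)  # [('Alice', '****-****-****-1111')]
```

Commercial databases offer masking as a server-side feature (so the policy is enforced for every client), but the principle is the same: the redaction happens before data leaves the database.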
In-memory databases store data entirely in main memory, which can be an ideal approach for data-intensive applications such as analytics, social networking and e-commerce systems.
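For a minimal illustration of the concept, SQLite can run entirely in RAM (a sketch only; production in-memory systems such as SAP HANA or Redis are architected quite differently):

```python
import sqlite3

# ":memory:" creates a database that lives only in RAM -- nothing is written
# to disk, and the data vanishes when the connection closes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user TEXT, action TEXT)")
db.executemany("INSERT INTO events VALUES (?, ?)",
               [("alice", "login"), ("alice", "purchase"), ("bob", "login")])

# Analytics-style aggregation served straight from memory.
counts = db.execute(
    "SELECT action, COUNT(*) FROM events GROUP BY action ORDER BY action"
).fetchall()
print(counts)  # [('login', 2), ('purchase', 1)]
```

The speed comes from skipping disk I/O entirely, which is also why durability and security have to be designed in separately.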
Generally speaking, in-memory databases have built-in security features, but the bigger concerns involve IT architecture. For instance, a Payment Card Industry Data Security Standard (PCI DSS) best practice calls for application and database services to run on separate servers located in independent network zones, notes Layer Seven Security. However, some in-memory databases ship with built-in application and web servers – meaning the application and database tiers share the same hardware, at odds with that separation principle.