Data protection is always a top priority for business leaders and consumers alike.
Beyond regulated enterprise organisations alone, the recent implementation of GDPR and the extensive media coverage of major data breaches have made organisations more mindful of their responsibility to ensure data protection. Despite the numerous benefits of cloud usage, many are reluctant to migrate to the cloud, as they feel storing data off-premises robs them of the control needed to ensure its security, exposing their organisation to hefty regulatory fines, job losses and an increased risk of substantial reputational damage.
When we look at CSPs' Shared Responsibility Models, it's important to understand that data protection is a mutual effort, with the storage repositories (the CSP's side) on the one hand and the actual data (the customer's side) on the other.
The guiding Principle is:
Data must be adequately protected whilst being processed, to guard against leakage, tampering and eavesdropping, in line with the data classification and its intended use
Here are a few high-level key points to help us improve data protection, in the spirit of the Principle:
Encryption & Tokenisation:
At rest: Enable across all data repositories, with strong encryption methods (e.g. AES, RSA), along with Access Control policies (a minimal sketch follows this list)
In transit: Enable across all traffic (both internal and external), with approved security protocols (e.g. TLS v1.2/v1.3, SNI enabled)
Use Customer-Managed Keys (for both Master and Data keys) where possible, across all data classification levels
Rotate keys often (based on the use-case and according to the relevant security Policy)
Define and use Key Policies and a Key Lifecycle process
Keys must be stored securely in an approved location (e.g. an HSM), with Quorum Authentication/'Split Knowledge' enabled (based on the use-case)
Secure metadata across all tiers and resources where possible, as such protection is usually not provided by the CSPs
Tokenise the parts of the data which are highly sensitive or have a specific regulatory compliance requirement (a simple sketch also follows this list)
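To make the at-rest points concrete, here is a minimal boto3 sketch, assuming AWS as the CSP: it creates a Customer-Managed Key, enables automatic rotation, and enforces SSE-KMS default encryption on a bucket. The bucket name is hypothetical, and the rotation cadence should ultimately follow your security policy.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a Customer-Managed Key (CMK) for data-at-rest encryption.
key = kms.create_key(Description="CMK for data-at-rest encryption")
key_id = key["KeyMetadata"]["KeyId"]

# Enable automatic key rotation in KMS.
kms.enable_key_rotation(KeyId=key_id)

# Enforce default encryption (SSE-KMS with our CMK) on an example bucket.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```

The same pattern applies to other repositories (EBS, RDS, etc.), each of which accepts a CMK at creation time.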
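And a deliberately simplified tokenisation sketch, showing the shape of the technique rather than a production design; the in-memory vault stands in for a hardened token-vault service, and all names here are illustrative:

```python
import secrets

# Illustrative in-memory vault; in practice this mapping would live in a
# hardened, access-controlled token-vault service.
_token_vault: dict[str, str] = {}

def tokenise(value: str) -> str:
    """Replace a sensitive value with a random, meaningless token."""
    token = "tok_" + secrets.token_urlsafe(16)
    _token_vault[token] = value  # the mapping exists only inside the vault
    return token

def detokenise(token: str) -> str:
    """Resolve a token back to the original value (authorised callers only)."""
    return _token_vault[token]

# The record stored downstream never contains the raw sensitive value.
record = {"name": "Jane Doe", "card_number": tokenise("4111111111111111")}
```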
Data Leakage/Loss Prevention:
Encrypt (and potentially tokenise) all data repositories across Object, Block and File storage, and any cloud service where data is stored and for which tokenisation/encryption is available
Monitor all data repositories through the use of CSP monitoring services and an on-prem/3rd-party SIEM: for data at rest, deploy where the data resides; for data in transit, deploy on the network perimeter; for data in use, deploy on users' clients/workstations; for the mid-stage between data in use and data in transit, deploy on the App server/tier
Protect all data repositories via local access policies (on both the network and data-store levels), with appropriate authentication methods
Classify data across all repositories, to determine the appropriate security measures (e.g. Amazon Macie for S3, Azure Information Protection; a sketch follows this list)
Adopt a ‘Swim-lane isolation’ methodology where possible, for differentiating access
Frequently validate all egress traffic flows across all environments (Security-zone modelling) and enable detection capabilities for unauthorised traffic
Disable where possible, and otherwise restrict (via IAM), ‘Sharing’ or egress functionality that lets an internal user send data to unauthorised external locations (e.g. disable the option to send S3/Blob objects to a 3rd-party account over a public endpoint; sketched after this list)
Frequently map out the active connections and methods in use (e.g. Peering, API connections, VPN, Cross-Account, Public IP, etc.) across all environments, to eliminate data-leakage scenarios
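As referenced above, here is a minimal boto3 sketch of restricting egress on a repository, again assuming AWS and a hypothetical bucket: it blocks all public access paths and denies any request not made over TLS.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-bucket"  # hypothetical bucket name

# Block every public access path for the bucket (ACLs and policies alike).
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Deny any request that is not made over TLS (aws:SecureTransport).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```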
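And a sketch of kicking off classification with Amazon Macie, assuming AWS; the account ID and bucket name are placeholders:

```python
import boto3

macie = boto3.client("macie2")

# Enable Amazon Macie for the account (skip this call if already enabled).
macie.enable_macie(status="ENABLED")

# One-time classification job over a hypothetical bucket, to discover
# sensitive data (PII, credentials, etc.) and inform classification levels.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="classify-example-data-bucket",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["example-data-bucket"]}
        ]
    },
)
```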
Data analytics as an enabler of data protection: it's all about "spinning the data". We need to be able to look into our environment and pull metrics on a specific location, person, etc., as everyone and everything is an asset or a data object which needs to be 'measured'. This allows us better control over data and its protection, amongst other business-logic benefits outside security
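As a minimal illustration of "spinning the data", the sketch below assumes a local export of audit-log events (e.g. CloudTrail records as JSON lines) and counts data-access events per principal; the field names are illustrative:

```python
import json
from collections import Counter

def access_metrics(log_path: str) -> Counter:
    """Count data-access events per principal, a basic 'measured asset' view."""
    per_principal = Counter()
    with open(log_path) as fh:
        for line in fh:
            event = json.loads(line)
            # CloudTrail-style records carry the acting identity's ARN.
            principal = event.get("userIdentity", {}).get("arn", "unknown")
            per_principal[principal] += 1
    return per_principal

# Example usage: surface the principals touching data most often.
# metrics = access_metrics("cloudtrail_events.jsonl")
# print(metrics.most_common(10))
```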
Data minimisation (enhancing Data Privacy): store only the data that is necessary, as everything stored must be protected. Less is better in some cases