Cloud is an innovative way to manage computing and storage resources. Reduced TCO, agility, and data localization are some of the features that make a compelling case for cloud adoption. It also gives enterprises the flexibility to scale solutions up and down as economic requirements and situations demand.
Cloud is not only about sharing resources but also about a shared responsibility model. Multitenancy is synonymous with cloud: multiple customers (both across and within organizations) share the same resource pool. Abstraction and orchestration are the two characteristics that enable the cloud to deliver resources in a segregated and isolated manner.
Broadly, the scope of security and compliance doesn’t change much with cloud, but cloud does introduce complexity in the roles and responsibilities shared between the cloud user and the provider for securing the different components of a solution.
Service models can easily overlap, with the resulting project being a combination of IaaS and PaaS. The technologies, tools, and configurations offered by a provider may differ at each stage and depend on the model finalized. These gaps should be identified as part of architecture design.
Cloud computing has a direct impact on governance and risk management due to the shared resource model. An organization should not treat a cloud provider as just another third-party service provider: the provider is not dedicated to one customer, and it may not be feasible for it to fully customize its offerings and legal agreements.
Negotiated contracts, supplier assessments, and compliance reports are some of the tools for exercising governance.
A very good analogy put forth by the Cloud Security Alliance is: “Think of a shipping service. When you use a common carrier/provider you don’t get to define their operations. You put your sensitive documents in a package and entrust them to meet their obligations to deliver it safely, securely, and within the expected Service Level Agreement.”
Enterprises should be ready to accept that these compliance reports may not be fully accessible: a cloud provider services many customers on the same platform and may have reservations about sharing the complete report. Organizations should define their risk tolerance based on the assets involved and the service model agreed upon.
Most privacy laws and guidelines were developed in the late 1960s and 1970s and were later clarified and expanded by the OECD. Quite a few countries mandate that personal data, as defined in their regulations, must not move outside their geographical boundaries.
Cloud providers should explicitly document user location, infrastructure location, data classification, and any other restrictions involved. At times, these cross-location requirements can conflict and become difficult to manage.
‘Privacy by design’ should be the guiding principle for defining any product or service. Without restrictions and guidelines, data can easily be replicated into multiple pockets, making it practically difficult to identify and delete.
Data security depends upon the location of the data, its classification, its storage format, the applicable access controls, and the encryption tools and technologies used. The most common types of data storage in the cloud are object/file-based storage, volume storage, and databases (relational/NoSQL).
Another technique in use is data dispersion, which breaks data down into small fragments and stores multiple copies on different physical storage. Sending data to cloud object storage via APIs is considered relatively reliable and cost effective as compared to setting up a dedicated SFTP server.
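For illustration, here is a minimal sketch of that API-based upload pattern, using AWS S3 and the boto3 SDK as one example provider; the bucket name, object key, and payload below are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

def upload_report(bucket: str, key: str, payload: bytes) -> None:
    """Upload one object to cloud storage, requesting server-side encryption."""
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=payload,
        ServerSideEncryption="AES256",  # provider-managed encryption at rest
    )

upload_report("example-reports-bucket", "2024/q1/report.csv", b"col1,col2\n1,2\n")
```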
The architecture should also include tools to detect data transfers and large data migrations. Cloud Access Security Brokers (CASBs) and Data Loss Prevention (DLP) tools help detect large data migrations and support network monitoring; some are capable of security alerting as well.
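As a toy illustration of the kind of detection these tools automate (this is not a real CASB/DLP, and the threshold and data source are assumptions), the sketch below flags users whose daily egress volume exceeds a policy limit:

```python
from collections import defaultdict

EGRESS_ALERT_BYTES = 5 * 1024**3  # assumed policy threshold: 5 GiB per user per day

def flag_large_egress(transfer_log):
    """transfer_log: iterable of (user, bytes_sent) records for one day."""
    totals = defaultdict(int)
    for user, sent in transfer_log:
        totals[user] += sent
    return [user for user, total in totals.items() if total > EGRESS_ALERT_BYTES]

# 'alice' moved ~6 GiB out and gets flagged; 'bob' stays under the threshold.
print(flag_large_egress([("alice", 6 * 1024**3), ("bob", 100)]))
```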
Securing data in motion is an important aspect of cloud computing. A few options for encrypting in-transit data are client-side encryption, network encryption (TLS/SFTP), and proxy-based encryption.
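A minimal sketch of the client-side option, assuming the Python `cryptography` package: data is encrypted before it ever leaves the client, so the transport and the provider see only ciphertext (key handling is deliberately simplified here).

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, sourced from a KMS/HSM, not generated ad hoc
cipher = Fernet(key)

plaintext = b"customer-record: sensitive"
ciphertext = cipher.encrypt(plaintext)  # only this ciphertext crosses the network

# ...transmit ciphertext to the provider; decrypt only on trusted clients...
assert cipher.decrypt(ciphertext) == plaintext
```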
The design and architecture should be prepared to accept public data, as ingesting it may be one of the expectations of the solution, and should be capable of isolating and scanning that data before integrating it with the primary data store.
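One way this could look in practice is a quarantine-then-promote flow. The sketch below assumes S3/boto3 again, with hypothetical bucket names and a placeholder scanning hook standing in for whatever scanner the team actually runs:

```python
import boto3

s3 = boto3.client("s3")

def passes_scan(payload: bytes) -> bool:
    """Placeholder scan hook: reject anything that is not valid UTF-8 text."""
    try:
        payload.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def ingest_untrusted(key: str, payload: bytes) -> None:
    # 1. Land the object in an isolated quarantine bucket.
    s3.put_object(Bucket="example-quarantine", Key=key, Body=payload)
    # 2. Scan it before it can touch the primary data store.
    if not passes_scan(payload):
        s3.delete_object(Bucket="example-quarantine", Key=key)
        return
    # 3. Promote to the primary store only after a clean scan.
    s3.copy_object(
        Bucket="example-primary",
        Key=key,
        CopySource={"Bucket": "example-quarantine", "Key": key},
    )
```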
Key management is tightly coupled with these encryption choices and can be implemented using a Hardware Security Module (HSM), a cloud provider-specific virtual appliance, or a hybrid of the two.
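A common pattern behind HSM/KMS-backed key management is envelope encryption: a master key wraps a fresh per-object data key, and only the wrapped key is stored alongside the data. In this sketch a local Fernet key stands in for the master key, which a real HSM would never release:

```python
from cryptography.fernet import Fernet

# Stand-in for the HSM/KMS master key; a real HSM never releases this key.
master = Fernet(Fernet.generate_key())

def encrypt_object(plaintext: bytes):
    data_key = Fernet.generate_key()            # fresh per-object data key
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = master.encrypt(data_key)      # wrap the data key with the master key
    return ciphertext, wrapped_key              # store both; discard the plain data key

def decrypt_object(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)      # unwrap via the HSM/KMS
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_object(b"record")
assert decrypt_object(ct, wk) == b"record"
```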
Similarly, encryption and tokenization are two techniques used to protect data at rest. The methods and techniques vary by service model, provider, and deployment. It may be easy to adopt a blanket encryption policy, but we should understand that processing encrypted data increases compute time.
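Tokenization can be sketched as follows: the sensitive value is swapped for a random token, and the token-to-value mapping lives in a separate, tightly controlled vault. An in-memory dict stands in for that vault purely for illustration:

```python
import secrets

# Token -> original value; a real vault is a separate, hardened data store.
_vault = {}

def tokenize(value: str) -> str:
    token = secrets.token_urlsafe(16)
    _vault[token] = value
    return token  # safe to store in application databases

def detokenize(token: str) -> str:
    return _vault[token]  # access to this path must be strictly controlled

card = tokenize("4111-1111-1111-1111")
print(card)              # opaque token, carries no cardholder data
print(detokenize(card))  # original value, available only via the vault
```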
We’ve discussed various options around data encryption, but as a guideline, cloud application architecture should be defined with a threat model as an input. We should document the key-exposure mechanism, the location of the encryption engine, and so on.
Cloud provider capabilities should be taken as an input to the application architecture, and the native security choices a provider offers should be assessed: at times they are not only better but also more cost effective than re-inventing the wheel.
Moving to the cloud should be treated as an opportunity to define better ways to process and manage data.
(Reference: Cloud Security Alliance, "Security Guidance for Critical Areas of Focus in Cloud Computing v4.0")
Article written by Akshey Gupta