Is Public Cloud Secure?

How Secure is Public Cloud?

There is a common perception that storing data in the cloud is inherently insecure: it seems no rising media star is complete without a sex tape that was obtained from a public storage service. More serious breaches have been reported involving millions of credit card records and other valuable personal data. Is this because shared cloud services are by their nature vulnerable to hacking and unauthorised access in ways that other storage services are not? Would it be advisable to keep all personal data on personally held storage?
Another commonly held and related idea is that cloud providers can access your data whenever they choose, whether out of curiosity or to satisfy regulatory demands.
I will consider AWS specifically, but the points below are representative of the major public cloud providers.

Physical Security

  • AWS provide a secure network architecture within their data centres that is designed to handle failure at many points
  • API endpoints are secured by requiring HTTPS (TLS-encrypted) access through a monitored gateway
  • The Amazon.com corporate network is completely separate from the AWS networks; there is no shared infrastructure
  • All aspects of the network are monitored, with various levels of protection in place
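The HTTPS-only API endpoints do more than encrypt traffic: every request must also be signed with keys derived from the caller's secret access key (AWS Signature Version 4). As a sketch of the documented key-derivation step, using only the Python standard library (the credentials and date below are made up):

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a SigV4 signing key via the chained HMAC-SHA256 scheme AWS documents."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Illustrative, non-functional credentials only.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                         "20230905", "us-east-1", "s3")
```

Because the signing key is scoped to a date, region and service, a leaked signature cannot be replayed elsewhere, and the long-term secret key never travels over the wire.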

When hard disks are decommissioned they are first scrubbed by software and then physically shredded.
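As a rough illustration of what the software-scrubbing step means, here is a minimal single-pass zero-fill of a file before deletion. This is only a sketch: real disk-sanitisation tools work on the raw device and typically make multiple overwrite passes.

```python
import os
import tempfile

def scrub_file(path: str) -> None:
    """Overwrite a file's contents with zeros, flush to disk, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # single zero-fill pass; real tools make several
        f.flush()
        os.fsync(f.fileno())      # push the overwrite through the OS cache
    os.remove(path)

# Demonstrate on a throwaway file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
scrub_file(path)
```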

How can we be sure AWS is secure?

AWS, like Microsoft and other large public cloud providers, spends hundreds of millions of dollars each year on security compliance: certification against external standards such as ISO 27001, plus programmes such as FedRAMP that are required to satisfy US government departments, and so on.
You can see a list of AWS security compliance certifications.

Security is a Shared Responsibility

There are three broad levels of service that AWS provide, and each has its own division of security responsibilities between AWS and its customers.

  1. Infrastructure services: when a customer uses AWS infrastructure to build something similar to a private data centre, with networks, subnets, compute servers, storage disks, gateways and load balancers, AWS is responsible for the network hardware and the virtualisation layer, while the customer is responsible for the operating system, storage volumes, application software, and the configuration of IP addresses, network access and permissions. The customer runs this infrastructure in much the same way as they would on their own premises, except that they will probably be using shared hardware with virtual machines on top, unless they pay extra for dedicated hardware.
  2. Container services such as Relational Database Service, Elastic Beanstalk, Internet Gateway, EMR (a big data service) and Elasticsearch Service are abstracted above the virtual machine and operating system, so the customer cannot patch the database or operating system or configure the underlying connections. The customer is responsible mainly for access permissions, code and data.
  3. Abstracted managed services such as AWS Lambda, SES, SQS, S3 and DynamoDB can only be accessed via an API; the customer's responsibility mostly relates to choices around encryption, access permissions and region of deployment.
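For the abstracted services in particular, the customer's side of the responsibility is expressed as configuration. As a sketch, here is an S3 bucket policy, built as plain JSON, that denies any object upload that does not request server-side encryption (the bucket name is made up; the condition uses the documented `s3:x-amz-server-side-encryption` key):

```python
import json

# Hypothetical bucket name for illustration.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            # Reject PutObject requests that do not ask for AES-256 encryption.
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        }
    }]
}

policy_document = json.dumps(bucket_policy, indent=2)
```

Whether the encryption actually happens is AWS's job; deciding that it must happen, and enforcing that decision in policy, is the customer's.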

Monitoring your AWS deployments

  1. Visibility - you can easily discover what assets you have deployed in AWS by using AWS Config to record and monitor each service, with security rules that determine whether your security posture is compliant or non-compliant
  2. Auditability - CloudTrail logs every API call, creating a record of what was accessed, when and by whom (non-repudiation)
  3. Controllability - secret keys can be managed by KMS, or by CloudHSM if a dedicated hardware device is required, for example to meet FIPS 140-2 compliance
  4. Agility - repeatable, consistent deployments can be managed with services such as Elastic Beanstalk and CloudFormation, which create coded deployments complete with access permissions
  5. Automation - OpsWorks, a Chef-based managed deployment service, and AWS CodeDeploy, a component of a managed code integration/deployment pipeline, can automate deployments to production or whatever stage is preferred
  6. Scale - AWS treats every customer the same, whether a billion-dollar business or a sole user, with regard to their capacity to manage security and deploy resources at global scale. In other words, the best practices of the largest businesses are available to your services.
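To make the auditability point concrete: a CloudTrail record is just JSON, and each entry names the caller, the time, the API action and the source address. A sketch of pulling the who/when/what out of one record (the event below is a trimmed, fabricated example; real records carry many more fields):

```python
import json

# A simplified, made-up CloudTrail record for illustration.
raw_event = """{
  "eventTime": "2023-09-05T10:15:00Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "PutObject",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}"""

event = json.loads(raw_event)

# Reduce the record to an audit one-liner: who did what, where, and when.
summary = "{user} called {action} on {source} at {time}".format(
    user=event["userIdentity"]["userName"],
    action=event["eventName"],
    source=event["eventSource"],
    time=event["eventTime"],
)
```

Because every console click and CLI command ultimately becomes an API call, this log is a complete record of account activity, which is what makes the non-repudiation claim possible.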

Underpinning all of the above are a number of key services: AWS IAM to control access permissions and user identity, CloudWatch to monitor health and raise alarms, and AWS Trusted Advisor, which runs a variety of checks on your deployments against best practice and identifies over-permissive access and unused infrastructure. IAM is one critically important service that I will explore further in future posts.
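As a small taste of the kind of access control IAM expresses, here is a least-privilege identity policy granting read-only access to a single bucket, again built as plain JSON (the bucket name is hypothetical; the actions are the standard S3 read operations):

```python
import json

# Hypothetical bucket name for illustration.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyReports",
        "Effect": "Allow",
        # Grant only the ability to list the bucket and fetch objects.
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports",    # the bucket itself (for ListBucket)
            "arn:aws:s3:::example-reports/*"   # the objects inside it (for GetObject)
        ]
    }]
}

policy_json = json.dumps(read_only_policy, indent=2)
```

Attached to a user, group or role, a policy like this grants exactly what is listed and nothing more, since IAM denies by default.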
