Security Control Attestation for Cloud Computing Providers

While working on one of the initiatives in the cloudsecurityalliance.org working groups, we had an interesting exchange of ideas on the relevance of SAS 70 and similar certifications for cloud service providers. There were viewpoints that such certifications may not be sufficient and that their usefulness is debatable when it comes to the cloud environment and the various flavours it has to offer.

In this post, I present my views on the subject.

To understand how such certifications can help cloud service providers, we can look at the strategies that various IT sourcing providers with global delivery models adopted to provide assurance to their customers on information security and regulatory compliance.

In the early years of outsourcing (around 2000–2004), potential customers expressed a lot of apprehension around the state of information security, risk management and regulatory compliance when outsourcing to IT service providers, be it HP, IBM, CSC, EDS, HCL etc. The IT service providers knew that to win customers' business, they needed to provide reasonable assurance on the state of information security. In any outsourcing discussion that took place around that time, there used to be a huge focus on such topics.

What these providers did was adopt industry best practices and standards like ITIL and BS7799 (now ISO 27001). They then got an external body to audit and certify the state of these controls in their delivery centers (certifications like SAS 70 Type I & II to demonstrate the presence and effectiveness of the implemented controls). On top of that, some of these providers also allowed their customers to audit the security controls implemented at the provider's delivery centers at random, either through the customer's internal auditors or the customer's external auditors.

Over time, the adoption of these industry standards, combined with SAS 70 reports and, yes, the 'right to audit', did provide reasonable assurance to customers. Many of these providers have successfully been able to demonstrate the state of information security at their delivery centers to existing and new customers, and business has been good.

Now, is it guaranteed that just because these providers have SAS 70 certifications, all is well at their centers? I don’t think anyone can guarantee a 100% secure environment.

I think the cloud computing market will evolve in a similar manner. It will require cloud computing providers to implement the necessary controls, adopt standards and furnish recognized certifications as proof of the effectiveness of those controls. Without these certifications, providers will find it tough (just like the IT outsourcing providers did) to demonstrate the effectiveness of the controls they have implemented.

Having said that, I also think that cloud computing service providers will be required to grant customers the 'right to audit' on top of these certifications. Enterprises with enough business potential, especially, will be able to muscle their way with the providers.

I recently met a cloud computing provider and asked them about the right to audit. They said they won't let customers audit their facilities and even refuse to divulge the location of their DCs. I don't see them winning too many favors with auditors with such an approach, especially those who are very particular about data sensitivity and regulatory compliance. These providers may continue to get the non-critical portion of the enterprise IT environment, but unless reasonable and acceptable assurance around information security and regulatory compliance is provided, the critical, sensitive corporate apps are likely to stay within the enterprise DC, probably in a private cloud kind of setup.


Cloud Security Alliance Guidance Document v2

CSA has released version 2 of the cloud security guidance document. It is available at http://www.cloudsecurityalliance.org/csaguide.pdf

It was a privilege working with so many of my peers located in different parts of the world. It is actually amazing that so many of us could collaborate and work collectively on this initiative.

Security, Risk & Compliance for Cloud Computing Model

IT has long struggled with the issue of securing information and the underlying assets in traditional IT environments. There have been debates around how much security is enough. Over time, various models have evolved to enable an enterprise to get a grip on the information security issue. Most approaches encourage taking a risk-based approach to measure the adequacy of existing controls and identify areas of improvement to bring risk down to an optimum level.

Now, the evolution and recent popularity of the Cloud computing model has added new dimensions to the concept of information security. There is a lot of discussion taking place within the IT industry, but there is still a haze around the security, risk and compliance issues. I believe that, with time, a clearer picture is likely to emerge as Cloud service providers realize the importance of addressing concerns around information security, and as standards emerge and are adopted.

In my opinion, the key issues around security, risk & compliance in the Cloud computing model are:-

Governance – one of the concerns customers have expressed is the loss of governance over the service that is now provided in the Cloud. Control over issues like the location of data, the implementation of security controls and their functioning now resides with the Cloud service provider. This lack of overall control, and hence of control over topics that can potentially impact security, risk & compliance, is a serious concern for organizations exploring the use of Cloud-based services.

Access Control – how do you control access to the information residing in the Cloud? While models like OpenID are emerging to control end-user access, one of the key issues is controlling access for privileged users like administrators.
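To make the privileged-access point concrete, here is a minimal sketch in Python of the kind of control I have in mind: every administrative action is gated by a role check and written to an audit log. The names (User, require_role, privileged_action) are purely illustrative and not any provider's actual API.

    import datetime

    AUDIT_LOG = []   # in a real setup this would go to tamper-resistant, write-once storage

    class User:
        def __init__(self, name, roles):
            self.name = name
            self.roles = set(roles)

    def require_role(user, role):
        # refuse the operation outright if the user lacks the privileged role
        if role not in user.roles:
            raise PermissionError(f"{user.name} lacks role '{role}'")

    def privileged_action(user, action, target):
        require_role(user, "cloud-admin")
        AUDIT_LOG.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "who": user.name,
            "action": action,
            "target": target,
        })
        print(f"{user.name} performed '{action}' on {target}")

    admin = User("alice", ["cloud-admin"])
    privileged_action(admin, "reset-vm", "vm-042")   # allowed, and recorded in AUDIT_LOG

The point is less the code itself and more that every privileged operation leaves a trail the customer can review.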

Data location within the Cloud – this is another important concern since it carries a lot of regulatory & compliance issues. The location of data residing in the Cloud has implications for legal jurisdiction and for regional/country-specific data privacy requirements.

For example, some of my customers, especially in non-US geographies, have specifically expressed concerns around the implications of the US Patriot Act on Cloud computing services.

Securing Data at Rest – how secure is the information residing on the Cloud in a shared environment? Information segregation and encryption are key topics that are being discussed to address concerns around information assets in the Cloud.
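As an illustration of the encryption side of this, below is a small Python sketch (assuming the third-party 'cryptography' package) that keeps a separate key per tenant, so data sitting in shared storage is useless without that tenant's key. How and where the keys are held is the real design decision; the in-memory dict here is only for illustration.

    # Sketch: per-tenant encryption of data at rest (pip install cryptography)
    from cryptography.fernet import Fernet

    tenant_keys = {
        "tenant-a": Fernet.generate_key(),
        "tenant-b": Fernet.generate_key(),
    }

    def store(tenant, plaintext: bytes) -> bytes:
        # encrypt a tenant's record before it touches shared storage
        return Fernet(tenant_keys[tenant]).encrypt(plaintext)

    def load(tenant, ciphertext: bytes) -> bytes:
        # decrypt a record; fails if the wrong tenant's key is used
        return Fernet(tenant_keys[tenant]).decrypt(ciphertext)

    blob = store("tenant-a", b"customer data for tenant A")
    print(load("tenant-a", blob))      # b'customer data for tenant A'
    # load("tenant-b", blob) would raise an InvalidToken error

The segregation question then largely becomes a key management question: who generates, holds and can revoke the tenant keys.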

Regulatory compliance – Organization do need to understand that while the responsibility of security the underlying assets may rest with the Cloud services providers, the responsibility & accountability to secure the information still rests with the enterprise. Traditional IT organizations are faced with lot of audit (internal & external) & security certifications like ISO27001/SAS 70 etc to demonstrate an acceptable level of presence & effectiveness of security controls. Similarly there will be a need for the Cloud service providers to accommodate their customers internal & external audit requirements along with an acceptable demonstration of presence of security controls within the Cloud services offered by them.

Incident Response & Forensics – another important point of concern is support during incident response and forensics. Due to the shared nature of Cloud-based services and the fact that the service provider can host the data in any of its data centres, establishing log/audit trails to enable incident response and to support forensics can be a challenging task.
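One way to make such audit trails defensible is to make them tamper-evident. The Python sketch below chains each log entry to the hash of the previous one, so altering or deleting an earlier entry breaks the chain; this is only a toy illustration, not any provider's actual logging mechanism.

    import hashlib, json

    def append_entry(chain, event: dict) -> None:
        # each entry records the hash of the previous entry, forming a chain
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
        chain.append({"event": event, "prev": prev_hash,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(chain) -> bool:
        # recompute every hash; any edit to history breaks the chain
        prev_hash = "0" * 64
        for entry in chain:
            body = json.dumps({"event": entry["event"], "prev": prev_hash}, sort_keys=True)
            if entry["prev"] != prev_hash or \
               entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, {"user": "alice", "action": "download", "object": "report.pdf"})
    append_entry(log, {"user": "bob", "action": "delete", "object": "vm-042"})
    print(verify(log))                     # True
    log[0]["event"]["action"] = "read"     # tamper with history
    print(verify(log))                     # False

A real deployment would also sign the entries and ship them off the provider's infrastructure, but the basic idea of a verifiable trail is the same.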

Organizations are realizing that when using Cloud computing services, there is a limit to the security controls that can be implemented and enforced. One needs to rely on the controls implemented by the Cloud service provider and trust that these controls are adequate and working the way they were designed. Similarly, there is also a limit on how much audit information can be generated, collected in a tamper-proof format and retained, and on whether that is adequate to satisfy an organization's audit requirements.

I believe that while enterprises will continue to use public Clouds, IT spend on setting up private Clouds will increase over time. Organizations will use public Cloud services to host non-critical services, while using the Cloud computing model within their organizational boundaries to benefit from the concept while keeping security, risk & compliance issues within their control. We are most likely to see the emergence of industry standards, and also some guidance from government bodies, on the topics of security, risk and compliance for Cloud services.

References: http://www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-assessment/at_download/fullReport; Twitter feeds on cloud

Defining Continuous Data Protection – II

In October this year (2008) I had written about the way Continuous Data Protection was being defined by some vendors to promote their portfolio of backup and recovery solutions (https://inthepassing.wordpress.com/2008/10/18/defining-continuous-data-protection/). In that post I had stressed evolving a more holistic definition of 'data protection' and developing a framework to facilitate the same, rather than using the definitions and concepts forwarded by the different OEMs and solution vendors.

I recently came across a blog post from Stephanie Balaouras of Forrester (http://blogs.forrester.com/srm/2008/12/the-numerous-me.html) which more or less agrees with my approach. The post highlights how the term "Data Protection" is being interpreted by IT Operations teams and IT Security professionals, and the need to look at the term from both the security and the recoverability points of view.

Redefining AAA – Anybody, Anywhere, Anytime

I came across an article discussing how to enable any person to access the required information at any time, independent of the device from which the information is accessed or, for that matter, the geography (office/home etc.).

It was a nice read, and it brought to my mind that perhaps it's time to realign AAA as it is known in security circles (AAA typically stands for Authentication, Authorization and Accounting).

Now, this also has implications for enterprise IT. Almost anyone can buy a powerful smartphone with the capability to browse the internet even while on the office network, use the smartphone as a modem to connect to the internet, access corporate email and documents on the smartphone, participate in blogs and social networking sites, and share ideas.

The standard way IT typically approaches the topic of access and authorization is to be restrictive: stop users from bringing in phones, or not allow users to access corporate email over mobile devices (allowing only a select bunch of employees to do so). However, I am not sure this is productive, and IT will be looked at as hindering the productivity and efficiency of the business users.

There was also an article along similar lines – http://mikeschaffner.typepad.com/michael_schaffner/2008/10/the-un-marketin.html – which touches on relaxing the controls and enabling users to use IT in a manner that can enhance their productivity & efficiency.

In my opinion, the time has come for IT to move from providing traditional restrictive, controlled environments to providing an AAA (Anybody, Anywhere and Anytime) environment to business users, while ensuring they are able to manage IT risk in an optimum manner.

“Anybody should be able to view the information they are entitled to, use the information in a manner they are authorized to, from Anywhere they desire and at Anytime they want”

This will require a combination of a few topics I have written about before (and probably a few more), namely:-

With the redefined IT perimeter and redefined continuous data protection, IT teams can extend the same experience of accessing the required information, with the necessary controls and rules, from anywhere, just as users would experience it on the corporate network. At the same time, it will allow users to access the necessary information based on their roles and authorization. It will also ensure that the data is protected without being too restrictive, thus allowing end users to extend and enjoy their IT experience.
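As a rough illustration of what such an AAA-style decision could look like, the Python sketch below combines the user's role with the device and network context to scale the level of access, instead of simply blocking everything outside the office. The roles, device classes and rules here are made-up examples, not a recommended policy.

    def access_level(role: str, device: str, on_corp_network: bool) -> str:
        # entitlement first: unknown roles get nothing
        if role not in ("employee", "manager"):
            return "deny"
        if device == "managed-laptop":
            return "full"                 # same experience as in the office
        if device == "smartphone":
            # entitled users still get the data, but through a restricted view when remote
            return "full" if on_corp_network else "read-only"
        return "webmail-only"             # unknown/personal devices

    print(access_level("manager", "smartphone", on_corp_network=False))       # read-only
    print(access_level("employee", "managed-laptop", on_corp_network=False))  # full
    print(access_level("guest", "smartphone", on_corp_network=True))          # deny

The design choice is that access is shaped by context rather than denied outright, which is what the Anybody, Anywhere, Anytime idea is really asking for.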

IT Security Outsourcing Models – III

outsourcing security infrastructure management

In this case, the service provider is responsible for monitoring, management and maintenance of the security infrastructure.

The service provider will usually bring in their own tools for security event monitoring, as in the previous case (outsourcing security infrastructure monitoring with the service provider's tools & processes). Along with being responsible for incident monitoring, the service provider will also execute the following processes:-

  • change management
  • configuration management
  • version upgrades/maintenance
  • incident management
  • reporting

In case of standalone security management outsourcing, the service provider will usually prefer to use their own trouble ticketing tools to open incident and associated tickets on which the customer's team needs to take action (e.g. removing a virus-infected desktop from the LAN). The customer's retained security operations organization (if any) is then responsible for taking this ticket and redirecting the work to their internal IT teams.

If the customer prefers to get rid of this hop (of redirecting tickets to their internal IT teams), they may require the service provider to use the customer's ticketing tools. This can be achieved either by having a two-way integration between the service provider's and the customer's ticketing tools, or by extending the ticketing console to the service provider to open tickets manually. The manual way can also mean an increase in the service provider's response and notification time, since ticketing automation with the security event monitoring tools will no longer be possible.
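To show the shape of that hand-off, here is a Python sketch of the "provider opens tickets directly in the customer's tool" option over a REST interface. The endpoint, token and payload fields are hypothetical; every ticketing product exposes its own API, so treat this as a sketch of the integration pattern, not a real one.

    import requests

    CUSTOMER_TICKET_API = "https://ticketing.customer.example/api/incidents"  # hypothetical endpoint
    API_TOKEN = "replace-with-real-token"                                     # hypothetical credential

    def open_customer_ticket(summary: str, severity: str, source_event_id: str) -> str:
        # push a ticket from the provider's monitoring stack into the customer's tool
        resp = requests.post(
            CUSTOMER_TICKET_API,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={
                "summary": summary,
                "severity": severity,
                "source": "managed-security-provider",
                "source_event_id": source_event_id,   # lets both sides correlate tickets later
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["ticket_id"]               # assumed response field

    # open_customer_ticket("Virus detected on LAN desktop", "high", "EVT-10482")

Carrying the provider-side event ID on the ticket is what makes the two-way correlation workable when both tools stay in play.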

From a delivery perspective, the following models can again be explored:-

  • shared tools and shared monitoring & management teams
  • shared tools and shared monitoring teams, dedicated management teams
  • shared tools and dedicated monitoring teams, shared management teams
  • dedicated tools and dedicated monitoring & management teams

As stated in the previous post, one of the areas that requires attention is the incident management process. What the expectations from the service provider are, and how the hand-off happens between the outsourced and the retained teams, is a matter that also needs to be thought through in detail.

IT Security Outsourcing Models – II

In this post I will talk about the various paths I have seen customers walk when it comes to outsourcing security operations.

outsourcing security infrastructure monitoring with service provider’s tools & processes

Many IT functions will outsource monitoring-only activities. The service provider will bring in their tools and associated processes to monitor security event logs and also monitor the health of security infrastructure like firewalls, IDS, VPN etc. In a pure monitoring-only engagement, service providers are usually responsible for event log aggregation, analysis (in some cases using analytical tools like a SIEM) and alerting the customer's retained security teams on detection of an event of interest.
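As a toy illustration of that aggregation-and-analysis step, the Python snippet below scans aggregated log events and flags an "event of interest" (here, repeated failed logins from one source) that would then be raised to the customer's retained team. The log format and threshold are invented for the example; a real SIEM rule would be far richer.

    from collections import Counter

    FAILED_LOGIN_THRESHOLD = 5    # illustrative threshold

    def events_of_interest(log_events):
        # log_events: iterable of dicts like {'type': 'failed_login', 'src_ip': '10.0.0.5'}
        failures = Counter(e["src_ip"] for e in log_events if e["type"] == "failed_login")
        return [{"alert": "possible brute force", "src_ip": ip, "count": n}
                for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

    sample = [{"type": "failed_login", "src_ip": "10.0.0.5"}] * 6 + \
             [{"type": "login", "src_ip": "10.0.0.9"}]
    for alert in events_of_interest(sample):
        print(alert)   # this is what would be ticketed/alerted to the retained team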

The customer's team is then responsible for carrying out further analysis of the tickets and doing the necessary change and configuration management as required. Maintenance of the security infrastructure is also the responsibility of the customer's retained security ops team.

In most cases, to bring in efficiency, improvement in response time and SLA-based services, and to achieve economies of scale, the service provider would normally use a multi-tenant tool set for event monitoring and analysis. On detection of an event which requires the customer's attention, the service provider can:-

  • open tickets in the service provider's ticketing tool; the customer's retained security ops team has an interface into this tool.
  • open tickets in the customer's ticketing tool; the service provider's team needs to have an interface into the customer's ticketing tool.
  • or, in some cases, have a bi-directional interface between the service provider's and the customer's ticketing tools.

If this is a total outsourcing engagement, this decision is simplified since the service provider will be responsible for the entire IT function, so the choice of trouble ticketing tools is pretty much straightforward.

Now, in a discrete outsourcing engagement, this gets a little complicated. Usually the service aggregator would want the outsourced security function to use the single ticketing tool being used by the rest of the service providers. This can put some pressure on the outsourced security service provider to realign their internal delivery processes to accommodate this requirement.

Models that can be explored are as follows:-

  • service provider's multi-tenant (shared) tools and multi-tenant (shared) delivery teams – should be the cheapest model financially.
  • customer's already-bought/developed toolset and a service provider delivery team dedicated to the customer – basically out-tasking and not exactly outsourcing (usually explored by the BFSI segment).
  • service provider's multi-tenant (shared) tools and a dedicated delivery team for the customer – the dedicated team increases the cost of this model.
  • service provider provisions a dedicated toolset and a dedicated delivery team – should be the most expensive model (usually explored by the BFSI segment).

Again, one of the areas that requires attention is the incident management process. What the expectations from the service provider are, and how the hand-off happens between the outsourced and the retained teams, is a matter that needs to be thought through in detail.