Cloud Security Alliance Guidance Document v2

CSA has released version 2 of its cloud security guidance document. It is available at http://www.cloudsecurityalliance.org/csaguide.pdf

It was a privilege working with so many of my peers located in different parts of the world. It is actually amazing that so many of us could collaborate and work collectively on this initiative.

Security, Risk & Compliance for Cloud Computing Model

IT has long struggled with the issue of securing information and the underlying assets in traditional IT environments. There have been debates around how much security is enough. Over time, various models have evolved to enable an enterprise to get a grip on the information security issue. Most approaches encourage taking a risk-based approach to measure the adequacy of existing controls & identify areas of improvement, to bring the risk to an optimum level.

Now, the evolution & popularity of the Cloud computing model in recent times has added new dimensions to the concept of information security. A lot of discussion is taking place within the IT industry, but there is still a haze around the security, risk & compliance issues. I believe that, with time, a clearer picture is likely to emerge as Cloud service providers realize the importance of addressing concerns around information security, and as standards emerge and are adopted.

In my opinion, the key issues around security, risk & compliance in the Cloud computing model are:-

Governance – one of the concerns customers have expressed is the loss of governance over the service that is now provided in the Cloud. Control now resides with the Cloud service provider on issues like the location of data, the implementation of security controls and their functioning, etc. This lack of overall control, and hence of control over some of the topics that can potentially impact security, risk & compliance, is a serious concern for organizations exploring the use of Cloud based services.

Access Control – how do you control access to the information residing in the Cloud? While models like OpenID are emerging to control end user access, one of the key issues is controlling access for privileged users like administrators.
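To make the privileged-user concern concrete, here is a minimal Python sketch of role-based gating with an audit trail; the roles, the ACL table and the require_role() helper are all hypothetical, not any particular Cloud provider's API:

    # Minimal sketch of privileged-access gating for cloud resources.
    # The role names and ACL table below are illustrative assumptions.
    import logging

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("privileged-access")

    # role -> set of permitted operations
    ACL = {
        "end_user": {"read"},
        "admin":    {"read", "write", "configure"},
    }

    def require_role(user, role, operation):
        """Allow an operation only if the user's role permits it,
        and leave an audit trail either way."""
        allowed = operation in ACL.get(role, set())
        audit.info("user=%s role=%s op=%s allowed=%s", user, role, operation, allowed)
        if not allowed:
            raise PermissionError(f"{user} ({role}) may not {operation}")
        return True

    require_role("alice", "admin", "configure")    # passes, and is logged
    # require_role("bob", "end_user", "configure") # would raise PermissionError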

Data location within the Cloud – this is another important concern, since it carries a lot of regulatory & compliance implications. The location of data residing in the Cloud also has implications for legal jurisdiction and for regional/country specific data privacy requirements.

For example – some of my customers, especially in non-US geographies, have specifically expressed concerns around the implications of the US Patriot Act for Cloud computing services.

Securing Data at Rest – how secure is the information residing in the Cloud in a shared environment? Information segregation and encryption are key topics being discussed to address concerns around information assets in the Cloud.
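As an illustration of the encryption side of this discussion, the sketch below encrypts a record before it ever reaches shared cloud storage, using the Fernet recipe from Python's third-party 'cryptography' package (pip install cryptography); who holds the key, customer or provider, is exactly the open question above:

    # A minimal sketch of encrypting data before it leaves for the cloud.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # keep this out of the provider's reach
    cipher = Fernet(key)

    record = b"customer PII destined for shared cloud storage"
    blob = cipher.encrypt(record)    # store only 'blob' in the cloud

    assert cipher.decrypt(blob) == record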

Regulatory compliance – organizations do need to understand that while the responsibility for securing the underlying assets may rest with the Cloud service providers, the responsibility & accountability for securing the information still rests with the enterprise. Traditional IT organizations face a lot of audits (internal & external) & security certifications like ISO 27001/SAS 70 etc. to demonstrate an acceptable level of presence & effectiveness of security controls. Similarly, there will be a need for Cloud service providers to accommodate their customers' internal & external audit requirements, along with an acceptable demonstration of the presence of security controls within the Cloud services offered by them.

Incident Response & Forensics – another important point of concern is support during incident response and forensics. Due to the shared nature of Cloud based services and the fact that the service provider can host the data in any of its data centres, establishing log/audit trails to enable incident response and to support forensics can be a challenging task.

Organizations are realizing that when using Cloud computing services, there is a limit to the security controls that can be implemented and enforced. One needs to rely on the controls that are implemented by the Cloud service provider and trust that these controls are adequate and working the way they were designed. Similarly, there is also a limit on how much audit information can be generated, collected in a tamper-proof format and retained, and on whether that is adequate to satisfy an organization's audit requirements.
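One way to make audit information tamper-evident is a hash chain, where each log entry commits to the previous one, so any later modification breaks the chain. The sketch below is a toy illustration of the idea, not a production design (a real deployment would add signing and write-once storage):

    # Tamper-evident audit trail sketch: each entry carries the hash of
    # the previous one; altering any earlier entry invalidates the chain.
    import hashlib, json, time

    def append_entry(chain, event):
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = {"ts": time.time(), "event": event, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain):
        for i, entry in enumerate(chain):
            body = {k: entry[k] for k in ("ts", "event", "prev")}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            if i and entry["prev"] != chain[i - 1]["hash"]:
                return False
        return True

    log = []
    append_entry(log, "admin login from 10.0.0.5")
    append_entry(log, "VM snapshot exported")
    print(verify(log))   # True; flipping any field makes it False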

I believe that while enterprises will continue to use public Clouds, the IT spend on setting up private Clouds will increase over time. Organizations will use public Cloud services to host non-critical services, while using the Cloud computing model within their organizational boundaries to benefit from the concept while ensuring security, risk & compliance issues stay within their control. We are most likely to see the emergence of industry standards, and also some guidance from government bodies, on the topics of security, risk and compliance for Cloud services.

References: ENISA Cloud Computing Risk Assessment – http://www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-assessment/at_download/fullReport; Twitter feeds on cloud

Defining Continuous Data Protection – II

In October this year (2008) I had written about the way Continuous Data Protection was being defined by some vendors to promote their portfolios of backup and recovery solutions (https://inthepassing.wordpress.com/2008/10/18/defining-continuous-data-protection/). In that post I had stressed the need to evolve a more holistic definition of 'data protection' and to develop a framework to facilitate it, rather than use the definitions and concepts forwarded by the different OEMs and solution vendors.

I recently came across a blog post from Stephanie Balaouras of Forrester (http://blogs.forrester.com/srm/2008/12/the-numerous-me.html) which more or less agrees with my approach. The post highlights how the term "Data Protection" is being interpreted by IT Operations teams and IT Security professionals, and the need to look at the term from both a security and a recoverability point of view.

Redefining AAA – Anybody, Anywhere, Anytime

I came across an article discussing how to enable any person to access the required information at any time, independent of the device from which the information is accessed or, for that matter, the geography (office/home etc.).

It was a nice read, and it brought to my mind that perhaps it's time to realign the AAA as it is known in security circles (AAA typically stands for Authentication, Authorization and Accounting).

Now, this also has implications for enterprise IT. Almost anyone can buy a powerful smartphone with the capability to browse the internet even while on office networks, use the smartphone as a modem to connect to the internet, access corporate emails and documents on the smartphone, and participate in blogs and social networking sites to share ideas.

The standard way IT typically approaches the topic of access and authorization is to be restrictive and stop users from bringing in phones, or to not allow users to access corporate emails over mobile devices (allowing only a select bunch of employees to do so). However, I am not sure this is productive, and IT will be looked at as hindering the productivity and efficiency of the business users.

There was also an article on similar lines – http://mikeschaffner.typepad.com/michael_schaffner/2008/10/the-un-marketin.html – which touches on the aspects of relaxing the controls and enabling users to use IT in a manner that enhances their productivity & efficiency.

In my opinion, the time has come for IT to move from providing traditional restrictive, controlled environments to providing an AAA (Anybody, Anywhere and Anytime) environment to business users, while ensuring they are able to manage the IT risk in an optimum manner.

“Anybody should be able to view the information they are entitled to, use the information in a manner they are authorized to, from Anywhere they desire and at Anytime they want”

This will require a combination of a few topics I have written about before (and probably a few more):-

With the redefined IT perimeter and redefined continuous data protection, IT teams can extend the same experience of accessing the required information, with the necessary controls and rules, from anywhere, just as users would experience it on the corporate network. At the same time, it will allow users to access the necessary information based on their roles and authorization. It will also ensure that the data is protected without being too restrictive, thus allowing the end users to extend and enjoy their IT experience.
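As a toy rendering of the AAA statement above, the check below depends only on the user's entitlement, with no location or time parameter; the entitlement data is invented purely for illustration:

    # Toy "Anybody, Anywhere, Anytime" authorization check.
    ENTITLEMENTS = {
        # user -> {resource: set of permitted actions}
        "alice": {"quarterly-report": {"view", "edit"}},
        "bob":   {"quarterly-report": {"view"}},
    }

    def authorize(user, resource, action):
        # Note: no 'location' or 'time' parameter - Anybody/Anywhere/Anytime.
        return action in ENTITLEMENTS.get(user, {}).get(resource, set())

    print(authorize("bob", "quarterly-report", "view"))   # True
    print(authorize("bob", "quarterly-report", "edit"))   # False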

IT Security Outsourcing Models – III

Outsourcing security infrastructure management

In this case, the service provider is responsible for the monitoring, management and maintenance of the security infrastructure.

The service provider will usually bring in their own tools for security event monitoring, as in the previous case (outsourcing security infrastructure monitoring with the service provider's tools & processes). Along with being responsible for incident monitoring, the service provider will also execute the following processes:-

  • change management
  • configuration management
  • version upgrades/maintenance
  • incident management
  • reporting

In the case of standalone security management outsourcing, the service provider will usually prefer to use their own trouble ticketing tools to open incident and associated tickets on which the customer's team needs to take action (e.g. remove a virus-infected desktop from the LAN). The customer's retained security operations organization (if any) is then responsible for taking this ticket and redirecting the work to their internal IT teams.

If the customer prefers to get rid of this hop (of redirecting tickets to their internal IT teams), they may require the service provider to use the customer's ticketing tools. This can be achieved either by having a two-way integration between the service provider's and the customer's ticketing tools, or by extending the ticketing console to the service provider to open tickets manually. The manual route can also mean an increase in the service provider's response and notification time, since ticketing automation with the security event monitoring tools will no longer be possible.
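A rough sketch of what such a two-way integration boils down to is below; both ticket stores are in-memory stand-ins, since real tools would be reached over their own APIs with much richer field mappings and conflict rules:

    class TicketStore:
        """In-memory stand-in for a ticketing tool's API."""
        def __init__(self):
            self.tickets = {}                    # ticket_id -> status

        def upsert(self, ticket_id, status):
            self.tickets[ticket_id] = status

    def sync(provider, customer):
        """Push every ticket each side knows about to the other side."""
        for tid, status in list(provider.tickets.items()):
            customer.upsert(tid, status)
        for tid, status in list(customer.tickets.items()):
            provider.upsert(tid, status)

    msp, client = TicketStore(), TicketStore()
    msp.upsert("INC-104", "open")                # provider detects an incident
    sync(msp, client)                            # customer now sees INC-104
    client.upsert("INC-104", "resolved")         # customer's team closes it
    sync(msp, client)                            # provider sees the resolution
    print(msp.tickets)                           # {'INC-104': 'resolved'}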

From a delivery perspective, the following models can again be explored:-

  • shared tools and shared monitoring & management teams
  • shared tools and shared monitoring teams, dedicated management teams
  • shared tools and dedicated monitoring teams, shared management teams
  • dedicated tools and dedicated monitoring & management teams

As stated in the previous post, one of the areas that requires attention is the incident management process. What is expected from the service provider, and how the hand-off happens between the outsourced and the retained teams, is a matter that needs to be thought through in detail.

IT Security Outsourcing Models – II

In this post I will talk about the various paths I have seen customers take when it comes to outsourcing security operations.

Outsourcing security infrastructure monitoring with service provider's tools & processes

Many IT functions will outsource monitoring-only activities. The service provider will bring in their tools and associated processes to monitor security event logs and also the health of security infrastructure like firewalls, IDS, VPN etc. In pure monitoring-only engagements, service providers are usually responsible for event log aggregation, analysis (in some cases using analytical tools like SIEM) and alerting the customer's retained security teams on detection of an event of interest.

The customer's team is then responsible for carrying out further analysis of the tickets and doing the necessary change and configuration management as required. The maintenance of the security infrastructure is also the responsibility of the customer's retained security ops team.

In most cases, to bring in efficiency, improvement in response time, SLA-based services and economies of scale, the service provider would normally use a multi-tenant tool set for event monitoring and analysis. On detection of an event which requires the customer's attention, the service provider can do one of the following (a small detection-to-ticket sketch appears after this list):-

  • Open tickets on the service provider's ticketing tool; the customer's retained security ops team has an interface into this tool.
  • Open tickets on the customer's ticketing tool; the service provider's team needs to have an interface into the customer's ticketing tool.
  • Or, in some cases, have a bi-directional interface between the service provider's and the customer's ticketing tools.
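The sketch below shows the monitoring-to-ticketing hand-off in miniature: a trivial correlation rule (repeated failed logins from one source) raises an event of interest and pushes it to a ticket sink that could stand in for any of the three options above. The threshold and log format are invented:

    from collections import Counter

    FAILED_LOGIN_THRESHOLD = 5               # invented rule threshold

    def find_events_of_interest(log_lines):
        """Count failed logins per source IP and flag noisy sources."""
        failures = Counter(
            line.split()[-1] for line in log_lines if "login failed" in line
        )
        return [ip for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

    def open_ticket(ip, sink):
        # 'sink' could be the provider's tool, the customer's, or both
        sink.append(f"possible brute force from {ip}")

    logs = ["login failed from 203.0.113.7"] * 6
    tickets = []
    for source in find_events_of_interest(logs):
        open_ticket(source, tickets)
    print(tickets)   # ['possible brute force from 203.0.113.7']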

If this is a total outsourcing engagement, this decision is simplified: since the service provider will be responsible for the entire IT function, the choice of trouble ticketing tools is pretty much straightforward.

Now, in a discrete outsourcing engagement, this gets a little complicated. Usually the service aggregator would want the outsourced security function to use the single ticketing tool being used by the rest of the service providers. This can put some pressure on the outsourced security service provider to realign their internal delivery processes to accommodate this requirement.

The models that can be explored are as follows:-

  • Service provider's multi-tenant (shared) tools and multi-tenant (shared) delivery teams – should be the cheapest model, financially.
  • Customer's already bought/developed toolset and a service provider delivery team dedicated to the customer – basically out-tasking and not exactly outsourcing (usually explored by the BFSI segment).
  • Service provider's multi-tenant (shared) tools and a dedicated delivery team for the customer – the dedicated team increases the cost of this model.
  • Service provider provisions a dedicated toolset and a dedicated delivery team – should be the most expensive model (usually explored by the BFSI segment).

Again, one of the areas that requires attention is the incident management process. What is expected from the service provider, and how the hand-off happens between the outsourced and the retained teams, is a matter that needs to be thought through in detail.

IT Security Outsourcing Models – I

I have received a few queries and comments on the various models of IT security outsourcing. Well, in the next few posts, I will try to share my opinions and experiences on this topic.

I will not be discussing how to assess the state of the service provider's information security related controls.

To start with, let me share my thoughts on the state of security operations in total outsourcing vs discrete outsourcing engagements. Thereafter I will move to the more tactical subject of the various outsourcing models available for an enterprise to explore.

Security outsourcing in total IT outsourcing engagements

In total outsourcing, the entire IT function is outsourced to a service provider (which may also include the financial ownership of the assets). The customer may still maintain control over certain policies, like the asset refresh cycle, technology standards etc. However, in most such cases, even these decisions can be driven by the service provider.

The service provider is hence responsible for maintaining the existing controls and ensuring that the controls framework (assessment, adequacy and functioning etc.) is kept up to date to mitigate new risks as they emerge, on behalf of the customer.

If you look at it from an ITIL point of view, in total outsourcing, service strategy, design, transition, operations and continuous improvement are all the service provider's responsibilities. Some customers would still like (and should like) to be involved in or informed about the service strategy and design activities related to information security.

Depending upon the structure of service delivery within the service provider's organization, security operations may or may not be performed by a dedicated security function. The way I have seen outsourcing deals structured, the traditional security operational responsibilities are now dispersed to the respective technology towers (firewalls are part of the network team, end user computing teams are responsible for content management etc.). The overall security and compliance functions are cross-tower areas, as they impact multiple teams, and hence responsibility for them lies with the team responsible for similar functions like governance, program management, finance management etc.

I have seen many customers take a hands-off approach when it comes to outsourcing the security function in a total outsourcing deal. They are not involved with the service provider in the risk assessment, service strategy & design phases for information security. I don't think it's a wise approach. Many outsourcing RFPs do not mention clearly how IT risk, especially information security risk, would be handled. It is presumed (at times without much thought on the actual "how-to") that the IT governance function will also report on the risks and the subsequent risk management approaches.

What is important is the awareness and acknowledgement by customers of the fact that they have just outsourced the operations to manage the risk, but not the overall ownership of the risk itself. In case there is an incident, it is the customer who will still have to absorb the impact and pay any penalty. The customer may have the right to terminate the relationship with the service provider, but that would depend on how the legal and contract documents are drawn up.

Security outsourcing in discrete IT outsourcing engagements

In discrete outsourcing, there is a group of service providers, each responsible for a particular piece of the IT function. There is usually an aggregator role (either retained by the customer or given to another service provider) to consolidate and manage the other service providers delivering services to the same customer. The service aggregator then becomes responsible to the customer for the delivery of all of the outsourced IT services.

In discrete outsourcing, usually each service provider delivers the security operations for the technology/tower it is responsible for. For example, the network service provider will be responsible for monitoring and managing the firewalls only.

The service aggregator is usually responsible for the enforcement of security policies and for ensuring the customer's regulatory and compliance requirements are met. This role also requires tracking the OLAs (operational level agreements) between service providers. For example, the network service provider can report high utilization of the network and, using the logs from routers/firewalls, point out the source of the traffic as an infected desktop. The provider then opens a ticket on the end user computing team to have the desktop cleaned/removed.

In such an engagement, one of the most important processes that needs to be tracked is Incident Management, since efficient resolution/closure of an incident involves multiple parties. Along with Incident Management, tracking the enforcement of customer security policies to meet compliance & regulatory requirements across the various service provider teams and infrastructure is also a challenge. In my opinion, the service aggregator needs to bring in the experience and the necessary tools to be able to track the OLAs, the enforcement of policies and any deviations.

Usually the open-ended question in this type of arrangement is around the ownership of, and accountability for, driving the overall information security strategy. Many times it lies with the service aggregator only. But, like I mentioned earlier, the customer must get involved at least in the strategy, risk assessment and mitigation planning phases.

Yawn… more in the next post on the same topic!

Defining Continuous Data Protection

Recently I met the CIO of a pharma organization with a presence in more than 17 countries. During the discussion, he asked me what my thoughts were on 'continuous data protection'.

In the recent past, I have also attended presentations from a few vendors and OEMs and have heard their versions of 'continuous data protection' (CDP). Almost all offer what I would call 'backup and recovery' solutions under the guise of CDP.

If you look at Wikipedia, the term is defined as: "Continuous data protection (CDP), also called continuous backup or real-time backup, refers to backup of computer data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time" (refer – http://en.wikipedia.org/wiki/Continuous_data_protection).

However, I don't agree with this definition.

If you look at the definition of the word "protection": "In computer science, protection mechanisms are built into a computer architecture to support the enforcement of security policies. A simple definition of a security policy is 'to set who may use what information in a computer system'" (refer – http://en.wikipedia.org/wiki/Protection_mechanism).

Extending the definition in the context of data, it means the enforcement of security policies to define who may use what information or data in a computer system. Hence CDP is a framework of preventive, detective and reactive controls to protect the information stored in any computer system. The backup & recovery solutions which are being sold as CDP solutions constitute only the reactive controls.

The concept is hence simple – protect the data wherever it is created, ensure that the necessary access controls are in place to safeguard against unauthorized access and modification, ensure that the data and information is prevented from unauthorized copying to removable media and transmission (email etc.), and, in case of accidental or unauthorized destruction, have appropriate controls to recover the data and information from backup media.

Hence, in my opinion, whoever is looking for a CDP solution needs to look at the following solutions at the minimum (a small sketch of the fourth, reactive piece follows the list):-

  1. data classification solutions
  2. data leakage prevention solutions (host and network)
  3. user activity monitoring solutions
  4. backup and recovery solutions
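Here is a minimal sketch of that reactive piece: capturing a copy of a file every time it changes, using the third-party Python 'watchdog' package (pip install watchdog). The directory names are placeholders, and the preventive and detective controls above sit outside this snippet:

    # Minimal "continuous backup" sketch: every saved change to a file in
    # WATCHED becomes a timestamped version under VERSIONS.
    import shutil, time
    from pathlib import Path
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    WATCHED = Path("lab-data")           # placeholder source directory
    VERSIONS = Path("cdp-versions")      # placeholder version store

    class Versioner(FileSystemEventHandler):
        def on_modified(self, event):
            if event.is_directory:
                return
            src = Path(event.src_path)
            dest = VERSIONS / f"{src.name}.{int(time.time())}"
            shutil.copy2(src, dest)      # each change becomes a version

    if __name__ == "__main__":
        WATCHED.mkdir(exist_ok=True)
        VERSIONS.mkdir(exist_ok=True)
        observer = Observer()
        observer.schedule(Versioner(), str(WATCHED), recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()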

When I shared my approach with the CIO of the pharma organization, I was glad he agreed with the concept. He was concerned by recent cases of loss of information from the R&D centers and was looking for a framework to protect the data and information created and stored in the validated IT systems in the research labs. Right now, we are working on developing the framework for CDP and talking to various solution providers and OEMs to see how these solutions can work in tandem without reducing the efficiency and productivity of the employees.

IT Outsourcing & Security Issues

The recent news on the World Bank and the leading Indian outsourcing firm Satyam made headlines a few days ago.

It was reported in the media that Satyam had been banned from all offshoring work following a so-called "security breach" of the World Bank IT systems, which were being managed by Satyam under a total outsourcing contract between the two.

When I read the news articles and the media hype over the security risks involved in outsourcing, there were a couple of points that stood out and probably need serious thought. I admit, though, that I am looking at this topic purely from a services provider's point of view.

Broadly, there are two types of security risks when it comes to outsourcing.

1. The state of security and associated risks in the service provider's IT environment – usually these are discussed in detail and evaluated during the RFP stage. A good number of articles have also been written on assessing service providers' security policies and controls before and during the term of the contract. A service provider is usually asked to provide proof of the state of information security, answer certain specific questions in the RFP and, in some cases, provide SAS 70 Type I & Type II reports.

2. The state of security and associated risks in the enterprise IT environment now being outsourced to the service provider – this is a relatively overlooked topic for many of the enterprises that have entered, or are entering, an outsourcing agreement with an IT services provider. In the context of discrete and total outsourcing, this requires an in-depth understanding and joint strategy development with the service provider.

In many cases, an enterprise entering into an agreement for a discrete or total outsourcing engagement tends to forgo its responsibility for maintaining and tracking the risk in its IT environment (even though it is now outsourced), and is not invited to, or does not participate in, assessing the risk and formulating and implementing a suitable risk treatment plan.

With reference to point (2) above, I would like to highlight a few points which, in my opinion, require attention during the contract and legal discussion stages:-

  1. In almost all outsourcing contracts, the service provider usually takes over the customer's IT environment on an as-is basis, and hence the risk due to any security (technology/process) shortcoming also gets transferred to the service provider. In most contracts, the ownership of and accountability for this risk is not clearly mentioned.
  2. There are not many engagements where a risk profiling of the enterprise by the service provider is carried out prior to the beginning of the outsourcing engagement. As a result, there is usually no coherent strategy to address the risk that is inherited by the service provider in an outsourcing engagement.
  3. Many times an enterprise may not have invested in an adequate set of controls (both technical and procedural), which may result in a high risk exposure for the enterprise. Depending upon the level of maturity of the enterprise security organization and practices, the management may or may not be aware of this exposure.
  4. Even though the risk of not having the necessary controls might have been acceptable to the customer when the operations were in-house, it suddenly appears unacceptable if there is an incident post off-shoring. Again, in my opinion, this needs a clear mention in the contract/legal document.
  5. Even when additional controls to reduce the risk are recommended, the recommendations are usually side-lined either by the customer or by the sales teams due to cost implications. However, the ownership of accepting the residual risk is not clearly assigned and remains a vague area. This should, in my opinion, also be addressed in the contract document.

The most important fact remains that:-

There is no guarantee that a security breach will not take place, either due to technology failure or personnel misadventure. Even without outsourcing, we have seen breaches being reported.

Hence a clause which indemnifies the service provider against technology failure, or against the absence of a control not stated in the RFP as a mandatory requirement, needs to be incorporated in the contract/legal documents.

or

There needs to be a stage in the outsourcing project plan where the service provider assesses the information security related risks in the customer's IT environment that it is going to manage, and then jointly develops a risk treatment plan with the customer to ensure the risk is kept at a level acceptable to both organizations.

Redefine the IT Perimeter – II

On September 19th, there was a post on the CSO Online portal which outlined 5 trends for mobile security – http://www.csoonline.com/article/450166/Five_Trends_Driving_the_Need_for_Better_Mobile_Security?page=1

To summarize, the trends mentioned include:-

1. More powerful and less expensive mobile devices are becoming ubiquitous and are as irreplaceable as any PC or laptop, significantly increasing the risks from loss and theft.

Also, network providers are now charging not by "number of bytes downloaded" but based on the service features opted for, like "GPRS enabled talk plan" etc.

2. A move toward more powerful, IP-based network infrastructures is leading to increased use of data-heavy mobile services, which need more sophisticated management.

3. Increasing numbers of corporate users of mobile devices (including staff at all levels of the enterprise, not only the CxOs) accessing company applications and data are creating a huge headache for IT departments.

4. More and more sophisticated security threats are appearing as new devices provide richer targets.

If you look from the perspective of the IT perimeter, the perimeter needs to be redrawn to secure each of these mobile devices too, as corporate information can now be accessed from, and reside on, these powerful mobile devices.

Redefine the IT Perimeter – I

This follows my post in 2006 on the question of realizing a secure IT environment without any perimeter. I read about the JERICHO framework for the first time way back in 2005. I was, and still am, fascinated by the concept. It made sense, but only in theory, as I quickly realized the challenges in implementing a total de-perimeterization strategy. It not only involves a change in the mindset of the IT teams (to let go of the LAN) but also poses challenges on the technical front, as the solutions are not ready for a 100% JERICHO based network yet. (Of course, JERICHO is more than just removing the LAN.)

With the continuous improvement and maturity in technologies like identity management, endpoint security and network admission/access control, the time is right for large organizations to reap the benefits of a modified approach.

In this post I present my thoughts on implementing a stepped-down version of the de-perimeterization approach for an enterprise, one which aims to remove the need for an enterprise LAN.

In my opinion, this approach can be implemented in a phased manner, targeting the mobile users first, then the users with desktops, and so on. Needless to say, there will still be departments and/or business functions for which this approach will either not be applicable or for which management will still want to retain the traditional LAN based model, e.g. R&D and design functions.

———————-

Today, almost all enterprises are facing challenges in providing a secure IT environment for the business and providing assurance to the management and auditors.

If you take a typical enterprise, one can see IT expenditure in the areas of establishing a governance framework for information security, enterprise wide security policies and user awareness initiatives, and infrastructure security components like firewalls and IDS/IPS to secure the perimeter, B2B partner connectivity and other identified perimeters. There has been an increased focus on establishing and securing data centers and the systems residing in them.

After having spent money on securing data centers and implementing network security controls, the next target is to secure the endpoints. Many IT teams are implementing advanced endpoint security solutions like desktop based IPS and encryption, along with traditional anti-virus & personal firewalls on the endpoints. With a change in the threat landscape, where more and more threats are now targeting endpoints, especially mobile users, endpoint security is the new focus area for many CISOs.

A point to ponder – if we own the network, why do we need to protect the endpoint and spend top dollars securing the systems that connect to the network?

Well, we need to do so because we just can't control what flows through the network in the first place. We have put in firewalls, network IDS, IPS, DDOS appliances, blah blah… but we still don't have the assurance that a system that connects to the network will be secure, and hence the need to implement some endpoint security solution to protect it.

Enterprises moving to make most of their applications web enabled, extranets and business partner connectivity, vendors and consultants connecting to the enterprise IT environment, roaming users and the work-from-home culture have all led to the collapse of the traditional castle approach towards securing the enterprise.

So, this brings up another point to ponder – if, even after spending top dollars on state of the art network security controls, we still can't control the kind of traffic that flows through the network, why do we want to own it in the first place?

My own laptop has all the endpoint security features enabled when I connect to my corporate LAN as well as when I connect to the internet. So does it mean that the LAN or corporate network is as insecure as the Internet?

Routers, layer 2 & 3 switches, firewalls, network IDS/IPS, DDOS appliances, QoS, sniffers, network management tools, network security management tools, teams for network & security operations… and then anti-virus, personal firewall, host based IPS, DLP, desktop encryption… and still the question remains – are we secure yet?

So, is there any way to bring down the total cost of securing the operating environment for the business?

… Just do away with the hard perimeter and the underlying corporate network, and focus resources and effort on protecting the data center and endpoints only.

I am not against networks 😉 (I am, or rather was, a certified CCNP). I am just extending the logical reasoning which many CIOs and CISOs ponder when the network and security teams ask for funds to secure the enterprise. Broadly, the approach works as follows (a toy sketch of the admission decision in steps 4 to 7 appears after the list):-

  1. Consolidate the applications in the data center and implement network & system security controls as we do traditionally, along with an additional SSL VPN and network admission control at the perimeter from where the users can access the enterprise applications.
  2. Have internet service providers implement wireless access points in the office premises. The users will then connect to the internet directly even though they are on the office premises. Ensure that there are adequate endpoint security controls implemented on the endpoints; we are doing this anyway in the existing scenarios.
  3. Let the users connect to the enterprise applications hosted in the enterprise data center over the internet. If the application is already SSL enabled, no additional encryption/decryption is required at the gateways. However, in the case of client-server applications, we can use a clientless SSL VPN to secure the data flow between the endpoint and the application server for the session.
  4. Once the user connects to the data center, the authentication enforcement systems implemented at the gateway check the authenticity of the user. Depending upon the application landscape, a single sign-on solution can also be implemented. However, if that is too much of a challenge for the moment, a user can have separate network login credentials and separate application login credentials, as is the case within many enterprises today.
  5. Post authentication, the network admission control enforcement systems ensure that the endpoint has the latest OS patches, anti-virus updates etc. and also conforms to the corporate baseline security standards.
  6. In case the endpoint does not conform to the policies enforced by the network admission control elements, the endpoint is allowed access to a quarantined zone where the administrators can push the latest updates and patches to the user endpoint. Once the endpoint is brought back into compliance, the user is allowed access to the applications.
  7. Once the user and the endpoint are both validated, the user is allowed access to the applications to which he is entitled, based on the defined role of the user as reflected in the enterprise directory systems.
  8. The user performs the necessary activities and then logs off. During the entire session and the time for which the user is connected to the data center, the session and user activities are monitored using an event monitoring framework in real or as near to real time as possible.
  9. In case any hands-and-feet support is required to fix a problem on the desktop, the users can call the helpdesk as they do in the current scenario.
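Here is the toy sketch promised above, covering steps 4 to 7: authenticate the user, check the endpoint's posture against a baseline, quarantine non-compliant machines, then authorize by role. All the policy values and role mappings below are invented for illustration:

    BASELINE_PATCH_LEVEL = 2024          # assumed "latest" patch level
    ROLE_APPS = {"finance": {"erp", "email"}, "engineer": {"scm", "email"}}

    def admit(user_role, authenticated, endpoint):
        # step 4: gateway authentication
        if not authenticated:
            return "denied: authentication failed"
        # steps 5-6: network admission control posture check / quarantine
        if endpoint.get("os_patch_level", 0) < BASELINE_PATCH_LEVEL:
            return "quarantine: push missing OS patches, then re-check"
        if not endpoint.get("av_signatures_current", False):
            return "quarantine: update anti-virus signatures, then re-check"
        # step 7: role-based authorization from the directory
        return f"allowed: {sorted(ROLE_APPS.get(user_role, set()))}"

    print(admit("finance", True, {"os_patch_level": 2024, "av_signatures_current": True}))
    print(admit("finance", True, {"os_patch_level": 2019, "av_signatures_current": True}))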

This approach also ensures that users have near the same experience irrespective of the location from which they access the enterprise IT.

Now that the users are logged on to the internet even when they are in the office, in addition to when they log in from home or from public wireless hotspots (e.g. airports), they have the same look-and-feel experience when they connect to enterprise applications over the internet.

In my opinion, the security associations also do not change.

For example, if an enterprise has not enforced host based IPS and a robust patch management solution on the laptops of mobile users, it has inherently accepted the risk of a security breach due to malicious activity when the user connects to the internet from home or from a public wireless hotspot. Hence, in the proposed framework, the risk of a security breach remains the same and does not escalate if the user connects to the internet directly from the office as well.

The core of this approach is based on the following frameworks – data center security, endpoint security, identity management, network admission control, clientless VPN and security event monitoring.

Some of these are described in brief below:-

A. Data center security

This subject is not new to most of us. Traditionally, organizations have implemented network and system security solutions to protect the systems within the enterprise data center.

Data center consolidation

  • Today, enterprises often have islands of server farms, each secured by its own set of network and system security elements.
  • One of the key points in this approach is to remove these islands from the enterprise LAN and consolidate them in specific data centers. This will not only improve manageability but also focus the effort on securing the data centers instead of individual islands.
  • There can be various approaches to consolidation. It can potentially involve moving from local country specific data centers to a limited number of regional data centers. Server virtualization is another area which will contribute significantly to data center consolidation.

Securing the perimeter of the Data Center

  • The data center architecture should (and usually does) clearly identify the perimeter (hard and soft) and the traditional controls deployed on it to secure the data center.
  • The data center architecture should be designed to have layers of control which will help resist an attack or malicious activity through adequate preventive controls.
  • This should be complemented by a set of detective controls, and then a set of controls that will help contain and recover from a malicious incident.

Network admission control

  • Network admission control should be deployed to check for configuration & settings compliance after the user has been successfully authenticated.
  • The necessary controls should be deployed at the perimeter of the data center to enforce a compliance check on each endpoint that connects to the data center to access the enterprise applications.
  • The compliance check should cover the following at a minimum – OS patches, anti-virus updates, ensuring critical services like DLP, encryption etc. are running, and enterprise baseline policies.
  • Based on the validation of the endpoint, the user should be allowed access to the applications; otherwise the endpoint should be placed in a restricted access zone where the administrator can push the necessary patches etc. to bring the endpoint back into compliance.

B. Identity Management

Increasingly, enterprises are looking to streamline the way they manage the identities of the users in their environment. Since there is enough material available on this subject, I am not spending too much time on it.

  • Along with managing identities, managing access to enterprise resources based on the role of the user is also hot on the radar for many enterprises.
  • Not only can these two initiatives address most of the user identity lifecycle and associated issues, they are also very helpful in ensuring compliance by streamlining and effectively managing access control in applications and on IT resources.
  • The user's identity is checked the moment the user connects to the data center, using secure authentication controls. The complexity of the authentication mechanism will vary from enterprise to enterprise and from vertical to vertical.

C. Endpoint strategy

The endpoint strategy consists of implementing the right technology solutions at the endpoints, combined with strict control over the configuration standards and policies enforced on them.

Implement an endpoint security framework on the endpoints

The framework should consist of the following technologies at the minimum:-

  • anti-virus & personal firewall
  • endpoint encryption
  • Desktop HIPS
  • DLP for endpoints
  • URL filtering *

Most organizations have already implemented the first two endpoint strategy enforcement technologies. Lately, more and more organizations are exploring desktop level HIPS and DLP technologies and solutions to further strengthen their endpoints and ensure continuous data protection. In fact, many solution providers are now bundling these solutions under the umbrella of endpoint security solutions, where a single agent at the endpoint has all the functionality listed above.

I also think the anti-virus solution from McAfee allows roaming users to pull anti-virus updates from a hosted McAfee website if the user cannot connect to the enterprise ePO server. If this is the case with other solution providers too, we can leverage this feature to ensure the anti-virus is always updated, irrespective of where the user joins the network from.

Enforce corporate baseline configuration standards and policies for the endpoints

Ensure each endpoint is configured as per the accepted baseline standards, and enforce these standards using group policy objects and other controls on the endpoints.

Restrict the proliferation of administrative rights on the endpoints

Even if such rights are required, ensure that the end users cannot disable the deployed endpoint solutions without the administrator password for these solutions (I have seen a TrendMicro endpoint security solution which requires a separate password, different from the local or domain admin passwords, in case anyone wants to disable it).

In-the-cloud URL filtering to restrict browsing when users are in the office

In case there is still a need to enforce a URL filtering solution to ensure users at office premises do not access prohibited sites, one can contract with the service provider to provide an in-the-cloud URL filtering solution for the range of IP addresses that have been allocated to the enterprise.

D. Redefine the concept of the local LAN

The LAN as we know it today comprises core and access switches and routers, cables and wiring cabinets, and fiber and other media connecting offices to each other. Also throw in some complex routing protocols routing traffic from offices to the enterprise data centers, enabling users to access enterprise applications.

  • It also involves a heavy payout from the enterprise IT budget. The payout usually includes, amongst other things, the cost of the switches and routers, the annual maintenance and support charges, the cost of bandwidth provisioned between offices, the cost of complex network management tools, and the effort that goes into ensuring the network is 'up' so the users can go about their work.
  • I have already discussed in brief why we need endpoint security even though we spend heavily on the LAN and on the network security elements to protect the systems on it.

Now, take the LAN out of the picture and ask service providers like BT or Verizon to install DSL based internet connectivity in the building. With wireless access points in the building, the end users can connect to the internet from anywhere in the office.

One concern that does crop up is the available bandwidth for the users in such a scenario, and it is a genuine concern. With most enterprise applications becoming web enabled, the bandwidth requirement has come down considerably. Also, if you look at the network utilization when a user is on a 100 Mbps LAN and accessing email, you will notice that more often than not the utilization is less than 1% (even a sustained 1 Mbps of email traffic is only 1% of a 100 Mbps link).

However, there can be issues in the case of time sensitive applications which require real-time response.

I do believe that there is still some time before we have solutions to realize the JERICHO framework in its totality. However, the approach mentioned above can lead to substantial cost savings by removing the LAN and focusing resources on securing the endpoints and data centers only.

Discussion with CIO (Pharma/Healthcare) – 1

Recently I had a chance to have a discussion with the CIO of a leading generic drug manufacturer in this part of the world. The discussion was mainly around information security, the pressing needs of his organization, and how to set up a vision around information security strategy and then get it executed.

Being a generic drug manufacturer, the organization had thin margins on the products it sold. Hence, it was imperative for his team to be able to provide a secure operating environment for the organization while at the same time keeping the cost of 'security' low.

In fact, he was not the only one with that mandate. Most of the CxOs I have met have the same single line agenda on their charter.

Over the past 3 years, IT security spend has been range bound between 7 and 9% of overall IT spend across industry verticals, and the trend is the same for NA and EMEA. Also, with never ending developments in the threat, vulnerability & risk theaters, there is a need to respond in real or as near to real time as possible. Hence, IT teams face a considerable challenge in ensuring a secure environment for the business to operate in and providing assurance to the management on the same.

The discussion also revolved around using best-of-breed point solutions versus an ecosystem based approach to secure the IT landscape.

I believe that an ecosystem based approach is much better than using best-of-breed point solutions. There is usually a huge cost associated with purchasing and maintaining a best-of-breed solution portfolio, as described below:-

Since the solutions are the best in their category, the customer has to pay a premium to purchase them in the first place (yes, some large organizations do have the capability to arm-twist the vendors 😉 based on the brand name of the customer). Then comes the issue of ensuring the skill set in the team to implement and manage such solutions. In most cases, it requires imparting training to the team or picking up someone from the market. And in spite of a qualified team, more often than not the manageability of a portfolio of point solutions, and their integration, still remains an issue.

With CERT reporting that about 72% of downtime is caused by configuration issues, the manageability of a solution portfolio becomes an important criterion while selecting a solution, along with integration capability & fit within the existing solution portfolio.

An ecosystem based approach generally involves having solutions that need not be the best in their respective areas but that can function as an 'integrated system' to ensure a secure environment. It also ensures an overall reduction in management and integration complexities. Having said that, irrespective of a strong philosophy and ecosystem approach, I don't think one can avoid having the occasional standalone point solution, due to the inherent nature of the risk and dynamics associated with the domain of information security. But the number of point solutions can still be kept under control by adopting an ecosystem based approach.

One of the questions he put to me was – with so many point solutions in the market claiming to address issues around information security, what were my thoughts on how the solution space would evolve in due course?

In my opinion, solutions targeting issues that are seen as significant by customers will be absorbed by system or network vendors. There will always be some niche players in the market with fancy toys 😉 to address a very unique or niche requirement. However, the moment customers start perceiving a requirement as significant, and the requirement becomes pretty much standardized, these niche solution providers will be ready for acquisition by either system players (e.g. Microsoft), network players (e.g. Cisco, Juniper) or players like IBM and HP.

Hence, large infrastructure vendors will keep up their M&A activities, either to fill security gaps in their portfolios by acquiring best-of-breed security vendors, or to gain compensatory solutions covering the security related weaknesses in their other offerings. The velocity or urgency of M&A will also be driven by customer pressure on these players to minimize the risk to the customer environment due to inherent weaknesses in the solutions offered by these players (e.g. risk in customer environments due to the susceptibility of Windows based systems to worms etc. may drive customers to push Microsoft to also offer or acquire HIDS solutions in the future).

  1. We are already seeing the leading network equipment providers incorporating features like firewalls, IDS and IPS in their portfolios. Some of these solutions are already being manufactured and marketed by network equipment manufacturers like Cisco, Juniper etc., as is the case today. The next transition for such solutions will be to have them as part of the feature set of the networking products themselves.
  2. Similarly, in the systems space, Microsoft's entry into the picture has ruffled many. Microsoft's acquisition of companies like Giant and Sybari, and its recently introduced anti-virus and anti-spam offerings, have proved to be among the most significant developments in the security market, in my opinion. I have started hearing discussions in meeting rooms where CIOs and CSOs are asking their teams to evaluate the solutions that Microsoft has started offering. I don't see people ready to discard the solutions that they have been using in the past in favour of Microsoft security solutions yet.

The enterprise IT security teams I have interacted with are adopting a wait-and-watch strategy, but it is definitely on their radar. At least the ones I have interacted with are seriously tracking how the solution from Microsoft evolves and what kind of effort Microsoft puts in to make it a credible offering.

The same is the case for system security solutions like data-at-rest encryption, biometric authentication for systems etc. At some point, either these will become a pretty much standard feature of the underlying hardware (I believe some hardware manufacturers are already providing laptop models which have inbuilt processors to encrypt the entire hard disk, fingerprint readers etc.), or they will be offered as an out-of-the-box, standard feature of the operating system (e.g. Microsoft already offers encryption along with the OS platform).

Return on Investment – Identity & Access Management Case Study

This post is a short analysis of a successful Identity & Access Management strategy adopted by a 10 billion dollar organization with more than 25,000 users and over 25 manufacturing facilities.
In 2005, the organization had a 25-person team performing what is called the helpdesk and "GAM" function; GAM stands for Global Account Management. Out of the team of 25, 8 people were dedicated to issues related to account creation, management, password resets, access management etc.
During discussions with the CIO and the VP of IT, it had already been decided that IT functions that did not add direct strategic value to the business would be commoditized. Hence it made sense for IT to classify such functions and not be on the aggressive or leading edge of technology for them. GAM was classified as one such function. The business, in spite of some complaints about efficiency, was not ready to pay for initiatives that could bring further improvement of services.

Some of the tasks performed under the GAM category include:-

  • User account management including provisioning and de-provisioning on various IT assets including applications and infrastructure
  • User password management including reset of passwords, unlocking accounts locked due to bad username/password attempts.
  • Managing access of users in various applications
  • Generating reports of users with access to critical applications covered under audit scope for compliance & regulatory requirements like SOX etc.
  • Helpdesk services – answering calls from users related to IT issues etc and providing first level of support

In order to reduce cost of operations, the organization explored various options including:-

  • Outsourcing to an IT services provider
  • Off-shoring to low cost geography
  • Automation using Identity and Access Management solutions

Outsourcing the GAM function to an "on-site" IT services provider (who would perform the same activities from the same facility) would not have yielded the benefits the organization was looking for. The IT team also deliberated between two options:-

  1. Automation first and off-shoring the task of maintenance
  2. Off-shoring first, realizing cost savings and funding the automation initiative

Various Identity and Access Management solutions were also evaluated for their technical capabilities and financials. It was desired that any such solution be self funding and not require additional funds from the management. However, in 2005, all solutions proved to be too costly.
Hence the organization decided to follow a two-phased strategy:-

  1. Off-shore the GAM activities until the cost of an automated Identity and Access Management solution became affordable.
  2. Once the Identity and Access Management solution became affordable, analyze the available solutions and engage with the right vendor and system integrator to implement it.

Also, the off-shoring business case provided immediate cost savings. A back-of-the-envelope calculation is shown below:-
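The original table is not reproduced here, but the arithmetic below, in Python with purely hypothetical cost rates, illustrates the shape of such a calculation (only the GAM headcount of 8 comes from the post):

    # Illustrative off-shoring saving; all rates are assumed, not actuals.
    ONSITE_COST_PER_FTE = 75_000     # assumed annual fully-loaded cost, on-site
    OFFSHORE_COST_PER_FTE = 30_000   # assumed annual fully-loaded cost, offshore
    GAM_HEADCOUNT = 8                # from the post: 8 of 25 dedicated to GAM

    annual_saving = GAM_HEADCOUNT * (ONSITE_COST_PER_FTE - OFFSHORE_COST_PER_FTE)
    print(f"illustrative annual saving: ${annual_saving:,}")   # $360,000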

Towards the end of 2007, the team re-examined the available automation solutions and started negotiations with leading vendors of Identity and Access Management solutions. The key observations were:-

  1. The Identity and Access Management market had undergone a lot of consolidation and players had strengthened their propositions by making the right acquisitions and partnerships.
  2. The prices of solutions in the market had come down drastically and the vendors were ready to give good discounts.
  3. Good system integrators were available, with good exposure to similar implementations, thus reducing the technology risk for the organization.

The team was able to negotiate over a 60% discount with a leading provider of Identity and Access Management solutions, and asked the vendor to recommend an apt system integrator for the rollout. A rough analysis of the automation business case is given below:-
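Again, the original table is not reproduced; the hypothetical figures below only illustrate how such an automation case is typically framed, with the discounted license and integration cost recovered through reduced manual GAM effort (only the 60% discount comes from the post):

    # Illustrative automation business case; all amounts are assumed.
    LICENSE_LIST_PRICE = 500_000
    DISCOUNT = 0.60                          # from the post: over 60% negotiated
    INTEGRATION_COST = 150_000               # assumed one-time SI fee
    ANNUAL_MANUAL_COST_AVOIDED = 200_000     # assumed saving from automation

    capex = LICENSE_LIST_PRICE * (1 - DISCOUNT) + INTEGRATION_COST
    payback_years = capex / ANNUAL_MANUAL_COST_AVOIDED
    print(f"illustrative payback: {payback_years:.1f} years")   # ~1.8 years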

Discussion with Director – Security Operations (Pharma/Healthcare) – 2

A few months ago, I met the director of security operations of a large pharma enterprise with a presence on 4 continents and over 50,000 users. The enterprise had 4 large data centers and a centralized IT function. However, within the IT organization the challenges were immense, with 4 regional teams, each having its own set of taxonomies, processes and 'ways of working'.

During the discussion, the director expressed a desire to have security operations with, in his words, 'dial tone reliability'.

When you pick up a handset, you expect to hear a dial tone. It's a given. It's pretty elementary, right? Today, if you pick up a handset and don't hear a dial tone, you will be surprised. Similarly, not only in information security but also in IT operations, more and more executive managers are wishing for, or rather demanding, 'dial tone reliability'.

In the context of information security operations, how do we realize this desire?

In this post, I am putting down a few thoughts that we shared with the director, some of which were then implemented to achieve this goal. I am leaving security strategy & architecture out for the time being, though I must acknowledge that it has to be a top-down approach involving strategy, architecture and operations.

  1. It has to start with knowledge of what you have: both IT assets and control enforcement points. Basically, what you don’t know, you can’t protect (back to basics, huh!). Asset inventory & management, anybody? 🙂
  2. Track the vulnerability & threat landscape to identify those relevant to the enterprise’s IT environment. It is important to be able to identify vulnerabilities and threats that can potentially affect an organization’s IT environment, and to take the necessary steps to prevent, detect, or contain & recover from any incidents arising out of the realization of the risks these threats and vulnerabilities pose.
  3. Track how many controls are actually working and ensure 100% uptime. In large organizations, I have noticed this is one of the areas that requires a lot of oversight, especially when the number of deployed controls is large. For this organization, it was a challenge to track how many of the hundreds of deployed IDS were working at any point in time to ensure effective monitoring of network segments; the same was true of firewalls, HIDS and antivirus controls (a minimal uptime-tracking sketch follows this list).
  4. The risk treatment plan must drive the control requirements and their subsequent enforcement. This ensures that the IT security spend is aligned to ‘optimum’ management of risk.
  5. Implement a process to identify anything that is plugged into the network and ensure that only desired, validated endpoints are allowed to connect. A network access control framework can be used to ensure only validated systems are allowed on the network.
  6. For any system that connects to the network, ensure that both system security and user activity events are logged and analyzed for unauthorized or malicious activities and access control violations.
  7. Define and adopt a robust incident response process to respond to unauthorized activities and malicious events. This process has to be globally defined and implemented throughout the enterprise, so that if there is an incident, one is assured that the NA team will respond using exactly the same process as the EMEA team. This will also require other teams to pitch in, such as network teams, server management teams, etc.
  8. Implement metrics to track the effectiveness of the enforced controls, and ensure appropriate measurement standards are applied throughout the enterprise.
  9. Have real-time visibility into security operations: the ability to track incidents and malicious activities, and the responses being taken to mitigate or contain them as and when they are detected. Track change requests and the SLAs for responding to such requests. If possible, also track financial parameters that can be used to measure the effectiveness of the controls quantitatively; however, one must not ignore qualitative metrics at the same time.
  10. Measure and track residual risk.
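For item 3, here is a minimal uptime-tracking sketch. It treats a successful TCP connect to a management port as a heartbeat; the sensor names, hostnames and ports are hypothetical, and a real deployment would query each product’s own health or management API instead.

```python
import socket

# Control enforcement points to watch; hostnames and ports are hypothetical.
SENSORS = {
    "ids-emea-01": ("ids-emea-01.example.com", 443),
    "ids-na-01": ("ids-na-01.example.com", 443),
    "fw-apac-01": ("fw-apac-01.example.com", 22),
}

def is_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Treat a successful TCP connect to the management port as a heartbeat."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

status = {name: is_alive(host, port) for name, (host, port) in SENSORS.items()}
for name, alive in sorted(status.items()):
    print(f"{name}: {'UP' if alive else 'DOWN - investigate'}")
print(f"Control availability: {100 * sum(status.values()) / len(status):.0f}%")
```

Run on a short interval, a check like this surfaces dead sensors quickly instead of discovering them during an incident.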

These measures were implemented to gain a degree of assurance that any device connecting to the network at any given point in time would be validated and allowed on only if it conformed to enterprise standards and policies, and that all user and system activities were logged and analyzed in real or near-real time for malicious activity. When a new vulnerability or threat was detected, the operations team was able to respond with an effective strategy to prevent, detect, or recover from potential incidents as far as possible.

An important aspect of implementing some of the above-mentioned areas was ensuring that the processes around each were global in nature and that all teams understood and followed one way of working. While the teams used global processes, they still retained the ability to leverage local knowledge of their IT environments to effectively control and maintain a secure operating environment for their business operations.

Realigning Security Operations

During the course of my engagements with various customers, I am noticing an interesting trend in the way their security functions are evolving. This trend is fairly common in large organizations, but recently even mid-size organizations seem to be following it. About 3 – 4 years ago, the information security team in an enterprise handled almost all aspects of securing the enterprise IT environment. Some of the tasks the security team was responsible for were:

  • defining corporate security policies
  • performing IT risk assessment
  • tracking threat and vulnerability landscape for new threat vectors and vulnerabilities
  • identifying security controls required to mitigate the threats or close the vulnerabilities
  • managing and maintaining security controls like IDS, firewalls, anti-virus, URL filtering etc
  • monitoring malicious activities on the network/system security elements
  • incident response
  • in some cases also working on BCP/DR initiatives.

In the recent past, I have noticed a change in the way security functions are being organized and their work areas or job descriptions defined.

Looking at a few analyst reports, security budgets have more or less remained range-bound between 7 – 9% of overall IT spend over the past two years, with an exception in 2004 – 2005 for some verticals due to the SOX deadline. Of the overall IT security spend, about 40 – 45% goes to products and solutions.

In the dynamic era of globalization, business needs keep changing in the face of new business initiatives and service rollouts. Such initiatives require the involvement of the security teams to identify and formulate a risk management strategy for them. At the same time, new and more complex threats appear on the horizon (for more details on new threats, one can refer to the SANS or CERT websites). Thus, the security teams seldom have time to focus on more strategic initiatives and risk management functions.

In the discussions I have had with some CIOs and CISOs, some interesting points came out. There is a desire at the senior management level to shift tactical and operational responsibilities to the other IT teams. Management now wants the security teams to focus on strategic tasks like risk management and program management (to keep a check on how various teams execute their newly acquired security operational responsibilities). However, there is much resistance to this change at the level of the security engineers, who are reluctant to give up their controls and move to a more strategic role. I am not sure how long they can hold out against this shift in responsibilities, though.

At a tactical level, I am noticing the transition of the following responsibilities:

  • the systems and network teams are now also responsible for ensuring that newly provisioned servers and routers are built securely, rather than having security features provisioned as an afterthought. the systems teams ensure that the infrastructure is built as per corporate baseline security guidelines and standards; the same is the case for desktops (a minimal baseline-check sketch follows this list).
  • the security teams are now responsible for developing and updating the corporate security baseline standards for various technologies.
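To make the baseline idea concrete, here is a minimal sketch of a compliance check, assuming a (hypothetical) corporate baseline expressed as required sshd_config settings. A real rollout would use a hardening or compliance tool; this only shows the shape of the check.

```python
# Assumed baseline: required sshd_config settings per corporate standard.
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "Protocol": "2",
}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> list[str]:
    """Return a list of baseline violations found in an sshd_config file."""
    settings = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition(" ")
                settings[key] = value.strip()
    return [
        f"{key} should be '{want}', found '{settings.get(key, '<unset>')}'"
        for key, want in BASELINE.items()
        if settings.get(key) != want
    ]

for violation in audit_sshd():
    print("VIOLATION:", violation)
```

The same pattern extends to router configs and desktop images: the security team owns the BASELINE data, the systems teams run the check at build time.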

At the operational level, I am noticing the transition of responsibilities as follows:-

  • responsibility for monitoring, management and maintenance of the following components is now being transitioned – anti-virus, HIDS, endpoint encryption, two-factor authentication, access control etc.
  • the security team works with the systems team on logical and physical design and vendor selection for the above-mentioned technologies.
  • responsibility for maintaining and managing access control at the network layer using firewalls is now being handed over to the network teams. the only exception I have seen is in the case of Checkpoint firewalls (since they don’t speak the ACL language yet 😉 ).
  • the role of the security team is then to validate change requests for opening certain ports or granting access to subnets etc. (a sketch of such a policy check follows this list).
  • the systems and network teams are also becoming more and more responsible for detecting malicious events and initiating appropriate responses using the incident management process.
  • The security team is responsible for defining the incident management process along with the system and network teams.
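As an illustration of the validation role, here is a hypothetical sketch that checks a requested firewall rule against a simple policy before the change is approved. The forbidden ports, internal ranges and rule format are assumptions for illustration only.

```python
import ipaddress

FORBIDDEN_PORTS = {23, 135, 139, 445}   # never opened, per the assumed policy
SENSITIVE_PORTS = {22, 3389}            # admin access: internal sources only
INTERNAL = [ipaddress.ip_network(n) for n in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def validate_request(src: str, dst_port: int) -> tuple[bool, str]:
    """Return (approved, reason) for a port-opening change request."""
    src_net = ipaddress.ip_network(src, strict=False)
    if dst_port in FORBIDDEN_PORTS:
        return False, f"port {dst_port} is forbidden by corporate policy"
    if dst_port in SENSITIVE_PORTS and not any(
            src_net.subnet_of(net) for net in INTERNAL):
        return False, f"port {dst_port} is only allowed from internal subnets"
    return True, "request conforms to policy"

print(validate_request("192.168.10.0/24", 22))   # approved
print(validate_request("203.0.113.0/24", 3389))  # rejected
```

Codifying the policy this way lets the security team approve routine requests quickly and reserve manual review for the exceptions.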

However, the security engineers are resisting this ‘letting go’ of their traditional responsibilities. I have seen engineers who are very good in their respective domains, such as intrusion analysis or endpoint protection using HIPS technologies, fight tooth and nail to retain their areas of responsibility and resist any attempt by management to move them to more strategic roles. In the end, many of these engineers have been moved to the respective systems and end-user teams so that they can continue their work in those areas.

However, this has introduced a new dimension for the existing IT teams. Traditionally, they have not been accustomed to handling responsibility for building and maintaining the security attributes of the IT infrastructure components they own.

With the transition of tactical and operational responsibilities, there is a skill-set challenge for the IT teams, who are the executors of these tasks. Many organizations are spending money to train the teams, hiring new personnel with the required skill sets, and in some cases moving security engineers who still want to continue working with the technology from the security teams into these IT teams.