The Failure of Tactical Security

Information security within organisations has, to date, largely been driven by reactions to immediate threats: breaches and other incidents. Other drivers come from managers or executives who are beginning to understand the importance of information security but who react only tactically to perceived and / or "marketed" threats. The problem with these reactions is the net negative operational impact they have on the organisation's information asset environment: individual issues get targeted rather than broad-spectrum mitigations that offer a better return on investment.

Indeed, the greatest obstacle to strategic security outcomes within organisations is often the executives and senior managers who have developed enthusiasms as a result of specific security incidents, or as victims of well-targeted marketing campaigns.

Information Technology solutions deployed within organisations have been notorious for not being driven by genuine operational needs. They are commonly deployed because a manager or executive has been swayed by something someone else is using, or something they were "sold". This leads to a highly fractured solution set that creates performance and usability issues for the rest of the "non-IT" business. Where this is the case for general IT, it is orders of magnitude worse for IT security solutions.

Any member of an organisation that is more than 10 years old need only look as far as their workstation IP address to see the legacy of this ad-hoc approach. That address is a relic of the organisation deciding, some time ago, that it "needed" to connect two workstations to a shared printer, yet it now causes numerous issues when attempting virtual private networking with business partners that happen to be using the same IP address ranges, as sketched below.
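As a minimal illustration of that clash, the snippet below checks whether two private address ranges overlap. The specific subnets are assumptions chosen for the example, not drawn from any real organisation.

```python
# Minimal sketch: detecting the address-range clash described above.
# The subnets are illustrative assumptions, not taken from a real organisation.
from ipaddress import ip_network

our_lan = ip_network("192.168.1.0/24")      # the legacy "two workstations and a printer" range
partner_lan = ip_network("192.168.1.0/24")  # a partner who made the same ad-hoc choice years ago

if our_lan.overlaps(partner_lan):
    # A site-to-site VPN between these networks will need NAT or readdressing.
    print(f"Address clash: {our_lan} overlaps {partner_lan}; VPN routing becomes ambiguous")
```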

IT security solutions and / or services are normally selected due to a legitimate business need. That need is most often determined not by the business as a strategic decision, but by well-intentioned managers acting in the best interests of the organisation, commonly with little understanding of its strategic requirements. This immediately raises the question: why don't the well-intentioned managers understand the strategic requirements of the organisation?

When a manager, or staff member, seeks strategic direction for the definition of any requirements there are only two places they can go. The manager can refer to an executive for specific instruction, which merely shifts the ad-hoc decision making to the executive level, or they can seek direction from organisational policy. If the manager is able to locate documented policy that articulates strategic requirements, they are then faced with the prospect of determining what that policy is actually saying. This, in particular, is where strategic direction for security often falls down severely. If the organisation has a security policy at all, it is often poorly written and poorly thought out, and the manager is unable to practically comply with it due to its operational impacts.

Before discussing the common mistakes within security policies that cause a complete breakdown in strategic direction for security, and their consequences, it is important to clearly articulate what a policy is and what a policy is not. A policy, whether for information security or any other matter, is a statement of the requirements that the organisation has. A policy is not a document that articulates how those requirements are to be met; indeed, it should be mechanism agnostic.

A good, real-world example of policy is government legislation, such as road laws. Road laws, as described in government legislation, describe requirements that must be complied with, but do not articulate how those requirements are to be adhered to. For example, road laws (in Australia) mandate that when approaching an intersection with a traffic light any vehicle (car, truck, bus, bicycle, etc.) must stop if the light is showing red and must wait until the light is showing green before proceeding. What the legislation, or policy, does not say is how that law is to be complied with; you may comply in any manner, as long as that manner is itself lawful. Organisational policy should be the same.

This brings us to the first critical error that is commonly made in defining IT security policies, one that renders them practically unimplementable: blending policy, process and procedure.

Mistake #1 – Blending policy, process and procedure

By blending process and / or procedure elements into policy, the organisation either becomes locked into particular solutions or is compelled to deliberately breach the policy requirements.

An example of this can be seen in a security policy recently reviewed by IPSec. The policy document was dealing with the issue of patch management and stated:

Windows patch management tools will be utilised to automatically download the latest Microsoft security patches. The patches will be reviewed and applied as appropriate.

Many IT workers would look at this statement and consider it to be entirely appropriate; however, two questions arise as a result of this requirement:

  1. What if the organisation doesn’t currently use Windows or currently has non-Windows systems?
  2. What if the organisation wanted to start using non-Windows machines in the future?

The consequence of this requirement is that the organisation has no policy for patching non-Windows machines. This means the organisation's IT staff have to make it up as they go, or, as soon as the organisation deploys non-Windows machines (e.g. VMware vSphere virtualisation software), they have to amend the policy, with all that that entails.

Describing within the policy itself how requirements are to be complied with condemns the organisation's staff to ignoring the policy, and / or to continually adjusting the policy to each new environment.

Well-written policy should not be impacted in any way by the mechanisms of compliance. When new solutions or methodologies are introduced it should not be necessary to adjust existing policy, let alone write new policy.

Mistake #2 – Aligning policy to compliance requirements, not operational practices

Many organisations find themselves creating security (or other) policies due to compliance requirements dictated by regulators such as APRA, by industry schemes such as PCI-DSS, or by customers demanding certification against standards such as AS/NZS 27001. When organisations tackle policy under such circumstances their principal motivation is to achieve the requirements dictated to them, with little time available to consider the broader strategic impacts this may have.

When a policy set is aligned closely to a compliance requirement, rather than to operational requirements, it is worth walking through scenarios to determine how successful the policy will be for the organisation. A good example is the average user's Contacts database associated with the organisation's email system.

When role-playing the impact of a prospective policy structure on a user's Contacts database, it is appropriate to consider all of the places the Contacts database is used, and how it gets to each location from which it can be used. A fairly standard (in today's business environment) set of uses for the average person's Contacts database may include email and mobile phone; it may also include a VoIP handset.

The ability to use the Contacts database for sending emails requires that it is available wherever email may be written from. Hence the Contacts database must be available on the staff member's workstation, smart phone, tablet computer, the email server for web-based access, and anywhere else they may type emails.

The ability to use the Contacts database for mobile or VoIP telephony also requires that the Contacts database is located on the staff member's mobile phone, desktop phone and VoIP server.

When policies are aligned to compliance requirements, we find that any use of the Contacts database now requires compliance with multiple policy elements at the same time. We also find that if the author of the compliance requirements changes those requirements, the Contacts database may suddenly have to comply with an entirely new set of requirements bearing little relationship to the existing policies.

When assessing the use of the Contacts database against a commonly available standards-compliant policy set, we find that the user's Contacts database, and the user's use of that Contacts database, must comply with no fewer than 13 policy elements. For example:

  • Telecommuting and Mobile Computer Security Policy
  • External Communications Security Policy
  • Personal Computer Security Policy
  • Electronic Mail Policy
  • Computer Network Security Policy
  • Internet Security Policy for User
  • Intranet Security Policy
  • Privacy Policy - Stringent
  • Privacy Policy - Lenient
  • Data Classification Policy
  • External Party Information Disclosure Policy
  • Information Ownership Policy
  • Email Security Including Phishing
  • And many others

One widely used security policy template set states that it has "1400 pre-written information security policies covering over 200 security topics". Even if only 10% of these had to be complied with, that would still be 140 policies to comply with for the Contacts database alone.

The simple fact is that organisations do not exist or operate to comply with particular regulatory requirements. Organisations exist to deliver services or goods to their customers, and from time to time they have compliance requirements to achieve. This means that policy sets should be closely aligned to the operational practices of the organisation. They should also be universally applicable, rather than chasing a particular standard that is dictated by others.

When the Contacts database example is applied to a well-structured security policy set, the user's Contacts database, and the user's use of it, need only comply with policy elements based on the action being performed. For example, the following policies would require compliance:

  • Asset Acquisition - When the user is acquiring a new Contacts database or a new contact to enter into the database.
  • Asset Disposal - When the user is deleting an old Contacts database or deleting a contact from the database.
  • Exchange and Transportation - When the user is transporting the Contacts database. Whether it is being transported on a mobile phone, a laptop computer or a tablet computer, the requirements are the same.
  • Change Control - When the user is changing the Contacts database. The requirements for change control on a Contacts database would be kept very low by ensuring an appropriate classification of the level of importance that specific Contacts database has to the organisation.
  • Access Control - When the user is accessing the Contacts database. As with change control the requirements are set according to how important that individual Contacts database is. Also, as with the exchange and transportation requirements, requirements for access control are consistent no matter where the Contacts database is used.
  • Storage - Impacting any device upon which the Contacts database is stored, in the same manner for all locations.

Rather than having to comply with potentially hundreds of policy elements for a single organisational asset, a well-structured policy set ensures a minimum number of consistent policies that are universally applicable regardless of asset, technology, medium, operational use or compliance requirement, and that remain consistent when any of those change.

Whilst a well-written policy set must be able to demonstrate compliance with governance, regulatory or standards requirements, it should not be structured so closely around those requirements that it impedes the ability of the organisation's staff to comply with the policy.

Mistake #3 – Cut-and-Pasting standards elements

Achieving the requirements of standards through policy requires that the policy author understands the role of standards and how to use them appropriately. Standards are guidance documents intended to create a common language amongst industry professionals. They allow complying organisations to achieve a pre-defined level of service or product specification and / or design, ensuring that products, services and systems are safe, reliable and perform consistently in the manner the organisation intends. Standards are not items of legislation mandated by a government; they are freely interpretable, allowing the implementing organisation to achieve compliance in any manner it sees fit.

When a policy author does not understand the role standards play, and is being pressured to rapidly develop policies that comply with those standards, they can be tempted to use the parameters defined by the standard directly. To create the required organisational policy the tasked staff member runs through the list of standards requirements and derives a policy line-item per standard element; for example:

AS/NZS 17799:2006 requirement 11.3.1, "Password use", states:

Control

Users should be required to follow good security practices in the selection and use of passwords.

Implementation guidance

All users should be advised to:

  1. Keep passwords confidential;
  2. Avoid keeping a record (e.g. paper, software file or hand-held device) of passwords, unless this can be stored securely and the method of storing has been approved;
  3. Change passwords whenever there is any indication of possible system or password compromise;
  4. Select quality passwords with sufficient minimum length which are:

    1. Easy to remember;
    2. Not based on anything somebody else could easily guess or obtain using person related information, e.g. names, telephone numbers, and dates of birth, etc.;
    3. Not vulnerable to dictionary attacks (i.e. do not consist of words included in dictionaries);
    4. Free of consecutive identical, all-numeric or all-alphabetical characters;
  5. Change passwords at regular intervals or based on the number of accesses (passwords for privileged accounts should be changed more frequently than normal passwords), and avoid re-using or cycling old passwords;
  6. Change temporary passwords at the first log-on;
  7. Not include passwords in any automated log-on process, e.g. stored in a macro or function key;
  8. Not share individual user passwords;
  9. Not use the same password for business and non-business purposes.

If users need access to multiple services, systems or platforms, and are required to maintain multiple separate passwords, they should be advised that they may use a single, quality password (see d) above)(sic) for all services where the user is assured that a reasonable level of protection has been established for the storage of the password within each service, system or platform.

Other information

Management of the help desk system dealing with lost or forgotten passwords needs (sic) special care as this may also be a means of attack to the password system.

A policy author who cut-and-pastes from the standard to derive the organisation's policies would then produce a policy similar to:

ACME Security Policy for Password Use

ACME users are required to follow good security practices in the selection and use of passwords.

The ACME users shall:

  1. Keep passwords confidential;
  2. Avoid keeping a record (e.g. paper, software file or hand-held device) of passwords, unless this can be stored securely and the method of storing has been approved;
  3. Change passwords whenever there is any indication of possible system or password compromise;
  4. Select quality passwords with sufficient minimum length which are:

    1. Easy to remember;
    2. Not based on anything somebody else could easily guess or obtain using person related information, e.g. names, telephone numbers, and dates of birth, etc.;
    3. Not vulnerable to dictionary attacks (i.e. do not consist of words included in dictionaries);
    4. Free of consecutive identical, all-numeric or all-alphabetical characters;
  5. Change passwords at regular intervals or based on the number of accesses (passwords for privileged accounts should be changed more frequently than normal passwords), and avoid re-using or cycling old passwords;
  6. Change temporary passwords at the first log-on;
  7. Not include passwords in any automated log-on process, e.g. stored in a macro or function key;
  8. Not share individual user passwords;
  9. Not use the same password for business and non-business purposes.

If users need access to multiple services, systems or platforms, and are required to maintain multiple separate passwords, they should be advised that they may use a single, quality password (see d) above)(sic) for all services where the user is assured that a reasonable level of protection has been established for the storage of the password within each service, system or platform.

As can be seen in the above example, the policy author has taken the articulated requirements of the standard, verbatim, and defined them as requirements of the organisation.

The issue with this is that the author has not actually described any specific requirements. In relation to passwords specifically, the author has failed to:

  1. Describe what "good security practices" means;
  2. Describe what "confidential" means;
  3. Describe what "stored securely" means;
  4. Describe how an "indication of possible system or password compromise" is to be detected or managed;
  5. Describe what the "minimum length" for passwords is, and under what circumstances;
  6. Describe under what circumstances password characteristics, such as character combinations, should change (see the sketch after this list).
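To make the gap concrete, here is a minimal sketch of the sort of specific parameters that would need to be written down somewhere (in a supporting standard or procedure, given the policy / procedure separation argued for above). Every value shown is an illustrative assumption, not a recommendation from the original text.

```python
# Minimal sketch: the concrete parameters a verbatim "use good passwords" policy
# never pins down. All values are illustrative assumptions, not recommendations.
PASSWORD_RULES = {
    "min_length": 10,            # what "sufficient minimum length" actually means
    "require_mixed_classes": True,
    "history_depth": 8,          # how many old passwords count as "re-using"
    "max_age_days": 90,          # what "regular intervals" actually means
    "privileged_max_age_days": 30,
}

def password_meets_rules(candidate: str, rules: dict = PASSWORD_RULES) -> bool:
    """Check the measurable parts of the rules above; the organisational parts
    (storage approval, compromise handling) still need written definitions."""
    if len(candidate) < rules["min_length"]:
        return False
    if rules["require_mixed_classes"]:
        classes = [any(c.islower() for c in candidate),
                   any(c.isupper() for c in candidate),
                   any(c.isdigit() for c in candidate)]
        if sum(classes) < 2:
            return False
    return True

print(password_meets_rules("correct horse 42"))  # True under these assumed rules
```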

Additionally, by adopting the standard's requirements verbatim the author has tied procedural elements into the policy set, for example:

  1. Changing passwords at the first log-on;
  2. Requiring passwords to not be used in macro or function keys.

Directly using the standard to define the policy is a very common mistake by inexperienced policy authors, and it leaves the organisation's staff with a confusing set of policy requirements. With such a policy they either cannot comply, or must mandate compliance requirements that materially impact their ability to operate efficiently within the organisation.

Mistake #4 – Not considering the target audience

IT Security is most often left to the organisation’s IT department. What most organisations fail to consider is that the assets being protected by the IT security policy do not belong exclusively to the IT department; they in fact belong to the whole organisation. This means that it should not be an IT Security Policy, but should be an Organisation Security Policy. Such a policy impacts all assets, environments, staff, systems and processes within the organisation regardless of their location or form (electronic or physical).

When IT staff are allowed, in isolation, to create policy for the organisation they do so from their perspective. What this most often means is that the authors of the policy have little understanding of the broader organisation, and what impact their policies could have on the efficient and profitable operation of the organisation.

Further, when IT personnel develop policy they often do so in a manner that is focused only on their own environment. IT staff authoring policies will define requirements for IT personnel to comply with in delivering IT solutions to the business. As soon as an attempt is made to apply those policies to an individual, environment, asset, system or process that is not IT oriented (e.g. the reception desk), the policy very commonly cannot be practically applied and must either be rewritten or ignored.

The largest single inhibitor of good policy coming from IT departments, however, is their inability to understand how people behave in the face of policies, and how to craft policy in a manner that encourages broader staff engagement and compliance. In most cases where an IT department has been tasked to develop an IT security policy, the policy is authored by someone who does not interact (beyond acting as a technical support resource) with others in the organisation outside of IT. This means they have a limited ability to understand how non-IT staff think, react or behave, and often have only negative interactions with non-IT staff.

These mistakes ensure that the vast majority of information security policies in place in organisations today, or produced by third parties on organisations' behalf, contain fundamental design flaws. Those flaws mean that when the policies are published and staff are required to comply with them, the organisation faces significant conflicts, large operational impediments, and broad-spectrum non-compliance from the very top of the organisation to the very bottom.

Senator Conroy increases insecurity

I would like to congratulate Senator Stephen Conroy, Minister for Broadband, Communications and the Digital Economy within the Australian Government, for greatly assisting Australian businesses to become less secure.

Yes, you're reading that correctly: the minister for the Digital Economy has made Australian businesses dramatically less secure. He has managed to achieve this as a by-product of his highly publicised efforts to introduce censorship to protect Australians from allegedly illegal content on the world wide web.

In the days of prohibition within the United States of America, when the U.S. government banned the production, transportation, sale and consumption of alcohol, large sections of the community ignored the law and took their production and consumption of alcohol underground. The net result was a dramatic rise in bootlegging, which assisted the rapid and dramatic growth of organised crime syndicates. After 13 years the U.S. government declared prohibition a failed experiment and once again allowed the licensed production and consumption of alcohol.

Just as in those days of prohibition, when large sections of an otherwise lawful community resorted to breaking the law, Stephen Conroy's efforts to censor Australia's access to the Internet are already achieving the same outcome, even before any legislation has been passed.

A number of organisations in Australia have declared they will provide, or are considering providing, instruction on how to bypass the government's filter so that people can make their own choices regarding what they do and don't access from the Internet. These organisations range from Electronic Frontiers Australia to euthanasia societies, all of whom feel targeted by Mr Conroy's attempts at moral control.

The problem for Australian business is that the techniques used to bypass the Internet filter would be very similar to (in many cases the same as) those used to bypass business security measures.

This means that, as a direct result of Mr Conroy's actions, a significantly larger number of Australian citizens will know how to bypass their employer's network security, likely resulting in a significant increase in corporate security breaches by employees.

Links
http://www.arnnet.com.au/article/342401/efa_mulls_publishing_filter_bypass_instruction_guide/
http://www.smh.com.au/technology/technology-news/elderly-learn-to-beat-euthanasia-blacklist-20100405-rn6i.html
http://www.abc.net.au/news/stories/2010/04/06/2865643.htm

Finding your way in the clouds

In August 2009 I wrote of my concerns regarding the drive to place core business functions into the cloud. I wrote that the cloud, whilst attractive on the surface due to potential management savings, presents the potential user with many traps that could end up tying the organisation to a solution it cannot live with long term and may not be able to escape at all, whilst opening the business up to new threats posed by the creation of a common environment.

The original article, IT Experts With Their Heads In The Clouds, can be found here: http://robson.ph/blog/index.php/bensblog/it-experts-with-their-heads-in-the-cloud

Having laid out the case against cloud computing solutions, the reality is that the industry is still moving in this direction. Whilst I remain adamant that any organisation would be foolhardy to rush into such a solution, it must be recognised that many businesses are investigating this option. Consequently it would be helpful if prospective users of cloud solutions were given specific guidance on some of the questions they should be asking their prospective Software as a Service (SaaS) provider before signing on.

Below are the questions you should ask your prospective cloud service provider, and to which you should ensure you have robust answers, prior to commencing the use of their services:

What type of interface is presented to the user?

Initially it is important to understand, from a functional perspective, how users are to access your services and data provided via the cloud service provider's infrastructure. Are they granted some form of thick-client to server access, thin-client to server access, a web based portal, or some other type of user interface?

Each method has pros and cons. A thick-client solution works most like a standard IT environment, and as such would be most familiar to the end user, but it leaves most of the management issues in the hands of the organisation. A thin-client to server solution moves some of the management headaches to the service provider but still requires that the user utilise a pre-defined thin-client system to access the cloud environment. A web portal allows the organisation to move all issues into the cloud and largely not care what the end user is using, as they only need a compatible web browser; however, this solution removes all control from the organisation and places total dependency on the systems of the service provider to maintain a quality outcome.

How is communication between the user and the cloud service provider to be secured?

When you place all of your organisation's golden eggs (accounting packages, CRM solutions, intellectual property) into the cloud you are removing them from your physical environment. This means that any use of these core business systems requires you to traverse the public Internet, thereby transmitting your organisation's confidential information across an inherently insecure medium.

Consequently it is important to ask your prospective cloud service provider how communications are to be secured between your users and their systems in the cloud.
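One simple check you can run yourself is to look at the transport encryption the provider actually presents. The sketch below is a minimal example using Python's standard library and a hypothetical provider hostname; it only confirms that TLS is offered and what certificate is served, and is no substitute for the provider's own answers about their controls.

```python
# Minimal sketch: inspect the TLS certificate a provider's endpoint presents.
# The hostname is a hypothetical placeholder, not a real provider.
import socket
import ssl

HOST = "portal.example-cloud-provider.com"

context = ssl.create_default_context()          # validates the chain and hostname
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())
        cert = tls.getpeercert()
        print("Certificate subject:", cert.get("subject"))
        print("Valid until:", cert.get("notAfter"))
```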

From what user systems can the cloud be accessed?

With such a variety of user systems now available (PC Workstations, Notebooks, Thin-Client Consoles, PDAs, mobile telephones, etc...) it is important to discover from the service provider what methods of access to cloud based services are to be provided to the user. Are the provided methods compatible with the organisation's existing infrastructure or will new user systems and/or software need to be deployed?

How are user systems evaluated for security issues?

When your organisation maintains its own service infrastructure it does so in full control of the environment in which those services are run and accessed. When the organisation places its data and associated processes into the cloud, however, the temptation exists to treat the user workstation as a thin client, thereby reducing the administrative overhead for the business. This relaxation of workstation management significantly increases the risk of those workstations being compromised, turning them into relays for attacks against your in-the-cloud solutions.

When discussing the provision of cloud based services, and having the service provider explain the communication techniques, it is important to ask them how they verify that the workstation connecting to their environment has not been compromised. This is especially the case if their solution allows access from mobile devices, which may or may not have anti-virus / anti-spyware solutions running in various states of maintenance. Will the cloud provider's systems verify that the user's workstation cannot introduce compromises into the cloud based services?

How are users authenticated?

With important organisational assets to be held in the cloud, you need to know that users accessing the cloud based services are permitted to access those services and are who they say they are. Consequently it is very important to ask the cloud service provider how access to the environment is controlled.

In asking the service provider how access is controlled you should be asking what types of authentication are used. Is it just a username and password combination via a simple web interface, something that is very likely to become the victim of a brute force password cracking attempt at some stage, or is the authentication method more sophisticated, restricting access to fixed locations, at fixed times, with physical tokens or other methods of identifying the user?
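One example of a "more sophisticated" factor a provider might offer is a time-based one-time password generated by a token or phone application. The sketch below shows how such codes are derived (in the RFC 6238 / RFC 4226 style) using only Python's standard library; the secret is an illustrative placeholder, and this is not a description of any particular provider's mechanism.

```python
# Minimal sketch of a time-based one-time password (TOTP), one example of a
# stronger factor than a bare username and password. Placeholder secret only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // period)                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code for this secret
```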

What level of guaranteed connectivity can be achieved?

Because the point of a cloud based solution is to put core organisational information assets into the Internet cloud, Internet connectivity becomes critical to your organisation's performance. An Internet outage will mean that your organisation is completely cut off from its own information assets, thereby shutting down significant aspects of its activities.

To combat this issue cloud service providers will detail, usually in their brochures, a guaranteed uptime, for example 99% network availability. This, however, does not cover the whole picture and is also a slightly misleading way to represent the guaranteed availability.

99% uptime equates to a permitted outage of 14.4 minutes per day. However, this is only the permitted outage at their end of the Internet connection; there is still your end of the connection, and more. If you then speak to your own organisation's ISP and, if you can convince them to give you an SLA, a 99% uptime from your ISP will allow them up to 14.4 minutes of outage per day. This means that across the two ends of your connectivity you could have as much as 28.8 minutes of outage per day with no recourse, and this says nothing about guaranteed communication between your ISP and the cloud service provider's ISP(s).
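For the sceptical reader, the arithmetic behind those figures is spelled out below; the 99% values are the illustrative SLA numbers used above, not quotes from any provider.

```python
# Back-of-envelope check of the availability figures above. Values are illustrative.
MINUTES_PER_DAY = 24 * 60

provider_sla = 0.99   # the provider's advertised availability
isp_sla = 0.99        # your own ISP's (hypothetical) SLA

print("Permitted outage, provider end:", round((1 - provider_sla) * MINUTES_PER_DAY, 1), "min/day")  # 14.4
print("Permitted outage, your ISP end:", round((1 - isp_sla) * MINUTES_PER_DAY, 1), "min/day")       # 14.4
print("Worst case if the outages never overlap:",
      round(((1 - provider_sla) + (1 - isp_sla)) * MINUTES_PER_DAY, 1), "min/day")                    # 28.8

# Treating the two links as independent, the combined availability is the product:
combined = provider_sla * isp_sla
print("Combined availability:", round(combined * 100, 2), "%")                                        # 98.01
print("Expected combined outage:", round((1 - combined) * MINUTES_PER_DAY, 1), "min/day")             # 28.7
```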

So, in evaluating the cloud service provider's solution, your organisation should seriously consider the likelihood of an outage during business hours and the impact such an outage may have on your activities.

How are your organisation's data and associated processes isolated from other organisations' data and processes?

If you place your organisation's information processes and associated data into a cloud service provider's environment you are placing them onto shared infrastructure that could have any number of other users operating on it. This raises two significant questions: how is your data secured from other users, and how are you guaranteed the resources required to deliver your requirements at the performance level you require?

When discussing the prospect of migrating your internal business functions to the cloud service provider's infrastructure you should ask them very specifically how they secure your data and processes from other users, and how they validate that those security methodologies are in fact working. You should also ask by what method they ensure that one user of their environment cannot consume so much of the shared resources that other users' performance starts to degrade, and how they validate that.

How are your organisation's data and associated processes isolated from the service provider's own administrators?

Whilst the service provider may have a very good answer for securing your data and processes from other users, there remains the question of protecting your information assets from the service provider's own systems administrators.

Most systems administrators hold administrative (root) privileges and, under normal circumstances, for example within your own internal IT environment, would have full, uncontrolled access to all IT based assets of the business (not a good thing, but at least they're your employees). This could very well be true for the service provider's administrators, so it is important for your organisation to decide how comfortable it is with the service provider's staff having full visibility of, and control over, your data, and, if that is not acceptable, to ask how the service provider prevents it.

How is the organisation's data protected against a subpoena issued against another customer of the service provider?

One of the often overlooked aspects of cloud based services is what happens when another user of the cloud environment has their records seized by the courts.

Ordinarily, when an organisation has its records seized by the courts, a legal team seizes the physical IT assets of the business and has a forensic analyst go through its storage systems looking for information of interest to the court case. What is unknown at this stage is how a court will treat a shared storage environment, such as that operated by a cloud service provider.

As a result, when discussing putting systems into the cloud it is important to ask the service provider how they would protect your data from being seized in a court action taken against one of their other customers, to prevent your organisation's confidential data being collected and published as collateral damage in someone else's fight.

What access does your organisation have to data upon the termination of the service contract?

At some point in time, should you decide to use the cloud service provider, your organisation will need to terminate the contract. The issue, however, is that all of your company's data for the last few years is now housed in the service provider's environment, and they may be none too pleased about losing you as a customer. So what happens now?

It is important to ask your prospective cloud based service provider, on day one, how you gain access to your data for your own use upon termination of the contract. How will they provide access to that data so you can continue your business uninterrupted?

Also, when your contract ends, what will they do with the data they previously hosted for you? Do they destroy the data and, if so, how do they destroy it? What about data held in their backups: how is this handled after you cease to have a contractual relationship with them?

What access does your organisation have to the data whilst in contract?

The reality of business relationships is that they don't always go smoothly. Issues occur and disputes happen. So what happens in the event of a contract or accounting dispute: will access to the organisation's data and associated processes be denied?

As a potential safeguard against such a situation, what level of access does your organisation have to the data itself for your own backups, so that in the event of such a dispute your business doesn't get shut down until it is resolved?

What is the financial position of the cloud service provider?

Lastly, and very importantly, ask the service provider to convince you of their financial robustness. The last thing you want is to become dependent on their services being available whenever and wherever you need them, only to come in one morning and find them in liquidation, their services shut down and your data locked away and inaccessible.

If you can get a good answer to all of these questions (and possibly more) then you may be dealing with a service provider who can meet your needs. If, however, they baulk at any of these issues, you should probably walk away.

IT Experts With Their Heads In The Clouds - Updated 13/08/2009

"To have one's head in the clouds" is a saying that means one is prone to having fantastic or ridiculous dreams, to be thinking impractically, to be prone to day dreaming and to be disconnected from reality. Whilst this is an old saying that has been used in many forms, and in many circumstances, it has rarely been more apt than when referring to the delusion of Cloud Computing.

Cloud Computing, as confirmed by the authors and editors of Wikipedia (for all the credibility one may or may not ascribe to that source), is intended to provide its users with a highly scalable computing solution that sees applications, processing and storage of information conducted outside of an organisation's own computer infrastructure, within large data facilities run by service providers. The term "Cloud" refers to the Internet as the place where such service providers exist and through which they deliver computational and storage resources to their customers.

In short, Cloud Computing is very much like the old-school mainframe, except that the mainframe computer itself is housed outside of the customer organisation's own network and is accessed via an Internet link. In the parlance of 2009 one might refer to Cloud Computing as being iMainframe 2.0.

The value proposition offered by service providers wishing to cash in on the Cloud Computing concept is that organisations that cannot afford their own computing infrastructure, do not wish to run it, or do not have the required knowledge to run it can contract with a Cloud Computing service provider to access a centralised computing facility, with all of the applications the customer needs, via the Internet, using thin-client technologies such as Citrix at their local desktop. However, there are some key issues, which these service providers will either not address or will flippantly refer to as the customer's problem, that potential users of Cloud Computing should take into serious consideration.

The first key issue is the customer's ability to access their operational information, such as accounting data, whenever and however they want. The second key issue is how they ensure that their corporate confidential data, including the data of their clients, remains private when someone else has 100% control of that data. The third key issue is that the customer is beholden to the account management practices of the Cloud Computing service provider.

As a company director I know that I need to be able to access the information systems of my organisation 24 hours a day, 7 days a week. I need to know that in a worst case scenario I can go to my office and access systems, not using a network if necessary, to get the information I need to service my customers, and to conduct the day-to-day activities of my business. The inability to access my company's information for a few minutes is annoying and inconvenient, for a few hours is gravely disturbing, and for a day is damaging beyond calculation. Consequently my organisation has put in robust systems of information storage and access that ensure that when I need data, the data I need is available to me.

However, if I were a customer of the Cloud Computing model, all of my information, and the systems that process it and make it available to me, would be housed outside of the information environment that I control. The only thing I would have as an assurance of access is a service level agreement (SLA) with my Cloud Computing service provider. Whilst this SLA may be quite robust and well specified, no Cloud Computing service provider will provide an SLA contract that covers communication paths between the edge of my network and the edge of their centralised computing infrastructure.

Current Internet communication relies heavily on ADSL services delivered over legacy telecommunications infrastructure laid some decades ago, and there is no ISP in Australia that will provide you with an SLA for your ADSL link. With the majority of SME businesses utilising ADSL technologies for their Internet link, most SMEs who use a Cloud Computing service would be doing so without a guaranteed communication path between themselves and the location housing all of their information processing systems. Given the unreliable nature of the Internet and the non-guaranteed nature of ADSL links, this poses a significant risk to businesses using Cloud Computing.

When quizzed about such risks, Cloud Computing service providers will most often say that connectivity is the responsibility of the customer and that, if customers are worried about guaranteed access, they should put in additional Internet links, citing that Internet links are cheap these days. They'd be right, up to a point. Internet links are cheap, but how many SME businesses, likely to use Cloud Computing precisely because they lack the money for their own infrastructure or the knowledge to run it well, would have the extra money for an extra link yet not the money for the infrastructure to manage that link, or the knowledge to diagnose that their communication issues are due to the principal link failing and to switch over to the backup Internet link? This leaves the customer with the conundrum of having additional Internet links, per the service provider's recommendation, but no ability to use the links redundantly. Additionally, I'd suggest that if a company can afford the extra $100 per month it might cost for an extra Internet link, it can probably afford an extra server in its own network to provide the computational resources it needs, as a basic server can be purchased for less than $3,000 (and $100 p.m. over 3 years, a standard depreciation period, is more than $3,000).
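The cost comparison at the end of that paragraph is easy to verify; the figures below are the article's own illustrative numbers, not vendor quotes.

```python
# Back-of-envelope check of the link-versus-server comparison above.
# $100/month and $3,000 are the article's illustrative figures.
link_cost_per_month = 100
depreciation_years = 3
server_cost = 3000

extra_link_total = link_cost_per_month * 12 * depreciation_years
print(f"Second Internet link over {depreciation_years} years: ${extra_link_total}")  # $3600
print(f"Basic in-house server (one-off): ${server_cost}")
print("The extra link costs more than the server:", extra_link_total > server_cost)  # True
```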

The net result is that organisations using Cloud Computing are beholden to the whims of their Internet connectivity for their ability to operate their business. Whilst this used to be a problem only for eCommerce organisations, Cloud Computing will make connectivity a principal business risk for all users, including manufacturers, accountants, even fish-and-chip shop owners who might be using a Cloud Computing hosted accounting package.

The second issue I have concerns about is security. When an organisation holds its own data and does all of its information processing within the four walls of its own building, it only needs to worry about illegal access to, or manipulation of, its information by someone who manages to get inside, either by physically entering the building or via an external network link (such as the Internet). The organisation is in full, 100% control of its information and information systems and can make its own decisions about how much or how little security it requires. Organisations that utilise Cloud Computing services have only their SLA contract to tell them how secure they will or won't be, have no tangible control over how those security needs are being met, and have no ability to ensure that what is in the SLA is in fact being delivered. Unfortunately, the first time the organisation finds out the service provider didn't do their job is when the customer's data is stolen, manipulated or otherwise illegally accessed, by which time it's all too late.

Cloud Computing service providers have a number of significant threats to deal with. Not only do they need to deal with the normal background noise posed by script-kiddies, and with attackers who have a specific gripe against one of their customers, but they also need to deal with attackers who see them as a high-value target: to break into the Cloud Computing service provider's network is to gain access to many organisations' confidential data. Finally, Cloud Computing service providers also need to deal with unethical customers who try to use their legally provided access to gain knowledge of competitor customers on the same infrastructure by subverting the internal security systems, or who may deliberately try to degrade the performance of centralised systems to damage a competitor's operations. Keeping all of this in mind, potential customers of Cloud Computing service providers need to realise that these providers almost never design their systems with security at the core of their architecture, and that they at most have one or two people on staff with a very limited understanding of information security threats and their mitigations, and no experience in assessing an environment for security threats.

The outcome of this is that customers of Cloud Computing service providers are putting their highly valuable, company confidential, operationally vital information into an environment that is of high value to illegal users, that has many people of varying ethical positions within it, that has likely not been designed from first principles to be secure, and that is maintained by people who have little to no understanding of the intricacies of information security threats and mitigations. One might suggest that this is like putting one's child into a used car manufactured in the former Soviet Union without getting it independently checked for safety, and trusting the used car salesman when he says, "sure, it'll get your family where you need to go safely." The only time you will find out he was not 100% accurate is when that family car crashes into a lamp post and your family (business) is the proverbial road-kill in an accident commission TV advertisement.

The final significant threat to customers of Cloud Computing service providers comes down to the account management behaviour and robustness of the service provider. If an organisation's information and processing systems are wholly contained and operated by a Cloud Computing service provider, that provider can essentially hold the customer hostage to their own information. If there is an account management dispute the service provider can simply cut off access to the customer's own information, putting the customer's business at grave risk of failure. In this circumstance the customer will have little option but to do what the service provider instructs (pay more, pay in advance, etc…), as pursuing the provider through other means will take too long and will kill the customer's business. Additionally, if the Cloud Computing service provider ever had business difficulties of its own, it is not clear how the customer would regain access to their own information in the event of a receivership or of the service provider simply closing its doors. In fact it may not even be clear what right the customer would have to access their information once the service provider's management loses control of its own systems in such a circumstance.

In summary, I consider Cloud Computing to be a farce that has little true value for potential customers, that is deliberately designed by service providers to entrap customers and that introduces significant new security threats to the customer's company confidential information that they would otherwise not have to deal with.

Update - 13th August 2009

Two days ago I attended an evening seminar discussing the management of intellectual property when an employee leaves the organisation. One of the presenters at the seminar was speaking on the issue of forensic analysis for the purposes of recovering lost data or determining what data had been taken.

During the forensic expert's presentation he mentioned that he had recently done some work for a client operating a virtualised environment and that this raised some interesting challenges; however, he identified that he was ultimately able to obtain a snapshot of the host environment and start reverse engineering things from there. At this point an interesting ramification for cloud computing environments struck me.

What if your organisation were using a cloud computing service provider and another of that provider's customers ended up in a lawsuit that required their company data to be presented as evidence? A computer forensics expert could well take a snapshot of the cloud computing service provider's virtualised infrastructure, under subpoena, resulting in your company's data being presented as part of the evidence in a case you are not involved in. This information could then become part of the public record of the case, thereby disclosing your company's confidential data to the general public (including your competitors).

So here we have yet another reason companies should be very wary of Software as a Service (SaaS) / Hardware as a Service (HaaS) / Cloud Computing solution providers.

Achieving a Practical NAC Solution

NAC (Network Admission Control or Network Access Control) has existed for a few years now, yet we still haven't seen widespread adoption of it within the corporate network environment. Despite being pushed by the major network solution vendors, including Cisco & Juniper, as the best way to control access to the networked environment, organisations are yet to dive headlong into the space.

NAC, as a concept, is a very good way to maintain thorough control of who accesses the organisation's networked infrastructure. The ability to require a user to authenticate not just to the workstation but to the edge of the network infrastructure, and to have that authentication passed through every choke point within the network, including switches, routers and internal firewalls, is an excellent way of ensuring that whoever is trying to access a specific network resource is actually permitted to do so. However, the solutions offered by hardware vendors, such as Cisco & Juniper, come with a major impediment to their broad adoption: they require you to use their hardware wherever you want to achieve enforcement.

What this means is that if you have a typical corporate network environment, with a mixture of switching, routing and other networking hardware acquired over time, and you wish to implement a NAC solution from a network vendor, you have to contemplate the cost of replacing your entire network infrastructure in one hit. Most organisations simply cannot afford the cost of this, let alone the time and productivity impact such a transition involves.

To address this issue a new type of NAC solution, one based on software alone, has stepped into the breach. Software NAC involves a central policy server that communicates with an end-point software agent installed on workstations. If the software agent is not present, or the person authenticating to the workstation is not adequately credentialed, then the policy server restricts the user's access to the network. The typical software NAC solution achieves this by acting as the DHCP (Dynamic Host Configuration Protocol) server for the controlled network environment and assigning IP addresses and associated network details (routes, DNS servers, etc.) based on the authenticating user's permissions. This approach, however, presents a problem.

When the security of a network is controlled via DHCP, it relies on the user not knowing how to assign their own network details to gain access to the network. However, most PC users with more than a few years' experience know how to manually assign themselves an IP address. They can also use network sniffing tools (such as Wireshark) to identify available DNS servers and upstream routers: if they manually assign the same IP address as a workstation already in use, they will see reply packets from router ARP queries and DNS server responses as the DHCP-assigned workstation makes queries, and having established these details they can then reassign their IP address to one that is not in use. Ultimately, this means that the level of security achieved with software NAC is only as good as the ignorance of the user base.

In recent years, however, I have had the pleasure of using a software NAC solution that takes a different, if controversial, approach to enforcement. CyberGatekeeper DNAC, by InfoExpress (http://www.infoexpress.com), performs its enforcement using more of a "neighbourhood watch" mechanism based on the MAC (Media Access Control) address details of workstations (and other network devices) attached to the network.

CyberGatekeeper DNAC also works by deploying a software agent onto the workstations; however, enforcement is done by workstation peers within the same network segment. Within a network segment, workstations (or servers) that have the DNAC agent installed automatically negotiate amongst themselves as to who will take on the role of the local enforcer. The agent acting as the local enforcer polls the other networked devices within the segment to determine whether they have the DNAC agent running and whether that agent's system is compliant. If the enforcing agent identifies a networked device that is not running the agent, or whose system is non-compliant (and the device is not in a white-list), the enforcing agent (and this is the controversial bit) utilises MAC address spoofing to force the non-compliant device to route its traffic via the enforcing agent, and then restricts that device's access to the network.

The astute reader might, at this stage, point out that MAC addresses can also be spoofed. Yes they can, but the knowledge required to spoof a MAC address is substantially greater than that needed to manually set an IP address, and the ramifications of spoofing a MAC address are also more dramatic. For example, if an unauthorised user chose to spoof the MAC address of a printer, because it is in the CyberGatekeeper white-list of systems always permitted access, the printer would become unavailable to the network and the problem would quickly be noticed.

Whilst the solution from InfoExpress is not perfect, it is the only solution I have seen being deployed into "normal" companies that is very robust and very affordable, because you don't need to change any of your network infrastructure to run it. It works over any IP based network, and works across VPNs, WiFi and many other sorts of connectivity. It remains more robust than the vast majority of the other software based NAC solutions (including those by anti-virus vendors).

Whilst I am normally very pro-standards, sometimes someone surprises you with a liberal interpretation of a standard (and this is very liberal) that makes you think, "maybe it's good to be a little flexible?" Nice one, InfoExpress.

(Please note that I have been very general in describing how all of these technologies work; more detail can be obtained by making inquiries of your preferred hardware and software NAC vendors.)