
Monthly Archives: August 2012

Selection of VPN Solutions

Over the last decade (and specifically since the revision of the encryption export control regulations), there has been a considerable amount of research, development, and adoption of VPN technologies, used not only by business and government but also in consumer markets. Indeed, protection of data in transit and at rest is a best practice in industry, a requirement in government, and is finding use in a growing number of commercial applications. Given the wide array of available technology, the question of which single type is best doesn’t really need to be asked. Techniques at the application layer such as SSL and SSH, technologies at the network layer such as IPsec, tunneling protocols such as PPTP and L2TP at the link layer, and hardware or software disk encryption for data at rest can all be used together to ensure that data confidentiality and integrity are maintained throughout the data’s lifecycle. Even with this complete stack, these protocol selections only touch on the various options, combinations, and uses of encryption within each layer.
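
As one concrete illustration of the application-layer option, here is a minimal sketch that wraps an ordinary TCP connection in TLS using Python’s standard library; the host, port, and request shown are placeholders for illustration, not a recommendation.

```python
import socket
import ssl

# Application-layer protection of data in transit: wrap a plain TCP socket in TLS.
# "example.com" and the HTTP request below are placeholders.
context = ssl.create_default_context()  # uses the system trust store and verifies hostnames

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())  # negotiated protocol, e.g. "TLSv1.3"
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(200))  # first bytes of the reply, decrypted locally
```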

Given so many options, the selection of which technology to use must, like many other things in IT, be based on the answers to “what are you trying to do?”, “who is going to be using the system?”, and “what are your requirements?”. At the client end, implementation has become nearly trivial. Windows 7, for example, provides native support for IKEv2, PPTP, L2TP/IPsec, and SSTP with a single drop-down selection. Authentication protocols for these can be matched to current infrastructure requirements, and data encryption algorithms can enforce data protection within the established tunnel. With so many options, how a VPN is classified depends on the requirement. Should it be classified by authentication mechanism, by layer of operation, by supported encryption strength, or by the endpoints being connected? The short answer is … it depends on what the VPN intends to accomplish.
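
For what it’s worth, the connection entry created by that drop-down can also be dialled from a script; below is a small sketch using Python’s standard library and the Windows rasdial utility, assuming a pre-configured entry hypothetically named “CorpVPN” and placeholder credentials.

```python
import subprocess

# Sketch only: dial a pre-configured Windows VPN connection entry from a script.
# "CorpVPN", the user name, and the password are placeholders.
result = subprocess.run(
    ["rasdial", "CorpVPN", "alice", "example-password"],
    capture_output=True,
    text=True,
)
print(result.returncode)  # 0 on success
print(result.stdout)      # rasdial's own status output
```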

At its most basic, however, a VPN is designed to provide remote connectivity from one system to another. This connectivity may support remote device access (e.g., a travelling laptop) to the corporate LAN, or interconnect LANs in a MAN/WAN scenario. The purpose of either approach, at its simplest, is to provide secure connectivity between systems and thereby protect the confidentiality and integrity of data in transit. Given the vast array of endpoint devices, software methods, hybrid approaches, and the varying protection levels (and associated costs) each can provide … selection is in the eye of the beholder.

//Levii

 

Posted on August 24, 2012 in Information Technology

 

The Principle of Least Privilege

Introduction

The materials on http://cccure.org, and specifically the guide by Krause & Tipton, certainly supplemented the material by Shon Harris while I was studying for my CISSP. Although I hadn’t seen the page since sometime in 2007, a recent discussion thread re-linked it as a starting point for a discussion on the principle of least privilege.

The Principle

While the principle of least privilege can be briefly summarized and defined, evaluating it in the context of a given access control method is quite a different undertaking. The principle of least privilege can be succinctly defined as “limiting the access of authorized users to data they require to perform their duties” (Conrad et al., 2010, p. 47). All access not required is then denied. This is slightly less restrictive than the need-to-know standard, whose authorizations would be a subset of those offered under the least privilege approach. Both can be considered “deny by default” provisions, where access must be specifically allocated by a person authorized to grant it.
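
A minimal sketch of the deny-by-default idea, with users, resources, and grants invented purely for illustration:

```python
# Explicit grants only; anything not listed is refused (deny by default).
GRANTS = {
    ("alice", "payroll.xlsx"): {"read"},           # only what the duty requires
    ("bob", "payroll.xlsx"): {"read", "write"},
}

def is_allowed(user, resource, action):
    """Allow only actions that were explicitly granted."""
    return action in GRANTS.get((user, resource), set())

print(is_allowed("alice", "payroll.xlsx", "read"))   # True
print(is_allowed("alice", "payroll.xlsx", "write"))  # False: never granted
print(is_allowed("carol", "payroll.xlsx", "read"))   # False: no entry at all
```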

Authorization to grant access is the key delineator between the two primary access control methods, Mandatory Access Control (MAC) and Discretionary Access Control (DAC). MAC restricts this authorization to an administrator of the data; owners and users are unable to provision or grant rights to other users. In the DAC model, however, this privilege is given to data owners and creators, effectively decentralizing the assignment of privilege to content owners.
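
To make that delineation concrete, the sketch below contrasts who may grant access under each model; the class names and identities are illustrative and not taken from any real product.

```python
# Who may grant access? Under MAC only the designated administrator can;
# under DAC the owner/creator of the resource can. All names are made up.
class MacPolicy:
    def __init__(self, admin):
        self.admin = admin
        self.grants = {}

    def grant(self, granted_by, user, resource, action):
        if granted_by != self.admin:                    # owners and users cannot delegate rights
            raise PermissionError("only the administrator may grant access")
        self.grants.setdefault((user, resource), set()).add(action)


class DacPolicy:
    def __init__(self):
        self.owners = {}                                # resource -> owner/creator
        self.grants = {}

    def grant(self, granted_by, user, resource, action):
        if self.owners.get(resource) != granted_by:     # grant authority rests with the owner
            raise PermissionError("only the resource owner may grant access")
        self.grants.setdefault((user, resource), set()).add(action)


mac = MacPolicy(admin="secadmin")
mac.grant("secadmin", "alice", "design.doc", "read")    # allowed
# mac.grant("alice", "bob", "design.doc", "read")       # would raise PermissionError
```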

Not well covered in the linked article by Krause & Tipton is one additional model: Non-Discretionary Access Control (NDAC). The access control methods typically thought of when discussing NDAC are Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC). While many will argue that NDAC belongs as a subcategory of MAC, it exists as a separate entity. Consider the following (a small RBAC sketch follows the quotations below):

  • “…RBAC is sometimes described as a form of MAC in the sense that users are unavoidably constrained by and have no influence over the enforcement of the organization’s protection policies. However, RBAC is different from TCSEC (Orange Book) MAC. According to NIST, ‘RBAC is a separate and distinct model from MAC and DAC.’ This is a frequently confused (and argued) point: non-discretionary access control is not the same as MAC.” (Conrad et al., 2010)
  • The NIST source cited by Conrad has moved, but is available here: http://csrc.nist.gov/groups/SNS/rbac/
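
As promised above, here is a toy role-based sketch loosely in the spirit of the NIST RBAC model; all role, user, and resource names are invented for illustration.

```python
# Permissions attach to roles; users obtain them only through role membership.
ROLE_PERMISSIONS = {
    "hr_clerk":   {("employee_record", "read")},
    "hr_manager": {("employee_record", "read"), ("employee_record", "update")},
}
USER_ROLES = {
    "alice": {"hr_clerk"},
    "bob":   {"hr_manager"},
}

def rbac_allowed(user, resource, action):
    """Allow an action only if some role held by the user carries the permission."""
    return any((resource, action) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, ()))

print(rbac_allowed("alice", "employee_record", "update"))  # False
print(rbac_allowed("bob", "employee_record", "update"))    # True
```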

Now that least privilege, need-to-know, and the models that support the assignment of authorization have been defined, the next area to tackle is that of policy and supporting models. Supporting models can be subcategorized into a matrix of security models that includes the confidentiality, integrity, information flow, noninterference, take-grant, access-control matrix, Graham-Denning, Harrison-Ruzzo-Ullman, Zachman Framework, and Brewer-Nash models.

These models do overlap. For example, the Lattice-Based, State Machine, and Bell-LaPadula models are all confidentiality models, while the Biba and Clark-Wilson models are integrity models. Each of these, however, falls into the larger category of information flow models.
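
Since Bell-LaPadula and Biba each reduce to a small pair of rules, a compact sketch of both follows; it is a simplification that ignores categories/compartments and uses invented level names.

```python
# Confidentiality (Bell-LaPadula) vs. integrity (Biba) rules over a simple
# ordered set of levels; a higher number means more sensitive / higher integrity.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def blp_can_read(subject_level, object_level):
    """Bell-LaPadula simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def blp_can_write(subject_level, object_level):
    """Bell-LaPadula *-property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

def biba_can_read(subject_level, object_level):
    """Biba simple integrity axiom: no read down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

def biba_can_write(subject_level, object_level):
    """Biba *-integrity axiom: no write up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

print(blp_can_read("secret", "public"))   # True: reading down preserves confidentiality
print(biba_can_read("secret", "public"))  # False: reading down risks integrity
```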

Avoiding the temptation to define and explain each of these, it must for now be sufficient to say that these models are in turn supported by the system’s mode of operation. Modes of operation are codified within the Common Criteria, adapted from DoD 5200.28, and provide an operating mode for each of the MAC models.

While these models themselves do little to protect organizational assets, adherence to the governance policies they define (including both administrative and technical controls) provides the structural foundation for protecting the confidentiality, integrity, and availability of information.

 

References

Conrad, E., Misenar, S., & Feldman, J. (2010). CISSP Study Guide. Burlington, MA: Syngress.

ISO/IEC. (n.d.). Common Criteria Documents. Retrieved from NSA Common Criteria Evaluation Scheme:
http://www.niap-ccevs.org/cc_docs/

Krause, M., & Tipton, H. (1997). CISSP Open Study Guides. Retrieved from Handbook of Information Security Management:
http://www.ccert.edu.cn/education/cissp/hism/087-089.html

NIST. (n.d.). DoD 5200.28. Retrieved from Computer Security Research Center:
http://csrc.nist.gov/groups/SMA/fasp/documents/c&a/DLABSP/d520028p.pdf

 

 

Posted on August 9, 2012 in Information Technology

 
 