Road Map for an Application/Software Security Architect (Part 6)

So, the application designer has disclosed that the solution for the web services being designed will involve (1) the need to authenticate; (2) the need to determine levels of authorization; and (3) [by the way] the need to have some personalized data carried forward to the application. If you, as the security architect involved in the security assessment process, are smart, you will have a security framework to meet these requirements. And if you are “lucky,” the application designer will have aligned the requirements to the security framework. But the reality is that even with an architecture supported by standards and guidelines, convincing the application developers to follow it is another story.

Rather than take on the “creative conflict,” the discussion should be a convincing proposal: the information is already in place, and it is easier for the application developer to obtain it through the “architecture” than to create yet another database. The proper approach is to bring value to the organization and make the development process easier with the architecture. The key to bringing value is to have the information in the “best” place (here!), at the “best” time (now!), and with the “best” content (right!).

The application developer will be interested in two types of data to drive the application: the identity information and the application’s database. Identity Management as a service deals with the former (the security requirements and implications of the latter will be discussed in future posts). Although there are multiple products on the market labeled as performing identity management, it is more than just technology (the tool); it also involves people and processes. Information about a digital identity that is gathered from multiple data sources and stored in other information stores is the result of a set of operational procedures (people, process, and technology) that manage the information flow.

Security for Identity Management involves a lot more than just Confidentiality, Integrity, and Availability, so I prefer to use the Parkerian Hexad (the elements of information security proposed by Donn B. Parker in his book “Fighting Computer Crime: A New Framework for Protecting Information,” John Wiley & Sons, 1998) to propose how to make “here!”, “now!”, and “right!” the feasible end result (goal) of a good identity management procedure. The six core elements lay out the parameters that, for the developers, make the architect’s vision of an identity information store of “digital identity” preferable to creating yet another identity store through yet another registration process (a short sketch after the list shows one way these properties could be recorded as metadata on each attribute):

  1.  Confidentiality – not only must the process control who has access to the data while the identity data is being managed from the data source of record into the identity information store (such as an LDAP directory), but the process that created the data in the data source of record must also have been protected.

  2.  Possession or control – the process by which the information is delivered from the data source of record to the identity information store is secured, well understood, and controlled, much like a “chain of custody.”

  3.  Integrity – the information itself is not compromised; its consistency with its intended state asserts the validity of the data.

  4. Authenticity – the claim that the data comes from a reliable source, that is, the assertion that the information from the data source of record is valid and truthful, is important to document so that the information is not misused under a false assumption. The ways the management process can assert the authenticity of the data will be discussed later.

  5. Availability – the identity information store needs to be accessible to the application for it to be of any use. But, more importantly, the information that feeds into the identity information store must be accessible, and the rules as to how current the information is should be well understood.

  6. Utility – the most important element is the agreement that the data is useful and that it will meet all the requirements for authentication, authorization, and personalization without causing an excessive amount of overhead in processing time and development cost.
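To make the hexad concrete for developers, here is a minimal sketch (in Python, with hypothetical attribute names, an assumed authoritative-source table, and an arbitrary freshness threshold) of how integrity, authenticity, and availability could travel as metadata alongside each attribute in the identity information store, so an application can ask whether a value is current, untampered, and from the source of record before trusting it.

```python
# A minimal sketch, not a product or standard: hexad-style metadata carried
# with each attribute in the identity information store. Field names, the
# authoritative-source table, and the freshness threshold are all hypothetical.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class IdentityAttribute:
    name: str                  # e.g. "employee_number"
    value: str
    source_of_record: str      # authenticity: where the value was mastered
    synchronized_at: datetime  # availability: how current this copy is
    checksum: str              # integrity: hash taken when the value left the source

AUTHORITATIVE_SOURCES = {"employee_number": "hr_system", "email": "hr_system"}

def hexad_concerns(attr, max_age=timedelta(days=1)):
    """Return the hexad properties this attribute currently fails."""
    concerns = []
    if hashlib.sha256(attr.value.encode()).hexdigest() != attr.checksum:
        concerns.append("integrity")        # value changed since it left the source
    if AUTHORITATIVE_SOURCES.get(attr.name) != attr.source_of_record:
        concerns.append("authenticity")     # not fed from the source of record
    if datetime.utcnow() - attr.synchronized_at > max_age:
        concerns.append("availability")     # copy is stale for its intended use
    return concerns
```

Confidentiality, possession, and utility are properties of the surrounding process and agreements rather than of a single record, which is why they do not appear as fields in this sketch.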

 

Steve Primost CISSP, CISM
Information/Application Security Architect

 

Road Map for an Application/Software Security Architect (Part 5)

Without a Digital Identity, how would you expect to do any authentication? And with an incomplete Digital Identity, how would you expect to get the authorization done correctly? Without the proper data model and the expectation that it holds the correct data (besides being in the right place at the right time), securing a system is impossible; with that information in hand, it becomes the easiest question to answer.
In my last post, I examined the purpose of a Digital Identity and why it is not appropriate, when thinking through the architecture of a solution, to make this another after-thought of the system architecture. Worse than not having the information (a security risk) is having information that is unreliable or conflicting (a business risk). So let me lay out some rules and guidelines, and a couple of general questions you might ask as part of the logical design.
But before getting started, a good data model of the infrastructure that is used for authentication and authorization is required. This is part of the overall security framework, which has an “as is” as well as a “to be” component. In this case (and the subject of a framework and road map is, obviously, going to be mentioned again), we look at where the data is that identifies the person (or computer) and all the information that is stored about the person (or computer), best described as a data model together with a component model. Let’s deal with each.
The data model defines all of the attributes that would be part of the digital identity. But whose digital identity? An enterprise (or organization) has a number of different types of users, known as constituents. Typical constituents would be employees (temporary, permanent, vendor-access), customers, and business partner representatives; basically anyone who may have access to your systems and services. Each set of constituents would have a basic set of attributes, such as user name and password, and a distinctive set of attributes, such as employee number or customer number. Everything about that constituent is termed its digital identity. In general, the more you “know” about the constituent, the better your challenge for authentication and determination of authorization.
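As a sketch of what such a data model might look like (the constituent types and attribute names here are illustrative, not a recommendation), the shared attributes sit on a base digital identity and each constituent type adds its distinctive attributes:

```python
# A minimal sketch of a digital-identity data model; constituent types and
# attribute names are hypothetical and would come from your own analysis.
from dataclasses import dataclass

@dataclass
class DigitalIdentity:            # basic attributes every constituent shares
    user_name: str
    password_hash: str

@dataclass
class Employee(DigitalIdentity):  # distinctive attributes for this constituent
    employee_number: str
    employment_type: str          # "temporary", "permanent", "vendor-access"

@dataclass
class Customer(DigitalIdentity):
    customer_number: str
```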
The component model defines where the attributes necessary for the digital identity reside. Just because they are defined as necessary does not mean that they are available. The objective of this document is to determine where the information resides, how reliable it is, and whether the place where the attribute resides is the most accurate. What you need is the source of the attribute data or, if that is not available, the most reliable copy of the information. Duplicate information about an attribute is a warning sign that the information being provided for the digital identity may not be the most reliable (more on this when we look at identity management). [How many applications are using the wrong copy of the attribute, the one that is, perhaps, not updated as often?]
While the two models are logical, the assumption is that the digital identity of any of the constituents may not physically reside in a single database or LDAP-accessed directory. An Active Directory may have sufficient data about credentials, but it will be less reliable for a person’s job function, which could determine the role. The component model will likely include indications of multiple stores, and the data models will indicate relationships between the multiple stores (which are not always consistent, either). It will also indicate the “owner” of the information (attribute as well as database).
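One lightweight way to capture the component model is as data: each attribute mapped to the store that holds it, the owner, and whether that store is the source of record. The sketch below uses hypothetical store and owner names and simply flags duplicated attributes as the warning sign described above.

```python
# A minimal sketch of a component model as data; stores and owners are
# placeholders, not a recommendation of where anything should live.
from collections import Counter

COMPONENT_MODEL = [
    {"attribute": "user_name",    "store": "Active Directory", "owner": "IT Ops", "source_of_record": True},
    {"attribute": "job_function", "store": "HR system",        "owner": "HR",     "source_of_record": True},
    {"attribute": "job_function", "store": "Active Directory", "owner": "IT Ops", "source_of_record": False},
]

def duplicated_attributes(model):
    """Attributes held in more than one store -- the reliability warning sign."""
    counts = Counter(entry["attribute"] for entry in model)
    return [name for name, n in counts.items() if n > 1]

def source_of_record(model, attribute):
    """The store an application should use for the most reliable copy."""
    for entry in model:
        if entry["attribute"] == attribute and entry["source_of_record"]:
            return entry["store"]
    return None
```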
With this, now comes the discussion with the application (or service) designer to review the data necessary for the authentication and authorization (credential checking) access sequence. The objective of this discussion is to review the following (partial) list of items:

  1. Define the constituents that will have access, the types of access that are necessary, and the business reason for the access.
  2. What is the method of authentication, and is it sufficient for the data that is being exposed as part of the business reason for the access?
  3. What are the business rules for the types of access, defining what would be the answer to the question of “do you have authorization for access (coarse grained)?”
  4. What are the business rules for the types of access, defining what would be the answer to the question of “do you have authorization for the type of access (fine grained)?”
  5. What other information is required from the digital identity to support the process of access into the system or service? Hint: application designers like to carry information with them for use in session handling (stored in a session table or a cookie), such as name, address, or subscriber number, that is more reliably obtained from the digital identity during the access control sequence. A sketch of items 3 through 5 follows this list.
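As a sketch of how those last three items differ in practice (the identity record, role names, entitlements, and attributes below are hypothetical), the coarse-grained question is asked once at the access control point, the fine-grained question is asked per operation inside the business logic, and the personalization data is read from the digital identity rather than invented by the application:

```python
# A minimal sketch; a real deployment would read this record from the
# identity information store rather than an in-memory dict.
identity = {
    "user_name": "jjones",
    "roles": {"claims_app"},                              # coarse-grained access
    "entitlements": {"claims_app": {"read", "update"}},   # fine-grained access
    "subscriber_number": "C-10443",                       # personalization
}

def coarse_grained(identity, application):
    """'Do you have authorization for access?' -- asked at the access control point."""
    return application in identity["roles"]

def fine_grained(identity, application, operation):
    """'Do you have authorization for this type of access?' -- asked by the business logic."""
    return operation in identity["entitlements"].get(application, set())

def session_profile(identity):
    """Personalization carried into the session from the digital identity, not a home-grown table."""
    return {"display_name": identity["user_name"],
            "subscriber_number": identity["subscriber_number"]}
```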


Steve Primost CISSP, CISM
Information/Application Security Architect

Road Map for an Application/Software Security Architect (Part 4)

Planning your application’s use of the digital identity is not an after-thought of system architecture. Treating it as one invites, at the least, occasional unreliable or conflicting information. At the worst, it provides little, if any, protection at all. And like the proverbial little Dutch boy, you will be putting fingers in the holes of the dike, attempting to shore up a weak infrastructure with fixes and excuses.

In my previous post, four classifications of possible vulnerabilities were given. The top one, in my view, is the use of Digital Identity. Application developers are prone to view this as just another operational infrastructure component that will, by some miracle, provide reliable credentials for authentication. Authorization is something that either is part of authentication or is just a couple of conditions in the lines of code. The problem is more than just the lack of governance of how an application does authentication and (required) authorization; the issue is that the data is not planned well enough to support the authentication and authorization the application needs to leverage.

Digital Identity:

At a recent security event I attended, a colleague was lamenting how his “LDAP” servers were not being synchronized correctly. Application developers were finding it difficult to verify credentials and had little capability to extend that to more than a very high level authorization process. While in the end he may have an identity management problem, the basic issue is that of poor planning and no strategy for a data model of the digital identity. The digital identity was not serving the application development properly.

Unless you have the luxury of a “green field” when planning a digital identity infrastructure, you need to understand the framework within which you will need to operate. A single store for all of the end users, with all of the necessary information for all of the applications, will not exist. So, to make sense of it, understand, and inventory, the relationship of the various data stores to the applications, both technically and politically. At a minimum, this would clearly define the purpose and function of the data stores and any LDAP servers, such as Active Directory.

Understanding where the digital identity data resides is only part of the problem. The more difficult problem is understanding how users are identified in each data store and LDAP server. Part of my colleague’s problem was that “Jane Jones” was listed with a userID of JJones01 in the Active Directory, JJones02 in LDAP, and JaneJones in the key field of UserIdent in the finance database. Of course the synchronization would not work! If the digital identity for “Jane Jones” is broken, it is not the application designer’s problem to fix; it is an infrastructure issue to mend the relationships so that the infrastructure can be leveraged in a cost-efficient and expedient manner. At a minimum, establish relationships between the stores to enable their intended function, and “rules of engagement,” including the expectation that certain data is located in specific (and sometimes unique) data stores.
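A sketch of what that infrastructure-level mend can look like: a correlation owned by the identity infrastructure, not by each application, that resolves one person to the identifier each store actually uses. The canonical key and the linking table below are hypothetical; the store names follow the example above.

```python
# A minimal sketch of identity correlation owned by the infrastructure;
# the canonical key and the linking table are hypothetical.
CORRELATION = {
    "jane.jones": {                          # canonical, enterprise-wide key
        "Active Directory":     "JJones01",
        "LDAP":                 "JJones02",
        "finance_db.UserIdent": "JaneJones",
    },
}

def store_identifier(canonical_id, store):
    """Resolve the identifier a given store uses for this person."""
    return CORRELATION.get(canonical_id, {}).get(store)

# An application asks the infrastructure instead of guessing:
assert store_identifier("jane.jones", "Active Directory") == "JJones01"
```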

Recognizing that there is a difference between the functions of authentication and authorization is necessary in maintaining a digital identity across multiple stores. The fewer points where authentication is done, the better the control of the initial access. Authorization can, and most likely will, be spread across stores, but that is fine as long as you have the relationships clearly defined. Authorization comes in two classes: (1) the coarse-grained variety, which is easiest at the point of access control of authentication, and (2) the fine-grained variety, which is complicated by the business logic of the application. Logically, all of this information is part of the “data model” of the digital identity; physically (and politically), it may reside in separate databases “owned” by the application.

And, lastly, having no governance as to how applications may use (when, why, and how) the various stores of the digital identity will cause chaos. If the policies are in place and the standards are enforced, then the security framework provides the governance. Enforcement at the various points of the SDLC during security and risk assessment is crucial, both to continue to educate developers and designers and to maintain the integrity of the information at the various stores. Ad hoc modification of the data model for digital identity, including locating authorization data in multiple data stores, can and does have long-term consequences for the viability of the digital identity.

Steve Primost CISSP, CISM
Information/Application Security Architect

Road Map for an Application/Software Security Architect (Part 3)

Risk assessments for application software are not a matter of a quick penetration test, nor a matter of code reviews at a single point in time. They are a process of moving through the application/solution’s Software Development Life Cycle (SDLC) and evaluating the results of the controls that are put in place at each phase. Whether the method is waterfall or agile, waiting for the final delivery of the software makes no sense. No matter how much you put into the end phase (usually the acceptance testing), if you have not tested and sampled the effectiveness and examined the results of the controls along the way, it will be a flawed product. So having a security risk gate review and assessment at each point in the process must be mandatory.

The needs and the controls will be different at each point in the SDLC for a security evaluation. The previous posting spoke of the necessary elements of scope, purpose, objectives, responsibilities, and processes for a risk assessment. While application security is evaluated on many different levels, from code to architecture, the intent is to define “risk assessment” on the latter, since that is within the scope of responsibilities for an application security architect. (More on the role of that architect during the detailed specification, development, testing, and deployment/operational phases of the SDLC in later posts.)

The last post mentioned “vectors” (my term!) for classification. Perhaps it can be better explained as four areas of vulnerabilities, each of which will have, assuming this morphs into a procedural process, some definition of administrative, physical, and technical controls and data-gathering. I would propose the following classifications:

Digital Identity: The application as well as the end users will be using a defined set of information (credentials and the resulting authorization and specific attributes) in order to operate in the defined environment. The setting of the privileges as well as the placement of information is reviewed. This is typically expressed as the authentication and authorization data model.

Access Management: The application needs to allow its end users access to data as well as access data on its own. This defines what access scenarios need to be used and how they are used, and indicates the type of information that will be exposed, either singular or composite (for data leakage), and how it is validated. At a minimum, a data model and sequence diagrams are used.

Identity Management: This addresses the operational impact (people, process, technology) to manage the digital identities across the enterprise, or, in the case of federated identities, across multiple domains. If the security infrastructure (roles, delegated administrators, governance, enterprise policy, data owners, etc) is affected, this brings out the issues. References are made to both of the data models and additional definitions are provided for entitlements and roles.

Session Management: In addition to the typical state (and perhaps timing) diagrams that developers are prone to use, this addresses the concerns of “data in use,” “data in motion,” “data at rest,” and “data disposal” throughout the software solution’s data life cycle and transaction flow, whether batch or “real-time.” One should not confuse, in this pattern definition, an access “session” (the interaction of end user to system, or system to system) with the data life-cycle “session” of data at points in time.

Each of these classifications of vulnerabilities could have a varying set (some overlapping) of threats based upon the threat model. Basically, we are addressing a set of security aspects and focusing on a set of possible attacks. But rather than listing the attacks and referencing the method to reduce the risk for each classification (incident-based), we will look at providing a checklist, or asset-based, methodology for each of the classifications.
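As an illustration of what “asset based” could look like in practice, here is a minimal sketch of a checklist keyed by the four classifications above; the individual items are illustrative placeholders, not a complete standard.

```python
# A minimal sketch of an asset-based checklist; items are illustrative only.
SECURITY_CHECKLIST = {
    "Digital Identity": [
        "Authentication/authorization data model documented",
        "Source of record identified for every identity attribute",
    ],
    "Access Management": [
        "Access scenarios and sequence diagrams reviewed",
        "Data exposed per scenario classified (singular vs. composite)",
    ],
    "Identity Management": [
        "Operational owners and delegated administrators named",
        "Entitlements and roles defined against the data models",
    ],
    "Session Management": [
        "Data in use / in motion / at rest / disposal addressed",
        "Access sessions distinguished from data life-cycle state",
    ],
}

def open_items(completed):
    """Return checklist items not yet evidenced at this SDLC gate."""
    return {c: [i for i in items if i not in completed.get(c, [])]
            for c, items in SECURITY_CHECKLIST.items()}
```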


Steve Primost CISSP, CISM
Information/Application Security Architect

Road Map for an Application/Software Security Architect (Part 2)

Vulnerability testing at the acceptance stage of an application’s Software Development Life Cycle (SDLC) will not compensate for the lack of an understanding of what is being done during software development, even though you may not have control over the development efforts. You need a plan that puts those controls in place and allows that governance. Ignoring vulnerabilities will not prevent breaches.

Remembering back to building a risk assessment plan, we can build a similar plan for application security, but with the intention of engagement at predefined points in the SDLC for _every_ software solution (or application) that might also raise concern for a risk assessment. The application security plan needs to cover the same set of tasks that a risk assessment might cover and would have a similar set of assignments of a RACI (Responsible, Accountable, Consulted, Informed) matrix.
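As a sketch of what such a RACI matrix might look like as a working artifact (the tasks and parties here are placeholders an organization would replace with its own), keeping it as data also makes it easy to check the classic rule that each task has exactly one Accountable party:

```python
# A minimal sketch of a RACI matrix as data; tasks and parties are placeholders.
RACI = {
    "Define security standards":   {"Security architect": "A", "App architect": "C",
                                    "Security officer": "R", "Business owner": "I"},
    "Gate review at design phase": {"Security architect": "R", "App architect": "C",
                                    "Security officer": "A", "Business owner": "I"},
    "Mitigation of findings":      {"Security architect": "C", "App architect": "R",
                                    "Security officer": "A", "Business owner": "I"},
}

def accountable_for(task):
    """Exactly one party should be Accountable for each task."""
    owners = [who for who, code in RACI[task].items() if code == "A"]
    assert len(owners) == 1, f"RACI violation for task: {task}"
    return owners[0]
```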

The first step is to establish the purpose and objective of the program. The program’s main purpose is to reduce the number and severity of “bad” design and application programming habits. The intention is to determine the effectiveness of the plan by measuring how well it avoids mitigation efforts. This assumes that appropriate policies as well as the “tone at the top” are in place (or else this becomes the first and most significant prerequisite prior to organizing the plan). But that does not answer how to measure the effectiveness. Part of the answer is a baseline evaluation of the history of solutions and applications previously submitted for risk assessments and the results obtained, especially whether any further effort was required for mitigation. Having never performed a defensive risk assessment before, an organization can easily miss the “unknown knowns,” so part of the objective of the program will be to identify what has been “missed.”

The second step is to establish the responsibilities for this plan. The team that is responsible for the development of the plan may be the same team, or its designates, that will be responsible for implementing the plan. While on the surface this may be a purely technical “play” (let the architects handle this!), in reality it is a cooperative effort that involves input from the business users (or at least the ones most involved in development of the significant services), the security officer (and the team that does the current risk assessments), the audit and controls types (for advice), and the application architects. Senior management will be involved in this process, since it should be fully expected that the SDLC process will be affected: you will need a Security Review Board (SRB!) to intercede at various points (gates) of the SDLC.

The process entails the identification and classification of the assets, which need protection because they are vulnerable to threats. But in this case, we will not assign a priority or an impact value to each classification; instead, we will assign a standard (or guideline) against which to measure the effectiveness in meeting the classification. In an application assessment, the primary asset is “data,” to which we assign vectors that would impact the security (and privacy protection) of the data. We will evaluate the vectors in light of the threats and the vulnerabilities that they might expose.

In this plan, we are addressing the identification of the exposure and how to handle that exposure, through a set of standards and guidelines drawn from a library of information and documented experience. Obviously, this library would grow with repeated iterations of additional projects. The intention is to drive the knowledge of the appropriate security behavior of applications from the security architect and the security team to the application architects and detailed designers.

Next post … identifying the classification of vectors

Steve Primost CISSP, CISM
Information/Application Security Architect

Road Map for an Application/Software Security Architect (Part 1)

With the current level of concern about security, it is interesting that there is not more focus on a holistic approach to application security. Numerous articles cite chilling statistics about security breaches, with the majority (some use the figure of 80%) being related to applications. It is not for lack of information as to what constitutes an “application problem”; one just has to go to the OWASP web site for more than sufficient information.

The issue, as I see it, is much more than the “what” and the “how” of the “top ten”; it is the approach that needs to be defined (and understood). In this and the following series of posts, I offer a modest proposal for a logical approach that addresses how to define a secure application architecture (the plan), what the logical framework would be (the organization), what steps both the supplier (the application security architect) and the consumer (the software developer) would take to implement it (the direction), and how to assess the cost and effectiveness (the controls).

The basis of any security plan is that it must consider the threat model. In its elementary form, the threat model merely states that you can determine a set of possible attacks, each with a probability of occurrence, against a set of assets that have a value worth protecting. So one needs to build a set of countermeasures to mitigate the effects of the threats that could exploit the defined vulnerabilities. It is considered “best practice” to define a set of threat models for, in this case, an application solution, so that the larger problem is reduced to a manageable set of smaller problems. This is the approach we will take as we course our way through the road map.
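In its most elementary form, that threat-model arithmetic can be sketched as follows; the attacks, probabilities, and asset values are purely illustrative, and a real model would come from the organization’s own analysis:

```python
# A minimal sketch: risk as probability of an attack times the value of the
# asset it targets. Names and numbers are illustrative placeholders.
THREATS = [
    {"attack": "credential theft",      "asset": "customer identity data", "probability": 0.30, "asset_value": 9},
    {"attack": "session hijacking",     "asset": "active user sessions",   "probability": 0.15, "asset_value": 7},
    {"attack": "privilege escalation",  "asset": "admin functions",        "probability": 0.05, "asset_value": 10},
]

def ranked_risks(threats):
    """Order threats by exposure so countermeasures go to the biggest first."""
    return sorted(threats, key=lambda t: t["probability"] * t["asset_value"], reverse=True)

for t in ranked_risks(THREATS):
    print(f'{t["attack"]}: exposure {t["probability"] * t["asset_value"]:.2f}')
```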

Clearly, there are two ways to approach the determination, and ultimately the use, of the threat model. Only when you use both will you have confidence that you have completely addressed all of the threats.

The typical approach seen in organizations today is the reactive mode (some sources might use the term adversarial). The Software Development Life Cycle (choose any one of a number of types) grinds forward, with security more or less part of “development” and “design,” until the test and acceptance phase. There may have been some meetings to review the risks associated with the application solution, but the real crunch comes when the code is delivered. The security group gets involved and performs a true risk assessment, with both penetration and vulnerability testing at all layers. Vulnerabilities may be detected, assessed, and, if significant (cost-justified), a process of risk mitigation is planned and completed. The question, in my mind, is whether you really did address all of the threats and apply the appropriate controls, or whether you simply went through an exercise in compliance. But compliance is not security!

The alternate approach, gaining acceptance in a number of organizations, perhaps for the savings in development cost, is the defensive mode. If a good security strategy is “defense in depth,” then apply it to the development cycle and address the separate layers of the development specification. Rather than treating the “gate review” as a security “check box,” with the security architect judging the development architect, view the development architect as a security architect working within the technical specification framework. With a set of standards, guidelines, templates, and training of developers, security becomes part of the logical architecture, is reassessed when the review of the physical architecture is done, is incorporated in the detailed design, and is instantiated in the software. The evaluation for acceptance of the software then includes the penetration and vulnerability testing of the traditional (reactive) approach above, but adds the additional dimension of understanding the development view.

Steve Primost CISSP, CISM
Information/Application Security Architect