“Active Directory Recovery”
Anything from human error to malicious activity to an unforeseen environmental catastrophe can wipe out your critical system infrastructure. A crash of your critical systems is unacceptable when your customers deserve the best from you, and having systems down for 24 hours or even days is unnecessary when you can back your systems up with CionSystems Active Directory Recovery.
CionSystems offers an easy-to-use web-based solution for fast, online recovery. Active Directory Recovery Manager empowers you to recover from inadvertent deletions or changes in seconds, not hours. The online, granular restore capability allows you to recover without taking AD offline. In-depth comparison reports highlight what objects and attributes have changed or been deleted in Active Directory. This allows IT administrators to conduct efficient, focused recovery at the object or attribute level. Having accurate backups and fast recovery enables you to reduce the time and costs associated with AD outages and decrease the impact on end users.
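The object- and attribute-level comparison can be pictured as a diff between a backup snapshot and the live directory. Below is a minimal illustrative sketch in Python; the snapshot format (a dict keyed by distinguished name) and the sample entries are invented for the example, not the product's actual format:

```python
# A minimal sketch of object/attribute-level snapshot comparison. The
# snapshot format and sample entries are invented for illustration.

def compare_snapshots(backup: dict, live: dict) -> dict:
    """Report objects and attributes that differ between two snapshots."""
    report = {"added": [], "deleted": [], "modified": {}}
    for dn, attrs in backup.items():
        if dn not in live:
            report["deleted"].append(dn)
            continue
        # Collect attributes whose value changed, as (old, new) pairs.
        changed = {a: (v, live[dn].get(a))
                   for a, v in attrs.items() if live[dn].get(a) != v}
        if changed:
            report["modified"][dn] = changed
    report["added"] = [dn for dn in live if dn not in backup]
    return report

backup = {"CN=JSmith,OU=Staff,DC=corp,DC=local":
          {"title": "Analyst", "memberOf": ["CN=Staff"]}}
live = {"CN=JSmith,OU=Staff,DC=corp,DC=local":
        {"title": "Analyst", "memberOf": ["CN=Staff", "CN=Domain Admins"]}}

print(compare_snapshots(backup, live))  # flags the memberOf change
```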
Implementing the right tool for comprehensive protection is critical to turning major problems into minor restores. If your Active Directory is significantly damaged, just restore your entire domain. Don’t fuss or fight with a nasty situation.
“Change Notifier”
IT professionals who work with Active Directory know it can be a beastly experience. What is particularly troubling is change management for managed and unmanaged changes. It is imperative for IT professionals to know what changes are happening to Active Directory (for example, administrator group membership, and account creation and deletion), not just from an audit and compliance point of view but from a security point of view. Active Directory is the central repository that controls access.
There is a way to gain better change management control over Active Directory and avoid security and other failures. One solution is to back up your IT team with a lightweight change notification tool that gives an immediate heads-up on managed and unmanaged changes. Being informed allows you to act quickly and efficiently on known and unknown changes, including any malicious activity, and keep your critical systems healthy and safe.
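As a rough illustration of the notification idea, the sketch below polls a sensitive group and flags membership deltas. The fetch_members() function is a stub standing in for a real directory query, and all names here are hypothetical:

```python
# A rough polling sketch of the change-notification idea: watch a
# sensitive group and flag membership changes. fetch_members() is a
# stub standing in for a real directory query.
import time

def fetch_members(group_dn: str) -> set:
    # Stub: replace with an actual LDAP search of the group's
    # 'member' attribute against your domain controller.
    return {"CN=Alice,OU=IT,DC=corp,DC=local"}

def watch(group_dn: str, interval_s: int = 60) -> None:
    """Poll the group and print an alert for each membership delta."""
    known = fetch_members(group_dn)
    while True:
        time.sleep(interval_s)
        current = fetch_members(group_dn)
        for dn in current - known:
            print(f"ALERT: {dn} added to {group_dn}")
        for dn in known - current:
            print(f"ALERT: {dn} removed from {group_dn}")
        known = current

# watch("CN=Domain Admins,CN=Users,DC=corp,DC=local")  # runs until stopped
```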
Proactive policies can save your IT folks a lot of pain and lost productivity. If you have been hit by security issues, or are apprehensive about security, then visit http://www.cionsystems.com for the Active Directory Change Notifier.
So, the application designer has disclosed that the solution for the web services being designed will involve the (1) need to authenticate; (2) need to determine levels of authorization; and (3) [by the way] need to have some personalized data carried forward to the application. If you, as the security architect involved in the security assessment process, are smart, you will have a security framework to meet these requirements. And if you are “lucky,” the application designer will have aligned the requirements to the security framework. But the reality is that even with an architecture supported by standards and guidelines, convincing the application developers to follow it is another story.
Rather than taking on the “creative conflict,” the discussion should be a convincing proposal: the information is already in place, and it is easier for the application developer to obtain it through the “architecture” than by creating yet another database. The proper manner is to bring value to the organization and make the development process easier with the architecture. The key to bringing value is to have the information in the “best” place (here!), at the “best” time (now!), and with the “best” information (right!).
The application developer will be interested in two types of data to drive the application: the identity information and the application’s database. Identity Management as a service deals with the former (the security requirements and implications of the latter will be discussed in future posts). Although there are multiple products on the market that are labeled as performing identity management, it is more than just technology (the tool); it also involves people and processes. Information about a digital identity that is gathered from multiple data sources and stored in other information stores is the result of a set of operational procedures (people, process, and technology) that manage the information flow.
Security for Identity Management involves a lot more than just Confidentiality, Integrity, and Availability, so I prefer to use the Parkerian Hexad (the elements of information security proposed by Donn B. Parker in his book “Fighting Computer Crime, A New Framework for Protecting Information” [John Wiley & Sons, 1998]) to propose how to make “here!”, “now!”, and “right!” the feasible end result (goal) of a good identity management procedure. The six core elements lay out the parameters that, for the developers, make the architect’s vision of an identity information store of “digital identity” preferable to creating yet another identity store through yet another registration process (a small sketch of how these elements might be made checkable follows the list):
Confidentiality – the process not only protects who has access to the data during the “managing” of the identity data from the data source of record into the identity information store (such as an LDAP directory), but also ensures that the process that created the data in the data source of record was protected.
Possession or control – the process by which the information is delivered from the data source of record to the identity information store is secured, similar to a “chain of custody,” and is well understood and controlled.
Integrity – the information itself is not compromised; being consistent with its intended state asserts the validity of the data.
Authenticity – the claim that the data comes from a reliable source, that is, the assertion that the information from the data source of record is valid and truthful, is important to document so that the information is not misused under a false assumption. The ways that the management process can assert the authenticity of the data will be discussed later.
Availability – the identity information store needs to be accessible to the application for it to be of any use. But, more importantly, the information that feeds into the identity information store must be accessible, and the rules as to how current the information is should be well understood.
Utility – most important of all is the agreement that the data is useful and that it will meet all the requirements for authentication, authorization, and personalization without causing an excessive amount of overhead in processing time and development costs.
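As a rough sketch of how a management procedure might make these elements checkable, the example below (with invented field names) attaches provenance metadata to each identity record: the source of record supports authenticity, the retrieval timestamp supports availability and freshness, and a digest supports integrity along the “chain of custody”:

```python
# A sketch, with invented field names, of provenance metadata on an
# identity record as it moves from the source of record into the store.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IdentityRecord:
    user_id: str
    attributes: dict
    source_of_record: str   # which authoritative system supplied the data
    retrieved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    digest: str = ""

    def seal(self) -> None:
        """Record a digest of the attribute payload for later integrity checks."""
        payload = json.dumps(self.attributes, sort_keys=True).encode()
        self.digest = hashlib.sha256(payload).hexdigest()

    def verify(self) -> bool:
        """Re-compute the digest; a mismatch signals tampering or corruption."""
        payload = json.dumps(self.attributes, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest() == self.digest

rec = IdentityRecord("jjones", {"dept": "Finance"}, source_of_record="HR system")
rec.seal()
assert rec.verify()
```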
Steve Primost CISSP, CISM
Information/Application Security Architect
Road Map for an Application/Software Security Architect (Part 6), posted 2010-04-09
Without a Digital Identity, how would you expect to do any authentication? And with an incomplete Digital Identity, how would you expect to get the authorization done correctly? Without the proper data model and the expectation that it holds the correct data (besides being in the right place at the right time), securing a system is impossible; with the information in hand, it is the easiest question to answer.
In my last post, I examined the purpose of a Digital Identity and why it is not appropriate, when thinking through the architecture of a solution, to make it another after-thought of the system architecture. Worse than not having the information (a security risk) is having information that is inaccurate, unreliable, or conflicting (a business risk). So let me lay out some rules and guidelines, and a couple of general questions you might ask as part of the logical design.
But before getting started, a good data model of the infrastructure that is used for authentication and authorization is required. This is part of the overall security framework, which has an “as is” as well as a “to be” component. In this case (and the subject of a framework and road map is, obviously, going to be mentioned again), we look at where the data is that identifies the person (or computer) and all the information that is stored about the person (or computer), best described as a data model paired with a component model. Let’s deal with each.
The data model defines all of the attributes that would be part of the digital identity. But whose digital identity? An enterprise (or organization) has a number of different types of users, known as constituents. Typical constituents would be employees (temporary, permanent, vendor-access), customers, and business partner representatives; basically, anyone who may have access to your systems and services. Each set of constituents would have a basic set of attributes, like user name and password, and a distinctive set of attributes, such as employee number or customer number. Everything about that constituent is termed its digital identity. In general, the more you “know” about the constituent, the better your challenge for authentication and determination of authorization.
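A minimal sketch of such a data model: a base set of attributes shared by every constituent, extended by each constituent type’s distinctive attributes. Field names are illustrative, not prescriptive:

```python
# A minimal sketch of the digital-identity data model; field names
# are illustrative only.
from dataclasses import dataclass

@dataclass
class Constituent:
    user_name: str        # basic attributes common to every digital identity
    credential_ref: str   # pointer to the credential entry, not the secret itself

@dataclass
class Employee(Constituent):
    employee_number: str
    employment_type: str  # e.g. "permanent", "temporary", "vendor-access"

@dataclass
class Customer(Constituent):
    customer_number: str
```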
The component model defines where the attributes necessary for the digital identity reside. Just because they are defined as necessary does not mean that they are available. The objective of this document is to determine where the information resides. It is also the responsibility of this document to determine the reliability of the information and whether the place where the attribute resides is the most accurate. What you need is the source of the attribute data or, if that is not available, the most reliable copy of the information. Duplicate information about an attribute is a warning sign that the information being provided for the digital identity may not be the most reliable (more when we look at identity management). [How many applications are using the wrong copy of an attribute, the one that is, perhaps, not updated as often?]
While the two models are logical, the assumption is that the digital identity of any of the constituents may not physically be in a single database or LDAP-accessed directory. An Active Directory may have sufficient data about credentials, but it will be less reliable for a person’s job function, which could determine the role. The component model will likely include indications of multiple stores, and the data models will indicate relationships between the multiple stores (which are not always consistent, either). It will also indicate the “owner” of the information (attribute as well as database).
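One way to picture the component model is as a simple attribute registry that records, for each attribute, where it lives, who owns it, and whether that copy is the source of record. The store and owner names below are invented for illustration:

```python
# A sketch of the component model as an attribute registry; store and
# owner names are invented for illustration.
COMPONENT_MODEL = {
    "credentials":  {"store": "Active Directory", "owner": "IT Operations",
                     "source_of_record": True},
    "job_function": {"store": "HR database", "owner": "Human Resources",
                     "source_of_record": True},
    # A duplicate copy is the warning sign mentioned above: which is current?
    "job_function_copy": {"store": "Active Directory", "owner": "IT Operations",
                          "source_of_record": False},
}

def authoritative_store(attribute: str) -> str:
    """Return the store holding the source of record for an attribute."""
    entry = COMPONENT_MODEL[attribute]
    if not entry["source_of_record"]:
        raise LookupError(f"{attribute} is a copy; locate the source of record")
    return entry["store"]

print(authoritative_store("job_function"))  # HR database
```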
With this, now comes the discussion with the application (or service) designer to review the data necessary for the authentication and authorization (credential checking) access sequence. The objective of this discussion is to review the following (partial) list of items:
Define the constituents that will have access, the types of access that are necessary, and the business reason for the access.
What is the method of authentication, and is it sufficient for the data that is being exposed as part of the business reason for the access?
What are the business rules for the types of access, defining the answer to the question “do you have authorization for access (coarse-grained)?”
What are the business rules for the types of access, defining the answer to the question “do you have authorization for this type of access (fine-grained)?” (A small sketch of these two checks follows this list.)
What other information is required from the digital identity to support the process of access into the system or service? Hint: application designers like to carry information with them for use in session handling (stored in a session table, usually to be part of the cookie), such as name, address, or subscriber number; that information is more reliably obtained from the digital identity during the access control session.
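To make the coarse-grained and fine-grained questions from the list concrete, here is a minimal sketch; the services, roles, and business rules are hypothetical:

```python
# A minimal sketch of the coarse-grained and fine-grained questions;
# services, roles, and business rules are hypothetical.
def coarse_grained(identity: dict, service: str) -> bool:
    """Do you have authorization for access at all?"""
    return service in identity.get("entitled_services", [])

def fine_grained(identity: dict, service: str, action: str) -> bool:
    """Do you have authorization for this type of access?"""
    rules = {("billing", "refund"): "billing_supervisor"}  # business-rule table
    required_role = rules.get((service, action))
    return required_role is None or required_role in identity.get("roles", [])

identity = {"entitled_services": ["billing"], "roles": ["billing_clerk"],
            "subscriber_number": "S-1042"}  # pulled at access time, per the hint
print(coarse_grained(identity, "billing"))          # True: may enter the service
print(fine_grained(identity, "billing", "refund"))  # False: lacks the supervisor role
```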
Steve Primost CISSP, CISM
Information/Application Security Architect
Road Map for an Application/Software Security Architect (Part 5), posted 2010-04-09
Planning your application’s use of the digital identity is not an after-thought of system architecture. At the least, treating it as one invites occasional unreliable and conflicting information. At the worst, it provides little, if any, protection at all. And like the proverbial little Dutch boy, you will be putting fingers in the holes of the dike, attempting to shore up a weak infrastructure with fixes and excuses.
In my previous post, four classifications of possible vulnerabilities were given. The top one, in my view, is the use of Digital Identity. Application developers are prone to view this as just another operational infrastructure component that will, by some miracle, provide reliable credentials for authentication. Authorization is something that either is part of authentication or is just a couple of conditions in the lines of code. The problem is more than just the lack of governance over how an application does authentication and (required) authorization; the issue is that the data is not properly planned to support the authentication and authorization an application needs to leverage.
Digital Identity:
At a recent security event I attended, a colleague was lamenting how his “LDAP” servers were not being synchronized correctly. Application developers were finding it difficult to verify credentials and had little capability to extend that to more than a very high-level authorization process. While in the end he may have an identity management problem, the basic issue is poor planning and the lack of a strategy for a data model of the digital identity. The digital identity was not serving the application development properly.
Unless you have the luxury of a “green field” when planning a digital identity infrastructure, you need to understand the framework within which you will need to operate. A single store for all of the end users, with all of the necessary information for all of the applications, will not exist. So, to make sense of it, understand and inventory the relationships of the various data stores to the applications, both technically and politically. At a minimum, this would clearly define the purpose and function of the data stores and any LDAP servers, such as Active Directory.
Understanding where the digital identity data resides is only part of the problem. The more difficult problem is understanding how users are identified in each data store and LDAP server. Part of my colleague’s problem was that “Jane Jones” was listed with a userID of JJones01 in the Active Directory, JJones02 in LDAP, and JaneJones in the key field of UserIdent in the finance database. Of course, the synchronization would not work! If the digital identity for “Jane Jones” is broken, it is not the application designer’s problem to fix; it is an infrastructure issue to mend the relationships so that the infrastructure can be leveraged in a cost-efficient and expedient manner. At a minimum, establish relationships between the stores to enable their intended function, and “rules of engagement,” including the expectation that certain data is located in specific (and sometimes unique) data stores.
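One way to express such a “rule of engagement” is an explicit correlation map that ties a person’s differing identifiers across stores to a single enterprise identity, giving synchronization something to key on. A minimal sketch, reusing the fictional identifiers from the example above:

```python
# A sketch of an identifier correlation map across stores; the
# identifiers echo the (fictional) example above.
CORRELATION = {
    "jane.jones": {   # enterprise-wide anchor identifier
        "Active Directory":  "JJones01",
        "LDAP":              "JJones02",
        "finance.UserIdent": "JaneJones",
    },
}

def local_id(person: str, store: str) -> str:
    """Resolve the identifier a given store uses for this person."""
    return CORRELATION[person][store]

print(local_id("jane.jones", "finance.UserIdent"))  # JaneJones
```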
Recognizing that there is a difference between the functions of authentication and authorization is necessary in maintaining a digital identity across multiple stores. The fewer the points where authentication is done, the better the control of the initial access. Authorization can, and most likely will, be spread across stores, but that is fine as long as you have the relationships clearly defined. Authorization comes in two classes: (1) the coarse-grained variety, which is easiest at the point of access control of authentication, and (2) the fine-grained variety, which is complicated by the business logic of the application. Logically, all of this information is part of the “data model” of the digital identity; physically (and politically), it may reside in separate databases “owned” by the application.
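A rough sketch of that shape: authenticate at one point, then assemble coarse- and fine-grained authorization from the stores that own it. The credential check and store lookups here are stubs, not a recommended implementation:

```python
# A rough sketch: single authentication point, authorization assembled
# from multiple stores. Stubs only; never hard-code credentials.
def authenticate(user: str, password: str) -> bool:
    # Single point of initial access control (e.g. the enterprise directory).
    return (user, password) == ("jjones", "correct-horse")  # stub only

def gather_entitlements(user: str) -> dict:
    # Authorization may legitimately be spread across stores, provided
    # the relationships between them are clearly defined.
    directory_roles = {"jjones": ["staff"]}        # stub: central store
    app_roles = {"jjones": ["invoice_approver"]}   # stub: application-owned store
    return {"coarse": directory_roles.get(user, []),
            "fine": app_roles.get(user, [])}

if authenticate("jjones", "correct-horse"):
    print(gather_entitlements("jjones"))
```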
And, lastly, having no governance as to how applications can use (when, why, and how) the various stores of the digital identity will cause chaos. If the policies are in place and the standards are enforced, then the security framework provides the governance. Enforcement at the various points of the SDLC, during security and risk assessment, is crucial, both to continue educating developers and designers and to maintain the integrity of the information in the various stores. Ad hoc modification of the data model for digital identity, including locating authorization data in multiple data stores, can and does have long-term consequences for the viability of the digital identity.
Steve Primost CISSP, CISM
Information/Application Security Architect
Road Map for an Application/Software Security Architect (Part 4), posted 2010-04-09
Risk assessments for application software are not a matter of a quick penetration test, nor a matter of code reviews at a single point in time. A risk assessment is a process of moving through the application/solution’s Software Development Life Cycle (SDLC) and evaluating the results of the controls that are put in place at each phase. Whether the method is waterfall or agile, waiting for the final delivery of the software makes no sense. No matter how much you put into the end phase (usually acceptance testing), if you have not tested and sampled the effectiveness and examined the results of the controls along the way, it will be a flawed product. So having a security risk gate review and assessment at each point in the process must be mandatory.
The needs and the controls will be different at each point in the SDLC for a security evaluation. The previous posting spoke of the necessary elements of scope, purpose, objectives, responsibilities, and processes for a risk assessment. While application security is evaluated on many different levels, from code to architecture, the intent is to define “risk assessment” on the latter, since that is within the scope of responsibilities of an application security architect. (More on the role of that architect during the detailed specification, development, testing, and deployment/operational phases of the SDLC in later posts.)
The last post mentioned “vectors” (my term!) for classification. Perhaps it can be better explained as four areas of vulnerabilities, each of which will have, assuming this morphs into a procedural process, some definition of administrative, physical, and technical controls and data-gathering. I would propose the following classifications:
Digital Identity: The application as well as the end users will be using a defined set of information (credentials and the resulting authorization and specific attributes) in order to operate in the defined environment. The setting of the privileges as well as the placement of information is reviewed. This is typically expressed as the authentication and authorization data model.
Access Management: The application needs to allow its end users access to data, as well as access data on its own. This defines what access scenarios need to be used and how they are used, as well as indicating the type of information that will be exposed, either singular or composite (for data leakage), as well as validated. At a minimum, a data model and sequence diagrams are used.
Identity Management: This addresses the operational impact (people, process, technology) of managing the digital identities across the enterprise or, in the case of federated identities, across multiple domains. If the security infrastructure (roles, delegated administrators, governance, enterprise policy, data owners, etc.) is affected, this brings out the issues. References are made to both of the data models, and additional definitions are provided for entitlements and roles.
Session Management: In addition to the typical state (and perhaps timing) diagrams that developers are prone to use, this addresses the concerns of “data in use,” “data in motion,” “data at rest,” and “data disposal” throughout the software solution’s data life cycle and transaction flow, whether batch or “real-time.” One should not confuse, in this pattern definition, an access “session” (the interaction of end user with system, or system with system) with the data life-cycle sessions of data at points in time.
Each of these classifications of vulnerabilities could have a varying set (some overlapping) of threats based upon the threat model. Basically, we are addressing a set of security aspects and focusing on a set of possible attacks. But rather than listing the attacks and referencing the method to reduce the risk for each classification (incident-based), we will look at providing a checklist, or asset-based, methodology for each of the classifications.
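As a starting point for that asset-based approach, the sketch below expresses a checklist per classification; the items are condensed from this post, not a complete methodology:

```python
# A sketch of an asset-based checklist keyed to the four classifications
# above; items are condensed from this post, not a full methodology.
CHECKLIST = {
    "Digital Identity": [
        "authentication/authorization data model reviewed",
        "privilege settings and attribute placement reviewed",
    ],
    "Access Management": [
        "access scenarios and exposed data documented",
        "data model and sequence diagrams produced",
    ],
    "Identity Management": [
        "operational impact (people, process, technology) assessed",
        "entitlements and roles defined",
    ],
    "Session Management": [
        "data in use/in motion/at rest/disposal traced",
        "state and timing diagrams reviewed",
    ],
}

def gate_review(completed: dict) -> list:
    """Return the items still outstanding at a security gate review."""
    return [f"{area}: {item}"
            for area, items in CHECKLIST.items()
            for item in items
            if item not in completed.get(area, set())]

done = {"Digital Identity": {"authentication/authorization data model reviewed"}}
print(gate_review(done))  # everything not yet evidenced
```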
Steve Primost CISSP, CISM
Information/Application Security Architect
Road Map for an Application/Software Security Architect (Part 3), posted 2010-04-09