
FreeIPAv2:Overall Design of Policy Related Components

FreeIPAv2 Development Documentation

Please note that this content was made during FreeIPAv2 development and it may no longer be relevant for current versions of FreeIPA. We left the content here for study and archaeological purposes.

Please check our Documentation for a recent list of topics.

Policy Related Concepts

One of the requirements for IPA v2 is to provide centralized policy management. Policy, however, is a very loaded term and can mean different things in different contexts. During our analysis of the use cases we concluded that there are different kinds of policies that need different kinds of treatment. We started by sorting policies into different buckets based on their characteristics.

  • The first kind of policy we have identified is host-based access control (HBAC) policies. These are the policies that define which users can access which machines. The main characteristic of an HBAC policy is that it should be very dynamic and reflect reality as closely as possible. Any delayed processing and caching should be minimal, to avoid granting a user access to a host when his permission to access the host was just revoked. It was decided that dynamic policies like HBAC, where a change of the user's group membership can significantly affect the scope of what he can do or access, should be stored in the DS, and clients should do LDAP searches every time, with very minimal caching, while the system is online. In the offline case the cached HBAC policies will be used (if the IPA client policy allows it). The HBAC design is covered in detail on the FreeIPAv2:Concepts and Objects page. The schema objects and low-level implementation details can be found on the DS Design Summary page.
  • The second kind of policy is user policies. These are policies that are related to the user and affect the user's environment regardless of which machine he is logged on to. In the Windows world this is called a “Roaming Profile” – a set of settings that follows the user regardless of which machine he logs in to. This kind of policy is not part of v2; it is deferred to a later version.
  • The third and major kind of policy is the machine policy. This is the kind of policy that affects the state of a machine or a group of machines. The idea behind this kind of policy is that the policy is defined on the server, delivered to the client and then translated into a specific configuration of the client software. This configuration can affect what users can do, but the permissions of one and the same user can be different on different machines. These policies are assumed to change infrequently, thus clients can pull and apply them periodically – once an hour, for example.

The rest of the page is dedicated to the machine policies and drills down into the details of how these policies are defined, stored, maintained, delivered and applied.


One of the important concepts that we introduced during analysis and evaluation of the machine policies is the concept of an application. The idea is that a policy usually makes sense in the context of an application or, simply put, “a policy translates into a specific application configuration following a centrally defined rule”. We see the mission of IPA's policy engine as centralized management of the policies (read: security configurations) of different security-related applications like SUDO, SELinux, iptables, and others. Potentially IPA can be used for broader configuration management, but this is not currently a goal for IPA. Though IPA is being designed with broad flexibility and extensibility in mind, it is yet to be seen and proven whether IPA would be the right instrument for a broader configuration management task.
The concept of an application allows sorting policies and dealing with configuration on a per-application basis. This way the SUDO policy translates into the configuration of the SUDOERS file, the apache policy into a collection of apache-related files, and so on.
From the policy management perspective the IPA administrator would use the web UI (or CLI), select an application, define the policy (configuration) of this application and then associate this policy with a set of hosts that should have this configuration. The policy engine and IPA client will do the rest – deliver the policy to the client and translate it into the application configuration. Sometimes, however, the policy should be viewed more broadly than at the per-application level, spanning several applications. To address this we plan two different solutions.

  • The first solution is considered for IPA v2. If time permits we will implement so-called policy profiles. A policy profile is a collection (group) of policies defined for different applications that are in some way related to each other and should be delivered to the same set of hosts. From the perspective of policy management this would mean that the IPA administrator would have to:
    • Define individual application policies
    • Aggregate application policies into a group of policies (policy profile)
    • Apply (assign) this group of policies to a set of hosts
This is a convenience feature to better organize policies and provide a more logical management experience. Without this feature the administrator would have to apply each individual (application-related) policy to a set of hosts.
  • The second solution is to provide a so-called “meta policy”. The idea behind “meta policies” is that their sole purpose would be to become a portal for changing related configuration settings of different applications at the same time. There are several ideas for how this can be accomplished, but this feature is not planned for IPA v2 and thus will not be discussed here in more detail.
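The policy-profile grouping described in the first solution can be sketched as follows. This is an illustrative model only; the profile names, host names, and data layout are assumptions, not part of the actual IPA implementation:

```python
# A profile is a named group of per-application policies; assigning the
# profile to a host group applies every policy in the group at once.
# All names below are hypothetical examples.
profiles = {
    "web-servers": ["sudo-default", "iptables-web", "selinux-httpd"],
}
profile_hosts = {
    "web-servers": ["www1.example.com", "www2.example.com"],
}

def policies_for_host(host):
    """Collect every policy a host receives via its profiles."""
    result = []
    for profile, policies in profiles.items():
        if host in profile_hosts.get(profile, []):
            result.extend(policies)
    return result

print(policies_for_host("www1.example.com"))
# ['sudo-default', 'iptables-web', 'selinux-httpd']
```

Without profiles, each of the three policies would have to be associated with the host set individually.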

Types of Policies

As mentioned above, we are talking about centrally managed policies that define the configuration (configuration state) of an application on a client machine. While evaluating different use cases we realized that there are other situations besides delivering configuration to applications that the policy engine needs to handle. One example is SELinux. SELinux policies are compiled binary blobs. Delivering and applying such a policy means downloading a policy file from the IPA server and running a script or command to apply it. This use case led to an attempt to sort out the different kinds of policies and responsibilities the policy engine should be in charge of. The following subsections dive into the three different responsibilities.

Configuration Policies

The configuration policies are the policies that, as mentioned above, are defined and managed on the IPA server itself. They are delivered to the client and applied using specific client components that we will talk about in more detail later on this page. The following list summarizes the characteristics of the “configuration policies”:

  • Defined on the server itself (not an externally defined blob or file that needs to be distributed)
  • Defined in the context of an application
  • Can have multiple instances of the policy that target different machines
  • There can be more than one policy in the scope of an application. These policies can target overlapping sets of hosts. An example: there is a default policy that applies to a big set of machines and then there is a specific policy that applies to a subset of those machines. These two policies need to be merged on those machines to create a resulting combined policy (configuration).
  • Can be associated with overlapping sets of hosts and thus require merging. To be able to merge policies predictably, precedence rules must be defined for this kind of policy.
  • More or less free-structured contents. The contents of the policy are dictated by the application and there are no requirements on their structure and format.
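The precedence-based merge described above can be sketched like this. The function name and the representation of a policy as a `(precedence, settings)` pair are illustrative assumptions; the semantics match the merge log shown later on this page, where a higher precedence number is merged on top:

```python
# Minimal sketch of merging configuration policies destined for one host.
# Policies are (precedence, settings) pairs; higher precedence wins on
# collision, complementary keys are kept from both policies.

def merge_policies(policies):
    merged = {}
    # apply lower-precedence policies first, so higher ones override them
    for _, settings in sorted(policies, key=lambda p: p[0]):
        merged.update(settings)
    return merged

default = (4, {"timeout": 30, "log": "on"})   # broad default policy
lab     = (39, {"timeout": 5})                # narrow override policy

print(merge_policies([lab, default]))
# {'timeout': 5, 'log': 'on'}
```

Note how the key defined only in the default policy survives the merge, while the colliding key takes the higher-precedence value.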


Actions

An action is a different kind of policy. It is actually more an operation than a policy. There are always two different views on system configuration:

  • One view is state oriented - “here is the configuration, make the application be configured like this on the client so that the final state of the configuration matches the specified policy”.
  • Another view is action or operation oriented - “run this script to apply SELinux policy”

The good practice is to view configuration as a state, but in reality both mechanisms – “change state to be like this” and “perform action” – are needed. The “actions” in IPA give administrators the capability to centrally control the execution of different operations and to apply policies that are not managed directly by IPA. In this case IPA just provides the delivery mechanism and a framework to do something with the delivered data or script. We envision an action as a combination of download and run operations. The download portion of the action defines what file to download (if any); what its name, location, ownership and permissions on the target system should be; what condition should be met to actually download the file; and what script to run, if any, under which user context and how frequently. The following list summarizes the characteristics of actions:

  • Actions deal with operations, not with state
  • Actions can do download and execute operations. Any action can have either part or both.
  • Actions all have one and the same format, but not all the data needs to be defined if the action involves just a download or just an execution.
  • Actions can be destined for different sets of machines
  • More than one action can be targeted at a machine, but there is no need to merge. Actions are independent of each other and should be executable in any order. If an action depends on another action, these actions should be viewed as one action and combined together.
  • Though actions can sometimes be logically associated with applications, there is no sense in binding them together. The main value of application binding for the configuration policies is that their priorities can be defined in the scope of the application. Actions do not need priorities, and one action can also affect several applications at the same time.


Roles

Roles are the third category of policies that the policy engine will deal with. Many applications need to answer the question of whether or not a user can perform a specific action on a machine inside this or that application. Currently this kind of access control is implemented on a per-application basis. IPA's role mechanism would allow an application to take advantage of the centralized role management defined in IPA instead of maintaining or building its own role-based access control solution. The application usually knows the user that tries to perform an action and what the action means. The application is usually interested in just a yes or no answer about whether the user can perform this action or not. The desktop team developed a library named Policy Kit that applications (currently mostly desktop applications) can use to ask whether a user can perform an action. The mapping between users and actions in Policy Kit is currently done based on direct user association with actions on the local machine. The data is stored on each individual machine in files that contain a direct mapping of users to actions. With IPA v2 a new level of abstraction can and will be created. The operations will be aggregated into roles, and users and groups of users will be mapped to roles. Policy Kit will then receive role information from IPA via the policy engine. On the other side, users will be mapped to application roles in IPA. Policy Kit will integrate with the IPA client and will use IPA client interfaces that return to the caller the set of roles the user is mapped to in the context of an application. The following list illustrates what would happen if an application decides to take advantage of the IPA-backed Policy Kit.

  • In the policy engine of IPA an administrator will define roles (usually once per application). Roles should be viewed as named (tagged) sets of permissions/operations one can perform if he or she logs into an application. Most likely, canned role definitions will be provided for different applications as the product matures.
  • The administrator will define the mapping of users to different roles. The mapping follows the notion that users can assume different roles within the same kind of application running on different machines. So to define a role mapping the administrator will associate a user or group of users and a host or group of hosts with a role (role tag) in the context of an application.
  • The IPA policy engine will deliver the policy (definition of roles) to the IPA client machine. The IPA client will translate this information into a format that is more suitable for Policy Kit to use.
  • The application integrated with Policy Kit will request an access control decision from Policy Kit.
  • In turn Policy Kit will ask the IPA client what role(s) the user is mapped to on the current machine in the context of the application. The IPA client will provide an interface to get user roles. The details about this interface can be found on the IPA Client Design Overview page.
  • By matching user -> role -> operation to each other, Policy Kit will determine whether the user can perform the operation or not.

Applications are not required to use Policy Kit. Applications can take advantage of IPA's role infrastructure directly, processing role data and mapping users to roles themselves.
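The user -> role -> operation matching in the steps above can be sketched as follows. The data layout and function are purely illustrative assumptions about how a Policy Kit style check might work once the role definitions and user mappings have been delivered:

```python
# Role definitions, as would be delivered by the IPA policy engine:
# each role names a set of operations it permits (example data).
ROLE_DEFINITIONS = {
    "admin":    {"mount-cdrom", "umount-cdrom", "reboot"},
    "operator": {"mount-cdrom", "umount-cdrom"},
}

# User -> roles mapping, as would be returned by the IPA client
# interface in the context of one application (example data).
USER_ROLES = {"alice": ["admin"], "bob": ["operator"]}

def can_perform(user, operation):
    """Yes/no answer: does any role the user holds permit the operation?"""
    for role in USER_ROLES.get(user, []):
        if operation in ROLE_DEFINITIONS.get(role, set()):
            return True
    return False

print(can_perform("bob", "reboot"))    # False: operator lacks reboot
print(can_perform("alice", "reboot"))  # True
```

The application only ever sees the boolean result; the role aggregation stays inside the lookup.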

The following diagram gives an overview of the roles and related concepts in IPA v2.


The following list summarizes the characteristics of roles:

  • Roles are named sets of operations/permissions one can perform in the context of an application.
  • Roles make sense in the context of an application
  • There is only one set of roles per application
  • The role set should be delivered to all the machines that are going to host the application these roles are assigned to.
  • Since there is only one definition of the roles per application, there is no need to merge or prioritize role definitions.
  • Roles can be inclusive or exclusive. Inclusive roles mean that a user can have several roles in the context of the application and the permissions defined by these roles should be merged (combined) together. The logic for merging role meanings (sets of permissions) in this case is application specific and left to Policy Kit or the application itself to implement. If the application supports exclusive roles, only one role can be assigned to a user at a time.
  • Role definitions should have a specific format so that the UI engine can pull the possible role names from them. This creates a list of all possible roles users can be mapped to, which is convenient for the UI that defines the mapping of users to roles.
  • The mappings of users to roles will be stored in the DS and will be looked up in the DS with caching for only a very short period of time. This will be done to dynamically adjust user capabilities when a user is moved from one group to another.

Comparison of the Kinds of Policies

As one may have noticed, the policy engine will be in charge of configuration policies, actions and roles. All these parts, though significantly different, have the following things in common:

  • The same mechanism will be used to store and manage the information on the server
  • The delivery of the information will be performed via the same channel – the “policy downloader” client component.

Policy Management

In this section we will talk a little bit about the anticipated user experience of managing policies.

Defining Policy

Depending upon the task, the administrator will:

  • Define a new or modify existing configuration policy for an application.
  • Define a new action.
  • Define or change application roles.

Editing Configuration Policy

To edit a configuration policy the administrator will navigate to the appropriate place in the UI (menu item) that allows him to define configuration policies. There he will select the application for which he wants to define a policy. Then he will see the list of policies already defined within the context of the application. There the administrator will be able to change the order of the policies in the list to change policy priority, associate a policy with a group of hosts, select a policy for editing, copy or remove a policy, or start editing a new policy.
The policy data for each application will be different. To accomplish such flexibility the policy engine should be extensible to allow dropping in different policy templates. The UI engine will use these templates to render the UI and allow the administrator to set values for the policy.
Potentially there can be several different configuration policies that not only have different values for the same configuration parameters but also define values for different configuration parameters. For example there can be a default policy that defines a value for parameter X while another policy applicable to the same application defines a value for parameter Y. All the configuration policies for an application are prioritized in one list. If two different policies are destined for one and the same host they will be merged. It is anticipated that usually there will be one set of configuration parameters expressed by one and the same template (or different versions of it). In this case the merge logic will assume that the policies can be merged and that the data is complementary. In the rare occasion when an application requires more than one set of configuration data, i.e. different templates, it should be split into two different unrelated applications from a management perspective. In the future we might allow more than one configuration template per application.
The term template used in this section is a way to describe the fact that the policies will be flexible and there will be a way to define the contents of the policy and tell the UI how to collect policy data from the user. In reality there will be no templates but rather policy structure definitions named schemas. We will talk about this in more detail in later sections on this page.

Associating Policies to Hosts

Once a policy is defined the administrator will have the option of associating the policy with a set of hosts. This set can be expressed as a mixture of hosts and host groups. The administrator will select hosts and host groups from a list. As mentioned above, there is a chance that two different policies will be associated with the same subset of hosts. In this case the policies will be merged based on the merge rules defined for the policy.

Preview of the Policy

It is very important for an administrator to understand how the data he entered into the policy form will actually translate into the configuration file or files on the client. In IPA v2 we plan two kinds of previews. One preview will just create a configuration file out of the selected policy and allow the administrator to check that everything is OK with the policy he is working on. The other preview mechanism will show a preview of the policy in the form in which it will be delivered and saved in the configuration files on the targeted host. This preview will ask the administrator to enter the target host and will perform merges of the policies if necessary. The preview will not show what the result would be of merging the central policy with the local files on the host if local merges are allowed. This feature is deferred to future releases.
In addition to the preview of the policy itself, the preview will actually show how the merge was done. It is not clear how exactly this will be done, but the vision is to have a log that would contain something like this:

  Evaluated SUDO policies for host ipa-client@test.lab.com
     3 policies are destined for host ipa-client@test.lab.com
        Default Corporate Policy – precedence 4
        Department Policy – precedence 16
        Lab Policy – precedence 39
     SUDO application does allow merging policies
     Default Corporate Policy was selected as a base for merge
         Department Policy was merged on top
         Lab Policy was merged on top

With such a log the administrator would know what the logic was and how things are going to be merged.

Policy Validation

It is important to realize that the policy UI (and CLI) will not have any specific business logic to validate that one instance of a policy does not contradict another instance of the policy. There will be syntactical validation and validation of the consistency of data within the policy itself, but not across several policies. Several policies will just be merged based on the merge rule defined for the policy and the priorities of these policies. The customer has two options:

  • Develop and test policies in the test IPA domain
  • Use the preview mechanism to determine what policy will be downloaded to the client.

It is important to underline that the preview mechanism does not support previewing the merge of the local files with the centrally defined policy in v2. In the future this might be addressed, most likely by special command line utilities that would allow this kind of merge preview if run from the host the policy targets.

Policy Lifecycle Management

In the future we plan to provide full policy lifecycle management. In IPA v2 we will provide parts of it.

  • The system will support two states of a policy – enabled and disabled. This means that disabled policies are not sent to the clients, only the enabled ones. This is mostly useful for the case when the whole policy should be disregarded as if it were deleted, without actually deleting it.
  • The policy engine will have two-step commit functionality. One will be able to write a policy but not apply it yet. This is useful when the policy needs to be modified and prepared but applied later, at the time of an outage window, while the old version of the policy is active. The administrator will have the capability to edit the policy, save it and then apply it later. In the future we might consider doing scheduled commits. The UI of the policy list will be built in such a way that it is easy to specify whether you want to use an uncommitted policy in a preview or not. This allows the administrator to see whether the policy is correct in the context of other already applied policies before pushing it out.
  • The policy will have a special field that contains comments about the changes made to the policy.
  • There will be one-deep history for the policy. This functionality is useful when a policy was applied, something stopped working, and there is a need to roll the policy back to the last known good state. In the future we will make the history depth configurable.
  • There is no plan to support an approval mechanism for the submission of policies in IPA v2; this functionality will be considered for later releases.

Policy Engine. Under the Hood

Now it is time to talk about the mechanisms that will be used to define, manage, store and deliver the policies. A policy will be represented by a blob of XML data. The following section dives into the details of what the XML policy will look like.

Policy Format

XML has been selected for its flexibility and extensibility. Each policy XML file will have some structure that is common to all the policies of the same kind (configuration policies, roles, actions).


First, each policy regardless of its kind will contain a metadata section at the top of the file. The metadata will hold information about the policy, including but not limited to:

  • Version of the policy
  • Name of the policy
  • Who created this instance of the policy


The following example shows a subset of the metadata:

      <description>simple sudoers example, allowing mount/umount of a CD-ROM</description>
      <app>SUDO</app>                    <- means that this policy is for SUDO
      <mergeXML>yes</mergeXML>           <- means that in case of collision the XML files need to be merged
                                            (the alternative means the policy with higher precedence wins)
      <local>no</local>                  <- do not merge with local files - applicable to the configuration section only

The example shows the kind of metadata that will be stored in the policy file. It is currently unclear what the final set of required and optional metadata elements will be. It will be published separately as prototyping of the different policies proceeds.
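A client component consuming such a policy would read the metadata before deciding how to process the body. The sketch below parses the example metadata with the standard library; the element names follow the draft example above and may change as the format is finalized:

```python
# Parse the draft metadata section of a policy blob (element names
# are taken from the example above and are still subject to change).
import xml.etree.ElementTree as ET

METADATA = """
<metadata>
  <description>simple sudoers example</description>
  <app>SUDO</app>
  <mergeXML>yes</mergeXML>
  <local>no</local>
</metadata>
"""

def read_metadata(xml_text):
    """Return the metadata as a flat tag -> text dictionary."""
    root = ET.fromstring(xml_text)
    return {child.tag: (child.text or "").strip() for child in root}

meta = read_metadata(METADATA)
print(meta["app"], meta["mergeXML"], meta["local"])   # SUDO yes no
```

Here `mergeXML` and `local` would drive the merge behavior described in the inline comments of the example.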

Main Body

There are three different kinds of XML policy files – one for each kind of policy we identified earlier.

Configuration Policy

For the configuration policy the main body will contain the configuration data for an application. This configuration data is different for different applications, so there is no predefined structure for this information. To read more about how the structure of the XML file is defined, read the Relax NG section later on this page.

The following is an example of the configuration policy for sudoers:

        <path>/sbin/umount /CDROM</path>
        <path>/sbin/mount -o nosuid,nodev /dev/cd0a /CDROM</path>
        <path>/sbin/shutdown -r now</path>

This snippet is specific to the SUDOERS file and SUDO's configuration policy. For some other configuration policy it will be completely different.


Role Policy

General Overview

The structure of the XML files that define roles will be a little more formal. The reason for this is to make it easier for the IPA UI engine to parse the XML blob and extract the names of the roles so that they can be used in other places in the UI, especially when someone maps users to roles.

The following example shows how the roles can be organized. It is an example and subject to change as we do the prototyping of the role-defining XML files:

   <roles>                               <- Roles section
       <role name="Common">
           <...>                         <- Role-specific data
       </role>
       <role name="Uncommon">
           <...>                         <- Role-specific data
       </role>
       <role name="Default">
           <...>                         <- Role-specific data
       </role>
   </roles>

The suggested structure of the XML roles file is partially fixed. The data under a role element can be anything the application needs, but the other data, especially the elements that define the role name, should follow a predefined structure so that they can be easily extracted from the file.
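Because only the `<role name="...">` elements are fixed, extracting the role names for the UI is straightforward. The sketch below shows the idea using the draft structure above; it is an assumption about how the UI engine might do this, not the actual implementation:

```python
# Pull role names out of the fixed part of a roles file; the content
# under each <role> element is opaque application data and is ignored.
import xml.etree.ElementTree as ET

ROLES_XML = """
<roles>
  <role name="Common"><perms/></role>
  <role name="Uncommon"><perms/></role>
  <role name="Default"><perms/></role>
</roles>
"""

def role_names(xml_text):
    """Return the list of role names declared in a roles file."""
    root = ET.fromstring(xml_text)
    return [r.get("name") for r in root.findall("role")]

print(role_names(ROLES_XML))   # ['Common', 'Uncommon', 'Default']
```

This list is exactly what the user-to-role mapping UI would present to the administrator.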

Roles for Policy Kit Enabled Application

Each application will have its own definition of roles and what they mean. However, the applications that leverage Policy Kit will have more in common. Initially we considered treating Policy Kit as an application and making it contain one role definition for all other applications that leverage Policy Kit. Such an approach is not flexible and thus was rejected. Each application that leverages Policy Kit will have a similar template for the roles so that it is easier to develop new role definitions for new Policy Kit enabled applications. All Policy Kit enabled applications define roles based on the application's actions (not to be confused with the IPA actions described above). Each application that integrates with Policy Kit defines its own set of actions. On the local host the actions are a part of the application and are installed when the application is installed. On the server side these "actions" need to be known too, so that they can be used in the role definition policy. There are several options for how these actions can become known to the server side. We have evaluated the following options:

  • Store application actions for Policy Kit in the DS - this approach involves a lot of overhead. Also, in the grand picture, storing the Policy Kit actions in the DS does not fit the high level architectural approach. It would also require the policy processing engine to be much smarter and allow dynamic lookup of data in the DS. We plan to implement dynamic lookups, but for host object attribute substitution only. If we went the LDAP route we would have to develop a much more complex feature.
  • Store the application action list hardcoded in the schema - this approach requires knowing all the actions in advance. It does not scale well, since modifications to the applications would cause the IPA administrator to update the schema for the application and redistribute the new schema to all the clients.
  • Store the possible actions in a special section of the role definition policy itself - this approach calls for a more generalized schema that can hold the list of the actions that are defined in the scope of the application. It would be possible to populate this list manually in the UI or by loading it into IPA from a file that came with the application. This approach seems most promising. It means that the role file schema will also contain a special section, after the metadata and before the role definitions, that lists all possible actions that can be used later in the roles section of the role definition policy.


Action Policy

The main purpose of actions is to perform some operation on the client system. Sometimes the action requires some data to be delivered to the client before the action is executed. This data can be a policy to apply, a configuration file to copy or a script to execute. After analysis and comparison of different use cases we came to the conclusion that an “action” should consist of a file section and an execution section. Either of these two sections is optional if the other is present. This means that an action can specify just that a file should be downloaded, or that a script should be run, or both. The file section describes what to do when the data is delivered to the client system. It specifies:

  • The data itself (which is read from the file when the policy is defined by administrator and embedded as an element into the XML blob)
  • The full path of the file that needs to be created on the client system
  • Ownership on the file
  • Permissions on the file
  • SELinux label for the file
  • ACL for the file
  • Condition – a script or command that needs to be run on the client. If that script returns zero or a certain output then the file should be downloaded. The condition allows differentiating client systems based on their properties, for example the version of the OS or the version of the application. It is not decided yet whether there will be one condition targeting both the file download and the action, or two separate conditions, one controlling the file download and another the execution of the command. It is proposed that if the file download does not happen because the condition was not met, the command should not run. With this assumption in mind there should be only one condition that applies to both the file and the command.
  • Cleanup flag - should the file be removed after the action was successfully run.
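The file section fields above could be applied on the client roughly as follows. This is a hedged sketch only: the dictionary layout and function name are assumptions for illustration, and ownership, SELinux label and ACL handling are omitted for brevity:

```python
# Sketch of applying an action's file section: evaluate the single
# shared condition, then install the delivered file with the requested
# permissions. Field names mirror the bullet list above.
import os
import subprocess
import tempfile

def apply_file_section(section):
    """Run the condition; if it passes, write the file and set its mode."""
    cond = section.get("condition")
    if cond:
        # the condition is met when the script/command exits with status zero
        if subprocess.run(cond, shell=True).returncode != 0:
            return False          # condition failed: skip file and command
    with open(section["path"], "wb") as f:
        f.write(section["data"])  # data embedded in the policy blob
    os.chmod(section["path"], section["permissions"])
    return True

section = {
    "path": os.path.join(tempfile.gettempdir(), "ipa-demo.conf"),
    "data": b"demo payload",
    "permissions": 0o640,
    "condition": "true",          # trivially met condition for the demo
}
print(apply_file_section(section))
```

A real implementation would also honor the ownership, SELinux label, ACL and cleanup fields, and would gate the command section on the same condition result.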

The “command” section of the file will contain the command itself and the “schedule” element. The “schedule” element will define how frequently and under what conditions the command should be executed. The action XML blobs will have one and the same predefined structure for all the actions that can be defined in the system. The structure of the action XML blob might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<ipa xmlns="http://freeipa.org/xml/rng/ipaaction/1.0">
    <name>simple ipaaction example with an URL</name>
    ...
    <command>test -e /etc/redhat-release</command>
    <command>/bin/rm /tmp/something.txt</command>
    ...
</ipa>

<?xml version="1.0" encoding="UTF-8"?>
<ipa xmlns="http://freeipa.org/xml/rng/ipaaction/1.0">
    <name>simple ipaaction example with embedded data</name>
    ...
</ipa>

The action XML blob structure is not finalized yet but gives a good example of what an action constitutes. It is not clear how far we will go with the implementation of the scheduler in the action. As an option there might be a series of configuration policies that would translate into the crontab configurations that would be equvalent in functionality to the actions but would require a bit more customer involvement.
The file to download in the file section can be specified as embedded binary data placed into the policy itself. In this case the action will contain the corresponding element. Alternatively the file to fetch can be expressed as a URL. In this case the action processing part of the policy downloader will fetch the file using the specified URL.

Relax NG

In the previous section we looked at the different formats of the XML policy blobs depending upon the kind of policy the blob represents. In this section we will talk about the means to define and enforce their structure. Historically there have been several ways of defining the structure of an XML document. During the early evaluation of the alternatives we identified Relax NG as the best option for defining the structure of the XML blobs. Relax NG is an XML based language for defining the structure of an XML file. Relax NG was selected for its flexibility and ease of use.

Policy Schema

The following Relax NG schema gives an example of how the configuration policies will be defined using Relax NG schema.

Page removed
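The schema example originally shown here was removed. Purely as an illustration (the element names below are hypothetical and not the actual IPA policy schema), a Relax NG grammar for a simple configuration policy might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://relaxng.org/ns/structure/1.0"
         datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <start>
    <element name="policy">
      <!-- hypothetical metadata section -->
      <element name="metadata">
        <element name="application"><text/></element>
        <element name="precedence"><data type="integer"/></element>
      </element>
      <!-- hypothetical settings section -->
      <element name="settings">
        <zeroOrMore>
          <element name="setting">
            <attribute name="name"/>
            <text/>
          </element>
        </zeroOrMore>
      </element>
    </element>
  </start>
</grammar>
```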

Documenting Schema

The Relax NG schema language supports namespaces. One of the namespaces is used for annotation of the schema itself.

Schema and UI

The Relax NG schema can contain not only the description of the XML file structure but also the description of the UI that needs to be used to interact with the user and collect the values for the policy fields. A special namespace is used to annotate which UI elements should be used to represent the different data elements that need to be collected from the user. The exact elements and tags are under design now and will be added later as we move forward with prototyping of the policy rendering UI.


In addition to RNG, we will use Schematron. XML Schema, Relax NG, and others are prescriptive in that they stipulate what elements may appear, in what order and with what content. Schematron is rules-based and allows expression of constraints that the other schema languages cannot, such as "when element <foo> has value <bar>, element <baz> must not exist". Schematron by itself is almost never a good choice, but it is common to use it as an adjunct to the other schema languages. Relax NG has direct support for embedding Schematron annotations.

Special Tags in Policy Schema

In addition to the UI and documentation tags, the schema can in the future contain tags for audit so that special events are logged not only when the policy is saved or applied but also when specific values in the policy change. It is unclear whether we will have time to implement this feature in IPA v2.

Another valuable feature we will consider implementing in IPA v2 is references from the policy to host object attributes. The idea behind this feature is that it might be useful to have a policy be a template that is customized based on the attributes of the host object the policy is delivered to. The current vision of this feature is that the schema for a policy will contain a special tag holding the name of a host object attribute. The client will then substitute this tag with the real value from the host entry. It is unclear if we will have time to implement this feature in the IPA v2 timeframe.

Storing Policies

Now that we know what format the policies are stored in and how we will enforce the structure of the policy, we can talk about the storage mechanisms used for storing policies. As mentioned above, the policy itself is an XML blob. We looked at two options for how the XML blobs will be stored in the system. The first option was to use the file system, as Microsoft does; the second was to put the XML blobs into the DS itself. In both cases the DS should have “helper” or so called “link” entries that help find the right policy. So when we were comparing the two options we kept in mind that:

  • There will be a DS entry with a link to the policy
  • We need to think about replication
  • We need to think about scalability of the solution and performance

The file based approach has several significant issues but also some advantages.


Advantages:

  • Can store big chunks of data
  • Scalable

Disadvantages:

  • Changes to the files have to be replicated separately from the changes to the DS entries
  • Potential de-synchronization between the DS and the file system
  • A lot of work related to replication
  • The available file replication packages are not robust enough for our needs
  • File based access control (different from the DS based access control)
The DS approach has the following characteristics:


Advantages:

  • Consistent multi-master replication out of the box
  • Much less work
  • Same DS based access control rules as for the other objects in the system

Disadvantages:

  • Scalability concerns if policies become really big

We have chosen the second option for v2. We will implement it in such a way that we can later switch to using file system if we face the scalability issues.
We plan to compress the policies before saving them in the DS entry. XML has a very good compression ratio. Since policy management is not a frequent task and it is expected that new versions of policies will be developed in a test environment, the impact of the compression and decompression operations should be minimal.
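As a minimal sketch of the compression step (the choice of zlib here is an assumption; the actual algorithm has not been finalized):

```python
import zlib

# A repetitive XML policy blob, standing in for a real policy
policy_xml = (b'<?xml version="1.0" encoding="UTF-8"?>\n<ipa><name>example</name>'
              + b"<setting>value</setting>" * 200
              + b"</ipa>")

# Compress before storing the blob in the DS entry
compressed = zlib.compress(policy_xml, 9)

# Decompress on the client after download
restored = zlib.decompress(compressed)
assert restored == policy_xml

# Repetitive XML compresses very well
print(f"{len(policy_xml)} -> {len(compressed)} bytes")
```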

Storing Schemata

The schemata for the policies will be stored in XML files on the file system and will not be replicated. This means that when a new schema for a new application is added, the schema definition files should be copied to all replicas manually. Since this should be viewed as a system upgrade, it is acceptable to require manual installation of the schemata files on the different replicas rather than provide automatic replication of the schema definitions for policies. In the future we will consider storing the schemata in the DS too, but this is currently outside of the scope of IPA v2.
A more detailed description of how to add support for policies for new applications can be found later on this page.

IPA Client and Policy Delivery

The IPA client will use two components, the IPA provider and the Policy Downloader, to perform policy downloads. It is a pull model. The policy downloader will periodically ask the IPA data provider for the list of the new policies it needs to download. It is expected that the IPA provider, being an LDAP “guru”, will be in charge of doing the different kinds of sophisticated DS lookups. The actual logic of those lookups is defined later on this page. As a result of the lookup the data provider will return the list of the policies the policy downloader should download. The policy downloader will connect to the IPA server using the XML-RPC mechanism. The reason we will use XML-RPC is to abstract the storage mechanism. We anticipate that we might have to change the storage mechanism in the future from DS based to some other method if we see scalability problems.
The policy downloader will request the policies based on the returned list. The policies will be requested via XML-RPC calls one by one. Each policy will be delivered compressed. The client code will uncompress the policy, perform any special tag substitutions and store the resulting XML blob in a file on the system. The file system directory hierarchy is covered later on this page.
In the current proposal the policy downloader and the data provider are two separate components that both connect to the IPA server. It makes sense to logically separate the duties of these two components since one is responsible for doing the DS lookups while the other downloads and processes the policies. Physically we might consider combining them into one and the same process. The idea is that it would be easier to update the client and integrate it with other data and policy providing back ends like Samba or a 3rd party central server. Regardless of whether these two components are combined, there is pretty much the same amount of work keeping them separate or together under one process umbrella. The current plan is to implement them as two separate processes in IPA v2. Later they can be refactored for better extensibility.
We also evaluated the approach of making the policy downloader's XML-RPC calls rely on server side processing rather than asking the data provider to do the searches. This puts a lot of burden on the server, especially in deployments with many clients. It is hard to assess which approach is better without prototyping. The current plan is to do the lookups using the data provider and to process the policy lists on the client. In the future we might re-evaluate this approach if we face performance and scalability problems.
The configuration policies that are destined for one and the same application will be merged together based on the merge rules. The resulting policy will also be stored in a special place in the file system. Then the policy downloader will launch the XML converter, passing in the XML file name as an argument. The XML converter is the utility that translates the XML policy into the format the application expects. The current plan is to have a converter utility that will use an XSLT template to translate the contents of the XML policy into some other format more native for the application. An XSLT template is, in essence, an XML based program that defines how to process an XML file and convert it into something else. Every application will have its own XSLT template to use with the converter. Which template to use will be defined in the metadata section of the policy itself, as shown earlier on this page. The name of the XSLT template in the metadata of the policy will be populated automatically based on the schema. It is possible for an application to have more than one XSLT that needs to be applied. In this case the metadata will have multiple XSLT templates specified and the policy downloader will launch the conversion utility with each of the specified XSLTs. So far we envision the following XSLT templates that we would provide:

  • XSLT template for SUDO files
  • XSLT template for IPA client configuration
  • XSLT template for SELinux roles
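As a rough sketch of what the converter step accomplishes (the element names and the key = value output format here are hypothetical; the actual implementation will drive an XSLT processor with per-application templates):

```python
import xml.etree.ElementTree as ET

# Hypothetical policy blob; the real element names come from the schema
policy_xml = """<policy>
  <settings>
    <setting name="MaxSessions">10</setting>
    <setting name="LogLevel">debug</setting>
  </settings>
</policy>"""

def convert_to_native(xml_text):
    """Translate an XML policy into a key = value config file body."""
    root = ET.fromstring(xml_text)
    lines = []
    for setting in root.iter("setting"):
        lines.append(f"{setting.get('name')} = {setting.text}")
    return "\n".join(lines) + "\n"

print(convert_to_native(policy_xml))
```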


For text files, after the conversion is done there can be yet another step – merging the generated configuration file with the local configuration file. The rule about merging the files is based on a flag stored in the policy's metadata. If the flag is omitted, the policy should assume that IPA is the authoritative source of the configuration information for this application and the local configuration file should be overwritten. If the flag indicates that the files should be merged, the policy downloader will launch the local file merger utility. There will be no generic local file merge utility; the complexity of the utility depends on the configuration files it needs to merge. For simple merges a simple shell script can be used. For more complex merges we plan to use the Augeas library. To read more about the Augeas project see http://augeas.net/. The specific merge logic will be implemented as we prototype local file merges. The ability to merge with local files is a property of an individual policy. In the UI an administrator will be able to define whether the policy should be merged with local files or not. This creates a situation where there might be two different policy instances destined for a host: one can be merged and another can't. A special attribute of an application object will define whether the result of the merge of such instances should or should not be merged with local files.

Local XML Processing Files

For the policy downloader to be able to process the XML file and for the converter to translate the policy into something else, the schema definition and XSLT files must be installed on the client system for every supported application. For the applications supported out of the box, the Relax NG and XSLT files will be a part of the distribution. For applications for which IPA integration is enabled later, the files will have to be distributed separately. For more details on how this can be done see the Adding Support for New Applications section at the bottom of this page.

File System Structure

The downloaded XML files after extraction and file substitution will be stored under the directory structure in the following directory: /var/cache/ipa/policy/xml. Under it there will be three subdirectories:

  • config – for configuration files
  • roles – for roles
  • actions – for actions

Under the “config” directory there will be subdirectories with the application names. Under each application directory there will be files with names that correspond to the policies' unique identifiers. For more details about policy unique identifiers see the DS Schema and Policy Related Objects section later on this page. There will also be one policy file named final.xml. This is the policy created as the result of merging the policies. If there is only one policy destined for the machine and there is no need to merge, final.xml will be a link to the only policy file.

Under “roles” there will be subdirectories with the application names too. Each application directory will contain one file (if any) that describes roles. Since there is never more than one file with role definitions per application, there is no need to merge role definitions. However, the application might expect the roles to be translated into some other format, so a converter will be run when the role definition is downloaded.

The “actions” directory will not have any subdirectories. All the actions will be stored in the “actions” directory itself. The name of the action policy file will be based on the policy unique identifier.

The time stamp of the file creation/modification will be used to determine whether a new version of the policy needs to be downloaded. Alternatively (if we have time) we can implement a sequence based approach to determining whether the policy needs to be refreshed. For details about the policy download logic see the Policy Lookup Logic in DS section later on this page.

Note: Renaming applications will not be supported, and an attempt to do it manually might cause problems on clients. This should be an acceptable trade off since application names should be treated as an integral part of the system.

Merge Rules

There are two kinds of merges that the system might perform:

  • Merging several (more than one) application configuration XML policy files between each other.
  • Merging the centralized policy with local policy.

Merging XML

In IPA v2 we will support only two XML file merge options. The first one is not really a merge; it just states that the policy with the higher precedence wins. Such an approach is useful when, for an application A, there are several instances of the policies with pretty similar contents but assigned to different groups of hosts that can potentially overlap.
The other option is an additive merge. This means that the policy with the lowest precedence is taken as a base and all the other policies that need to be merged with it are laid over it, overwriting common data attributes. As a result a combined policy is created. The merge algorithm will be the same on the client and in the “preview” on the server so that the administrator can actually see the results of the merge operation as he develops the policy on the server. How the policies should be merged for an application is a property of the application and thus will be stored in the application entry. For more details about the DS schema for the application and other related objects see the DS schema topic later on this page. In the future there might be other merge methods supported by the system. It is also planned to create a pluggable framework for handling the merges. This feature is outside the scope of the IPA v2 project.
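The additive merge can be sketched as overlaying settings by precedence. Flat key/value dicts stand in for the XML policies here, and the convention that a higher precedence number wins is an assumption, not the finalized IPA rule:

```python
def additive_merge(policies):
    """Merge policy settings: lowest precedence is the base,
    higher precedence policies are laid over it.

    `policies` is a list of (precedence, settings_dict) pairs.
    """
    merged = {}
    for _, settings in sorted(policies, key=lambda p: p[0]):
        merged.update(settings)  # common attributes are overwritten
    return merged

base = (10, {"LogLevel": "info", "MaxSessions": "5"})
override = (20, {"LogLevel": "debug"})
print(additive_merge([base, override]))
# -> {'LogLevel': 'debug', 'MaxSessions': '5'}
```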
We also evaluated the option of allowing individual policy instances to be marked as ones that can be merged and ones that can't. We think that this functionality can be accomplished by creating two different types of applications: one with XML policy instances that can be merged with each other and one with instances that can't. Allowing a mixture of the two would complicate the UI and merge logic and confuse the user. Since it is possible to add this complexity incrementally in the future, this idea is deferred to later versions.

Merging Local Files

The policy can be authoritative source of the information. In this case the configuration file(s) generated based on the policy will just replace the local configuration files. In other case there might be a need to respect the local files and merge the local file and the central file.
There are two different pieces of information that control the merge logic with local files:

  • Whether the merge is needed. This information should be specific to the policy instance since it can be that one policy instance targeting one subset of machines should be merged but another targeting a different subset of machines should not. A flag in the metadata of the policy instance will indicate whether the merge is needed. By default, if there is no flag present in the XML file, the client will assume that the central policy takes precedence. This flag should be editable in the UI if this option makes sense for the application. In cases when one policy will be translated into several local files, the rules about merges should be defined on a per file basis inside the corresponding XSLTs.
  • What should be merged into what. It can be that the configuration created based on a policy should be merged into the local file or, vice versa, the local file should be merged into it. This kind of information is a part of the policy as a whole and can't be different on different machines since it depends upon the structure and nature of the policy itself. Thus this will be specified in the code of the XSLT template. In case the policy is translated into multiple configuration files, the XSLT logic for each file will have the right code to do the right thing for that kind of file.

We plan to use Augeas for the merging of files. We would create lenses (set of regular expressions used by Augeas). Augeas uses the lenses to parse the file and present the configuration in a form of a tree. The task of merging the configuration files becomes the task of merging two trees together.
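The tree-merge idea can be illustrated with nested dicts standing in for the trees Augeas produces (the real work is done through Augeas lenses; the precedence of central over local values here is an assumption):

```python
def merge_trees(local, central):
    """Recursively overlay the central config tree on the local one.

    Nested dicts stand in for Augeas trees; on a conflict the
    central value wins.
    """
    merged = dict(local)
    for key, value in central.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_trees(merged[key], value)  # merge subtrees
        else:
            merged[key] = value  # central leaf wins
    return merged

local = {"hosts": {"127.0.0.1": "localhost"}, "timeout": "30"}
central = {"hosts": {"192.0.2.1": "ipa.example.com"}, "timeout": "60"}
print(merge_trees(local, central))
```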

Backup – Restore Logic for local Files

In any case when the local file is merged or replaced it is important to keep an original (or last known good copy) of the configuration file that was on the system. It is needed in case the policy is changed so the merge has to be performed again or in case policy is completely rolled back. Then the client shall restore the original configuration file.
The logic described here shows how we will handle the backup of the local files. In case there are several files affected by the policy this logic will be applied to each of the files.

  • If there is no backup file (means this is first time the policy touches the file):
    • Copy configuration file into /var/lib/ipa/policy/backup/<application> with appended extension .orig
    • Copy configuration file into /var/lib/ipa/policy/backup/<application> with appended extension .backup
    • Create a new configuration file in place of the old one by performing merge or just replacing the file depending upon the values of the flags that have been discussed in the previous section
    • Run a one way hash (sha256 for example) of the resulting generated file. Convert the hash into a printable hexadecimal string. Append this string to the name of the backup file. This is done to record the state of the file and be able to make the right decision if the file is manually altered by the administrator.
  • If there is a backup file and the hash string of the backup file is the same as the hash string of the current configuration file then there was no out of band modification of the file. This means that our backup file is current so if we need to perform the merge the backup file will be used as a base for the merge. If we do not need to perform a merge the current configuration file should just be replaced by the one generated based on the policy.
  • If there is a backup file and the hash string of the backup file does not match the hash of the current configuration file then this means that the configuration file was manually modified by someone who had the authority to change it. We then will assume that whoever did it approved the changes so this new file should be treated as a latest correct client configuration. Actions:
    • Copy configuration file into /var/lib/ipa/policy/backup/<application> with extension .backup overwriting the previous backup file.
    • Create a new configuration file in place of the old one by performing merge or just replacing the file depending upon the values of the flags.
    • Run a one way hash (sha256 for example) of the resulting generated file. Convert the hash into a printable hexadecimal string. Append this string to the name of the backup file.
  • If the policy is completely removed then restore the orig file in place of the configuration file and clean the contents of the /var/lib/ipa/policy/backup/<application> directory.
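The hash comparison that drives the decisions above can be sketched with hashlib (the helper names are illustrative, not the actual client code):

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Printable hexadecimal sha256 string, as appended to backup file names."""
    return hashlib.sha256(data).hexdigest()

def was_modified_out_of_band(current: bytes, backup_hash: str) -> bool:
    """Compare the current config file against the recorded backup hash."""
    return file_hash(current) != backup_hash

generated = b"MaxSessions = 10\n"
backup_hash = file_hash(generated)

# No out-of-band edit: hashes match, so the backup is current
assert not was_modified_out_of_band(generated, backup_hash)

# Administrator edited the file by hand: hashes differ, so the
# edited file becomes the new baseline for subsequent merges
edited = b"MaxSessions = 25\n"
assert was_modified_out_of_band(edited, backup_hash)
```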

One of the alternative ideas about the backup file location is to append the full path of the configuration file to the backup path. So instead of backing up the configuration file for some “test application” named /etc/testapp/testapp.conf to /var/lib/ipa/policy/backup/test application/testapp.conf.backup.HASHSTRING as suggested above, the policy downloader will back it up into /var/lib/ipa/policy/backup/test application/etc/testapp/testapp.conf.backup.HASHSTRING. This approach solves the problem of potential naming collisions if there is more than one file that needs to be updated per application. The final decision on which way to choose will be made at the implementation phase.

Reloading Configurations

When the policy is delivered, merged, converted into files, backed up, and merged again, it is still not active. Some applications reread their configuration periodically, but some require a restart of the application to apply a new configuration. The XSLT template for the policy can optionally have a command that will be executed to apply the policy. It can send a signal to an application, restart it, or do something else prescribed by the application that will cause it to reload its configuration.

Handling Roles

The roles will be handled by running the converter using the XSLT template specified in the role file's metadata. Also there can be an “apply” command (like in the config case above) that will most likely just copy the resulting converted role file into an application specific area. It will be the responsibility of the application to deal with the interpretation of the roles.

Some of the applications might choose to take advantage of PolicyKit for access control checking. The IPA client will have a native integration with PolicyKit. This integration will be twofold. Firstly, IPA will be able to download role definitions for different applications that will be handled by PolicyKit and store them in the LDB storage (local LDAP style storage). Secondly, IPA will provide a plugin into the PolicyKit back end. This back end will be able to use the downloaded data put into LDB to make authorization decisions. In the case of PolicyKit integration the role file for an application will be translated into an LDAP ldif file and loaded into the LDB database using the LDB tools. Running this tool will be specified in the “apply command” section of the XSLT.

Handling Actions

The XML schema for the actions is fixed, so processing of all the actions will be performed by one and the same engine (logic). Like configuration policies, the actions will be prioritized, creating a well defined execution order. This allows running several operations in a sequence.
Only actions destined for a specific host will be delivered to that host, and the list will be ordered so the actions are executed following that order.
Each individual action (after caching it on the client system in /var/cache/ipa/policy/xml/actions) will be processed following this logic:

  • If there is a file specified in the action
    • The condition script will be run.
    • If the condition is met the file will be saved in the file system with provided file permissions and ownership.
  • If there is an execution portion of the action
    • If the command is scheduled to run once – run it. Next time it will be run only if the action changes.
    • If we need to run the action periodically, it might be translated into a crontab entry or handled by the policy downloader itself. Since that functionality can also be accomplished by creating crontab configurations via config policies, the periodic aspect of the actions might be deferred to later versions. One of the options on this route is to create an IPA component that would actually run different actions on a scheduled basis. This application would be periodically invoked by cron and check if there is an action to perform. From this perspective it can be viewed as a different application and have its own configuration policy in IPA.
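The per-action logic above can be sketched as follows. The convention that a zero exit code means the condition is met follows the Condition bullet earlier on this page; the helper names and the callback for fetching the file are illustrative:

```python
import subprocess

def process_action(condition_cmd, fetch_file, run_cmd):
    """Run the condition first; only on success fetch the file and run the command."""
    result = subprocess.run(["/bin/sh", "-c", condition_cmd])
    if result.returncode != 0:
        return "condition not met"
    if fetch_file:
        fetch_file()  # save the file with the provided permissions and ownership
    if run_cmd:
        subprocess.run(["/bin/sh", "-c", run_cmd], check=True)
    return "executed"

# A condition that always holds, no file to fetch, a harmless command
print(process_action("true", None, "echo applied"))
```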

Handling Policy Deletes

When the policy is deleted or rolled back on the server, the clients should restore their original state. This is the case when there is no more configuration policy destined for a machine. In this case the policy downloader (with the help of LDAP lookups performed via the data provider) will determine that there are no more policies destined for the current machine in the scope of some application. It will then, depending on the type of the policy:

  • Config policy:
    • Restore the original configuration file(s) from backup
    • Delete all cached XML files from under /var/cache/ipa/policy/xml/<application>
  • Roles: No rollback action is planned for the roles in v2. In the future the roles XML schema can be extended to handle a rollback action. The reason for not providing the rollback is that the application might not behave properly if the roles become completely unavailable.
  • Actions: If an action is removed on the server, it will just be removed from /var/cache/ipa/policy/xml/actions on the client. In the future, when (and if) we support crontabs, we will remove the associated crontabs.

Policy Lookup Logic in DS

The downloading of the policies to the client will be done in two steps. First, based on the request from the policy downloader, the IPA data provider will determine the unique IDs of the policies that need to be downloaded. Then the policy downloader will download each of these policies using the XML-RPC connection. It would be beneficial to deal with actions, configurations and roles separately. There might even be a different polling period defined for each kind of policy. For example, since the roles rarely change, polling for roles more frequently than once in 2-3 hours does not make much sense. The configuration policies might be more time sensitive, so checking for configuration updates might be done once an hour. The checks for new or updated actions might be done once an hour too, or maybe even more frequently. The general idea is that they can be handled separately and independently. At startup the machine will probably request first roles, then configurations and then actions, so that actions run last with the latest roles and configuration in place. However, this order is subject to change. This kind of configuration preference is a good candidate for inclusion in the IPA client policy that IPA v2 will provide out of the box.

Lookup Roles

The policy downloader will first determine that it is time to check for roles. It will inspect the contents of the /var/cache/ipa/policy/xml/roles directory and create a list of the applications and the role definition files it knows about. Something like this:

   {application 1, guid 1, version A}
   {application 2, guid 2, version B}
   {application ..., guid ..., version ... }
   {application X, guid X, version Z}

IPA downloader then will request the list of the roles destined for the machine from the IPA provider.

IPA provider will perform a series of the lookups to determine which role files are destined to the current machine.

  • It is assumed that the data provider downloads and saves the host entry that corresponds to the current host. If the cached record has expired, the IPA provider will update it.

  • IPA provider will then look at the member attribute of the host entry and create an LDAP search request that would return the list of the policy link objects. The search request will be very similar to the one described in the Host Based Access Control Design page.
  • The link object will contain a pointer to the policy object itself. This level of abstraction is needed if we decide to store policies on the file system or in some other data store. For more details about the schema and DS objects that would represent policies see schema section later on this page.
  • The list of the link entries will be returned back to the policy downloader.

Policy downloader then will compare its list of role definition policies with the list of the policies that need to be downloaded. It will determine:

  • Role definitions that need to be removed. In this case the policy downloader will invoke the cleanup logic, if it is defined in the role definition file it is planning to remove. As it was mentioned above this functionality might be deferred to later version.
  • Role definition policies that need to be updated. For each of those policy downloader will:
    • Issue request to get the policy via XML-RPC and get the policy
    • Decompress it
    • Save in /var/cache/ipa/policy/xml/roles/<application> directory overwriting previous version
    • Execute converter if any
    • Execute “apply” command if any
  • New policies that need to be downloaded. For those:
    • Create new storage place under roles directory
    • <all other actions from update use case above>

The logic to compare the two lists will look like this (the logic is written in an abstract language that is convenient for expressing the algorithm; it is not BASIC or 4GL, though it has some splendid similarities):

  // Prepare the two lists for comparison
  // Sort these two lists by application name – assume it is done as we construct both lists
  NeedNextExistingPolicy = TRUE
  NeedNextIncomingPolicy = TRUE
  WHILE TRUE
     IF NeedNextExistingPolicy = TRUE
        GET NEXT ExistingPolicy
        IF ExistingPolicy NOT AVAILABLE
           // This is the end of the existing policy list,
           // so all the remaining incoming policies are new
           WHILE IncomingPolicy AVAILABLE
              GET NEXT IncomingPolicy
           END WHILE
           BREAK // We are done
        END IF
        NeedNextExistingPolicy = FALSE
     END IF
     IF NeedNextIncomingPolicy = TRUE
        GET NEXT IncomingPolicy
        IF IncomingPolicy NOT AVAILABLE
           // This is the end of the incoming policy list,
           // so all the remaining existing policies should be removed
           WHILE ExistingPolicy AVAILABLE
              GET NEXT ExistingPolicy
           END WHILE
           BREAK // We are done
        END IF
        NeedNextIncomingPolicy = FALSE
     END IF
     // We have both
     IF ExistingPolicy < IncomingPolicy
        // Existing policy should be removed
        NeedNextExistingPolicy = TRUE
     ELSE IF ExistingPolicy > IncomingPolicy
        // Incoming policy is new
        NeedNextIncomingPolicy = TRUE
     ELSE // They are equal
        IF ExistingPolicy.Version < IncomingPolicy.Version
           // We got a new version of the policy
           UpdatePolicy(ExistingPolicy, IncomingPolicy)
        END IF // If it is the same version do nothing
        NeedNextExistingPolicy = TRUE
        NeedNextIncomingPolicy = TRUE
     END IF
  END WHILE

This proven algorithm allows comparing two lists in one pass, avoiding MxN fetches and comparisons. Instead it uses just N+M fetches and MAX(N,M) comparisons, and thus is much more performant. I encourage us to use this logic in any place where we need to compare two lists that can be presorted by the same criteria at the moment of construction. Even if presorting is required as a separate step, it usually still takes fewer operations than MxN.
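The one-pass comparison can be written compactly as follows. This is a sketch, not the IPA client code; entries are (name, version) tuples sorted by name, and the return values name what should be removed, added and updated:

```python
def diff_sorted(existing, incoming):
    """One-pass diff of two lists presorted by name.

    Returns (to_remove, to_add, to_update) using N+M fetches
    instead of MxN comparisons.
    """
    removed, added, updated = [], [], []
    i = j = 0
    while i < len(existing) and j < len(incoming):
        (ename, ever), (iname, iver) = existing[i], incoming[j]
        if ename < iname:
            removed.append(ename); i += 1     # only in existing: remove
        elif ename > iname:
            added.append(iname); j += 1       # only in incoming: new
        else:
            if ever < iver:
                updated.append(ename)         # newer version arrived
            i += 1; j += 1
    removed += [name for name, _ in existing[i:]]
    added += [name for name, _ in incoming[j:]]
    return removed, added, updated

existing = [("app1", 1), ("app2", 3), ("app4", 1)]
incoming = [("app2", 4), ("app3", 1), ("app4", 1)]
print(diff_sorted(existing, incoming))
# -> (['app1'], ['app3'], ['app2'])
```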

Lookup Configurations

The configurations will be looked up in much the same way as the role definition policies above. The only difference is that there are two lists to deal with: one list of the applications, and inside each application a list of its policies.
The following example shows how the Existing and Incoming policy lists would look:

   {application A, {{guid A1, version X1}, {guid A2, version X2}, ...}}
   {application B, {{guid B1, version Y1}, {guid B2, version Y2}, ...}}

The matching of the lists will happen on two levels. The outer loop, similar to the one above, will match the application names. If the names match, an inner loop will compare the GUIDs of the existing and incoming policies. The only caveat to keep in mind is that the next step after each application is processed in the loop is actually merging the policies on a per-application basis. The logic above assumes that the list is sorted by GUIDs, but merging requires sorting by precedence. This can be overcome in different ways:

  • Sort the policies by precedence and forgo the optimized algorithm when processing policies on a per-application basis
  • Keep two indexes, one per sort order
  • Re-sort the policies after matching

The choice of the logic will be selected at the implementation phase based on the complexity of code and performance observations.
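The two-level matching can also be sketched with GUID-keyed maps, which sidesteps the sorted-by-GUID versus sorted-by-precedence conflict entirely. This is only an illustration of the structure, assuming the lists shown above are indexed as nested dictionaries:

```python
# Sketch: outer level matches application names, inner level matches
# policy GUIDs. Indexing by GUID avoids keeping each per-application
# list sorted two different ways.
def diff_applications(existing, incoming):
    """existing/incoming: {app_name: {guid: version}} -> per-app changes."""
    changes = {}
    for app in set(existing) | set(incoming):
        old = existing.get(app, {})
        new = incoming.get(app, {})
        changes[app] = {
            "added":   [g for g in new if g not in old],
            "removed": [g for g in old if g not in new],
            "updated": [g for g in new if g in old and old[g] < new[g]],
        }
    return changes
```

After this diff is computed, the surviving policies for each application can be re-sorted by precedence for the merge step.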

Lookup Actions

Looking up actions is similar to dealing with the policies on a per-application basis. The difference is that there is no need to merge the policies after they are received; instead they are executed according to the precedence order. It is important to take into account that only new or updated actions will be executed. The merging logic presented above might not be the best approach in this case since it would require re-sorting. Alternative or modified solutions will be considered during the implementation phase.
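The selection rule described above — run only new or updated actions, in precedence order — can be sketched briefly. The tuple layout and the last-seen version map are illustrative assumptions, not the IPA data model:

```python
# Sketch: pick only new or updated actions and order them by precedence.
def select_actions(incoming, last_seen):
    """incoming: list of (precedence, guid, version) tuples.
    last_seen: {guid: version} of actions already executed.
    Returns the actions to run, ordered by precedence."""
    runnable = [a for a in incoming
                if a[1] not in last_seen or last_seen[a[1]] < a[2]]
    return sorted(runnable, key=lambda a: a[0])
```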

XML-RPC Interface

The XML-RPC interface will expose entry points that will perform the following operations:

  • Get Policy
    • Get Policy by GUID – returned policy is a compressed XML blob
    • Get Policy by GUID – returned policy is uncompressed XML blob
    • Get Policy by GUID encrypted with a user-provided password – returned policy is a compressed XML blob encrypted with a hash of the password.
    • Get Policy by Application – returns a list of configuration policies related to an application, in precedence order
  • Update Policy
    • Update policy by GUID – creates a new version of the policy
  • Add Policy
    • Add Policy - adds a new instance of the policy to the policy store
  • Delete Policy
    • Delete Policy by GUID – deletes specified policy
    • Delete Policy by application – deletes all policies for specified application
  • Assign Policy
    • Assign policy – replaces current list of hosts and host groups the policy is assigned to with a provided list
  • Apply Policy
    • Apply Policy by GUID – currently saved version of the policy is applied
    • Rollback Policy by GUID – undo last applied policy
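A client-side call against this interface might look as follows. This is a hypothetical sketch: the method names (get_policy_by_guid and so on) and the proxy wiring are illustrative assumptions, not the actual IPA entry-point names.

```python
# Sketch of a client calling the Get Policy operations over XML-RPC.
# A real client would build the proxy with
# xmlrpc.client.ServerProxy("https://ipa.example.com/xmlrpc");
# here the proxy is passed in so the fetch logic is testable offline.
import xmlrpc.client  # standard-library XML-RPC client

def fetch_policy(proxy, guid, compressed=True):
    """proxy: a ServerProxy (or any object exposing the same methods)."""
    if compressed:
        return proxy.get_policy_by_guid(guid)           # compressed XML blob
    return proxy.get_policy_by_guid_uncompressed(guid)  # plain XML blob
```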

XML-RPC Connection

The XML-RPC connection uses SSL, but without mutual authentication. The client will authenticate to the server using Kerberos over GSSAPI, and will authenticate the server using the public key from the server certificate. The only problem with this situation is provisioning the certificate to all client machines. Requiring the certificate to be copied from the IPA server to the client during client installation seems like a bad approach and a big blocker for the deployment of the IPA solution as a whole. Delivery of the certificate to the client, so that the XML-RPC interface can work, needs to be done via the LDAP channel. To solve this problem a special entry will be created in the configuration area of the DS's DIT. This entry will contain the IPA server's public certificate and its version, and it will be populated automatically when the primary server is installed. Each time the client connects to the DS it will check whether the version of the certificate it has is older than the one in the DS. If the DS has a newer version of the certificate, or the client does not have a certificate at all, the client will download the certificate and store it for use in the XML-RPC connection. The policy downloader will be blocked until a valid certificate is delivered to the client by the IPA data provider. In the rare case where the customer wants to replace the server certificate, he would have to generate a new certificate and key pair, update the configuration entry mentioned above with the new certificate, and increment the version.
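The version check described above amounts to a simple comparison between the locally cached certificate entry and the one read from the DS. A minimal sketch, with the LDAP fetch and local store stubbed out and the field names assumed for illustration:

```python
# Sketch of the certificate refresh decision. "local" is the client's
# cached copy (None when no certificate has been downloaded yet);
# "remote" is the entry read from the configuration area of the DS DIT.
def refresh_server_cert(local, remote):
    """local/remote: dicts like {"version": int, "cert": bytes}.
    Returns the certificate entry to use for the XML-RPC connection."""
    if local is None or local["version"] < remote["version"]:
        return remote   # DS holds a newer certificate: download and store it
    return local        # cached copy is current
```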

Developing and Testing Policies

It is expected that customers will have a test IPA realm where they will test their environment. IPA v2 plans to provide a set of scripts that will allow customers to dump a policy they developed in the test environment and reload it into the production server. It is clear that to migrate a policy correctly from the test IPA domain to the production IPA domain, some entries from the test DS also need to be carried forward. We plan to provide a set of convenience scripts that will ease solving this migration use case. The scope and the complexity of such scripts will be assessed later.

Policies for the IPA Client

Policies for the IPA client will control the behavior of the IPA client itself. So far the following configuration objects are considered for inclusion in the IPA Client configuration policy:

  • Should the IPA client allow access for IPA users when the system is offline?
  • Cache lifetime for different LDB entities
  • How frequently policies should be downloaded from the server


The IPA client will be installed with the default values pre-populated in the LDB, but the centralized policy would be able to alter this data on different subsets of hosts by applying specific IPA client policies to specific hosts.

DS Schema for Policy Related Objects

Schema for the policy related objects is described on the following page. <TBD>

Adding Support for New Applications

Adding New Application Roles

If one wants IPA to support role management for a new application that IPA does not yet know about, he would have to conduct the following development steps:

  • Develop a Relax NG schema for the role definition policy following the roles described on this page. A more detailed guide will be developed as part of the doc set.
  • Drop this new schema into a schema storage on all the IPA server replicas. This is a manual step.
  • Develop XSLT templates that will be run on the clients.
  • Deliver XSLT templates and schema definitions to the clients using one of those:
    • IPA's Actions mechanism
    • Satellite
    • Manually
    • Other delivery mechanism
  • Add a new application entry to the IPA database. Let it replicate.
  • Start a new administrative session

At this point the system is ready to define or import new roles.

Adding New Configuration Policies

If one wants IPA to support configuration management for a new application that IPA does not yet know about, he would have to conduct the following development steps:

  • Develop a Relax NG schema for the configuration policy following the guidelines described on this page. A more detailed guide will be developed as part of the doc set. Keep in mind that the schema should include all the required metadata that will control the merging logic on the client.
  • Drop this new schema into a schema storage on all the IPA server replicas. This is a manual step.
  • Develop XSLT templates that will be run on the clients.
  • Develop local file merge utilities if needed
  • Deliver XSLT templates, schema definitions and merge utilities to the clients using one of those:
    • IPA's Actions mechanism
    • Satellite
    • Manually
    • Other delivery mechanism
  • Add a new application entry to the IPA database. Let it replicate.
  • Start a new administrative session

At this point the system is ready to define or import new configuration policies.