This is one of those interesting items that pops up now and then when dealing with FIM and workflows. I recently made some major changes to some approval workflows and pushed the changes to a production environment. What ended up happening, however, was that any queued approvals that used the old, now-deprecated workflows were no longer operable.
The approvals simply hung around in a user’s approval queue and threw an error that the system was unable to complete the approval when either accepted or denied. They just continued to lurch through the system like “zombies”.
There were two options for removing the “zombies”:
1. If the owner was frustrated by all these hung approvals, it was possible to create an MPR, go in, and delete the offending objects, thereby removing them from the queue.
2. The “zombies” appear to have a shelf life and have started to age out of the system via the normal 30-day object expiration (after the timeout period has expired).
I’m continuing to monitor this; however, it was an interesting little side effect that I hadn’t quite anticipated when changing how the approvals were being completed in the environment.
I have to admit, sometimes there are changes made to a product that are really good. This one is about the R2 upgrades, which I know have caused a lot of people a lot of pain, and something I whined about a while ago when an upgrade took more than 15 days.
After working with a client and Microsoft, who did a lot of testing on the database, the upgrade now takes place in about 3.5 hours. I would say that is a considerable improvement to the overall operation of the system and something that makes me very happy.
There are times when I look at something and realize that development best practices and the way a product uses objects may collide. This is the case with the development best practice of version numbers on workflow activity DLLs and their implementation within FIM.
In the case of custom workflow activity DLLs that are registered in the GAC, it is easy to remember that the Activity Information Configuration (AIC) within the FIM Service must also be updated. (Remember, the version number is part of the “Assembly Name” value, for example: “FIMActivities.MyWorkflow, Version=1.0.0.0, Culture=neutral, PublicKeyToken=12345abcde1234”)
What is sometimes missed is that the “Assembly Name” is also used in the XOML of the workflow definitions where the custom activity appears. It is therefore necessary to either add the activity again or manually update the XOML with the new version number. The XOML for the activity is present in the “Advanced View” and is what the GUI uses to render the workflow in the normal view. If the XOML is updated, go back and validate that the settings have been maintained or, if necessary, reset them to the proper values.
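To illustrate where that string lives, here is a hypothetical XOML fragment (the namespace, class, activity, and version values are all made up for illustration) showing the assembly reference embedded in the workflow definition's namespace declaration:

```xml
<!-- Hypothetical XOML fragment; names and versions are illustrative only -->
<ns0:SequentialWorkflow
    x:Name="MyApprovalWorkflow"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:ns0="clr-namespace:FIMActivities;Assembly=FIMActivities.MyWorkflow, Version=1.0.0.0, Culture=neutral, PublicKeyToken=12345abcde1234">
  <ns0:MyCustomActivity x:Name="myCustomActivity1" />
</ns0:SequentialWorkflow>
```

If the DLL is rebuilt and registered in the GAC as, say, Version=1.1.0.0, both the AIC’s “Assembly Name” value and this Assembly= string in every workflow definition that uses the activity need to reflect the new version.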
This is more of a nuisance issue; once you’re aware of it, it is easily prepared for through process development.
So, I’m a bit behind the curve right now. I have been on a long engagement and we’re only now starting to move towards the R2 version of FIM. I have to admit there are quite a few new things I like in the release, but the filtered outbound synchronization rules are probably my favorite right now.
The filtered outbound synchronization rules are a huge improvement over the existing method for outbound synchronization rules, but there are some trade-offs. Key things of note are:
- Filtered outbound synchronization rules still have the option to codelessly provision an object into the connected directory. There is, however, no option to deprovision the object once the rule no longer applies to it.
- The criteria used in the FIM Service can still be used for the outbound synchronization rules in many cases, so long as the metaverse contains the information. This may mean more import flow rules from the FIM MA, but in the long run I really think it is worth it.
- To deprovision objects that were provisioned using the filtered outbound synchronization rules, metaverse extension provisioning code will have to be used to call the Deprovision() method.
- The outbound synchronization rules are limited to comparisons against values; if you need to compare attributes against each other or look back into the object’s connector space, these rules will not apply. Classic rules using a rules extension may still be the best choice.
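To sketch the deprovisioning point above: in a classic metaverse extension, the Provision routine can disconnect the connectors when the condition that drove the rule no longer holds. This is only a sketch under assumptions — the MA name ("AD MA") and the attribute name (employeeStatus) are hypothetical, not from any specific environment:

```csharp
// Sketch only: MA and attribute names are hypothetical examples.
// Requires a reference to Microsoft.MetadirectoryServices.
using Microsoft.MetadirectoryServices;

public class MVExtension : IMVSynchronization
{
    public void Initialize() { }
    public void Terminate() { }

    public void Provision(MVEntry mventry)
    {
        ConnectedMA adMA = mventry.ConnectedMAs["AD MA"];

        // If the condition behind the filtered outbound sync rule no longer
        // holds for this object, deprovision its existing AD connectors.
        bool stillInScope = mventry["employeeStatus"].IsPresent &&
                            mventry["employeeStatus"].Value != "Terminated";

        if (!stillInScope && adMA.Connectors.Count > 0)
        {
            adMA.Connectors.DeprovisionAll();
        }
    }

    public bool ShouldDeleteFromMV(CSEntry csentry, MVEntry mventry)
    {
        return false;
    }
}
```

The deprovision behavior itself (disconnect, stay disconnected, or delete) is still governed by the MA’s deprovisioning settings or its rules extension, so this code only triggers the process.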
In my implementations, this will save a lot of workflow churn in the FIM Service. I no longer have to add and remove the synchronization rules based on transitions into a set. This is especially important because I can now use the filtered outbound synchronization rules to switch out reference values where I couldn’t before (e.g. a disabled group no longer flows a populated reference attribute but a blank one). This took two synchronization rules, two sets, two workflows, and two MPRs to implement before, and that isn’t even including the ERE churn that has to pass through the FIM MA for it to operate.
Looking at it from a straight numbers perspective, in terms of total metaverse object count, this is huge. For simple implementations where the criteria can be met in the metaverse, it is now possible to remove the EREs. In an environment where there are 100,000 groups being synchronized and provisioned to Active Directory, that is 100,000 ERE objects I no longer have to have in my environment. SWEET!
During the upgrade of the FIM Portal and Service to FIM 2010 R2 in a controlled environment, the installer kept bailing, and the MSIEXEC log file really didn’t provide a lot of errors. Perhaps there was something in the release notes (I mean, I read them but I didn’t *READ* them), but the account that was being used for the upgrade wasn’t a domain admin. Once domain admin rights were added, the installation worked fine and the system upgraded.
Just a bit of a heads up for those of you who may be scratching your heads. To get the system to write a log, the command was:
MSIEXEC -i "Service and Portal.msi" -lv ".\fim_install_log.txt"
A snippet of the log file showing the system trying to log on locally, which seemed odd:
(UNKNOWN) System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.UnauthorizedAccessException: General access denied error
(UNKNOWN)    at System.DirectoryServices.Interop.UnsafeNativeMethods.IAdsContainer.GetObject(String className, String relativeName)
(UNKNOWN)    at System.DirectoryServices.DirectoryEntries.Find(String name, String schemaClassName)
(UNKNOWN)    at Microsoft.IdentityManagement.ServerCustomActions.CustomActions.ChangeUserMembershipInGroup(Session session, Boolean addUser)
(UNKNOWN)    --- End of inner exception stack trace ---
(UNKNOWN)    at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
(UNKNOWN)    at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
(UNKNOWN)    at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
(UNKNOWN)    at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.InvokeCustomAction(Int32 sessionHandle, String entryPoint, IntPtr remotingDelegatePtr)
Recently, while working with a client, we opted to look at the R2 upgrade of an existing R1 environment. It quickly became clear that there are a few things that can affect your upgrade experience:
1. Reference data will slow your upgrade down – This is a definite trade-off that has to be made in the upgrade of FIM to R2. The synchronization engine has had major changes to how reference data is stored, to reduce the amount of time the system needs to process it. That said, if the environment being upgraded is dealing with a lot of referential attributes, this process can take a long time! The release notes state that this can take up to 5 hours, and that was an optimistic estimate for as complex an environment as I’m dealing with right now. Having a test environment is a godsend here, as it makes it possible to estimate how long the FIM environment will be offline when moving to production. The trade-off is that under R2 the system should process reference information much more quickly and therefore reduce your synchronization times. This is something everyone always wants, but in the case of the R2 upgrade there is a price to pay up front to realize the gain (and make sure the users of the environment understand that the system may be offline for a while).
2. SharePoint Services 2010 Upgrade Issues – This was interesting, as one environment upgraded fine and the other didn’t. In the case I’ve been dealing with, the system wouldn’t restart correctly and was hanging on the “Applying Computer Settings” screen. This was fixed by using the registry modifications noted in this Microsoft support article: http://support.microsoft.com/kb/2379016
3. Plan an Appropriate Backup Strategy – This is standard operating procedure when you apply any patch or upgrade, but it is especially important in the case of the R2 upgrade. Remember that the upgrade makes significant changes to the underlying databases (so much so that I’ve seen one run for longer than 3 days!). Make sure a proper backup of the sync engine, FIM Service, and databases has been captured. Rolling back after a failure during the upgrade without these components will result in a lot of long hours and a ruined weekend or two. (Now is not a good time to test your DR strategy; make sure the backups are good prior to starting the upgrade!)
I was on a support call with Microsoft today and learned something that, I have to admit, wasn’t something I had really dug into as it was in the deep dark hole of the FIMSynchronizationService database.
Apparently, within FIM 2010 R1 the link table for referential data held a lot of extraneous information, including things like the DN instead of just the GUID values. This can cause significant performance problems when a large number of referenced objects are renamed within a synchronization process (all fear the “referential integrity” checks after the MA seems to have finished processing).
FIM 2010 R2 has reduced the amount of data in the link table, thereby reducing the overhead of managing the referential data. I’m currently in the process of seeing how much of a performance improvement is gained by this; however, anything that reduces the overall amount of referential integrity checking in the MAs would be greatly appreciated. That said, my “devil’s advocate” side asks whether trading this data away for speed removes any of the robustness we’ve seen in previous versions of the synchronization engine.
I was recently playing with the UocDropDownList control in the RCDCs and came across a couple of things that are in the documentation but not defined that clearly. I went digging and found a couple of simple examples, which allowed me to distill the following:
Setting a default value for a drop down list
By default, if you use the “custom” settings for the list and include the options, the first option in the list is the default. This is really handy when setting up an object where the majority of the items will always fit the same series of settings, provided the user is properly informed of how the object is created by default and knows to review and change the settings if needed.
Forcing the user to select a value
This is where things got interesting with the drop down controls. The fix was really quite simple, although some of the iterations that were played with produced unexpected results.
If you have the control set to “required” and the list of options set out as in “Setting a default value for a drop down list” above, where there is simply a list of the options you want, the first option in that list will be selected if the drop down isn’t changed.
An interesting item that I did notice, however, is that if the first option’s value is set to null (“”) and its caption is set to any value, such as “*select item*”, the required flag is met and the control is allowed to pass a null regardless of whether null is an accepted value or not. So the first option has to be set with both a null value and a caption. That worked fine for me.
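Pulling the two behaviors together, a minimal RCDC fragment might look like the following. This is a sketch under assumptions — the control name, caption, binding path, and option values are all hypothetical; only the general UocDropDownList shape is the point:

```xml
<!-- Hypothetical RCDC fragment; names and binding paths are illustrative -->
<my:Control my:Name="DepartmentCode" my:TypeName="UocDropDownList"
            my:Caption="Department"
            my:RightsLevel="{Binding Source=rights, Path=DepartmentCode}">
  <my:Options>
    <!-- Placeholder: null value plus a caption, listed first so it is
         the default selection shown to the user -->
    <my:Option my:Value="" my:Caption="*select item*" />
    <my:Option my:Value="HR" my:Caption="Human Resources" />
    <my:Option my:Value="IT" my:Caption="Information Technology" />
  </my:Options>
  <my:Properties>
    <my:Property my:Name="Required" my:Value="true" />
    <my:Property my:Name="ItemSource" my:Value="Custom" />
    <my:Property my:Name="ValuePath" my:Value="Value" />
    <my:Property my:Name="CaptionPath" my:Value="Caption" />
    <my:Property my:Name="HintPath" my:Value="Hint" />
    <my:Property my:Name="SelectedValue"
                 my:Value="{Binding Source=object, Path=DepartmentCode, Mode=TwoWay}" />
  </my:Properties>
</my:Control>
```

Note the caveat from above: because the placeholder option carries a null value, the control can still submit a null even with Required set, so the attribute binding itself needs to reject nulls if an actual selection must be enforced.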
Carol provides examples of the different formats for this control over at her blog, in the article “Listing Choices in RCDC Dropdowns”.
The other day, while working in the FIM Service, I added an attribute type to the schema and bound it to one of the resource types that was being picked up and synchronized via the Synchronization Service. I noticed, however, that after the change was made and the attribute was populated with data, I received the “app-store-import-exception” when trying to do delta imports.
Turns out the issue was quite simple. The FIM MA was getting updates from the FIM Service that included an attributeType that was not included in the MA’s record of the schema.
To fix the issue, all that was required was to refresh the FIM MA schema and redo the import, which then worked fine.
This did lead me to ask myself why this happened. It turns out it was a configuration shortcut I had taken a while back when first building the system for testing that never got corrected. The MPR that granted the synchronization engine permission to read the objects was set to “all attributes”. As a result, simply adding the attribute to the FIM Service and that resource type made it immediately available to the FIM MA.
Those who have attended my training classes for FIM 2010, as well as myself, now have yet another reason why I like to avoid that “All Attributes” selection when granting permissions. It can be the root of a lot of different issues, not limited to the improper disclosure of data to people or systems that should not otherwise be authorized.
Forgot to mention that I passed the GISP certification (GIAC Information Security Professional) a couple of weeks ago. I was happy to finally have that one behind me, as I had taken the course from SANS (CISSP Bootcamp) in February and this was the companion exam.
It is similar to the CISSP in content but uses a different exam format that still made it quite challenging.