FIM2010–Preloading Data to Avoid Long Synchronizations

It happens all the time: a set of data is being synchronized happily when a change comes along. The change is fairly significant in that it adds (or removes, for that matter) an attribute on existing objects that needs to be synchronized to one or more other systems.

The synchronization service is already chugging along, happily and efficiently working on the current rule set. The addition of the new attribute, however, will cause a couple of things to occur. The first is the requirement for a full import and full synchronization on the MA where the data is coming from; the second is the export of ALL the objects that have data in the new attribute to the target systems.

When dealing with the FIM MA, this can mean a significant delay, so I strongly recommend that, if possible, you pre-populate the data using an out-of-band process. With the data already present in the target attributes, you can greatly reduce the time it takes to push the change through the synchronization engine.

Think about it this way. In the sync-engine-only approach, the following steps would have to occur, assuming we’ve done all the work to update the MAs and the attribute flows as required and our dataset is about 100,000 entries:

  • Step 1: Full import on the source MA (100,000 entries)
  • Step 2: Full Synchronization on the Source MA to move the data from the connector space through to the metaverse. (100,000 entries)
  • Step 3: Export to the target MA (we’ll say that we only have two MAs, which means that only 100,000 exports have to occur).
  • Step 4: Import on the target MA. Regardless of whether we do a delta or full import here, we modified the 100,000 entries; therefore, we’ll have 100,000 imports.

If we pre-stage the data into the target MA, we cut out the export, which is by far one of the higher-cost operations from a time perspective. To prepare the environment, the attributes would have to be present in the target systems; note that in some cases you may have to do a schema refresh, or you may end up with app-store-exceptions (more about that in a later post). But preloading the target system removes an entire step, the export, thereby saving us the processing of all those entries.

  • Step 1: Full import on the source MA (100,000 entries)
  • Step 2: Full Synchronization on the Source MA to move the data from the connector space through to the metaverse. (100,000 entries)
  • Step 3: Import on the target MA. Regardless of whether we do a delta or full import here, we modified the 100,000 entries; therefore, we’ll have 100,000 imports.
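To make the saving concrete, here is a quick back-of-the-envelope tally (plain Python, nothing FIM-specific) of the per-entry operations in each approach, using the 100,000-entry dataset and single source/target MA pair from the example:

```python
# Rough per-entry operation counts for the two approaches described above,
# assuming one source MA, one target MA, and a 100,000-entry dataset.
ENTRIES = 100_000

sync_only = {
    "full import (source MA)": ENTRIES,
    "full sync (source MA)": ENTRIES,
    "export (target MA)": ENTRIES,      # the expensive step
    "import (target MA)": ENTRIES,
}

# Pre-staging the data out of band removes the export step entirely.
pre_staged = {k: v for k, v in sync_only.items() if not k.startswith("export")}

print(sum(sync_only.values()))    # 400000 operations end to end
print(sum(pre_staged.values()))   # 300000 -- pre-staging drops the export
```

A quarter of the raw operations disappear, and since exports are among the slowest operations per entry, the wall-clock saving is larger still.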

Anyway, I just wanted to remind everyone that a major change in the data flows does not have to be the equivalent of an initial load. With proper forethought and planning, the impact on normal synchronization operations can be reduced considerably.

Posted in Uncategorized | 7 Comments

FIM2010–Don’t forget the Resource ID in the Event Viewer…

When troubleshooting workflows and the like in a production environment, the event log fills with errors along the lines of “Workflow Instance <resource ID> failed”, with the resource ID embedded in the message. These errors can seem complicated and confusing, and isolating the problem by searching through the requests for a correlating time stamp can be both arduous and time consuming.

Don’t forget that the resource ID in that message is very important and extremely useful. Everything in the portal is a resource, even a workflow instance. As such, the troublesome workflow can quickly be found by searching for the resource ID using the “Search Resources by Resource ID” search scope on the home page.

Cutting and pasting the resource ID in the event viewer message into the search scope parameter quickly allows you to find exactly what workflow threw the error so you can take a closer look at the steps that were being executed when the failure occurred.

Posted in Uncategorized | Leave a comment

CISSP Certification Awarded…

Woo hoo! I was awarded my CISSP certification. The resume/experience review took a while but this is certainly one of the certifications I wanted to add. Happily added the logo to the “Who’s Blain” page.

For those who are wondering, my experience review took about five weeks to complete, so it was about 11 weeks from my exam on February 11th to getting the certification. In June, however, this will change as the exams move to a computer-based format, so results will be fairly immediate. I think waiting for the results of the written exam was probably the harder of the two waiting periods. :)

Posted in Uncategorized | 1 Comment

FIM2010–Sometimes Declarative Rules Are Not Your Friend…

While recently working with FIM, I found that we were able to retire a declarative rule because the attribute flows it defined were no longer required. The process for removing the EREs is well defined: you simply create a set-transition MPR scoped to the set of objects the ERE is being removed from (which, of course, calls the workflow that actually creates the “Remove” ERE).

So now in the Portal you have an “Add” and “Remove” ERE. The synchronization engine then processes the remove and stages the deletions of the two EREs from the FIM Service. Think about how many objects this operates against.

If you’re working with 200 objects, that’s great: it’s 400 exported deletions. But think of a large enterprise with, say, 100,000 objects that have the “Remove” ERE applied, and how long that will take to process in your environment.

I would strongly suggest reviewing overall system performance when it comes to operations of this type, and perhaps even scoping the task to take the synchronization rule out in phases, if an algorithm for the criteria-based set can be properly defined. Remember that classic rules are executed last, so you can “override” the flows being done by the declarative rule you’re phasing out, even if that means writing a simple extension that copies the current value of the csentry back into itself; this causes no net change but prevents the particular attribute flow in the removed synchronization rule from having any net effect.
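As a rough illustration of the phased idea, the sketch below (plain Python with made-up object names, not a FIM API) partitions the affected population into batches that could each be targeted by a narrower criteria-based set:

```python
# Illustrative only: partition 100,000 affected objects into phases so the
# "Remove" EREs (and their export deletions) are processed in manageable chunks.
objects = [f"user{i:06d}" for i in range(100_000)]

def phases(items, batch_size):
    """Yield successive fixed-size batches of the affected objects."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# 200 objects means 400 exported deletions (an "Add" and a "Remove" ERE each);
# 100,000 objects means 200,000 -- hence the appeal of doing, say, 10,000 at a time.
batches = list(phases(objects, 10_000))
print(len(batches))              # 10 phases
print(2 * len(batches[0]))       # 20000 ERE deletions per phase
```

Each phase keeps the export queue at a size the environment can absorb between normal synchronization cycles.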

Posted in Uncategorized | Leave a comment

FIM2010–Minimizing Requests Generated During Object Changes

One of the things I’ve noticed is that I’m not overly fond of the sheer number of requests generated when I need to create flag attributes or other items that are set when a group is initialized or changed in a manner that updates downstream values. I’m looking into a couple of methods for reducing these, to make the overall system more manageable and intelligible to the users who will ultimately be responsible for supporting it.

For example, I have a few situations where six different attributes are being set to identify whether other attributes are present or not. These are in turn used to define criteria-based groups and RCDC visibility. The problem is that the initial request generates a total of seven immediate requests (not to mention any other actions that may be triggered when the default values are set!).

Anyway, I strongly recommend that you look at the different tools out there, a personal favourite of mine being the FIM PowerShell workflow activity that was developed by a coworker of mine, Craig Martin.

Using this tool, it is possible to perform all the logic and tests on the object within a PowerShell script and then commit all the changes in a single request. Something I have to admit I like, because it removes some of the clutter that can appear in the search results for requests.
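The underlying pattern is simply stage-then-commit. The sketch below illustrates it generically (the class and names are made up for illustration; this is not the actual FIM or PowerShell activity API):

```python
# Illustrative stage-then-commit pattern: accumulate attribute changes
# locally, then emit them as one request instead of one request per attribute.
class ChangeBatch:
    def __init__(self, resource_id):
        self.resource_id = resource_id
        self.changes = {}

    def set(self, attribute, value):
        self.changes[attribute] = value   # staged locally, not yet a request

    def commit(self):
        # One request carrying all staged changes.
        return {"target": self.resource_id, "changes": dict(self.changes)}

batch = ChangeBatch("group-42")
for flag in ["hasOwner", "hasMembers", "hasMail"]:
    batch.set(flag, True)
request = batch.commit()
print(len(request["changes"]))   # 3 changes, but a single request
```

Three flag updates that would otherwise be three separate requests (plus whatever they trigger) collapse into one.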

The other option is a custom workflow activity to do all the work for you. It all depends on where your comfort factor is. I just like Craig’s tool because you add the script as a configuration item in the portal GUI rather than having to ensure that DLL versions are up to date around your environment.

If you want to follow another FIM Blog, I recommend Craig’s highly. The link is http://www.identitytrench.com

Also, Craig will be covering this during the FIM pre-conference workshop at TEC in San Diego this year! So if you’re going to be there, seriously consider attending his pre-conference workshop on Sunday. Tell him Blain sent you. :)

Posted in Forefront Identity Manager 2010 | 1 Comment

CISSP Exam Results–Passed!!!!

I am really happy about hitting this mark. I spent a lot of time preparing and got through the exam. I have to admit, waiting almost the full six weeks for the results was agonizingly painful. I left the exam feeling confident, but hearing all the horror stories of confident people who got the unpleasant surprise of a failing mark had me second-guessing myself a bit.

Thanks again go out to Eric Conrad, my instructor for the SANS CISSP boot camp, who helped me put the last few pieces of the puzzle together in my weaker domains. I strongly recommend his study guides, including the “Eleventh Hour CISSP Study Guide”, which I used religiously in the days leading up to the exam.

Now to get my endorsements and resume in order to finish off the certification requirements… :)

Posted in Uncategorized | 2 Comments

FIM 2010–Issues Setting DN Values using Calculated Strings

A colleague of mine was recently working on a problem where he wanted to create a DN in the declarative synchronization rules from a string attribute value in the metaverse. The string value was created in the FIM Service by a workflow and imported into the metaverse via the FIM MA, where the synchronization rule then used it. This worked fine for the initial-flow-only rules, and it therefore seemed pretty straightforward when he set up the persistent flow so that accounts could be dynamically renamed. That said, the persistent flow failed and wouldn’t perform the rename as expected. It was a rather curious error, and to him quite frustrating. The detailed situation he had was this:

A workflow was set up that built a DN string and put it into a FIM attribute, which was then imported into the metaverse. The workflow did everything, including the escaping of the “,” in the RDN component. An example of the string, a fully qualified DN, was similar to “CN=Checkley\, Blain, OU=Users, DC=Happyface, DC=inc”.

The synchronization rules were set so that the attribute value in the MV attribute was used for the DN. So it was a pretty simple rule that was used for both the initial flow and persistent flows (where adFqdn is the attribute that stores the calculated string imported from FIM):

adFqdn => dn

Interestingly enough, it worked fine for the initial flow only: the entry got created, but the rename, which should have appeared as part of the provisioning section of the preview, didn’t occur when the persistent flow was called. The engine simply tried to map an attribute value to the DN when it did the attribute flows, which didn’t work.

Modifying the workflow a bit so that it generated only the target OU, with the RDN explicitly defined and escaped using “EscapeDNComponent”, worked, and the renames occurred correctly. The completed flow rule in this case was (as before, adTargetOU is the calculated OU string in the MV that was generated by and imported from the FIM Service):

CustomExpression(EscapeDNComponent(“CN=” + displayName) + “,” + adTargetOU) => dn

Hope this helps anyone who may have run into a similar situation. It appears that having the DN as a string is fine for the initial flow rules; however, you have to show the system more explicitly that it is a DN for renames to happen as part of the persistent flow.
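For anyone curious what the escaping actually involves, here is a small Python sketch of RFC 4514-style RDN value escaping, roughly what EscapeDNComponent handles for you (the function below is an approximation for illustration, not the sync engine’s implementation):

```python
def escape_rdn_value(value: str) -> str:
    """Escape the characters RFC 4514 treats as special in an RDN value:
    backslash-escape '"', '+', ',', ';', '<', '>' and '\\', plus a
    leading '#' or space and a trailing space."""
    escaped = "".join("\\" + c if c in '",+;<>\\' else c for c in value)
    if escaped and escaped[0] in ("#", " "):
        escaped = "\\" + escaped
    if escaped.endswith(" "):
        escaped = escaped[:-1] + "\\ "
    return escaped

# Building the example DN from the post:
rdn = "CN=" + escape_rdn_value("Checkley, Blain")
dn = rdn + ",OU=Users,DC=Happyface,DC=inc"
print(dn)   # CN=Checkley\, Blain,OU=Users,DC=Happyface,DC=inc
```

The comma inside the display name is escaped so it reads as part of the RDN value rather than as a component separator.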

Posted in Forefront Identity Manager 2010 | Leave a comment

FIM2010–Monitoring the Requests is Important!

Don’t forget, when you’re working in your environment, to validate that the request log is being purged correctly. Failing to do so can certainly impact the performance of the FIM environment as a whole.

As an example, I have a complex environment where certain requests will trigger other requests that are completed by the Function Evaluator as part of an action workflow. Each activity in the workflow is another request. Therefore, for example, if you have one request that changes the state of an object such that it triggers an action workflow setting five other attributes, you end up with six requests in your database.

Now on its face, that really doesn’t seem too bad. You want the requests there for obvious auditability over the short term (FIM sets the default purge window for expired requests to 30 days), but not so long that your database grows uncontrollably.

Now imagine that I have 5,000 user-initiated requests against the database per day, most of them triggering the subsequent actions of the workflow I noted above, for a total of roughly 25,000 requests in my log each day. The default SQL Server job for removing the expired objects only runs once per day and removes 20,000 expired objects. Doing the simple math, that is 5,000 less than what I’ve actually generated during that day! :) Over the course of a month, that is 150,000 more; 6 months would be 900,000 more; and so on. My request log never gets purged of all the expired requests, it simply continues to grow.
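The growth is easy to check with a couple of lines of arithmetic, using the example’s assumed 25,000 requests generated per day and a once-daily purge run capped at 20,000:

```python
# Daily backlog when request generation outpaces a once-a-day purge run.
generated_per_day = 25_000
purge_capacity_per_run = 20_000

daily_backlog = generated_per_day - purge_capacity_per_run  # 5,000 left behind
print(30 * daily_backlog)    # 150000 extra expired requests after a month
print(180 * daily_backlog)   # 900000 after six months

# Running the same job hourly raises daily purge capacity well past the load.
hourly_capacity = 24 * purge_capacity_per_run
print(hourly_capacity)       # 480000 per day -- easily keeps up
```

The exact numbers will differ in your environment, but the shape of the problem is the same: if generation exceeds purge capacity, the backlog only ever grows.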

So long story short, when you’re doing your tuning and planning for initial load and ongoing operations get a good focus on the average number of total requests per day versus the total number of changes made by a user. Then tune the SQL agent job to run often enough to clear them out and keep your requests down to a manageable number.

I have heard of clients running this hourly to ensure that it keeps up with the load of data change in large organizations, without any ill effects. Some colleagues suggest that running it every 30 minutes during off-peak hours can be of great benefit as well. Look at the results of the scheduled jobs, though: I have noticed that although they are scheduled to run every 30 minutes, there will be variance in how often they actually run, as more complex expired requests take longer to purge. I have had jobs that took up to four hours for a single run to remove 20,000 requests, and others that took 30 minutes.

If you want to take a look at the “residual” objects that are left in your environment that should have otherwise been purged, do an advanced search from the “Search Requests” navigation bar item. The parameters of the search are pretty simple:

Select requests that meet all of the following:

  • Expiration Time prior to 30 days ago

Posted in Uncategorized | Leave a comment

FIM2010 – Custom Objects Search Scopes on the Home Page

During a recent deployment I was seeing some interesting behaviour in search scopes defined for custom objects. The key issue was that I could enter a search parameter that would work fine when I was already on the custom object’s page, but when the search was called from the home page, the parameter was ignored and all objects would be returned.

A bit of digging around with some coworkers found that the problem was associated with the URL that is provided for navigation from the home page. In most cases, the default URL that is used would be the one that you get when you first open that object type from the “All Resources” page. For example:

~/IdentityManagement/aspx/customized/CustomizedObjects.aspx?type=myNewObject&display=My%20New%20Object

However, when the object was clicked, the URL formed from the base URL plus the system-added suffixes was:

http://fim.mysite.com/IdentityManagement/aspx/customized/CustomizedObjects.aspx?type=myNewObject&display=My%20New%20Object?searchtype=a3a03523-b72b-4e09-97b2-d09835ba5311&content=testing

Note how the searchtype parameter is prefixed by a ?, indicating a second query string. The parameter appears to be lost in translation, as ALL the objects show up, not limited by what I had entered in the search box (“testing”, as indicated by the last part of the URL, “content=testing”).

Modifying the home URL slightly (although it does make for an ugly URL) seemed to make the environment work okay. The fix was to modify the URL so that the search scope was present and the string ended with an &, making the appended suffix just another parameter. For example:

~/IdentityManagement/aspx/customized/CustomizedObjects.aspx?type=myNewObject&searchtype=a3a03523-b72b-4e09-97b2-d09835ba5311&

This results in a generated query URL that actually repeats the searchtype parameter; however, the second copy, appended with the leading question mark, appears to be ignored, and the query works okay. The resulting URL is:

http://fim.mysite.com/IdentityManagement/aspx/customized/CustomizedObjects.aspx?type=myNewObject&searchtype=a3a03523-b72b-4e09-97b2-d09835ba5311&?searchtype=a3a03523-b72b-4e09-97b2-d09835ba5311&content=testing
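The mechanics of the bug can be reproduced with nothing more than a query-string parser; in the sketch below, Python’s `urllib.parse` stands in for whatever parsing the portal does server-side, using the URLs from this post:

```python
from urllib.parse import urlparse, parse_qs

BASE = "http://fim.mysite.com/IdentityManagement/aspx/customized/CustomizedObjects.aspx"
SCOPE = "a3a03523-b72b-4e09-97b2-d09835ba5311"
# Suffix the home page appends to whatever navigation URL is configured:
suffix = f"?searchtype={SCOPE}&content=testing"

# Default navigation URL: the appended "?" buries searchtype inside the
# value of the "display" parameter, so the scope never reaches the query.
broken = BASE + "?type=myNewObject&display=My%20New%20Object" + suffix
q = parse_qs(urlparse(broken).query)
assert "searchtype" not in q          # scope lost -> all objects returned

# Fixed navigation URL: searchtype baked in, trailing "&" so the appended
# suffix just becomes one more (ignored) parameter.
fixed = BASE + f"?type=myNewObject&searchtype={SCOPE}&" + suffix
q = parse_qs(urlparse(fixed).query)
assert q["searchtype"] == [SCOPE]     # scope survives; results are filtered
```

The second “?” is never treated as a query-string delimiter; it is just a literal character inside the previous parameter’s value, which is why the trailing “&” workaround restores the scope.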

Now that this change has been made, the system is working fine with the custom object searches returning scoped results based on the provided string in the search field from the home page.

Posted in Forefront Identity Manager 2010 | 2 Comments

CISSP Training

I’ve been really tied up of late studying for a bunch of different certification exams including Microsoft and CISSP… of particular note is my CISSP exam which is coming up in a week or so…

I’ve been doing a lot of studying for the CISSP, as my experience in some domains is pretty limited (although for access management I think I’ve been around that bush a few times). To that end, I recently took a boot camp to help me ramp up on my weaker domains (physical security, etc.). The boot camp was offered by SANS (www.sans.org) as their MGT414 course, and my instructor was Eric Conrad. The course helped me solidify the knowledge I had been gleaning from books and really launched me forward on the curve of understanding some of the nuance and terminology that I otherwise had not been exposed to.

Note that although it’s called a boot camp, there is a requirement to have some security knowledge. This is not like a Microsoft boot camp where you can simply show up and then pass the exam. There is a LOT of content to cover, and the course helps solidify everything you’ve been studying and shows which domains you’re weak in, allowing you to focus on them.

Hopefully in a month or two I’ll be able to update my credentials to include the CISSP certification. I have 6 hours and 250 questions in my near future to find out.

Many thanks to SANS and especially Eric for a terrific course.

Posted in Uncategorized | 2 Comments