Friday 7 September 2012

SharePoint 2010 Consultancy Project. Part 3 - How to kill a SharePoint farm with the help of SharePoint Designer


Introduction

These posts came out of a consultancy project in which I helped a team working on the implementation of a SharePoint 2010 based system. My involvement in the project was not long, only a month, and my tasks were mostly finding break-through solutions for the most peculiar requirements and the most intricate integration points. Some of those solutions are described in these posts.


Part 3 – How to kill a SharePoint farm with the help of SharePoint Designer

In one of our SharePoint projects we had debates with the customer about whether it is safe enough to allow business users/administrators to use SharePoint Designer in addition to the SharePoint out-of-the-box web UI. On the one hand, SPD is a powerful tool that can bring lots of benefits, even though it requires more advanced knowledge and skills. On the other hand, the customer's concern was that changes made with SPD on a site might somehow impact the whole farm. My point at the time was that SPD is essentially an out-of-the-box tool, so the biggest harm people can do with it should not extend beyond that site or, in the worst case, the web application.

Later, in another project, I found out that SPD can be more dangerous in terms of its impact on whole-farm performance. In fact, the performance of a SharePoint server can be degraded by a custom SPD workflow; literally, running one peculiar workflow kills the farm: it takes 100% of processor time and can go on forever unless administrators take action.
Here is a simplified workflow that I used to kill the farm. It is extremely simple (see figure below): a reusable SPD workflow consisting of a 'Parallel Block' with two 'Start Approval Process' activities inside it.




The reusable workflow was assigned to a content type, and then a list was created with an association to this content type. Nothing bad happens yet when you just create a list item and kick off the workflow. But when a user tries to perform any approval action on one of those two approval processes, the whole system gets busy. Processor usage goes up to 100% and never comes down. Obviously, no sites respond after that.
The process that takes all the resources is OWSTIMER. Interestingly, restarting the 'SharePoint 2010 Timer' service from Server Manager failed, so I had to end the OWSTIMER process directly in Task Manager. In fact, even this is not enough, because when the 'SharePoint 2010 Timer' service recovers, it continues with the workflow and gets busy again. So the workflow has to be cancelled before the timer service is brought back.


Of course, no 'physical damage' was caused, and the problem could be resolved by restarting the timer service and cancelling the workflow, so no system restoration was needed. Still, this example shows how easily users can bring SharePoint down using SPD (if they have the proper skills and enough time, of course ;)

Friday 20 April 2012

SharePoint 2010 Consultancy Project. Part 2 - Reports

Introduction

These posts came out of a consultancy project in which I helped a team working on the implementation of a SharePoint 2010 based system. My involvement in the project was not long, only a month, and my tasks were mostly finding break-through solutions for the most peculiar requirements and the most intricate integration points. Some of those solutions are described in these posts.


Part 2 – Reports


Requirements and Technical Solutions

The system required reports built for contracts in Proposals (some details about the information organization were provided earlier, in the 'Closing CR Tasks automatically on Resubmit' section in Part 1). Additional complexity for the reports came from the fact that the library was not flat; it had a more complicated structure, with document sets pushing some attributes down from the top level to the document level, custom content types for contracts, an InfoPath form promoting attributes and synchronizing some of them up to the document set level, and some other nice features.
There were two types of reports required in the system:

  • Summary reports with aggregation calculations

  • Grid reports with groups and totals

After analyzing the requirements, and keeping in mind that the solution should be as simple as possible, the best option for summary reports looked to be MS SQL Server Reporting Services (using Report Builder 3.0), and for grid reports, SharePoint Designer Data Views. Both approaches were more or less out-of-the-box, which complied with the supportability requirements.


After the prototypes for summary reports based on SSRS had been finished, the customer realized that there would be a delay in getting proper licenses for SSRS, so the technical solution for summary reports had to be reconsidered. At this point I had two candidate solutions: custom web parts created in Visual Studio, or SPD Data Views as used for the grid reports. SPD Data Views won because of the supportability requirements.

Summary reports with SPD Data Views


Data Sources and Parameters

The most intricate part of building Data View web parts for summary reports turned out to be configuring the data source.

There are two options for adding a Data View to a page: the first is to insert a web part initially based on one of the lists; the second is to add an empty web part and then select the needed list as its data source. Both options seem to achieve the same thing, but surprisingly, with the first option less functionality is available for manipulating the data source. So the tip here is: insert an empty Data View web part and then bind the data source to a list or document library.

From here you can configure the options for the data source. Note that all these settings determine how the CAML query against SharePoint objects is built, which is important for performance.

  • Selecting needed fields. Here you can "unselect" columns that will not be needed in the data view. This is good for two reasons: extra columns will not get in your way when building views, and the CAML query will be optimized for better performance, as mentioned above.
  • Filters. Again, for better performance it is important to configure all needed filters here, at the data source level (alternatively, XSLT filters can be added).
    Data View parameters can also be used in these filters. As you can see in the picture above,
    the DateFrom and DateTo values are taken from the corresponding parameters. There are several types of parameters; they can be introduced as Control, Cookie, Form, Query String, or Server Variable.
    Another nice feature of filter conditions is that sub-conditions can be grouped to manage the precedence of logical operations.
  • Sort
  • Folder scope. For my tasks it was critical to query all contracts inside document sets, so fortunately this option was available for the data source (it was really frustrating that I could not configure folder scope for a SharePoint data source in Reporting Services).
There were some strange things about building filters as well. For example, I could not find ContentType in the list of available columns, even though many other system columns were there. The workaround for content types is to modify the query directly in Code view. Surprisingly, it worked fine for content type.
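For reference, the hand edit in Code view amounts to adding a ContentType condition to the CAML query of the data source. This is an illustrative sketch; 'Contract' stands for whatever content type name is actually used:

```xml
<!-- Illustrative CAML filter added by hand in Code view;
     'Contract' is a placeholder for the real content type name -->
<Query>
  <Where>
    <Eq>
      <FieldRef Name="ContentType" />
      <Value Type="Computed">Contract</Value>
    </Eq>
  </Where>
</Query>
```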


Building Reports

Data View web parts suit data grid reports perfectly. For summary reports, where only groups and totals are displayed, it is trickier. Basically, this can be achieved with the "Sort & Group" functionality.


Problem 1 – Overall Total
Even for a simple case with only one grouping level, adding an overall total is not really straightforward.

In the example in the picture below, all contracts are grouped by some property, and totals are calculated for those groups. This can be done with simple group settings: for the group, set 'Show group footer', and in 'Advanced Grouping' set 'Always hide group details'. Group totals can then be calculated with formulas, e.g. format-number(sum($nodeset/@BC_TCDValueExclVat),'#,###.00')
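A simplified sketch of what the generated group footer template looks like with these settings; $nodeset holds the rows of the current group, the column name is from my report, and the surrounding markup is heavily reduced compared to what SPD actually generates:

```xml
<!-- Simplified sketch of an SPD-generated group footer;
     the real template has many more attributes and cells -->
<xsl:template name="dvt_1.groupfooter0">
  <xsl:param name="nodeset" />
  <tr>
    <td class="ms-vb">Total:</td>
    <td class="ms-vb">
      <xsl:value-of
        select="format-number(sum($nodeset/@BC_TCDValueExclVat), '#,###.00')" />
    </td>
  </tr>
</xsl:template>
```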

But I could not easily add formulas for overall totals, because there is not even a placeholder for them. You can, of course, go to Code view again and, with enough knowledge of XSLT, add the needed elements there, but that does not really sound out-of-the-box. In my case it was easier, because the data source (a SharePoint document library) had a column IsContract indicating whether an item is a contract. Since the reports were built for contracts only, all items coming from the data source had the value 'Yes' in the IsContract column. The trick is that I can build two levels of grouping: the first based on the IsContract column (there will always be exactly one group), and the second on the property for the actual grouping. Then I can use the header and footer of the IsContract group level for the overall totals.




Problem 2 – Incorrect totals for grouping hierarchies with more than two columns
That solution for the overall total worked fine for me until I needed more than two levels of hierarchy. After I added a third column to the grouping hierarchy, I realized that the totals were calculated incorrectly. After inspecting the XSL code, I found a simple fix for the incorrect totals: the names of the group levels should themselves be hierarchical (see the picture below):



Problem 3 – Report with two hierarchies
Initially, when I first hit Problem 2 described above, and until I found a solution for it, I was really disappointed with the way SPD builds those groupings. But later it helped me build another report with two hierarchies under one overall total. As shown in the picture below, the report displayed two parts based on the contract type: Awarded To and Awarded By. Inside those parts, contracts were grouped by two different properties. The 'To' and 'By' parts could have been split into two separate reports, but overall totals were needed, which put them on the same report.



The solution was to build one Data View web part with four groups representing the two grouping hierarchies, as shown in the picture below. Note that the top level exists just to get the overall totals, as described in 'Problem 2'.



One more thing needed to display the report correctly was to define conditional formatting rules hiding or showing elements depending on the report part.


Deployment

Another intricate thing about Data View web parts is deployment. It looks like data views are not meant to be moved from one site to another. The problem is that the data source of a Data View web part is bound to the SharePoint list (or library) by its GUID, and obviously the GUID will be different on the destination site. There is, however, a workaround that makes the web part deployable: basically, it replaces the GUID with the list name. This is described in several blogs, but none of them worked for me 100%, so after a series of "experiments" I found some modifications that make it work. Here is the procedure that worked for me.
Preparing deployment package
  1. In SharePoint Designer open the site page in Code view and make the following changes:
    • In the WebPartPages:DataFormWebPart element set the ViewFlag attribute to 0, change ListId to ListName, and replace the list's GUID with the list name.
    • In the SharePoint:SPDataSource element:
      • remove the Update, Insert, and Delete parameters;
      • in SelectParameters change ListId to ListName and change DefaultValue to the list name instead of the GUID;
      • add another parameter into SelectParameters setting the proper link to the current site: <WebPartPages:DataFormParameter ParameterKey="WebURL" PropertyName="ParameterValues" DefaultValue="/sites/serf07" Name="WebURL"></WebPartPages:DataFormParameter>
    • In the stylesheet replace ListID with ListName and the list's GUID with its name.
  2. In SPD (Design view) select the Data View web part and then, in Data View Tools -> Web Part, save the web part to a file. This file will be used to deploy the report to any other site collection.
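Putting the SelectParameters edits together, the before/after looks roughly like this (the GUID and list name here are placeholders):

```xml
<!-- Before: bound to the list by GUID (site-specific) -->
<WebPartPages:DataFormParameter Name="ListID" ParameterKey="ListID"
    PropertyName="ParameterValues"
    DefaultValue="{2AB3C4D5-0000-0000-0000-000000000000}" />

<!-- After: bound by name, plus an explicit WebURL (deployable) -->
<WebPartPages:DataFormParameter Name="ListName" ParameterKey="ListName"
    PropertyName="ParameterValues" DefaultValue="Contracts" />
<WebPartPages:DataFormParameter Name="WebURL" ParameterKey="WebURL"
    PropertyName="ParameterValues" DefaultValue="/sites/serf07" />
```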

Deploying reports
The deployment package for reports consists of the web part file created as described in the previous section.
Deployment instructions:
  1. Open the web part file in a text editor and modify WebURL, specifying the URL of the site the web part will be deployed to
  2. Upload the web part to the Web Parts gallery (Site Settings -> Galleries -> Web Parts)
  3. Create site pages and add the web parts

Consequences of changing GUID to list name
The price for changing the GUID to the list name was that some settings were no longer available after the web part became "deployment-ready". In the data source configuration, all the buttons for Fields, Filter, Sort, and Scope became grayed out. Even worse, when I tried to modify the parameters used in filter conditions, the whole page got broken. So from then on, to make the required changes I had to switch to Code view quite often.

Summary reports with Reporting Services

As mentioned earlier, Reporting Services was considered the most appropriate solution for implementing summary reports. The best candidate tool for creating reports was Report Builder 3.0. Another available option was SQL Server Business Intelligence Development Studio, which is in fact Visual Studio 2008 with BI templates for SSAS, SSIS, Reporting Services, etc. Interestingly, the BI templates are not available in VS2010.
Report Builder 3.0 is a nice tool, well integrated with SharePoint (a click-once application), providing almost the same possibilities as BI Visual Studio but more user-friendly, with lots of wizards.
There is not much to say about the report building process; there are plenty of materials on the web. The only thing to mention here is a limitation of the SharePoint data source related to document sets (more exactly, to folders).
When building the data source and data sets for my reports, the best option seemed to be a data source of type 'Microsoft SharePoint List'. With this data source I could use a nice query designer tool to create a CAML query retrieving my data. But somehow, with this data source there is no way to specify QueryOptions to request Scope='Recursive'. So unfortunately it did not work for me, because I had to retrieve documents inside a document set.
Another option was using an XML data source and connecting to the SharePoint web services to retrieve the list data. In this case there is no nice Query Designer, and another complication was that passing parameters to the web service was not straightforward.
Steps and screenshots below show how to get list items from folders recursively:
  1. Configuring the data source to point to the SharePoint web services:


  2. Configuring the data set and parameters
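The resulting data set query against Lists.asmx looks roughly like this, in the SSRS XML data provider query syntax (the list name is a placeholder). The escaped queryOptions value is what makes the query recursive:

```xml
<Query>
  <Method Namespace="http://schemas.microsoft.com/sharepoint/soap/"
          Name="GetListItems">
    <Parameters>
      <Parameter Name="listName">
        <DefaultValue>Contracts</DefaultValue>
      </Parameter>
      <Parameter Name="queryOptions" Type="Xml">
        <!-- Scope="Recursive" returns items inside folders/document sets -->
        <DefaultValue>
          &lt;QueryOptions&gt;&lt;ViewAttributes Scope="Recursive"/&gt;&lt;/QueryOptions&gt;
        </DefaultValue>
      </Parameter>
    </Parameters>
  </Method>
  <ElementPath IgnoreNamespaces="true">*</ElementPath>
</Query>
```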


Thursday 12 April 2012

SharePoint 2010 Consultancy Project. Part 1 - Business Workflows

Introduction
These posts came out of a consultancy project in which I helped a team working on the implementation of a SharePoint 2010 based system. My involvement in the project was not long, only a month, and my tasks were mostly finding break-through solutions for the most peculiar requirements and the most intricate integration points. Some of those solutions are described in these posts.


Part 1 – Business Workflows


Functional Requirements and Technical Solution

The system includes a few workflows automating business processes. Basically, those workflows were a combination of approval, review, change request (CR), and some other types of activities. The figure below shows a state machine for a fragment of one of those workflows, just to give a general idea.
After analyzing the functional and technical requirements, SharePoint Designer workflows looked like the best solution, taking into consideration that the system should be supportable by people who do not yet have much experience with SharePoint development.


So the solution was to create a reusable workflow in SPD associated with a content type based on Document Set (even though using document sets brought some extra complexity into the whole thing, in this context it does not differ much from using just a folder in a document library).

So, nothing exciting so far, apart from the fact that the functional requirements included some "non-standard" pieces, and the out-of-the-box workflows themselves are full of "peculiarities".



Specific Requirement – Closing CR Tasks automatically on Resubmit

A few business insights, just to describe the problem briefly: the business process was built around processing a Proposal, which is in fact a long workflow from submission to the contract stage, including multiple reviews and approvals. The Proposal itself appears to the user as a big InfoPath form with lots of data and logic, but it comes with a bunch of documents, e.g. different types of contracts. So technically, the whole Proposal was a document set of a custom content type containing all the needed attributes. This document set included the master proposal form and other documents of specific types (defined as content types as well).

The first stage of the whole process is depicted in the figure with the state machine above. An SPD reusable workflow suits the implementation of that workflow logic ideally. Approval, delegation, and requests for changes could be perfectly implemented using the 'Start Approval Process' activity. The challenge was starting approval processes for two different departments in parallel. There are several options to achieve this, but some of them just do not work because of unexpected peculiarities in SPD out-of-the-box workflows. Playing with those options, I even learned how to kill the whole SharePoint farm with one little out-of-the-box workflow! (That might be a subject for another post.) But finally the task was resolved with standard means.

So everything worked fine except one thing. The requirement was that change request tasks issued during approval activities should be closed automatically when the user hits the submit button in the InfoPath form. So instead of making changes in the InfoPath form, saving it, then going to the CR task in the SharePoint tasks list and closing it, as supposed out-of-the-box, the client wanted a shortcut. Note that closing the CR task is the only trigger that continues the sequential workflow; otherwise the workflow is stuck. After thorough investigation, I realized that there is no out-of-the-box way to implement the requirement. So it looked like 95% of a smooth and easy out-of-the-box implementation would be wasted because of the 5% of peculiar requirements that cannot be done within "black-box" limitations. All alternative "custom" solutions would have been too expensive in that situation.

But a solution for those 5% was found, with a small custom piece:

  • I added another column to the content type for the Proposal document set (the content type associated with the workflow). The purpose of this column was to store information about open CR tasks (a comma-separated list of task IDs)
  • Then in the SPD workflow I maintained that list of IDs each time a CR task was issued:

  • And the last piece is custom code in an event receiver for the content type. This code waits for resubmission and then closes the tasks programmatically. The good thing was that even if I close the tasks externally, it still triggers the out-of-the-box workflow to continue.
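A minimal sketch of that event receiver, assuming a column named 'OpenCRTasks' holding the comma-separated IDs and the default 'Tasks' list; the resubmit detection logic is omitted, and the exact Hashtable keys needed to complete a task depend on the task content type:

```csharp
using System;
using System.Collections;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

// Sketch only: closes the CR tasks listed in the 'OpenCRTasks' column
// when the proposal is resubmitted. Column and list names are assumptions.
public class ProposalEventReceiver : SPItemEventReceiver
{
    public override void ItemUpdated(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;
        string openTasks = item["OpenCRTasks"] as string;
        if (string.IsNullOrEmpty(openTasks)) return;

        SPList tasks = properties.Web.Lists["Tasks"];
        foreach (string id in openTasks.Split(new[] { ',' },
                     StringSplitOptions.RemoveEmptyEntries))
        {
            SPListItem task = tasks.GetItemById(int.Parse(id.Trim()));
            Hashtable data = new Hashtable();
            data["Status"] = "Completed";    // keys depend on the task type
            data["PercentComplete"] = 1.0f;
            SPWorkflowTask.AlterTask(task, data, true);
        }

        EventFiringEnabled = false;          // avoid recursive updates
        item["OpenCRTasks"] = string.Empty;
        item.SystemUpdate(false);
        EventFiringEnabled = true;
    }
}
```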

Well, it is not the most elegant solution, but it worked!


Issues and Limitations of SPD Reusable Workflows

SPD reusable workflows are not as reusable as the name suggests.

First of all, there are limitations on using lookups in activities. Technically, when building a reusable workflow, I can use lists connected as lookups to the master content type associated with the workflow. But if I want to prepare a deployment package, which for reusable workflows is done with the "Save as Template" functionality, SPD says that a template cannot be saved for workflows using lookups. This can be explained by the fact that lookups in a workflow rely on the GUIDs of the connected lists, but it would be handy to have an option to store lookups by list name. To work around this limitation, I had to create text columns duplicating the values of the lookup columns to use them in the workflow, plus event receivers synchronizing the lookup values to the text values. Again, not the best solution, because duplication is almost never good.

Another strange thing is that when I deploy my reusable workflow ("saved as template" from the development environment) to a new location, the link to the content type is lost (even though I deployed the content type before deploying the workflow). The GUID explanation applies here as well, but what is really strange is that there is no direct way to restore the association to the content type after deployment. The only way I found is the "Copy & Modify" functionality: when you copy the faulty workflow, you can set the link to the content type in the copy. There were a few smaller issues as well; for example, after copying workflows, the names of the approval activities were lost.



Workflow History – fixing “60 days” problem

One of the requirements was to have workflow history for auditing the business processes. SharePoint makes it possible to review the workflow history related to a list item, but by default the workflow history associated with an item disappears after 60 days. In fact, the workflow history records are still in the system, stored in the Workflow History list (or any other list specified at workflow creation), but the associations between the list item and the history records are cleaned up. There is a way to change the default 60-day setting, to increase the period or keep the associations forever, but it is not the recommended way to go, because it can cause performance issues.
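For completeness, that not-recommended knob is the AutoCleanupDays property of the workflow association; a hedged PowerShell sketch, where the site URL and list name are placeholders:

```powershell
# Not recommended, per the note above: extends how long item-to-history
# associations are kept. URL and list name are placeholders.
$web = Get-SPWeb "http://server/sites/proposals"
$list = $web.Lists["Proposals"]
foreach ($assoc in $list.WorkflowAssociations) {
    $assoc.AutoCleanupDays = 365
    $list.WorkflowAssociations.Update($assoc)
}
$web.Dispose()
```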

The easiest way I found to get at the workflow history is this:


  • First of all, the Workflow History list has enough information (columns) to identify the workflow type, the workflow instance, and the item on which the workflow was run, plus all the information about what happened in each step (description, outcome, date, duration, user). So it is possible to create a custom view on that list displaying the workflow history the way you want (sorting, grouping, columns, etc.). It can be a view for a specific workflow type or an aggregated view for several workflows.


  • Now you need to provide a link to that view from the item on which the workflow was run. The easiest way to do this is to add another column to the list containing a URL to the view mentioned in the first step. Obviously, you need to filter the workflow history records by the item owning the workflow instance. This can be achieved with query parameters, something like /Lists/WorkflowHistory/CustomHistoryView.aspx?FilterField1=Item&FilterValue1={PrimaryItemId}.
  • And finally, another piece of synchronization code in the list item event receivers that sets the URL in the workflow history link column and specifies the item context (item id):
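A minimal sketch of that synchronization piece, assuming a hyperlink column named 'WorkflowHistoryLink' and the custom view URL from the previous step (both names are illustrative):

```csharp
using Microsoft.SharePoint;

// Sketch only: writes the filtered history-view URL into a hyperlink
// column on the item. Column name and view URL are assumptions.
public class HistoryLinkEventReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;

        SPFieldUrlValue link = new SPFieldUrlValue();
        link.Url = properties.WebUrl +
            "/Lists/WorkflowHistory/CustomHistoryView.aspx" +
            "?FilterField1=Item&FilterValue1=" + item.ID;
        link.Description = "Workflow history";

        EventFiringEnabled = false;   // SystemUpdate without re-triggering
        item["WorkflowHistoryLink"] = link;
        item.SystemUpdate(false);
        EventFiringEnabled = true;
    }
}
```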

Friday 17 December 2010

Deploying InfoPath forms to SharePoint production environment


In one of my recent projects I had an interesting problem moving an InfoPath 2007 form from my development environment to production. The browser-compatible InfoPath form was part of a system implemented with SharePoint 2007.

Deploying InfoPath forms should not be a big deal; Microsoft provides quite detailed guidelines for deploying InfoPath 2007 form templates (http://msdn.microsoft.com/en-us/library/cc704269(v=office.12).aspx). The approach described in that article works fine, but it requires InfoPath Designer to be installed in the production environment. In my case, InfoPath Designer was not available in production. Things get even trickier when the InfoPath form is not a simple one but has data connections to external sources and "code-behind", plus it should be browser-compatible.

So here is the approach I worked out for deploying a browser-enabled InfoPath template with external connections and "code-behind" to a production SharePoint server:
  • Obviously, all data connections should be converted into Universal Data Connection (UDC) files. A UDC file stores data connection settings and resides in a SharePoint Server data connection library (DCL). So initially I created those data connections in a local data connection library (a library in the site collection in which the InfoPath form was used). Then the links to the DCL in the InfoPath form were converted to the centrally managed connection library


  • In my development environment I published the InfoPath form template for uploading to a SharePoint server (with the URL of my development SharePoint server) and stored it to "a network location"


  • Then I copied the deployment package, which included the published InfoPath template and all the UDC files, to the production server and uploaded the form template to the production SharePoint server with the SharePoint Central Administration application. The UDC files should also be uploaded to the centrally managed connection library (obviously, you need to set the connection parameters to production resources before uploading those files)
  • Now you can activate the uploaded InfoPath template to your site collection and finalize all the settings needed to use the form template in your application. From this point the form should work just as in your development environment
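The upload and activation steps can also be scripted with stsadm, which is handy when Central Administration access is limited; the file name and site URL below are placeholders:

```shell
# Verify, upload, and activate the form template on the production
# server; file path and site URL are placeholders.
stsadm -o verifyformtemplate -filename ProposalForm.xsn
stsadm -o uploadformtemplate -filename ProposalForm.xsn
stsadm -o activateformtemplate -url http://prodserver/sites/proposals -filename ProposalForm.xsn
```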
Some more complexity was brought to those forms by promoted properties: some fields in the InfoPath form were "promoted" to the SharePoint library and accessible in the application as library columns.


Basically, the process of deploying an InfoPath form with promoted properties is exactly the same as described above. Additionally, the fields to be promoted should be mapped to the relevant site columns (this can be set in Form Options \ Property Promotion). To guarantee that the SharePoint library in production has the needed columns, the relevant site columns were created as part of the deployment script and associated with the forms library. When the InfoPath form is uploaded to the server and activated on a site collection, it just links to the previously installed site columns by their names.

Wednesday 24 November 2010

Displaying dependent data from external data sources in repeating tables in InfoPath forms


Introduction


Displaying dependent information from external data sources is a very common requirement for InfoPath forms in SharePoint projects. Some scenarios can easily be covered with out-of-the-box means; others are trickier, especially in web-compatible forms. Below is a description of a simple solution for displaying related information from external data sources in repeating tables.

Problem Description

This requirement occurred in a real project: building a time registration system based on SharePoint, with extensive use of InfoPath forms for the implementation of timesheets. There were lots of requirements for those timesheets, like different versions for different departments, calendars with bank holidays, extra security for sensitive data, and so on.


One of the requirements was to display descriptions for specific absence reasons. Here is a brief description of the problem and a fragment of the structural model.


This is a fragment of the structural model for the timesheet (just a logical entity model, with no implementation details yet). A Timesheet contains a list of Absences. Each Absence has a reference to an Absence Reason, which has a title and a description.

The Timesheet is implemented as an InfoPath form. The requirement is to maintain a table of absence reasons, selecting the absence reason from a drop-down with all absence reasons available in the system. When an absence reason is selected for a record, the comments corresponding to the selected reason should appear in another column of the table.


All absence reasons available in the system are maintained in a SharePoint list.



Problem Solution

The InfoPath form for the Timesheet contains a repeating table for daily records, where one of the properties is Absence Reason.
The Absence Reasons lookup is added to the form as a data connection to an external source: the SharePoint list containing all the absence reason values in the system.

Then, in the properties of the field corresponding to Absence Reason in the main data source (W1AbsenceReason), set the default value as shown in the figure below:

XPath expression for this value looks like this:
xdXDocument:GetDOM("Absense Reasons")/dfs:myFields/dfs:dataFields/dfs:Absense_Reasons/@Description[../@Code = current()/../my:W1AbsenseReason]

Limitations

This approach is compatible with web forms, which makes it useful in a wide range of applications. But be very careful with it on the web: when the form is rendered as a web page and delivered to the browser, all the data from the external data source is loaded to the client, which can make the page too heavy if the number of records in the referenced external data source is big enough.