Tips to conduct a successful upgrade from CRM On Premise to Dynamics 365 Online

This blog post is more relevant if you are looking to upgrade from CRM 2013 or 2015 on premise to Dynamics 365 Online. However, some sections are relevant for all upgrades of Dynamics CRM/365 in general.

With all the features that are added at a very rapid pace to Dynamics 365 and all the features that are exclusive to the cloud version, a lot of companies are looking to upgrade from the current on premises version of CRM to Dynamics 365 online. What are some of the key aspects to consider before and after upgrading?

Before you upgrade

It’s a good idea to look at some of the known issues with your version of Dynamics CRM. For example,

  • The growth of the AsyncOperationBase table, which can cause performance issues (Microsoft provides a resolution here);
  • The growth of the Principal Object Access (POA) table, which can also cause performance issues and is mostly due to excessive use of the record sharing feature (read Scott Sewell’s article about the POA table here)

This is not meant to be an exhaustive list, but the idea is to look at the known issues or gotchas that cause problems in your current version of CRM and fix them prior to starting your upgrade. That way, you will not be upgrading your problems.

The Upgrade Process

While we wait for Microsoft to give us the great news that existing Dynamics CRM/365 databases can be imported to the cloud for restore, the current migration path to Dynamics 365 in the cloud is to create a solution package that contains cloud-compatible components and import it into a Dyn365 Online organization. That operation is followed by a data migration using the tool(s) of your choice (KingswaySoft, Scribe, CRM Import Wizard, custom solution, etc.) to move your data from the on-premises version of CRM to Dynamics 365 in the cloud.

If you are on CRM 2013 SP1 or above, check the solution import version compatibility to ensure that your solution can be imported into a Dynamics 365 organization. If you are on a version prior to CRM 2013 SP1, you must first upgrade to CRM 2013 SP1.

Now that you know your solution can be imported into Dynamics 365, there are changes that need to be made both before and after the move to the cloud.

General Considerations

  • You must ensure there is an existing mapping between your on-premises CRM users and the cloud subscription your Dyn365 organization will be running on. You can use Azure AD Connect to make sure your users exist in both directories.
  • You need to ensure there is connectivity between your existing integration points and your CRM Online instance. Generally, this means that your integration points must be somehow exposed to the internet.

Before you move to Dyn365 Online

  • Download, install and run the Custom Code Validation Tool (the CRM 2015 version works for CRM 2013 code). This will allow you to identify potentially problematic JavaScript code and update it prior to the upgrade.
  • In Dynamics 365 Online, all CRM plugins and custom workflow activities are configured to run in an isolated environment, often referred to as the sandbox.
    • Running in sandbox means some operations are not allowed. Update your plugins and custom workflow activities following the guidelines if needed:
      • Remove any I/O operations that read from or write to disk
      • Remove any operation that accesses the event logs
      • Remove any operation that accesses the registry
      • Ensure plugins and custom activities connect to the web the right way (see the recommendation from Microsoft here)
      • Validate there are no lengthy processes such as asynchronous workflows and/or custom activities. Processes running in a sandbox are configured to timeout after two minutes. If you have any processes of the sort, your options are:
        • Find a way to reduce the execution time of the existing processes
        • Remove the processes from CRM and replace by new functionality
        • Move the business logic to an external process that connects to CRM and performs the business logic on a dedicated machine managed by your organization.
    • Finally, use the Plugin Registration Tool to update the plugins and custom activities currently registered to run in a fully trusted environment so that they run in the sandbox. If this is not done, you will not be able to import your CRM solution into Dyn365 Online.
  • In Dynamics 365 Online, access to the CRM database views is not allowed. Reports written in SQL must be replaced.
    • Update SQL-based reports to use FetchXML for querying data (a short FetchXML sketch follows this list)
    • When it is not possible to reproduce the same type of complex queries with FetchXML, you must consider other ways to run your reports (e.g. Power BI, manual data export to a SQL Azure database…)
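
To illustrate what this change looks like in code, here is a minimal, hypothetical sketch of running a FetchXML query through the SDK instead of querying the database views directly (the account attributes used are standard, but the query itself is only an example):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ReportQueries
{
    // Returns active accounts with their city, the kind of data a simple SQL report might have selected.
    public static EntityCollection GetActiveAccounts(IOrganizationService service)
    {
        string fetchXml = @"
            <fetch>
              <entity name='account'>
                <attribute name='name' />
                <attribute name='address1_city' />
                <filter>
                  <condition attribute='statecode' operator='eq' value='0' />
                </filter>
                <order attribute='name' />
              </entity>
            </fetch>";

        // FetchExpression lets you reuse FetchXML built with Advanced Find or the report wizard.
        return service.RetrieveMultiple(new FetchExpression(fetchXml));
    }
}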

These steps will remove components that would prevent your solution from being imported into Dynamics 365 Online. Once you have gone through the list, you can export your CRM 201x solution package and import it into your Dyn365 Online organization (import into a dev org as unmanaged).

After you have moved to Dyn365 Online

This is when the fun begins! These are some of the key steps that you have to go through after the solution from a previous version of CRM has been loaded into Dynamics 365 Online.

  • Update plugin and custom workflow libraries’ SDK references (remove references to SDK versions 6.x, 7.x, 8.0 and 8.1, add references to the SDK 8.2 assemblies, and fix compile errors if any are found)
  • Update client side JavaScript code with new/enhanced Xrm.Page API methods
  • Update client side JavaScript code that calls the OData REST Endpoint to call the Web API instead (use Jason Lattimer’s REST Builder for a HUGE time gain).
  • Update Business Rules to leverage enhanced features where needed (e.g. ability to clear values, default branch, client and server scope etc.)
  • Open each Process Workflow, Dialog, Action (one by one), fix errors if any and activate
  • Open each Business Process Flow (one by one), fix errors if any and activate
  • Open each CRM form for the entities in use, verify the look and feel, adjust as needed
  • Open each dashboard (if any), validate look and feel and adjust as needed
  • Validate Sitemap and Application ribbons
  • Update your Email Router configuration (or Migrate settings from the Email Router to Server Side Synchronization)
  • Review and update functionalities as needed (e.g. replace plugins with synchronous workflows or Business Rules, leverage other new features where possible and required) – this is a classic one-liner that can take days, weeks or months to complete depending on how complex your system is
  • Export the solution as managed and import in your pre-prod (or other environment based on your internal release model) for testing
  • Gear up for your data migration using the tool(s) of your choice, test it, test it, test it again, then run it
  • Go to production

There you have it. It is always good to have a checklist of things to look for when upgrading a Dynamics CRM to the cloud (also applicable for partner hosted scenarios).

Plugin on Retrieve and Retrieve Multiple – How bad is it?

I have managed to be in the Dynamics CRM/365 world for over 7 years without having to write a single plugin on Retrieve or Retrieve Multiple. The recommendation I give is to stay away from those. The reason is simple: it sounds horrible from a performance standpoint, and even people from Microsoft have recommended against it in many scenarios. Faced with an issue recently where we had to really consider it, I did some research and testing to try to measure the impact of such plugins on system performance. This article provides some background as to why we recommend against these types of plugins, and it also provides some of our findings after we tested for performance.

Why are Plugins on Retrieve Multiple scary?

When looking at the event execution pipeline for Dynamics CRM/365, we need to consider that there are a lot of steps involved as part of every CRM transaction. To do anything, we need to go through the CRM web service APIs, which start the chain of events in the pipeline (pre-validate, pre-event, core action/database access and then post-event).


This means that, in general, it is a good practice to build everything for optimized performance to give your user base a good experience. It’s not like having a custom database where you can easily create stored procedures, add triggers and so on, taking advantage of SQL Server features and infrastructure.

Now, back to Retrieve and Retrieve Multiple plugins.

Retrieve Multiple: Think about it this way: you retrieve a list of 200 accounts, your plugin on Retrieve Multiple fires once and gives you the list of accounts being returned to the screen, with all columns being retrieved. For each of these rows, you do some type of operation. It doesn’t sound too good, does it?

Retrieve: You double-click on an account from a list view. As the account columns are being retrieved to display the account form on the screen, your plugin on Retrieve fires and gives you the account object. At that point you can modify the content of the columns being returned as required before they are returned to the screen for the user. This really doesn’t sound too bad.

Some Findings on the impact on Performance

To provide some context into what we were trying to do, I wrote about multi-language lookups in a previous blog post. One of the solutions we considered for one of our clients is to use a plugin on Retrieve and Retrieve Multiple to change the value of the lookup primary fields so that they display a value in the user’s current language. The method we used is similar to what Aileen Gusni does here and almost identical to what Scott Durow does here.

We store a Region in a custom entity. Each account has a lookup that indicates its region.

Scenario 1:

The Region’s Primary Field contains a concatenation of the English and French region names with a relatively safe separator (we use “|”). When you load the list view of accounts, we have a plugin on Retrieve Multiple that looks at the columns being returned. If the Region column is returned, we retrieve the user’s language, we split the name of the region with the separator (the name is available in the entity reference) and we replace its value in the target object with the region name in the user’s language. The plugin on Retrieve does the same operation on the single row being retrieved.

The average execution time for the Retrieve Multiple plugin when loading 150 rows was 15.43 milliseconds, so 0.01543 seconds.
The average execution time for the Retrieve plugin to load one row was too small for the system to return a value (we got 0 milliseconds every time).

Scenario 2:

The Region entity has English Name and French Name attributes. When you load the list view of accounts, we have a plugin on Retrieve Multiple that looks at the columns being returned. If the Region column is returned, we retrieve the user’s language, we then retrieve the English or French name from the Region entity and we replace its value in the target object with the region name in the user’s language. The plugin on Retrieve does the same operation on the single row being retrieved.

The average execution time for the Retrieve Multiple plugin to load 150 rows was 1003.94 milliseconds, so 1.00394 seconds.
The average execution time for the Retrieve plugin to load one row was 16.02 milliseconds, so 0.01602 seconds.

What should you read into this?

While these numbers don’t look too crazy at all, especially in the first scenario, there are a lot of factors to take into consideration that are not really shown here and that will vary in almost any scenario.

  • What is the infrastructure you are running on? The faster your servers and networks, the better the performance will be.
  • What you do in these plugins matters a great deal. You should avoid or limit the number of read/writes to the database during the execution of those plugins.
  • Our tests were made with a low number of users in the system; it is critical to scale up and see what these numbers look like at peak times for your system.

With all this said, I still recommend against it. Use with a great deal of caution! If/when possible, use calculated fields instead of writing plugins on these messages. This will also keep you away from limitations such as this one.

Hope this helps!

Dealing with Multi-Language Lookups

Very often, CRM entities are used as reference data tables, for example to keep a list of countries, states or provinces, or other business/industry-specific data. For some businesses, I have seen entities used to keep lists of distributors, business roles, or regions, to name only a few. When used that way, CRM entities provide a lot of great features that cannot easily be matched with option sets, such as the ability to manage large reference tables, lookup search, lookup filtering, and ease of adding/editing/modifying data by power users without a deployment.

One of the issues with using CRM entities for reference data is that there is no concept of a multi-language lookup in the Dynamics 365/CRM platform. Lookups will always display the value of the primary field by default. This can cause an issue in places where you must have a fully multilingual application. In this article, I provide a few possible solutions to solve this issue.

As an example, we’ll use the context of a task for which we need to track its type. The list of the available task activity types is stored as records in an entity called “Task Activity”. The Task entity has a lookup to the Task Activity entity. The information needs to be stored in English and French.


1 – Task Form with Lookup to Task Activity


2 – List of Task Activities

Resolution Option 1 – Concatenate multiple languages in one field with a separator

You will be disappointed; this is not a fancy solution. In the Task Activity entity, we have one field for the name in each language and we concatenate both field values into the primary field using a workflow or plugin.

  • Name English (Single line of text – 100)
  • Name French (Single line of text – 100)
  • Name (Single line of text – 203) – read only for users, populated with “Name English | French Name”


3 – Task Activity Form

This is the most common approach that I have seen when the number of languages is small (two languages). It has the disadvantage of sometimes creating long name values that are not fully visible in the views and on the forms, but it’s cheap and you keep the ability to search using the lookup and to display the French or English columns in the views if you need to.

Resolution Option 2 – Plugins on Retrieve & Retrieve Multiple

This solution is a little more interesting, but risky. In the Task Activity entity, we still have one field for the name in each language and we still concatenate both field values into the primary field.

  • Name English (Single line of text – 100)
  • Name French (Single line of text – 100)
  • Name (Single line of text – 203) – read only for users, populated with “Name English | French Name”

The principle is to write plugins on the Retrieve and Retrieve Multiple events. In both of these plugins, you need to retrieve the connected user’s language (query the user settings table), and then replace the text being returned in the Name field with the value in the user’s language. This value can be obtained by querying the Task Activity record and retrieving the name in French or English, or simply by splitting the Name field (primary field) with the separator and returning the part in the user’s language. One plugin will handle the lookup column in list views (Retrieve Multiple), and the other will handle the form views (Retrieve).
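
As a rough illustration of the principle (not production code), here is a sketch of what such a plugin could look like, assuming it is registered post-operation on the Retrieve and Retrieve Multiple messages of the Task entity (the entity whose forms and views display the lookup), that the lookup attribute is named new_taskactivityid (hypothetical), that the primary field stores “English Name | French Name”, and that 1036 is the French language code:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class LocalizeTaskActivityLookup : IPlugin
{
    private const string LookupAttribute = "new_taskactivityid"; // hypothetical schema name
    private const char Separator = '|';

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        int languageCode = GetUserLanguageCode(serviceProvider, context);

        if (context.MessageName == "Retrieve" && context.OutputParameters.Contains("BusinessEntity"))
        {
            // Form view: a single task record is returned in the "BusinessEntity" output parameter.
            Localize((Entity)context.OutputParameters["BusinessEntity"], languageCode);
        }
        else if (context.MessageName == "RetrieveMultiple" && context.OutputParameters.Contains("BusinessEntityCollection"))
        {
            // List views: all rows are returned in the "BusinessEntityCollection" output parameter.
            var rows = (EntityCollection)context.OutputParameters["BusinessEntityCollection"];
            foreach (Entity row in rows.Entities)
            {
                Localize(row, languageCode);
            }
        }
    }

    private static void Localize(Entity task, int languageCode)
    {
        if (!task.Contains(LookupAttribute)) return;

        // The concatenated name is available on the entity reference of the lookup column.
        var lookup = task.GetAttributeValue<EntityReference>(LookupAttribute);
        if (lookup == null || string.IsNullOrEmpty(lookup.Name)) return;

        string[] parts = lookup.Name.Split(Separator);
        if (parts.Length < 2) return;

        // English part first, French part second, per the "English | French" convention.
        lookup.Name = (languageCode == 1036 ? parts[1] : parts[0]).Trim();
    }

    private static int GetUserLanguageCode(IServiceProvider serviceProvider, IPluginExecutionContext context)
    {
        // Reads UILanguageId from the calling user's personal settings; consider caching this per user.
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        var query = new QueryExpression("usersettings") { ColumnSet = new ColumnSet("uilanguageid") };
        query.Criteria.AddCondition("systemuserid", ConditionOperator.Equal, context.UserId);

        EntityCollection settings = service.RetrieveMultiple(query);
        return settings.Entities.Count > 0
            ? settings.Entities[0].GetAttributeValue<int>("uilanguageid")
            : 1033; // default to English if the setting cannot be read
    }
}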

Generally, it is not recommended to write plugins on the Retrieve and Retrieve Multiple events for performance reasons. If the operations executed in those plugins are simple and optimized, it might be a viable solution. This solution can scale well if you are dealing with more than two languages because, in all cases, users see only the value in their selected language and the multiple values are transparent to them.

Stay tuned, I have an upcoming post where I provide some metrics about the impact of a plugin on Retrieve and Retrieve Multiple on performance.

Resolution Option 3 – Automated mapping of Option Set with Reference Entity Records

This is a bit of a complex solution, by far the fanciest. The idea is to use an option set instead of a lookup to reference the task activities, but the option set values will be “controlled” with records from the Task Activity CRM entity. It goes like this:

  1. Create a global option set named Task Activity
  2. Create your Task Activity entity with a primary “Name” field. Put the name of task activities in the primary language (language of the CRM org)
  3. When a record is created in the Task Activity entity, use a plugin to create an option set value in the global option set (see the sketch after this list)
  4. When a record is updated in the Task Activity entity, use a plugin to update the corresponding option set value in the global option set
  5. When a Task Activity record is deleted/deactivated, use a plugin to update the corresponding global option set value by putting brackets around the name for example, and also pushing the value to the bottom of the option set list
  6. You can then get the CRM Translation file and get the Task Activities values translated as part of the global solution.
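
As a rough sketch of step 3 (not production code), a plugin registered pre-operation on the Create of a Task Activity record could call InsertOptionValueRequest to add the matching option to the global option set and then publish it. The option set name (new_taskactivity), the primary field (new_name), the whole-number field used to remember the generated option value (new_optionvalue) and the base language code (1033) are all assumptions for illustration:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public class CreateTaskActivityOption : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        if (context.MessageName != "Create" || !context.InputParameters.Contains("Target")) return;

        var taskActivity = (Entity)context.InputParameters["Target"];
        if (!taskActivity.Contains("new_name")) return;

        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(null); // SYSTEM context for metadata changes

        // Add the option to the global option set using the record name in the organization's base language.
        var insert = new InsertOptionValueRequest
        {
            OptionSetName = "new_taskactivity",
            Label = new Label(taskActivity.GetAttributeValue<string>("new_name"), 1033)
        };
        var response = (InsertOptionValueResponse)service.Execute(insert);

        // Remember the generated option value on the record so the update/delete plugins can find it later.
        taskActivity["new_optionvalue"] = response.NewOptionValue;

        // Option set changes must be published before users can see the new option.
        service.Execute(new PublishXmlRequest
        {
            ParameterXml = "<importexportxml><optionsets><optionset>new_taskactivity</optionset></optionsets></importexportxml>"
        });
    }
}

The plugins for steps 4 and 5 would follow the same pattern using UpdateOptionValueRequest and OrderOptionRequest.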


4 – Records to Option Set Value mapping

The outcome is that for entities that need to capture the task activity information, there will be an option set field as opposed to the lookup field:


5 – Task Form with Option Set

While this has a low impact on performance and leverages the out-of-the-box language-aware option sets, it requires a serious time investment to define the development framework for each entity that requires this mechanism. It also requires a translator to update the CRM translations on a regular basis (every time there is a deployment). This is a fancy solution that requires a lot of coding and maintenance. In addition, you lose the ability to search and filter the content easily like you would with lookups. You should also make sure your lists are not very long; long option set lists result in a poor user experience.

I have rarely seen companies making such large investments to circumvent the lack of multi-language lookup in CRM. This is usually seen when there are laws that force you to have a system running fully in multiple languages.

Resolution Option 4 – Custom Screen for Lookup views & search

This is another fancy one for which I unfortunately don’t have any screenshots. The idea is to use custom web resources to “override” the lookup display and selection process with the following steps:

  1. Create a set of standardized Web Resources for Lookup Display and lookup value selection
  2. Display the web resources on the CRM Forms and hide the lookup controls
  3. The web resources will have built-in logic to display the value in the language of the current user, as well as a mechanism to allow searches (could be auto-complete based)

Writing a standard web resource control for that purpose is relatively simple. However, you might have additional work to do if you want to take advantage of filtering based on other fields, or custom filters. Also, this solves the issue on the form in the sense that you will see the values in the right language on the form, but for list views, reports etc. the problem will still exist so you’ll need to find another solution there.

Closing Thoughts

As you can see, there is no perfect solution. Each organization has to decide the level of investment and risk they want to take to support multi-language lookups. Living in Canada, where we have two official languages, this is a challenge that we often see in public sector implementations because having a fully bilingual system is mandated by law. There are very few countries where this is the case (which is probably why Microsoft has not made investments in this area). Most private sector companies will usually impose a primary language for the entire organization.

Hope this helps!

Microsoft Medics 365 – Session on Licensing & Upgrade

If you have questions about Dynamics 365 Licensing and Upgrade options, here is your chance to get them answered.

In the first ever edition of Microsoft Medics 365, a panel of 6 Microsoft Business Solutions MVPs (formerly known as Dynamics CRM MVPs) will be discussing the new Microsoft Dynamics 365 licensing model as well as the upgrade process on November 29th at 12pm ET.

Register here, we’ll be happy to share our knowledge and answer your questions!

Now that this is all over, feel free to check out the recording below. Any licensing questions, feel free to ask here or on the Medics 365 Facebook page!

Where to store configuration data in Dynamics 365/CRM?

In almost all the complex systems that I’ve worked with, there has always been a need to store some configuration information. It could be URLs to external APIs or web sites, connection strings to a database for integration purposes, or application-specific parameters that drive business logic such as Security Roles, Accounts or Contacts. In Dynamics CRM/365, there is often a need to store the same type of information so that plugins, workflows and other integrated applications can read them and perform business operations accordingly. In this article, I describe the most common options and share some pros and cons associated with each of them.

Key Value Pair Entity

If you’ve done some application design, you know the concept of a key value pair table. In the Dynamics CRM/365 context, it is an entity that contains two required text fields: one for the key and the other for the value. The key represents the name of your configuration variable and the value is, well, the value for the key. For each configuration element that needs to be stored in your system, you create a new row with a key and its value. The images below provide an example of a Key Value Pair entity in CRM in which we stored information to connect to an external web service and the ID of a Security Role, all as text values.


1 – List of Key Value Pairs in Dynamics CRM/365


2 – Key Value Pair Entity for Example

Pros

  • Very simple data structure
  • Easy to add/remove configuration values
  • Code to read and use the variables does not change over time
  • Retrieving a config element is a fast operation (one row with two columns)

Cons

  • Data type for the value is a text field (not practical for lookups or other data types as you may have to store GUIDs for example)
  • Inability to set default values
  • Inability to use FLS on specific config elements
  • The Keys must be hard-coded in code and/or documented and maintained somewhere

If you are planning to use a key value pair type of configuration table, my recommendation is to have one key field as text, a configuration value type field (an option set indicating the type of the value, e.g. text, two options, lookup), and multiple value columns of different types (e.g. Value (Lookup 1), Value (Lookup 2), Value (Text), Value (Two Options)). As a bonus, you can add some business rules to prevent the selection of the wrong data type based on the selected configuration value type.
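
For reference, a minimal sketch of a helper that reads a value from such a key value pair entity could look like this; the entity and field names (new_configuration, new_key, new_value) are hypothetical:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ConfigReader
{
    // Returns the value stored for the given key, or null if the key does not exist.
    public static string GetValue(IOrganizationService service, string key)
    {
        var query = new QueryExpression("new_configuration")
        {
            ColumnSet = new ColumnSet("new_value"),
            TopCount = 1
        };
        query.Criteria.AddCondition("new_key", ConditionOperator.Equal, key);

        EntityCollection results = service.RetrieveMultiple(query);
        return results.Entities.Count > 0
            ? results.Entities[0].GetAttributeValue<string>("new_value")
            : null;
    }
}

// Usage: string erpUrl = ConfigReader.GetValue(service, "ERP API URL");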

Configuration Entity

Here, the idea is to have an entity in CRM with one field for each configuration element that needs to be stored. In the example below, we have a table that contains information to connect to an ERP web service as well as credentials, and a lookup to a System Admin role (similar information as above). In this case, there should only be one row in the configuration table; a short retrieval sketch follows the pros and cons below.


3 – System Configuration List View (only one row available)


4 – System Configuration Entity form (shows all the configuration fields)

Pros

  • No need to have a list of configuration key names (use the field names enforced at the database level)
  • Each configuration element has the appropriate type (e.g Lookup, text field, two option, option set etc.)
  • Ability to enable Field Level Security on specific parameters
  • Allows for default values for certain data types
  • Easier to set up by an end user (create one row and set values, as opposed to creating multiple rows with keys and values)

Cons

  • Schema change + Code update required anytime a new configuration element is needed
  • You need to ensure there is only one record for the entity (plugin validation on create)
  • Configuration table grows horizontally and not vertically over time
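
Below is a minimal sketch of reading the single configuration record; the entity name (new_systemconfiguration) and the field names are hypothetical:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class SystemConfiguration
{
    // Returns the one-and-only configuration record, or null if it has not been created yet.
    public static Entity GetRecord(IOrganizationService service)
    {
        var query = new QueryExpression("new_systemconfiguration")
        {
            ColumnSet = new ColumnSet(true), // all configuration columns
            TopCount = 1                     // there should only ever be one row
        };

        EntityCollection results = service.RetrieveMultiple(query);
        return results.Entities.Count > 0 ? results.Entities[0] : null;
    }
}

// Usage:
//   Entity config = SystemConfiguration.GetRecord(service);
//   string erpUrl = config.GetAttributeValue<string>("new_erpapiurl");
//   EntityReference adminRole = config.GetAttributeValue<EntityReference>("new_systemadminrole");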

Wrap up

I have used both models extensively and they both work well. For a stable system without a lot of moving pieces, I tend to like the Configuration entity better. For systems where things change all the time and new config items need to be added on a regular basis, using the key value pair entity is often more cost-effective. There is also the possibility to use an XML web resource for parameters, or the plugin secure and unsecure configuration fields.

Regardless of the method you use, consider caching the configuration data when possible to increase your system’s performance. On the Client side with JavaScript, you can use a few different mechanisms for caching (local storage, cookie). On the server side for plugins, I often use a static cache but this only works for plugins executed in full trust mode (in other words, if you are in CRM Online, no caching on the server side).
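
As an example, a server-side static cache can be as simple as a thread-safe dictionary wrapped around the configuration lookup. This is only a sketch (it reuses the hypothetical ConfigReader helper shown earlier) and, as mentioned above, it is only reliable where static state survives between plugin executions:

using System.Collections.Concurrent;
using Microsoft.Xrm.Sdk;

public static class ConfigCache
{
    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    // Reads the value from the cache, loading it from CRM only on the first request for a given key.
    public static string GetValue(IOrganizationService service, string key)
    {
        return Cache.GetOrAdd(key, k => ConfigReader.GetValue(service, k));
    }

    // Call this after configuration records change so stale values are not served.
    public static void Clear()
    {
        Cache.Clear();
    }
}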

Cheers!

 

Free Address Validation Tool for Dynamics CRM

We’ve developed a simple and free Address Validation add-on for Microsoft Dynamics CRM. This article presents the add-on and also provides a download link.

SharpXRM Address Validation is a lightweight add-on for Microsoft Dynamics CRM 2015 and 2016. As its name indicates, it is used to validate the address information of CRM records such as Contacts, Accounts, Leads or custom entities containing address information. It uses the Bing Location API to perform the address validation and, as such, we have very little control over the result(s) returned. It requires a Bing Maps API key, which you must generate from the Bing Maps portal.

The solution contains a web resource that has to be inserted on the form of the target entity and configured with the appropriate fields. For example on the account entity, you can add the web resource and configure it to validate the out of the box address 1 fields. The web resource adds a button on the form (see Image 1 below). Clicking on the button will launch the validation and proceed to present the result (see Image 2). When the results are displayed to the end users, they have an opportunity to make further edits before accepting the changes.


Image 1


Image 2

 

  • (1) Input address from the account record
  • (2) Validated address, editable (in case Bing doesn’t return all the information or other changes need to be made, for example to add back an apartment number removed by Bing)
  • (3) Confirm will save the information to the account and close the validation wizard

Want to give it a try? Download the Managed CRM solution and configuration guide.

Don’t hesitate to provide feedback @ contact@sadax-technology.com.

We also have a commercial version of the tool which uses a robust address validation API to correct single addresses, or multiple addresses in bulk using CRM workflows. For more information, reach out to us.

CRM Plugin Code Structure – By functionality or by event?

When we work on CRM plugins, there is a key decision at the beginning of the development cycle when it comes to structuring your plugin code. There are two main ways of writing your plugins: by events or by functionality. In this post, I will describe the two approaches and discuss some of their pros and cons.

Plugins can be described as handlers for events fired by Microsoft Dynamics CRM (source). There can be multiple handlers for a single event. Let’s use an example for the purpose of this post.

For the event, we’ll use the creation of a case (pre-operation). We will consider that when the case is created, there are two actions that need to be performed: generate and assign a unique case number, and validate that required fields have authorized values in them.

Plugin structured based on Events

For the plugin structure based on events, we’ll have a single plugin registered on the Case Pre-Create event, and inside the plugin class we’ll have two business logic blocks of code to execute (one to generate the unique number, the other to validate the required fields), as shown in the sketch below.
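
Here is a minimal sketch of what that single class could look like; the numbering scheme and the validation rule are placeholders for the real business logic:

using System;
using Microsoft.Xrm.Sdk;

public class CasePreCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        if (context.MessageName != "Create" || !context.InputParameters.Contains("Target")) return;

        var target = (Entity)context.InputParameters["Target"];

        // All business logic for this event lives in this one class, one method per feature.
        GenerateAndAssignCaseNumber(target);
        ValidateRequiredFieldValues(target);
    }

    private static void GenerateAndAssignCaseNumber(Entity target)
    {
        // Placeholder numbering scheme for illustration only.
        target["ticketnumber"] = "CAS-" + Guid.NewGuid().ToString("N").Substring(0, 8).ToUpper();
    }

    private static void ValidateRequiredFieldValues(Entity target)
    {
        // Placeholder validation: reject cases created without a title.
        if (string.IsNullOrWhiteSpace(target.GetAttributeValue<string>("title")))
        {
            throw new InvalidPluginExecutionException("A case title is required.");
        }
    }
}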


Plugin structured based on Functionality

In this case, we’ll have two plugin classes registered on the same event (again, one plugin for unique number generation and another to validate the required fields) – sorry, no screenshot here but you get the idea.

Pros and Cons

Functionality Based Plugins

  • Pros
    • Easy to see which plugin does what when looking at the Plugin Registration Tool (or similar tools) because the plugin names describe the features
    • Ability to disable specific features for various scenarios (e.g. disable system integration or auto-calculation plugins during data migration)
    • Unit tests can be written against a plugin class to test one functionality
    • Provides the ability to write generic plugins that a power user can register on multiple events
  • Cons
    • Large number of plugin classes
    • Large number of registered plugin steps; the registration can become big and hard to read/maintain

Event Based Plugins

  • Pros
    • Plugin registration is straight forward and easy to maintain
    • Easy to see the operations performed at each event when looking at the code
    • Plugin classes are standardized: always tied to an event, with a single point of entry for all developers to see the actions executed on the event
    • Unit tests can be written against specific Business logic methods and/or classes (even independently from the plugin execution context depending on how they are written)
  • Cons
    • Must use alternate mechanism to disable specific features if needed (e.g. configuration data element)
    • Possible timeout if executing multiple large business logic blocks (mind you, if you have a timeout in a plugin – which happens after two minutes, you probably have a design issue)
    • General dependency on having a developer around
      • Generic business logic code blocks can be written, but a developer is required to insert the business logic on additional events
      • To view all the operations performed for an event, a developer needs to look at the code

So which model should you use?

I have used both models heavily on various occasions. For me personally, the decision on which one to use comes down to a project’s needs and a coding style preference. As an example, if you know you’ll have a lot of generic features and want reusable plugins, you are better off using the functionality-based model. The same applies if you know there are processes that require disabling specific functionality on a regular basis. On the other hand, if your project is relatively small and includes simple business logic on a few events, the event-based approach can be a good choice. Feel free to add feedback in the comment section if you see other pros and cons for each of these models or want to share experience in the area.