This is one of those posts for my own bookmarking benefit – the following links give you the detailed breakdown on compatibility with Microsoft Dynamics CRM 2011 / 2013 / 2015.
And while we’re at it: Dynamics CRM Build Numbers
If you’ve got a Dynamics CRM 2011 / 2013 / 2015 on premise installation that isn’t performing the way you (or your users) think it should then this article gives you information on how to diagnose where the problem(s) lie and some suggested solutions. Some of these tips and tricks apply to CRM Online as well, so read on cloud customers.
The first thing you need to understand is: what is slow about your CRM, and how have you measured that? You need a handle on which aspects of performance are being reported as problematic if you’re going to solve them. Often a multitude of factors leads to poor performance, so it’s important to establish a baseline measurement of what the current ‘poor’ performance is, understand what acceptable performance looks like, and then work out what is causing the difference.
Following are some of the tools that I think are handy when troubleshooting performance problems with a CRM installation and they should be familiar to most developers and infralopers worth their salt.
I like to be ‘scientific’ in my approach and capture timings for operations using tools wherever possible: multiple test runs, using the average as the measure (but beware the magic of caching on secondary runs). The tricky issue is that end user perception of performance is often measured in feelpinions, so you need to get users to actually time their operations where possible. The good news is there are built-in tools that can help you with this.
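That measurement approach can be sketched as a small harness — a minimal sketch in Python purely for illustration, where the stand-in lambda represents whatever operation you’re actually timing (a page load, a query, an import):

```python
import time
import statistics

def measure(operation, runs=5):
    """Time an operation over several runs, discarding the first
    (cache-warming) run and averaging the remainder."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    warm_runs = timings[1:]  # the first run is skewed by cold caches
    return {
        "cold_run": timings[0],
        "average": statistics.mean(warm_runs),
        "worst": max(warm_runs),
    }

# Stand-in operation; in practice this would be the page load or
# query you are investigating.
result = measure(lambda: sum(range(100_000)))
```

Capturing the cold run separately is deliberate: the gap between it and the warm average tells you how much of the ‘slowness’ users report is really first-load caching.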
First up, have you run the Microsoft Dynamics CRM 2013 Best Practices Analyzer on your installation? If it highlighted any errors you should address those – they might not be having a direct impact on the performance of your system but you’re best to eliminate those before you get too far.
Also while you’re at it, how up to date are your patches? I know there’s times when you’re hamstrung and can’t apply the latest patch because of some #enterprisey reason but if there’s nothing stopping you, ensure you’re up to date.
If you’re not sure what patch level you’re at, check the CRM Build Numbers here.
Before you even try – are you using a supported browser?
From the Web UI perspective there’s some great built-in tools that can help you diagnose problems and measure performance.
If you go to http://&lt;CRM-URL&gt;/tools/diagnostics/diag.aspx you’ll see the page below. Click the Run button and you’ll get the results of some tests executed from the browser that reveal any issues between browser and server, or with the browser itself. Note that the URL is for the root of the CRM website, not for a particular organisation. This is a very handy way of getting end users to capture the performance of the CRM from their end and send it in to you. It also helps to run this at different times of the day if your scenario involves performance varying throughout the day.
As of CRM 2013 SP1 onwards there’s a new browser diagnostic tool in town. Hit Ctrl + Shift + Q in IE and you’ll see the following.
Now click the Enable button and load a form you’re trying to analyse. Hit Ctrl + Shift + Q again and now you’ve got an excellent breakdown of the form performance.
This can be handy to compare performance from different browsers / end-users, and also to see the impact your form design is having. Loaded up with a billion sub-grids? That’s a paddling.
I’ll keep this brief – review your form design carefully. Consider your end users and tailor the form layout appropriately; you might not need all the fields on there all the time. Maybe designing a lightweight form that is used 90% of the time and allowing the user to switch to the detailed form for the remaining 10% leaves users more productive compared to one gigantic form trying to be everything to everyone. Likewise, remember to consider end user roles and segregate different form layouts that way. Yes, it leads to a bit of extra development and maintenance, but if it results in a more useful form that performs better, the system is more likely to be used than a slow-as-molasses form loaded with a billion sub-grids that might come in handy.
CRM comes with a great web services API that allows integration by other systems. A pattern I’ve seen often involves getting .NET developers to write a simplified set of web services that conform to organisation specific data models, acting as a wrapper to the CRM. This simplifies integration and transforms CRM objects into the required models; it also provides a bit of abstraction so you can minimise disruption if you upgrade the CRM installation later. Sounds awesome, and you can bash out some code pretty damn quickly that gives you the desired results using LINQ. Like Peter Parker’s Uncle Ben said – with great power comes great responsibility. Getting code-complete quickly doesn’t mean you’ve written an efficient system. If you’re writing queries against the OData service, make sure you only request the attributes you actually need, push filtering to the server rather than doing it client-side, and page large result sets instead of pulling everything at once.
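To make that concrete, here’s a hedged sketch (in Python purely for illustration; the endpoint host and the chosen columns are placeholders, though `$select`, `$filter` and `$top` are standard OData query options) of building queries that ask only for what they need — the same principles apply whatever language the caller is written in:

```python
from urllib.parse import quote

# Hypothetical endpoint - substitute your organisation's OData service URL.
BASE = "https://crm.example.com/XRMServices/2011/OrganizationData.svc"

def build_query(entity_set, select=None, filter_expr=None, top=None):
    """Build an OData query URL that asks only for what it needs:
    $select trims columns, $filter pushes work to the server,
    and $top caps the result size."""
    options = []
    if select:
        options.append("$select=" + ",".join(select))
    if filter_expr:
        options.append("$filter=" + quote(filter_expr))
    if top is not None:
        options.append(f"$top={top}")
    return f"{BASE}/{entity_set}?" + "&".join(options)

# Fetch just two columns of active accounts, capped at 50 rows,
# rather than every attribute of every record.
url = build_query(
    "AccountSet",
    select=["Name", "AccountNumber"],
    filter_expr="StateCode/Value eq 0",
    top=50,
)
```

The difference between this and an unconstrained “give me all accounts” query is often the difference between kilobytes and megabytes per call once your record counts grow.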
CRM has a number of built-in maintenance related jobs that perform things such as index recalculations. By default these kick off once a day roughly around the time that the CRM was installed. Which is usually business hours. An excellent tool to review these schedules and change them to a time of less interruption to the user is the CRM Job Editor – with editions for CRM 2011 / 2013 / 2015.
What seems to be the average memory use and CPU usage in perfmon (or for more manager friendly graphs Resource Monitor)?
If you’ve got multiple CRM servers deployed behind a load balancer, are you able to side step the load balancer and browse the CRM server directly (from the end user desktop environment), and does this make any difference to the performance? If it does then check out the NLB configuration for any issues.
What does the performance of the CRM Web UI seem like from one of the servers itself (via Remote Desktop)? How does it compare?
Are the Windows Event Logs full of errors or other information indicating problems with disk, networking (connectivity, DNS), authentication or other critical things?
What’s the network topology? When you’re browsing the CRM Web UI and getting your own feel of system performance, are you doing it only metres away from the data centre, while the users complaining about performance are in some regional office connected over a wet piece of string? If the performance symptoms being complained about seem to be geo-specific, replicate testing from their end as much as possible (see the built-in diagnostic tools in the Web Interface section).
Have you got latency issues between end users and your CRM servers? CRM can be a bit ‘chatty’ and this can cause you pain over a connection with high latency (e.g. think London to Canberra). In some organisations I’ve seen better performance through Citrix because the browser to CRM server chattiness occurred “locally” within the same data centre. Your mileage will vary and so will the tools at your disposal to tackle this.
Double check that Dynamic Compression is enabled for your CRM website in IIS 8 / 8.5. After you’ve done that, check the outputCache setting for omitVaryStar as per this CRM Tip of the Day. Yes it applies to IIS 8.5 as well, and it’s not just a CRM issue – it affects all sites hosted in IIS. Without this setting you may find that output isn’t being cached by the browser properly which causes a performance drag by making additional requests for content upon page load. Be sure with this one to test before / after with browsers that have had their cache emptied (then load a few different forms) to measure performance difference.
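One way to verify the before/after difference from the client side is to inspect the response headers your browser (or a scripted request) receives — a minimal sketch; the header values below are illustrative, not captured from a real CRM server:

```python
def check_response_headers(headers):
    """Inspect a captured set of response headers (a plain dict) and
    report whether the body was compressed and looks browser-cacheable.
    A 'Vary: *' header prevents browsers caching the response - the
    symptom the omitVaryStar setting addresses."""
    encoding = headers.get("Content-Encoding", "").lower()
    return {
        "compressed": encoding in ("gzip", "deflate"),
        "cacheable": ("Cache-Control" in headers or "Expires" in headers)
                     and headers.get("Vary") != "*",
    }

# Illustrative headers, as copied from a browser's network tab.
sample = {
    "Content-Encoding": "gzip",
    "Cache-Control": "private, max-age=3600",
    "Vary": "Accept-Encoding",
}
report = check_response_headers(sample)
```

If `compressed` comes back false, revisit the IIS Dynamic Compression setting; if `Vary` is `*`, the omitVaryStar tip above is the one to chase.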
Of course Dynamics CRM performance is heavily dependent on the performance of the underlying SQL Server installation. So, have you run the SQL Server Best Practices Analyzer (2012 edition)?
SQL has some excellent statistics that it keeps regarding costly queries, index usage and other tuning related things. From a CRM perspective these can help reveal what your most costly queries are. This article from Nuno Costa does a much better job of explaining these than I can, so check it out.
If you’re doing a lot of queries that involve WHERE clauses or ORDER BYs on custom attributes, chances are you can benefit from an index on those attributes – particularly if the number of records is large. Adding an index to the SQL table is the only supported change you can make to the SQL database. Things you need to consider: how unique is the data? How often are you reading from it versus writing to it (inserts, updates)? The cost of an index comes in recalculation as you make changes. These days this calculation happens ‘online’ and doesn’t block, but it still taxes CPU and memory of course.
But how will you know which attributes need indexes?
Run queries similar to the ones that are performing slowly, directly in SQL Server Management Studio and make sure to include the execution plan. SQL will tell you the cost of the query components and reveal if an index would benefit that query.
What if I’m a CRM Online customer?
If you put a support call in to Microsoft you can have them add the index for you.
I’m yet to come across an update beyond the CRM 2011 version, but there’s an extensive performance and optimisation whitepaper from Microsoft and a lot of the same principles still apply.
There’s a lot of moving parts to Dynamics CRM. Every performance analysis will be different as context is everything for an on-premise installation. However I hope you’ve found this a helpful overview of some important things to be aware of with regards to Dynamics CRM performance optimising and troubleshooting.
A while ago I put together a website that explains popular cloud services in an introductory manner with the aim of letting people read about some of the tools that are out there and decide if they can use them for their business (or life). The content isn’t for the tech savvy, but hopefully it’s still helpful to people on the learning curve. So if you’re interested in finding out more about cloud tools and software for productivity and general business, why not check out freemiumtools.com.
The recent Kickstarter campaign for Anonabox proved to be insanely popular but ultimately unsuccessful after the campaign was suspended by Kickstarter. If you’re unfamiliar with it, it was a small network device that you could plug into an existing network connection and then either use the provided Wi-Fi hotspot or plug in another network cable, and all traffic routed through the device would go over Tor. The idea behind the device is to make it dead easy for people to use Tor holistically, preventing software from making direct requests to the internet. The other advantage is that it would work without particular applications having to be aware of Tor or configured to use it. Just plug it in and go.
I think the intent behind this device is great but there’s a bunch of things people forget and confuse when it comes to online privacy, anonymity and security. I think the danger with a device like this would have been lulling users into a false sense of security, or illusion that they couldn’t be traced / tracked / monitored / discovered / whatever it is they thought they were achieving by using this. This prompted me to think about a bunch of things and following is my brain dump on what some of the differences are between privacy, anonymity and security – why you might want to pursue any or all of these online. This is stuff I’ve been mulling over for months after reading books like Black Code, No Place To Hide and following the details of the Snowden revelations as well as the metadata collection debate currently happening in Australia.
Let’s start with online security. This covers your basic approach to doing things securely on the internet – ensuring you use a password manager to enable strong, unique passwords on each and every site you use; using anti-virus software and personal firewalls; enabling two-factor authentication and using extensions such as HTTPS Everywhere to enforce the usage of https on websites. You make a conscious effort to avoid using insecure websites and applications that do dumb things like email you your password, don’t use https, or have stupid password restrictions, and so on.
More advanced approaches to security include encrypting communications (instant messaging, text messaging, emails), encrypting files and whole drives (computer, smartphone).
Why do we do these things? We do these things to avoid having our accounts compromised, money stolen, identity revealed (more on this later), personal information leaked – including nude selfies. The consequences of these things ranges from pain in the arse to major impact on our lives.
Privacy is about controlling information about yourself – consenting to provide that information, understanding how it will be used, and knowing your rights regarding deleting that information and how long it is stored for. Clearly a major trend of the last decade is the erosion of our privacy in the online world through constant mishandling of our personal information, leading to leaks.
This is not to be confused with scenarios where we opt in to applications to receive a benefit – providing personal information when there is a net benefit in applications like Facebook. These systems work because without opting in and providing your information you won’t be able to establish the connections with your friends. Essentially you get to use this application for free because you’re providing personal information. That the application aggregates all of this personal information and uses it for marketing purposes is something (most of us) are consciously aware of and acknowledge. The benefit we receive is worth it to us.
What complicates privacy is that there are many different facets of information about ourselves, and who we want to reveal each of them to is very granular.
Anonymity is about the right to pursue your life anonymously without having to provide identifying information. In an online world this means the ability to use pseudonyms and non-identifying information when interacting with applications and other users on the internet.
A disturbing angle relating to anonymity is the practice of having your online habits tracked across multiple sites over a period of time through advertising networks. While we can read and agree to privacy statements of individual sites and receive a pretty obvious benefit in return for providing some information to the Facebooks of the world, it’s less obvious the benefit we get from being tracked. ‘More targeted advertising’ is usually the result, but for most people that’s a pretty dubious benefit. It’s great for business, but not the individual.
Trying to be anonymous on the internet can include trying to opt out of or actively block this kind of tracking (if you’re interested, check out Disconnect.Me). Modern browsers have privacy modes that attempt to limit some of this, but they’re really only a quick and convenient way of browsing a few sites you don’t want to appear in your local history. They’re not known as porn mode for nothing. These browser modes do nothing to prevent your requests from being monitored by your ISP, tracked by the servers you’re requesting information from, and more.
So what about Anonabox?
Anonabox seemed to be popular because it makes using Tor easier. Tor enables you to cloak details about your web requests. Requests are routed through the Tor network rather than straight out from your ISP. The Tor network is a series of nodes around the internet that bounce requests between them. The idea is that you’re making it harder for people at the remote end to trace back to you, and you’re also disrupting people who may be monitoring your traffic (ISP, government, local Wi Fi snoop).
I think the main demand for this box (in the campaign) has come from people who are aghast at the Snowden revelations and want to stymie mass surveillance of the internet by governments. But I think that’s flawed – in that I don’t think the Anonabox is the panacea it seems.
The accusations against the Five Eyes governments involve mass surveillance of the internet with the (begrudging?) cooperation of telcos and internet companies who provide these services. Using Anonabox or Tor only scrambles your network traversal. They still have access to your information either straight from the pipe or from the company itself. Furthermore, using a normal browser with Anonabox still means you’re subject to the same advertising based tracking, and so you’ve defeated nothing. (To be fair, they recommended using the Tor Browser Bundle in conjunction with Anonabox.)
Whistleblowers and people who face persecution in their country (for sexual orientation, political or other reasons) are pretty serious about security, privacy and anonymity. In order to keep themselves safe they have probably already researched effective ways to keep their identities hidden. While Anonabox is trying to make this easier, without the proper education of users there is still a risk that they make mistakes that reveal their identity or personal information.
I don’t think there’s a single easy solution to trying to achieve anonymity and maintain full privacy and security online. Anonabox looks like a step in the right direction but seems at risk of giving people a false sense of security that they are totally anonymous and private on the internet.
As the technology for driverless cars continues to improve we inevitably approach the point where we want this amazing technology to go mainstream. There will be resistance to allowing these cars on our roads on a number of fronts, though.
At some point in the future people will die or be seriously injured as a result of a driverless car accident. To think that this will never happen is ridiculous. However this shouldn’t be the first thing we think about. People seem to be afraid of introducing driverless cars because they don’t know who to blame, or hold accountable when this happens.
At the moment we have a system where drivers are licensed and held responsible for their actions. Car owners must register their car and meet regulatory requirements to ensure their car is roadworthy. Car manufacturers are accountable for the quality of the cars they produce in regards to meeting safety standards and legislated requirements. And governments are accountable for the system of road rules and safety standards that cars must meet. Obviously the only thing that changes in the driverless car scenario is the removal of the driver.
A driverless car is still owned and registered by someone – they are the ones who would be accountable for the actions of their car. Manufacturers could provide some kind of surety / guarantee about the quality of their car, and perhaps provide liability insurance or protection on behalf of the owner in an attempt to sell the car and assure them it was safe.
The frustrating thing is that this ‘problem’ which will delay the introduction of driverless cars is a problem of skewed perception. I am confident that the introduction of driverless cars will dramatically reduce the number of deaths and injuries on our roads. We don’t need these cars to be perfect, they only need to be better than the current system of human driven cars.
Currently we measure the number of people who die on Australian roads each year in the hundreds. This is way too high, and that figure doesn’t include the thousands of people who are seriously injured as a result of road accidents. If the introduction of driverless cars cuts this in half – wouldn’t that be amazing?
For a few years now, COTS based solutions have been all the rage – taking an existing off the shelf product and configuring and customising it to meet your organisation’s needs rather than build from scratch. Stand on the shoulders of giants! This article does a pretty good job of articulating the pros and cons of using Microsoft Dynamics CRM 2011 / 2013 as that platform for your organisation’s needs.
What started as an exercise in skills refresh for me has finally grown into a side-project that went live today: Squeegee
A personal finance software SaaS product. Yes there’s a thousand of those out there already, but I used this as an exercise in learning how to take an idea all the way to fruition. There’s clearly more work to be done, but it’s a satisfying milestone to reach.
It would be great if you could check it out and let me know what you think!
Prime Minister Tony Abbott and Attorney-General George Brandis today announced a new government initiative known as Operation Data Protect. This nationwide program will act as a data backup service for the nation, relieving the millions of Australians with connections to the internet from having to worry about safeguarding their data.
Mr Abbott revealed that the scheme would be rolled out later this year and would retain two years’ worth of data for every internet connected Australian. “People are afraid. Their precious memories are only a dropped laptop or stolen mobile phone away from being lost forever.”
Senator Brandis told the media contingent that after reviewing commercial services currently available, the Government had decided to step in. “Some of these services aren’t even located in Australia. There was a real risk that data was being sent to ‘the cloud’. That’s not even a country.”
“We simply could not stand by while a generation of Australians lost their documents through a lack of a backup” added the Prime Minister. In addition to backup, the service will index all the data stored so that users can easily find their files when they need to. “Handily this helps copyright owners check their records of ownership against the backed up data so we can ensure that they have received every last cent they are owed” confirmed Senator Brandis.
A glossy brochure was distributed at the press conference, listing some of the other benefits of the service. In the future, people applying for public service positions will be able to let the government simply refer to their indexed backup instead of having to complete arduous selection criteria. When asked about the whereabouts of Communications Minister Malcolm Turnbull, Prime Minister Abbott informed the press conference that he was “out negotiating a great deal on the hard drives required. Malcolm practically invented hard drives in Australia and knows a fair price when he sees it.”
Questioned about the security of the data stored on behalf of Australians, Senator Brandis said he would use a really strong password – “with those funny characters and everything” and would keep this written inside the cover of a random book on his parliamentary bookshelf. “I can’t give away which one, but it rhymes with twine-teen gatey-floor” said the winking Senator.
I have a number of domains I originally registered with GoDaddy and I’ve finally dumped them. I’ve transferred both the DNS hosting and the registration over to DNSimple. I got sick of being sold dumb shit, the piss poor web interface, and the generally sleazy marketing at GoDaddy. They were damn cheap, I’ll give them that.
If you want to do something similar, here’s an overview:
The general process is: unlock each domain at GoDaddy and disable WHOIS privacy so the transfer can proceed, grab the authorisation (EPP) code for each domain, start the transfer from the DNSimple side and enter the code when prompted, then approve the confirmation email and wait for the transfer to complete.
It is that simple. But why DNSimple? A nice, clean UI. A simple pricing structure (yes, dearer than GoDaddy, but worth it). Two-factor authentication. And you can still have WHOIS privacy to obscure your details from the public registers.