Intranet analytics

Get started with intranet analytics

By Marcus Österberg
Originally published in Intranets ‐ handbook for intranet managers

Introduction

The time when major design decisions were made based on subjective measures, like someone's personal taste, is behind us. At least it should be. Now you can, and should, evaluate almost everything based on data. Previously performed tests, and the insights gained from working with analytics, are now readily available to intranet teams. Instead of attending pointless meetings about what shade of blue you ”like”, you can evaluate alternatives based on the real users of your intranet. Users, through their clicks, provide signals on what works, how they find things and which routes they take.

This chapter is about how to get started with using analytics in an initial review of an intranet. Often no one knows how useful most of the intranet content is, and little effort goes into measuring the intranet's impact on business performance beyond noting the number of users. For many teams the reason is a lack of training in basic analytics, and uncertainty about which of all the available statistics to use.

This chapter is a guide for those who want to start evaluating their intranet in a structured way and set tangible business goals for measuring what success looks like on their intranet.

About intranet analytics

The intranet is not just a collection of pages, images and documents. Rather, it is there to support employees in efficiently carrying out daily work tasks, such as reserving a place on a training course, filing an application for leave, or finding the phone number of a person in the human resources department. Working with web analytics for an intranet is about improving or simplifying activities like these, activities that the intranet should support. Intranet analytics is not about accumulating vast amounts of data; it is about using the data to gain insight into the users' experience of the intranet, with the intent of improving that experience.

Wikipedia gives a brief description of analytics on a public website:

”Web analytics is the measurement, collection, analysis and reporting of web data for purposes of understanding and optimizing web usage.”

It is easy to focus on analyzing the collected traffic data; however, all kinds of tools can contribute to an overview of your intranet's performance and should be included in the concept of intranet analytics. Problems such as not fulfilling accessibility requirements, or using unnecessarily high-resolution images, may be easy to identify without using statistics at all.

KPIs and metrics

A Key Performance Indicator (KPI), for intranets sometimes simply called a ”metric”, is a measurable value that demonstrates how effectively business objectives are being attained. Organizations use KPIs to evaluate their success at reaching targets. There are two fundamental categories of KPIs and metrics, and we need them both to understand the big picture: quantitative metrics, which your statistics tool gathers, and qualitative metrics, such as users' subjective opinions.

Your intranet should have a regular, ongoing review process, and a major review should take place if you are considering an upgrade or a change of platform. Reviews are often called ”content audits”, although their conclusions are not restricted to the quality of content. When reviewing the current intranet, both quantitative and qualitative measures can be used to find out what is working and what needs to be improved. The basic level of review should enable you to split content into two groups: the useful pages that have obvious value to actual users, and all the rest, with at best dubious value.

When looking through the content there are some things to consider when deciding which group the content belongs to:

Content that does not relate to these points is often distracting, and it should probably be removed, or at least hidden out of sight and not indexed by your search engine. Do define requirements for new pages: each should have a clearly displayed call-to-action and an explicit link to the goals for the intranet (addressed later in this chapter).

Glossary – Call-to-action (CTA)
The element on a page that the user is supposed to interact with, and whose use confirms that the session is successful. It can be a button, a link, or whatever the reason for the existence of the page is. A CTA is how the designer of an intranet tries to walk the user through an activity, perhaps one small interaction at a time.

Using analytics as part of your intranet review process

For analytics to be meaningful you first need to define what is good: good content, successful task completion, good quality, and what is good for the business. These are the things worth analyzing. Otherwise you risk working to improve things that lack the prospect of worthwhile improvement.

The working process for intranet analytics is as follows:

  1. Develop business goals or evaluate existing business goals
  2. Produce reports and methods for analyzing KPIs and selected metrics
  3. Analyze!
  4. Improve the intranet based on any findings in the analysis (and then iterate)

Image 1: Begin by defining your business goals

Start by developing the business goals. You are never done: begin the second iteration by reflecting on whether the measurability of the business goals can be improved.

Now is the time to go through each step in this working process and examine what is worth thinking about.

1. Working with business goals

In 2016, a mind-boggling 31% of intranet managers (according to Web Service Awards) still had no clear objective for their intranet, and 63% said they had no defined objectives for the intranet's sub-pages.

Obviously, it is difficult to know how useful the intranet is if you do not know what it's supposed to achieve. The very first step of intranet analytics is to set down a list of goals.

Avoid vaguely worded wishes such as ”enable findability” or ”keep employees informed with news”. Instead, stick to concrete activities and tasks that employees need to perform to get their job done and bring value to the business; good examples include reporting time worked, updating a skills profile, or submitting a request for absence.

Management documents, such as the organization's vision, plans and budget, often provide already defined objectives and KPIs (Key Performance Indicators) that can be reused or adapted as measures for your intranet. Break each goal down into something that can be isolated and measured as part of the user experience of the intranet. A challenge may be to align the measurable business objectives with the long-term objectives; it is important to try to anticipate the long-term consequences of the goals we define, in order to avoid negative side effects.
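
To make goals operational, it can help to write them down in a form that can be checked automatically. Below is a minimal sketch in Python; the goal descriptions, metric names and target values are hypothetical examples, loosely based on goals discussed later in this chapter.

# A sketch of breaking business goals down into checkable KPIs.
# All goal texts, metric names and targets are hypothetical examples.
kpis = [
    {
        "business_goal": "Employees succeed in submitting leave applications",
        "metric": "leave_form_completion_rate",  # completed / started
        "target": 0.90,                          # at least 90% should succeed
    },
    {
        "business_goal": "IT error reports arrive via the intranet form",
        "metric": "it_reports_via_intranet_share",
        "target": 0.50,                          # at least every other report
    },
]

def evaluate(kpi, measured_value):
    """Compare a measured value against the KPI target and print the outcome."""
    status = "met" if measured_value >= kpi["target"] else "not met"
    print(f"{kpi['metric']}: {measured_value:.0%} (target {kpi['target']:.0%}) -> {status}")

evaluate(kpis[0], 0.75)  # e.g. only 75% complete the leave form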

In addition to the business objectives, it can be beneficial to review other metrics, such as the number of page views per visit; they may tell the back-story of why the objectives performed the way they did. However, most generic metrics struggle to prove what value they indicate: is the result unequivocally good or bad? These are not where you should put most of your effort. Instead, they help you investigate things afterwards and gather data that helps you tell the users' story when reporting.

The number of page views, unique users, sessions, and the like provides no evidence of increased business performance. These may be nice anecdotes between us intranet geeks, but they give very little insight into whether the intranet is successful. They make for personal glorification and a good gut feeling, but not great business!

Glossary – Segmentation
A group of users defined by one or more common properties. Used to filter statistics, to look at a certain group's commonalities, and even to compare differences between two or more groups. A segment could be those who entered with a mobile device, those who began their visit at a specific landing page, or users in a particular geographic location.

About segmentation and why it matters

Image 2: The fact that an average user (the grey circle) has 10 page views per visit might not be that insightful if actual users are typically split into two groups with either 1 page view or 20.

Segmentation is important because of the risks of looking at average values. Average values are sometimes not representative of the majority of users. An average value is meaningful only if the data roughly follows a normal distribution (as illustrated in the diagram), meaning that most observations lie predictably close to the average, with relatively few surprising deviations.

Image 3: Illustration of a normal distribution. Most data is predictably similar to the average value.

In other words, the average value of ”10 page views per visit” is not that meaningful if about half of the users have only one page view and the rest have 20; the average then does not describe reality. More interesting is the observation that there are at least two very different groups in the data.

To get clarity about which groups you need to inspect, use segmentation to divide the data into different groups that can be explored separately. The idea is to filter out a subset of the users and see how they perform and whether they stand out in some way. Segmentation makes you aware of differences in how each segment behaves, depending on where the users are located, their profession, which sub-pages they have visited, and so forth.
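
As a concrete illustration, here is a small Python sketch with made-up session data and field names, showing how an overall average of 10 page views per visit can hide two very different groups:

# Made-up session data; the field names are assumptions about your export format.
from statistics import mean

sessions = [
    {"device": "mobile",  "page_views": 1},
    {"device": "mobile",  "page_views": 2},
    {"device": "desktop", "page_views": 19},
    {"device": "desktop", "page_views": 18},
]

# The overall average hides the two very different groups.
print("overall average:", mean(s["page_views"] for s in sessions))  # 10

# Segmenting on a shared property reveals them.
for device in ("mobile", "desktop"):
    segment = [s["page_views"] for s in sessions if s["device"] == device]
    print(device, "average:", mean(segment))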

Using segments

When reviewing your intranet using analytics, it is a good idea to focus on a few segments per repetition of the analysis. One segment could be users who have only a single page view: inspect where those sessions take place. Another could be those who use the intranet on a daily basis: what differentiates them from monthly users, for instance? Yet another may be those who repeatedly make only a single page view. Looking at these different usage patterns could lead to some significant improvement actions. If, for example, you conclude from your analysis that the single-page-view segment has simply given up on the intranet, a targeted campaign could be implemented, aimed at convincing them to try out some new features.

As mentioned previously, average values do not always provide a particularly accurate picture of reality, or actionable insights. The majority of users may have few usability issues; the aim of segmentation is to reveal the portion of users who do have problems. With intranet analytics, you can manage several user groups simultaneously in a structured way.

Examples of business and operational goals measurable on an intranet

Content published on the intranet needs to align with business goals and the intentions we have for the intranet. It is not nearly enough to assert that something is of general interest, or to keep treating the intranet as an information dump just because you like producing content. Below are some examples of business goals for an intranet:

As you will notice, these examples of business goals are not limited to what your statistics tool captures. To measure real business goals, you have to work both with quantitative metrics (those you gather in your standard intranet statistics tool) and qualitative metrics (such as users' subjective opinions). Depending on the business goal, you might also need to look at other systems.

Regarding accessibility and a measurable way to evaluate it, the design should adhere to accessibility guidelines. One such guideline is WCAG 2.0 at level AA, which in the EU seems set to become a mandatory requirement for governmental intranets in 2018.
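
Some quality checks are easy to automate. The Python sketch below flags images that lack alt text on a single page; it covers just one of the many WCAG requirements, it assumes the beautifulsoup4 package is installed, and the URL is a placeholder for a page on your own intranet:

# A single small quality check, not a full WCAG audit: list images
# lacking alt text on one page.
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://intranet.example/start").read()
soup = BeautifulSoup(html, "html.parser")

for img in soup.find_all("img"):
    if not img.get("alt"):  # attribute missing or empty
        print("Image without alt text:", img.get("src"))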

2. Producing reports and methods for analyzing KPIs and selected metrics

If you manage an intranet, you probably have access to some standard reporting tools. In order to monitor how well the intranet delivers on goals and metrics, we need to gather appropriate data using these tools. It is almost certainly necessary to tune your reporting tool so that your reports actually measure your business goals. When adjusting your standard reports, it is good to collect data that tells the story around the goal, so that we learn why something performed better, not only that it did.

It may prove useful to work with a web developer until you have developed some knowledge of the systems involved. A large part of the benefit of working with reports and methods lies in the learning process: working with the system itself to get to know the users' behaviors and needs makes each iteration of this work begin on a new, higher level.

There are a number of widely used techniques you can use to improve how your reports are designed, and we go through some of them below.

Conversion funnel

Image 4: Funnel visualization in Google Analytics. Only 2.56% followed the desired path, which indicates room for improvement; probably we are distracting users with more tempting content.

A conversion is when a user action results in a measurable contribution toward a business goal, for instance signing up for the corporate newsletter (business goal: x% of intranet users subscribe to one or more newsletters) or completing a self-service process (business goal: at least every other error report to the IT department should be made through purpose-built intranet forms, and less than 10% by email). The most common method to visualize a multi-step process is probably the conversion funnel. It is particularly well suited to showing where users fail in a process with several stages. You define a starting point and then measure the click-through rate at each step, that is, how many users you lose on the road toward the goal.
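
If your statistics tool does not draw funnels for you, the arithmetic is simple enough to do on an exported click log. The following Python sketch uses a made-up log format and step names, and it ignores the order in which steps were reached, which a real funnel should respect:

# Funnel arithmetic over a made-up click log.
steps = ["leave_form_start", "leave_form_details", "leave_form_submit"]

# Which steps each user reached, e.g. parsed from a page view export.
user_steps = {
    "user1": {"leave_form_start", "leave_form_details", "leave_form_submit"},
    "user2": {"leave_form_start", "leave_form_details"},
    "user3": {"leave_form_start"},
    "user4": {"leave_form_start"},
}

previous = len(user_steps)
for step in steps:
    reached = sum(1 for visited in user_steps.values() if step in visited)
    rate = reached / previous if previous else 0.0
    print(f"{step}: {reached} users ({rate:.0%} of the previous step)")
    previous = reached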

Glossary – Click Through Rate (CTR)
Click-through rate refers to the percentage of users who click on a particular link in a search result, or who follow the call-to-action placed on a page. The goal is a high and predictable CTR, indicating that users understand what they see on the screen.

A low click-through rate indicates a need to improve usability at the point where many users disappear. Or perhaps they get cold feet because of some poor design decision you need to fix.

Think about the goal that at least 90% should succeed in submitting their application for leave. If 25% drop out at the first step of the process, we have to figure out what caused it or, possibly, set a more reasonable goal that we are actually able to achieve.

It is particularly interesting to create a segment of users to find out where we are failing them. The segment ”those who do not complete all the steps in the conversion funnel” might reveal an interesting pattern, something you are able to alleviate. Are there any common denominators where users fail or flee? Where do they go? Is there an indistinct call-to-action that users do not understand, see or discover?

A conversion funnel does not require many steps to be suitable as a visualization; it might be self-explanatory when presenting the results to internal stakeholders. A simple use case is to show the share of search engine users who actually click on any of the results, how many times they rephrase their search terms, and how big the drop-off is. It could be one of several means to evaluate whether an enhanced relevance model for the search engine has been an improvement according to its users.

A conversion funnel is not limited to the intranet; it could just as well measure the impact of an email newsletter or even something in the real world. One thing is certain though: do not report raw data or isolated numbers, so-called ”data puking”. That ought to be the last option to try on stakeholders. Few are interested in data alone, and even fewer actually understand it. You are supposed to tell the story of the data and visualize the impact. Conversion funnels are in a way the opposite of data puking, since they tell a story visually, reminding you to explain where the users end up and your hypothesis on why.

A / B testing to compare two different design options

A / B testing is a method to see which of two alternative designs works best for real users. For a limited time, two options are presented to users (these could be alternate versions of a whole page design, a form, or just a page title). The options are randomly distributed to users, and afterwards you inspect how the two options performed. If a winner emerges, that is the one you keep on the intranet (until a new challenger appears).

To be successful, you need to be able to answer three questions about your A / B testing:

An example of an A / B test on an intranet is to segment users connecting via a desktop computer and evaluate two design decisions:

In a less ambitious scenario, A / B testing can evaluate which wording users respond to. It is often used to settle arguments when writing or designing: let the users decide!

Try to design the test so that each user is served the same option every time; it can be confusing, and may impair the test's outcome, if a user gets different versions during the test period. This is a good time to talk to a web developer: besides offering them cookies, you will probably need to talk about browser cookies :)
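
One way to achieve this, sketched below in Python, is to derive the variant from a hash of a stable user identifier, so the same user always lands in the same bucket; whether that identifier comes from a login name or a cookie value is an implementation choice, and the test name is illustrative:

# Stable variant assignment: hashing a user id means the same user
# always gets the same option, with no extra state to store.
import hashlib

def assign_variant(user_id: str, test_name: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("anna.svensson", "password-page-title"))  # same every run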

As with the conversion funnel, A / B testing is a widely used technique. In marketing communications it is very common to prepare and send two different messages, such as offers of different kinds. Examples of use outside the intranet can help convince stakeholders that the technique is useful for the intranet as well.

Multivariate testing to evaluate multiple changes at once

If you want, and dare, to make things a bit more complicated, you can perform so-called multivariate testing. It involves testing several adjustments simultaneously in competing packages; essentially it is a mix of other methods, mainly A / B testing.

An example is when you are designing your very first landing pages for the intranet. Which of the following options works best?

The option to continue working with is, of course, the one with the highest click-through rate on its call-to-actions. The first round of this test would answer how many call-to-actions users can manage on a single page; depending on the result, you can design the next round to optimize further.
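
In practice, a multivariate test means generating every combination of the factors you want to vary and serving each combination as its own variant. A Python sketch with hypothetical factors could look like this:

# Building the competing packages of a multivariate test. The factors
# and their values are hypothetical; every combination becomes one
# variant to serve and measure, so keep the number of factors small.
from itertools import product

factors = {
    "cta_count": [1, 3, 5],
    "title": ["Forgot your password?", "Get a new password"],
}

variants = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, variant in enumerate(variants):
    print(f"variant {i}: {variant}")  # 3 x 2 = 6 packages in total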

In this method, too, we need segmentation to isolate subsets of the gathered data, perhaps primarily to tell the story around the test and explore which segments of users were outliers and which performed according to our pre-stated hypotheses.

Checking other quality factors

Not all objectives can be measured with the data collected in your statistics tool; it is not solely about users' clicks and behavior. We need to evaluate other forms of quality indicators as well. Below is a list of quality indicators (not drawn from statistics tools) that a communications department can use to measure quality:

Using search engine data

Image 5: An easy test you can perform on your own intranet is to search for something that is not supposed to give any results, as in the illustration above.

Where I work, we have about one million pages and documents in our search index. With the help of the search engine we can, behind the scenes, compile a lot of interesting statistics. For instance, the search statistics show that 89% of the intranet lacks keywords. In other words, 9 out of 10 pages and documents are not really competing in the battle to reach the top of the internal search engine. No wonder the search engine has a hard time figuring out what is most relevant for some queries.
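
If you can export the search log as query and hit-count pairs, a few lines of Python are enough to surface the most common queries and the zero-hit rate; the log format below is an assumption:

# Top queries and zero-hit rate from an exported search log. The
# (query, hit count) format is an assumption; most search engines can
# export something similar.
from collections import Counter

search_log = [
    ("leave application", 12),
    ("parking", 0),
    ("leave application", 12),
    ("expense report", 7),
    ("parkin", 0),
]

query_counts = Counter(query for query, hits in search_log)
zero_hit_searches = [query for query, hits in search_log if hits == 0]

print("most common queries:", query_counts.most_common(3))
print("zero-hit rate:", len(zero_hit_searches) / len(search_log))  # 0.4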

For those of you who do not already have software that can inspect the whole intranet, or at least large parts of it, I would like to suggest some tools you can use yourself or together with a more technically inclined colleague. Search Engine Optimization Toolkit is an official addition to Microsoft's Windows Server. This server version of Windows is already present in virtually every major organization and is probably not uncommon among developers in your vicinity.

Image 6: Summary report from SEO Toolkit, listing violations in different categories regarding SEO and content quality.

Other sources where you can find qualitative or quantitative data are, for example:

Image 7: A simplified edition of Kibana as a search analytics tool displaying search's click-through rate, volume of searches, popular queries, zero hits and more.

The next step is of course to look at, compare and analyze all the data you have gathered.

3. Analyze!

The objective of the analysis phase is to understand why users behave in a certain way, or think the way they do. This is where you identify the obstacles that prevent users from carrying out activities, and what you can do to remove those obstacles and improve efficiency, conversion and usability.

The first thing you need to consider in your analysis is whether you have enough data to draw conclusions. Even if you find that you have too little data for large, far-reaching conclusions, your data can suggest how to transition the test to a larger scale.
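
A rough way to check whether a difference, say between two A / B variants, is larger than chance is a two-proportion z-test. The sketch below implements one in plain Python with made-up figures; for decisions that matter, confirm the approach with a statistician:

# A two-proportion z-test using only the standard library. conv_* are
# the numbers of converted users, n_* the sizes of each group.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference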

Do not get stuck in this phase; it is easy to find that you have spent more time analyzing than improving the intranet.

Be aware of any seasonal variations that can affect the results of your review. There are peak times during the year for specific content and support. For example, it is common for people to forget their passwords over the summer holidays, so demand for replacement passwords peaks when people return to work. This is an example of how intranet analytics can support the priorities of the editorial work on the intranet. If people have extraordinary difficulty remembering their passwords at the end of the holiday season, it is probably efficient to introduce some well-placed support information about what to do when one has forgotten one's password.

If you are going to place this seasonal content, it could be an opportunity to run an A / B test for the best possible phrasing. Try a question for the title, for example ”Forgot your password?”, against the more active ”Get a new password”. If you place it as a featured image, you can also test which kind of image or illustration performs best.

The data you have collected can sometimes contain unwanted variations or extremely rare occurrences. To be really confident in the data you can talk to a statistician. If one is not available, compare against historical data to spot unusual patterns, or ask your web developer to explain how such data came about. When you are confident your data quality is acceptable, it is perfectly fine to filter out these odd cases so they do not skew the assessments. What you end up with is statistics that may be less exact but still show trends over time. Accuracy is often more crucial in comparative evaluations, such as between the contestants in an A / B test when neither option wins by a landslide.
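
One simple way to use historical data for this, sketched below in Python with made-up numbers, is to treat a typical earlier period as the baseline and filter out observations that deviate from it by more than three standard deviations (a rule of thumb, not a law):

# Filter a period's data against a baseline built from typical
# historical data.
from statistics import mean, stdev

history = [230, 245, 251, 238, 242, 239, 247, 244]  # a typical earlier period
current = [236, 241, 1900, 249]                     # this period, one odd spike

baseline_mean, baseline_std = mean(history), stdev(history)
cleaned = [v for v in current if abs(v - baseline_mean) <= 3 * baseline_std]
print("kept", len(cleaned), "of", len(current), "observations")  # 3 of 4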

Document the analytic effort and findings

Based on the data you collected, processed and analyzed, you should document the findings and lessons learned. You may choose to introduce your conclusions by stating the level of confidence you have in the analysis. This is prudent, especially if major decisions are likely to be taken on the basis of your analysis. There is nothing wrong with being humble; this is not an exact science, after all. Keep a log for future reference, noting which methods were used for each test.

Remember that behind a conclusive A / B test we may very well find data that explains why one option performed better than the other. For example, the user's type of device (mobile versus desktop) may prove to be the dividing factor. Maybe the call-to-action is not as easy to spot on a small screen? It can also be worth re-testing what the various segmentations indicate, to verify that the assumptions were in fact correct.

Please also note tests that need improvement and require another iteration through the entire analysis. It is not unusual to be unable to find conclusive proof for one option or the other.

4. Make improvements

Based on the evaluated hypotheses and the conclusions drawn in the analysis phase, you make a prioritized list of improvements: the ones you think will matter the most. It is not always easy to make those judgements. Some efforts may have a minor impact while being easy to carry out; other tasks can be complex because they have external dependencies. No matter what, you now at least have a list of things to do, things you know are going to improve the user experience.

An example of a task in the improvement phase is to rearrange the positions of buttons to enhance usability, something that perhaps only benefits the user segment with small screens. When you have acted on an improvement, you should analyze the change to make sure the desired result actually materialized.

If you have performed an A / B test "manually", then this is the time to select the winning option for future use.

The last thing you do in the improvement phase is to fill the wish list with improvements you did not have time for this time around. The list might prove useful in the future, when you happen to have time to spare or can merge items with similar activities.

Some of the business goals you set can take quite a long time to influence, no matter what you improve, so have reasonable expectations of how quickly you may begin to see results.

...and then what?

After making improvements, it is time for another iteration of the process. Intranet analytics is not a project, and it is never finished. The value of each iteration is that you get the chance to look critically at objectives, goals and metrics, and decide whether they still prove useful. Most likely you will revise, supplement and clarify them over time.

Good luck!

