Channel: Blog – Blast Analytics

Switch to Adobe Analytics Server-Side to Improve Data Quality and Site Performance

Senior Analytics Implementation Consultant, TJ Webster, dives into the world of Adobe Analytics Server-Side and helps you decide if implementing server-side is best for your organization. Learn how to deploy Adobe Analytics server side, plus, understand the pros and cons of going server-side.

The post Switch to Adobe Analytics Server-Side to Improve Data Quality and Site Performance appeared first on Blast Analytics & Marketing.


5 Data-Driven Content Marketing Takeaways from 2019 Contently Summit

The 2019 Contently Summit was creatively set with a “university” theme, where attendees were inspired to earn a “Masters of Content.” Discover five high-level data-driven content marketing takeaways gleaned by Director of Marketing Strategy, Joe Wuelfing, and get (re)inspired to apply these tips to your organization.


Why You Need a Data Layer for Web Analytics Implementation Success

Jason, our Director of Analytics Implementation, explores the value of implementing a data layer and what benefits it can bring to your organization. Learn how integrating a data layer increases the reliability of your data and decreases the level of effort to maintain and modify your web analytics implementation in the future.


How to Make the Business Case for Analytics

Learn how to make the business case for increased analytics investment within your organization. After you read this post, you will have a roadmap for how to frame the business case, how to communicate the value and Return on Analytics Investment (ROAI) of your project, and how to get new ideas funded and executed in a way that ultimately contributes to the overall goals of your organization.


How to Calculate Statistical Significance for Session-Based Metrics in A/B Tests

Most of the time, an A/B testing strategy focuses on impacting the bottom line or KPIs for a business. However, there will be times when other metrics matter, and it’s critical to have a strategy in place to ensure these A/B testing KPIs are accurately tracked and that the correct statistical significance analysis is used to evaluate and optimize customer experience performance. In this blog post, Director of Optimization, Roopa Carpenter, describes the steps needed to resolve these issues, provides an example client use case, and introduces the new and improved Blast Statistical Significance Calculator.


Taking Action with Google Analytics 360 and Salesforce Integration

As an organization you need to understand your customers' full journey so you can gain insights and ultimately take action on the touchpoints that are most effective for your potential customers. Learn from one of Blast's experts, Senior Analytics Optimization Consultant Alex Molineux, about the best ways your organization can truly break down data silos, increase conversions, and improve customer experience.


The Last Statistical Significance Calculator You’ll Ever Need

In the second installment of this blog post series, Senior Analytics Optimization Consultant, Alex Molineux unveils Blast's new one-stop statistical significance calculator that now serves all of your A/B testing needs. Plus, learn how your organization can utilize this calculator to make better business decisions.


5 Exciting Updates from Optimizely’s Opticon Conference 2019

Senior Analytics Strategist, Jill Stolt, travels to San Francisco to attend Opticon 2019. Read about several Optimizely product updates, plus the major themes featured at this year's conference.



Increase Your Competitive Advantage with Tag Management Governance

Is your tag management system the Wild West or Fort Knox? Learn from our VP of Analytics, Joe Christopher, the importance of a tag management system (TMS) and how it plays a critical role in your organization’s ability to move quickly when implementing analytics and marketing tags/platforms onto your website. Find out how to successfully implement a TMS in a way that respects security, transparency, and performance while ensuring efficiency, flexibility, and data quality are achieved.


How to Get Your Business AI Ready to Improve Customer Experience

Providing an engaging customer experience is no longer optional for businesses; it is demanded by users. In this blog post, Director of Optimization, Roopa Carpenter explains the difference between AI and ML, discusses how to identify the business need for AI, how to make your data AI-ready, and finally how to implement artificial intelligence to improve customer experience and anticipate your customers' needs.


Mastering Adobe Analytics s.Products Syntax

Adobe Analytics is one of the more difficult analytics tools to implement properly. It has many custom settings, variables, and syntaxes to follow in order to track a site correctly. Adobe's s.products variable is perhaps the trickiest one to get right. In this blog post, Senior Analytics Implementation Consultant, Brent Scheffer dives deep into the world of Adobe's s.products syntax and teaches readers how to implement it successfully.


OKRs Elevate Your Analytics Processes and Outcomes

Analytics can truly be scary waters to navigate. In this blog post, Senior Analytics Strategist, Lara Fisher, explores the world of OKRs and their positive impact on analytics processes and outcomes. Learn key takeaways and real-life examples of a successful large-scale analytics OKR implementation.


6 Tips to Build Trust with the CCPA

If you don’t have a focus on privacy today, you won’t have customers tomorrow. Citizens around the world want their data to be protected. Learn about the impacts of the upcoming CCPA data privacy regulations going into effect January 1, 2020, and how your organization has an opportunity to build trust and increase competitive advantage by elevating data privacy as an essential, ongoing initiative.


CCPA Compliance Guide for Google Analytics 360

Google Analytics 360 (as well as the free Standard version) may require modifications to how you currently leverage it as the CCPA privacy law goes into effect on January 1st, 2020. Even if your organization is not based in California, you likely serve California consumers and thus will be impacted by this law. Read this guide to understand how you can make your Google Analytics compliant with CCPA regulations.


How to Calculate Statistical Significance for Session-Based Metrics in A/B Tests

Customer Experience Optimization teams are charged with improving the digital customer experience. Most of the time, the A/B testing strategy focuses on impacting the bottom line or primary key performance indicators (KPIs) for a business.

However, there will be times when other metrics will be of importance, and it’s critical to have a strategy in place to ensure these A/B testing KPIs are accurately tracked and that the correct statistical significance analysis is being used to evaluate performance.

“…it’s critical to have a strategy in place to ensure these A/B testing KPIs are accurately tracked and that the correct statistical significance analysis is being used to evaluate performance.”

At Blast, our Customer Experience Optimization team came across this very circumstance for one of our clients. The typical binomial A/B testing metrics, such as lead completions, were no longer the goal. Instead, a greater emphasis was placed on continuous metrics (defined below), such as average pages per session and other session-based metrics. Tracking continuous metrics can pose several challenges, mainly centered around the difficulty of calculating statistical significance for A/B test results.

In this blog post, we’ll describe what steps need to be taken to resolve these issues, provide an example of how we put this method to work for one of our own clients, and finally, we’ll introduce our new and improved Blast Statistical Significance Calculator.

Binomial vs Continuous Metrics — What’s the Difference?

[Image: the difference between binomial and continuous metrics]

More often than not, KPIs, such as transactions, cart adds and lead completions, are the primary metrics for A/B testing. These KPIs are known as binomial metrics because they result in only two outcomes (e.g. transaction vs no transaction, lead completion vs no lead completion). It’s easy to determine whether a metric is binomial.

The general rule is that if you can refer to it as a “rate” (e.g. transaction rate, lead completion rate, add to cart rate), then it is a binomial metric. Using these types of metrics as KPIs for A/B testing doesn’t usually pose many challenges. The available testing platforms are well-equipped to handle this type of data and can report results with statistical significance.

“The general rule is that if you can refer to it as a “rate”, then it is a binomial metric.”
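As a concrete illustration of how significance is computed for binomial metrics, here is a minimal two-proportion z-test in Python. The conversion and traffic numbers are made up for the example; this is the standard calculation behind most A/B test significance calculators for rate metrics:

```python
# Two-proportion z-test for binomial A/B metrics (e.g. transaction rate).
# All numbers below are hypothetical, for illustration only.
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) given conversions and traffic per variation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided
    return z, p_value

# Original: 200 transactions / 5,000 visitors; Variation: 250 / 5,000
z, p = two_proportion_z_test(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these hypothetical numbers, the Variation's 5% transaction rate beats the Original's 4% at the common 0.05 significance threshold.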

However, there will likely be times when the Customer Experience Optimization team or a client will want to focus on other metrics that look at averages instead of rates (e.g. average pages per session, average events per session, etc.). These metrics are considered non-binomial (or continuous) because there are more than two possible outcomes. It’s not simply a matter of whether the conversion happened or not.

The Challenge of Using Continuous Metrics as Goals for A/B Testing

Anyone who’s had to figure out how to calculate statistical significance for continuous metrics knows testing platforms are not always well-suited to the task, particularly if the metric is session-based. For example, a number of testing platforms’ counting methodologies are visitor-based, not session-based. Therefore, when looking at their respective results dashboards, the traffic for each variation is expressed as “Unique Visitors” instead of Sessions:

[Image: traffic for each variation expressed as Unique Visitors instead of Sessions]

The workaround here is to integrate the analytics platform with the testing platform so you can pull in test data and analyze performance in an analytics report, such as a Google Analytics custom report.

[Image: integrating the analytics platform with the testing platform]

Having access to overall performance for session-based metrics is the first step. However, if a team is attempting to analyze performance in analytics, a big challenge remains: how to determine whether the results you are seeing are having an impact, or in other words, how to calculate statistical significance for such metrics.

Overall Results Won’t Do! We Need Session-Level Data

Standard A/B testing significance calculators (as shown below) are built for binomial data, where one can simply enter overall traffic and conversion volume per variation. However, these statistical significance test calculators don’t work well when the data is continuous. In other words, you can’t just enter overall sessions for the Original vs. overall sessions for the Variation to accurately determine statistical significance for “average” session-based metrics.

[Image: test data]

Instead, your team needs to obtain session-level data from your A/B tests in order to perform a proper statistical significance calculation.

It is possible to get this session-level data for an A/B test in your team’s analytics platform, although your team will need to implement a custom dimension for Session ID (e.g. using Google Tag Manager) to start tracking this data. Please check out our guide for detailed step-by-step instructions on how to implement this custom dimension.
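To make the point concrete, here is a sketch of the kind of calculation session-level data enables: a Welch-style two-sample test on pages per session, written with only the Python standard library. The session values are hypothetical, and the normal approximation to the t distribution is used, which is reasonable at the sample sizes typical of A/B tests:

```python
# Welch-style test for continuous session-level metrics (unequal variances).
# Session values below are hypothetical; at A/B-test sample sizes the
# normal approximation to the t distribution is adequate, so only the
# stdlib is needed.
from statistics import NormalDist, mean, variance

def welch_test(sample_a, sample_b):
    """Return (t, two-sided p-value) for a difference in means."""
    m_a, m_b = mean(sample_a), mean(sample_b)
    v_a, v_b = variance(sample_a), variance(sample_b)    # sample variances
    se = (v_a / len(sample_a) + v_b / len(sample_b)) ** 0.5
    t = (m_b - m_a) / se
    return t, 2 * (1 - NormalDist().cdf(abs(t)))         # large-sample p-value

# One pages-per-session value per session, per variation (hypothetical)
original  = [1, 2, 2, 3, 1, 4, 2, 3, 2, 1] * 50   # 500 sessions
variation = [2, 3, 2, 4, 2, 5, 3, 3, 2, 2] * 50   # 500 sessions
t, p = welch_test(original, variation)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Note that the test needs every individual session value, not just the per-variation averages, which is exactly why aggregate report totals are not enough.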

The New Blast Calculator Tackles Session-Level Data

Even after taking the necessary steps to obtain session-level data, challenges still remain. As stated above, most A/B test significance calculators are not built to handle continuous metrics. Specifically, there is no option to add or upload session-level data, which is necessary to do a proper statistical significance calculation for these types of metrics.

Recognizing that there was a need to have a readily available statistical significance test calculator to handle these types of metrics, Blast decided to create its own and make it available to everyone! Specifically, we created a statistical significance A/B test calculator that has the ability to handle the various types of metrics a Customer Experience Optimization team may need to analyze, including the continuous metrics described in this blog post.

Further, the new Blast Statistical Significance Calculator will also have an option for calculating statistical significance for typical binomial metrics, such as transaction rate, add to cart rate and lead completion rate.

Putting the Blast Statistical Significance Calculator to the Test

Blast had to tackle the challenges described above for one of our own optimization clients, who wanted to run a few tests that were focused more on on-site engagement than on the usual primary KPIs. In order to meet their needs, our analytics implementation team used the step-by-step instructions outlined in the above-mentioned guide to implement the Session ID custom dimension in this client’s Google Analytics account.

[Screenshot: the script needed to implement the Session ID custom dimension]

As a best practice, their testing platform was already integrated with their analytics platform (Google Analytics). This allowed the Customer Experience Optimization team to access the newly built custom dimension in Google Analytics (GA) to obtain session-level data for the Original and Variation treatment in our test.

Without taking the steps to create this custom dimension, we still would have been able to view test performance in GA but only at the aggregate level, making it difficult to do a proper statistical significance calculation.

To conduct a meaningful analysis of our A/B test results, we took the following steps to get the results ready for use in the Blast Statistical Significance Calculator:

1. Create a Custom Report in Analytics — Including the Custom Dimensions for the Test Integration, Session ID and Targeted Metric

[Image: creating a custom report]

2. Export the Custom Report to a CSV File

Please note that if your team is not using Google Analytics 360, your data is likely to be sampled, and you’ll need to ensure you are exporting all data from the test, not just sampled data. If you need a way around the sampling issue, one option is to link your Google Analytics account to Unsampler.

With Unsampler, you’ll be able to create a similar report (as described above) that includes all of your data and, further, you can export your report directly to a CSV file.

3. Format the CSV File for Upload

[Image: formatting the CSV file for upload to the statistical significance calculator]

With the CSV, your team will need to filter the data by treatment (Original or Variation), then copy and paste the metric data into a new tab.

[Image: filtering data from the CSV file]

Save the new tab as a separate CSV; this is the file that will be used with the Blast Statistical Significance Calculator.
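This filter-and-split step can also be scripted. The sketch below uses Python's stdlib csv module; the column names ("Treatment", "Pages per Session") and file names are assumptions you would adapt to your own custom report export:

```python
# Split a GA custom report export into one single-column CSV per treatment.
# Column and file names are hypothetical; match them to your own export.
import csv

def split_by_treatment(export_path, treatment_col, metric_col):
    """Write one metric-only CSV per treatment; return the file names."""
    groups = {}
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            groups.setdefault(row[treatment_col], []).append(row[metric_col])
    outputs = []
    for treatment, values in groups.items():
        out_path = treatment.lower().replace(" ", "_") + ".csv"
        with open(out_path, "w", newline="") as f:
            csv.writer(f).writerows([v] for v in values)   # one value per row
        outputs.append(out_path)
    return sorted(outputs)

# Tiny stand-in for a GA custom report export
with open("ga_export_sample.csv", "w", newline="") as f:
    csv.writer(f).writerows([
        ["Session ID", "Treatment", "Pages per Session"],
        ["s001", "Original", "3"],
        ["s002", "Variation", "5"],
        ["s003", "Original", "2"],
    ])

print(split_by_treatment("ga_export_sample.csv", "Treatment", "Pages per Session"))
```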

4. Upload the CSV File to the Calculator — After the CSV file is properly formatted, go to the Blast Statistical Significance Calculator, select “Continuous” from the Test Type dropdown, set your preferred significance threshold, and upload the CSV file.

[Image: selecting the test type in the statistical significance calculator]

By taking these steps, we were able to calculate statistical significance and properly analyze our A/B test results to see if there was a significant impact.

[Image: analysis of A/B test results for statistical significance]

Conclusion

The process outlined above for calculating statistical significance for A/B tests is one that your team can immediately put into practice. While this post discussed continuous metrics in terms of session-based metrics, the same approach can also be used to measure A/B test results for other metrics, such as avg. transactions per user and avg. pages per user, or other user-level data.

In that case, instead of creating a custom dimension for Session ID, your team would need to create a custom dimension to obtain user-level data (e.g. Client ID #1).

“The new Blast calculator is meant to provide teams the flexibility to perform various statistical significance calculations depending on their needs…”

The new Blast calculator is meant to provide teams the flexibility to perform various statistical significance calculations depending on their needs, including:

  1. a binomial calculation for typical primary KPIs (e.g. transaction rate, add to cart rate, lead completion rate),
  2. a calculation for non-binomial metrics (e.g. “average” metrics) with a t-test approach, and
  3. a nonparametric calculation for non-binomial metrics where a team wants all data points to be considered.
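The third, nonparametric option corresponds to a rank-based test such as the Mann-Whitney U test. Here is a minimal stdlib Python sketch with hypothetical session data; it uses the large-sample normal approximation and, for brevity, omits the tie correction to the variance, which makes the p-value slightly conservative on heavily tied data:

```python
# Mann-Whitney U test (rank-based, nonparametric) for continuous metrics.
# Large-sample normal approximation; tie correction omitted for brevity.
from statistics import NormalDist

def mann_whitney(sample_a, sample_b):
    """Return (U statistic for sample_a, two-sided p-value)."""
    n_a, n_b = len(sample_a), len(sample_b)
    pooled = sorted(sample_a + sample_b)
    # Average rank per distinct value (handles ties)
    rank, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2     # mean of 1-based ranks i+1 .. j
        i = j
    u_a = sum(rank[v] for v in sample_a) - n_a * (n_a + 1) / 2
    mu = n_a * n_b / 2
    sigma = (n_a * n_b * (n_a + n_b + 1) / 12) ** 0.5
    z = (u_a - mu) / sigma
    return u_a, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical pages-per-session values, one per session
original  = [1, 2, 2, 3, 1, 4, 2, 3, 2, 1] * 40
variation = [2, 3, 2, 4, 2, 5, 3, 3, 2, 2] * 40
u, p = mann_whitney(original, variation)
print(f"U = {u:.0f}, p = {p:.4f}")
```

Because it ranks every data point rather than comparing means, this option is less sensitive to outliers such as a handful of very long sessions.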



Taking Action with Google Analytics 360 and Salesforce Integration

For many of your customers the decision to make a purchase is not the first or last interaction in the customer lifecycle. It is more likely that the full customer journey is made up of a variety of touch-points, across different channels and devices, as your customers shift between online and offline.

As a marketer you want to understand the full customer journey so you can gain insights and ultimately take action on the touch-points which are most effective for your potential customers.

If your digital analytics tool is Google Analytics (GA) 360 and Salesforce is your CRM and/or Marketing Platform, then the native integration developed for these tools gives you the power to analyze the full customer journey, and take actions to increase lead capture and conversions on your site.

Benefits of Integrating Your CRM and Digital Analytics Tool

[Image: benefits of integrating your CRM with Google Analytics]

Prior to the release of the native integration between Salesforce and Google Analytics 360, data from the two tools would likely sit in separate data silos. On the one hand you have all your vital customer and lead touch-point data stored in Salesforce while Google Analytics 360 contains more detailed analytics data relating to user engagement and the marketing campaigns that attracted visitors to your site.

By themselves these separate data sources provide a partial view into the customer journey, but not the full view into the customer lifecycle that marketers crave.

Imagine the look of joy on your marketing team’s faces if they could have access to data that outlines the full customer lifecycle!

Data that shows:

  • When users first saw your marketing media
  • When they first engaged with your media
  • When they became leads
  • What marketing or sales efforts led to their first conversions and (hopefully) retention

Salesforce and Google Analytics 360 Integration Use Cases

Having access to data is merely the first step; now imagine the impact marketers can have if they take this data and act on it. For example:

  • You can use advanced attribution analysis in Google Analytics 360 to truly understand which marketing efforts are effective with users at different stages of their customer journey. With this information you have an increased ability to optimize marketing channel spend accordingly.
  • By using behavioral data from Salesforce Marketing Cloud, you can create Google Analytics 360 Audiences to use for retargeting campaigns.
  • If you are looking to attract more qualified leads, you can use your Salesforce lead/sales data to create Audiences in Google Analytics 360 to leverage lookalike audiences in Google Ads or Google Campaign Manager (formerly DoubleClick).

These are a few examples of the types of enhanced actions you’ll have available to you once the GA 360 and Salesforce integration is complete. We outline these and others in more detail later in this post.

By integrating Google Analytics 360 and Salesforce, you can seamlessly push the data relating to all these interactions and touchpoints back and forth between the two tools. These new data points will provide visibility into customer lifecycles for your marketing teams, account managers, technical support teams and customer success teams.

The World’s Most Popular CRM & Data Silos

Salesforce is the first solution that comes to mind when you mention CRMs and is a hugely popular tool.

The only thing lacking from the Salesforce platform is digital analytics.

The nature of the product landscape, with CRMs and digital analytics tools being separate, has led to data often existing in silos when it would be significantly more valuable and actionable combined.

In the past, anyone who found these data silos frustrating enough, and had the resources available, could export data from Salesforce and Google Analytics and then work to combine the datasets in an external database. While this approach works (and there are cases in which we’ll still help clients do this if they’re not on the paid version of Google Analytics), it is a challenging and time-consuming task.

This approach also leads to delays between when an actual interaction or conversion takes place and the time at which the data point is available to you to act on. Your marketing efforts are going to be significantly more effective if you’re able to act on customer journey data as close to real-time as possible.

“Your marketing efforts are going to be significantly more effective if you’re able to act on customer journey data as close to real-time as possible.”

Data silos are a problem when you consider CRM and digital analytics tools.

Your CRM contains your lead data and tracks the key customer interactions that move leads through the sales funnel. However, the data required to give context to this journey, such as when and where a user first became aware of your company or product, which media they interacted with before and after becoming a lead, and how they engaged with your website, lives separately in your digital analytics tool.

Thankfully, the Google Analytics 360 and Salesforce partnership has led to a native integration that combines these two data sources, giving you insight into the full customer journey and the ability to act on these insights.

Top 5 Insights from Integrating Google Analytics 360 with Salesforce

The GA 360 integration with Salesforce opens up a host of analysis and reporting options that would not have been available to you before. Here are some of the insights we find most useful to review once the integration is in place.

Advanced Marketing Attribution Analysis

When lead and opportunity data is available to you within GA 360, you have the semi-magical ability to create segments for audiences at varying stages of their customer journey as noted in Salesforce. These audiences will vary from business to business depending on how you classify leads.

Let’s say you have classifications of ‘Leads – Open’, ‘Leads – Qualified’ and ‘Leads – Unqualified’ in Salesforce. You can create these segments in Google Analytics 360 and apply them to analytics reports.

Using these segments and attribution reporting provided by Google Analytics 360 you can start to dig into the marketing touch points that reached these audiences beyond the form submissions that entered them into Salesforce. You’ll be able to review whether different marketing media are more effective at bringing in qualified vs. unqualified leads, or whether there are trends around which marketing channels appear to support each other in the acquisition of leads.

Using Google Analytics 360’s Attribution Model report, you can analyze how models other than Google’s default ‘last non-direct click’ rank your marketing channels’ effectiveness in capturing the attention of all your different lead audiences.

You can also use data available in the Google Analytics and Salesforce integration to create ‘customer’ audience segments. You can perform analysis similar to that mentioned above, but instead you’ll be focusing on which marketing efforts contributed to a lead actually converting to become a customer. Insights into marketing performance relating to this final conversion are of course extremely valuable, as optimizations to your marketing here will have a direct impact on business performance.

This type of advanced attribution analysis is key to truly understanding how your full marketing mix is having an impact on the customer journey.

“…advanced attribution analysis is key to truly understanding how your full marketing mix is having an impact on the customer journey.”

As mentioned before, it’s unlikely that one interaction with your marketing media leads directly to a conversion. It is much more likely a customer interacts with your business via a number of different touch points over time. Understanding this journey, and the effectiveness of your marketing media and messaging at each step of it, is key in helping you optimize your marketing efforts so you are consistently delivering the right message at the right time to potential customers in the future.

Better Understand & Optimize Behavior

Using the different audience segments mentioned above, you are able to analyze the interactions these different audiences have on your website. Using Google Analytics Goals and Ecommerce tracking, you can review how well your site is doing in getting users to take important steps toward converting, and how effective your site is at capturing final conversions.

By doing analysis like this you are likely to find insights into where your site is failing to move different types of users through the customer journey. For example, your analysis may show that the majority of users who submit their information and become leads read at least 3 of your marketing pages outlining the solutions your business provides and also visit your site at least 2 times.

With insights such as these you can focus on providing an onsite experience that drives users to view multiple marketing pages. You can also adjust your marketing to target those users who have engaged with marketing pages once before, remind them about your solution offerings, and try to get them back to your marketing pages for that key followup visit.

Tracking of Offline Conversions

Depending on your business, your final conversions may occur offline — your customers interact with your online properties toward the beginning of their journey, but the final purchase happens offline with a Sales Rep.

This final offline conversion would not traditionally be tracked in Google Analytics 360, meaning its analytics reports only contain data relating to the beginning of the customer journey — not ideal for marketers who are looking to market to potential customers throughout the entire customer journey.

With this offline data now available within Google Analytics 360 you are able to gain insight into what touchpoints the customer interacted with before converting. These are likely some of the most important touch points within the customer journey so gaining insight into how well these performed is key for any marketing team.

More Complete Custom Funnels

As a Google Analytics 360 customer you have access to Custom Funnels. With the integration of Salesforce data, you will be able to build more complete Custom Funnels that contain funnel steps relating to interactions that would traditionally only be logged in Salesforce, such as the first time a Sales Rep called a lead.

These Custom Funnels will more accurately display the full customer journey than any siloed Google Analytics or Salesforce data ever could. When you start to analyze the funnels by applying different audience segments to them you’ll start to get a much better picture of how all your marketing and sales touchpoints impact completion of each step of the customer journey.

When you identify drop off points in your funnel, you can use personalization or remarketing to target those audience segments.

Advanced Analysis with BigQuery

You have the benefit of being able to export your Google Analytics data directly to BigQuery for advanced analysis. BigQuery gives you access to hit-level data, which can provide deeper insights into your customers’ online activity at the user level. You can build out custom attribution models, calculate customer lifetime value predictions, connect with other data sources, or connect to your preferred data visualization tool.

Top 5 Use Cases for Taking Action with the Salesforce & Google Analytics 360 Integration

Insight without action is, well, worthless.

Insights alone don’t do anything to evolve your business. As marketers and analysts we never want to finish up an analysis by simply saying “oh, that’s interesting.” We want to take that insight and act on it in a way that improves an aspect of our business, be it lead capture, conversion, retention or one of the many steps in between.

Below, we’ve taken the top 5 insights the Google Analytics 360 and Salesforce integration can provide and put together our top 5 list of actions you can take post-integration.

GA 360 & Salesforce Integration Use Case #1:

Lookalike Audiences for Google Ads and Google Display & Video 360 Targeting

Prior to the Salesforce Marketing Cloud and Google Analytics 360 integration you were missing out on a huge opportunity to increase the reach of your lead capture efforts. Sales Cloud contained data relating to your leads and customers but this data was kept completely separate from Google’s suite of digital marketing tools (Google Marketing Platform).

It is only when these data silos are broken down that you can use this customer and lead data in a meaningful way in upcoming marketing efforts.

With lead and customer data from Sales Cloud integrated directly into Google Analytics, you have the ability to create lead and customer Audiences within Google Analytics 360. You can then create lookalike audiences based off of these newly created Audiences for use in Google Ads and Google Display & Video 360 campaigns to try and capture more qualified leads.

GA 360 & Salesforce Integration Use Case #2:

Personalize Experience via Optimize 360

Google Optimize 360 is a powerful experimentation and personalization tool that can deliver customized onsite experiences for your users. When Salesforce data is made available to Google Analytics 360 and associated tools such as Google Optimize 360, you have the power to target audiences using characteristics that previously existed in Salesforce alone.

For example, you can target qualified leads, customers, and also users who are on your site but are not yet leads — how many audiences you want to focus on will vary depending on the length and complexity of your conversion funnel.

With Google Optimize 360 you can show each of these audience segments a version of your site that is designed to move them toward the next stage of the customer journey.

For instance, you can show non-lead users a version of the site in which the forms they need to fill out to become leads are given prominence, or copy on landing pages is written specifically for users who you know may not be as familiar with your offerings as qualified leads or customers.

Qualified leads may be shown pages that highlight more detailed benefits of your product offerings than regular users would see; you know this audience has shown an interest in your offerings before, so when they revisit your site you can take the opportunity to show them more complex product information you may be wary of showing new users for fear of overwhelming them.

GA 360 & Salesforce Integration Use Case #3:

Marketing Cloud Email, SMS, Push Notification Marketing

The integration allows audience data to be pushed from Google Analytics 360 into the Salesforce Marketing Cloud. In GA 360 you have the ability to build complex audience segments based off of user interactions, demographic data and also (if you also have Google Analytics 360 integrated with Sales Cloud) lead and offline conversion data.

These audiences you build can then be shared with Marketing Cloud for activation via email, SMS or push notifications. This functionality significantly increases the options open to you for direct activation via Marketing Cloud.

The audience segments you create and then act on can be as simple or as complex as your needs dictate, allowing you to focus your marketing efforts on specific subsets of your prospects, leads, and customers.

GA 360 & Salesforce Integration Use Case #4:

Improve Onsite User Experience for Increased Conversions

As discussed previously, the integration allows you the ability to create audience segments for users at each stage of the sales funnel. Using these segments, product managers and user experience teams can analyze Google Analytics 360 reports and funnels to see which parts of the site are performing well at getting users to move toward a purchase and which are not.

The insights gathered here around when and where users are most likely to fall out of the sales funnel will allow UX teams to focus on aspects of the site that are proving most troublesome for users.

GA 360 & Salesforce Integration Use Case #5:

Optimize Search Marketing Performance

Breaking down data silos is again the name of the game here.

With offline conversion data shared between Salesforce and Google Analytics 360, this data can be de-siloed (again!) and shared with Google Ads and Search Ads 360. If you also set up Goals in Google Analytics 360 to track completion of multiple steps in the Salesforce customer journey (e.g. form completion, first sales call, etc.), then this data can be imported into Google Ads and Search Ads 360 as well.

The addition of these new metrics into Google Ads and Display & Video 360 allows you to analyze and optimize those campaigns for sales and every step of the customer journey, ensuring you’re truly optimizing based on data points that tie directly back to the sales funnel.

When making decisions based on focused data points such as these, you can be confident that the insights you’re drawing from the dataset and the actions you’re taking are as effective as possible, because your marketing efforts are informed by existing funnel and conversion data.

Google Analytics Standard and Salesforce

ga360 integration with salesforce

If you’re currently using Google Analytics Standard, then this integration is one of the many benefits that makes upgrading to Google Analytics 360 well worth it. If you are bummed out because you still have the standard version of Google Analytics, fear not, as it is still possible to integrate your Salesforce Sales and Marketing Cloud implementation with Google Analytics.

However, unlike the built-in integration available to GA 360 customers, it will require more manual configuration. This includes setting up custom dimensions in Google Analytics to capture the client ID of the user that visited your site and setting up custom fields in your CRM to push your Google data back to the Marketing Cloud.

If you would like help setting up a Salesforce integration for Google Analytics Standard, please reach out to us.

Start Taking Action On Your Data — Integrate Google Analytics 360 and Salesforce Today

Everything we have covered now has you excited to take advantage of the integration developed by Google Analytics 360 and Salesforce, right?! The integration allows you to gain insights into the customer journey and take action on these insights to optimize your site and marketing spend while increasing conversions throughout the sales funnel.

If you’d like assistance in setting up this Salesforce+GA integration or working with the new data and actions the integration provides, please let us know. We’d love to help, and we’re also available for questions and comments below!

The post Taking Action with Google Analytics 360 and Salesforce Integration appeared first on Blast Analytics & Marketing.

The Last Statistical Significance Calculator You’ll Ever Need


In case you missed it, there is another blog post that was published prior to this one that provides further context into what you’re about to read. You can definitely read this as a standalone, but it may be beneficial to read the other post if you’re looking for some more background.

The A/B testing significance formula is confusing. I’ve been doing this long enough to know that it rarely clicks for anyone right away. If you know R or Python, you can calculate significance with relative ease; however, that has its own learning curve. Luckily, we live in an age where tools exist online to help you determine if your variation data is statistically significant.

There are many A/B testing statistical significance calculators online; some use t-tests, others binomial tests, and some even include the functionality to do both. We at Blast, for example, published a Revenue per Visitor Calculator that uses a nonparametric approach to calculate statistical significance to determine which variant has the highest revenue.

Since we started hosting that calculator, we’ve gotten a number of requests to build something that fits ALL A/B testing data and scenarios — this includes both continuous and conversion based metrics. In the last few months we’ve been developing and testing this new calculator against some of the other popular calculators out there, and we are finally ready to unveil it.

Statistics Can Be Difficult

image representing the blast statistical significance calculator

Statistical significance can be confusing. With terms like p-value, confidence, power, and probably a dozen others, simply having data can seem like the easy part. As an analytics and marketing consulting firm, Blast gets a ton of questions regarding these things. When working with clients, we want to provide the most rigor in our deliverables, so statistics becomes a necessity.

Revenue per visitor (RPV) has been a big point of interest for many of our clients, so we created a statistical significance calculator that performed analysis on RPV data. As we worked through many of these optimization projects, we realized the value of having a significance calculator that can perform different significance analyses depending on the metric being used. Further, we acknowledge that for some of these metrics there is more than one statistical approach capable of providing reliable results, depending on the client’s interests.

“Statistical significance can be confusing. With terms like p-value, confidence, power, and probably a dozen others, simply having data can seem like the easy part.”

To meet the various needs, we decided to create a new, totally free statistical significance test calculator. So, without further ado, here is our new one-stop statistical significance calculator that will serve all of your A/B testing analysis needs!

This blog post will not only explain how our new statistical significance calculator works, but also what it’s doing. The calculator is equipped to handle two sample tests (a control and one variant), and will provide clear and concise results. With this blog post, I hope to remove some of the mystique that surrounds these statistical methods.

Binomial Testing

The first option in the significance calculator deals with binomial metrics, or in other words, metrics that teams often refer to as “rates.” Things like conversion rate statistical significance, bounce rate statistical significance, transaction rate statistical significance, and so on; really anything that either did or did not happen in some proportion.

“One common misconception is that binomial results should be based solely on P-value to determine significance.”

The binomial test uses the proportion of conversions from the control, and compares to the variation, just like any other significance calculator you may have encountered. To use this calculation your team will need to input total traffic and total conversion volume for each variation.

image of binomial test used to calculate statistical significance

One common misconception is that binomial results should be based solely on p-value to determine significance. However, doing so lacks statistical rigor, and if your team follows this approach, there is a real risk of proceeding with a change that won’t actually result in a positive impact. In addition to considering p-value, it is just as important to ensure results have the necessary level of power. The chart below outlines the differences between these two:

Test Result       Reality          Probability of Happening   Logic
Impact Present    Impact Present   A                          Statistical Power: A/(A+B)
No Impact         Impact Present   B
Impact Present    No Impact        C                          Statistical Significance (p-value): C/(C+D)
No Impact         No Impact        D

For example, if the original treatment has a 20% conversion rate that means users have a 20% probability of converting. If the variation is showing a 25% conversion rate, this doesn’t necessarily mean that your team is seeing a positive impact. What if in the control case, we had 10,000 visitors, and with the variant, we only had 100; with such a drastic difference in size between the two “samples,” we can’t really draw a good conclusion on whether the variation was actually better than the control.

This is where power comes in. Power is the probability that a test detects an effect that is really there; a power of 80% (.8) means that 20% of the time, effects that should be detected are not. If our variation is actually better than our control, a low power tells us that the test probably won’t pick up on that.

Using power, you can determine how large your variation sample should be so that you can be confident your results aren’t misleading you, or check whether you used enough data points (assuming you’ve already run the test). Ideally a power of 0.8 or greater is desired.

Since it is important to account for power, you’ll see that the Blast Significance Calculator will display the power value in the results. Along with power, our test also displays relative lift, and of course, significance. Relative lift is defined as:

(conversion rate of variation – conversion rate of control)/(conversion rate of control)

And simply put, explains, in percentage form, how much one test is better than the other. As an example, using the rates above:

(.25 – .20)/(.2) = 25%

A relative lift of 25% sounds promising! However, the calculator makes it clear that the results are still not significant because our test was underpowered — our power is only .33.

image showing test results in Blast statistical significance calculator when running a binomial test

The calculator will not only tell you if your results are significant, but also all the key metrics like power and p-value.

With power of only .33, a real improvement like this one would go undetected nearly 70% of the time. You can still proceed; however, it’s not a great idea, as there is much less integrity and rigor in your calculations. The best way to proceed is to use a sample size calculator prior to launching a test to determine the number of visitors necessary for your test. This reduces the risk of your test results being underpowered.
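To give a rough sense of what such a sample size calculator computes, here is a sketch using the standard normal-approximation formula for a two-proportion test. The function name and defaults are my own for illustration; any given calculator’s exact math may differ.

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a change in
    conversion rate from p_base to p_target (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. ~1.96
    z_beta = NormalDist().inv_cdf(power)           # e.g. ~0.84 for 80% power
    var = p_base * (1 - p_base) + p_target * (1 - p_target)
    return (z_alpha + z_beta) ** 2 * var / (p_base - p_target) ** 2

# Detecting a lift from a 20% to a 25% conversion rate at 95% confidence
# and 80% power requires roughly 1,100 visitors per variant.
n = sample_size_per_variant(0.20, 0.25)
```

Notice how quickly the requirement drops as the effect grows: detecting a jump from 20% to 30% needs only a few hundred visitors per variant.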

Typically people test at a 95% confidence threshold. The p-value is the complement of that: a p-value of .05 or less means essentially the same thing as 95% confidence, i.e. if there were truly no difference between variations, a result this extreme would show up less than 5% of the time. If you need some more literature on it, you can view this page on p-values.

As an aside, the 95% level is widely used, but it is not something your team should blindly use. Every scenario is different, and you need to evaluate the level of risk you’re willing to accept. For example, if you need to be absolutely sure your two samples are different, a 99% confidence level is recommended, while faster moving teams may be okay with a 90% confidence level.

In the former case, you may need more data to reach significance at the 99% level, while in the latter, you can get by with less data but take on more risk. In either case, your data needs to have sufficient power, and we actually programmed the calculator to tell you not only when you have reached significance, but also whether that significant result has sufficient power on which to base a business decision.
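Putting the binomial pieces together, here is a minimal pure-Python sketch of the underlying math: a two-proportion z-test with relative lift and post-hoc power. This is the standard normal-approximation approach, not necessarily our calculator’s exact internals, and the function names are mine.

```python
from math import sqrt, erfc

def norm_sf(z):
    """Survival function (upper-tail probability) of the standard normal."""
    return 0.5 * erfc(z / sqrt(2))

def binomial_ab_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: returns relative lift, p-value,
    and approximate power to detect the observed difference at alpha=.05."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm_sf(abs(z))
    lift = (p_b - p_a) / p_a  # relative lift
    # Post-hoc power: chance a difference this size clears the 1.96 cutoff.
    se_alt = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    power = norm_sf(1.96 - abs(p_b - p_a) / se_alt)
    return lift, p_value, power

# 20% conversion on 10,000 visitors vs. 25% on only 100 visitors: a 25%
# relative lift, but the result is neither significant nor well powered.
lift, p_value, power = binomial_ab_test(2000, 10000, 25, 100)
```

The tiny variation sample is the culprit: the lift looks great, but both the p-value and the power tell you not to trust it yet.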

Continuous Metrics

image representing how to determine statistical significance

Continuous distributions can be thought of as data samples that contain many different numbers. Data that are considered to be continuous include (but are not limited to):

  • Session-based metrics: average duration per session, pages per session
  • User-based metrics: average order value per user, average revenue per user, average transactions per user, average pages per user

On our Statistical Significance Calculator, we provide two choices on how to deal with continuous data: parametric testing using a t-test, and non-parametric testing using a Mann Whitney test. I’ll spend some time on each of them.

Choice 1: Sampled T-Test Using “Average” Metrics

If your team is interested in looking at “average” metrics, such as average duration per session, average pages per session, or other aggregate measures, then you may want to consider using the t-test approach.

A t-test takes two samples (original and variant), and compares them to see if they’re drawn from the same big data set, called a population. You can perform a t-test with a small amount of data, but to be confident in the results, more data is always better (if you’re using e-commerce data, this shouldn’t be a problem).

“You can perform a t-test with a small amount of data, but to be confident in the results, more data is always better.”

Imagine we are testing height between two groups. Because we can’t test everyone in the world at all times, we take samples, maybe 50 people for our control, and 50 people for our variation. These two groups of 50 are our samples, and everyone in the world could be our control “population”.

When we compare our control and our variation samples, what we are figuring out is if they both came from the same human population, or maybe our variant is from an alien species. We hypothesize that these two samples are from different populations, and set out to test.

Because we did take both groups from Earth, our t-test “fails to reject the null hypothesis that the two groups are from the same population.” Note that we never “accept” a hypothesis; it was just a thought we had, and we only tested 100 people total, so we simply fail to reject it.

Now let’s put this into the context of a test on your website with two variations, the original and the variation. The population idea becomes a bit more abstract, but the implementation remains the same.

Let’s say that our control had 10,000 sessions, while the variation had 8,000 sessions. We can use our Statistical Significance Calculator to determine if the average session duration associated with the control is less than the average session duration associated with the variation. If the numbers are too close to tell, we probably fail to reject, but maybe when we test, we find that there is really good evidence that our variation is greater than our control; so much so, that we can be 95% confident that our variation’s average session duration is higher than our control’s average session duration.

This number is called our confidence, and the p-value is its complement (.05 in this case). The p-value says the same thing, worded differently: there is only a small chance (about 5%) that we would see a difference this large if the two samples actually came from the same population, and we are okay with that level of risk.

If that all sounds like a lot, and you just want to know “is the test statistically significant?”, we also provide a very clear printout at the top of the calculator that will let you know.

Sampling and the T-Test in Detail

The statistics behind A/B testing can be confusing, so I’ll talk about it briefly at a high level to hopefully shed some light on how to go about testing statistical significance. T-tests rely on parameters (mean and standard deviation) to compare our two samples. If you’ve seen a bell curve (called a normal distribution), the mean is the middle of it, at the “average,” and the standard deviation explains how spread out the data is.

image showing normal distribution

For some metrics, data isn’t spread evenly in this bell shape, so we can’t test it using the standard mean and standard deviation approach.

image showing uneven data spread

However, there is a workaround, which applies a “sampling distribution” using the “Central Limit Theorem”. I would recommend this video for a more thorough explanation of this approach.

If you have two samples, a control and variation, instead of using every point, you could take a smaller sample from each sample. This is really useful for data that isn’t in that bell shape we talked about.

To accommodate this non-normal data, we can use aggregate metrics, like the “average”; so instead of page views, we get average page views. If we have 10,000 points in our control group, we can “sample” a number of points. We then take the mean, or average, of this smaller group of points, and we get a “sample mean”.

We can do this a whole bunch of times until we have 30+ sample means for both the control and variation data samples. It’s important to understand that all we’re really doing is taking a subset of points from each sample and averaging them.

Example:

Base Samples:

Control: 5,6,7,8,9,10,11,20,5,4,2,25

Variation: 2,4,5,3,2,4,6,5,6,10

Sample   Control subsample   Sample mean   Variation subsample   Sample mean
1        2, 25, 11           12.67         6, 5, 5               5.33
…
30       10, 11, 20          13.67         10, 2, 3              5

As you can see, we “sampled” 3 points from each base sample, and found the mean of the 3 points. We can do this 30 times, and we’ll end up with a “sampled” distribution of means. So even if our actual data doesn’t look like a bell curve initially, the samples do (so long as you take around 30 or more samples). We can then compare sampled distributions since they have a mean and standard deviation similar to that of a bell curve.

The takeaway here is that sampling data a whole lot of times will create a normal distribution for these metrics and as a result, it becomes possible for your team to measure statistical significance for the “average” of these metrics using a t-test. Specifically, you would need to use a t-test that allows you to enter user level or session level data.
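The sampling procedure described above can be sketched in a few lines of Python. The data reuses the worked example; the function name, seed, and subsample size of 3 are illustrative choices, not what the calculator uses.

```python
import random
from statistics import mean

def sample_means(data, n_samples=30, sample_size=3, seed=1):
    """Repeatedly draw a small random subsample and record its mean; per
    the Central Limit Theorem, these means are roughly bell-shaped."""
    rng = random.Random(seed)
    return [mean(rng.sample(data, sample_size)) for _ in range(n_samples)]

control = [5, 6, 7, 8, 9, 10, 11, 20, 5, 4, 2, 25]
variation = [2, 4, 5, 3, 2, 4, 6, 5, 6, 10]

control_means = sample_means(control)
variation_means = sample_means(variation)
# These two lists of sample means can now be compared with a t-test.
```

Even though the raw lists above are skewed, the two lists of 30 sample means will cluster around each sample’s true average.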

You may be wondering why you can’t just use total conversion counts, and that has to do with the scope of the test. If you are only concerned with conversions, then that is definitely a good approach. However, continuous metrics aren’t as binary as yes or no, and they contain magnitudes like revenue or time on site. With these metrics, a t-test needs to take into account those magnitudes so that you can find the variation that has the greatest value, rather than the most conversions.

“The Blast Statistical Significance Calculator…takes the burden off users and still provides statistical rigor to the results.”

Now using a t-test for continuous metrics would normally create an additional burden for users; they would need to manually create the samples and calculate sample variances before analyzing for statistical significance. The Blast Statistical Significance Calculator implements a pooled variance approach, and uses the standard equation to calculate sample variance on the back-end. Doing so takes the burden off users and still provides statistical rigor to the results.
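As a hedged sketch of the pooled-variance math, here is a two-sample t-test in pure Python. It uses a normal approximation for the p-value, which is reasonable at the large sample sizes typical of web data; the function name and the example durations are mine, and the calculator’s back-end may differ.

```python
from math import sqrt
from statistics import NormalDist, mean, variance

def pooled_t_test(a, b):
    """Two-sample t statistic with pooled variance; two-sided p-value via
    the normal approximation (valid for large samples)."""
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * variance(a) + (n_b - 1) * variance(b)) / (n_a + n_b - 2)
    t = (mean(b) - mean(a)) / sqrt(pooled_var * (1 / n_a + 1 / n_b))
    p_two_sided = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p_two_sided

# Hypothetical session durations (seconds). With only 8 sessions per
# group, this positive difference is not yet significant.
control_durations = [30.0, 45.5, 12.0, 80.0, 22.5, 64.0, 38.0, 51.5]
variation_durations = [x + 11.0 for x in control_durations]
t, p = pooled_t_test(control_durations, variation_durations)
```

The small example underlines the earlier point about power: a real 11-second lift can easily fail to reach significance when the samples are tiny.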

Choice 2: Non-Parametric Testing — All Points Considered

Our second approach for calculating significance for a continuous metric is meant for teams that know their data is not normally distributed and they want to use every data point for significance calculations. Specifically, the third selection on our calculator is the Non-Parametric calculator (the approach that is used for the RPV Calculator).

image showing results in Blast statistical significance calculator using non parametric test type

We have already written about this approach in a past blog post, but I’ll explain again here in lesser detail. Instead of using mean and standard deviation, a non-parametric test (the Mann-Whitney Wilcoxon in this case) compares each number of the Control sample to each number of the Variation sample and determines which sample is relatively larger. So every point in our control data sample would be ranked against every point in our variation data sample.

The methodology behind this can be summed up by asking “which variation is relatively larger?”, and it’s pretty straightforward.
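The pairwise-comparison idea can be sketched directly. This computes only the U statistic; turning U into a p-value uses a normal approximation of its distribution, which is omitted here, and the function name is my own.

```python
def mann_whitney_u(control, variation):
    """U = number of (control, variation) pairs where the variation value
    is larger, counting ties as half a win. A U far above or below
    len(control) * len(variation) / 2 suggests one sample is
    "relatively larger" overall."""
    u = 0.0
    for x in control:
        for y in variation:
            if y > x:
                u += 1.0
            elif y == x:
                u += 0.5
    return u

# Every variation value beats every control value, so U hits its
# maximum of 3 * 3 = 9.
u_max = mann_whitney_u([1, 2, 3], [4, 5, 6])
```

Because U depends only on rank order, a single enormous outlier counts no more than a modest win, which is exactly why this approach suits skewed e-commerce data.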

Before moving on, I want to address a point of contention for statistical significance in the testing space. As mentioned previously, e-commerce data has a habit of being very skewed. There is nothing wrong with skewed data, but special care needs to be taken to run tests with it.

Which Continuous Testing Approach Should You Use?

“How do you determine statistical significance?” is a common question we get, and unfortunately there are many ways to calculate it. The two options we provide for continuous metrics in our new calculator are the main two approaches used in the industry, and each have their strengths and drawbacks.

On one hand, t-tests are more powerful and tend to reach sufficiently powered statistical significance faster because they measure the difference in sample means (the averages), while non-parametric tests use ALL the data and, by design, are meant to handle non-normal (skewed) data.

Normally, performing a t-test for continuous metrics would require extra effort for users since they would have to manually calculate the variance; however, as mentioned above, our calculator eliminates the need to do this because it does all of the variance calculations on the back-end.

As a result, the prep work needed for either approach (t-test or non-parametric test) is roughly the same. Most of the leg work will take place in setting up your analytics platform to track session-level data (ex. Session Id) or user-level data (ex. Client Id) and then ensuring your CSV is formatted properly.

Once this setup has been completed and you want to test, either approach is viable to use. That said, you should expect to get different results when using either the non-parametric or the parametric approach since they go about the calculation differently.

We encourage your team to determine which approach is right for your business needs, and use that calculation. We have provided all the tools to run either test, but we cannot make the final decision for you, as every use case is different.

The Calculator User Interface

If you have seen our Statistical Significance Calculator, you probably noticed there are many options in each of the testing interfaces.

image showing the statistical significance calculator interface

For binomial testing, it’s as easy as entering your conversions and traffic for your control and test variation and then deciding what kind of test you want to perform; however, there are many more options when looking at the continuous metrics.

Most of the options have to do with the formatting of your CSV file. If you’re in America, the defaults should work fine. If you’re outside the U.S., you may need to adjust so that your decimal and delimiter are different (in America, decimals are ‘.’ and delimiters in CSV files are ‘,’ — other countries’ decimal and delimiter conventions may vary).

For the continuous t-test and non-parametric approaches, you’ll also see an option to upload your CSV file. The “Hypothesis” is all about how you want to compare your two data sets. If you want to check if the samples are different in any way, use “two-sided,” and if you want to test if the variation sample is greater or less than your control, use “greater” or “less” respectively to select that.

Lastly, including outliers (values that fall well outside the normal range) can be useful for some testing, but more often than not, you’ll want to remove outliers from your data. By default, our calculator will remove outliers, however this functionality can be turned off by selecting “No” under “Remove Outliers.”
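As an illustration of what outlier removal can look like, here is a sketch using Tukey’s IQR fences, a common convention. The calculator’s actual rule is not documented here and may differ, and the function name and data are mine.

```python
from statistics import quantiles

def remove_outliers(values, k=1.5):
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# The single extreme value is dropped; the rest survive.
cleaned = remove_outliers([10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 500])
```

A handful of whale orders can otherwise dominate a revenue-per-visitor comparison, so filtering like this (or keeping outliers deliberately) should be a conscious choice for each test.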

Conclusion

While it was fun developing the new Blast Statistical Significance Calculator and this blog post, we didn’t do it just for kicks. This calculator was designed to not only provide a better user experience, but also to provide a ton of functionality by introducing the two new testing approaches.

Even though our calculator is a fully functioning product, that doesn’t mean we are going to stop improving it. The industry continues to EVOLVE and new methods are coming out all the time. As they’re tested and shown to be viable, we’ll be updating the calculator to reflect what the industry deems most reliable.

Thanks for reading! Definitely give the Blast Statistical Significance Calculator a try! After you’ve tried it, we’d love to get your feedback in the comments of this post.

The post The Last Statistical Significance Calculator You’ll Ever Need appeared first on Blast Analytics & Marketing.

5 Exciting Updates from Optimizely’s Opticon Conference 2019


As Mark Yolton, Vice President, Digital & Interactive at Salesforce, shared during the Opticon conference 2019, “…we have smart people that have been working on testing and optimization for a long time. Combined, we have decades of experience (maybe even centuries!), yet we still get it wrong. That’s why formal testing is important.”

Opticon19, the digital experience optimization conference hosted by Optimizely, took place September 11-13, in San Francisco. In addition to some incredible product enhancements that were highlighted, there were several themes carried throughout the conference as well, including:

  • Focus on ROI
  • Center of Excellence
  • Micro Conversions
  • Focus on Why
  • Focus on the Customer

Top 5 Opticon 2019 Product Updates

image representing opticon conference 2019

1) Impressive Performance Edge

Page performance can dramatically affect your customer experience and also limit your ability to scale your experimentation program, so Optimizely has focused on improving the load of experiments on a page to under 50 milliseconds – barely discernible by the human eye.

2) Customized Reports and Models via Data Labs

While Optimizely reports were initially built for simplicity, as testing matures, flexibility becomes key. Data Labs allows customization of reports, more complex metrics and segments that enable deeper insights, and expansion past a standard t-test so you can now incorporate a Bayesian inference model – or build your own model.

3) Enhanced Personalization and AI

Taking personalization to the next level, Optimizely now enables you to incorporate your third-party data (such as Google Analytics), first-party data, and Adaptive Audiences – Optimizely’s machine learning. You can create on-the-fly segments simply by typing in keywords to target visitor interests and drive results.

4) Full Stack with Rollout Control

With its expanded open source SDK offerings, including a fully customizable option to build your own, Optimizely enables you to create tests in the language you prefer. This makes it easier to merge winning test designs into your main code, minimizes merge conflicts, and speeds up time to market. With rollout control (to limit the percentage of your audience as well as quickly revert if any issues arise) and Jira integration, testing can fit into your existing development process with minimal disruption.

5) Get Buy-in with ROI Model Builder

Optimizely also now enables you to quantify your testing efforts and impact using the ROI Model builder. This model is fully customizable and relies on your data. It’s an easy way to document proof that experimentation drives revenue and get the executive buy-in needed to grow your testing program.

See Optimizely’s recap of its customer experience optimization Conference

Other Opticon Conference 2019 Themes

Focus on ROI

While various teams work towards different key performance indicators (KPIs) to measure their efforts, ultimately everyone’s working to increase return on investment (ROI). When you can show an impact to ROI, it becomes easier to justify the work, increase investment in testing efforts, and get the broad buy-in necessary for a successful experimentation strategy. Not to mention, those that show they can drive ROI are often the ones that get extra resources and earn promotions.

Create a Center of Excellence

Similar to creating an analytics-focused organization, a customer experience-focused organization driven by testing and experimentation requires broad buy-in from across the organization. As Laura Pflug, eCommerce Website Manager at Brooks Running, called out, the HPPO (highest paid person’s opinion) matters. Starting with upper-level support, it becomes much easier to bring in the additional resources from Product, Development, Creative, Marketing – all areas of an organization necessary to implement tests and roll out winning results.

A Center of Excellence enables a structured approach for your organization to follow, offering templates based on your brand and site standards. With a triage and scheduling process, and leveraging the ease of use that Optimizely provides, you can enable any stakeholder to participate in tests directly relevant to their efforts. Optimizely found that organizations running 21 or more tests a month are most likely to drive a more than 14% increase in revenue. As testing becomes a part of an organization’s culture, it becomes easier to scale and enables everyone to share in the success.

Micro Conversions

While the ultimate drivers for most businesses are revenue and ROI, different groups within the organization most likely drive to varying KPIs that more closely tie to their specific efforts.

Focusing on micro-conversions can help bring that high-level ROI down to details that you can proactively take action on. These proxy metrics become leading indicators that you’re moving the needle towards the ultimate goal of increasing ROI. For example, driving more resource downloads or more page views on key content on your site may end up driving more word of mouth, resulting in increased sales. Identify these key micro-conversions to track, and then test, test, test.

With the advancements in SDK Full Stack integration, it has become much easier to integrate other sources of data within your organization to identify downstream and potentially unexpected impacts your tests may drive (good or bad).
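As a toy illustration of the idea above (visitor IDs and event names are hypothetical, and a real implementation would pull events from your analytics platform), micro-conversion rates can be computed from a simple event log and tracked over time as leading indicators:

```javascript
// Toy event log with hypothetical visitor IDs and event names.
const microEvents = [
  { visitor: "v1", event: "key_page_view" },
  { visitor: "v1", event: "resource_download" },
  { visitor: "v2", event: "key_page_view" },
  { visitor: "v3", event: "purchase" },
];

// Share of unique visitors who triggered a given micro-conversion.
function microConversionRate(events, eventName, totalVisitors) {
  const converters = new Set(
    events.filter(e => e.event === eventName).map(e => e.visitor)
  );
  return converters.size / totalVisitors;
}

const totalVisitors = new Set(microEvents.map(e => e.visitor)).size; // 3 unique visitors
const downloadRate = microConversionRate(microEvents, "resource_download", totalVisitors);
const keyPageRate = microConversionRate(microEvents, "key_page_view", totalVisitors);
```

Plotting these proxy rates alongside revenue over time is one simple way to check whether a micro-conversion really behaves as a leading indicator for your ultimate goal.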

Focus on the Why

To really understand your customers and visitor base, be sure to focus on why users are taking the actions you want them to take. It’s not enough to simply increase a desired click or micro-action if it doesn’t ultimately result in the end conversion or sale. Dig deeper into why a user took the action: what are they trying to accomplish? Are you making it easier for them to accomplish their goals?

“Experimentation gives businesses direct insight into what customers want, and this is what digital transformation should be centered around.”

Focus on the Customer

Once you understand the “why,” you can truly focus on the customer and increase customer engagement. As Ashton Kutcher referenced in his keynote, if you’re nice, work hard, and do the right thing, you’re sure to succeed. Do right by your customers, and you’ll reap the rewards.

“(There’s a) Positive correlation between improving the customer experience and committing to experimentation.”

Start Small & EVOLVE

Interestingly, Rebecca Bruggman, Staff Technical Program Manager, Experimentation at Optimizely, targets a 10-30% win rate in Optimizely’s own testing. A win rate this low signals that the tests you’re running are bold enough to be disruptive and drive the increases that can really impact your business.

When just starting out, even small improvements can help build the momentum and buy-in needed across your organization. In my experience, these smaller improvements add up, build excitement, and set your organization in the right direction of establishing an experimentation-focused culture. Blast understands the value of experimentation so much so that we incorporate it into our consulting methodology, SIOT: Strategy, Implementation, Optimization, and Training.

We believe improving the customer experience through experimentation and optimization is fundamental to helping your organization EVOLVE.

The post 5 Exciting Updates from Optimizely’s Opticon Conference 2019 appeared first on Blast Analytics & Marketing.

Increase Your Competitive Advantage with Tag Management Governance


Is your tag management system the Wild West or Fort Knox?

A tag management system (TMS), with proper tag management governance, plays a critical role in your organization’s ability to move quickly and efficiently when implementing analytics and marketing tags/platforms onto your website. By using a TMS, it’s easier to remove old tags that you no longer use, onboard new tags (even for a POC/trial), and update tags to work with new features you wish to launch. 

With so much flexibility, a TMS can, unfortunately, cause a wide variety of security and performance issues if not governed properly. Tag management governance defines who can make and publish changes, how changes are reviewed, and what happens when something breaks.

As tag management consultants for many large enterprises, we’ve seen TMS usage at both ends of the spectrum: zero governance and complete governance.

Don’t Wait to Enact Tag Management Governance

Often, it isn’t until a catastrophic issue occurs that tag management governance gains the proper focus.

A poorly planned TMS design and a lack of governance will eventually lead to problems. The issues are typically related to security, site performance, or just a complete tangled mess within the tag management system that makes things unmanageable. 

The proper tag management governance can help reduce the chance and severity of issues.

Bypass IT: The Wrong Approach

image representing tag management (system) governance

“The serious risk of a poorly governed TMS should be top of mind for all organizations.”

Unfortunately, a common example we see with a tag management system implementation is that it was deployed as a means to avoid having to work with the IT and development teams (or go around them completely).

This is the wrong approach and thought process. 

Even though a TMS makes it super simple for just about anyone with basic knowledge to paste in code or use an interface to set up a tag, there are so many things that can go wrong. Common issues range from completely breaking the checkout flow and causing massive revenue losses, to performance problems that quadruple the time it takes to complete common tasks on a page.

The serious risk of a poorly governed TMS should be top of mind for all organizations.

Benefits of Tag Management System Governance

Your IT team wants security and performance. Your analytics and marketing teams want speed, flexibility, and reliable data. Through TMS and data governance, everyone can get what they want, while creating competitive advantage for the organization.

The reduction of risk through proper governance helps increase your organization’s competitive advantage. Poor site performance, or even potential data leaking from your website to unintended marketing tags, can destroy the trust you have with your customers. When you build and maintain trust with your customer, you are taking steps to increase competitive advantage. 

Competitive advantage extends to the enablement of taking better action with your data. According to Sanjay Saxena, SVP of Enterprise Data Governance at Northern Trust, “Good data quality gives organizations confidence in their products and services. This, in turn, enables companies to make data-driven decisions that lead to better client relations, better products, and premium pricing.”

Proper TMS governance drives business value. A Harvard Business Review article found that only 3% of companies’ data meets basic quality standards. Through data governance, including validation processes and quality assurance, you can increase the quality of data. When you decrease data collection errors and ensure that the right data is collected, you’ve increased trust in data.

Tag management system governance includes documented answers to the following sample of best practice questions:

  • Who has access to make changes?
    In addition to who has access, there MUST be a process to remove users once they end their relationship with the organization.
  • What’s the quality assurance (QA) process, and who is responsible?
    No two organizations are identical, so the QA process will differ for each. A customized QA process should be developed that meets the needs of both IT (for security and performance) and marketing (for speed, flexibility, and data reliability). How does the QA process align with automated QA tools, such as ObservePoint?
  • Who can publish the changes?
    Oftentimes, the person making the edit in the tag management system should not be privileged to also publish the change. The publish needs to be coordinated, documented, and audited to reduce risk in production environments.
  • What happens after each publish?
    After a change is published, it should be documented somewhere, and teams need to be informed of the publish so that they can alert you to any significant issues or changes in metrics. There must be a QA process in production to validate the results. If you’re using an automated QA tool, such as ObservePoint, now would be the perfect time to kick off web journey and web audit tests.
  • What’s the workflow from start to finish to get a change live?
    This must be documented in a CoE (Center of Excellence) so that everyone is aligned on what to expect. If you’re able to align the steps into a Jira workflow, this creates the ultimate level of transparency for others to immediately understand the status of the work. You should also set standards in terms of how long a typical tag should take from start to finish.
  • What are the coding standards within the TMS?
    The users that are making code edits within the tag management system should follow common best practices and align their coding to your unique organizational standards. While it varies by TMS, how are exceptions treated? Is there a need to put everything in its own try/catch to avoid browser errors?
  • What data is exposed to the TMS and to the individual tags?
    Are you aware of what data is being sent to each vendor? For example, are there serious privacy issues, such as sending the user’s email address in plain text to a third-party marketing platform?
  • How often is the TMS audited?
    The audit should ensure stale tags are removed, processes are being followed, and opportunities to improve performance are completed.
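As one sketch of the per-tag try/catch coding standard raised above (the helper and tag names here are hypothetical, and each TMS has its own mechanisms for custom code), every custom tag can run inside its own try/catch so a single failing vendor script cannot throw an uncaught error that breaks the page or blocks other tags:

```javascript
// Hypothetical helper (names are illustrative, not from any specific TMS):
// each custom tag's code runs inside its own try/catch so one failing
// vendor script is contained rather than breaking the page.
function runTag(tagName, tagFn) {
  try {
    tagFn();
    return { tag: tagName, ok: true };
  } catch (err) {
    // In practice you might report this to a monitoring endpoint.
    return { tag: tagName, ok: false, error: err.message };
  }
}

// One healthy tag and one broken vendor tag: the failure is contained.
const tagResults = [
  runTag("analyticsBeacon", () => { /* fire analytics request */ }),
  runTag("brokenVendorTag", () => { throw new Error("vendor script failed"); }),
];
```

Capturing the failures (rather than silently swallowing them) also gives your audit process a record of which tags are erroring in production.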

Through the application of these data governance best practices, your organization will start moving away from the Wild West approach and closer to the Fort Knox TMS governance that will increase your organization’s competitive advantage.

“…moving away from the Wild West approach and closer to a Fort Knox TMS governance will increase your organization’s competitive advantage”

Increasing your organization’s TMS governance can be daunting, to the point that you don’t know how to effectively start. By auditing what you’re doing today and establishing a roadmap of where you want to be, you’ll be ready to get executive buy-in. 

We’d love to hear your questions and ideas on how to increase tag management system governance, in the comments below. Together, we can ensure the digital analytics industry is increasing maturity and using tag management in a way that respects security, transparency, and performance while ensuring speed, flexibility, and data quality are achieved.

The post Increase Your Competitive Advantage with Tag Management Governance appeared first on Blast Analytics & Marketing.

How to Get Your Business AI Ready to Improve Customer Experience


Providing an engaging customer experience is no longer optional for businesses but instead, is demanded by users. In fact, the 5th Edition of the “State of Marketing Report” by Salesforce reveals that 60% of customers expect companies to take it one step further and anticipate their needs. As businesses strive to keep up with customer expectations and engage with them in a relevant manner, technology is paving the way to meet these needs.

While often portrayed in the media as robots stealing people’s jobs, artificial intelligence (AI) has proven its ability to improve the customer experience. The Salesforce report shows that businesses are increasingly embracing the use of AI to drive personalization and overall, improve the customer experience. In fact, the use of AI has grown by 44% since 2017.

So how does this impact your business? First, if improving the customer experience isn’t already a top business priority then it needs to become one. According to Salesforce research, 80% of customers stated that the customer experience is as important as the product or service being offered. Second, your team should evaluate the methods that are currently in use to meet and, more importantly, anticipate customer needs. If AI is not already a part of the discussion, then it needs to be introduced.

“…if improving the customer experience isn’t already a top business priority then it needs to become one.”

With that being said, we strongly caution against diving into AI without proper planning. For AI to be successful in driving the customer experience, your team needs to understand how to get AI ready, including:

  1. Understanding the difference between the buzz words
  2. Identifying the business need
  3. Learning how to make data AI ready

AI and Machine Learning: What’s the Difference?

Before having a discussion on AI, it’s important to understand the relationship between the buzzwords AI and machine learning (ML). John McCarthy coined the term “Artificial Intelligence” and defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs.”

Machine learning is not separate from AI, but instead is a subset of AI. Tom Mitchell’s definition for ML is the most widely known: “A computer program is said to learn from experience ‘E’, with respect to some class of tasks ‘T’ and performance measure ‘P’ if its performance at tasks in ‘T’ as measured by ‘P’ improves with experience ‘E’.”

image showing how machine learning is a subset of ai

In layman’s terms, machine learning is simply a technique for realizing AI. Machine learning involves large amounts of data and algorithms to learn how to perform the task. The important takeaway here, and what will become relevant later on as your team gets AI ready, is that ML cannot access any knowledge outside of the data provided.
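To make Mitchell’s definition concrete, here is a toy sketch (not a real ML system; the task, labels, and numbers are invented for illustration) in which performance P, accuracy on a fixed task T, improves with experience E, i.e. more labeled training examples. Note the model knows nothing beyond the data it is given:

```javascript
// Toy 1-nearest-neighbor "program": predict the label of the closest
// training example. Its only knowledge is the training data provided.
function predict(train, x) {
  let best = train[0];
  for (const ex of train) {
    if (Math.abs(ex.x - x) < Math.abs(best.x - x)) best = ex;
  }
  return best.label;
}

// Performance measure P: accuracy on a fixed test set.
function accuracy(train, testSet) {
  const correct = testSet.filter(t => predict(train, t.x) === t.label).length;
  return correct / testSet.length;
}

// Task T: label numbers >= 10 as "big".
const testSet = [1, 4, 8, 11, 15, 20].map(
  x => ({ x, label: x >= 10 ? "big" : "small" })
);

const littleExperience = [{ x: 0, label: "small" }, { x: 100, label: "big" }];
const moreExperience = littleExperience.concat([
  { x: 9, label: "small" }, { x: 10, label: "big" },
]);

const pBefore = accuracy(littleExperience, testSet); // misclassifies 11, 15, 20
const pAfter = accuracy(moreExperience, testSet);    // more experience E, better P
```

With only two extreme examples the program draws the boundary in the wrong place; adding examples near the true boundary improves its accuracy, which is exactly “improving at T, as measured by P, with experience E.”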

How to Use AI to Improve the Customer Experience

image representing how amazon uses ai to improve the customer experience

As businesses adopt AI to improve the customer experience, there isn’t necessarily one predominant use case. Amazon is well known for using machine learning to provide product recommendations based on the customer’s preferences.

However, using machine learning to provide customer recommendations is not limited to retail. For example, Adobe’s AI-powered personalization (Adobe Sensei) allows businesses to deliver personalized recommendations in other verticals, including publishing, B2B, and travel.

image showing how adobe sensei improves customer experience using ai and personalization

Another example where businesses are using AI to improve customer experience is with online customer support. Specifically, more businesses are turning to live chat bots to provide assistance to customers on their website. The benefit of a live chat bot is that it can provide real-time assistance at all hours of the day, and generally at a lower cost than having a live chat representative.

image of ai being used via a chat bot example on amtrak site

Some businesses are taking AI a step further and coming up with creative use cases that make it easier for customers to make a decision and engage further along the customer journey. For example, Ulta’s mobile app allows customers to try on makeup virtually in their GLAMLab.

The ability to try on makeup virtually instills greater confidence in the customer that they are picking the right product and reduces the likelihood of them returning their purchase.

Looking at these AI-based customer experience examples, the main takeaway is that businesses are improving the customer experience in a variety of ways. The key is to determine which AI solution works best at meeting your specific business needs.

Where Does AI Fit Into Your Business Needs?

Businesses that prioritize the customer experience likely have an established culture of experimentation. With experimentation, these businesses understand that testing the customer experience shouldn’t be done for the sake of testing but should be used to eliminate known points of friction in the customer journey.

A similar approach needs to be taken when it comes to adopting AI. In other words, there shouldn’t be a rush to implement AI just for the sake of using it. Instead, your team needs to strategize about how the AI solution will solve actual business problems. When having this discussion with your team, the key to this strategy is to

  1. focus on well-defined problems vs broader, more general use cases, and
  2. identify a measurement for success.

Identify a Specific Use Case

A poor use case for AI is one that is too broad. For example, the use case “we want to make customers happy” lacks specificity and doesn’t provide insight into what friction is actually impacting customer satisfaction. Without this information, it is difficult to know where exactly to implement AI. For example, to make customers happy, do you need to focus on personalized recommendations or provide better customer support? Taking the time to research and identify specific business needs, or areas of friction in the customer experience where AI can be used, will be essential for its success.

Flipkart takes the right approach by utilizing ML to solve a well-defined problem unique to their business. Specifically, Flipkart does business in India, where home addresses lack standardization and accuracy: “Factors like lower literacy rates lead to customers entering incorrect PIN codes, or commit spelling errors while filling up addresses, magnifying the problem. A lot of Indian place names are translated from local languages into English phonetically, resulting in variable spelling patterns.“

The inconsistency in these addresses impacts speed of delivery, which, among other things, harms customer satisfaction. To address this issue, their team created an ML model “consisting of different locality features that people commonly write in addresses, which is gaining accuracy with incremental deliveries.”

Implementing machine learning is expected to improve speed of delivery, increase efficiencies and reduce the cost of labor required to manually sort packages. Since Flipkart took the time to identify a specific business problem, their team was able to leverage machine learning to solve a real issue that was impacting the customer experience, and their business.
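Flipkart’s actual solution is a learned model; purely to illustrate the underlying problem of variable, phonetic spellings, here is a minimal rule-based sketch (the locality names and mappings are hypothetical, and a real system would learn such mappings from delivery data rather than hard-code them):

```javascript
// Hypothetical canonical-form lookup for locality spelling variants.
const canonicalLocalities = {
  kormangala: "Koramangala",
  koramangala: "Koramangala",
  kormangla: "Koramangala",
};

function normalizeLocality(raw) {
  // Lowercase and strip non-letters to collapse trivial variations.
  const key = raw.toLowerCase().replace(/[^a-z]/g, "");
  // Fall back to the cleaned-up input when no canonical form is known.
  return canonicalLocalities[key] || raw.trim();
}

// Two variant spellings resolve to the same canonical locality.
const a = normalizeLocality("Kormangala ");
const b = normalizeLocality("koramangala");
```

The rule-based version makes the limitation obvious: it only handles variants someone anticipated, which is why a model that keeps learning from incremental deliveries is the better fit for this problem.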

How Will Your Team Measure Success?  

In addition to having a well-defined use case, your team’s strategy should also identify a measurement for success. An essential question to ask the team is, “what’s the business result we are trying to achieve with AI or ML?” Knowing how success will be measured beforehand is imperative in understanding the true impact of AI on customer experience.

“Knowing how success will be measured beforehand is imperative in understanding the true impact of AI on customer experience.”

If your team is unable to identify these metrics (e.g. transactions, return rate, call volume, delivery costs, net promoter score) then the use case is likely not the best one to move forward with for AI.

After your business use cases and measurement for success have been identified, the next step is to evaluate the state of your data, as this will serve as the foundation for your AI solution.

How to Make Customer Experience Data AI Ready

image representing how to make customer experience data ai ready

As mentioned earlier, the driving force behind AI and machine learning is data. To get your business AI ready, your team must have a solid understanding of the data that is currently available. This includes knowing the types of data available, as well as where and how it is stored (e.g. analytics data, data from customer support, qualitative data, etc.). Moreover, it is equally important to understand what data is missing but necessary to execute AI for your identified use case. If specific data points are missing, a plan should be in place to start collecting that information prior to implementing AI.

One of the biggest challenges that businesses need to overcome when adopting AI/ML is having easy access to all this necessary data. Oftentimes, different data types are siloed, which reduces the diversity of data available for use in AI, and increases the risk of inaccurate or incomplete data. This will lead to sub-optimal outcomes in AI.

Bottom line: to make data AI ready it is imperative to transition from siloed to shared data.

How to Transition From Siloed Data to Shared Data

Making this transition is easier said than done, but the following steps will help your business get started:

  1. Establish a culture of knowledge sharing
  2. Leverage technology that unifies data: a Customer Data Platform (CDP)
  3. Protect privacy to maintain customer trust

Establishing a Culture of Knowledge Sharing

Creating a culture of knowledge sharing is necessary to break down data silos. Similar to establishing a culture of experimentation, where success is dependent on participation across teams, creating this culture of knowledge sharing must start at the top.

“Creating a culture of knowledge sharing is necessary to break down data silos.”

Specifically, the C-suite will need to commit to creating this data-driven culture, including regularly communicating this commitment across the different teams and further, holding teams accountable to ensure follow-through. This data-driven transformation is far more likely to be successful when driven by the leaders rather than initiated by an individual team or teammate.

Leveraging the Right Technology

Having the right technology in place is also necessary to transition from siloed to shared data. One useful platform to incorporate into your business is a CDP. CDPs allow your business to collect first-party data from multiple online and offline interactions and match it to a single customer profile. In other words, this technology allows your business to unify your customer data to get a more comprehensive view of the customer and, ultimately, use that data to engage customers in a highly relevant way. To learn more about CDPs, check out our previous blog post.
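Conceptually, the identity stitching a CDP performs can be sketched as follows (a deliberately simplified, hypothetical example keyed on a shared email address; real CDPs use much richer identity resolution across device IDs, loyalty numbers, and other identifiers):

```javascript
// Merge records from separate online/offline sources into one profile
// per customer, keyed here by a normalized email address.
function unifyProfiles(records) {
  const profiles = {};
  for (const rec of records) {
    const key = rec.email.toLowerCase();
    if (!profiles[key]) profiles[key] = { email: key, sources: [] };
    const profile = profiles[key];
    profile.sources.push(rec.source);
    // Copy over attributes the unified profile does not have yet.
    for (const [field, value] of Object.entries(rec)) {
      if (field !== "email" && field !== "source" && profile[field] === undefined) {
        profile[field] = value;
      }
    }
  }
  return profiles;
}

// One online and one offline record for the same customer.
const unified = unifyProfiles([
  { email: "Ana@example.com", source: "web_analytics", lastPageViewed: "/pricing" },
  { email: "ana@example.com", source: "in_store_pos", lifetimeSpend: 240 },
]);
```

The payoff for AI is the unified record: a model fed this profile sees both the browsing behavior and the in-store purchase history, instead of two disconnected fragments.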

Protecting Customer Privacy

Finally, it’s important to highlight that along with easy access to data comes a greater responsibility to protect customer data and maintain customer trust. A Salesforce study shows that customers today are far more likely to engage with companies they trust. An integral part of building and maintaining trust is ensuring that when a customer hands over their personal information, the organization is doing all it can to keep it safe. Notably, customers are generally willing to provide their personal data if they ultimately see value from doing so, such as having AI improve the customer experience for them. As a result, it is strongly recommended that privacy measures are evaluated, and upgraded if needed, to ensure customer data is thoroughly protected.

“An integral part of building and maintaining trust is ensuring that when a customer hands over their personal information, the organization is doing all it can to keep it safe”

Conclusion

Realizing the importance of AI in improving the customer experience is necessary to remain competitive these days; however, businesses will need to plan accordingly before implementing AI. As outlined in this blog post, there are several steps businesses need to take to ensure proper strategy and technology is in place to successfully execute AI.

To summarize, businesses need to identify specific use cases (and applicable measurement for success) where AI or machine learning can actually improve the customer experience. With use cases in hand, your team needs to have a firm grasp of the current state of your data, including knowing whether additional data is needed. Likely the biggest challenge here is to ensure that data is easily accessible. This requires establishing a culture of knowledge sharing that is driven by executive leadership. Further, it requires having the necessary technology in place, such as a CDP, that will help unify customer data and give the AI solution all the data it needs to execute properly.

Finally, it is imperative to ensure privacy measures are in place to protect customer data and maintain customer trust. AI will provide the value that customers need to see but it won’t have an impact if customers do not trust your organization.

I hope this post has helped clarify the difference between artificial intelligence and machine learning, and the steps you should consider before jumping into AI. Using AI to improve the customer experience can be a daunting undertaking, considering the amount of preparation required. However, if you follow the steps outlined above, I’m confident that you will find AI success. If you have questions about how your specific business can get AI ready, including identifying use cases, auditing your current data, or evaluating privacy measures, I’d be happy to provide some initial thoughts.

The post How to Get Your Business AI Ready to Improve Customer Experience appeared first on Blast Analytics & Marketing.
