
Enhancing Online Viewing Experience and Boosting Ad Revenue

January 25, 2018

According to various estimates, internet users today are served an overwhelming number of advertisements (ads) targeted at their browsing behavior. These ads disrupt the browsing experience by appearing on desktop and mobile devices as pop-ups, banners, auto-play videos and full-screen images. Ad blocking tools have sprung up as a natural response, and their more than 200 million users bear testimony to the fact that internet users dislike online ads.

 

Ad blockers are giving advertisers a run for their money by ensuring that ads never reach their intended audience. As a result, they have dented the revenues of online businesses that rely on ads to sustain themselves. A 2016 estimate pegs global losses due to ad blockers at a whopping $40 billion.

 

Are Advertisers and Publishers Effectively Dealing with Ad Blockers?

Several websites block access to their content unless users deactivate their ad blockers or pay for access, in an attempt to counter the revenue-crunching effects of these tools.

 

Research shows that such measures are counter-productive. A huge chunk of internet users would rather stop visiting such sites than be arm-twisted into paying up or disabling these tools. Nonetheless, the majority of users are not averse to watching ads as long as they are not annoying. This sets a clear mandate for advertisers and publishers to serve ads that enrich users’ browsing experience.

 

Is There a Way to Enrich Users’ Viewing Experience?

Yes, advertisers and publishers can start by identifying non-performing ads. An ideal way to identify non-performing ads is to map users’ browsing behavior against the ads they are served. However, this task is impossible to accomplish manually once the browsing behavior of millions of users is taken into account.

 

Our client, a media conglomerate, was facing a similar problem. One of their biggest advertisers wanted their ads to appear only in certain premium slots. Our client carried out a manual inspection and discovered that the advertiser’s ads appeared randomly in viewer streams. They were aware of the problem and had huge volumes of data at their disposal to confirm it. In spite of this, they were unable to fix it, because the time and effort required to extract insights from the data rendered it un-actionable. They needed an automated solution to achieve efficiency and address the issue.

 

How Our Solution Helps Enhance the Viewing Experience

We wanted to make efficient use of the huge volume of ad server data our client had at their disposal. We therefore built an automated BI mechanism that analyzes and reports ad server data in a meaningful way. Our solution addresses three major concerns –

 

1. Repeated ads

When users see the same ad multiple times in quick succession, it leads to ad fatigue: repeated ads become less effective as viewers learn to ignore them. Our solution helps in this regard by identifying ads that do not adhere to the frequency limits set by ad publishers. Non-adherence to pre-set frequency limits indicates an issue with the ad server. Our solution also helps account executives serve advertisers better by highlighting ads pushed into the ad server without a frequency limit.
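
To make the frequency check concrete, here is a minimal sketch of the kind of query involved, assuming the ad server logs land in a table such as ad_views and the configured limits in ad_frequency_caps; both names and their columns are hypothetical, and the production solution works on far richer data.

    -- Hypothetical sketch: flag ads served to the same user more often than
    -- the per-session frequency limit configured by the publisher.
    SELECT v.ad_id,
           v.user_id,
           v.session_id,
           COUNT(*)                AS views_in_session,
           c.max_views_per_session
    FROM   ad_views          v
    JOIN   ad_frequency_caps c ON c.ad_id = v.ad_id
    GROUP  BY v.ad_id, v.user_id, v.session_id, c.max_views_per_session
    HAVING COUNT(*) > c.max_views_per_session;

Ads returned by this query breached their configured cap; a similar query using an outer join can surface ads that were pushed without any cap at all.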

 

2. Repeated ads from the same industry category

Competitive ad separation is an essential requirement for online ad publishers. It dictates that two ads belonging to the same industry category should not appear during the same ad break. For instance, a McDonald’s ad should not immediately follow a KFC or Burger King ad during a viewer’s scheduled ad break. Our solution flags such situations and helps publishers draft better ad media plans.
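
One way to detect such back-to-back placements is with a window function that compares each ad with the one served immediately before it in the same break; the sketch below assumes a hypothetical ad_views table carrying the break identifier, industry category and ad sequence.

    -- Hypothetical sketch: find consecutive ads from the same industry
    -- category within a single ad break (a competitive-separation violation).
    SELECT *
    FROM (
        SELECT session_id,
               break_id,
               ad_id,
               industry_category,
               ad_sequence,
               LAG(industry_category) OVER (
                   PARTITION BY session_id, break_id
                   ORDER BY ad_sequence
               ) AS previous_category
        FROM   ad_views
    ) ordered_ads
    WHERE industry_category = previous_category;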

 

3. Ad Load Latency

Ad load latency is the time it takes to load an ad at the start of an ad break or after a previous ad ends. While the latency is usually too small to notice, it can become noticeable when rendering ads with heavier file sizes on low-bandwidth networks. When that happens, it has a significant impact on users’ viewing experience, so publishers strive to keep latency as low as possible. Our solution tracks the latency of every ad along with the resolution at which the rendering took place, allowing ad publishers to address the issue holistically.
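
A simple sketch of the latency tracking is shown below, again with hypothetical table and column names; the actual solution also records device and network details so the root cause can be isolated.

    -- Hypothetical sketch: average and worst-case ad load latency by ad and
    -- playback resolution, to spot heavy creatives on slow connections.
    SELECT ad_id,
           playback_resolution,
           AVG(load_latency_ms) AS avg_latency_ms,
           MAX(load_latency_ms) AS max_latency_ms,
           COUNT(*)             AS ad_plays
    FROM   ad_views
    GROUP  BY ad_id, playback_resolution
    ORDER  BY avg_latency_ms DESC;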

 

The Impact of Our Solution

The following numbers give a statistical account of the impact made by our solution –

 

  • 99% reduction in offending ad views: An offending ad view occurs when the same user views a particular ad multiple times in the same viewing session on a particular site under analysis. Offending views dropped by 99% compared to the baseline, i.e., before a fix to the ad server based on our solution was implemented.
  • 85% improvement in problematic viewing sessions: A viewing session is a single continuous period during which a unique viewer watches content. A problematic viewing session is one that contains at least one offending ad view as defined above.
  • Millions of dollars saved: A fix to the ad server based on suggestions from this solution’s implementation has saved millions of dollars per year in lost ad revenues.

Our solution has helped in demonstrating to our customer that effortless exploratory analysis of their ad server data is possible. It has helped them in discovering ad view patterns and identifying areas of revenue leakage. Apart from saving millions of dollars for our client and enhancing their users’ viewing experience, our solution has armed them with insights to create a formidable media brand.

 

As our client’s understanding of the possibilities unearthed by the solution continues to grow, so does our capability to guide them through the advanced stages of business intelligence.

Near Real Time Ad Analytics powered by Big Data

May 23, 2017

One of the key challenges for the advertising industry is ensuring that the right ad is shown to the right audience at the right time. Simple as it may sound, the secret sauce for achieving this involves a complicated integration of ad servers, content delivery devices, targeting algorithms and viewer demographic data. The task can be particularly daunting given the massive data sizes and the unstructured nature of ad viewership data. One of our customers – a leader in the multimedia video delivery industry – was unable to provide granular details about ad views to its advertisers. The problem was complicated by the sheer volume of data: hundreds of millions of views per day.

 

The Need for Granular Ad Campaign Details

One of the advertisers wanted the ads to be precisely positioned to ensure the effectiveness of the campaign. An ad was to be shown only on specific websites, for specific shows, at specific time slots during the show and even in a specific order. Before InfoCepts deployed its solution, the monitoring and tracking of the ads was carried out manually in spreadsheets. The reports were high-level and couldn’t provide granular details such as exact impressions of the ad within specific timeslots. The unstructured nature of the weblogs and the massive data size, millions of rows per day, meant the spreadsheets had no chance of providing granular data.

 

Without solid data, the media company would have had to offer the advertiser a best guess on the number of wrong impressions, i.e. ads that weren’t placed as per the specifications provided by the advertisers. For a company that values the quality of its data, the precision of its targeting, and a high standard of excellence, a best guess was not enough: it needed a solution that could not only track such wrong impressions but also highlight them. The solution to this problem involved quickly processing huge amounts of unstructured data and hence needed the power of Big Data analytics.

 

The Solution

An assortment of Big Data technologies was used in conjunction with MicroStrategy, a powerful reporting tool, to enable quick, easy and effective analysis of large volumes of data. The media company can now effortlessly monitor incorrectly placed ads through a dashboard. Moreover, they now track the performance of each and every ad along the following parameters –

 

  • Number of impressions
  • Section of the site where the ad appeared
  • The video content during which the ad appeared
  • Details of series, episode, break number, and type of break (before, after, or in between the show)
  • The ad sequence within these breaks

Not only did these details allow the media company to supply the advertiser with specific insights on where, when, and how ads were served; they also allowed it to analyze viewer patterns for various types of ads.
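
As a rough illustration of how impressions can be rolled up along these dimensions, assuming the weblogs have already been parsed into a structured table (ad_impressions and its columns are hypothetical names):

    -- Hypothetical sketch: impressions by placement dimensions, after the raw
    -- weblogs have been parsed into a structured ad_impressions table.
    SELECT site_section,
           series_name,
           episode_id,
           break_number,
           break_type,              -- before, after, or in between the show
           ad_sequence,             -- position of the ad within the break
           COUNT(*) AS impressions
    FROM   ad_impressions
    WHERE  ad_id = 'EXAMPLE_AD'
    GROUP  BY site_section, series_name, episode_id,
              break_number, break_type, ad_sequence
    ORDER  BY impressions DESC;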

 

The Value of Deep BI and Data Analytics

The success of the campaign encouraged other divisions in the media company to tap the in-house InfoCepts team for assistance in gleaning useful insights. This empowered them to give their customers a complete lowdown on the effectiveness of ad campaigns, along with actionable insights that could be used to tweak ad strategy and maximize ROI.

 

The media company now has an enterprise-grade ecosystem with a cluster of servers that comprises multiple best-in-class technologies. It provides the power to process large quantities of data rapidly and to gain critical insights:

 

  • It now has access to a dashboard that provides a 30-day rolling window, which will expand to show up to six months of data. The company can now view impressions by the day, month, or quarter. It can slice and dice data by site, show, and even episode.
  • It can now identify issues, along with the causes, enabling the company to quickly deploy specific integration teams to resolve issues.
  • It can track frequency cap violations and triage the problems to various integration teams.

 

The bottom-line result – our customer has seen multimillion dollar savings, better ad monetization, better ROI for its advertisers and a better viewer experience for its audiences.

 

If you are struggling to find the right business intelligence from your data and you need fast and actionable business insights, we can help.

Analytical Deep-Dive Provides Insights on Bonus Content for Multinational Media Company

December 28, 2016

As more consumers turn to on-demand video streaming services such as Netflix and Amazon, media entertainment companies are moving away from the traditional DVD/Blu-ray market and further into online streaming. But with an almost endless array of content choices available to consumers, media companies are looking for new ways to attract attention to their unique content.

 

To help uplift and promote its home entertainment digital business, one multinational media company launched a new content initiative—developing bonus content like deleted scenes and wallpapers for the titles they have available for viewing on household devices and set top boxes.

 

Since this was a new initiative for the marketing and branding team, they had no system in place to track how the bonus content was performing and whether it was effective in increasing viewership of their content.

 

The Difficulty of Effectively Tracking Bonus Content

At the start of its bonus content initiative, the only data the media company received from the various streaming service providers was basic point of sale (POS) information showing which movies or series were sold or rented and the number of sales. While this information helped the company determine which movie or series content was doing well, it didn’t allow them to see whether the bonus content was helping boost sales, or, more specifically, which type of bonus content was most effective.

 

At the time, the company was using MicroStrategy as their reporting platform. However, they had limited knowledge of its capabilities and were only using traditional grid-style reports downloaded to Excel to determine high-level behaviors of movie content. This functionality was not enough to allow them to track the user experience with bonus content. So, for help, the media company turned to InfoCepts.

 

Using Meaningful Data and Visualization to Inform Decision-Making

Digging into the project, the InfoCepts team discovered that the web logs of the content servers could be used to identify user behavior. This data could then be fed into a pixel-perfect dashboard on a daily basis to build pre-canned, meaningful visualizations.

 

The InfoCepts team thus built a user-friendly, pre-canned dashboard that integrates the data from the web server logs and makes it easy to understand user analytics and the user experience with bonus content. The dashboard includes at-a-glance visual metrics on a number of criteria, such as:

  • Overall traffic analysis summary across bonus content
  • Title performance analysis from the historic period through the current date
  • Popularity by content type
  • Title performance comparison of the current vs. the previous month
  • Total number of active days of the bonus content across a title or movie

This dashboard not only provided the media company with traffic analysis of hosted bonus content; its visualizations also helped the branding team review their strategies and plan and release their budget for hosted bonus content based on popularity and users’ viewing experience. By looking at the historic trend analytics on the dashboard, the team can determine what type of bonus content will perform best.
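
As a hedged sketch of the kind of query feeding these visualizations, assuming the web server logs have been parsed into a hypothetical bonus_content_hits table:

    -- Hypothetical sketch: popularity of bonus content by title and type,
    -- derived from parsed web server logs.
    SELECT title_name,
           content_type,                      -- e.g., video, gallery, interactive
           COUNT(*)                  AS hits,
           COUNT(DISTINCT user_id)   AS unique_users,
           COUNT(DISTINCT hit_date)  AS active_days
    FROM   bonus_content_hits
    GROUP  BY title_name, content_type
    ORDER  BY hits DESC;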

 

For example, the marketing and branding team discovered that videos and gallery content (wallpapers and screen savers) were the most popular types of content, while interactive was the least popular. Since interactive content is quite costly to develop, having these insights helped the team use their resources wisely and focus on the types of content that perform best for the titles that are most popular.

 

Additionally, based on the historic viewing trends of the bonus content, the dashboard helped the content team track and direct their resources to popular titles rather than developing new bonus content for less popular or non-performing titles and movies.

 

Post Live Impacts of the Dashboard Analytics

The dashboard InfoCepts created for the marketing and branding team has been a powerful tool to help the team achieve its goals. Specifically, once the dashboard was implemented, the team was able to use analytics derived from it to:

 

  • Substantiate ROI: The analytics dashboard allowed performance KPIs, such as hits/views and unique users, to be readily available to decision makers so they could measure marketing performance and justify the ROI of bonus content.
  • React and answer in real time: Thanks to dashboard analytics, the branding team could respond to real-time data insights. For example, the team tracked performance KPIs for the new bonus content to understand how the content was performing during its initial stages of go live and was then able to make adjustments to the content based on the real-time data insights to boost hits and engagement with users.
  • Perform fact-finding analysis: The dashboard also proved its value for investigative analysis. For example, spikes on certain days of the month, or the far better hit performance seen during summer vacation than over the Thanksgiving holiday, helped the marketing team effectively plan campaigns for the calendar year around these trends to boost sales.

 

More Insights to Come

The information the marketing and branding team is getting through the dashboard is already helping to drive more sales through better promotion and production of bonus content. However, there are still more personalized insights that additional data points could offer.

 

In the next phase of the project, the InfoCepts team will incorporate geolocation and time log data into the dashboard. This information will help the marketing and branding team create promotions and content for specific geographic regions or times of day.

 

For example, specific titles may do better in certain geographic regions than others or different times throughout the day may have higher viewership opportunities. With this information, the marketing and branding team can further personalize its promotions and content offered to sync better with geographic or time of day preferences.

 

Ultimately, thanks to the insights that the team has been able to extract, the marketing and branding team has been more successful in creating and promoting bonus content that achieves the team’s goals—engaging end users in a meaningful way that pushes them to stream their content over other choices available.

 

Does your marketing team have a complex data analytics problem it’s trying to solve? If so, let’s talk. We can help.

Using Automated Geospatial Functions in Grid Analysis to Select Ideal ATM Locations

November 2, 2016

The banking industry generates a huge volume of data on a day-to-day basis. To differentiate themselves from the competition, banks are increasingly adopting big data analytics as part of their core strategy.

 

One challenge banks face is the selection of ideal geographic locations for new automated teller machines (ATMs). Two analytic methods are used to determine optimal locations:

  • Key performance indicators (KPIs) analysis, which we covered in depth in a previous blog post and will summarize here.
  • Grid analysis, which will be the focus of this post.

There are hundreds of KPIs to be considered for each small area, which we refer to here as “grid.” At the country level, there are millions of grids, and so a huge volume of data gets generated.

 

Banks must be very strategic in picking the next location for their ATMs and branches. Hence the need for 100+ KPIs to justify which area is best suited for a new ATM, based on the maximum potential.

 

A bank that has a nationwide, or an international, presence faces a problem: How should it effectively analyse potential locations within small geographic areas? Executing this analysis manually is tedious and time consuming, so we developed an automated process.

 

Sample Use Case: Automated Geospatial Functions in Grid Analysis

First, let’s consider a geographic area in which we need to find the best location for ATMs. We take as an example the U.S. state of Alaska, which we divide into grids. In this example, we use grids of 10 kilometres by 10 kilometres. We begin by overlaying these grid dimensions onto the state map, which results in the image below.

 

[Figure: Geospatial grid overlaid on the state map]

 

But with this simple overlay, it is difficult to identify the grids that are covered with snow or otherwise uninhabitable by human beings. Also, many of the remaining grids would be shared across multiple districts, so additional effort is needed to re-divide the grids along district boundaries.

So we create grids to meet these criteria:

  • Be reflective of state or district boundaries
  • Be robust enough to handle the volume of data when dealing with millions of records
  • Be smart enough to consider only the habitable areas and to omit unusable areas, such as bodies of water, forests, etc.

The ideal solution would, with minimal manual effort, retain all of the underlying sub-shapes, such as districts, while also excluding uninhabitable areas.

 

Continuing with our example, we take a small area of the state and omit the lake in the middle. The district-level breakdowns are maintained during grid division. This makes the grids more meaningful, as illustrated below, and leaves us with less manual work to remove unnecessary grids.

[Figure: Grids re-divided along district boundaries, with the lake excluded]

For this process, we perform the following steps:

  • Shape the original area for grid division.
  • Remove irrelevant areas from the original area.
  • Divide each relevant area into grids.
  • Segment and store each relevant area of grids.

We next employ a series of user defined functions (UDFs) built on Vertica geospatial functions, along with Java. Vertica allows us to write custom code in Java, which can then be deployed in the database and called from Structured Query Language (SQL) queries like built-in functions. Considering the volume of data and the processing time involved, we can deploy some of the functions as stand-alone executable Java Archive (JAR) files that run without logging into the Vertica database.
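
As a rough illustration of how these pieces fit together, the sketch below clips candidate grid cells to the districts they overlap and drops cells that sit entirely on water. It assumes Vertica’s ST_* geospatial functions are installed and uses hypothetical table names (grid_cells, districts, water_bodies); the actual UDFs and schema differ.

    -- Hypothetical sketch: keep only usable grid cells and preserve the
    -- district breakdown while doing so. Names are illustrative only.
    CREATE TABLE usable_grids AS
    SELECT d.district_id,
           g.grid_id,
           ST_Intersection(g.cell_geom, d.district_geom) AS grid_geom  -- clip each cell to the district it overlaps
    FROM   grid_cells g
    JOIN   districts  d
      ON   ST_Intersects(g.cell_geom, d.district_geom)                 -- retains sub-shape boundaries
    WHERE  NOT EXISTS (                                                -- drop cells fully covered by water
             SELECT 1
             FROM   water_bodies w
             WHERE  ST_Contains(w.water_geom, g.cell_geom));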

 

Case Study: Grid Analysis for a Major Banking Client

We developed a solution very similar to this use case for one of our banking clients. Some of the challenges we faced and the solutions developed included:

  • Challenge – If the parent shape has sub-shapes within it (e.g., state files inside a country or districts inside of states), the boundaries of the sub-shapes need to be retained while creating the grid.
    Solution – We designed the solution so the process loops through the entirety of sub-shapes and retains the child shape while creating the grids. This maintains the sub-shape boundaries even after grid division. If there is any grid that falls across two districts, the process divided the grid into two shapes—one for each district. This measure ensured we had separate grids for an area overlaying multiple child shapes.
  • Challenge – The sheer volume of data resulting from the operation. Each operation can create millions of rows within each of the multiple steps, creating unnecessary load on the database.
    Solution – We created temporary tables to store the irrelevant data during different steps of the operation and to store only the final relevant data in the database.
  • Challenge – How to identify the individual grids within each area once multiple, overlaying grids were created. Most of the grids would be squares or rectangles, rendering it impossible to tell each one apart from the others by merely looking at them or by looking at the coordinates.
    Solution – We saved every grid along with information about its parent sub-shape, adding descriptive details to each row so the parent sub-shape can be identified easily.

This process can be run for a shape file containing many child shape files or none at all. When the parent shape contains children, the grids align so that all grid shapes follow the same formation, regardless of where the child shapes are located. Also, to make individual grids identifiable, we stored the parent information inside each grid so that the user can tell which sub-shape a grid belongs to. This process drastically reduces manual effort by automatically dividing the grids for KPI-based analysis.

 

Conclusion

We implemented the above solution by dividing geographic areas into grids and then calculating more than 100 KPIs for each area. The automated process eliminated significant manual effort. This solution can also be applied in other industries, such as retail, to find ideal store locations and more.

To learn more about the use of data-driven geospatial analysis for banking or other industry applications, contact us today.

 

Use Real-Time Data to Make Personalized and Timely Offers

September 13, 2016

Throughout the day, customers use their credit, debit, and online and mobile banking channels to make purchases. There is a huge potential to capitalize on these transactional moments and provide real-time personalized offers—and financial institutions have taken notice.

 

Recently, one of our financial services clients was looking for an innovative way to interact with its customers and provide relevant offers that would drive more customer engagement and greater use of its banking services. Through real-time analytics that incorporated the customer’s current transaction, the channel, the location, and the customer’s profile, the bank wanted to deliver immediate, relevant marketing offers available in the customer’s vicinity.

 

To meet this need, we helped develop a real-time analytics solution to increase customers’ usage of the bank’s credit cards, debit cards, and internet and mobile channels. For instance, a customer is at the mall and completes a transaction using a debit or credit card. Through our real-time analytics solution, the bank immediately identifies the terminal location where the sale was made and checks all the offers available in that mall or nearby areas from merchants associated with the financial institution. The bank can then make a targeted offer to the customer via a text message or email at that location in near real time.

 

If, for example, the transaction occurs in the evening (near dinner) and the customer enjoys eating out—which the bank already knows based on the customer profile it has created from past transactions—the system will send the customer a food-related offer located in the mall. The customer is pleased because the offer is both timely and relevant; the bank is pleased because the customer’s use of the offer will lead to further revenue for the bank.

 

Making Real-Time Analytics for Banking Transactions a Reality
From the example above, it’s easy to see the value of a real-time analytical solution for financial institutions as well as other businesses. However, implementing such a solution is not without its challenges, particularly in an environment where the source system data is highly confidential.

 

Here are some of the challenges we faced in implementing a real-time analytics solution for our financial services client, and how we dealt with them:

 

Challenge 1: Querying confidential data without impacting the source system.
Because all the data present within the different business units (credit, debit, and online banking) was confidential, we had no direct access to the source system. This made it a significant challenge to get real-time feeds from the respective source systems—debit, credit, and internet banking—without impacting performance. Incremental fetches or SQL queries on a source system at regular intervals, such as every minute, can negatively impact current processes running on the source system.

 

To address this issue, we helped the bank put in place a change data capture (CDC) process on selected tables. We then mapped these tables with a similar structure in another schema that is accessible for real-time data processing.

 

Challenge 2: Reading large amounts of data from the tables.
With the likelihood of needing to read four million or more records per day, it was not feasible for the real-time data processing application to connect every time with the new schema to fetch the incremental data for processing and complete the process in a timely manner.

 

We addressed this issue with a two-step process. First, we created a database stored procedure in the new schema and scheduled it to run every minute to fetch the incremental data and generate small files in a shared location (an NFS directory). Then, the real-time data processing application read the small files from the shared location and renamed each file after it was processed.
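
A minimal sketch of the incremental-fetch idea is shown below, assuming a watermark table that records the last change sequence already exported; cdc_transactions and etl_watermark are hypothetical names, and the file generation itself is handled by the surrounding procedure.

    -- Hypothetical sketch: pull only the CDC rows added since the last run.
    -- The wrapping stored procedure writes this result set to a small file
    -- in the shared NFS location and then advances the watermark.
    SELECT t.txn_id,
           t.customer_id,
           t.terminal_id,
           t.amount,
           t.txn_time
    FROM   cdc_transactions t
    WHERE  t.change_seq > (SELECT last_change_seq
                           FROM   etl_watermark
                           WHERE  feed_name = 'debit');

    -- After a successful export, move the watermark forward.
    UPDATE etl_watermark
    SET    last_change_seq = (SELECT MAX(change_seq) FROM cdc_transactions)
    WHERE  feed_name = 'debit';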

 

Challenge 3: Generating customer offers in 15-20 seconds.
In order for real-time analytics to be successful, customers’ data needs to be processed quickly, and the system needs to be able to act on the analytics to generate relevant offers. Because the real-time data processing application receives multiple files per minute from different source systems (2,000 to 3,000 transactions per minute at peak time), it is extremely difficult to validate transactions and generate offers in a very short timeframe, such as 15-20 seconds.

 

For this challenge, we used Vertica’s Big Data SQL Analytics Platform for batch processing and its in-memory database for lookups. All heavy SQL ran in Vertica in batch mode to update customer profiles, offers, terminals, merchants, business rules, etc. Additionally, a beginning-of-day (BOD) sync process updated the respective in-memory database tables. Because in-memory operations are faster and run in the same cluster as the real-time data processing application, this helped process a large amount of data per minute.
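
To make the lookup step concrete, here is a hedged sketch of the kind of matching query those in-memory tables enable; every table and column name (txn_stream, terminals, merchant_offers, customer_profiles) is hypothetical, and the real rule engine applies many more conditions.

    -- Hypothetical sketch: match an incoming transaction to nearby, relevant
    -- offers using pre-loaded lookup tables.
    SELECT o.offer_id,
           o.offer_text,
           c.preferred_channel
    FROM   txn_stream        x
    JOIN   terminals         t ON t.terminal_id = x.terminal_id   -- where the sale happened
    JOIN   merchant_offers   o ON o.location_id = t.location_id   -- offers at or near that location
    JOIN   customer_profiles c ON c.customer_id = x.customer_id
    WHERE  x.txn_id = :current_txn_id                              -- placeholder for the transaction being processed
      AND  o.category = c.preferred_category                       -- e.g., dining offers for a customer who eats out
      AND  CURRENT_TIMESTAMP BETWEEN o.valid_from AND o.valid_to;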

 

Challenge 4: Managing changing business rules efficiently.
Business units need the ability to continually change business rules as new products and services become available or other insights warrant a change. To reduce the impact of this issue, the bank wanted the system to be able to handle various scenarios with little or no involvement from the tech team when modifying existing rules or adding new ones.

 

To achieve this requirement, we built a GUI application in Java for rule creation and modification. We also provided plugin hooks for special business cases, making the rules simple to alter without technical assistance.

 

Challenge 5: Communicating with the customer through a different channel.
The bank has numerous channels it uses to communicate with its customers, such as an SMS gateway, email, and phone banking. Therefore, our solution needed to be able to access and utilize these different communication systems.

 

To address this issue, we wrote Java plugins for each communication system and embedded them in the real-time processing application, so that when a matching offer is selected, a personalized offer message is generated and sent to the customer through one or more of the customer’s preferred channels.

 

How the System Works
With these challenges addressed, here’s a brief look at how the real-time analytics solution we developed works as an overall process:

 

  • Implement a CDC process on the required source systems to get the latest feed in near real time.
  • Create customer profiles of transactional behavior based on the last year of data.
  • Load scripts for all master data (offers, terminals, merchants, etc.).
  • Implement the real-time data processing application for stream processing, rule configuration/application and final offer generation for the customer.
  • Audit logs.

 

Process Flow Diagram
The diagram below shows a visual representation of how we applied our solution to achieve real-time analytics and offer generation.

 

[Figure: Real-time spend process flow]

 

While a robust implementation and accurate business rules will inevitably generate fewer offers, those rules lead to better outcomes. Customers receive only the most personalized and relevant offers. This not only helps the bank establish a deeper relationship with the customer, it also prevents customers from receiving unwanted offers, which could have a negative impact on the customer/bank relationship.

 

Conclusion
A real-time data processing system empowers businesses, such as financial institutions, to capitalize on live data in near real time. As noted in this post, the bank’s real-time data processing application enables various business units to define their own business rules for new offers or campaigns for new products or services. This allows banks to generate additional revenue for specific products or services and provides customers with valuable offers they find relevant and appreciate.

 

Finally, the application of real-time analytics can also open up new business cases and broaden the return on investment, for example, real-time service recovery for customer complaints or other service-related issues.

 

If you’d like to learn more about how real-time analytics can help your financial institution or business boost its revenue and deepen customer relationships, get in touch. We have the experience and deep expertise to deliver real-time analytic solutions in even the most complicated business environments.

Improving the Performance of Data Mart Reports in MicroStrategy

July 6, 2016


If you work regularly with data marts in MicroStrategy, you know how frustrating it can be when it takes hours to execute a report. Fortunately, there are settings that can be tweaked to improve the speed and performance of these reports. By taking a few simple steps, you can reduce the time it takes to load hundreds of millions of rows of data from hours—to just a few minutes.

 

Why use data marts?

A data mart is a mini data warehouse, or a subset of data derived from a primary data warehouse, which is typically stored in the same database server as the warehouse. Each data mart table is associated with one report via the MicroStrategy Desktop interface.

 

Data marts have a variety of business uses. Because each data mart table is tied to a single report, data marts can be tapped to create tables in a database that can be used for running “what-if” scenarios. They are also used to build tables for third-party tools, and to build a smaller, portable database for Online Analytical Processing (OLAP) analysis.

 

Tips for improving performance

While a data mart can be stored in the warehouse or in an alternate database server, it’s best to use the existing warehouse database in order to improve execution time and performance.

 

While creating the DSN that points to the target schema used for storing data mart tables, it’s very important to set the maximum response buffer size to a higher value in your DSN.

 

To do this, go to ODBC Data Source Admin and click on the “configure” option against the newly created DSN. Then: Click on options; then click on “advanced” in this window; and finally change the value for maximum response buffer size from the default (65536) to 1048575. Once that’s done, be sure to save before you close the window.

 

Next, while creating a database instance on top of your data mart DSN, be sure to configure data mart optimization. Do this by: right clicking on the project and selecting project configuration; then select the “database instances” section. Next, select the newly created data mart database instance and click on “modify.” This will cause a new window to pop up. In this new window, go to the “advanced” tab.

 

In the data mart optimization section, select the checkbox: “This database instance is located in the same warehouse as,” then select the warehouse database instance (used by other MicroStrategy reports to fetch data from DW).

 

In order to improve the performance of your data mart reports, it’s important to modify the Table Creation VLDB property at the database instance level so that it applies to all data mart reports in that project. The default value is “explicit table”; you need to uncheck the “use default inherited value” checkbox and select “implicit table.”

 

Once you’ve taken these steps, go through the following checklist to ensure the optimal performance of your data mart report:

 

  • Check if the dataset is a graphical report or grid. If a graph, then convert it to grid.
  • Add all report objects to the template. Nothing should be missed.
  • Move all the attributes to rows and ensure all forms are selected for display.
  • Ensure that view filters and other physical filters are removed from the report (so that complete data is fetched).
  • Remove derived metrics as well, since these are not supported by data mart reports.
  • Ensure that metric and attribute joins are identical in the original report and its corresponding data mart report.
  • Ensure that VLDB properties applied in the original report are also applied in its corresponding data mart report.
  • Check the format of the data mart SQL. It should generate a “create table as (select * from)” structure (refer to the Table Creation VLDB property for details; see the sketch after this checklist).
  • Lastly, confirm that the original report SQL and its data mart’s SQL are similar, ignoring any filters in the original SQL while comparing. The row counts should also be the same.
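
For reference, a data mart report configured this way should emit SQL of roughly the following shape; the table and column names are purely illustrative, and the exact statement MicroStrategy generates depends on the database platform.

    -- Illustrative shape of the SQL a data mart report generates when the
    -- Table Creation VLDB property is set to implicit table.
    CREATE TABLE dm_sales_report AS
    (
        SELECT *
        FROM   (  -- the original report's SELECT, unchanged
                  SELECT a.region_id,
                         a.region_name,
                         SUM(f.revenue) AS total_revenue
                  FROM   fact_sales f
                  JOIN   dim_region a ON a.region_id = f.region_id
                  GROUP  BY a.region_id, a.region_name
               ) report_query
    );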

By taking time on the front end to ensure that all your settings are optimal, you can dramatically reduce the time it takes to execute your data mart reports.

 

If you run data mart reports and are searching for more efficient ways to execute said reports, contact us. We are happy to demo this utility or to discuss developing another solution for your requirements.

Using Search to Simplify Business Intelligence

June 3, 2016

By its nature, business intelligence (BI) reporting is sold as the one-stop solution to simplify access to complex business metrics and facilitate decision making. While a visually clean MicroStrategy or Tableau dashboard may be attractive at first glance, it could involve wrestling with underlying data to make it look sleek and perform faster. These behind-the-scenes complexities can unintentionally spill over into the user interface/user experience (UI/UX), negatively impacting an otherwise useful solution.

 

The complexity has several sources, including:

  • Data with cryptic labels and/or different semantic meanings that are employed by old online transaction processing (OLTP) applications
  • Myriad menu options to provide access to various data sources and reports, which may be intimidating for users unfamiliar with complex web-based/desktop data processing tools
  • Difficult overall workflows, including cumbersome steps to reach the dashboard

We have seen many BI solutions, some really insightful, fall by the wayside or become the object of ridicule because influential end users found them difficult to use. So how do we simplify the search results that arise from complex underlying data? Or streamline the tedious steps in navigating the myriad menus in a complex web application? Or ease the transition from existing business terminologies to new naming conventions for a newly created data warehouse?

 

We recently faced these challenges when a life sciences client wanted to create a simple interface for end users while also ensuring the ability to access their most complex BI reports. Search engines were a perfect solution to this problem.

 

Search engine technology has some distinct advantages compared to using the standard relational database management system (RDBMS)-based data warehouses. The data in RDBMS are bound by strong data type definitions and an inviolable table structure. A search engine is designed to run across data types and multiple fields to bring back results that can be easily related to specific data points, assuming the user is familiar with how data points relate to each other.

 

There are a number of options, depending on which flavor of search engine we wish to use: powerful, appliance-based software like ThoughtSpot or Google, or simple open-source solutions such as Solr, Elasticsearch, and Sphinx. Looking at this client’s requirements, we opted for Sphinx because the input was structured data.

 

The solution arranges search results by relevance to users’ subject areas. For example, when searching for subsidiaries, a finance department user will find the list of reports from the finance domain at top, whereas an HR department user will always see the HR reports at top of search results.

 

The BI tool also provides the ability to perform just enough filtering and aggregation to allow for expression-based searches. Such a search interface opens immense possibilities for implementing complex search situations that hide complexities behind the efficient cloak of a capable search engine.

 

The overall solution required us to provide a listing of data results for a given search term and link the results back to a MicroStrategy-based BI report containing the data. There are two ways to associate search results and the report:

 

  • By just-in-time evaluation of data values and report metadata to determine the report’s likelihood of containing that search keyword or
  • By maintaining a metadata-based mapping between the data indexed for the search engine and the MicroStrategy reports

We opted to use the second technique for its cost effectiveness. The usefulness of this feature is illustrated by a user’s ability to see the matching fields where the search keyword has a direct hit. The UI provides a listing of fields immediately related to the search keyword, along with references to the reports where the direct hits were found. Users can then click on the fields to see related content.
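
A hedged sketch of this second technique follows: the search index returns matching rows, and a small mapping table ties each indexed field back to the MicroStrategy reports that contain it. The index name, the tables and columns, and the assumption that the field name is stored as a string attribute are all ours, not the client’s actual schema.

    -- Step 1 (SphinxQL): find indexed rows that match the user's keyword.
    SELECT id, field_name
    FROM   bi_search_idx
    WHERE  MATCH('subsidiaries')
    LIMIT  20;

    -- Step 2 (warehouse SQL): map the matching fields to the MicroStrategy
    -- reports that contain them, so the UI can link each hit to a report.
    SELECT m.field_name,
           m.report_id,
           m.report_name
    FROM   search_report_map m
    WHERE  m.field_name IN ('subsidiary_name', 'product_name');  -- fields returned by step 1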

 

For example, a search for subsidiaries is likely to bring back a listing for products sold by a given subsidiary as part of related fields. Users can then click on products to see the detailed product list that is directly related to the search keyword. A link on this product listing will take the user to the most relevant MicroStrategy report, which will be executed after answering relevant prompts. This eliminates the need for the end user to deal with running a complex BI report.

 

In the end, those who are determined enough to circumnavigate the world in a simple yacht always do it, and a determined organization can make a given BI tool achieve the desired results. General users, however, are better served by the readily available means of getting around; they may just need to move from point A to point B on a commercial plane, at a cost. Pairing a search engine with a BI tool provides that secure, commercial option and makes things work in a significantly easier way.

 

Learn more about how our BI tools simplify decision making for businesses to achieve measurable growth.

 

Using Data-Driven Geospatial Analysis to Optimize the Location of New ATMs in Banking Industry

May 19, 2016

The data-driven banking industry has been an early beneficiary of big data and its many business intelligence (BI) applications. A particularly useful application is the selection of ideal geographic locations for new ATMs.

 

For banks, opening a new ATM involves substantial cost, in terms of real-estate investment and ongoing operation. Fortunately, banks can optimize this decision by analyzing abundant internal metrics, such as market potential, location of existing ATMs, and historical cash withdrawal trends, among dozens of others. External sources also provide a wealth of geospatial information, ranging from foot traffic to weather patterns, which affect cash inflows/outflows. These two factors – the expense of a new ATM and the abundant data available for forecasting an ATM’s performance – make optimizing a new ATM’s location an ideal use case for big data analytics.

 

Before the era of big data, banks used geospatial analytic tools like MapInfo software, which requires manual data intervention on sets of shape files to analyze a defined geographical area. This approach has many shortcomings, including:

  • Dependence on a desktop-based installation;
  • Scarce skill and limited availability of specialized developers;
  • Limited ability to scale for large queries; and
  • Less efficient use of very large data.

To improve the use of available data, banks need a method of geospatial analytics that allows for faster, parallel processing of data libraries on a large scale.

 

A banking client recently sought to improve its decision making on the location of new ATMs. It wanted to leverage an internal data warehouse, which contained key metrics on the client’s and its competitors’ ATM performance. The client also wanted to employ efficient big data technology, so we selected Vertica on the Hadoop platform. Vertica is a massively parallel processing (MPP) database by Hewlett-Packard Enterprise, and the open-source Hadoop software framework is used for distributed data storage and high-speed processing.

 

The business goal was to identify optimal locations for new ATMs and to enable other geospatial capabilities by using customer and other geographic data. This increases the ATM on-us ratio and results in higher lead conversions through data-driven intelligence.

 

Our approach entails validating the ATM’s proposed catchment using a combination of geospatial data and business metrics, scenarios, and drivers. The proposed catchment could be defined by a combination of pin codes, city boundaries, or a free-flow drawing produced by the bank’s analysts.

 

Once a potential area is defined, we use Vertica to analyze raw business and geospatial data on the particular catchment. We enhance Vertica functionality with Java-based user defined extensions to create functions, which serve as an analyst toolbox. The analysis produces estimated ATM key performance indicators (KPIs) for the proposed location based on historical data. The process entails the following steps:

[Figure: Steps in the catchment analysis process]

The KPIs are automatically attached to the final shape file, which can be imported into any geographic information system (GIS) application to visualize the KPIs and make effective decisions.
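
As a rough, hedged illustration of this analysis step, the query below estimates a few KPIs for a proposed catchment polygon from historical transactions. It assumes Vertica’s ST_* geospatial functions and uses hypothetical table names (proposed_catchments, atm_txn_history).

    -- Hypothetical sketch: estimate KPIs for a proposed catchment polygon.
    SELECT c.catchment_id,
           COUNT(DISTINCT t.customer_id) AS potential_customers,
           SUM(t.withdrawal_amount)      AS historical_withdrawals,
           COUNT(*)                      AS txn_count
    FROM   proposed_catchments c
    JOIN   atm_txn_history     t
      ON   ST_Contains(c.catchment_geom,
                       ST_Point(t.longitude, t.latitude))  -- transactions inside the catchment
    GROUP  BY c.catchment_id;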

 

This process can be performed for a single proposed catchment or for several at once. The cost and estimated KPIs can then be compared and contrasted to validate the best location for the new ATM. The system calculates over 160 KPIs; the following table summarizes some of them.

 

[Figure: Sample of the KPIs calculated by the system]

The implementation also involves building a master data model consisting of metrics on ATM use, point-of-sale (POS) transactions, debit/credit cards, customers, and other data points.

 

If you are considering new BI applications and analytics tools, let’s schedule a time to discuss what options are available.

Simplify Media Planning and Boost Ad Revenue with a MicroStrategy Media Planning Report

April 22, 2016

Today’s digitally-focused advertising world is complex and fast-paced. Decisions involve multiple entities—advertisers, publishers, agencies—and a variety of technologies to buy and sell ads. In addition, the fast-paced environment requires fast decision-making or there can be significant lost opportunity costs for everyone involved.

To further add to the complexity of managing digital ad campaigns, the creative ads that will be run—whether via search, social, or display—must be hyper-targeted to engage audiences. They must also be built as scalable ad experiences that can run on a variety of platforms, such as web, mobile, tablets, etc.

Managing the Creative Ad Process
For media planners, managing ad creative to ensure that they have the right ad ready to distribute to publishers at the right moment can be extremely challenging. Due to the need to accurately message audiences and build these messages in a scalable manner, ad creation can easily get behind schedule. But because late ad creative will likely result in revenue loss for the advertiser, media planners need to try to avoid late delivery of advertising materials as much as possible.

For example, the Interactive Advertising Bureau’s (IAB) standard terms and conditions state that if the advertising materials are late, the media company is not required to guarantee full delivery of the insertion order. The advertiser will still need to pay for their creative, but they will lose the opportunity to get their ad in front of their target audience for as long as they want. As a result, they’ll likely have fewer clicks or views, and less opportunity to sell their product.

Bringing All the Ad Campaign Pieces Into a Coherent Picture
To help one agency deal with these kinds of difficulties, InfoCepts worked with their media planning and business planning team to create a database-generated report to improve visibility into the ad placement process.

Using MicroStrategy to build the report, the InfoCepts team incorporated numerous fields of information related to tracking and measuring key performance indicators of a digital ad campaign. These fields included details such as:

  • Name of advertiser, sales team, agencies, and traffickers
  • Pricing method
  • Creative status, potential revenue loss due to late creative, number of days late, etc.
  • KPIs (impressions, cost, duration of placement)

With all of this information now tracked and accessible, the media planning team could use the report to gain better insight into not just the status of the creation of ad creative, but also how well ad campaigns were performing.
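
As a hedged sketch of how such a report query might be shaped, the example below flags placements with late creative and a rough estimate of the revenue at risk; the placements table, its columns, and the simplistic proration are illustrative assumptions, not the agency’s actual model.

    -- Hypothetical sketch: placements whose creative is late, with a crude
    -- estimate of revenue at risk from the missed portion of the flight.
    SELECT p.advertiser_name,
           p.placement_id,
           p.creative_status,
           p.creative_due_date,
           CURRENT_DATE - p.creative_due_date AS days_late,
           (p.booked_impressions / 1000.0) * p.rate_cpm
               * (CURRENT_DATE - p.creative_due_date) / p.flight_days AS est_revenue_at_risk
    FROM   placements p
    WHERE  p.creative_status <> 'DELIVERED'
      AND  p.creative_due_date < CURRENT_DATE;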

Armed with this type of information, it is much easier for media planners to identify in advance when ad creative will be late, allowing them to take proactive rather than reactive measures to solve the problem and avoid revenue losses. Meanwhile, the ability to easily see and track KPIs makes it easier to make smart decisions about ad placement during current campaigns and for future ones.

Interested in learning more about how InfoCepts can help you better plan and track your digital ad campaigns? Please contact us.

Boost Efficiency by Automating Your Daily MicroStrategy Change Report

December 2, 2015

If you’ve ever worked on a large MicroStrategy project with multiple developers simultaneously, you know there’s a high probability that at some point, modifications made by one developer in the production environment can negatively affect the entire project.

The standard way to deal with this situation is by running daily MicroStrategy change reports that allow you to see changes made each day to the production environment. Unfortunately, it often takes significant time to identify the root cause of the issue, and even more time to hunt down the developer who made the modification so he or she can fix the error.

At InfoCepts, our team experienced this situation multiple times and knew there had to be a better, faster way — not only for ourselves but also for our clients. Fortunately, we found one.

From Manual to Automated Change Reports

To automate daily change reports in MicroStrategy, we created a freeform SQL report that provides details about all objects created or modified in the production environment during the previous day. How does the report work? Essentially, it uses the metadata and Enterprise Manager tables to provide the required information in Oracle, or in other database environments if the SQL is modified accordingly.

The one prerequisite, however, is that MicroStrategy administrators must have access to all MicroStrategy metadata and Enterprise Manager tables, given that the change report depends on these tables. Then, because the SQL is already created, administrators simply need to schedule the report to run daily in MicroStrategy. Once this is done, they’ll receive an email each day with the full report.
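
To give a sense of the shape of such a freeform SQL report, here is a hedged sketch that lists objects modified in the last day along with the user who modified them. The metadata table and column names are placeholders; the actual names depend on the MicroStrategy version and metadata schema.

    -- Illustrative sketch of a daily change report over the metadata repository.
    -- Table and column names are placeholders, not the exact MicroStrategy schema.
    SELECT o.object_name,
           o.object_type,
           u.user_name     AS modified_by,
           o.mod_time
    FROM   md_object_info o
    JOIN   md_users       u ON u.user_id = o.owner_id
    WHERE  o.mod_time >= CURRENT_DATE - 1      -- objects changed since yesterday
    ORDER  BY o.mod_time DESC;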

While a similar report can be created manually in MicroStrategy by the MicroStrategy administrator, that approach takes time and doesn’t always lead to results. Automating daily reports, however, comes with three distinct advantages, allowing users to:

  • Save time in report creation.
    When using our automated report solution, MicroStrategy administrators don’t have to modify the SQL. This saves time and makes the process much simpler — in fact, the administrator doesn’t have to understand the SQL or how MicroStrategy is storing all of the metadata to create the report.
  • Gain instant visibility.
    With the standard daily change report, it’s impossible to know who modified what. So, even after the root cause is identified, administrators have to email the team of developers to find out who made the modification. Only then can the issue be fixed. With our automated change report, administrators receive details on which developer modified what, and can go directly to the source to fix the problem.
  • Get to the root cause faster.
    Our automated report makes it easy to identify the root cause, which is rarely the case with manually-created change reports. With the manual method, our own MicroStrategy administrators often struggled to find the root cause, but that doesn’t happen with our automated solution.

These benefits make a big difference in day-to-day operations, affecting everything from decision-making to efficiency and your bottom line. Consider, for example, a recent problem with one of our sales reports showing incorrect sales data. By using our automated daily change report, our team quickly discovered that a metric and formula used in the sales report had been modified the day before. We didn’t have to spend time on root cause analysis or emailing the developers to ask who had made the modification. Instead, the report provided the information we needed and saved our MicroStrategy administrator roughly three to four hours of effort.

If you’re looking for ways to streamline change reports and other database administration and warehousing tasks, our team at InfoCepts can help. Get in touch to learn how.